      Deep Learning and Minimally Invasive Endoscopy: Automatic Classification of Pleomorphic Gastric Lesions in Capsule Endoscopy

      Research article


          Abstract

          INTRODUCTION:

          Capsule endoscopy (CE) is a minimally invasive examination for evaluating the gastrointestinal tract. However, its diagnostic yield for detecting gastric lesions is suboptimal. Convolutional neural networks (CNNs) are artificial intelligence models with strong performance in image analysis. Nonetheless, their role in gastric evaluation by wireless CE (WCE) has not been explored.

          METHODS:

          Our group developed a CNN-based algorithm for the automatic classification of pleomorphic gastric lesions, including vascular lesions (angiectasia, varices, and red spots), protruding lesions, ulcers, and erosions. A total of 12,918 gastric images from 3 different CE devices (PillCam Crohn's; PillCam SB3; OMOM HD CE system) were used for the construction of the CNN: 1,407 of protruding lesions; 994 of ulcers and erosions; 822 of vascular lesions; 2,851 of hematic residues; and the remaining images of normal mucosa. The images were divided into training (split for three-fold cross-validation) and validation data sets. The model's output was compared with a consensus classification by 2 WCE-experienced gastroenterologists. The network's performance was evaluated by its sensitivity, specificity, accuracy, positive and negative predictive values, and area under the precision-recall curve.
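
As an illustration of the splitting step described above (a hypothetical sketch, not the authors' published code), the Python snippet below sets up a stratified three-fold cross-validation loop over a labeled frame data set; synthetic features and a logistic-regression placeholder stand in for the CE frames and the CNN so the example is self-contained.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

# Placeholder data: random "features" and lesion-category labels.
# In the study, the inputs are CE frames and the model is a CNN;
# the sizes below are illustrative, not the study's numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))          # 600 frames, 64 features each
y = rng.integers(0, 3, size=600)        # 3 stand-in lesion categories

# Stratified three-fold split, mirroring the three-fold
# cross-validation mentioned in the abstract.
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    clf = LogisticRegression(max_iter=1000)   # placeholder for the CNN
    clf.fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[val_idx], clf.predict(X[val_idx]))
    print(f"fold {fold}: validation accuracy = {acc:.3f}")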

          RESULTS:

          The trained CNN had a sensitivity of 97.4%, a specificity of 95.9%, and positive and negative predictive values of 95.0% and 97.8%, respectively, for gastric lesions, with an overall accuracy of 96.6%. The CNN processed 115 images per second.
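
For readers less familiar with these metrics, the short sketch below (not taken from the paper) shows how sensitivity, specificity, the predictive values, and accuracy follow from a binary (lesion vs. normal) confusion matrix; the counts used are illustrative only.

def binary_metrics(tp, fp, fn, tn):
    """Return the standard metrics derived from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                  # true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    ppv = tp / (tp + fp)                          # positive predictive value
    npv = tn / (tn + fn)                          # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Illustrative counts only; these are not the study's data.
print(binary_metrics(tp=380, fp=20, fn=10, tn=490))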

          DISCUSSION:

          Our group developed, for the first time, a CNN capable of automatically detecting pleomorphic gastric lesions in images from both small-bowel and colon CE devices.

                Author and article information

                Journal
                Clinical and Translational Gastroenterology (Clin Transl Gastroenterol)
                Wolters Kluwer (Philadelphia, PA)
                ISSN: 2155-384X
                October 2023 (online 3 July 2023)
                Volume 14, Issue 10, e00609
                Affiliations
                [1] Precision Medicine Unit, Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, Porto, Portugal;
                [2] WGO Gastroenterology and Hepatology Training Center, Porto, Portugal;
                [3] Faculty of Medicine of the University of Porto, Alameda Professor Hernâni Monteiro, Porto, Portugal;
                [4] Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal;
                [5] Digestive Artificial Intelligence Development, Porto, Portugal;
                [6] ManopH Gastroenterology Clinic, Porto, Portugal.
                Author notes
                Correspondence: Miguel Mascarenhas Saraiva, MD, PhD. E-mail: miguelmascarenhassaraiva@gmail.com
                Article
                CTG-23-0053
                DOI: 10.14309/ctg.0000000000000609
                PMCID: 10584281
                PMID: 37404050
                © 2023 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of The American College of Gastroenterology

                This is an open access article distributed under the Creative Commons Attribution License 4.0 (CCBY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History: 13 February 2023; 02 June 2023
                Categories
                Article
                Endoscopy
                Subject: Gastroenterology & Hepatology
                Keywords: artificial intelligence, capsule endoscopy, deep learning
