
      Distinguishing retinal angiomatous proliferation from polypoidal choroidal vasculopathy with a deep neural network based on optical coherence tomography

      Research article


          Abstract

          This cross-sectional study aimed to build a deep learning model for detecting neovascular age-related macular degeneration (AMD) and to distinguish retinal angiomatous proliferation (RAP) from polypoidal choroidal vasculopathy (PCV) using a convolutional neural network (CNN). Patients from a single tertiary center were enrolled from January 2014 to January 2020. Spectral-domain optical coherence tomography (SD-OCT) images of patients with RAP or PCV and of a control group were analyzed with a deep CNN. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were used to evaluate the model’s ability to distinguish RAP from PCV. The performance of the new model was compared with that of VGG-16, ResNet-50, Inception-V3, and eight ophthalmologists. A total of 3951 SD-OCT images from 314 participants (229 AMD, 85 normal controls) were analyzed. In distinguishing PCV from RAP, the proposed model achieved an accuracy, sensitivity, and specificity of 89.1%, 89.4%, and 88.8%, respectively, with an AUROC of 95.3% (95% CI 0.727–0.852). The proposed model showed better diagnostic performance than VGG-16, ResNet-50, and Inception-V3, and performance comparable to that of the eight ophthalmologists. The novel model performed well when distinguishing between PCV and RAP. Thus, automated deep learning systems may support ophthalmologists in distinguishing RAP from PCV.
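          The reported evaluation metrics can all be computed from a model's per-image predictions. A minimal sketch in plain Python (the function names and the 0.5 decision threshold are illustrative assumptions, not taken from the paper; AUROC is computed via the rank-based Mann–Whitney formulation):

          ```python
          def binary_metrics(y_true, y_score, threshold=0.5):
              """Confusion-matrix metrics for a binary classifier.

              y_true:  0/1 labels (e.g. 0 = PCV, 1 = RAP)
              y_score: predicted probabilities for class 1
              """
              y_pred = [1 if s >= threshold else 0 for s in y_score]
              tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
              tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
              fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
              fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
              sensitivity = tp / (tp + fn)   # recall on the positive class
              specificity = tn / (tn + fp)
              accuracy = (tp + tn) / len(y_true)
              return sensitivity, specificity, accuracy

          def auroc(y_true, y_score):
              """AUROC as the probability that a randomly chosen positive
              case scores higher than a randomly chosen negative case
              (ties count as half)."""
              pos = [s for t, s in zip(y_true, y_score) if t == 1]
              neg = [s for t, s in zip(y_true, y_score) if t == 0]
              wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                         for p in pos for n in neg)
              return wins / (len(pos) * len(neg))
          ```

          Note that sensitivity and specificity depend on the chosen threshold, while AUROC summarizes performance across all thresholds, which is why papers typically report both.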

          Related collections

          Most cited references (36)


          ImageNet classification with deep convolutional neural networks


            A survey on Image Data Augmentation for Deep Learning


              Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

              Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques for applying CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
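              The "off-the-shelf pre-trained CNN features" strategy described above can be sketched without any deep learning framework: extract features once with a frozen backbone, then train only a small new classification head on them. A minimal sketch in plain Python, assuming the backbone features have already been precomputed (all names and hyperparameters here are illustrative, not from the cited paper):

              ```python
              import math

              def train_head(features, labels, lr=0.5, epochs=200):
                  """Train only a new logistic-regression head on frozen,
                  precomputed backbone features (the backbone itself is
                  never updated). features: list of feature vectors;
                  labels: 0/1 class labels."""
                  dim = len(features[0])
                  w = [0.0] * dim
                  b = 0.0
                  for _ in range(epochs):
                      for x, y in zip(features, labels):
                          z = sum(wi * xi for wi, xi in zip(w, x)) + b
                          p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
                          g = p - y                        # d(log-loss)/dz
                          w = [wi - lr * g * xi for wi, xi in zip(w, x)]
                          b -= lr * g
                  return w, b

              def predict(w, b, x):
                  """Classify a feature vector with the trained head."""
                  z = sum(wi * xi for wi, xi in zip(w, x)) + b
                  return 1 if z >= 0 else 0
              ```

              Full fine-tuning differs only in that the backbone weights are also updated, usually at a smaller learning rate; the head-only variant above is cheaper and often preferable when labeled medical data is scarce.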

                Author and article information

                Contributors
                daniel.dj.hwang@gmail.com
                jinyounghan@skku.edu
                Journal
                Sci Rep
                Scientific Reports
                Nature Publishing Group UK (London)
                2045-2322
                Published: 29 April 2021
                Volume: 11
                Article number: 9275
                Affiliations
                [1 ]Department of Ophthalmology, Hangil Eye Hospital, 35 Bupyeong-daero, Bupyeong-gu, Incheon, 21388 South Korea
                [2 ]Department of Ophthalmology, Catholic Kwandong University College of Medicine, Incheon, South Korea
                [3 ]Department of Applied Artificial Intelligence, Sungkyunkwan University, 25-2, Sungkyunkwan-ro, Jongno-gu, Seoul, 03063 South Korea
                [4 ]RAON DATA, Seoul, South Korea
                [5 ]Department of Medicine, Kangwon National University Hospital, Kangwon National University School of Medicine, Chuncheon, Gangwon-do, South Korea
                [6 ]Seoul Plus Eye Clinic, Seoul, South Korea
                [7 ]Kong Eye Center, Seoul, South Korea
                [8 ]Department of Ophthalmology, Seoul National University Bundang Hospital, Seongnam, South Korea
                Article
                DOI: 10.1038/s41598-021-88543-7
                PMCID: PMC8085229
                PMID: 33927240
                © The Author(s) 2021

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 3 December 2020
                Accepted: 5 April 2021
                Funding
                Funded by: National Research Foundation of Korea (FundRef 10.13039/501100003725)
                Award ID: NRF-2018R1D1A1A02085647
                Categories
                Article

                Keywords: retinal diseases, medical imaging, machine learning
