      Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers-From the Radiology Editorial Board.


Most cited references (3)


          Convolutional Neural Networks for Radiologic Images: A Radiologist’s Guide


            Assessment of Convolutional Neural Networks for Automated Classification of Chest Radiographs

Purpose: To assess the ability of convolutional neural networks (CNNs) to enable high-performance automated binary classification of chest radiographs.

Materials and Methods: In a retrospective study, 216 431 frontal chest radiographs obtained between 1998 and 2012 were procured, along with associated text reports and a prospective label from the attending radiologist. This data set was used to train CNNs to classify chest radiographs as normal or abnormal before evaluation on a held-out set of 533 images hand-labeled by expert radiologists. The effects of development set size, training set size, initialization strategy, and network architecture on end performance were assessed by using standard binary classification metrics; detailed error analysis, including visualization of CNN activations, was also performed.

Results: Average area under the receiver operating characteristic curve (AUC) was 0.96 for a CNN trained with 200 000 images. This AUC value was greater than that observed when the same model was trained with 2000 images (AUC = 0.84, P < .05). Averaging the CNN output score with the binary prospective label yielded the best-performing classifier, with an AUC of 0.98 (P < .005). Analysis of specific radiographs revealed that the model was heavily influenced by clinically relevant spatial regions but did not reliably generalize beyond thoracic disease.

Conclusion: CNNs trained with a modestly sized collection of prospectively labeled chest radiographs achieved high diagnostic performance in the classification of chest radiographs as normal or abnormal; this function may be useful for automated prioritization of abnormal chest radiographs. © RSNA, 2018. Online supplemental material is available for this article. See also the editorial by van Ginneken in this issue.
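The abstract's best-performing classifier comes from a simple ensembling step: averaging the CNN's continuous output score with the radiologist's binary prospective label, then scoring by AUC. A minimal sketch of that idea, not the authors' code, with hypothetical toy data; AUC is computed here via the equivalent Mann-Whitney rank statistic:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(cnn_scores, prospective_labels):
    """Average each CNN probability with the attending radiologist's
    binary (0/1) prospective label, as described in the abstract."""
    return [(s + y) / 2 for s, y in zip(cnn_scores, prospective_labels)]

# Hypothetical toy data: expert ground truth, CNN probabilities, and the
# binary labels recorded prospectively at interpretation time.
truth       = [1, 1, 1, 0, 0, 0]
cnn         = [0.9, 0.4, 0.8, 0.3, 0.6, 0.1]
prospective = [1, 1, 1, 0, 0, 0]

print(auc(cnn, truth))                         # CNN alone
print(auc(ensemble(cnn, prospective), truth))  # averaged classifier
```

In this toy example the averaged scores rank every abnormal case above every normal one, so the ensemble's AUC exceeds the CNN's alone, mirroring the direction of the reported result (0.98 vs 0.96); the magnitudes here are illustrative only.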

              Editor’s Note: Publication of AI Research in Radiology


Author and article information

Journal: Radiology
Publisher: Radiological Society of North America (RSNA)
ISSN (electronic): 1527-1315
ISSN (print): 0033-8419
Published: March 2020
Volume: 294
Issue: 3

Affiliations
[1] From the Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Dr, Madison, WI 53792 (D.A.B., M.L.S.); Department of Radiology, New York University, New York, NY (L.M.); Department of Musculoskeletal Radiology (M.A.B.) and Institute for Technology Assessment (E.F.H.), Massachusetts General Hospital, Boston, Mass; Department of Medical Imaging, Hospital for Sick Children, University of Toronto, Toronto, Canada (B.B.E.W.); Department of Radiology, University of California-San Diego, San Diego, Calif (K.J.F.); Department of Cancer Imaging, Division of Imaging Sciences & Biomedical Engineering, King's College London, London, England (V.J.G.); Department of Radiology and Biomedical Imaging, University of California-San Francisco, San Francisco, Calif (C.P.H.); and Department of Radiology and Radiologic Science, The Johns Hopkins University School of Medicine, Baltimore, Md (C.R.W.).

Article
DOI: 10.1148/radiol.2019192515
PMID: 31891322
