
      Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body FDG-PET/CT


          Abstract

          Computational decision support systems could provide clinical value in whole-body FDG-PET/CT workflows. However, the limited availability of labeled data and the large size of PET/CT imaging exams make it challenging to apply existing supervised machine learning systems. Leveraging recent advancements in natural language processing, we describe a weak supervision framework that extracts imperfect, yet highly granular, regional abnormality labels from free-text radiology reports. Our framework automatically labels each region in a custom ontology of anatomical regions, providing a structured profile of the pathologies in each imaging exam. Using these generated labels, we then train an attention-based, multi-task CNN architecture to detect and estimate the location of abnormalities in whole-body scans. We demonstrate empirically that our multi-task representation is critical for strong performance on rare abnormalities with limited training data. The representation also contributes to more accurate mortality prediction from imaging data, suggesting the potential utility of our framework beyond abnormality detection and location estimation.
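The label-extraction step described in the abstract can be illustrated with a small rule-based sketch: split each report into sentences, then mark a region abnormal when a non-negated sentence mentions both the region and an abnormality term. The region ontology, trigger terms, and negation handling below are invented for illustration and are not the paper's actual labeling rules:

```python
import re

# Hypothetical mini-ontology of anatomical regions and trigger terms
# (the paper uses a much larger custom ontology).
REGION_PATTERNS = {
    "lungs": re.compile(r"\b(pulmonary|lungs?)\b", re.IGNORECASE),
    "liver": re.compile(r"\b(hepatic|liver)\b", re.IGNORECASE),
    "skeleton": re.compile(r"\b(osseous|bone|vertebra\w*)\b", re.IGNORECASE),
}
ABNORMAL = re.compile(r"\b(hypermetabolic|lesion|uptake|mass|nodule)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.IGNORECASE)

def label_report(report: str) -> dict:
    """Assign a weak abnormal (1) / normal (0) label to each region.

    A region is flagged when a sentence mentions it together with an
    abnormality term and contains no negation cue -- a crude stand-in
    for the paper's report-labeling functions.
    """
    labels = {region: 0 for region in REGION_PATTERNS}
    for sentence in re.split(r"[.;]\s*", report):
        if NEGATION.search(sentence) or not ABNORMAL.search(sentence):
            continue
        for region, pattern in REGION_PATTERNS.items():
            if pattern.search(sentence):
                labels[region] = 1
    return labels

report = ("Hypermetabolic lesion in the right hepatic lobe. "
          "No abnormal uptake in the lungs.")
print(label_report(report))  # {'lungs': 0, 'liver': 1, 'skeleton': 0}
```

Labels produced this way are noisy, which is exactly why the paper treats them as weak supervision for the downstream multi-task CNN rather than as ground truth.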

          Summary

          Computational decision support systems could provide clinical value in whole-body FDG-PET/CT workflows, but labeled data are scarce and whole-body PET/CT exams are too large for existing supervised systems to handle easily. Here, the authors describe a weak supervision framework that extracts regional abnormality labels from free-text radiology reports.

          Related collections

          Most cited references (14)


          Dermatologist-level classification of skin cancer with deep neural networks

          Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images—two orders of magnitude larger than previous datasets—consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care.

            Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

            Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation.

              Evaluating the yield of medical tests.

              A method is presented for evaluating the amount of information a medical test provides about individual patients. Emphasis is placed on the role of a test in the evaluation of patients with a chronic disease. In this context, the yield of a test is best interpreted by analyzing the prognostic information it furnishes. Information from the history, physical examination, and routine procedures should be used in assessing the yield of a new test. As an example, the method is applied to the use of the treadmill exercise test in evaluating the prognosis of patients with suspected coronary artery disease. The treadmill test is shown to provide surprisingly little prognostic information beyond that obtained from basic clinical measurements.
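The abstract's idea of a test's incremental prognostic yield can be sketched as the reduction in outcome uncertainty once the test result is known, on top of baseline clinical measurements. The toy cohort below is invented; only the information-theoretic bookkeeping is meant to be illustrative:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a list of binary outcomes."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def conditional_entropy(groups):
    """H(outcome | grouping): size-weighted entropy over patient subgroups."""
    n = sum(len(g) for g in groups)
    return sum(len(g) / n * entropy(g) for g in groups)

# Toy cohort: 1 = adverse event, 0 = event-free.
outcomes_by_clinical = [        # grouped by baseline clinical risk alone
    [0, 0, 0, 1],               # low-risk group
    [0, 1, 1, 1],               # high-risk group
]
outcomes_by_clinical_plus_test = [  # each risk group split by test result
    [0, 0, 0], [1],                 # low-risk: test negative / positive
    [0, 1], [1, 1],                 # high-risk: test negative / positive
]

h_clinical = conditional_entropy(outcomes_by_clinical)
h_with_test = conditional_entropy(outcomes_by_clinical_plus_test)
print(f"residual uncertainty, clinical only:  {h_clinical:.3f} bits")
print(f"residual uncertainty, clinical+test:  {h_with_test:.3f} bits")
print(f"incremental yield of the test:        {h_clinical - h_with_test:.3f} bits")
```

The reference's finding about the treadmill test corresponds to the case where this incremental term is close to zero: the test barely sharpens the prognosis already implied by the clinical measurements.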

                Author and article information

                Contributors
                eyuboglu@stanford.edu
                Journal
                Nat Commun
                Nat Commun
                Nature Communications
                Nature Publishing Group UK (London)
                2041-1723
                25 March 2021
                Volume: 12
                Article number: 1880
                Affiliations
                [1] GRID grid.168010.e, ISNI 0000000419368956, Department of Computer Science, Stanford University, Stanford, CA, USA
                [2] GRID grid.168010.e, ISNI 0000000419368956, Department of Radiology, Stanford University, Stanford, CA, USA
                [3] GRID grid.168010.e, ISNI 0000000419368956, Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, CA, USA
                Author information
                http://orcid.org/0000-0002-8412-0266
                http://orcid.org/0000-0001-5157-9903
                http://orcid.org/0000-0002-1526-3685
                http://orcid.org/0000-0001-5579-6825
                http://orcid.org/0000-0002-4002-0562
                Article
                22018
                DOI: 10.1038/s41467-021-22018-1
                7994797
                33767174
                b9fd1f49-1c85-47c4-954a-542d7ae95235
                © The Author(s) 2021

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 23 March 2020
                Accepted: 16 February 2021
                Funding
                Funded by: General Electric (GE) (FundRef https://doi.org/10.13039/100004313)
                Funded by: U.S. Department of Health & Human Services | NIH | U.S. National Library of Medicine (NLM) (FundRef https://doi.org/10.13039/100000092)
                Award ID: R01LM012966
                Categories
                Article

                Keywords
                machine learning, three-dimensional imaging, computed tomography, positron-emission tomography, whole body imaging
