
      Pathways to breast cancer screening artificial intelligence algorithm validation

      research-article


          Abstract

          As more artificial intelligence (AI)-enhanced mammography screening tools enter the clinical market, greater focus will be placed on external validation in diverse patient populations. In this viewpoint, we outline lessons learned from prior efforts in this field and the need to validate algorithms on newer screening technologies and in diverse patient populations, and we conclude by discussing the need for a framework for continuous monitoring and recalibration of these AI tools. Sufficient validation and continuous monitoring of emerging AI tools for breast cancer screening will require greater stakeholder engagement and the creation of shared policies and guidelines.


          Most cited references (13)


          Ensuring Fairness in Machine Learning to Advance Health Equity

          Machine learning is used increasingly in clinical care to improve diagnosis, treatment selection, and health system efficiency. Because machine-learning models learn from historically collected data, populations that have experienced human and structural biases in the past—called protected groups—are vulnerable to harm by incorrect predictions or withholding of resources. This article describes how model design, biases in data, and the interactions of model predictions with clinicians and patients may exacerbate health care disparities. Rather than simply guarding against these harms passively, machine-learning systems should be used proactively to advance health equity. For that goal to be achieved, principles of distributive justice must be incorporated into model design, deployment, and evaluation. The article describes several technical implementations of distributive justice—specifically those that ensure equality in patient outcomes, performance, and resource allocation—and guides clinicians as to when they should prioritize each principle. Machine learning is providing increasingly sophisticated decision support and population-level monitoring, and it should encode principles of justice to ensure that models benefit all patients.

            Diagnostic Accuracy of Digital Screening Mammography With and Without Computer-Aided Detection.

            After the US Food and Drug Administration (FDA) approved computer-aided detection (CAD) for mammography in 1998 and the Centers for Medicare and Medicaid Services (CMS) provided increased payment in 2002, CAD technology disseminated rapidly. Despite sparse evidence that CAD improves the accuracy of mammographic interpretation, and despite costs of over $400 million a year, CAD is currently used for most screening mammograms in the United States.

              Detection of Breast Cancer with Mammography: Effect of an Artificial Intelligence Support System

              Purpose: To compare breast cancer detection performance of radiologists reading mammographic examinations unaided versus supported by an artificial intelligence (AI) system.
              Materials and Methods: An enriched retrospective, fully crossed, multireader, multicase, HIPAA-compliant study was performed. Screening digital mammographic examinations from 240 women (median age, 62 years; range, 39-89 years) performed between 2013 and 2017 were included. The 240 examinations (100 showing cancers, 40 leading to false-positive recalls, 100 normal) were interpreted by 14 Mammography Quality Standards Act-qualified radiologists, once with and once without AI support. The readers provided a Breast Imaging Reporting and Data System score and probability of malignancy. AI support provided radiologists with interactive decision support (clicking on a breast region yields a local cancer likelihood score), traditional lesion markers for computer-detected abnormalities, and an examination-based cancer likelihood score. The area under the receiver operating characteristic curve (AUC), specificity and sensitivity, and reading time were compared between conditions by using mixed-models analysis of variance and generalized linear models for multiple repeated measurements.
              Results: On average, the AUC was higher with AI support than with unaided reading (0.89 vs 0.87, respectively; P = .002). Sensitivity increased with AI support (86% [86 of 100] vs 83% [83 of 100]; P = .046), whereas specificity trended toward improvement (79% [111 of 140] vs 77% [108 of 140]; P = .06). Reading time per case was similar (unaided, 146 seconds; supported by AI, 149 seconds; P = .15). The AUC with the AI system alone was similar to the average AUC of the radiologists (0.89 vs 0.87).
              Conclusion: Radiologists improved their cancer detection at mammography when using an artificial intelligence system for support, without requiring additional reading time. Published under a CC BY 4.0 license. See also the editorial by Bahl in this issue.
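              As a rough illustration of the AUC comparison described in this abstract (not taken from the article itself), the short Python sketch below compares hypothetical reader scores with and without AI support on synthetic labels; all scores, thresholds, sample sizes, and variable names here are made up for demonstration only.

                  # Hypothetical sketch: comparing unaided vs AI-supported reading via ROC AUC.
                  import numpy as np
                  from sklearn.metrics import roc_auc_score

                  rng = np.random.default_rng(0)

                  # Synthetic ground truth for 240 examinations: 100 cancers, 140 non-cancers.
                  y_true = np.concatenate([np.ones(100), np.zeros(140)]).astype(int)

                  # Made-up probability-of-malignancy scores from one reader; the AI-supported
                  # scores nudge cancer cases slightly higher to mimic a sensitivity gain.
                  unaided = np.clip(rng.normal(0.45, 0.15, 240) + 0.25 * y_true, 0, 1)
                  with_ai = np.clip(unaided + 0.05 * y_true + rng.normal(0, 0.02, 240), 0, 1)

                  print(f"AUC unaided: {roc_auc_score(y_true, unaided):.3f}")
                  print(f"AUC with AI: {roc_auc_score(y_true, with_ai):.3f}")

                  # Sensitivity and specificity at an arbitrary recall threshold of 0.6.
                  for name, scores in (("unaided", unaided), ("with AI", with_ai)):
                      recalled = scores >= 0.6
                      sens = recalled[y_true == 1].mean()
                      spec = (~recalled)[y_true == 0].mean()
                      print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")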

                Author and article information

                Contributors
                Journal
                Breast
                The Breast: official journal of the European Society of Mastology
                Elsevier
                ISSN: 0960-9776 (print), 1532-3080 (electronic)
                09 September 2019; August 2020
                Volume 52, pages 146-149
                Affiliations
                [a] Department of Radiology, University of Washington School of Medicine; Department of Health Services, University of Washington School of Public Health; Hutchinson Institute for Cancer Outcomes Research, Seattle, WA, USA
                [b] The University of Sydney, Faculty of Medicine and Health, Sydney School of Public Health, Australia
                [c] Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
                [d] Kaiser Permanente Washington Health Research Institute, Seattle, WA, USA
                Author notes
                Corresponding author. Department of Radiology, University of Washington School of Medicine, 1144 Eastlake Avenue East, LG-212, Seattle, WA 98109, USA. stophlee@uw.edu
                Article
                PII: S0960-9776(19)30554-5
                DOI: 10.1016/j.breast.2019.09.005
                PMC: PMC7061055
                PMID: 31540699
                © 2019 Elsevier Ltd.

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

                History
                Received: 30 July 2019
                Accepted: 8 September 2019
                Categories
                Virtual special issue: Artificial Intelligence in Breast Cancer Care; Edited by Nehmat Houssami, Maria João Cardoso, Giuseppe Pozzi and Brigitte Seroussi

                Obstetrics & Gynecology
                artificial intelligence, breast cancer, screening, mammography, population health, validation, transparency, reproducibility
