
      The false hope of current approaches to explainable artificial intelligence in health care

Marzyeh Ghassemi, Luke Oakden-Rayner, Andrew L Beam
      The Lancet Digital Health
      Elsevier BV



Most cited references (46)


          High-performance medicine: the convergence of human and artificial intelligence

          Eric Topol (2019)
          The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient-doctor relationship or facilitate its erosion remains to be seen.

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cynthia Rudin (2019)
            Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward - it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

Dissecting racial bias in an algorithm used to manage the health of populations

Ziad Obermeyer, Brian Powers, Christine Vogeli, Sendhil Mullainathan (2019)
              Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.

                Author and article information

Journal
The Lancet Digital Health
Elsevier BV
ISSN: 2589-7500
November 2021
Volume 3, Issue 11: e745-e750

Article
DOI: 10.1016/S2589-7500(21)00208-9
PMID: 34711379
© 2021

License
https://www.elsevier.com/tdm/userlicense/1.0/
http://creativecommons.org/licenses/by/4.0/

