      Explainability for artificial intelligence in healthcare: a multidisciplinary perspective


          Abstract

          Background

          Explainability is one of the most heavily debated topics in the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, their lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; it also raises a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

          Methods

          Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

          Results

          Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.
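
          As an illustration of the technological perspective, the sketch below shows one widely used post-hoc explanation technique, permutation feature importance, applied to a hypothetical black-box clinical risk model. The feature names and synthetic data are invented for this example and are not a method proposed or evaluated in the paper.

# A minimal, illustrative sketch (not from the paper): permutation feature
# importance as a post-hoc explanation for a black-box clinical risk model.
# Feature names and data are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical inputs
X = rng.normal(size=(500, len(feature_names)))
# Synthetic outcome: risk driven mainly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

# Post-hoc explanation: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: importance {score:.3f}")

          Higher scores indicate features the model relies on more heavily; whether such post-hoc summaries constitute adequate explanations is precisely the kind of question the legal, medical, and patient perspectives examined here raise.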

          Conclusions

          To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.


                Author and article information

                Contributors
                julia.amann@hest.ethz.ch
                Journal
                BMC Medical Informatics and Decision Making (BMC Med Inform Decis Mak)
                BioMed Central (London)
                ISSN: 1472-6947
                Published: 30 November 2020
                Volume: 20
                Article number: 310
                Affiliations
                [1] Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Hottingerstrasse 10, 8092 Zurich, Switzerland
                [2] Charité Lab for Artificial Intelligence in Medicine (CLAIM), Charité - Universitätsmedizin Berlin, Berlin, Germany
                [3] School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, Birmingham, UK
                Author information
                ORCID: http://orcid.org/0000-0003-2155-5286
                Article
                DOI: 10.1186/s12911-020-01332-6
                PMCID: PMC7706019
                PMID: 33256715
                © The Author(s) 2020

                Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

                History
                Received: 22 July 2020
                Accepted: 15 November 2020
                Funding
                Funded by: Horizon 2020 Research and Innovation Programme
                Award ID: 777107
                Categories
                Research Article

                Bioinformatics & Computational biology
                Keywords: artificial intelligence, machine learning, explainability, interpretability, clinical decision support
