Open Access

      Explainability and causability in digital pathology

Review article


          Abstract

The current move towards digital pathology enables pathologists to use artificial intelligence (AI)‐based computer programmes for the advanced analysis of whole slide images. However, the best‐performing AI algorithms for image analysis are currently deemed black boxes, since it often remains unclear – even to their developers – why an algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to give medical experts insight into the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning explains why explainability is a specific challenge in this field. To address this challenge, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods that make black‐box machine‐learning systems more transparent. These XAI methods are a first step towards making black‐box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and to achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field, since explainability and causability also play a crucial role in compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive ‘what‐if’ questions. In pathology, such user interfaces will not only be important for achieving a high level of causability; they will also be crucial for keeping the human in the loop and bringing the medical expert's experience and conceptual knowledge into AI processes.
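As a concrete illustration of the kind of XAI technique the review surveys: perturbation-based methods estimate which image regions drive a classifier's output by hiding them and measuring the change in the prediction. The sketch below is a minimal occlusion-sensitivity example for a hypothetical patch-level tumour classifier; the model_predict interface, patch size, and grey baseline are illustrative assumptions, not details taken from the article.

    # Minimal occlusion-sensitivity sketch (illustrative only; the classifier,
    # patch size, and baseline value are assumptions, not from the review).
    import numpy as np

    def occlusion_map(model_predict, image, patch=16, baseline=0.0):
        """Slide an occluding square over the image and record how much the
        predicted probability drops when each region is hidden."""
        h, w = image.shape[:2]
        reference = model_predict(image)  # probability on the intact image
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = baseline  # hide this region
                heat[i // patch, j // patch] = reference - model_predict(occluded)
        return heat  # large values mark regions the prediction depends on

Each cell of the resulting map answers a simple ‘what‐if’ question (what if this region were hidden?); as the review argues, such output only supports causability once it is embedded in an interface that lets the pathologist pose these questions interactively.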


Most cited references (48)

Open Access

          A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis

          Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging.
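For context, diagnostic accuracy in such comparisons is conventionally summarised by sensitivity and specificity derived from a 2×2 contingency table. The short sketch below uses invented counts purely to show the arithmetic; they are not data from the meta-analysis.

    # Sensitivity and specificity from a 2x2 contingency table.
    # The counts are invented for illustration, not taken from the study.
    tp, fn = 90, 10  # diseased cases: correctly / incorrectly classified
    tn, fp = 80, 20  # healthy cases: correctly / incorrectly classified

    sensitivity = tp / (tp + fn)  # 0.90: share of diseased cases detected
    specificity = tn / (tn + fp)  # 0.80: share of healthy cases cleared
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")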
Open Access

            Causability and explainability of artificial intelligence in medicine

Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself, and classic AI represented comprehensible, retraceable approaches. However, their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI: to reach a level of explainable medicine, we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability, as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction.
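The usability analogy suggests that causability, like usability, can be measured with questionnaire-style instruments. The sketch below scores a ten-item Likert questionnaire in the spirit of the System Causability Scale later proposed by the same group; the item count, rating range, and normalisation to [0, 1] are assumptions for illustration, not the published instrument.

    # Questionnaire-style causability score (hypothetical sketch; the items,
    # rating range, and [0, 1] normalisation are assumptions, not the
    # published System Causability Scale).
    def causability_score(ratings, max_rating=5):
        """ratings: one Likert answer (1..max_rating) per questionnaire item."""
        if not ratings or any(not 1 <= r <= max_rating for r in ratings):
            raise ValueError("each rating must lie in 1..max_rating")
        return sum(ratings) / (len(ratings) * max_rating)

    # Example: a pathologist rates ten explanation-quality statements.
    print(causability_score([4, 5, 3, 4, 4, 5, 4, 3, 4, 4]))  # -> 0.8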
Open Access

              Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

Background
Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; it also invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

Methods
Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

Results
Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

Conclusions
To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.

                Author and article information

                Contributors
                heimo.mueller@medunigraz.at
Journal
The Journal of Pathology: Clinical Research (J Pathol Clin Res)
John Wiley & Sons, Inc. (Hoboken, USA); ISSN 2056-4538
Published online 12 April 2023; issue date July 2023
Volume 9, Issue 4 (doiID: 10.1002/cjp2.v9.4), pages 251–260
Affiliations
[1] Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
[2] Institute of Pathology, Charité‐Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany
[3] DAI‐Labor, Agent Oriented Technologies (AOT), Technische Universität Berlin, Berlin, Germany
[4] Human‐Centered AI Lab, University of Natural Resources and Life Sciences Vienna, Vienna, Austria
Author notes
[*] Correspondence to: Heimo Müller, Diagnostic and Research Institute of Pathology, Medical University of Graz, Neue Stiftingtalstraße 6, A‐8010 Graz, Austria. E‐mail: heimo.mueller@medunigraz.at

                Author information
                https://orcid.org/0000-0003-2718-7648
                https://orcid.org/0000-0002-6786-5194
                https://orcid.org/0000-0002-9691-4872
Article
DOI: 10.1002/cjp2.322
PMCID: PMC10240147
PMID: 37045794
                © 2023 The Authors. The Journal of Pathology: Clinical Research published by The Pathological Society of Great Britain and Ireland and John Wiley & Sons Ltd.

                This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

History
Received: 3 November 2022; Revised: 17 February 2023; Accepted: 16 March 2023
                Page count
                Figures: 2, Tables: 0, Pages: 10, Words: 6607
Funding
Funded by: Austrian Science Fund (doi: 10.13039/501100002428); Award ID: P‐32554
Funded by: German Federal Ministry for Economic Affairs and Climate Action (BMWK); Award ID: FKZ 01MK20002A
Funded by: Horizon 2020 Framework Programme (doi: 10.13039/100010661); Award IDs: 824087, 826078, 857122, 874662
Funded by: Österreichische Forschungsförderungsgesellschaft (doi: 10.13039/501100004955); Award ID: 879881
                Categories
Invited Review

Keywords: digital pathology, artificial intelligence, explainability, causability
