
      Causability and explainability of artificial intelligence in medicine

      Review article


          Abstract

          Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself and classic AI represented comprehensible, retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system.
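
          As a concrete illustration of the kind of post‐hoc DL interpretation the abstract refers to, below is a minimal sketch of class activation mapping (CAM), one widely used technique for highlighting which image regions a convolutional network relied on for its prediction (for instance, a patch of a histopathology slide). The pretrained ResNet‐18, the helper names, and the input shape are illustrative assumptions, not the authors' code.

          # Minimal CAM sketch (illustrative assumption, not the authors' implementation).
          # Requires: torch and torchvision >= 0.13 (for the weights enum API).
          import torch
          import torch.nn.functional as F
          from torchvision import models

          model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
          model.eval()

          features = {}

          def save_features(module, inputs, output):
              # Capture the last convolutional feature map, shape (B, C, H, W).
              features["maps"] = output.detach()

          model.layer4.register_forward_hook(save_features)

          def class_activation_map(image_tensor):
              """Return (predicted class index, H x W heat map) for a (1, 3, 224, 224) input."""
              with torch.no_grad():
                  logits = model(image_tensor)
                  cls = logits.argmax(dim=1).item()
                  weights = model.fc.weight[cls]      # (C,) classifier weights of the predicted class
                  maps = features["maps"][0]          # (C, H, W) convolutional features
                  cam = torch.einsum("c,chw->hw", weights, maps)
                  cam = F.relu(cam)                   # keep only positive evidence
                  cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
              return cls, cam

          cls, cam = class_activation_map(torch.rand(1, 3, 224, 224))  # dummy input for illustration

          In practice the heat map is upsampled to the input resolution and overlaid on the image; such a visual explanation is the system property (explainability) that a clinician must then connect to domain-level causal reasoning (causability).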

          This article is categorized under:

          • Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction

          Graphical abstract: Explainable AI.


                Author and article information

                Contributors
                Corresponding author: andreas.holzinger@medunigraz.at

                Journal
                Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery (Wiley Interdiscip Rev Data Min Knowl Discov; WIDM)
                Publisher: Wiley Periodicals, Inc. (Hoboken, USA)
                ISSN: 1942-4787 (print); 1942-4795 (electronic)
                Published online: 02 April 2019
                Issue: July/August 2019, Volume 9, Issue 4 (doi: 10.1002/widm.2019.9.issue-4)
                Article number: e1312
                Affiliations
                [1] Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
                [2] Department of Biomedical Imaging and Image‐guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Vienna, Austria
                [3] Institute of Pathology, Medical University Graz, Graz, Austria

                Correspondence
                Andreas Holzinger, Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, A‐8036 Graz, Austria.
                Email: andreas.holzinger@medunigraz.at

                Author information
                ORCID: https://orcid.org/0000-0002-6786-5194

                Article
                Publisher article ID: WIDM1312
                DOI: 10.1002/widm.1312
                PMCID: PMC7017860
                PMID: 32089788
                Record ID: 43bc39c0-0b3a-441f-bd3a-12df1a959f4a
                © 2019 The Authors. WIREs Data Mining and Knowledge Discovery published by Wiley Periodicals, Inc.

                This is an open access article under the terms of the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

                History
                Received: 19 November 2018
                Revised: 26 January 2019
                Accepted: 24 February 2019
                Page count
                Figures: 4, Tables: 0, Pages: 13, Words: 10515
                Funding
                FeatureCloud, H2020 EU Project (award 826078)
                Hochschulraum‐Infrastrukturmittelfonds
                MEFO (award MEFO‐Graz)
                Austrian Science Fund FWF (award I2714‐B31)
                EU H2020 (award 765148)
                Categories
                Human Centricity and User Interaction
                Advanced Review

                Keywords: artificial intelligence, causability, explainability, explainable AI, histopathology, medicine
