
      A mental models approach for defining explainable artificial intelligence


          Abstract

          Background

          Wide-ranging concerns exist regarding the use of black-box modelling methods in sensitive contexts such as healthcare. Despite performance gains and hype, uptake of artificial intelligence (AI) is hindered by these concerns. Explainable AI is thought to help alleviate them. However, existing definitions of 'explainable' do not form a solid foundation for this work.

          Methods

          We critique recent reviews of the literature on: the agency of an AI within a team; mental models, especially as they apply to healthcare, and the practical aspects of their elicitation; and existing and current definitions of explainability, especially from the perspective of AI researchers. On the basis of this literature, we create a new definition of explainable, and supporting terms, providing definitions that can be objectively evaluated. Finally, we apply the new definition of explainable to three existing models, demonstrating how it can apply to previous research and providing guidance for future research on the basis of this definition.

          Results

          Existing definitions of explanation are premised on global applicability and do not address the question 'understandable by whom?'. Eliciting mental models can be likened to creating explainable AI if one considers the AI as a member of a team. On this basis, we define explainability in terms of the context of the model, comprising the purpose, audience, and language of the model and explanation. As examples, this definition is applied to regression models, neural nets, and human mental models in operating-room teams.

          Conclusions

          Existing definitions of explanation fall short of ensuring that the concerns raised by practical applications are resolved. Defining explainability in terms of the context of its application forces evaluations to be aligned with the practical goals of the model. Further, it allows researchers to explicitly distinguish between explanations for technical and lay audiences, so that different evaluations can be applied to each.

          Most cited references (44)


          Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI


            Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)


              EuroSCORE II.

              To update the European System for Cardiac Operative Risk Evaluation (EuroSCORE) risk model. A dedicated website collected prospective risk and outcome data on 22,381 consecutive patients undergoing major cardiac surgery in 154 hospitals in 43 countries over a 12-week period (May-July 2010). Completeness and accuracy were validated during data collection using mandatory field entry, error and range checks and after data collection using summary feedback confirmation by responsible officers and multiple logic checks. Information was obtained on existing EuroSCORE risk factors and additional factors proven to influence risk from research conducted since the original model. The primary outcome was mortality at the base hospital. Secondary outcomes were mortality at 30 and 90 days. The data set was divided into a developmental subset for logistic regression modelling and a validation subset for model testing. A logistic risk model (EuroSCORE II) was then constructed and tested. Compared with the original 1995 EuroSCORE database (in brackets), the mean age was up at 64.7 (62.5) with 31% females (28%). More patients had New York Heart Association class IV, extracardiac arteriopathy, renal and pulmonary dysfunction. Overall mortality was 3.9% (4.6%). When applied to the current data, the old risk models overpredicted mortality (actual: 3.9%; additive predicted: 5.8%; logistic predicted: 7.57%). EuroSCORE II was well calibrated on testing in the validation data subset of 5553 patients (actual mortality: 4.18%; predicted: 3.95%). Very good discrimination was maintained with an area under the receiver operating characteristic curve of 0.8095. Cardiac surgical mortality has significantly reduced in the last 15 years despite older and sicker patients. EuroSCORE II is better calibrated than the original model yet preserves powerful discrimination. It is proposed for the future assessment of cardiac surgical risk.
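The EuroSCORE II abstract above describes a standard risk-modelling workflow: split the data into a developmental subset and a validation subset, fit a logistic regression risk model on the former, then check calibration (actual vs. predicted mortality) and discrimination (area under the ROC curve) on the latter. The sketch below illustrates that workflow on synthetic data; the variable names, coefficients, and event rate are illustrative assumptions, not the actual EuroSCORE II risk factors or model.

```python
# Sketch of the modelling workflow described in the EuroSCORE II abstract:
# develop a logistic risk model, then assess calibration and discrimination
# on a held-out validation subset. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for preoperative risk factors (age, NYHA class,
# renal dysfunction, ...); the real factors and weights are not reproduced.
n = 20000
X = rng.normal(size=(n, 5))
logit = -3.5 + X @ np.array([0.8, 0.5, 0.4, 0.3, 0.2])
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # rare outcome, roughly 5%

# Developmental / validation split, mirroring the paper's study design
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_dev, y_dev)
pred = model.predict_proba(X_val)[:, 1]

# Calibration: mean predicted risk should track the actual event rate
print(f"actual mortality:    {y_val.mean():.3f}")
print(f"predicted mortality: {pred.mean():.3f}")

# Discrimination: area under the receiver operating characteristic curve
print(f"AUC: {roc_auc_score(y_val, pred):.3f}")
```

In the paper's terms, a well-calibrated model has predicted mortality close to actual mortality in the validation subset, while a high AUC indicates the model ranks higher-risk patients above lower-risk ones.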

                Author and article information

                Contributors
                m.merry@auckland.ac.nz
                pat@cs.auckland.ac.nz
                jim@cs.auckland.ac.nz
                Journal
                BMC Med Inform Decis Mak (BMC Medical Informatics and Decision Making)
                BioMed Central (London)
                ISSN: 1472-6947
                Published: 9 December 2021
                Volume 21, Article 344
                Affiliations
                School of Computer Science, University of Auckland, Symonds St, Auckland, New Zealand (GRID grid.9654.e, ISNI 0000 0004 0372 3343)
                Article
                DOI: 10.1186/s12911-021-01703-7
                PMC: 8656102
                © The Author(s) 2021

                Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

                History
                Received: 19 June 2021
                Accepted: 24 November 2021
                Categories
                Research
                Custom metadata
                © The Author(s) 2021

                Bioinformatics & Computational biology
                Keywords: explainability, xai, black-box models, mental models
