      Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review

      Review Article


          Abstract

          Background and Objective

          Machine learning (ML) models are increasingly being developed in oncology research for use in the clinic. However, while more complicated models may improve predictive or prognostic power, a hurdle to their adoption is limited model interpretability: their inner workings can be perceived as a “black box”. Explainable artificial intelligence (XAI) frameworks, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are novel, model-agnostic approaches that aim to provide insight into the inner workings of the “black box” by producing quantitative visualizations of how model predictions are calculated. In doing so, XAI can transform complicated ML models into easily understandable charts and interpretable sets of rules, which can give providers an intuitive understanding of the knowledge generated, thus facilitating the deployment of such models in routine clinical workflows.
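
          To make this concrete, the sketch below shows one way SHAP can be applied in a model-agnostic fashion to a generic tabular risk model. It is illustrative only and not drawn from the reviewed studies; the feature names, synthetic data, and model are assumptions, and it presumes the Python shap and scikit-learn packages are available.

```python
# Illustrative sketch only: model-agnostic SHAP explanation of a generic
# tabular risk model. Feature names and data are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

feature_names = ["age", "tumor_size_cm", "marker_a", "marker_b"]  # hypothetical

# Synthetic stand-in for clinical data: 300 patients, 4 features, a toy risk score.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
risk = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, risk)

# KernelExplainer treats the model as a black box: it needs only a prediction
# function and a background sample, which is what "model-agnostic" means here.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])  # per-feature contributions for 5 patients

# Global view: mean absolute SHAP value per feature ranks overall importance;
# shap.summary_plot(shap_values, X[:5], feature_names=feature_names) would chart it.
for name, importance in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {importance:.3f}")
```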

          Methods

          We performed a comprehensive, non-systematic review of the latest literature to define use cases of model-agnostic XAI frameworks in oncologic research. The examined database was PubMed/MEDLINE. The last search was run on May 1, 2022.

          Key Content and Findings

          In this review, we identified several fields in oncology research where ML models and XAI were utilized to improve interpretability, including prognostication, diagnosis, radiomics, pathology, treatment selection, radiation treatment workflows, and epidemiology. Within these fields, XAI facilitates determination of feature importance in the overall model, visualization of relationships and/or interactions, evaluation of how individual predictions are produced, feature selection, identification of prognostic and/or predictive thresholds, and assessment of overall confidence in the models, among other benefits. These examples provide a basis on which future work can expand, which can facilitate adoption in the clinic where the complexity of such modeling would otherwise be prohibitive.
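
          As an example of how an individual prediction can be explained, the hypothetical sketch below applies LIME to a generic binary classifier on synthetic tabular data; the feature names, outcome labels, and model are assumptions for illustration and are not taken from any of the reviewed studies.

```python
# Illustrative sketch only: LIME explanation of a single prediction from a
# black-box classifier. Feature names, labels, and data are hypothetical.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["age", "tumor_size_cm", "marker_a", "marker_b"]  # hypothetical

# Synthetic stand-in for a binary oncologic outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# LIME perturbs the instance of interest and fits a simple local surrogate model,
# yielding a small set of weighted, human-readable rules for this one prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["low risk", "high risk"],
    discretize_continuous=True,
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for rule, weight in explanation.as_list():
    print(f"{rule}: {weight:+.3f}")
```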

          Conclusions

          Model-agnostic XAI frameworks offer an intuitive and effective means of describing oncology ML models, with applications including prognostication and determination of optimal treatment regimens. Using such frameworks presents an opportunity to improve understanding of ML models, which is a critical step to their adoption in the clinic.

          Related collections

          Most cited references: 59

          Radiomics: Images Are More than Pictures, They Are Data

          This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer.

            Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): Explanation and Elaboration

            The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.

              Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

              Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward - it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

                Author and article information

                Journal
                Translational Cancer Research (Transl Cancer Res; TCR)
                AME Publishing Company
                ISSN: 2218-676X (print); 2219-6803 (online)
                October 2022
                Volume 11, Issue 10: 3853-3868
                Affiliations
                [1] Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, USA;
                [2] Departments of Bioengineering and Integrated Biology and Physiology, University of California Los Angeles, Los Angeles, CA, USA;
                [3] Department of Computational and Quantitative Medicine, City of Hope National Medical Center, Duarte, CA, USA;
                [4] Department of Medical Oncology, City of Hope National Medical Center, Duarte, CA, USA
                Author notes

                Contributions: (I) Conception and design: C Ladbury; (II) Administrative support: A Amini; (III) Provision of study materials or patients: C Ladbury; (IV) Collection and assembly of data: C Ladbury; (V) Data analysis and interpretation: C Ladbury; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

                Correspondence to: Arya Amini, MD. Department of Radiation Oncology, City of Hope National Medical Center, 1500 E Duarte Rd., Duarte, CA 91010, USA. Email: aamini@coh.org.

                ORCID: 0000-0002-2668-3415.

                Article
                tcr-11-10-3853
                DOI: 10.21037/tcr-22-1626
                PMC: 9641128
                PMID: 36388027
                2022 Translational Cancer Research. All rights reserved.

                Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0.

                History
                10 June 2022
                07 September 2022
                Categories
                Review Article

                Keywords: explainable artificial intelligence (XAI); local interpretable model-agnostic explanations (LIME); machine learning (ML); Shapley additive explanations (SHAP)
