
      Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond




          Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning that aims to unbox how the black-box decisions of AI systems are made. This research field inspects the measures and models involved in decision-making and seeks solutions for explaining them explicitly. Many machine learning algorithms cannot reveal how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, the lack of explainability of these black-box models can undermine our confidence in AI systems. Although deep neural networks can deliver striking gains in performance, XAI is becoming increasingly crucial for deep learning powered applications, especially in medical and healthcare studies. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI and, in particular, its advances in healthcare applications. We then introduced our XAI solutions leveraging multi-modal and multi-centre data fusion, and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we envisage successful applications to a broader range of clinical questions.


          • We performed a mini-review for XAI in medicine and digital healthcare.

          • Our mini-review comprehensively covers the most recent studies of XAI in the medical field.

          • We proposed two XAI methods and constructed two representative showcases.

          • One of our XAI models is based on weakly supervised learning for COVID-19 classification.

          • One of our XAI models is developed for ventricle segmentation in hydrocephalus patients.
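          To make the idea of "unboxing" a black-box model concrete, the sketch below illustrates occlusion sensitivity, one common model-agnostic XAI technique of the kind surveyed here: slide a masking patch over the input and record how much the model's score drops at each position. The toy `predict` function and its weight template are hypothetical stand-ins for illustration only, not the models proposed in this article.

```python
import numpy as np

# Toy "black-box" classifier: scores an 8x8 image against a fixed
# weight template. A stand-in for any opaque model (hypothetical).
weights = np.zeros((8, 8))
weights[2:6, 2:6] = 1.0  # the model only "looks at" the centre patch

def predict(image):
    """Black-box score for one 8x8 image."""
    return float((image * weights).sum())

def occlusion_map(image, patch=2):
    """Slide a zeroing patch over the image; the score drop at each
    position measures how much that region drives the prediction."""
    base = predict(image)
    heat = np.zeros_like(image)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - predict(occluded)
    return heat

rng = np.random.default_rng(0)
image = rng.random((8, 8))
heat = occlusion_map(image)
# Importance concentrates where the model actually attends (the centre),
# while regions the model ignores score (near) zero.
assert heat[3, 3] > heat[0, 0]
```

The resulting heat map can be overlaid on the input as a saliency map, which is how such explanations are typically presented to clinicians.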

          Related collections

          Most cited references: 217


          Deep Residual Learning for Image Recognition


            UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age

            Cathie Sudlow and colleagues describe the UK Biobank, a large population-based prospective study, established to allow investigation of the genetic and non-genetic determinants of the diseases of middle and old age.

              A survey on deep learning in medical image analysis

              Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.

                Author and article information

                Inf Fusion
                An International Journal on Information Fusion
                January 2022, Volume 77, Pages 29–52
                [a] National Heart and Lung Institute, Imperial College London, London, UK
                [b] Royal Brompton Hospital, London, UK
                [c] Imperial Institute of Advanced Technology, Hangzhou, China
                [d] Hangzhou Ocean’s Smart Boya Co., Ltd, China
                [e] University of California, San Diego, La Jolla, CA, USA
                [f] Radiology Department, Shenzhen Second People’s Hospital, Shenzhen, China
                © 2021 The Authors

                This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


                explainable AI, information fusion, multi-domain information fusion, weakly supervised learning, medical image analysis

