
      Explainable artificial intelligence: a comprehensive review


Most cited references (222)


          Double-slit photoelectron interference in strong-field ionization of the neon dimer

Wave-particle duality is an inherent peculiarity of the quantum world. The double-slit experiment has frequently been used to probe different aspects of this fundamental concept. The occurrence of interference rests on the lack of which-way information and on the absence of decoherence mechanisms that could scramble the wave fronts. Here, we report the observation of two-center interference in the molecular-frame photoelectron momentum distribution upon ionization of the neon dimer by a strong laser field. Postselection of ions, measured in coincidence with electrons, allows the symmetry of the residual ion to be chosen, leading to the observation of both gerade and ungerade types of interference.

            Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study

Our previous study demonstrated increased expression of heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age- and sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc-associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy; however, baseline Hsp90 predicted the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age- and sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung function, and, in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicted the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide.

              Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Black box machine learning models are currently used for high-stakes decision-making throughout society, causing problems in healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

                Author and article information

Journal: Artificial Intelligence Review (Artif Intell Rev)
Publisher: Springer Science and Business Media LLC
ISSN: 0269-2821 (print); 1573-7462 (electronic)
Published: November 18, 2021 (online); June 2022 (issue)
Volume 55, Issue 5, pages 3503-3568
DOI: 10.1007/s10462-021-10088-y
ScienceOpen record ID: c3117388-e015-4b3b-846d-14fc32f9907d
© 2022
License/TDM: https://www.springer.com/tdm
