Open Access

      Machine Learning Interpretability: A Survey on Methods and Metrics

      Electronics
      MDPI AG


          Abstract

Machine learning systems are becoming increasingly ubiquitous. Their expanding adoption is accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes: their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and has focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows that there is no consensus on how to assess explanation quality. What are the most suitable metrics to assess the quality of an explanation? The aim of this article is to review the current state of research on machine learning interpretability, focusing on its societal impact and on the methods and metrics developed so far. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.


Most cited references (15)


          Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)


            Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation


              The Magical Mystery Four: How is Working Memory Capacity Limited, and Why?

              Working memory storage capacity is important because cognitive tasks can be completed only with sufficient ability to hold information as it is processed. The ability to repeat information depends on task demands but can be distinguished from a more constant, underlying mechanism: a central memory store limited to 3 to 5 meaningful items in young adults. I will discuss why this central limit is important, how it can be observed, how it differs among individuals, and why it may occur.

                Author and article information

Journal: Electronics (ELECGJ), MDPI AG
ISSN: 2079-9292
Published: August 2019 (online July 26 2019)
Volume: 8, Issue: 8, Article: 832
DOI: 10.3390/electronics8080832
© 2019. Licensed under CC BY 4.0: https://creativecommons.org/licenses/by/4.0/

