Open Access

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

Preprint


Abstract

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Implicit in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate those predictions are, and effort to either the up-front cost of interpreting the model or the cost of making predictions about its behavior. In this work, we propose anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations with clear coverage boundaries. We compare aLIME to linear LIME in simulated experiments, and demonstrate the flexibility of aLIME with qualitative examples from a variety of domains and tasks.


Author and article information

arXiv: 1611.05817 (2016-11-17)
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems
Subjects: stat.ML, cs.AI, cs.LG
