Abstract
One of the most important obstacles to deploying predictive models is that people neither understand nor trust them. Knowing which variables are important in a model's prediction, and how they are combined, goes a long way toward helping people understand and trust automated decision-making systems. Here we propose interpretable
decision sets, a framework for building predictive models that are highly accurate,
yet also highly interpretable. Decision sets are sets of independent if-then rules.
Because each rule can be applied independently, decision sets are simple, concise,
and easily interpretable. We formalize decision set learning through an objective
function that simultaneously optimizes accuracy and interpretability of the rules.
In particular, our approach learns short, accurate, and non-overlapping rules that
cover the whole feature space and pay attention to small but important classes. Moreover,
we prove that our objective is a non-monotone submodular function, which we efficiently
optimize to find a near-optimal set of rules. Experiments show that interpretable
decision sets are as accurate at classification as state-of-the-art machine learning
techniques. They are also three times smaller on average than rule-based models learned
by other methods. Finally, a user study shows that people can answer multiple-choice questions about the decision boundaries of interpretable decision sets, and write descriptions of classes based on them, faster and more accurately than with other rule-based models designed for interpretability. Overall, our
framework provides a new approach to interpretable machine learning that balances
accuracy, interpretability, and computational efficiency.
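To make the idea of independent if-then rules concrete, the following is a minimal illustrative sketch (not the authors' implementation or learned model): the feature names, rules, default class, and majority-vote tie-breaking below are hypothetical assumptions chosen only to show how each rule in a decision set can be read and applied on its own.

```python
# Minimal sketch of classifying with a decision set: an unordered collection of
# independent if-then rules. Rules, features, and tie-breaking are illustrative.

from collections import Counter

# Each rule is (predicate, predicted_class); predicates are checked independently,
# so any single rule can be read and understood without the others.
rules = [
    (lambda x: x["age"] < 30 and x["exercise"] == "high", "low_risk"),
    (lambda x: x["bmi"] >= 35, "high_risk"),
    (lambda x: x["smoker"] and x["age"] >= 50, "high_risk"),
]

DEFAULT_CLASS = "low_risk"  # fallback for examples that no rule covers


def predict(x):
    """Apply every rule independently; no rule depends on the outcome of another."""
    votes = [label for predicate, label in rules if predicate(x)]
    if not votes:
        return DEFAULT_CLASS
    # If several rules fire (the kind of overlap the learning objective penalizes),
    # this sketch simply takes a majority vote among the covering rules.
    return Counter(votes).most_common(1)[0][0]


print(predict({"age": 55, "exercise": "low", "bmi": 36, "smoker": True}))  # -> "high_risk"
```

Because prediction only involves checking which rules cover an example, the contribution of each rule to the decision boundary is directly visible, which is what makes the representation easy for people to inspect.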