      Open Access

      Trust in AutoML: Exploring Information Needs for Establishing Trust in Automated Machine Learning Systems

      Preprint


          Abstract

          We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies -- qualitative interviews, a controlled experiment, and a card-sorting task -- to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased users' trust in, and understanding of, the tool; and that, of all the proposed features, model performance metrics and visualizations are the most important information for data scientists when establishing trust in an AutoML tool.
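
          As a concrete illustration of the kind of automation the abstract refers to, the loop below performs a rudimentary AutoML-style search: it tries several candidate model families, tunes each family's hyperparameters, and selects the winner by cross-validated score. This is a minimal sketch using scikit-learn, not the tool evaluated in the paper's studies; the dataset, candidate models, and grids are assumptions chosen for the example.

              # Minimal sketch of what an AutoML system automates: model selection
              # plus hyperparameter optimization, judged by validation score.
              # Illustrative only -- not the AutoML tool studied in the paper.
              from sklearn.datasets import load_breast_cancer
              from sklearn.ensemble import RandomForestClassifier
              from sklearn.linear_model import LogisticRegression
              from sklearn.model_selection import GridSearchCV, train_test_split

              X, y = load_breast_cancer(return_X_y=True)
              X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

              # Candidate model families and hyperparameter grids (assumed for the example).
              candidates = [
                  (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1, 10]}),
                  (RandomForestClassifier(random_state=0),
                   {"n_estimators": [50, 200], "max_depth": [None, 5]}),
              ]

              best_score, best_model = -1.0, None
              for estimator, grid in candidates:
                  search = GridSearchCV(estimator, grid, cv=5)  # tune this family
                  search.fit(X_train, y_train)
                  if search.best_score_ > best_score:
                      best_score, best_model = search.best_score_, search.best_estimator_

              # Performance metrics such as this held-out accuracy are the kind of
              # transparency information the paper finds most important for trust.
              print(best_model)
              print(f"held-out accuracy: {best_model.score(X_test, y_test):.3f}")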


          Author and article information

          Date: 17 January 2020
          Type: Article
          DOI: 10.1145/3377325.3377501
          arXiv: 2001.06509
          License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
          Custom metadata: IUI 2020
          Subject categories: cs.LG, cs.CY, cs.HC, stat.ML
          Keywords: Applied computer science, Machine learning, Artificial intelligence, Human-computer interaction
