
      User Education in Automated Driving: Owner’s Manual and Interactive Tutorial Support Mental Model Formation and Human-Automation Interaction


          Abstract

Automated driving systems (ADS), and combinations of these with advanced driver assistance systems (ADAS), will soon be available to a large consumer population. Apart from testing automated driving features and human–machine interfaces (HMI), the development and evaluation of training for interacting with driving automation has been largely neglected. The present work outlines the conceptual development of two possible approaches to user education: the owner's manual and an interactive tutorial. These approaches are investigated by comparing them to a baseline consisting of generic information about the system function. Using a between-subjects design, N = 24 participants complete one training prior to interacting with the ADS HMI in a driving simulator. Results show that both the owner's manual and the interactive tutorial led to an increased understanding of driving automation systems as well as improved interaction performance. This work contributes to method development for the evaluation of ADS by proposing two alternative approaches to user education and outlining their implications both for application in realistic settings and for HMI testing.


Most cited references (22)


          Inference by eye: confidence intervals and how to read pictures of data.

Wider use in psychology of confidence intervals (CIs), especially as error bars in figures, is a desirable development. However, psychologists seldom use CIs and may not understand them well. The authors discuss the interpretation of figures with error bars and analyze the relationship between CIs and statistical significance testing. They propose 7 rules of eye to guide the inferential use of figures with error bars. These include general principles: Seek bars that relate directly to effects of interest, be sensitive to experimental design, and interpret the intervals. They also include guidelines for inferential interpretation of the overlap of CIs on independent group means. Wider use of interval estimation in psychology has the potential to improve research communication substantially.
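For readers who want to see the overlap guideline in practice, the following is a minimal sketch, not taken from the cited article: it computes 95% CIs for two independent group means and compares their overlap with a conventional two-sample t-test. The group data, sample sizes, and variable names are hypothetical and serve only as an illustration.

# Minimal illustrative sketch (not from the cited article): compute 95% CIs
# for two independent group means and compare with a conventional t-test.
# The data below are hypothetical values used only for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
group_a = rng.normal(loc=5.0, scale=1.0, size=12)   # hypothetical group A
group_b = rng.normal(loc=6.0, scale=1.0, size=12)   # hypothetical group B

def ci95(x):
    """Return (mean, half-width) of the 95% CI for the mean of x."""
    mean = x.mean()
    half_width = stats.sem(x) * stats.t.ppf(0.975, df=len(x) - 1)
    return mean, half_width

for name, g in (("A", group_a), ("B", group_b)):
    m, h = ci95(g)
    print(f"Group {name}: mean = {m:.2f}, 95% CI = [{m - h:.2f}, {m + h:.2f}]")

# Rule-of-eye check: for two independent groups of reasonable size, 95% CIs
# that overlap by less than about half an average margin of error correspond
# roughly to p < .05 on a two-sample t-test.
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"Independent-samples t-test: t = {t_stat:.2f}, p = {p_val:.3f}")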

            Performance of Bootstrapping Approaches to Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling


              Pilots' monitoring strategies and performance on automated flight decks: an empirical study combining behavioral and eye-tracking data.

The objective of the study was to examine pilots' automation monitoring strategies and performance on highly automated commercial flight decks. A considerable body of research and operational experience has documented breakdowns in pilot-automation coordination on modern flight decks. These breakdowns are often considered symptoms of monitoring failures even though, to date, only limited and mostly anecdotal data exist concerning pilots' monitoring strategies and performance. Twenty experienced B-747-400 airline pilots flew a 1-hr scenario involving challenging automation-related events on a full-mission simulator. Behavioral, mental model, and eye-tracking data were collected. The findings from this study confirm that pilots monitor basic flight parameters to a much greater extent than visual indications of the automation configuration. More specifically, they frequently fail to verify manual mode selections or notice automatic mode changes. In other cases, they do not process mode annunciations in sufficient depth to understand their implications for aircraft behavior. Low system observability and gaps in pilots' understanding of complex automation modes were shown to contribute to these problems. Our findings describe and explain shortcomings in pilots' automation monitoring strategies and performance based on converging behavioral, eye-tracking, and mental model data. They confirm that monitoring failures are one major contributor to breakdowns in pilot-automation interaction. The findings from this research can inform the design of improved training programs and automation interfaces that support more effective system monitoring.

                Author and article information

Journal: Information (INFOGG)
Publisher: MDPI AG
ISSN: 2078-2489
Published: April 17, 2019
Volume: 10
Issue: 4
Article number: 143
DOI: 10.3390/info10040143
Record ID: ff896133-b613-471d-85a9-db07295a9c5a
Open access: yes
Copyright: © 2019
License: https://creativecommons.org/licenses/by/4.0/ (CC BY 4.0)

