
      Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction


          Abstract

          Background

The ability to follow one another's gaze plays an important role in our social cognition, especially when we perform tasks together synchronously. We investigate how gaze cues can improve performance in a simple coordination task (the mirror game), in which two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, we assign the leader's role to a robotic avatar. We contrast two conditions, in which the avatar either provides or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior makes the avatar more realistic and human-like from the user's point of view.

          Methodology/Principal Findings

Forty-three subjects participated in 8 trials of the mirror game. Each subject played the game in both conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed subjectively with a post-hoc questionnaire. A quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in the subjects' reaction time (RT) when gaze cues were provided. This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's actions. An analysis of the frequency content of the two players' hand movements reveals that gaze cues improve the overall temporal coordination between the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic, but also easier to interact with.
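As a rough illustration of how such a reaction-time/synchrony measure can be computed, the sketch below (Python) estimates the follower's lag as the shift that maximizes the cross-correlation between the leader's and follower's hand trajectories. The abstract does not specify the exact procedure used in the study; the function name, sampling rate, and the use of cross-correlation here are illustrative assumptions only.

import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_reaction_time(leader, follower, fs=100.0):
    """Estimate how far (in seconds) the follower trails the leader.

    Illustrative sketch only: assumes 1-D hand-position trajectories
    sampled at the same rate fs (Hz); the lag is taken as the shift that
    maximizes the cross-correlation of the mean-removed signals.
    """
    leader = np.asarray(leader, dtype=float) - np.mean(leader)
    follower = np.asarray(follower, dtype=float) - np.mean(follower)
    xcorr = correlate(follower, leader, mode="full")
    lags = correlation_lags(len(follower), len(leader), mode="full")
    best_lag = lags[np.argmax(xcorr)]   # positive: follower trails the leader
    return best_lag / fs

# Synthetic check: the follower copies the leader 150 ms late.
t = np.arange(0.0, 10.0, 0.01)                   # 100 Hz, 10 s
leader = np.sin(2 * np.pi * 0.5 * t)
follower = np.sin(2 * np.pi * 0.5 * (t - 0.15))
print(estimate_reaction_time(leader, follower))  # approx. 0.15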

          Conclusion/Significance

This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partner, even when the partner is a computer-animated avatar. Moreover, this study provides further evidence that implementing biological features, here task-relevant gaze cues, enables a humanoid robotic avatar to appear more human-like, and thus increases the user's sense of affiliation.


Most cited references (27)


          The eye contact effect: mechanisms and development.

          The 'eye contact effect' is the phenomenon that perceived eye contact with another human face modulates certain aspects of the concurrent and/or immediately following cognitive processing. In addition, functional imaging studies in adults have revealed that eye contact can modulate activity in structures in the social brain network, and developmental studies show evidence for preferential orienting towards, and processing of, faces with direct gaze from early in life. We review different theories of the eye contact effect and advance a 'fast-track modulator' model. Specifically, we hypothesize that perceived eye contact is initially detected by a subcortical route, which then modulates the activation of the social brain as it processes the accompanying detailed sensory information.

            Action plans used in action observation.

            How do we understand the actions of others? According to the direct matching hypothesis, action understanding results from a mechanism that maps an observed action onto motor representations of that action. Although supported by neurophysiological and brain-imaging studies, direct evidence for this hypothesis is sparse. In visually guided actions, task-specific proactive eye movements are crucial for planning and control. Because the eyes are free to move when observing such actions, the direct matching hypothesis predicts that subjects should produce eye movements similar to those produced when they perform the tasks. If an observer analyses action through purely visual means, however, eye movements will be linked reactively to the observed action. Here we show that when subjects observe a block stacking task, the coordination between their gaze and the actor's hand is predictive, rather than reactive, and is highly similar to the gaze-hand coordination when they perform the task themselves. These results indicate that during action observation subjects implement eye motor programs directed by motor representations of manual actions and thus provide strong evidence for the direct matching hypothesis.

              Prediction in joint action: what, when, and where.

              Drawing on recent findings in the cognitive and neurosciences, this article discusses how people manage to predict each other's actions, which is fundamental for joint action. We explore how a common coding of perceived and performed actions may allow actors to predict the what, when, and where of others' actions. The "what" aspect refers to predictions about the kind of action the other will perform and to the intention that drives the action. The "when" aspect is critical for all joint actions requiring close temporal coordination. The "where" aspect is important for the online coordination of actions because actors need to effectively distribute a common space. We argue that although common coding of perceived and performed actions alone is not sufficient to enable one to engage in joint action, it provides a representational platform for integrating the actions of self and other. The final part of the paper considers links between lower-level processes like action simulation and higher-level processes like verbal communication and mental state attribution that have previously been at the focus of joint action research. Copyright © 2009 Cognitive Science Society, Inc.

                Author and article information

Contributors
Role: Editor

Journal
PLoS ONE, Public Library of Science (San Francisco, CA, USA)
ISSN: 1932-6203
Published: 9 June 2016; 11(6): e0156874

Affiliations
[1] Learning Algorithms and Systems Laboratory, School of Engineering, EPFL, Lausanne, Switzerland
[2] University Department of Adult Psychiatry, CHRU, & Laboratory Epsylon, EA 4556, Montpellier, France
[3] Movement to Health Laboratory, EuroMov, Montpellier-1 University, Montpellier, France
[4] Institut Universitaire de France, Paris, France
Editor affiliation: Defence Science and Technology Group, Australia
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                Conceived and designed the experiments: MK AS AB. Performed the experiments: MK AS. Analyzed the data: MK AB BB. Contributed reagents/materials/analysis tools: MK AB. Wrote the paper: MK SR BB AB.

Article
Manuscript ID: PONE-D-15-51592
DOI: 10.1371/journal.pone.0156874
PMCID: PMC4900607
PMID: 27281341
Record ID: ed1267d8-2903-4075-a31c-268b0b9ffc86
                © 2016 Khoramshahi et al

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

History
Received: 8 December 2015
Accepted: 22 May 2016
                Page count
                Figures: 12, Tables: 0, Pages: 21
                Funding
                Funded by: EU project AlterEgo
                Award ID: 600010
                This research was supported by EU project AlterEgo under grant agreement number 600010. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Categories
Research Article
Biology and Life Sciences > Behavior
Biology and Life Sciences / Medicine and Health Sciences > Anatomy > Musculoskeletal System > Limbs (Anatomy) > Arms > Hands
Biology and Life Sciences > Neuroscience > Cognitive Science > Cognitive Neuroscience > Reaction Time
Engineering and Technology > Mechanical Engineering > Robotics > Robots
Engineering and Technology > Mechanical Engineering > Robotics > Robotic Behavior
Biology and Life Sciences / Medicine and Health Sciences > Anatomy > Head / Ocular System > Eyes
Research and Analysis Methods > Research Design > Survey Research > Questionnaires
Custom metadata
The minimal data set underlying the findings in this study can be found in the following stable public repository: https://github.com/khoramshahi/Human-Avatar-interaction-dataset. This dataset is also included in the Supporting Information files. More details about the dataset (e.g., data collection and motion capture) are available by contacting mahdi.khoramshahi@epfl.ch.

