      A Multimodal Emotion Detection System during Human-Robot Interaction

Research article

          Abstract

In this paper, a multimodal user-emotion detection system for social robots is presented. The system is intended to be used during human–robot interaction and is integrated into the robot's overall interaction system, the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice analysis and facial expression analysis. To analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, a second component, Gender and Emotion Facial Analysis (GEFA), has also been developed; it integrates two third-party solutions, the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once GEVA and GEFA produce their results, a decision rule is applied to combine the information from both channels. The result of this rule, the detected emotion, is passed to the dialog system through communicative acts: each communicative act conveys, among other things, the user's detected emotion to the RDS, so that it can adapt its strategy and achieve greater user satisfaction during the human–robot dialog. Both new components, GEVA and GEFA, can also be used individually, and both are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results of applying this decision rule in these experiments show a high success rate in automatic user-emotion recognition, improving on the results given by either information channel (audio or visual) alone.
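The abstract does not spell out the decision rule itself, only its role: reconciling the emotion label reported by the audio channel (GEVA) with the one reported by the visual channel (GEFA). As a purely illustrative sketch, a confidence-weighted fusion of two channel outputs could look like the following; the ChannelResult structure, the confidence fields, and the tie-breaking policy are assumptions made here for illustration, not the rule used by the RDS.

```python
from dataclasses import dataclass


@dataclass
class ChannelResult:
    """Output of one modality (hypothetical structure).

    GEVA (audio) and GEFA (face) are assumed here to report an emotion
    label plus a confidence in [0, 1]; the real components may expose
    richer information.
    """
    emotion: str       # e.g. "happiness", "sadness", "neutral"
    confidence: float  # self-reported reliability of the estimate


def fuse(audio: ChannelResult, face: ChannelResult,
         min_confidence: float = 0.3) -> str:
    """Toy decision rule combining both channels.

    - If neither channel is confident enough, fall back to "neutral".
    - If both channels agree, keep the shared label.
    - If they disagree, trust the more confident channel.
    This is an illustrative policy, not the rule from the paper.
    """
    if max(audio.confidence, face.confidence) < min_confidence:
        return "neutral"
    if audio.emotion == face.emotion:
        return audio.emotion
    return audio.emotion if audio.confidence >= face.confidence else face.emotion


# Example: the face channel is more confident, so its label wins.
detected = fuse(ChannelResult("neutral", 0.40),
                ChannelResult("happiness", 0.85))
print(detected)  # -> "happiness"
```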

                Author and article information

Journal
Sensors (Basel, Switzerland)
Publisher: Molecular Diversity Preservation International (MDPI)
ISSN: 1424-8220
Published: 14 November 2013 (November 2013 issue)
Volume: 13
Issue: 11
Pages: 15549-15581
Affiliations
[1] Robotics Lab, Universidad Carlos III de Madrid, Av. de la Universidad 30, Leganés, Madrid 28911, Spain; E-Mails: mmalfaz@ing.uc3m.es (M.M.); jgorosti@ing.uc3m.es (J.F.G.); salichs@ing.uc3m.es (M.A.S.)
[2] Institute for Systems and Robotics (ISR), North Tower, Av. Rovisco Pais 1, Lisbon 1049-001, Portugal; E-Mail: jseq@isr.ist.utl.pt
Author notes
[*] Author to whom correspondence should be addressed; E-Mail: fernando.alonso@uc3m.es; Tel.: +34-626-540-365.
Article
Publisher ID: sensors-13-15549
DOI: 10.3390/s131115549
PMCID: PMC3871074
PMID: 24240598
                © 2013 by the authors; licensee MDPI, Basel, Switzerland.

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

History
Received: 07 August 2013
Revised: 24 September 2013
Accepted: 22 October 2013
                Categories
                Article

Subject: Biomedical engineering
Keywords: emotion recognition, affective computing, human–robot interaction, dialog systems, FACS
