
      Learning to generate pointing gestures in situated embodied conversational agents

      research-article


          Abstract

          One of the main goals of robotics and intelligent agent research is to enable robots and intelligent agents to communicate with humans in physically situated settings. Human communication consists of both verbal and non-verbal modes. Recent studies in enabling communication for intelligent agents have focused on verbal modes, i.e., language and speech. However, in a situated setting the non-verbal mode is crucial for an agent to adopt flexible communication strategies. In this work, we focus on learning to generate non-verbal communicative expressions in situated embodied interactive agents. Specifically, we show that an agent can learn pointing gestures in a physically simulated environment through a combination of imitation and reinforcement learning that achieves high motion naturalness and high referential accuracy. We compared our proposed system against several baselines in both subjective and objective evaluations. The subjective evaluation was conducted in a virtual reality setting where an embodied referential game is played between the user and the agent in a shared 3D space, a setup that fully assesses the communicative capabilities of the generated gestures. The evaluations show that our model achieves higher referential accuracy and motion naturalness than a state-of-the-art supervised-learning motion synthesis model, demonstrating the promise of combining imitation and reinforcement learning for generating communicative gestures. Additionally, our system is robust in a physically simulated environment and thus has the potential to be applied to robots.
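
          As a hedged illustration of how the imitation and referential objectives described above might be combined, the sketch below blends an imitation term (closeness to a reference motion-capture pose) with a pointing term (angular error between the pointing direction and the target) into a single per-step reward for a physics-based agent. The function names, weights, and error metrics are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Illustrative sketch only: a DeepMimic-style blend of an imitation reward and
# a pointing (referential) task reward. Weights and error metrics are assumed,
# not taken from the paper.

def imitation_reward(pose, ref_pose, scale=2.0):
    """Exponentiated negative pose error w.r.t. the reference motion frame."""
    err = float(np.sum((np.asarray(pose) - np.asarray(ref_pose)) ** 2))
    return np.exp(-scale * err)

def pointing_reward(wrist_pos, fingertip_pos, target_pos, scale=5.0):
    """Reward based on the angle between the pointing ray (wrist -> fingertip)
    and the direction from the wrist to the referent target."""
    point_dir = np.asarray(fingertip_pos) - np.asarray(wrist_pos)
    target_dir = np.asarray(target_pos) - np.asarray(wrist_pos)
    cos_angle = np.dot(point_dir, target_dir) / (
        np.linalg.norm(point_dir) * np.linalg.norm(target_dir) + 1e-8)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return np.exp(-scale * angle ** 2)

def combined_reward(pose, ref_pose, wrist_pos, fingertip_pos, target_pos,
                    w_imitate=0.7, w_task=0.3):
    """Weighted sum used as the per-step RL reward."""
    return (w_imitate * imitation_reward(pose, ref_pose)
            + w_task * pointing_reward(wrist_pos, fingertip_pos, target_pos))
```

          In a setup like this, the imitation term keeps the generated motion natural while the task term drives referential accuracy, mirroring the two evaluation criteria reported in the abstract.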

          Most cited references (80)

          Proximal Policy Optimization Algorithms

          We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
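
          The heart of the method described above is the clipped surrogate objective. The minimal sketch below is a generic illustration of that objective rather than code from either paper; the per-sample log-probability and advantage tensors are assumed inputs.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective of PPO, returned as a loss (negated) so it
    can be minimized with a standard gradient-descent optimizer."""
    ratio = torch.exp(log_probs_new - log_probs_old)  # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Taking the elementwise minimum keeps the update conservative, which is
    # what allows several minibatch epochs on the same batch of samples.
    return -torch.mean(torch.min(unclipped, clipped))
```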

            DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills


              Synthesizing multimodal utterances for conversational agents


                Author and article information

                Contributors
                Journal
                Frontiers in Robotics and AI (Front. Robot. AI)
                Frontiers Media S.A.
                ISSN: 2296-9144
                Published: 30 March 2023
                Volume: 10, Article: 1110534
                Affiliations
                Division of Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
                Author notes

                Edited by: Nutan Chen, Volkswagen Group, Germany

                Reviewed by: Izidor Mlakar, University of Maribor, Slovenia

                Xingyuan Zhang, ARGMAX.AI Volkswagen Group Machine Learning Research Lab, Germany

                *Correspondence: Anna Deichler, deichler@kth.se

                This article was submitted to Robot Learning and Evolution, a section of the journal Frontiers in Robotics and AI

                Article
                Article number: 1110534
                DOI: 10.3389/frobt.2023.1110534
                PMCID: 10097883
                Copyright © 2023 Deichler, Wang, Alexanderson and Beskow.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 28 November 2022
                Accepted: 16 March 2023
                Funding
                The work was funded by the Advanced Adaptive Intelligent Agents project (Digital Futures) and the Swedish Research Council, grant no. 2018-05409.
                Categories
                Robotics and AI
                Original Research

                Keywords: reinforcement learning, imitation learning, non-verbal communication, embodied interactive agents, gesture generation, physics-aware machine learning
