
      lamBERT: Language and Action Learning Using Multimodal BERT

      Preprint


          Abstract

          Recently, the bidirectional encoder representations from transformers (BERT) model has attracted much attention in natural language processing owing to its high performance on language-understanding tasks. BERT learns a language representation that can be adapted to various downstream tasks via pre-training on a large corpus in an unsupervised manner. This study proposes the language and action learning using multimodal BERT (lamBERT) model, which enables the joint learning of language and actions by (1) extending the BERT model to multimodal representations and (2) integrating it with reinforcement learning. To verify the proposed model, an experiment is conducted in a grid environment in which the agent must understand language instructions to act properly. As a result, the lamBERT model obtained higher rewards in both multitask and transfer settings than baseline models, such as a convolutional neural network-based model and the lamBERT model without pre-training.
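          The abstract describes the core mechanism only at a high level. The sketch below is an illustrative PyTorch reconstruction of that idea, not the authors' code: a BERT-style transformer encoder runs over a single sequence built from language-token embeddings and embedded visual observations, and a pooled representation feeds an actor-critic head for reinforcement learning. All layer sizes, the patch-based visual tokenisation, the segment-style type embeddings, and the head design are assumptions made for the example.

          # Minimal sketch (illustrative, not the paper's implementation) of a
          # multimodal BERT-style policy: language tokens and flattened visual
          # patches share one transformer encoder; an actor-critic head acts on
          # the pooled [CLS]-style output. All dimensions are hypothetical.
          import torch
          import torch.nn as nn

          class MultimodalBERTPolicy(nn.Module):
              def __init__(self, vocab_size=1000, patch_dim=48, d_model=128,
                           n_heads=4, n_layers=4, n_actions=4, max_len=64):
                  super().__init__()
                  self.cls = nn.Parameter(torch.zeros(1, 1, d_model))    # [CLS]-style token
                  self.word_emb = nn.Embedding(vocab_size, d_model)      # language tokens
                  self.patch_proj = nn.Linear(patch_dim, d_model)        # flattened image patches
                  self.pos_emb = nn.Parameter(torch.zeros(1, 1 + max_len, d_model))
                  # 0 = language, 1 = vision; analogous to BERT's segment embeddings
                  self.type_emb = nn.Embedding(2, d_model)
                  layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                                     dim_feedforward=4 * d_model,
                                                     batch_first=True)
                  self.encoder = nn.TransformerEncoder(layer, n_layers)
                  self.policy_head = nn.Linear(d_model, n_actions)       # actor
                  self.value_head = nn.Linear(d_model, 1)                # critic

              def forward(self, token_ids, patches):
                  B, Lt = token_ids.shape
                  Lv = patches.shape[1]
                  lang = self.word_emb(token_ids) + self.type_emb(
                      torch.zeros(B, Lt, dtype=torch.long, device=token_ids.device))
                  vis = self.patch_proj(patches) + self.type_emb(
                      torch.ones(B, Lv, dtype=torch.long, device=token_ids.device))
                  # One joint sequence: [CLS] + language tokens + visual patches
                  x = torch.cat([self.cls.expand(B, -1, -1), lang, vis], dim=1)
                  x = x + self.pos_emb[:, : x.shape[1]]
                  h = self.encoder(x)[:, 0]                              # pooled [CLS] state
                  return self.policy_head(h), self.value_head(h).squeeze(-1)

          # Toy usage: a 6-token instruction plus a grid observation split into
          # 16 flattened patches of dimension 48 (all shapes are assumptions).
          model = MultimodalBERTPolicy()
          logits, value = model(torch.randint(0, 1000, (1, 6)),
                                torch.randn(1, 16, 48))
          action = torch.distributions.Categorical(logits=logits).sample()

          In the model the abstract describes, the encoder weights would come from BERT-style unsupervised pre-training before the reinforcement-learning phase; here everything is randomly initialised and an action is sampled once purely to show the data flow. The reported advantage over the lamBERT variant without pre-training rests on exactly that pre-training step.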


          Author and article information

          15 April 2020
          arXiv:2004.07093

          http://arxiv.org/licenses/nonexclusive-distrib/1.0/

          8 pages, 9 figures
          cs.LG, cs.CL, stat.ML

          Theoretical computer science, Machine learning, Artificial intelligence
