Language Features Matter: Effective Language Representations for Vision-Language Tasks

Preprint · Open Access


Abstract

Shouldn't language and vision features be treated equally in vision-language (VL) tasks? Many VL approaches treat the language component as an afterthought, using simple language models that are either built upon fixed word embeddings trained on text-only data or are learned from scratch. We believe that language features deserve more attention, and conduct experiments that compare different word embeddings, language models, and embedding augmentation steps on five common VL tasks: image-sentence retrieval, image captioning, visual question answering, phrase grounding, and text-to-clip retrieval. Our experiments provide some striking results: an average embedding language model outperforms an LSTM on retrieval-style tasks; state-of-the-art representations such as BERT perform relatively poorly on vision-language tasks. From this comprehensive set of experiments, we propose a set of best practices for incorporating the language component of VL tasks. To further elevate language features, we also show that knowledge in vision-language problems can be transferred across tasks to gain performance with multi-task training. This multi-task training is applied to a new Graph Oriented Vision-Language Embedding (GrOVLE), which we adapt from Word2Vec using WordNet and an original visual-language graph built from Visual Genome, providing a ready-to-use vision-language embedding: http://ai.bu.edu/grovle.
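
The abstract's headline comparison is between two sentence encoders built on the same word embeddings: a simple mean-pooled ("average embedding") model and an LSTM. A minimal sketch of the two, assuming a PyTorch implementation with illustrative class names and dimensions (not taken from the paper's code release):

import torch
import torch.nn as nn

class AverageEmbeddingEncoder(nn.Module):
    """Mean-pool word embeddings into a single sentence vector."""
    def __init__(self, vocab_size, dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)

    def forward(self, tokens):
        # tokens: (batch, seq_len) word indices, 0 = padding
        mask = (tokens != 0).unsqueeze(-1).float()       # (batch, seq_len, 1)
        summed = (self.embed(tokens) * mask).sum(dim=1)  # sum real tokens only
        return summed / mask.sum(dim=1).clamp(min=1.0)   # mean over real tokens

class LSTMEncoder(nn.Module):
    """Run an LSTM over the embedded sentence; keep the final hidden state."""
    def __init__(self, vocab_size, dim=300, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)

    def forward(self, tokens):
        _, (h_n, _) = self.lstm(self.embed(tokens))
        return h_n[-1]                                   # (batch, hidden)

if __name__ == "__main__":
    tokens = torch.randint(1, 1000, (4, 12))             # 4 sentences, 12 tokens each
    print(AverageEmbeddingEncoder(1000)(tokens).shape)   # torch.Size([4, 300])
    print(LSTMEncoder(1000)(tokens).shape)               # torch.Size([4, 512])

Per the abstract, the mean-pooling encoder is the one that wins on retrieval-style tasks, which makes it a sensible baseline before reaching for recurrent or Transformer-based language models.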

Author and article information

Date: 17 August 2019
Type: Article (preprint)
arXiv ID: 1908.06327
License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comments: ICCV 2019 accepted paper
Subject classes: cs.CV, cs.CL
Keywords: Computer vision & pattern recognition; Theoretical computer science
