      Visual and Affective Multimodal Models of Word Meaning in Language and Mind

      research-article


          Abstract

          One of the main limitations of natural language‐based approaches to meaning is that they do not incorporate multimodal representations the way humans do. In this study, we evaluate how well different kinds of models account for people's representations of both concrete and abstract concepts. The models we compare include unimodal distributional linguistic models as well as multimodal models which combine linguistic with perceptual or affective information. There are two types of linguistic models: those based on text corpora and those derived from word association data. We present two new studies and a reanalysis of a series of previous studies. The studies demonstrate that both visual and affective multimodal models better capture behavior that reflects human representations than unimodal linguistic models. The size of the multimodal advantage depends on the nature of semantic representations involved, and it is especially pronounced for basic‐level concepts that belong to the same superordinate category. Additional visual and affective features improve the accuracy of linguistic models based on text corpora more than those based on word associations; this suggests systematic qualitative differences between what information is encoded in natural language versus what information is reflected in word associations. Altogether, our work presents new evidence that multimodal information is important for capturing both abstract and concrete words and that fully representing word meaning requires more than purely linguistic information. Implications for both embodied and distributional views of semantic representation are discussed.
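The abstract's central comparison — unimodal linguistic vectors versus multimodal vectors that append perceptual or affective features — can be illustrated with a minimal sketch. Everything below (the toy 5-dimensional "linguistic" vectors, the valence/arousal/dominance features, and the concatenation-with-weighting scheme) is a hypothetical stand-in, not the authors' actual models or data:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical low-dimensional "linguistic" vectors (stand-ins for
# corpus-derived embeddings such as count- or prediction-based models).
ling = {
    "spider": np.array([0.9, 0.1, 0.4, 0.0, 0.2]),
    "snake":  np.array([0.8, 0.2, 0.5, 0.1, 0.1]),
}

# Hypothetical affective features: valence, arousal, dominance in [0, 1].
affect = {
    "spider": np.array([0.20, 0.80, 0.30]),
    "snake":  np.array([0.15, 0.85, 0.25]),
}

def multimodal(word, weight=1.0):
    """Concatenate L2-normalized linguistic and affective blocks.

    `weight` scales the affective block relative to the linguistic one,
    one simple way to combine modalities."""
    l = ling[word] / np.linalg.norm(ling[word])
    a = affect[word] / np.linalg.norm(affect[word])
    return np.concatenate([l, weight * a])

# Unimodal vs. multimodal similarity for an affectively matched pair.
uni = cosine(ling["spider"], ling["snake"])
multi = cosine(multimodal("spider"), multimodal("snake"))
```

With block-normalized concatenation and equal weights, the multimodal cosine is the mean of the per-modality cosines, so affectively similar pairs like the one above move closer together — a toy version of the effect the study measures against human similarity judgments.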

Most cited references: 87

                Author and article information

                Contributors
                simon.dedeyne@unimelb.edu.au
Journal
Cognitive Science (Cogn Sci)
John Wiley and Sons Inc. (Hoboken)
ISSN: 0364-0213 (print); 1551-6709 (electronic)
Published: 11 January 2021 (January 2021 issue)
Volume 45, Issue 1 (doiID: 10.1111/cogs.v45.1): e12922
Affiliations
[1] School of Psychological Sciences, University of Melbourne
[2] School of Psychology, University of New South Wales
[3] Department of Computer Science, KU Leuven
                Author notes
[*] Correspondence should be sent to Simon De Deyne, School of Psychological Sciences, University of Melbourne, Melbourne, 3010 Vic., Australia. E‐mail: simon.dedeyne@unimelb.edu.au

                Author information
                https://orcid.org/0000-0002-7899-6210
                https://orcid.org/0000-0001-7648-6578
                https://orcid.org/0000-0002-6976-0732
Article
COGS12922
DOI: 10.1111/cogs.12922
PMCID: PMC7816238
PMID: 33432630
d40c188f-78e7-42f3-9a4a-8bfee6c7643c
                © 2020 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS).

This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivs 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial, and no modifications or adaptations are made.

History
Received: 10 January 2019
Revised: 26 October 2020
Accepted: 10 November 2020
                Page count
                Figures: 7, Tables: 8, Pages: 43, Words: 38292
Funding
Funded by: Australian Research Council (ARC), open-funder-registry 10.13039/501100000923
Award IDs: DE14010749, DP150103280
Funded by: CHIST‐ERA EU
Award ID: MUSTER
                Categories
                Regular Article

Keywords: multimodal representations, semantic networks, distributional semantics, visual features, affect
