
      The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point


          Abstract

This paper discusses the current critique of neural network-based Natural Language Understanding solutions known as language models. We argue that much of the current debate revolves around an argumentative error that we refer to as the singleton fallacy: the assumption that a concept (in this case, language, meaning, and understanding) refers to a single, uniform phenomenon, which in the current debate is assumed to be unobtainable by (current) language models. By contrast, we argue that positing some form of (mental) “unobtanium” as the definiens of understanding inevitably leads to a dualistic position, and that such a position is precisely the original motivation for developing distributional methods in computational linguistics. As such, we argue that language models present a theoretically (and practically) sound approach that is our current best bet for computers to achieve language understanding. This understanding must, however, be construed as a computational means to an end.


Most cited references (34)

• A unified architecture for natural language processing
• Deep Contextualized Word Representations
• Minds, brains, and programs

                Author and article information

Journal
Frontiers in Artificial Intelligence (Front. Artif. Intell.)
Frontiers Media S.A.
ISSN: 2624-8212
Published: 07 September 2021
Volume: 4
Article: 682578
                Affiliations
[1] AI Sweden, Stockholm, Sweden
[2] RISE, Stockholm, Sweden
                Author notes

                Edited by: Kenneth Ward Church, Baidu, United States

                Reviewed by: Michael Zock, Centre National de la Recherche Scientifique (CNRS), France

                Iryna Gurevych, Darmstadt University of Technology, Germany

*Correspondence: Magnus Sahlgren, magnus.sahlgren@ai.se

                This article was submitted to Language and Computation, a section of the journal Frontiers in Artificial Intelligence

Article: 682578
DOI: 10.3389/frai.2021.682578
PMCID: PMC8452877
                Copyright © 2021 Sahlgren and Carlsson.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 18 March 2021
Accepted: 20 August 2021
                Categories
                Artificial Intelligence
                Conceptual Analysis

language models, natural language understanding, representation learning, neural networks, meaning
