      Linguistic generalization and compositionality in modern artificial neural networks

Marco Baroni 1,2,3
      Philosophical Transactions of the Royal Society B: Biological Sciences
      The Royal Society


          Abstract

          In the last decade, deep artificial neural networks have achieved astounding performance in many natural language-processing tasks. Given the high productivity of language, these models must possess effective generalization abilities. It is widely assumed that humans handle linguistic productivity by means of algebraic compositional rules: are deep networks similarly compositional? After reviewing the main innovations characterizing current deep language-processing networks, I discuss a set of studies suggesting that deep networks are capable of subtle grammar-dependent generalizations, but also that they do not rely on systematic compositional rules. I argue that the intriguing behaviour of these devices (still awaiting a full understanding) should be of interest to linguists and cognitive scientists, as it offers a new perspective on possible computational strategies to deal with linguistic productivity beyond rule-based compositionality, and it might lead to new insights into the less systematic generalization patterns that also appear in natural language.

          This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
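
          The question of whether deep networks generalize by algebraically recombining familiar parts is usually probed with training/test splits that withhold specific combinations at training time. The following minimal Python sketch illustrates the idea in the style of SCAN-like command-to-action tasks; the toy vocabulary and the interpret() oracle are illustrative assumptions, not material from the article.

          # Toy grammar: a command is a verb, optionally followed by a repetition modifier.
          PRIMITIVES = {"walk": "WALK", "jump": "JUMP", "run": "RUN"}
          MODIFIERS = {"twice": 2, "thrice": 3}

          def interpret(command: str) -> str:
              """Ground-truth compositional semantics: '<verb> [<modifier>]' -> actions."""
              verb, *rest = command.split()
              reps = MODIFIERS[rest[0]] if rest else 1
              return " ".join([PRIMITIVES[verb]] * reps)

          # Training set: every verb alone, plus modifiers with all verbs except 'jump'.
          train = list(PRIMITIVES) + [f"{v} {m}" for v in ("walk", "run") for m in MODIFIERS]

          # Test set: the held-out combinations. A learner that induced the compositional
          # rule 'verb twice -> VERB VERB' succeeds; one that memorized surface patterns fails.
          test = [f"jump {m}" for m in MODIFIERS]

          for cmd in test:
              print(f"{cmd!r} -> {interpret(cmd)!r}  (combination never seen in training)")

          On splits of this kind, the studies discussed in the article find that deep networks generalize well in some grammar-dependent respects yet fall short of fully systematic, rule-like composition.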


          Most cited references (20)


          An integrated theory of language production and comprehension.

          Martin J. Pickering, Simon Garrod (2013)

          Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.

            The proactive brain: using analogies and associations to generate predictions.

            Moshe Bar (2007)
            Rather than passively 'waiting' to be activated by sensations, it is proposed that the human brain is continuously busy generating predictions that approximate the relevant future. Building on previous work, this proposal posits that rudimentary information is extracted rapidly from the input to derive analogies linking that input with representations in memory. The linked stored representations then activate the associations that are relevant in the specific context, which provides focused predictions. These predictions facilitate perception and cognition by pre-sensitizing relevant representations. Predictions regarding complex information, such as those required in social interactions, integrate multiple analogies. This cognitive neuroscience framework can help explain a variety of phenomena, ranging from recognition to first impressions, and from the brain's 'default mode' to a host of mental disorders.

              Surfing Uncertainty

              Andy Clark (2016)

                Author and article information

                Journal
                Philosophical Transactions of the Royal Society B: Biological Sciences (Phil. Trans. R. Soc. B)
                Publisher: The Royal Society
                ISSN: 0962-8436 (print); 1471-2970 (electronic)
                Published online: December 16 2019; issue date: February 03 2020
                Volume: 375
                Issue: 1791
                Article number: 20190307
                Affiliations
                [1] Catalan Institute for Advanced Studies and Research, Barcelona, Catalunya, Spain
                [2] Department of Translation and Language Sciences, Universitat Pompeu Fabra, Carrer Roc Boronat 138, Barcelona 08018, Spain
                [3] Facebook Artificial Intelligence Research, Paris, France
                Article
                DOI: 10.1098/rstb.2019.0307
                PMCID: PMC6939347
                PMID: 31840578
                ScienceOpen record: 29a61c87-f606-4b77-9cf8-7e6aca0edd3f
                © 2020

                Licence: https://royalsociety.org/-/media/journals/author/Licence-to-Publish-20062019-final.pdf

                Data sharing policy: https://royalsociety.org/journals/ethics-policies/data-sharing-mining/

