
      Showcasing the interaction of generative and emergent linguistic knowledge with case marker omission in spoken Japanese

      research-article


          Abstract

          Linguists debate the nature of grammatical knowledge. Many argue that it is innate knowledge of syntactic structure that we use when generating utterances; others argue that it emerges from linguistic experience and forms exemplars for modeling novel utterances. Still others argue that grammatical forms are processed in parallel by both types of knowledge (innate and emergent), and, crucially, that these two processing routes compete with each other.

          Our objective is to support the dual route argument with a corpus study illustrating the interaction of these two types of knowledge. We interpret two proxies as indicators of these two types of knowledge: syntactic complexity for generative knowledge and dispersion for emergent knowledge. Previous psycholinguistic work has shown that increased syntactic complexity correlates with increased judgment reaction times. Conversely, increased dispersion correlates with decreased judgment reaction times. If these two processing mechanisms compete, then we predict an interaction: specifically, we hypothesize that the more dispersed a linguistic form is, the less influence syntactic complexity has.

          We conducted a mixed-effects logistic regression analysis on case marker omissions in Japanese. Our results show that a casual speech style, a dispersed object–verb pair, and a syntactically simple noun phrase for the object correlate with increased case marker omission. More importantly, syntactic complexity and dispersion interact: as dispersion decreases, the estimated coefficient for syntactic complexity increases. These results support the claim that generative knowledge and emergent knowledge compete during language processing.
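
The interaction described above can be illustrated with a minimal, self-contained sketch: a logistic regression with a complexity × dispersion interaction term fitted to simulated data. All variable names and coefficients here are hypothetical, chosen only to mirror the direction of the reported effects, and the sketch uses fixed effects only, omitting the random effects of the published mixed-effects analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000

# Hypothetical standardized predictors: syntactic complexity of the object
# NP and dispersion of the object-verb pair.
complexity = rng.normal(size=n)
dispersion = rng.normal(size=n)

# Hypothetical true effects mirroring the reported directions: simpler NPs
# and more dispersed pairs favor omission, and the (negative) complexity
# effect weakens as dispersion grows, i.e. a positive interaction.
logit = -0.8 * complexity + 0.6 * dispersion + 0.4 * complexity * dispersion
omitted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

df = pd.DataFrame({"omitted": omitted,
                   "complexity": complexity,
                   "dispersion": dispersion})

# 'a * b' in the formula expands to main effects plus the a:b interaction.
model = smf.logit("omitted ~ complexity * dispersion", data=df).fit(disp=0)
print(model.params)
```

In this setup the fitted slope for complexity at a given dispersion level is the main effect plus the interaction coefficient times dispersion, so low dispersion yields a stronger (more negative) complexity effect, as the hypothesis predicts.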

          Related collections

          Most cited references: 72


          The faculty of language: what is it, who has it, and how did it evolve?

          M. Hauser, N. Chomsky & W. T. Fitch (2002)
          We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).

            Expectation-based syntactic comprehension.

            Roger Levy (2008)
            This paper investigates the role of resource allocation as a source of processing difficulty in human sentence comprehension. The paper proposes a simple information-theoretic characterization of processing difficulty as the work incurred by resource reallocation during parallel, incremental, probabilistic disambiguation in sentence comprehension, and demonstrates its equivalence to the theory of Hale [Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL (Vol. 2, pp. 159-166)], in which the difficulty of a word is proportional to its surprisal (its negative log-probability) in the context within which it appears. This proposal subsumes and clarifies findings that high-constraint contexts can facilitate lexical processing, and connects these findings to well-known models of parallel constraint-based comprehension. In addition, the theory leads to a number of specific predictions about the role of expectation in syntactic comprehension, including the reversal of locality-based difficulty patterns in syntactically constrained contexts, and conditions under which increased ambiguity facilitates processing. The paper examines a range of established results bearing on these predictions, and shows that they are largely consistent with the surprisal theory.

              What do we mean by prediction in language comprehension?

              G. Kuperberg & T. F. Jaeger (2016)
              We consider several key aspects of prediction in language comprehension: its computational nature, the representational level(s) at which we predict, whether we use higher level representations to predictively pre-activate lower level representations, and whether we 'commit' in any way to our predictions, beyond pre-activation. We argue that the bulk of behavioral and neural evidence suggests that we predict probabilistically and at multiple levels and grains of representation. We also argue that we can, in principle, use higher level inferences to predictively pre-activate information at multiple lower representational levels. We also suggest that the degree and level of predictive pre-activation might be a function of the expected utility of prediction, which, in turn, may depend on comprehenders' goals and their estimates of the relative reliability of their prior knowledge and the bottom-up input. Finally, we argue that all these properties of language understanding can be naturally explained and productively explored within a multi-representational hierarchical actively generative architecture whose goal is to infer the message intended by the producer, and in which predictions play a crucial role in explaining the bottom-up input.

                Author and article information

                Journal
                Glossa: a journal of general linguistics (ISSN 2397-1835)
                Ubiquity Press
                Published: 19 June 2018
                Volume 3, Issue 1, Article 72
                Affiliations
                [1] Kwansei Gakuin University, 2-1 Gakuen, Sanda, 669-1337, JP
                Author information
                http://orcid.org/0000-0003-0541-2648
                Article
                10.5334/gjgl.500
                Copyright: © 2018 The Author(s)

                This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 04 August 2017
                Accepted: 14 April 2018
                Categories
                Research

                General linguistics, Linguistics & Semiotics
                frequency, emergent grammar, dispersion, Japanese, grammatical case
