
      Processing ambiguities in attachment and pronominal reference

      research-article


          Abstract

          The nature of ambiguity resolution has important implications for models of sentence processing in general. Studies of structural ambiguities, such as modifier attachment ambiguities, have generally supported a model in which a single analysis of ambiguous material is adopted without a cost to processing. Concurrently, a separate literature has observed a processing penalty for ambiguities in pronominal reference, suggesting that potential referents compete for selection during the processing of ambiguous pronouns. We argue that the apparent distinction between the ambiguity resolution mechanisms in attachment and pronominal reference ambiguities warrants further study. We present evidence from two experiments measuring eye movements during reading, showing that the separation held in the literature between these two ambiguity types is, at least, not uniformly supported.


          Most cited references (75)


          lmerTest Package: Tests in Linear Mixed Effects Models


            Random effects structure for confirmatory hypothesis testing: Keep it maximal.

            Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. The generalization performance of LMEMs including data-driven random effects structures strongly depends upon modeling criteria and sample size, yielding reasonable results on moderately sized samples when conservative criteria are used, but with little or no power advantage over maximal models. Finally, random-intercepts-only LMEMs used on within-subjects and/or within-items data from populations where subjects and/or items vary in their sensitivity to experimental manipulations always generalize worse than separate F1 and F2 tests, and in many cases, even worse than F1 alone. Maximal LMEMs should be the 'gold standard' for confirmatory hypothesis testing in psycholinguistics and beyond.
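
            As a minimal sketch of the contrast this abstract draws, the following R code fits a maximal model alongside a random-intercepts-only model with lme4. The data set reading_data and its columns (rt, condition, subject, item) are hypothetical placeholders, not taken from the article.

            library(lme4)

            # Hypothetical within-subjects, within-items design; 'reading_data'
            # and its column names are assumptions for illustration only.

            # Maximal random effects structure justified by the design:
            # by-subject and by-item intercepts plus random slopes for condition.
            m_max <- lmer(rt ~ condition +
                            (1 + condition | subject) +
                            (1 + condition | item),
                          data = reading_data)

            # Random-intercepts-only model, which the abstract argues
            # generalizes worse for this kind of design.
            m_int <- lmer(rt ~ condition + (1 | subject) + (1 | item),
                          data = reading_data)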

              Evaluating significance in linear mixed-effects models in R

              Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type I error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type I error rates even for smaller samples.
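
              As a sketch of how these recommendations play out in practice, the lmerTest package can refit the same hypothetical model (reusing the placeholder reading_data from the sketch above) with REML and report p-values from either degrees-of-freedom approximation:

              library(lmerTest)  # masks lme4::lmer and adds denominator-df methods

              # REML = TRUE is the lmer default, in line with the recommendation above.
              m <- lmer(rt ~ condition +
                          (1 + condition | subject) +
                          (1 + condition | item),
                        data = reading_data, REML = TRUE)

              summary(m)                         # Satterthwaite p-values (lmerTest default)
              summary(m, ddf = "Kenward-Roger")  # Kenward-Roger p-values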

                Author and article information

                Journal
                Glossa: a journal of general linguistics
                ISSN: 2397-1835
                Publisher: Ubiquity Press
                Published: 28 July 2020
                Volume: 5
                Issue: 1
                Article: 77
                Affiliations
                [1] Department of Linguistics, Simon Fraser University, Burnaby, CA
                [2] Department of Language and Linguistic Science, University of York, Heslington, York, UK
                [3] Department of Linguistics, University of Massachusetts Amherst, Amherst, MA, US
                Article
                DOI: 10.5334/gjgl.852
                Copyright: © 2020 The Author(s)

                This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 09 November 2018
                Accepted: 11 February 2020
                Categories
                Research

                General linguistics, Linguistics & Semiotics
                ambiguity resolution, adjunct attachment, sentence processing, reading, pronoun reference, eye-tracking
