
      The reliability of acceptability judgments across languages

      research-article


          Abstract

          The reliability of acceptability judgments made by individual linguists has often been called into question. Recent large-scale replication studies conducted in response to this criticism have shown that the majority of published English acceptability judgments are robust. We make two observations about these replication studies. First, we raise the concern that English acceptability judgments may be more reliable than judgments in other languages. Second, we argue that it is unnecessary to replicate judgments that illustrate uncontroversial descriptive facts; rather, candidates for replication can emerge during formal or informal peer review. We present two experiments motivated by these arguments. Published Hebrew and Japanese acceptability contrasts considered questionable by the authors of the present paper were rated for acceptability by a large sample of naive participants. Approximately half of the contrasts did not replicate. We suggest that the reliability of acceptability judgments, especially in languages other than English, can be improved using a simple open review system, and that formal experiments are only necessary in controversial cases.


          Most cited references (49)


          A power primer.

          One possible reason for the continued neglect of statistical power analysis in research in the behavioral sciences is the inaccessibility of or difficulty with the standard material. A convenient, although not comprehensive, presentation of required sample sizes is provided here. Effect-size indexes and conventional values for these are given for operationally defined small, medium, and large effects. The sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests: (a) the difference between independent means, (b) the significance of a product-moment correlation, (c) the difference between independent rs, (d) the sign test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contingency tables, (g) one-way analysis of variance, and (h) the significance of a multiple or multiple partial correlation.
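The sample sizes Cohen tabulates can be approximated in a few lines. The sketch below is not Cohen's exact tables (those rest on the noncentral t distribution); it uses the standard normal approximation n = 2 · ((z₁₋α/2 + z_power) / d)² per group for the difference between two independent means, which can undershoot the exact t-based value by a point or two. The helper name `n_per_group` is ours, not Cohen's.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample test of a
    mean difference, via n = 2 * ((z_{1-alpha/2} + z_power) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Cohen's conventional small/medium/large effect sizes for a mean difference
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, d, n_per_group(d))
```

At .80 power and α = .05 this gives 63 per group for a medium effect (d = 0.5), close to the tabled exact value.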

            Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors.

            Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.
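The design calculation described above can be sketched by simulation. Assuming the estimate is normally distributed around a hypothesized true effect with known standard error, the Monte-Carlo helper below (our own illustrative `design_calc`, not Gelman and Carlin's published code) estimates power, the Type S error rate, and the exaggeration ratio among statistically significant results.

```python
import random
from statistics import NormalDist

def design_calc(true_effect, se, alpha=0.05, n_sims=50_000, seed=0):
    """Estimate power, Type S error rate, and exaggeration ratio (Type M)
    for a study with the given hypothesized true effect and standard error,
    by simulating normally distributed estimates."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Keep only the estimates that reach two-sided significance.
    significant = [est for est in (rng.gauss(true_effect, se) for _ in range(n_sims))
                   if abs(est) > z_crit * se]
    power = len(significant) / n_sims
    # Type S: significant estimates whose sign contradicts the true effect.
    type_s = sum(est * true_effect < 0 for est in significant) / len(significant)
    # Type M: average overstatement of magnitude among significant estimates.
    exaggeration = (sum(abs(est) for est in significant)
                    / len(significant)) / abs(true_effect)
    return power, type_s, exaggeration

# A noisy, small-sample scenario: true effect tiny relative to its SE.
print(design_calc(true_effect=0.1, se=1.0))
```

In this underpowered scenario the simulation reproduces the paper's warning: significant results are rare, frequently of the wrong sign, and greatly exaggerated in magnitude.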

              Ken Hale: A life in language


                Author and article information

                Journal
                Glossa: a journal of general linguistics (ISSN 2397-1835)
                Ubiquity Press
                13 September 2018
                Volume 3, Issue 1, Article 100
                Affiliations
                [1] Johns Hopkins University, 3400 N. Charles St., Baltimore, MD, US
                [2] New York University, 10 Washington Place, New York, US
                Author information
                http://orcid.org/0000-0003-0435-6912
                Article
                DOI: 10.5334/gjgl.528
                Copyright: © 2018 The Author(s)

                This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 19 September 2017
                Accepted: 28 May 2018
                Categories
                Research

                General linguistics, Linguistics & Semiotics
                Japanese, acceptability judgments, Hebrew, reliability, experimental syntax
