      A Bayesian Perspective on the Reproducibility Project: Psychology

Research article
PLoS ONE (Public Library of Science)


Abstract

We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors (a quantity that can express comparative evidence for an alternative hypothesis as well as for the null hypothesis) for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempt provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
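For simple designs, a default Bayes factor of the kind discussed in the abstract can be computed directly. The sketch below implements the standard JZS (Jeffreys–Zellner–Siow) Bayes factor for a one-sample t-test, following Rouder et al.'s (2009) integral formulation. This is an illustration only: it omits the publication-bias correction the authors applied, and the function name and example numbers are invented for the sketch.

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as gamma_fn

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """Default (JZS) Bayes factor BF10 for a one-sample t-test.

    Under H1, the standardized effect size delta has a Cauchy(0, r)
    prior; under H0, delta = 0. Values > 1 favor H1, < 1 favor H0.
    """
    nu = n - 1  # degrees of freedom

    # Marginal likelihood under H0 (up to a constant shared with H1).
    m0 = (1 + t**2 / nu) ** (-(nu + 1) / 2)

    # The Cauchy prior on delta is a scale mixture of normals with
    # mixing weight g ~ inverse-gamma(1/2, r^2/2); integrate over g.
    def integrand(g):
        prior = ((r**2 / 2) ** 0.5 / gamma_fn(0.5)) \
            * g ** (-1.5) * np.exp(-(r**2) / (2 * g))
        like = (1 + n * g) ** (-0.5) \
            * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        return like * prior

    m1, _ = integrate.quad(integrand, 0, np.inf)
    return m1 / m0

# A study with t = 5.0 at n = 50 yields strong evidence for an effect
# (BF10 well above 10); t = 0.5 at the same n favors the null (BF10 < 1).
print(jzs_bf10(5.0, 50))
print(jzs_bf10(0.5, 50))
```

The "strong evidence" threshold used in the abstract (Bayes factor > 10) can be read directly off this quantity, which is one reason a small replication can fail to reach strong evidence even when the direction of the effect agrees with the original.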

                Author and article information

Contributors
Editor: Universiteit Gent, Belgium

Journal
PLoS ONE, Public Library of Science (San Francisco, CA, USA)
ISSN: 1932-6203
Published: 26 February 2016
Volume 11, Issue 2: e0149794

Affiliations
[1] Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
[2] Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States of America
[3] Department of Statistics, University of California, Irvine, Irvine, CA, United States of America

Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                Conceived and designed the experiments: AE JV. Performed the experiments: AE JV. Analyzed the data: AE JV. Contributed reagents/materials/analysis tools: AE JV. Wrote the paper: AE JV.

Article
Manuscript number: PONE-D-15-54563
DOI: 10.1371/journal.pone.0149794
PMCID: PMC4769355
PMID: 26919473
                © 2016 Etz, Vandekerckhove

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

History
Received: 16 December 2015
Accepted: 4 February 2016
                Page count
                Figures: 2, Tables: 3, Pages: 12
                Funding
This work was partly funded by National Science Foundation grants #1230118 and #1534472 from the Methods, Measurements, and Statistics panel (www.nsf.gov) and John Templeton Foundation grant #48192 (www.templeton.org). This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Categories
Research Article
Research and Analysis Methods > Research Design > Replication Studies
Biology and Life Sciences > Psychology
Social Sciences > Psychology
Research and Analysis Methods > Research Assessment > Reproducibility
People and Places > Population Groupings > Professions > Analysts
Physical Sciences > Mathematics > Probability Theory > Statistical Distributions
Physical Sciences > Mathematics > Statistics (Mathematics) > Statistical Data
Science Policy > Open Science
Research and Analysis Methods > Research Design
Data availability
All relevant data are within the paper and its Supporting Information files.

