
      Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking

Review article


          Abstract

The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting psychological research. The list can be used in research methods education, and as a checklist to assess the quality of preregistrations and to determine the potential for bias due to (arbitrary) choices in unregistered studies.
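To make the inflation mechanism concrete, here is a minimal simulation sketch. It is our illustration rather than code from the article; the three interchangeable outcome measures and all parameter values are assumptions. Under a true null effect, reporting whichever of several arbitrary analysis options yields the smallest p-value pushes the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

# Illustrative assumptions (not from the article): two groups of 20,
# no true effect, and three interchangeable outcome measures that a
# researcher might choose among after seeing the data.
rng = np.random.default_rng(0)
n_studies, n_per_group, n_outcomes = 10_000, 20, 3

false_positives = 0
for _ in range(n_studies):
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_outcomes)
    ]
    if min(p_values) < 0.05:  # report whichever outcome "worked"
        false_positives += 1

# With three independent null outcomes this is about 1 - 0.95**3 = 0.14,
# nearly triple the nominal 5% false-positive rate.
print(f"False-positive rate: {false_positives / n_studies:.3f}")
```

Real dependent variables are usually correlated, so the inflation is typically smaller than with fully independent measures, but the direction of the bias is the same.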


Most cited references (23)


          An Agenda for Purely Confirmatory Research.

The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the core of psychology's academic enterprise: almost without exception, psychologists do not commit themselves to a method of data analysis before they see the actual data. It then becomes tempting to fine-tune the analysis to the data in order to obtain a desired result, a procedure that invalidates the interpretation of the common statistical tests. The extent of the fine-tuning varies widely across experiments and experimenters but is almost impossible for reviewers and readers to gauge. To remedy the situation, we propose that researchers preregister their studies and indicate in advance the analyses they intend to conduct. Only these analyses deserve the label "confirmatory," and only for these analyses are the common statistical tests valid. Other analyses can be carried out but these should be labeled "exploratory." We illustrate our proposal with a confirmatory replication attempt of a study on extrasensory perception.

            The Rules of the Game Called Psychological Science.

If science were a game, a dominant rule would probably be to collect results that are statistically significant. Several reviews of the psychological literature have shown that around 96% of papers involving the use of null hypothesis significance testing report significant outcomes for their main results, but that the typical studies are insufficiently powerful for such a track record. We explain this paradox by showing that the use of several small underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than does the use of one larger (more powerful) sample. Publication bias and the most efficient strategy lead to inflated effects and high rates of false positives, especially when researchers also resort to questionable research practices, such as adding participants after intermediate testing. We provide simulations that highlight the severity of such biases in meta-analyses. We consider 13 meta-analyses covering 281 primary studies in various fields of psychology and find indications of biases and/or an excess of significant results in seven. These results highlight the need for sufficiently powerful replications and changes in journal policies.
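The effect of "adding participants after intermediate testing" is easy to reproduce in simulation; the following is a hedged sketch under assumed parameters (the batch size and number of interim looks are illustrative), not the simulation code used in the paper.

```python
import numpy as np
from scipy import stats

# Illustrative assumptions (not the paper's code): no true effect,
# groups grown in batches of 10, and a t-test "peek" after each batch.
rng = np.random.default_rng(1)
n_studies, batch, max_peeks = 10_000, 10, 4

false_positives = 0
for _ in range(n_studies):
    a = np.empty(0)  # treatment group (null: same distribution as control)
    b = np.empty(0)  # control group
    for _ in range(max_peeks):
        # Collect one more batch per group, then peek at the p-value.
        a = np.concatenate([a, rng.normal(size=batch)])
        b = np.concatenate([b, rng.normal(size=batch)])
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1  # stop and "publish" at first significance
            break

# Peeking at n = 10, 20, 30, 40 per group yields roughly 0.11-0.13,
# well above the nominal 0.05.
print(f"False-positive rate: {false_positives / n_studies:.3f}")
```

Each interim look is an extra chance for a null effect to cross p < .05, so the more often a researcher peeks, the further the realized error rate drifts from the nominal level.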

              The Statistical Crisis in Science


                Author and article information

Journal: Frontiers in Psychology (Front. Psychol.)
Publisher: Frontiers Media S.A.
ISSN: 1664-1078
Published: 25 November 2016
Volume: 7
Article: 1832
                Affiliations
Methodology and Statistics, Tilburg University, Tilburg, Netherlands
                Author notes

                Edited by: Fiona Fidler, University of Melbourne, Australia

                Reviewed by: Rink Hoekstra, University of Groningen, Netherlands; Geoff Cumming, La Trobe University, Australia

*Correspondence: Jelte M. Wicherts, j.m.wicherts@uvt.nl

                This article was submitted to Quantitative Psychology and Measurement, a section of the journal Frontiers in Psychology

Article
DOI: 10.3389/fpsyg.2016.01832
PMCID: PMC5122713
PMID: 27933012
                Copyright © 2016 Wicherts, Veldkamp, Augusteijn, Bakker, van Aert and van Assen.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 30 July 2016
Accepted: 04 November 2016
                Page count
Figures: 0, Tables: 1, Equations: 0, References: 57, Pages: 12
                Funding
Funded by: Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO; doi: 10.13039/501100003246)
                Award ID: 406-13-050
                Award ID: 406-15-198
                Award ID: 452-11-004
                Categories
                Psychology
                Review

                Clinical Psychology & Psychiatry
Keywords: questionable research practices, experimental design, significance testing, p-hacking, bias, significance chasing, research methods education
