
      When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment

      review-article


          Abstract

          Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak, we should not rely on all-or-nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST, they should justify its use and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and, optimally, their raw data published. The current statistics-lite educational approach for students, which has sustained the widespread, spurious use of NHST, should be phased out.
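          The abstract recommends that researchers who do use NHST publish pre-study power calculations. As an illustration only, a minimal sketch of such a calculation for a two-sample design, using the normal approximation to the t-test (the function names, the effect size, and the 80% power target below are our assumptions, not values or methods from the article):

```python
# Hedged sketch of a pre-study power calculation (normal approximation to
# a two-sided, two-sample t-test). Requires Python 3.8+ (statistics.NormalDist).
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d with n_per_group subjects per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)       # e.g. ~1.96 at alpha = .05
    ncp = d * (n_per_group / 2) ** 0.5      # noncentrality under the alternative
    # P(reject H0 | true effect d); the second term covers the opposite tail.
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

def n_for_power(d, target=0.8, alpha=0.05):
    """Smallest per-group n whose approximate power reaches the target."""
    n = 2
    while power_two_sample(n_per_group=n, d=d, alpha=alpha) < target:
        n += 1
    return n
```

For a "medium" effect (d = 0.5), this approximation asks for roughly 60-65 subjects per group to reach 80% power, which is why underpowered studies with small samples can mostly detect only inflated effects.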

          Related collections

          Most cited references (114)

          • The ASA's Statement on p-Values: Context, Process, and Purpose
          • Chi-Square Tests for Goodness of Fit and Contingency Tables
          • Reproducible research in computational science. Roger Peng (2011). Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.

                Author and article information

                Journal
                Frontiers in Human Neuroscience (Front. Hum. Neurosci.)
                Frontiers Media S.A.
                ISSN: 1662-5161
                03 August 2017; Volume 11, Article 390
                Affiliations
                1. Department of Psychology, University of Cambridge, Cambridge, United Kingdom
                2. Meta-Research Innovation Center at Stanford; Department of Medicine, Department of Health Research and Policy, and Department of Statistics, Stanford University, Stanford, CA, United States
                Author notes

                Edited by: Satrajit S. Ghosh, Massachusetts Institute of Technology, United States

                Reviewed by: Bertrand Thirion, Institut National de Recherche en Informatique et en Automatique (INRIA), France; Cyril R. Pernet, University of Edinburgh, United Kingdom

                *Correspondence: Denes Szucs ds377@cam.ac.uk
                Article
                DOI: 10.3389/fnhum.2017.00390
                PMCID: PMC5540883
                PMID: 28824397
                Copyright © 2017 Szucs and Ioannidis.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 03 February 2017
                Accepted: 13 July 2017
                Page count
                Figures: 4, Tables: 3, Equations: 3, References: 163, Pages: 21, Words: 18840
                Funding
                Funded by: James S. McDonnell Foundation (DOI: 10.13039/100000913)
                Award ID: 220020370
                Categories
                Neuroscience
                Review

                replication crisis, false positive findings, research methodology, null hypothesis significance testing, Bayesian methods
