
      Power Contours: Optimising Sample Size and Precision in Experimental Psychology and Human Neuroscience

      research-article


          Abstract

When designing experimental studies with human participants, experimenters must decide how many trials each participant will complete, as well as how many participants to test. Most discussion of statistical power (the ability of a study design to detect an effect) has focused on sample size, and assumed sufficient trials. Here we explore the influence of both factors on statistical power, represented as a two-dimensional plot on which iso-power contours can be visualized. We demonstrate the conditions under which the number of trials is particularly important, that is, when the within-participant variance is large relative to the between-participants variance. We then derive power contour plots using existing data sets for eight experimental paradigms and methodologies (including reaction times, sensory thresholds, fMRI, MEG, and EEG), and provide example code to calculate estimates of the within- and between-participants variance for each method. In all cases, the within-participant variance was larger than the between-participants variance, meaning that the number of trials has a meaningful influence on statistical power in commonly used paradigms. An online tool is provided (https://shiny.york.ac.uk/powercontours/) for generating power contours, from which the optimal combination of trials and participants can be calculated when designing future studies.
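The variance logic behind the contours can be made concrete. Because a participant's mean over k trials has sampling variance sigma_w^2 / k, the effective standard deviation of participant means is sqrt(sigma_b^2 + sigma_w^2 / k), so power for a one-sample test depends jointly on the number of participants N and the number of trials k. The sketch below is illustrative only (it is not the authors' code, and the function name and parameter values are assumptions): it evaluates analytic power over a grid of (N, k) combinations, whose level sets are the iso-power contours the article describes.

# Minimal sketch: power of a two-sided one-sample t-test on participant
# means, where each mean is averaged over n_trials trials. Assumed
# illustrative values: true effect mu, between-participants SD sigma_b,
# within-participant SD sigma_w.
import numpy as np
from scipy import stats

def power_one_sample(mu, sigma_b, sigma_w, n_participants, n_trials, alpha=0.05):
    """Approximate power of a one-sample t-test on participant means."""
    sd_eff = np.sqrt(sigma_b**2 + sigma_w**2 / n_trials)  # SD of participant means
    ncp = mu / sd_eff * np.sqrt(n_participants)           # noncentrality parameter
    df = n_participants - 1
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Power = P(|T| > t_crit) under the noncentral t distribution
    return 1 - stats.nct.cdf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Power surface over a grid of sample sizes and trial counts;
# iso-power contours are the level sets of this surface.
Ns = np.arange(5, 101)
ks = np.arange(5, 501, 5)
power_grid = np.array([[power_one_sample(0.5, 1.0, 4.0, n, k) for k in ks] for n in Ns])

When sigma_w is large relative to sigma_b, as in all eight data sets analyzed here, moving along the trials axis changes power almost as much as moving along the participants axis, which is exactly the regime in which the number of trials matters.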

          Translational Abstract

          Many studies in neuroscience and experimental psychology involve testing human participants multiple times in a given condition, and averaging across these repetitions to get a more accurate estimate of the true response. Yet most researchers do not have a principled way to decide how many trials they should conduct, and decisions are often made using arbitrary criteria. This is an important issue because the number of trials has a direct effect on the statistical power of a study—the likelihood that it is able to detect a real effect. In the context of the recent “replication crisis” in psychology, researchers need tools to optimize the quality of their research designs to increase power. Here we propose a way to visualize the combined effect of sample size (the number of participants tested) and number of trials per participant on statistical power, using a two-dimensional contour plot. We show by subsampling eight existing data sets from a range of widely used methods (including reaction times, EEG, MEG, and fMRI) that these contours are curved, and permit estimation of an optimal number of participants and trials at the study design stage. All of the analysis scripts, as well as an online tool, are provided to permit others to tailor our methods to their own experimental paradigms. We anticipate that this approach will facilitate the design of experimental studies that are more efficient, and more likely to report real effects.
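For readers who want to see the subsampling idea in miniature, the following sketch is a simplified illustration under assumed data, not the authors' published scripts: it estimates power at a single (participants, trials) cell by repeatedly resampling participants and trials from a trial-level data set and recording how often a one-sample t-test on the subsample means is significant. Repeating this over a grid of cells yields an empirical contour plot of the kind described above.

# Illustrative subsampling procedure (hypothetical data and names):
# estimate power at one (N, k) cell by resampling participants and
# trials from a full trial-level data set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def subsampled_power(data, n_participants, n_trials, n_boot=1000, alpha=0.05):
    """data: dict mapping participant id -> 1-D array of trial-level scores."""
    ids = list(data)
    hits = 0
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=n_participants, replace=True)
        # Mean of a random subset of trials for each sampled participant
        means = [rng.choice(data[p], size=n_trials, replace=True).mean() for p in chosen]
        res = stats.ttest_1samp(means, popmean=0.0)
        hits += res.pvalue < alpha
    return hits / n_boot

# Hypothetical data set: 50 participants, 200 trials each; each participant's
# true mean is drawn around 0.5 (between-participants SD 1, trial SD 4).
demo = {i: rng.normal(0.5 + rng.normal(0, 1.0), 4.0, size=200) for i in range(50)}
print(subsampled_power(demo, n_participants=20, n_trials=40))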


                Author and article information

Journal
Psychological Methods (Psychol Methods)
Publisher: American Psychological Association
ISSN: 1082-989X (print); 1939-1463 (online)
Published online 16 July 2020; issue June 2021
Volume 26, Issue 3, pp. 295-314
Affiliations
[1] Department of Psychology and York Biomedical Research Institute, University of York
[2] School of Psychology, University of Southampton
[3] Department of Psychology, University of York
[4] School of Psychology, University of Lincoln
[5] York Neuroimaging Centre, University of York
[6] Department of Psychology, University of York
                Author notes
                We are grateful to everyone involved in collection of the data sets reanalyzed here, and particularly to those who made their data publicly available. This work was supported in part by a Wellcome Trust (ref: 105624) grant, through the Centre for Chronic Diseases and Disorders (C2D2) at the University of York, awarded to Daniel H. Baker. Data collection and sharing for part of this project was provided by the Cambridge Centre for Ageing and Neuroscience (CamCAN). CamCAN funding was provided by the U.K. Biotechnology and Biological Sciences Research Council (Grant BB/H008217/1), together with support from the U.K. Medical Research Council and University of Cambridge, United Kingdom. We also thank Tom Hartley for helpful comments and for suggesting inclusion of the Iowa Gambling Task data set, and all those who offered constructive suggestions based on the preprint.
Open Data

                The data are available at http://dx.doi.org/10.17605/OSF.IO/EBHNK.

Open Materials

                The analysis scripts are available at http://dx.doi.org/10.17605/OSF.IO/EBHNK.

Correspondence concerning this article should be addressed to Daniel H. Baker, Department of Psychology, University of York, Heslington, York YO10 5DD, United Kingdom. Email: daniel.baker@york.ac.uk
                Author information
                http://orcid.org/0000-0002-0161-443X
                http://orcid.org/0000-0002-2011-5150
                http://orcid.org/0000-0002-8043-4866
                http://orcid.org/0000-0003-4584-5501
                http://orcid.org/0000-0002-4115-4466
                http://orcid.org/0000-0003-0674-7829
                http://orcid.org/0000-0001-8255-9120
Article
Article ID: met_26_3_295 (2020-52357-001)
DOI: 10.1037/met0000337
PMCID: PMC8329985
PMID: 32673043
                © 2020 The Author(s)

This article has been published under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Copyright for this article is retained by the author(s). Author(s) grant(s) the American Psychological Association the exclusive right to publish the article and identify itself as the original publisher.

History
Received: 5 March 2019
Revised: 5 February 2020
Accepted: 26 May 2020
                Categories
                Articles

Keywords: statistical power, sample size, neuroscience
