
      Improving standards in brain-behavior correlation analyses



          Associations between two variables, for instance between brain and behavioral measurements, are often studied using correlations, and in particular Pearson correlation. However, Pearson correlation is not robust: outliers can introduce false correlations or mask existing ones. These problems are exacerbated in brain imaging by a widespread lack of control for multiple comparisons, and several issues with data interpretations. We illustrate these important problems associated with brain-behavior correlations, drawing examples from published articles. We make several propositions to alleviate these problems.
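The non-robustness of Pearson correlation described above is easy to demonstrate in a short simulation. This is an illustrative sketch, not taken from the article; the data and variable names are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
brain = rng.normal(size=n)     # hypothetical brain measurements
behavior = rng.normal(size=n)  # hypothetical behavioral scores, independent of brain

r_clean, _ = stats.pearsonr(brain, behavior)

# Add a single extreme participant far from the cloud of points.
brain_out = np.append(brain, 10.0)
behavior_out = np.append(behavior, 10.0)

r_out, _ = stats.pearsonr(brain_out, behavior_out)
rho_out, _ = stats.spearmanr(brain_out, behavior_out)

print(f"Pearson, no outlier:    {r_clean:.2f}")
print(f"Pearson, with outlier:  {r_out:.2f}")    # strongly inflated by one point
print(f"Spearman, with outlier: {rho_out:.2f}")  # ranks limit the outlier's influence
```

Robust alternatives discussed in the article, such as the skipped correlation, formalize this idea by detecting and down-weighting outliers before estimating the association.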

Most cited references (6)


          Toward evidence-based medical statistics. 1: The P value fallacy.

An important problem exists in the interpretation of modern medical research data: Biological understanding and previous research play little formal role in the interpretation of quantitative results. This phenomenon is manifest in the discussion sections of research articles and ultimately can affect the reliability of conclusions. The standard statistical approach has created this situation by promoting the illusion that conclusions can be produced with certain "error rates," without consideration of information from outside the experiment. This statistical approach, the key components of which are P values and hypothesis tests, is widely perceived as a mathematically coherent approach to inference. There is little appreciation in the medical community that the methodology is an amalgam of incompatible elements, whose utility for scientific inference has been the subject of intense debate among statisticians for almost 70 years. This article introduces some of the key elements of that debate and traces the appeal and adverse impact of this methodology to the P value fallacy, the mistaken idea that a single number can capture both the long-run outcomes of an experiment and the evidential meaning of a single result. This argument is made as a prelude to the suggestion that another measure of evidence should be used: the Bayes factor, which properly separates issues of long-run behavior from evidential strength and allows the integration of background knowledge with statistical findings.
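One facet of the P value fallacy can be made concrete with a small calculation: the same p = 0.05 boundary corresponds to very different effect sizes, and hence very different evidential meanings, at different sample sizes. A minimal sketch, not from the paper, assuming a one-sample t-test:

```python
import numpy as np
from scipy import stats

# Effect size (Cohen's d) sitting exactly at the two-sided p = 0.05 threshold
# of a one-sample t-test, for increasing sample sizes.
for n in (10, 100, 10000):
    t_crit = stats.t.ppf(0.975, df=n - 1)  # critical t for two-sided alpha = 0.05
    d = t_crit / np.sqrt(n)                # smallest effect size reaching p = 0.05
    print(f"n = {n:>5}: d at the p = 0.05 boundary = {d:.3f}")
```

The single "significant" label thus covers effects roughly two orders of magnitude apart, which is one reason the long-run error rate of a procedure cannot double as a measure of the evidence in a particular result.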

            What is the probability of replicating a statistically significant effect?

            If an initial experiment produces a statistically significant effect, what is the probability that this effect will be replicated in a follow-up experiment? I argue that this seemingly fundamental question can be interpreted in two very different ways and that its answer is, in practice, virtually unknowable under either interpretation. Although the data from an initial experiment can be used to estimate one type of replication probability, this estimate will rarely be precise enough to be of any use. The other type of replication probability is also unknowable, because it depends on unknown aspects of the research context. Thus, although it would be nice to know the probability of replicating a significant effect, researchers must accept the fact that they generally cannot determine this information, whichever type of replication probability they seek.
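The dependence of replication probability on the unknown true effect size can be illustrated with a simulation. This is an illustrative sketch, not from the paper; the one-sample design, sample size, and candidate effect sizes are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def replication_rate(true_d, n=20, n_sim=2000, alpha=0.05):
    """Among initial experiments reaching p < alpha, the fraction whose
    identical follow-up experiment also reaches p < alpha."""
    first = second = 0
    for _ in range(n_sim):
        x1 = rng.normal(true_d, 1.0, n)
        if stats.ttest_1samp(x1, 0.0).pvalue < alpha:
            first += 1
            x2 = rng.normal(true_d, 1.0, n)  # exact, independent replication
            if stats.ttest_1samp(x2, 0.0).pvalue < alpha:
                second += 1
    return second / first

rates = {d: replication_rate(d) for d in (0.2, 0.5, 0.8)}
print(rates)
```

Because `true_d` is precisely what a real experimenter does not know, these sharply different replication rates cannot be recovered from the initial data alone, which is the article's central point.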

              Quantifying the Time Course of Visual Object Processing Using ERPs: It's Time to Up the Game

Hundreds of studies have investigated the early ERPs to faces and objects using scalp and intracranial recordings. The vast majority of these studies have used uncontrolled stimuli, inappropriate designs, peak measurements, poor figures, and poor inferential and descriptive group statistics. These problems are compounded by a tendency to interpret any effect with p < 0.05 as evidence that condition A > condition B. Here we describe the main limitations of face and object ERP research and suggest alternative strategies to move forward. The problems plague intracranial and surface ERP studies, but also studies using more advanced techniques – e.g., source space analyses and measurements of network dynamics, as well as many behavioral, fMRI, TMS, and LFP studies. In essence, it is time to stop amassing binary results and start using single-trial analyses to build models of visual perception.

                Author and article information

                Front Hum Neurosci
                Frontiers in Human Neuroscience
                Frontiers Media S.A.
03 May 2012
Volume 6, Article 119
[1] Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
[2] Brain Research Imaging Center, Division of Clinical Neurosciences, University of Edinburgh, Western General Hospital, Edinburgh, UK
                Author notes

                Edited by: Russell A. Poldrack, University of Texas, USA

                Reviewed by: Martin M. Monti, University of California, Los Angeles, USA; Tal Yarkoni, University of Colorado at Boulder, USA

*Correspondence: Guillaume A. Rousselet, Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK. e-mail: guillaume.rousselet@glasgow.ac.uk
                Copyright © 2012 Rousselet and Pernet.

                This is an open-access article distributed under the terms of the Creative Commons Attribution Non Commercial License, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited.

Received: 09 February 2012
Accepted: 16 April 2012
                Page count
                Figures: 7, Tables: 0, Equations: 0, References: 26, Pages: 11, Words: 4204
                Perspective Article

Keywords: spearman correlation, multivariate statistics, multiple comparisons, skipped correlation, outliers, pearson correlation, robust statistics, confidence intervals

