
      Establishment of Best Practices for Evidence for Prediction: A Review

      JAMA Psychiatry
      American Medical Association (AMA)


          Abstract

          Great interest exists in identifying methods to predict neuropsychiatric disease states and treatment outcomes from high-dimensional data, including neuroimaging and genomics data. The goal of this review is to highlight several potential problems that can arise in studies that aim to establish prediction. A number of neuroimaging studies have claimed to establish prediction while establishing only correlation, which is an inappropriate use of the statistical meaning of prediction. Statistical associations do not necessarily imply the ability to make predictions in a generalized manner; establishing evidence for prediction thus requires testing of the model on data separate from those used to estimate the model’s parameters. This article discusses various measures of predictive performance and the limitations of some commonly used measures, with a focus on the importance of using multiple measures when assessing performance. For classification, the area under the receiver operating characteristic curve is an appropriate measure; for regression analysis, correlation should be avoided, and median absolute error is preferred. To ensure accurate estimates of predictive validity, the recommended best practices for predictive modeling include the following: (1) in-sample model fit indices should not be reported as evidence for predictive accuracy, (2) the cross-validation procedure should encompass all operations applied to the data, (3) prediction analyses should not be performed with samples smaller than several hundred observations, (4) multiple measures of prediction accuracy should be examined and reported, (5) the coefficient of determination should be computed using the sums of squares formulation and not the correlation coefficient, and (6) k-fold cross-validation rather than leave-one-out cross-validation should be used.
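          Recommendation (5) above, that the coefficient of determination be computed from sums of squares rather than from the correlation coefficient, can be made concrete with a minimal sketch in plain NumPy (the function names and simulated data are my own, not from the article). Squared correlation is insensitive to systematic bias and scaling errors, so a badly calibrated predictor can score near 1, while the sums-of-squares R² correctly goes negative when predictions are worse than simply predicting the mean. The median absolute error recommended for regression is shown alongside.

          ```python
          import numpy as np

          def r2_sums_of_squares(y_true, y_pred):
              # Coefficient of determination: 1 - SS_res / SS_tot.
              # Can be negative out of sample when predictions are worse
              # than the mean of y_true.
              ss_res = np.sum((y_true - y_pred) ** 2)
              ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
              return 1.0 - ss_res / ss_tot

          def r2_squared_correlation(y_true, y_pred):
              # Squared Pearson correlation: blind to additive bias and
              # scaling errors, and never negative.
              return np.corrcoef(y_true, y_pred)[0, 1] ** 2

          def median_absolute_error(y_true, y_pred):
              # Robust error measure recommended for regression analyses.
              return np.median(np.abs(y_true - y_pred))

          rng = np.random.default_rng(0)
          y_true = rng.normal(size=200)
          y_pred = 2.0 * y_true + 1.0  # perfectly correlated, badly calibrated

          print(r2_squared_correlation(y_true, y_pred))  # ~1.0, misleadingly good
          print(r2_sums_of_squares(y_true, y_pred))      # negative: worse than the mean
          print(median_absolute_error(y_true, y_pred))   # large relative to spread of y_true
          ```

          Reporting all three measures, rather than correlation alone, exposes the miscalibration that a single metric would hide, which is the point of recommendation (4) as well.
          
          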

          Most cited references (10)


          Predicting Age Using Neuroimaging: Innovative Brain Ageing Biomarkers.

          The brain changes as we age and these changes are associated with functional deterioration and neurodegenerative disease. It is vital that we better understand individual differences in the brain ageing process; hence, techniques for making individualised predictions of brain ageing have been developed. We present evidence supporting the use of neuroimaging-based 'brain age' as a biomarker of an individual's brain health. Increasingly, research is showing how brain disease or poor physical health negatively impacts brain age. Importantly, recent evidence shows that having an 'older'-appearing brain relates to advanced physiological and cognitive ageing and the risk of mortality. We discuss controversies surrounding brain age and highlight emerging trends such as the use of multimodality neuroimaging and the employment of 'deep learning' methods.

            Building a Science of Individual Differences from fMRI.

            To date, fMRI research has been concerned primarily with evincing generic principles of brain function through averaging data from multiple subjects. Given rapid developments in both hardware and analysis tools, the field is now poised to study fMRI-derived measures in individual subjects, and to relate these to psychological traits or genetic variations. We discuss issues of validity, reliability and statistical assessment that arise when the focus shifts to individual subjects and that are applicable also to other imaging modalities. We emphasize that individual assessment of neural function with fMRI presents specific challenges and necessitates careful consideration of anatomical and vascular between-subject variability as well as sources of within-subject variability.

              On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments

              How likely are published findings in the functional neuroimaging literature to be false? According to a recent mathematical model, the potential for false positives increases with the flexibility of analysis methods. Functional MRI (fMRI) experiments can be analyzed using a large number of commonly used tools, with little consensus on how, when, or whether to apply each one. This situation may lead to substantial variability in analysis outcomes. Thus, the present study sought to estimate the flexibility of neuroimaging analysis by submitting a single event-related fMRI experiment to a large number of unique analysis procedures. Ten analysis steps for which multiple strategies appear in the literature were identified, and two to four strategies were enumerated for each step. Considering all possible combinations of these strategies yielded 6,912 unique analysis pipelines. Activation maps from each pipeline were corrected for multiple comparisons using five thresholding approaches, yielding 34,560 significance maps. While some outcomes were relatively consistent across pipelines, others showed substantial methods-related variability in activation strength, location, and extent. Some analysis decisions contributed to this variability more than others, and different decisions were associated with distinct patterns of variability across the brain. Qualitative outcomes also varied with analysis parameters: many contrasts yielded significant activation under some pipelines but not others. Altogether, these results reveal considerable flexibility in the analysis of fMRI experiments. This observation, when combined with mathematical simulations linking analytic flexibility with elevated false positive rates, suggests that false positive results may be more prevalent than expected in the literature. This risk of inflated false positive rates may be mitigated by constraining the flexibility of analytic choices or by abstaining from selective analysis reporting.
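                The combinatorics behind the pipeline counts in this abstract can be checked directly. The per-step strategy counts below are an assumption (the paper states only that each of the ten steps had two to four strategies); they are one assignment consistent with the reported totals of 6,912 pipelines and 34,560 significance maps.

                ```python
                from math import prod

                # Hypothetical strategy counts for the ten analysis steps:
                # each between 2 and 4, chosen so the product matches the
                # reported 6,912 unique pipelines. The true per-step counts
                # are not given in this abstract.
                strategies_per_step = [2, 2, 2, 2, 2, 2, 3, 3, 3, 4]

                n_pipelines = prod(strategies_per_step)
                n_thresholding = 5  # five multiple-comparison thresholding approaches
                n_maps = n_pipelines * n_thresholding

                print(n_pipelines)  # 6912 unique analysis pipelines
                print(n_maps)       # 34560 significance maps
                ```

                The multiplicative growth is the key point: even modest per-step flexibility compounds into thousands of defensible analyses, which is what drives the false-positive concern.
                
                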

                Author and article information

                Journal
                JAMA Psychiatry
                American Medical Association (AMA)
                ISSN: 2168-622X
                November 27, 2019
                Affiliations
                [1 ]Interdepartmental Neurosciences Program, Department of Psychology, Stanford University, Stanford, California
                [2 ]Inria Saclay Ile-de-France, Palaiseau, France
                Article
                DOI: 10.1001/jamapsychiatry.2019.3671
                PMC: PMC7250718
                PMID: 31774490
                © 2019
