
      Large-scale analysis of test–retest reliabilities of self-regulation measures


          Abstract

          The ability to regulate behavior in service of long-term goals is a widely studied psychological construct known as self-regulation. This wide interest is in part due to the putative relations between self-regulation and a range of real-world behaviors. Self-regulation is generally viewed as a trait, and individual differences are quantified using a diverse set of measures, including self-report surveys and behavioral tasks. Accurate characterization of individual differences requires measurement reliability, a property frequently characterized in self-report surveys, but rarely assessed in behavioral tasks. We remedy this gap by (i) providing a comprehensive literature review on an extensive set of self-regulation measures and (ii) empirically evaluating test–retest reliability of this battery in a new sample. We find that dependent variables (DVs) from self-report surveys of self-regulation have high test–retest reliability, while DVs derived from behavioral tasks do not. This holds both in the literature and in our sample, although the test–retest reliability estimates in the literature are highly variable. We confirm that this is due to differences in between-subject variability. We also compare different types of task DVs (e.g., model parameters vs. raw response times) in their suitability as individual difference DVs, finding that certain model parameters are as stable as raw DVs. Our results provide greater psychometric footing for the study of self-regulation and provide guidance for future studies of individual differences in this domain.
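The abstract's claim that reliability differences trace back to between-subject variability can be summarized with the standard intraclass correlation (ICC) decomposition; the notation below is a generic sketch, not taken from the paper itself:

```latex
\mathrm{ICC} \;=\; \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{between}} + \sigma^2_{\text{error}}}
```

Holding measurement error $\sigma^2_{\text{error}}$ fixed, shrinking the between-subject variance $\sigma^2_{\text{between}}$ drives the ICC toward zero, which is the mechanism the authors confirm.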

          Related collections

          Most cited references (30)


          Self-control in decision-making involves modulation of the vmPFC valuation system.

          Every day, individuals make dozens of choices between an alternative with higher overall value and a more tempting but ultimately inferior option. Optimal decision-making requires self-control. We propose two hypotheses about the neurobiology of self-control: (i) Goal-directed decisions have their basis in a common value signal encoded in ventromedial prefrontal cortex (vmPFC), and (ii) exercising self-control involves the modulation of this value signal by dorsolateral prefrontal cortex (DLPFC). We used functional magnetic resonance imaging to monitor brain activity while dieters engaged in real decisions about food consumption. Activity in vmPFC was correlated with goal values regardless of the amount of self-control. It incorporated both taste and health in self-controllers but only taste in non-self-controllers. Activity in DLPFC increased when subjects exercised self-control and correlated with activity in vmPFC.

            Delay of gratification in children


              The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences

              Individual differences in cognitive paradigms are increasingly employed to relate cognition to brain structure, chemistry, and function. However, such efforts are often unfruitful, even with the most well established tasks. Here we offer an explanation for failures in the application of robust cognitive paradigms to the study of individual differences. Experimental effects become well established – and thus those tasks become popular – when between-subject variability is low. However, low between-subject variability causes low reliability for individual differences, destroying replicable correlations with other factors and potentially undermining published conclusions drawn from correlational relationships. Though these statistical issues have a long history in psychology, they are widely overlooked in cognitive psychology and neuroscience today. In three studies, we assessed test-retest reliability of seven classic tasks: Eriksen Flanker, Stroop, stop-signal, go/no-go, Posner cueing, Navon, and Spatial-Numerical Association of Response Code (SNARC). Reliabilities ranged from 0 to .82, being surprisingly low for most tasks given their common use. As we predicted, this emerged from low variance between individuals rather than high measurement variance. In other words, the very reason such tasks produce robust and easily replicable experimental effects – low between-participant variability – makes their use as correlational tools problematic. We demonstrate that taking such reliability estimates into account has the potential to qualitatively change theoretical conclusions. The implications of our findings are that well-established approaches in experimental psychology and neuropsychology may not directly translate to the study of individual differences in brain structure, chemistry, and function, and alternative metrics may be required. 
Electronic supplementary material The online version of this article (doi:10.3758/s13428-017-0935-1) contains supplementary material, which is available to authorized users.

                Author and article information

                Journal: Proceedings of the National Academy of Sciences (Proc Natl Acad Sci USA)
                ISSN: 0027-8424 (print); 1091-6490 (electronic)
                Published: March 6, 2019
                Article: 201818430
                DOI: 10.1073/pnas.1818430116
                PMCID: 6431228
                PMID: 30842284
                © 2019

                Free to read
                License: http://www.pnas.org/site/misc/userlicense.xhtml
