Is Open Access

Single- and Dual-Process Models of Biased Contingency Detection



      Abstract. Decades of research in causal and contingency learning show that people’s estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artifacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive.
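The "degree of contingency" the abstract refers to is standardly indexed in this literature by ΔP, the difference between the probability of the outcome given the cue and given its absence. The sketch below is illustrative only (the cell counts are hypothetical, not taken from the article); it shows a case where the cue and outcome co-occur frequently yet the objective contingency is zero, the situation in which people tend to overestimate.

```python
def delta_p(a, b, c, d):
    """ΔP = P(outcome | cue) - P(outcome | no cue), from a 2x2 table.

    a: cue present, outcome present    b: cue present, outcome absent
    c: cue absent,  outcome present    d: cue absent,  outcome absent
    """
    return a / (a + b) - c / (c + d)

# The outcome occurs 75% of the time whether or not the cue is present,
# so the true contingency is zero -- yet the large a-cell (frequent
# co-occurrences) is what drives overestimation.
print(delta_p(30, 10, 30, 10))  # -> 0.0
```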


      Most cited references (58)


      Measuring individual differences in implicit cognition: the implicit association test.

      An implicit association test (IAT) measures differential association of 2 target concepts with an attribute. The 2 concepts appear in a 2-choice task (e.g., flower vs. insect names), and the attribute in a 2nd task (e.g., pleasant vs. unpleasant words for an evaluation attribute). When instructions oblige highly associated categories (e.g., flower + pleasant) to share a response key, performance is faster than when less associated categories (e.g., insect + pleasant) share a key. This performance difference implicitly measures differential association of the 2 concepts with the attribute. In 3 experiments, the IAT was sensitive to (a) near-universal evaluative differences (e.g., flower vs. insect), (b) expected individual differences in evaluative associations (Japanese + pleasant vs. Korean + pleasant for Japanese vs. Korean subjects), and (c) consciously disavowed evaluative differences (Black + pleasant vs. White + pleasant for self-described unprejudiced White subjects).

        Statistical learning by 8-month-old infants.

        Learners rely on a combination of experience-independent and experience-dependent mechanisms to extract information from the environment. Language acquisition involves both types of mechanisms, but most theorists emphasize the relative importance of experience-independent mechanisms. The present study shows that a fundamental task of language acquisition, segmentation of words from fluent speech, can be accomplished by 8-month-old infants based solely on the statistical relationships between neighboring speech sounds. Moreover, this word segmentation was based on statistical learning from only 2 minutes of exposure, suggesting that infants have access to a powerful mechanism for the computation of statistical properties of the language input.
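The "statistical relationships between neighboring speech sounds" in this work are transitional probabilities, TP(x → y) = freq(xy) / freq(x). A minimal sketch, with a hypothetical syllable stream (the nonsense syllables are illustrative, not the study's actual stimuli):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Map each adjacent pair (x, y) to freq(xy) / freq(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    onset_counts = Counter(syllables[:-1])  # x counted only when it starts a pair
    return {pair: n / onset_counts[pair[0]] for pair, n in pair_counts.items()}

# Stream of two made-up "words", tu-pi-ro and go-la-bu: within-word
# transitions such as tu->pi are perfectly predictable (TP = 1.0),
# while transitions out of the word-final syllable "bu" split across
# two continuations (TP = 0.5), a statistical cue to word boundaries.
stream = ["tu", "pi", "ro", "go", "la", "bu", "go", "la", "bu", "tu", "pi", "ro"]
tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])  # -> 1.0
print(tps[("bu", "go")])  # -> 0.5
```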

          Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs

          Effect sizes are the most important outcome of empirical studies. Most articles on effect sizes highlight their importance to communicate the practical significance of results. For scientists themselves, effect sizes are most useful because they facilitate cumulative science. Effect sizes can be used to determine the sample size for follow-up studies, or to examine effects across studies. This article aims to provide a practical primer on how to calculate and report effect sizes for t-tests and ANOVAs such that effect sizes can be used in a priori power analyses and meta-analyses. Whereas many articles about effect sizes focus on between-subjects designs and address within-subjects designs only briefly, I provide a detailed overview of the similarities and differences between within- and between-subjects designs. I suggest that some research questions in experimental psychology examine inherently intra-individual effects, which makes effect sizes that incorporate the correlation between measures the best summary of the results. Finally, a supplementary spreadsheet is provided to make it as easy as possible for researchers to incorporate effect size calculations into their workflow.
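To make the between/within distinction concrete, here is a hedged sketch (not the primer's own code or spreadsheet) of two effect sizes it discusses: Cohen's d with a pooled standard deviation for between-subjects comparisons, and d_z, which divides the mean paired difference by the standard deviation of the differences and thereby incorporates the correlation between measures.

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Between-subjects Cohen's d using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

def cohens_dz(pre, post):
    """Within-subjects d_z: mean paired difference over the SD of differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / stdev(diffs)

print(cohens_d([5, 6, 7], [3, 4, 5]))   # -> 2.0
print(cohens_dz([1, 2, 3], [2, 4, 6]))  # -> 2.0
```

Note that d_z can differ substantially from d for the same means whenever the two measures are correlated, which is why the primer treats the designs separately.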

            Author and article information

            [1]Primary Care and Public Health Sciences, King’s College London, UK
            [2]Department of Experimental Psychology, University College London, UK
            [3]Departamento de Fundamentos y Métodos de la Psicología, Universidad de Deusto, Bilbao, Spain
            Author notes
            Miguel A. Vadillo, Primary Care and Public Health Sciences, King’s College London, Addison House, Guy's Campus, London SE1 1UL, UK, Tel. +44 207 848-6620, Fax +44 207 848-6652, E-mail
            Exp Psychol
            Experimental Psychology
            Hogrefe Publishing
            March 29, 2016
            Volume 63, Issue 1, Pages 3–19
            © 2016 Hogrefe Publishing

            Distributed under the Hogrefe OpenMind License

            Self URI (pdf): zea_63_1_3.pdf
            Theoretical Article

