      Is Open Access

      Is predictability salient? A study of attentional capture by auditory patterns


          Abstract

          In this series of behavioural and electroencephalography (EEG) experiments, we investigate the extent to which repeating patterns of sounds capture attention. Work in the visual domain has revealed attentional capture by statistically predictable stimuli, consistent with predictive coding accounts which suggest that attention is drawn to sensory regularities. Here, stimuli comprised rapid sequences of tone pips, arranged in regular (REG) or random (RAND) patterns. EEG data demonstrate that the brain rapidly recognizes predictable patterns, manifested as a rapid increase in responses to REG relative to RAND sequences. This increase is reminiscent of the increase in gain on neural responses to attended stimuli often seen in the neuroimaging literature, and thus consistent with the hypothesis that predictable sequences draw attention. To study potential attentional capture by auditory regularities, we used REG and RAND sequences in two different behavioural tasks designed to reveal effects of attentional capture by regularity. Overall, the pattern of results suggests that regularity does not capture attention.
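The REG/RAND contrast can be illustrated with a short sketch. This is not the authors' stimulus code; the frequency pool, cycle length, and sequence length here are hypothetical placeholders (the actual stimulus parameters are given in the paper). The idea is simply that a REG sequence cycles a fixed order of tone-pip frequencies, while a RAND sequence draws from the same pool with no repeating order:

```python
import random

def make_sequence(pool, n_pips, regular, cycle_len=10, seed=0):
    """Return a list of tone-pip frequencies (Hz).

    regular=True  -> REG: one fixed cycle of cycle_len frequencies repeats.
    regular=False -> RAND: every pip is drawn at random from the pool.
    """
    rng = random.Random(seed)
    if regular:
        cycle = rng.sample(pool, cycle_len)           # one fixed pattern
        return [cycle[i % cycle_len] for i in range(n_pips)]
    return [rng.choice(pool) for _ in range(n_pips)]  # no repeating structure

# Hypothetical frequency pool (Hz).
pool = [222, 280, 353, 445, 561, 707, 891, 1123, 1416, 1784]
reg = make_sequence(pool, 40, regular=True)
rand = make_sequence(pool, 40, regular=False)
assert reg[:10] == reg[10:20]  # REG repeats its cycle exactly
```

Both sequence types have identical first-order statistics (the same frequency pool); only the REG sequence is predictable from its history, which is the regularity the EEG responses track.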

          This article is part of the themed issue ‘Auditory and visual scene analysis’.


          Most cited references (46)


          Statistical learning of tone sequences by human infants and adults.

          Previous research suggests that language learners can detect and use the statistical properties of syllable sequences to discover words in continuous speech (e.g. Aslin, R.N., Saffran, J.R., Newport, E.L., 1998. Computation of conditional probability statistics by 8-month-old infants. Psychological Science 9, 321-324; Saffran, J.R., Aslin, R.N., Newport, E.L., 1996. Statistical learning by 8-month-old infants. Science 274, 1926-1928; Saffran, J.R., Newport, E.L., Aslin, R.N., 1996. Word segmentation: the role of distributional cues. Journal of Memory and Language 35, 606-621; Saffran, J.R., Newport, E.L., Aslin, R.N., Tunick, R.A., Barrueco, S., 1997. Incidental language learning: Listening (and learning) out of the corner of your ear. Psychological Science 8, 101-195). In the present research, we asked whether this statistical learning ability is uniquely tied to linguistic materials. Subjects were exposed to continuous non-linguistic auditory sequences whose elements were organized into 'tone words'. As in our previous studies, statistical information was the only word boundary cue available to learners. Both adults and 8-month-old infants succeeded at segmenting the tone stream, with performance indistinguishable from that obtained with syllable streams. These results suggest that a learning mechanism previously shown to be involved in word segmentation can also be used to segment sequences of non-linguistic stimuli.
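The statistical cue described above can be sketched concretely. The tone inventory and three-tone 'words' below are made up for illustration, not Saffran et al.'s actual stimuli: within a word, the transitional probability from one tone to the next is high; across a word boundary it is low, and a learner can estimate both from simple bigram counts:

```python
import random
from collections import Counter

def transitional_probs(stream):
    """Estimate P(next tone | current tone) from adjacent pairs in a stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical three-tone 'words'; the stream concatenates them in random
# order, so statistics are the only cue to word boundaries.
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]
rng = random.Random(1)
stream = [tone for _ in range(200) for tone in rng.choice(words)]

tp = transitional_probs(stream)
assert tp[("A", "B")] == 1.0          # within-word: perfectly predictable
assert tp.get(("C", "D"), 0.0) < 0.6  # across a boundary: roughly 1/3
```

Dips in transitional probability mark the candidate word boundaries, which is the cue both adults and infants appear to exploit.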
            Is Open Access

            Attentional Routes to Conscious Perception

            The relationships between spatial attention and conscious perception are currently the object of intense debate. Recent evidence of double dissociations between attention and consciousness casts doubt on the time-honored concept of attention as a gateway to consciousness. Here we review evidence from behavioral, neurophysiologic, neuropsychological, and neuroimaging experiments, showing that distinct sorts of spatial attention can have different effects on visual conscious perception. While endogenous, or top-down, attention has a weak influence on subsequent conscious perception of near-threshold stimuli, exogenous, or bottom-up, forms of spatial attention appear instead to be a necessary, although not sufficient, step in the development of reportable visual experiences. Fronto-parietal networks important for spatial attention, with peculiar inter-hemispheric differences, constitute plausible neural substrates for the interactions between exogenous spatial attention and conscious perception.

              Bayesian surprise attracts human attention.

              We propose a formal Bayesian definition of surprise to capture subjective aspects of sensory information. Surprise measures how data affects an observer, in terms of differences between posterior and prior beliefs about the world. Only data observations which substantially affect the observer's beliefs yield surprise, irrespective of how rare or informative in Shannon's sense these observations are. We test the framework by quantifying the extent to which humans may orient attention and gaze towards surprising events or items while watching television. To this end, we implement a simple computational model where a low-level, sensory form of surprise is computed by simple simulated early visual neurons. Bayesian surprise is a strong attractor of human attention, with 72% of all gaze shifts directed towards locations more surprising than the average, a figure rising to 84% when focusing the analysis onto regions simultaneously selected by all observers. The proposed theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.
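The 'difference between posterior and prior beliefs' in this definition is the Kullback-Leibler divergence between the two distributions. A minimal numerical sketch (the prior and posterior values below are made up for illustration):

```python
import math

def kl_divergence(posterior, prior):
    """Bayesian surprise: KL(posterior || prior), in bits."""
    return sum(p * math.log2(p / q) for p, q in zip(posterior, prior) if p > 0)

# Beliefs over three hypothetical world states.
prior = [0.6, 0.3, 0.1]
posterior_unsurprising = [0.58, 0.32, 0.10]  # data barely moved beliefs
posterior_surprising   = [0.05, 0.15, 0.80]  # data strongly revised beliefs

low  = kl_divergence(posterior_unsurprising, prior)
high = kl_divergence(posterior_surprising, prior)
assert low < high  # larger belief revision => larger surprise
```

A rare event that does not change the observer's beliefs scores zero surprise under this measure, even though its Shannon information is high, which is the distinction the abstract draws.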

                Author and article information

                Journal
                Philosophical Transactions of the Royal Society B: Biological Sciences (Philos. Trans. R. Soc. Lond. B Biol. Sci.; RSTB)
                Publisher: The Royal Society
                ISSN: 0962-8436 (print); 1471-2970 (electronic)
                Published: 19 February 2017
                Volume 372, issue 1714: theme issue ‘Auditory and visual scene analysis’ compiled and edited by Hirohito M. Kondo, Jun-Ichiro Kawahara, Anouk M. van Loon and Brian C.J. Moore
                Affiliations
                [1 ]Ear Institute, University College London , London WC1X 8EE, UK
                [2 ]École Normale Supérieure , Paris 75005, France
                [3 ]Wellcome Trust Centre for Neuroimaging, University College London , London WC1N 3BG, UK
                Author notes
                [†]

                These authors contributed equally to this study.

                One contribution of 15 to a theme issue ‘Auditory and visual scene analysis’.

                Electronic supplementary material is available online at https://dx.doi.org/10.6084/m9.figshare.c.3583490.

                Article
                Article ID: rstb20160105
                DOI: 10.1098/rstb.2016.0105
                PMCID: 5206273
                PMID: 28044016
                © 2017 The Authors.

                Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.

                Funding
                Funded by: Brain Research Trust, http://dx.doi.org/10.13039/501100000368;
                Funded by: Biotechnology and Biological Sciences Research Council, http://dx.doi.org/10.13039/501100000268;
                Award ID: BB/K003399/1
                Categories
                Articles
                Research Article
                Custom metadata
                February 19, 2016
