
      Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration

      research-article


          Abstract

          In order to maintain a coherent, unified percept of the external environment, the brain must continuously combine information encoded by our different sensory systems. Contemporary models suggest that multisensory integration produces a weighted average of sensory estimates, where the contribution of each system to the ultimate multisensory percept is governed by the relative reliability of the information it provides (maximum-likelihood estimation). In the present study, we investigate interactions between auditory and visual rate perception, where observers are required to make judgments in one modality while ignoring conflicting rate information presented in the other. We show a gradual transition between partial cue integration and complete cue segregation with increasing inter-modal discrepancy that is inconsistent with mandatory implementation of maximum-likelihood estimation. To explain these findings, we implement a simple Bayesian model of integration that is also able to predict observer performance with novel stimuli. The model assumes that the brain takes into account prior knowledge about the correspondence between auditory and visual rate signals, when determining the degree of integration to implement. This provides a strategy for balancing the benefits accrued by integrating sensory estimates arising from a common source, against the costs of conflating information relating to independent objects or events.
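
The trade-off the abstract describes, between integrating discrepant cues and segregating them, can be sketched computationally. The following is a minimal illustration, not the paper's actual model or notation: a prior probability that the auditory and visual rate signals share a common source is combined with the likelihood of the observed discrepancy, and the resulting posterior sets the degree of integration. All parameter names and values are assumptions made for the sketch.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def visual_rate_estimate(x_v, x_a, sigma_v, sigma_a, sigma_prior, p_common):
    """Sketch of a Bayesian model balancing integration against segregation.

    The observer weighs two hypotheses: the auditory and visual rates share
    a common source (integrate via a reliability-weighted average) or arise
    from independent sources (segregate and keep the visual estimate alone).
    The reported estimate averages the two outcomes, weighted by the
    posterior probability of a common source, which falls smoothly as the
    inter-modal discrepancy grows -- partial integration for small
    conflicts, near-complete segregation for large ones.
    """
    # Likelihood of the measured discrepancy under each hypothesis.
    like_common = gauss(x_v, x_a, np.sqrt(sigma_v**2 + sigma_a**2))
    like_indep = 1.0 / (2 * sigma_prior)  # flat over a plausible range (illustrative)
    post_common = (p_common * like_common) / (
        p_common * like_common + (1 - p_common) * like_indep
    )
    # Fused (integration) estimate vs. segregated (vision-only) estimate.
    fused = (x_v / sigma_v**2 + x_a / sigma_a**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    return post_common * fused + (1 - post_common) * x_v
```

With a small audio-visual conflict the visual estimate is pulled toward the auditory rate; with a large conflict the posterior probability of a common source collapses and the visual judgment is left essentially unbiased, reproducing the gradual transition from partial integration to segregation.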

          Related collections

          Most cited references (33)


          The ventriloquist effect results from near-optimal bimodal integration.

          Ventriloquism is the ancient art of making one's voice appear to come from elsewhere, an art exploited by the Greek and Roman oracles, and possibly earlier. We regularly experience the effect when watching television and movies, where the voices seem to emanate from the actors' lips rather than from the actual sound source. Originally, ventriloquism was explained by performers projecting sound to their puppets by special techniques, but more recently it is assumed that ventriloquism results from vision "capturing" sound. In this study we investigate spatial localization of audio-visual stimuli. When visual localization is good, vision does indeed dominate and capture sound. However, for severely blurred visual stimuli (that are poorly localized), the reverse holds: sound captures vision. For less blurred stimuli, neither sense dominates and perception follows the mean position. Precision of bimodal localization is usually better than either the visual or the auditory unimodal presentation. All the results are well explained not by one sense capturing the other, but by a simple model of optimal combination of visual and auditory information.
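
The "optimal combination" this abstract describes is reliability-weighted (maximum-likelihood) averaging, which can be written in a few lines. A minimal sketch with Gaussian single-cue estimates and made-up numbers:

```python
import numpy as np

def mle_fuse(mu_v, sigma_v, mu_a, sigma_a):
    """Maximum-likelihood fusion of two Gaussian cue estimates.

    Each cue is weighted by its reliability (inverse variance). The fused
    variance is never larger than the better single cue's variance, which
    is why bimodal localization can beat either unimodal estimate.
    """
    r_v, r_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2   # reliabilities
    w_v = r_v / (r_v + r_a)                          # visual weight
    mu = w_v * mu_v + (1.0 - w_v) * mu_a             # fused estimate
    sigma = np.sqrt(1.0 / (r_v + r_a))               # fused std dev
    return mu, sigma

# Sharp visual cue: vision dominates and pulls the percept toward it
# ("visual capture"); blur the visual cue and the weighting reverses,
# so sound captures vision instead.
mu_sharp, sd_sharp = mle_fuse(mu_v=0.0, sigma_v=1.0, mu_a=5.0, sigma_a=4.0)
mu_blur, sd_blur = mle_fuse(mu_v=0.0, sigma_v=4.0, mu_a=5.0, sigma_a=1.0)
```

No "capture" mechanism is needed: dominance by either modality and the intermediate compromise all fall out of the same weighting rule as the relative reliabilities change.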

            Illusions. What you see is what you hear.


              Immediate perceptual response to intersensory discrepancy.


                Author and article information

                Journal
                Proc Biol Sci
                RSPB
                Proceedings of the Royal Society B: Biological Sciences
                The Royal Society (London )
                0962-8452
                1471-2954
                20 June 2006
                07 September 2006
                Volume 273, Issue 1598, Pages 2159-2168
                Affiliations
                [1 ]Visual Neuroscience Group, School of Psychology, The University of Nottingham, Nottingham NG7 2RD, UK
                [2 ]Department of Optometry, University of Bradford, Bradford BD7 1DP, UK
                Author notes
                [* ]Author for correspondence (nwr@psychology.nottingham.ac.uk).
                Article
                rspb20063578
                DOI: 10.1098/rspb.2006.3578
                PMC: 1635528
                PMID: 16901835
                Copyright © 2006 The Royal Society

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

                History
                21 March 2006
                5 April 2006
                Categories
                Research Article

                Life sciences
                audio-visual conflict, multisensory integration, Bayesian modelling
