Open Access

From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference

      Annals of the New York Academy of Sciences
      Wiley



Most cited references (161)


          The ventriloquist effect results from near-optimal bimodal integration.

          Ventriloquism is the ancient art of making one's voice appear to come from elsewhere, an art exploited by the Greek and Roman oracles, and possibly earlier. We regularly experience the effect when watching television and movies, where the voices seem to emanate from the actors' lips rather than from the actual sound source. Originally, ventriloquism was explained by performers projecting sound to their puppets by special techniques, but more recently it is assumed that ventriloquism results from vision "capturing" sound. In this study we investigate spatial localization of audio-visual stimuli. When visual localization is good, vision does indeed dominate and capture sound. However, for severely blurred visual stimuli (that are poorly localized), the reverse holds: sound captures vision. For less blurred stimuli, neither sense dominates and perception follows the mean position. Precision of bimodal localization is usually better than either the visual or the auditory unimodal presentation. All the results are well explained not by one sense capturing the other, but by a simple model of optimal combination of visual and auditory information.
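The "simple model of optimal combination" referred to here is the standard maximum-likelihood (inverse-variance-weighted) cue-combination rule. Below is a minimal sketch, assuming Gaussian unimodal estimates; the function name and the example values are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_estimates(mu_v, var_v, mu_a, var_a):
    """Maximum-likelihood fusion of two Gaussian position cues.

    Each cue is weighted by its inverse variance (its reliability),
    so the more precise modality dominates the combined estimate.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # visual weight
    w_a = 1 - w_v                                  # auditory weight
    mu = w_v * mu_v + w_a * mu_a                   # combined position estimate
    var = 1 / (1 / var_v + 1 / var_a)              # combined variance (< both)
    return mu, var

# Sharp visual cue: vision "captures" sound (classic ventriloquism).
print(fuse_estimates(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=25.0))
# Severely blurred visual cue: the reverse, sound captures vision.
print(fuse_estimates(mu_v=0.0, var_v=25.0, mu_a=10.0, var_a=1.0))
```

Because the combined variance is always smaller than either unimodal variance, this rule also accounts for the finding that bimodal localization is usually more precise than either modality alone.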

            On the other hand: dummy hands and peripersonal space.

            Where are my hands? The brain can answer this question using sensory information arising from vision, proprioception, or touch. Other sources of information about the position of our hands can be derived from multisensory interactions (or potential interactions) with our close environment, such as when we grasp or avoid objects. The pioneering study of multisensory representations of peripersonal space was published in Behavioural Brain Research almost 30 years ago [Rizzolatti G, Scandolara C, Matelli M, Gentilucci M. Afferent properties of periarcuate neurons in macaque monkeys. II. Visual responses. Behav Brain Res 1981;2:147-63]. More recently, neurophysiological, neuroimaging, neuropsychological, and behavioural studies have contributed a wealth of evidence concerning hand-centred representations of objects in peripersonal space. This evidence is examined here in detail. In particular, we focus on the use of artificial dummy hands as powerful instruments to manipulate the brain's representation of hand position, peripersonal space, and of hand ownership. We also review recent studies of the 'rubber hand illusion' and related phenomena, such as the visual capture of touch, and the recalibration of hand position sense, and discuss their findings in the light of research on peripersonal space. Finally, we propose a simple model that situates the 'rubber hand illusion' in the neurophysiological framework of multisensory hand-centred representations of space.
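One way to read the visual capture of touch and the recalibration of hand position sense is as the same reliability weighting applied to hand position: the felt position is pulled toward the seen dummy hand in proportion to how strongly vision dominates. A purely illustrative sketch, not the authors' model; the weight is a free parameter.

```python
def proprioceptive_drift(felt_x, seen_x, w_vision=0.7):
    """Illustrative visual-capture model: the felt hand position shifts
    toward the seen (dummy) hand by a weight reflecting how much vision
    dominates proprioception. w_vision is a hypothetical value, not an
    estimate from the paper."""
    return felt_x + w_vision * (seen_x - felt_x)

# Dummy hand 15 cm to the right of the real hand: the recalibrated
# position estimate drifts most of the way toward the dummy hand.
print(proprioceptive_drift(felt_x=0.0, seen_x=15.0))  # 10.5
```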

              Integration of proprioceptive and visual position-information: An experimentally supported model.

              To localize one's hand, i.e., to find out its position with respect to the body, humans may use proprioceptive information or visual information or both. It is still not known how the CNS combines simultaneous proprioceptive and visual information. In this study, we investigate in what position in a horizontal plane a hand is localized on the basis of simultaneous proprioceptive and visual information and compare this to the positions in which it is localized on the basis of proprioception only and vision only. Seated at a table, subjects matched target positions on the table top with their unseen left hand under the table. The experiment consisted of three series. In each of these series, the target positions were presented in three conditions: by vision only, by proprioception only, or by both vision and proprioception. In one of the three series, the visual information was veridical. In the other two, it was modified by prisms that displaced the visual field to the left and to the right, respectively. The results show that the mean of the positions indicated in the condition with both vision and proprioception generally lies off the straight line through the means of the other two conditions. In most cases the mean lies on the side predicted by a model describing the integration of multisensory information. According to this model, the visual information and the proprioceptive information are weighted with direction-dependent weights, the weights being related to the direction-dependent precision of the information in such a way that the available information is used very efficiently. Because the proposed model also can explain the unexpectedly small sizes of the variable errors in the localization of a seen hand that were reported earlier, there is strong evidence to support this model. The results imply that the CNS has knowledge about the direction-dependent precision of the proprioceptive and visual information.
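The direction-dependent weighting described here generalizes the inverse-variance rule to two dimensions, with each cue carrying a full covariance matrix rather than a single variance. A hedged sketch, assuming Gaussian cues with anisotropic precision; the example covariances are illustrative, with vision made precise laterally and proprioception precise in depth.

```python
import numpy as np

def fuse_2d(mu_v, cov_v, mu_p, cov_p):
    """Precision-weighted fusion of visual and proprioceptive 2D hand-position
    estimates. Each cue contributes in proportion to its precision (inverse
    covariance), so the effective weights differ by direction whenever the
    covariances are anisotropic."""
    prec_v = np.linalg.inv(cov_v)
    prec_p = np.linalg.inv(cov_p)
    prec = prec_v + prec_p                       # combined precision
    cov = np.linalg.inv(prec)                    # combined covariance
    mu = cov @ (prec_v @ mu_v + prec_p @ mu_p)   # precision-weighted mean
    return mu, cov

# Illustrative (not fitted) covariances: vision precise laterally (x),
# proprioception precise in depth (y).
cov_v = np.array([[0.2, 0.0], [0.0, 2.0]])
cov_p = np.array([[2.0, 0.0], [0.0, 0.2]])
mu, cov = fuse_2d(np.array([0.0, 0.0]), cov_v, np.array([1.0, 1.0]), cov_p)
print(mu)   # close to vision in x, close to proprioception in y
print(cov)  # smaller variance than either cue in both directions
```

Because the weights differ by direction, the fused mean is a different convex combination of the unimodal means along each axis, which is why it can lie off the straight line through the two unimodal means, as the experiments reported here found.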

                Author and article information

Journal
Annals of the New York Academy of Sciences (Ann. N.Y. Acad. Sci.)
Wiley
ISSN: 0077-8923
Published online: June 06 2018; in issue: August 2018
Volume: 1426
Issue: 1
Pages: 146-165
                Affiliations
[1] Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
[2] Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
[3] Department of Neurology, University of Geneva, Geneva, Switzerland
[4] MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
                Article
DOI: 10.1111/nyas.13867
                © 2018

License (text and data mining): http://doi.wiley.com/10.1002/tdm_license_1.1

License: http://creativecommons.org/licenses/by-nc/4.0/ (CC BY-NC 4.0)

