      Peripersonal space representation develops independently from visual experience

      research-article

          Abstract

          Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g., shape and orientation) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease in reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects’ reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping also occurred when objects were presented outside subjects’ reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of either one’s own or others’ peripersonal space representation.
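          For readers unfamiliar with the paradigm, the sketch below shows one way a spatial alignment effect of this kind could be quantified: the incongruent-minus-congruent reaction-time difference, computed separately for objects presented within and outside reach. The trial values, labels, and data layout are purely illustrative assumptions and are not taken from the study.

          # Illustrative only: hypothetical trial data, not values from the study.
          from statistics import mean

          # Each trial: (object location, response congruent with the object's affordance, RT in ms)
          trials = [
              ("within_reach", True, 512), ("within_reach", False, 561),
              ("within_reach", True, 498), ("within_reach", False, 547),
              ("outside_reach", True, 530), ("outside_reach", False, 584),
              ("outside_reach", True, 521), ("outside_reach", False, 569),
          ]

          def alignment_effect(trials, location):
              """Mean incongruent RT minus mean congruent RT for one object location (ms)."""
              congruent = [rt for loc, cong, rt in trials if loc == location and cong]
              incongruent = [rt for loc, cong, rt in trials if loc == location and not cong]
              return mean(incongruent) - mean(congruent)

          for location in ("within_reach", "outside_reach"):
              print(f"{location}: alignment effect = {alignment_effect(trials, location):.1f} ms")

          A positive value indicates faster responses when the action matches the object's affordance; in this framing, a comparable effect within and outside reach would mirror the pattern described in the abstract.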

                Author and article information

                Contributors
                corrado.sinigaglia@unimi.it
                Journal
                Scientific Reports (Sci Rep)
                Publisher: Nature Publishing Group UK (London)
                ISSN: 2045-2322
                Published: 15 December 2017
                Volume: 7
                Article number: 17673
                Affiliations
                [1] MOMILab, IMT School for Advanced Studies Lucca, I-55100 Lucca, Italy (ISNI 0000 0004 1790 9464; GRID grid.462365.0)
                [2] Research Center “E. Piaggio”, University of Pisa, I-56100 Pisa, Italy (ISNI 0000 0004 1757 3729; GRID grid.5395.a)
                [3] Department of Neuroscience and Imaging and Clinical Science, University G. d’Annunzio, I-66100 Chieti, Italy (ISNI 0000 0001 2181 4941; GRID grid.412451.7)
                [4] Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d’Annunzio, I-66100 Chieti, Italy (ISNI 0000 0001 2181 4941; GRID grid.412451.7)
                [5] Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK (ISNI 0000 0001 0942 6946; GRID grid.8356.8)
                [6] Department of Philosophy, University of Milan, via Festa del Perdono 7, I-20122 Milano, Italy (ISNI 0000 0004 1757 2822; GRID grid.4708.b)
                [7] CSSA, Centre for the Study of Social Action, University of Milan, I-20122 Milan, Italy (ISNI 0000 0004 1757 2822; GRID grid.4708.b)
                Article
                Publisher ID: 17896
                DOI: 10.1038/s41598-017-17896-9
                PMCID: PMC5732274
                PMID: 29247162
                © The Author(s) 2017

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 14 August 2017
                Accepted: 1 December 2017
                Categories
                Article
