
      A neural code for three-dimensional object shape in macaque inferotemporal cortex

      research-article


          Abstract

          Previous investigations of the neural code for complex object shape have focused on two-dimensional (2D) pattern representation. This might be the primary mode for object vision, based on simplicity and direct relation to the retinal image. In contrast, 3D shape representation requires higher-dimensional coding based on extensive computation. Here, for the first time, we provide evidence of an explicit neural code for complex 3D object shape. We used a novel evolutionary stimulus strategy and linear/nonlinear response models to characterize 3D shape responses in macaque monkey inferotemporal cortex (IT). We found widespread tuning for 3D spatial configurations of surface fragments characterized by their 3D orientations and joint principal curvatures. Configural representation of 3D shape could provide specific knowledge of object structure critical for guidance of complex physical interactions and evaluation of object functionality and utility.

          Related collections

Most cited references (46)


          Invariant visual representation by single neurons in the human brain.

          It takes a fraction of a second to recognize a person or an object even when seen under strikingly different conditions. How such a robust, high-level representation is achieved by neurons in the human brain is still unclear. In monkeys, neurons in the upper stages of the ventral visual pathway respond to complex images such as faces and objects and show some degree of invariance to metric properties such as the stimulus size, position and viewing angle. We have previously shown that neurons in the human medial temporal lobe (MTL) fire selectively to images of faces, animals, objects or scenes. Here we report on a remarkable subset of MTL neurons that are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names. These results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

            Sparse coding and decorrelation in primary visual cortex during natural vision.

            Theoretical studies suggest that primary visual cortex (area V1) uses a sparse code to efficiently represent natural scenes. This issue was investigated by recording from V1 neurons in awake behaving macaques during both free viewing of natural scenes and conditions simulating natural vision. Stimulation of the nonclassical receptive field increases the selectivity and sparseness of individual V1 neurons, increases the sparseness of the population response distribution, and strongly decorrelates the responses of neuron pairs. These effects are due to both excitatory and suppressive modulation of the classical receptive field by the nonclassical receptive field and do not depend critically on the spatiotemporal structure of the stimuli. During natural vision, the classical and nonclassical receptive fields function together to form a sparse representation of the visual world. This sparse code may be computationally efficient for both early vision and higher visual processing.

              A feedforward architecture accounts for rapid categorization.

              Primates are remarkably good at recognizing objects. The level of performance of their visual system and its robustness to image degradations still surpasses the best computer vision systems despite decades of engineering effort. In particular, the high accuracy of primates in ultra rapid object categorization and rapid serial visual presentation tasks is remarkable. Given the number of processing stages involved and typical neural latencies, such rapid visual processing is likely to be mostly feedforward. Here we show that a specific implementation of a class of feedforward theories of object recognition (that extend the Hubel and Wiesel simple-to-complex cell hierarchy and account for many anatomical and physiological constraints) can predict the level and the pattern of performance achieved by humans on a rapid masked animal vs. non-animal categorization task.

Author and article information

Journal: Nat Neurosci (Nature Neuroscience)
ISSN: 1097-6256 (print); 1546-1726 (electronic)
History: 5 September 2008; 5 October 2008; November 2008; 12 August 2009
Volume 11, Issue 11, pp. 1352–1360

Affiliations
[1] Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, 3400 N Charles St, Baltimore, Maryland 21218, USA.
[2] Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Ave, Baltimore, Maryland 21205, USA.
[3] Department of Neuroscience, Johns Hopkins University School of Medicine, 725 N Wolfe St, Baltimore, Maryland 21205, USA.

Article identifiers
DOI: 10.1038/nn.2202
PMCID: 2725445
PMID: 18836443

Funding
Funded by: National Eye Institute (NEI)
Award ID: R01 EY016711-01

Categories
Article; Neurosciences
