
      Spatiotemporal dynamics of similarity-based neural representations of facial identity


          Abstract

          Humans can rapidly discriminate among many highly similar facial identities across identity-preserving image transformations (e.g., changes in facial expression), an ability that requires the system to rapidly transform image-based inputs into a more abstract, identity-based representation. We used magnetoencephalography to provide a temporally precise description of this transformation within human face-selective cortical regions. We observed a transition from an image-based representation toward an identity-based representation after ∼200 ms, a result suggesting that, rather than computing a single representation, a given face-selective region may represent multiple distinct types of information about face identity at different times. Our results advance our understanding of the microgenesis of fine-grained, high-level neural representations of object identity, a process critical to human visual expertise.

          Humans’ remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level “image-based” and higher level “identity-based” model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
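          The comparison between neural data and the "image-based" and "identity-based" model representations described above is in the spirit of representational similarity analysis: build a dissimilarity matrix over the stimulus set for each representation, then correlate the matrices. A minimal toy sketch follows; all numbers, dimensions, and data here are invented for illustration and are not the paper's stimuli or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

n_faces = 8      # hypothetical number of face identities
n_features = 50  # hypothetical dimensionality of each representation

# Toy stand-ins for the two model representations of the same faces.
image_model = rng.normal(size=(n_faces, n_features))
identity_model = rng.normal(size=(n_faces, n_features))

# Toy "neural" pattern: a noisy copy of the identity-based model,
# as a late (post-200 ms) face-selective response might be.
neural = identity_model + 0.5 * rng.normal(size=(n_faces, n_features))

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle, excluding the diagonal."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman rank correlation (no tie handling; fine for
    continuous toy data)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Rank-correlate the neural RDM with each model RDM.
r_image = spearman(upper(rdm(neural)), upper(rdm(image_model)))
r_identity = spearman(upper(rdm(neural)), upper(rdm(identity_model)))

print(f"similarity to image-based model:    {r_image:.2f}")
print(f"similarity to identity-based model: {r_identity:.2f}")
```

          Because the toy neural pattern was built from the identity model plus noise, its RDM correlates more strongly with the identity-based RDM than with the (independent) image-based RDM, mirroring the qualitative pattern reported for late responses in face-selective regions.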

          Related collections

          Most cited references (35)


          A cortical region consisting entirely of face-selective cells.

          Face perception is a skill crucial to primates. In both humans and macaque monkeys, functional magnetic resonance imaging (fMRI) reveals a system of cortical regions that show increased blood flow when the subject views images of faces, compared with images of objects. However, the stimulus selectivity of single neurons within these fMRI-identified regions has not been studied. We used fMRI to identify and target the largest face-selective region in two macaques for single-unit recording. Almost all (97%) of the visually responsive neurons in this region were strongly face selective, indicating that a dedicated cortical area exists to support face processing in the macaque.

            A feedforward architecture accounts for rapid categorization.

            Primates are remarkably good at recognizing objects. The level of performance of their visual system and its robustness to image degradations still surpasses the best computer vision systems despite decades of engineering effort. In particular, the high accuracy of primates in ultra rapid object categorization and rapid serial visual presentation tasks is remarkable. Given the number of processing stages involved and typical neural latencies, such rapid visual processing is likely to be mostly feedforward. Here we show that a specific implementation of a class of feedforward theories of object recognition (that extend the Hubel and Wiesel simple-to-complex cell hierarchy and account for many anatomical and physiological constraints) can predict the level and the pattern of performance achieved by humans on a rapid masked animal vs. non-animal categorization task.

              Functional compartmentalization and viewpoint generalization within the macaque face-processing system.

              Primates can recognize faces across a range of viewing conditions. Representations of individual identity should thus exist that are invariant to accidental image transformations like view direction. We targeted the recently discovered face-processing network of the macaque monkey that consists of six interconnected face-selective regions and recorded from the two middle patches (ML, middle lateral, and MF, middle fundus) and two anterior patches (AL, anterior lateral, and AM, anterior medial). We found that the anatomical position of a face patch was associated with a unique functional identity: Face patches differed qualitatively in how they represented identity across head orientations. Neurons in ML and MF were view-specific; neurons in AL were tuned to identity mirror-symmetrically across views, thus achieving partial view invariance; and neurons in AM, the most anterior face patch, achieved almost full view invariance.

                Author and article information

                Journal: Proceedings of the National Academy of Sciences (Proc Natl Acad Sci USA)
                ISSN: 0027-8424 (print); 1091-6490 (online)
                Published: January 10, 2017
                Volume 114, Issue 2, pp. 388–393
                DOI: 10.1073/pnas.1614763114
                PMC: 5240702
                PMID: 28028220
                © 2017
                License: http://www.pnas.org/site/misc/userlicense.xhtml
