      Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior


          Abstract

          Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
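The variance-partitioning logic described in the abstract, where each feature model's unique contribution is separated from variance it shares with the other models, can be sketched as follows. This is not the authors' analysis code; it is a minimal illustration using ordinary least squares with simulated data and hypothetical model names, in which a model's unique contribution is defined as the drop in R² when that model is left out of the full regression.

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def partition_variance(models, y):
    """Unique R^2 contribution of each predictor set.

    `models` maps a model name to its predictor column(s). The unique
    contribution of model m is R^2(full model) minus R^2(full model
    without m); for nested OLS fits this difference is non-negative.
    """
    names = list(models)
    full_r2 = r_squared(np.column_stack([models[n] for n in names]), y)
    unique = {}
    for n in names:
        rest = np.column_stack([models[m] for m in names if m != n])
        unique[n] = full_r2 - r_squared(rest, y)
    return full_r2, unique

# Illustrative simulation: a target similarity measure driven by two of
# three (hypothetical, mutually uncorrelated) feature models.
rng = np.random.default_rng(0)
n = 200
models = {
    "functional": rng.normal(size=n),  # stand-in for functional features
    "dnn": rng.normal(size=n),         # stand-in for DNN features
    "objects": rng.normal(size=n),     # stand-in for object labels
}
target = models["functional"] + models["dnn"] + 0.1 * rng.normal(size=n)
full_r2, unique = partition_variance(models, target)
```

In this simulation the "functional" and "dnn" predictors each retain a large unique contribution, while the irrelevant "objects" predictor's unique contribution stays near zero, mirroring the dissociation pattern the study reports. Because the predictors were preselected to be uncorrelated, unique contributions dominate and shared variance is minimal.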


                Author and article information

                Contributors
                Role: Reviewing Editor
Journal
eLife (eLife Sciences Publications, Ltd)
ISSN: 2050-084X
Published: 07 March 2018
Volume: 7, e32962
                Affiliations
[1] Laboratory of Brain and Cognition, National Institutes of Health, Bethesda, United States
[2] Department of Psychology, New York University, New York City, United States
[3] Neuroscience Program, Bates College, Maine, United States
[4] Princeton Neuroscience Institute, Princeton University, Princeton, United States
[5] Stanford Vision Lab, Stanford University, Stanford, United States
[6] Department of Psychology, University of Illinois, Urbana-Champaign, United States
[7] Beckman Institute, University of Illinois, Urbana-Champaign, United States
[8] California Institute of Technology, United States
[9] California Institute of Technology, United States
                Author information
                http://orcid.org/0000-0002-5536-6128
                http://orcid.org/0000-0003-3540-5019
                http://orcid.org/0000-0001-9802-5828
                http://orcid.org/0000-0001-6861-8964
Article
Article number: 32962
DOI: 10.7554/eLife.32962
PMC ID: 5860866
PMID: 29513219
Record ID: 9809b64e-82c7-4220-8d5c-45e0168f1e75

                This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

History
Received: 19 October 2017
Accepted: 02 March 2018
                Funding
                Funded by: FundRef http://dx.doi.org/10.13039/100000002, National Institutes of Health;
                Award ID: ZIAMH002909
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/501100003246, Nederlandse Organisatie voor Wetenschappelijk Onderzoek;
                Award ID: Rubicon Fellowship
                Award Recipient :
                Funded by: FundRef http://dx.doi.org/10.13039/100000006, Office of Naval Research;
                Award ID: Multidisciplinary Research Initiative Grant N000141410671
                Award Recipient :
                The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
                Categories
                Research Article
                Neuroscience
                Custom metadata
                Deep network features exhibit a robust correlation with brain activity in scene-selective cortex, but are not sufficient to explain human scene categorization behavior, which is strongly shaped by information about the function (possibility for action) of the scene.

Life sciences
Keywords: scene perception, variance partitioning, behavioral categorization, deep neural network, computational model, fMRI, human
