
      Predicting eye movement patterns from fMRI responses to natural scenes

research-article
Thomas P. O’Connell 1, Marvin M. Chun 1,2
      Nature Communications
      Nature Publishing Group UK


          Abstract

          Eye tracking has long been used to measure overt spatial attention, and computational models of spatial attention reliably predict eye movements to natural images. However, researchers lack techniques to noninvasively access spatial representations in the human brain that guide eye movements. Here, we use functional magnetic resonance imaging (fMRI) to predict eye movement patterns from reconstructed spatial representations evoked by natural scenes. First, we reconstruct fixation maps to directly predict eye movement patterns from fMRI activity. Next, we use a model-based decoding pipeline that aligns fMRI activity to deep convolutional neural network activity to reconstruct spatial priority maps and predict eye movements in a zero-shot fashion. We predict human eye movement patterns from fMRI responses to natural scenes, provide evidence that visual representations of scenes and objects map onto neural representations that predict eye movements, and find a novel three-way link between brain activity, deep neural network models, and behavior.
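The direct decoding route described here can be made concrete. Below is a minimal sketch that reconstructs fixation maps from fMRI voxel patterns with cross-validated ridge regression; the file names, the scikit-learn estimator, and the penalty value are illustrative assumptions, not the paper's exact pipeline.

    # Minimal sketch: reconstruct fixation maps directly from fMRI activity.
    # Assumes X (n_images x n_voxels) and Y (n_images x n_pixels, flattened
    # fixation maps) are precomputed; file names are hypothetical.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    X = np.load("fmri_patterns.npy")    # n_images x n_voxels
    Y = np.load("fixation_maps.npy")    # n_images x n_pixels

    preds = np.zeros_like(Y)
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=1e3)        # L2 penalty; would be tuned in practice
        model.fit(X[train], Y[train])
        preds[test] = model.predict(X[test])

    # Score each reconstruction against the held-out human fixation map.
    r = [np.corrcoef(p, y)[0, 1] for p, y in zip(preds, Y)]
    print("mean map correlation:", np.mean(r))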

Summary

          Human eye movements when viewing scenes can reflect overt spatial attention. Here, O’Connell and Chun predict human eye movement patterns from BOLD responses to natural scenes. Linking brain activity, convolutional neural network (CNN) models, and eye movement behavior, they show that brain activity patterns and CNN models share representations that guide eye movements to scenes.
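The model-based, zero-shot route can likewise be sketched: decoded CNN feature maps for a held-out image are collapsed into a spatial priority map and scored against human fixation locations with normalized scanpath saliency (NSS), a standard eye-movement metric. The synthetic arrays and the sum-over-channels readout below are illustrative assumptions; the paper's actual decoder and scoring may differ.

    import numpy as np

    def priority_map(feature_maps):
        """Collapse decoded CNN feature maps (channels x H x W) into one
        z-scored spatial priority map by summing over channels."""
        m = feature_maps.sum(axis=0)
        return (m - m.mean()) / (m.std() + 1e-8)

    def nss(pmap, fixations):
        """Normalized scanpath saliency: mean z-scored map value at the
        (row, col) pixels that human observers fixated."""
        rows, cols = np.array(fixations).T
        return pmap[rows, cols].mean()

    # Synthetic stand-ins: decoded feature maps for one held-out image and
    # a few human fixations (real inputs would come from the trained
    # fMRI-to-CNN regression and the eye tracker).
    rng = np.random.default_rng(0)
    decoded_feats = rng.random((256, 14, 14))
    fixations = [(3, 4), (7, 7), (10, 2)]
    print("NSS:", nss(priority_map(decoded_feats), fixations))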


Most cited references (36)


          A hybrid approach to the skull stripping problem in MRI.

We present a novel skull-stripping algorithm based on a hybrid approach that combines watershed algorithms and deformable surface models. Our method takes advantage of the robustness of the former as well as the surface information available to the latter. The algorithm first localizes a single white matter voxel in a T1-weighted MR image, and uses it to create a global minimum in the white matter before applying a watershed algorithm with a preflooding height. The watershed algorithm builds an initial estimate of the brain volume based on the three-dimensional connectivity of the white matter. This first step is robust, and performs well in the presence of intensity nonuniformities and noise, but may erode parts of the cortex that abut bright nonbrain structures such as the eye sockets, or may remove parts of the cerebellum. To correct these inaccuracies, a surface deformation process fits a smooth surface to the masked volume, allowing the incorporation of geometric constraints into the skull-stripping procedure. A statistical atlas, generated from a set of accurately segmented brains, is used to validate and potentially correct the segmentation, and the MRI intensity values are locally re-estimated at the boundary of the brain. Finally, a high-resolution surface deformation is performed that accurately matches the outer boundary of the brain, resulting in a robust and automated procedure. Studies by our group and others show that this method outperforms other publicly available skull-stripping tools. Copyright 2004 Elsevier Inc.
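The watershed stage of this algorithm is easy to sketch. The fragment below is a rough approximation rather than the authors' implementation: it inverts a T1 volume so white matter becomes the deepest basin, emulates the preflooding height by raising the basin floor, and grows an initial brain mask from a single white-matter seed. scikit-image and SciPy are assumed, and the demo volume at the end is synthetic.

    import numpy as np
    from scipy import ndimage
    from skimage.segmentation import watershed

    def initial_brain_mask(t1, wm_seed, preflood=0.15):
        # Invert intensities so bright white matter forms a deep basin.
        inv = t1.max() - t1
        # Emulate the preflooding height: raise the floor so shallow basins
        # merge into the seed's basin instead of forming separate regions.
        inv = np.maximum(inv, inv.min() + preflood * (inv.max() - inv.min()))
        # Marker 1: the white-matter seed voxel; marker 2: a background corner.
        markers = np.zeros(t1.shape, dtype=np.int32)
        markers[wm_seed] = 1
        markers[0, 0, 0] = 2
        labels = watershed(inv, markers)
        # Keep the 3-D connected basin grown from the white-matter seed.
        return ndimage.binary_fill_holes(labels == 1)

    # Toy demo: a bright "white matter" core inside a dimmer "head".
    t1 = np.zeros((32, 32, 32))
    t1[8:24, 8:24, 8:24] = 0.5
    t1[12:20, 12:20, 12:20] = 1.0
    mask = initial_brain_mask(t1, wm_seed=(16, 16, 16))
    print("mask voxels:", int(mask.sum()))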

            A cortical representation of the local visual environment.

            Medial temporal brain regions such as the hippocampal formation and parahippocampal cortex have been generally implicated in navigation and visual memory. However, the specific function of each of these regions is not yet clear. Here we present evidence that a particular area within human parahippocampal cortex is involved in a critical component of navigation: perceiving the local visual environment. This region, which we name the 'parahippocampal place area' (PPA), responds selectively and automatically in functional magnetic resonance imaging (fMRI) to passively viewed scenes, but only weakly to single objects and not at all to faces. The critical factor for this activation appears to be the presence in the stimulus of information about the layout of local space. The response in the PPA to scenes with spatial layout but no discrete objects (empty rooms) is as strong as the response to complex meaningful scenes containing multiple objects (the same rooms furnished) and over twice as strong as the response to arrays of multiple objects without three-dimensional spatial context (the furniture from these rooms on a blank background). This response is reduced if the surfaces in the scene are rearranged so that they no longer define a coherent space. We propose that the PPA represents places by encoding the geometry of the local environment.

              The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception

              Using functional magnetic resonance imaging (fMRI), we found an area in the fusiform gyrus in 12 of the 15 subjects tested that was significantly more active when the subjects viewed faces than when they viewed assorted common objects. This face activation was used to define a specific region of interest individually for each subject, within which several new tests of face specificity were run. In each of five subjects tested, the predefined candidate “face area” also responded significantly more strongly to passive viewing of (1) intact than scrambled two-tone faces, (2) full front-view face photos than front-view photos of houses, and (in a different set of five subjects) (3) three-quarter-view face photos (with hair concealed) than photos of human hands; it also responded more strongly during (4) a consecutive matching task performed on three-quarter-view faces versus hands. Our technique of running multiple tests applied to the same region defined functionally within individual subjects provides a solution to two common problems in functional imaging: (1) the requirement to correct for multiple statistical comparisons and (2) the inevitable ambiguity in the interpretation of any study in which only two or three conditions are compared. Our data allow us to reject alternative accounts of the function of the fusiform face area (area “FF”) that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.
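The analysis strategy in this abstract, defining a functional ROI from one localizer contrast and then running new, statistically independent tests only within it, is a reusable pattern. Below is a minimal sketch with synthetic beta maps; the array names, the threshold, and the effect sizes are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    shape = (40, 40, 30)                   # toy brain volume

    # Localizer betas (faces vs. objects) define the ROI per subject...
    faces_loc = rng.normal(size=shape)
    objects_loc = rng.normal(size=shape)
    roi = (faces_loc - objects_loc) > 2.5  # threshold stands in for a t-test

    # ...and independent test runs are evaluated only inside that ROI,
    # sidestepping whole-volume multiple comparisons.
    faces_test = rng.normal(loc=0.2, size=shape)
    hands_test = rng.normal(size=shape)
    diff = faces_test[roi] - hands_test[roi]
    print(f"{roi.sum()} ROI voxels, mean faces-hands difference {diff.mean():.3f}")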

                Author and article information

                Contributors
                thomas.oconnell@yale.edu
Journal
Nat Commun (Nature Communications)
Nature Publishing Group UK (London)
ISSN (electronic): 2041-1723
Published: 4 December 2018
Volume: 9
Article number: 5159
Affiliations
[1] Department of Psychology, Yale University, New Haven, CT 06520, USA
[2] Department of Neuroscience, Yale School of Medicine, New Haven, CT 06520, USA
Author information
ORCID: http://orcid.org/0000-0001-9895-8943
Article
Publisher ID: 7471
DOI: 10.1038/s41467-018-07471-9
PMCID: PMC6279768
PMID: 30514836
                © The Author(s) 2018

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

History
Received: 21 December 2017
Accepted: 2 November 2018
Categories
Article

