      • Record: found
      • Abstract: found
      • Article: found
      Is Open Access

      Toward a universal decoder of linguistic meaning from brain activation

      research-article


          Abstract

          Prior work decoding linguistic meaning from imaging data has been largely limited to concrete nouns, using similar stimuli for training and testing, from a relatively small number of semantic categories. Here we present a new approach for building a brain decoding system in which words and sentences are represented as vectors in a semantic space constructed from massive text corpora. By efficiently sampling this space to select training stimuli shown to subjects, we maximize the ability to generalize to new meanings from limited imaging data. To validate this approach, we train the system on imaging data of individual concepts, and show it can decode semantic vector representations from imaging data of sentences about a wide variety of both concrete and abstract topics from two separate datasets. These decoded representations are sufficiently detailed to distinguish even semantically similar sentences, and to capture the similarity structure of meaning relationships between sentences.
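The pipeline the abstract outlines — represent stimuli as corpus-derived semantic vectors, learn a map from imaging data into that vector space, and compare decoded vectors against candidate meanings — can be sketched with synthetic data. This is a toy illustration, not the authors' actual pipeline: the dimensions, the noise model, the ridge regularizer, and the use of plain cosine similarity are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed): 40 training concepts, 50 voxels, 10-d semantic space.
n_train, n_voxels, n_dim = 40, 50, 10

# Synthetic data: semantic vectors for training concepts, with imaging patterns
# generated as a noisy linear image of those vectors.
Z_train = rng.standard_normal((n_train, n_dim))          # semantic vectors
W_true = rng.standard_normal((n_dim, n_voxels))
X_train = Z_train @ W_true + 0.1 * rng.standard_normal((n_train, n_voxels))

# Ridge regression from voxel patterns to semantic dimensions (closed form).
lam = 1.0
A = X_train.T @ X_train + lam * np.eye(n_voxels)
B = np.linalg.solve(A, X_train.T @ Z_train)              # (n_voxels, n_dim)

# Decode held-out items: predict each semantic vector from its imaging data,
# then rank candidate meanings by cosine similarity to the decoded vector.
Z_test = rng.standard_normal((5, n_dim))
X_test = Z_test @ W_true + 0.1 * rng.standard_normal((5, n_voxels))
Z_hat = X_test @ B

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sims = [cosine(Z_hat[0], z) for z in Z_test]
print(int(np.argmax(sims)))  # index of the candidate ranked closest to item 0
```

Training on concepts that efficiently sample the semantic space is what lets such a linear decoder generalize to sentences about unseen topics: any new meaning is a point in the same space.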

          Summary

          Previous work decoding linguistic meaning from imaging data has generally been limited to a small number of semantic categories. Here, the authors show that a decoder trained on neuroimaging data of single concepts sampling the semantic space can robustly decode the meanings of semantically diverse new sentences whose topics were not encountered during training.

          Related collections

          Most cited references (26)


          A Tutorial on Spectral Clustering

          In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition about those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
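The simplest algorithm the tutorial covers (unnormalized spectral clustering) fits in a few lines: build a similarity graph W, form the Laplacian L = D − W, and embed points using the eigenvectors of the smallest eigenvalues. For two clusters, the sign of the second eigenvector (the Fiedler vector) can stand in for the final k-means step. The toy 1-D data and the Gaussian kernel width below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 1-D clusters (toy data).
x = np.concatenate([rng.normal(0.0, 0.1, 10), rng.normal(2.0, 0.1, 10)])

# Fully connected similarity graph with a Gaussian kernel (width assumed).
sigma = 0.5
W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
D = np.diag(W.sum(axis=1))

# Unnormalized graph Laplacian, as in the tutorial's first algorithm.
L = D - W

# np.linalg.eigh returns eigenvalues in ascending order; the eigenvector of
# the second-smallest eigenvalue (Fiedler vector) separates the clusters by sign.
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
print(labels)
```

With more than two clusters one would instead run k-means on the rows of the first k eigenvectors, which is where the "solved by standard linear algebra software" appeal comes from.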

            Identifying natural images from human brain activity.

            A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person's visual experience from measurements of brain activity alone.
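The identification scheme this abstract describes — use a receptive-field model to predict the voxel response pattern each candidate image would evoke, then pick the candidate whose prediction best matches the measured activity — can be sketched as follows. The random linear "model" and all sizes here are placeholders standing in for the study's fitted receptive-field models, not its actual estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup (assumed): 20 candidate novel images, 30 image features, 80 voxels.
n_images, n_feat, n_voxels = 20, 30, 80
features = rng.standard_normal((n_images, n_feat))
model = rng.standard_normal((n_feat, n_voxels))   # stands in for fitted RF models

# Predicted response pattern for every candidate image.
predicted = features @ model

# Simulate the measurement: the subject actually saw image 7.
seen = 7
observed = predicted[seen] + 0.3 * rng.standard_normal(n_voxels)

# Identify: choose the candidate whose predicted pattern correlates best
# with the observed pattern.
corrs = [np.corrcoef(observed, p)[0, 1] for p in predicted]
print(int(np.argmax(corrs)))
```

The paper's point about retinotopy maps onto the choice of `model`: richer feature spaces (space, orientation, spatial frequency) give far better identification than spatial tuning alone.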

              Predicting human brain activity associated with the meanings of nouns.

              The question of how the human brain represents conceptual knowledge has been debated in many scientific fields. Brain imaging studies have shown that different spatial patterns of neural activation are associated with thinking about different semantic categories of pictures and words (for example, tools, buildings, and animals). We present a computational model that predicts the functional magnetic resonance imaging (fMRI) neural activation associated with words for which fMRI data are not yet available. This model is trained with a combination of data from a trillion-word text corpus and observed fMRI data associated with viewing several dozen concrete nouns. Once trained, the model predicts fMRI activation for thousands of other concrete nouns in the text corpus, with highly significant accuracies over the 60 nouns for which we currently have fMRI data.
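The model this abstract describes is linear: a noun's predicted activation at each voxel is a weighted sum of corpus-derived semantic features, with weights learned by regression. A toy sketch under assumed dimensions (the actual paper uses co-occurrence counts with 25 sensory-motor verbs as features and a leave-two-out evaluation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (assumed): 58 nouns, 25 corpus features, 100 voxels.
n_nouns, n_feat, n_voxels = 58, 25, 100
F = rng.standard_normal((n_nouns, n_feat))               # noun feature vectors
W_true = rng.standard_normal((n_feat, n_voxels))
Y = F @ W_true + 0.1 * rng.standard_normal((n_nouns, n_voxels))

# Learn per-voxel weights on all but two nouns (leave-two-out style).
W_hat, *_ = np.linalg.lstsq(F[:-2], Y[:-2], rcond=None)

# Predict images for the two held-out nouns and check that the first predicted
# image matches its own observed image better than the other held-out one.
pred = F[-2:] @ W_hat

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

correct = corr(pred[0], Y[-2]) > corr(pred[0], Y[-1])
print(correct)
```

Because the features come from a text corpus, the trained model can predict activation for thousands of nouns it never saw in the scanner, which is the paper's central claim.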

                Author and article information

                Contributors
                francisco.pereira@gmail.com
                evelina9@mit.edu
                Journal
                Nat Commun
                Nat Commun
                Nature Communications
                Nature Publishing Group UK (London )
                2041-1723
                Published: 6 March 2018
                Volume: 9
                Article number: 963
                Affiliations
                [1] Medical Imaging Technologies, Siemens Healthineers, Princeton, NJ 08540, USA (ISNI 0000 0004 0546 1113; GRID grid.415886.6)
                [2] Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA (ISNI 0000 0001 2341 2786; GRID grid.116068.8)
                [3] DeepMind, London, N1C 4AG, UK
                [4] Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA (ISNI 000000041936754X; GRID grid.38142.3c)
                [5] McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA (ISNI 0000 0001 2341 2786; GRID grid.116068.8)
                [6] Gatsby Computational Neuroscience Unit, University College London, London, WC1E 6BT, UK (ISNI 0000000121901201; GRID grid.83440.3b)
                [7] Department of Psychiatry, Harvard Medical School, Boston, MA 02115, USA (ISNI 000000041936754X; GRID grid.38142.3c)
                [8] Department of Psychiatry, Massachusetts General Hospital, Boston, MA 02114, USA (ISNI 0000 0004 0386 9924; GRID grid.32224.35)
                Author information
                http://orcid.org/0000-0002-6546-3298
                http://orcid.org/0000-0001-7758-6896
                http://orcid.org/0000-0003-3823-514X
                Article
                3068
                DOI: 10.1038/s41467-018-03068-4
                PMCID: PMC5840373
                PMID: 29511192
                a2562fdb-c453-4b10-bae2-eb959e8fbb3c
                © The Author(s) 2018

                Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

                History
                Received: 10 September 2017
                Accepted: 13 January 2018
                Categories
                Article

