
      A Neurocomputational Model of the N400 and the P600 in Language Processing

      research-article


          Abstract

          Ten years ago, researchers using event‐related brain potentials (ERPs) to study language comprehension were puzzled by what looked like a Semantic Illusion: Semantically anomalous, but structurally well‐formed sentences did not affect the N400 component—traditionally taken to reflect semantic integration—but instead produced a P600 effect, which is generally linked to syntactic processing. This finding led to a considerable amount of debate, and a number of complex processing models have been proposed as an explanation. What these models have in common is that they postulate two or more separate processing streams, in order to reconcile the Semantic Illusion and other semantically induced P600 effects with the traditional interpretations of the N400 and the P600. Recently, however, these multi‐stream models have been called into question, and a simpler single‐stream model has been proposed. According to this alternative model, the N400 component reflects the retrieval of word meaning from semantic memory, and the P600 component indexes the integration of this meaning into the unfolding utterance interpretation. In the present paper, we provide support for this “Retrieval–Integration (RI)” account by instantiating it as a neurocomputational model. This neurocomputational model is the first to successfully simulate the N400 and P600 amplitude in language comprehension, and simulations with this model provide a proof of concept of the single‐stream RI account of semantically induced patterns of N400 and P600 modulations.
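The Retrieval–Integration account described in the abstract can be illustrated with a toy sketch. The following is not the authors' implementation: layer sizes, the random weights, and the use of Euclidean distance are illustrative assumptions. The idea it demonstrates is the account's core claim: a per-word N400 estimate is read off as the change in a retrieval layer's activation (word meaning retrieved from semantic memory), and a per-word P600 estimate as the change in an integration layer's activation (that meaning incorporated into the unfolding utterance interpretation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; in a real model these would come from a trained network.
N_IN, N_RETR, N_INT = 10, 8, 8

# Fixed random weights stand in for learned connection weights.
W_retr = rng.normal(scale=0.5, size=(N_RETR, N_IN))      # input -> retrieval
W_int_in = rng.normal(scale=0.5, size=(N_INT, N_RETR))   # retrieval -> integration
W_int_rec = rng.normal(scale=0.5, size=(N_INT, N_INT))   # integration recurrence

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def process_sentence(words):
    """Return per-word (n400, p600) estimates for a sequence of input vectors."""
    retr_prev = np.zeros(N_RETR)
    integ = np.zeros(N_INT)
    amplitudes = []
    for w in words:
        # Retrieval layer: lexical-semantic representation of the current word.
        retr = sigmoid(W_retr @ w)
        # Integration layer: combine retrieved meaning with the prior context.
        integ_new = sigmoid(W_int_in @ retr + W_int_rec @ integ)
        # ERP estimates as activation change between consecutive words.
        n400 = np.linalg.norm(retr - retr_prev)
        p600 = np.linalg.norm(integ_new - integ)
        amplitudes.append((n400, p600))
        retr_prev, integ = retr, integ_new
    return amplitudes

sentence = [rng.normal(size=N_IN) for _ in range(4)]
for i, (n400, p600) in enumerate(process_sentence(sentence)):
    print(f"word {i}: N400~{n400:.2f}  P600~{p600:.2f}")
```

Under this single-stream reading, a semantically anomalous but easily retrievable word perturbs the integration layer (larger P600) without necessarily perturbing retrieval (unchanged N400), which is the pattern the "Semantic Illusion" findings exhibit.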


          Most cited references (55)


          From sensation to cognition.

          M. Mesulam (1998)
          Sensory information undergoes extensive associative elaboration and attentional modulation as it becomes incorporated into the texture of cognition. This process occurs along a core synaptic hierarchy which includes the primary sensory, upstream unimodal, downstream unimodal, heteromodal, paralimbic and limbic zones of the cerebral cortex. Connections from one zone to another are reciprocal and allow higher synaptic levels to exert a feedback (top-down) influence upon earlier levels of processing. Each cortical area provides a nexus for the convergence of afferents and divergence of efferents. The resultant synaptic organization supports parallel as well as serial processing, and allows each sensory event to initiate multiple cognitive and behavioural outcomes. Upstream sectors of unimodal association areas encode basic features of sensation such as colour, motion, form and pitch. More complex contents of sensory experience such as objects, faces, word-forms, spatial locations and sound sequences become encoded within downstream sectors of unimodal areas by groups of coarsely tuned neurons. The highest synaptic levels of sensory-fugal processing are occupied by heteromodal, paralimbic and limbic cortices, collectively known as transmodal areas. The unique role of these areas is to bind multiple unimodal and other transmodal areas into distributed but integrated multimodal representations. Transmodal areas in the midtemporal cortex, Wernicke's area, the hippocampal-entorhinal complex and the posterior parietal cortex provide critical gateways for transforming perception into recognition, word-forms into meaning, scenes and events into experiences, and spatial locations into targets for exploration. All cognitive processes arise from analogous associative transformations of similar sets of sensory inputs. 
The differences in the resultant cognitive operation are determined by the anatomical and physiological properties of the transmodal node that acts as the critical gateway for the dominant transformation. Interconnected sets of transmodal nodes provide anatomical and computational epicentres for large-scale neurocognitive networks. In keeping with the principles of selectively distributed processing, each epicentre of a large-scale network displays a relative specialization for a specific behavioural component of its principal neuropsychological domain. The destruction of transmodal epicentres causes global impairments such as multimodal anomia, neglect and amnesia, whereas their selective disconnection from relevant unimodal areas elicits modality-specific impairments such as prosopagnosia, pure word blindness and category-specific anomias. The human brain contains at least five anatomically distinct networks. The network for spatial awareness is based on transmodal epicentres in the posterior parietal cortex and the frontal eye fields; the language network on epicentres in Wernicke's and Broca's areas; the explicit memory/emotion network on epicentres in the hippocampal-entorhinal complex and the amygdala; the face-object recognition network on epicentres in the midtemporal and temporopolar cortices; and the working memory-executive function network on epicentres in the lateral prefrontal cortex and perhaps the posterior parietal cortex. Individual sensory modalities give rise to streams of processing directed to transmodal nodes belonging to each of these networks. The fidelity of sensory channels is actively protected through approximately four synaptic levels of sensory-fugal processing. The modality-specific cortices at these four synaptic levels encode the most veridical representations of experience. 
Attentional, motivational and emotional modulations, including those related to working memory, novelty-seeking and mental imagery, become increasingly more pronounced within downstream components of unimodal areas, where they help to create a highly edited subjective version of the world. (ABSTRACT TRUNCATED)

            Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language.

            Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent developments in the cortical organization of vision, and on data from a variety of sources, we propose a new framework for understanding aspects of the functional anatomy of language which moves towards remedying this situation. The framework posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, which is involved in mapping sound onto meaning, and a dorsal stream, which is involved in mapping sound onto articulatory-based representations. The ventral stream projects ventro-laterally toward inferior posterior temporal cortex (posterior middle temporal gyrus) which serves as an interface between sound-based representations of speech in the superior temporal gyrus (again bilaterally) and widely distributed conceptual representations. The dorsal stream projects dorso-posteriorly involving a region in the posterior Sylvian fissure at the parietal-temporal boundary (area Spt), and ultimately projecting to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the proposed dorsal stream represents a very tight connection between processes involved in speech perception and speech production, it does not appear to be a critical component of the speech perception process under normal (ecologically natural) listening conditions, that is, when speech input is mapped onto a conceptual representation. 
We also propose some degree of bi-directionality in both the dorsal and ventral pathways. We discuss some recent empirical tests of this framework that utilize a range of methods. We also show how damage to different components of this framework can account for the major symptom clusters of the fluent aphasias, and discuss some recent evidence concerning how sentence-level processing might be integrated into the framework.

              Ventral and dorsal pathways for language.

              Built on an analogy between the visual and auditory systems, the following dual stream model for language processing was suggested recently: a dorsal stream is involved in mapping sound to articulation, and a ventral stream in mapping sound to meaning. The goal of the study presented here was to test the neuroanatomical basis of this model. Combining functional magnetic resonance imaging (fMRI) with a novel diffusion tensor imaging (DTI)-based tractography method we were able to identify the most probable anatomical pathways connecting brain regions activated during two prototypical language tasks. Sublexical repetition of speech is subserved by a dorsal pathway, connecting the superior temporal lobe and premotor cortices in the frontal lobe via the arcuate and superior longitudinal fascicle. In contrast, higher-level language comprehension is mediated by a ventral pathway connecting the middle temporal lobe and the ventrolateral prefrontal cortex via the extreme capsule. Thus, according to our findings, the function of the dorsal route, traditionally considered to be the major language pathway, is mainly restricted to sensory-motor mapping of sound to articulation, whereas linguistic processing of sound to meaning requires temporofrontal interaction transmitted via the ventral route.

                Author and article information

                Contributors
                brouwer@coli.uni-saarland.de
                Journal
                Cognitive Science (Cogn Sci)
                John Wiley and Sons Inc. (Hoboken)
                ISSN: 0364-0213 (print); 1551-6709 (online)
                Journal DOI: 10.1111/(ISSN)1551-6709
                Published online: 21 December 2016; issue date: May 2017
                Volume 41, Suppl 6: Reading, Language Comprehension and Language Production (doi: 10.1111/cogs.2017.41.issue-S6)
                Pages: 1318-1352
                Affiliations
                [1] Department of Language Science and Technology, Saarland University
                [2] Center for Language and Cognition Groningen, University of Groningen
                Author notes
                [*] Correspondence should be sent to Harm Brouwer, Department of Language Science and Technology, Saarland University, Building C7.1, 66123 Saarbrücken, Germany. E‐mail: brouwer@coli.uni-saarland.de
                Article
                COGS12461
                DOI: 10.1111/cogs.12461
                PMCID: PMC5484319
                PMID: 28000963
                fb6e106c-78a4-4750-8412-51970f1ee2c3
                Copyright © 2016 The Authors. Cognitive Science is published by Wiley Periodicals, Inc. on behalf of the Cognitive Science Society.

                This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

                History
                Received: 26 August 2015
                Revised: 20 September 2016
                Accepted: 29 September 2016
                Page count
                Figures: 6, Tables: 1, Pages: 35, Words: 16381
                Funding
                Funded by: Netherlands Organization for Scientific Research (NWO) PGW
                Award ID: 10‐26
                Funded by: EU 7th Framework Programme Marie Curie Initial Training Network “Language and Perception”
                Award ID: 316748
                Categories
                Regular Article
                Regular Articles

                Keywords: language comprehension, event‐related potentials, N400, P600, retrieval–integration account, computational modeling, neural networks
