
      Ideophones in Japanese modulate the P2 and late positive complex responses

      research-article


          Abstract

          Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicited a larger visual P2 response than arbitrary adverbs, as well as a sustained late positive complex. Our results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of arbitrary words in comparison to ideophones. This account provides new evidence that ideophones exhibit cross-modal correspondences similar to those which have been proposed for non-words and individual sounds.

          Related collections

          Most cited references (40)


          Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study.

          The aim of this study was (1) to provide behavioral evidence for multimodal feature integration in an object recognition task in humans and (2) to characterize the processing stages and the neural structures where multisensory interactions take place. Event-related potentials (ERPs) were recorded from 30 scalp electrodes while subjects performed a forced-choice reaction-time categorization task: At each trial, the subjects had to indicate which of two objects was presented by pressing one of two keys. The two objects were defined by auditory features alone, visual features alone, or the combination of auditory and visual features. Subjects were more accurate and rapid at identifying multimodal than unimodal objects. Spatiotemporal analysis of ERPs and scalp current densities revealed several auditory-visual interaction components temporally, spatially, and functionally distinct before 200 msec poststimulus. The effects observed were (1) in visual areas, new neural activities (as early as 40 msec poststimulus) and modulation (amplitude decrease) of the N185 wave to unimodal visual stimulus, (2) in the auditory cortex, modulation (amplitude increase) of subcomponents of the unimodal auditory N1 wave around 90 to 110 msec, and (3) new neural activity over the right fronto-temporal area (140 to 165 msec). Furthermore, when the subjects were separated into two groups according to their dominant modality to perform the task in unimodal conditions (shortest reaction time criteria), the integration effects were found to be similar for the two groups over the nonspecific fronto-temporal areas, but they clearly differed in the sensory-specific cortices, affecting predominantly the sensory areas of the nondominant modality. 
Taken together, the results indicate that multisensory integration is mediated by flexible, highly adaptive physiological processes that can take place very early in the sensory processing chain and operate in both sensory-specific and nonspecific cortical structures in different ways.

            Visual speech speeds up the neural processing of auditory speech.

            Synchronous presentation of stimuli to the auditory and visual systems can modify the formation of a percept in either modality. For example, perception of auditory speech is improved when the speaker's facial articulatory movements are visible. Neural convergence onto multisensory sites exhibiting supra-additivity has been proposed as the principal mechanism for integration. Recent findings, however, have suggested that putative sensory-specific cortices are responsive to inputs presented through a different modality. Consequently, when and where audiovisual representations emerge remain unsettled. In combined psychophysical and electroencephalography experiments we show that visual speech speeds up the cortical processing of auditory signals early (within 100 ms of signal onset). The auditory-visual interaction is reflected as an articulator-specific temporal facilitation (as well as a nonspecific amplitude reduction). The latency facilitation systematically depends on the degree to which the visual signal predicts possible auditory targets. The observed auditory-visual data support the view that there exist abstract internal representations that constrain the analysis of subsequent speech inputs. This is evidence for the existence of an "analysis-by-synthesis" mechanism in auditory-visual speech perception.

              A review of the evidence for P2 being an independent component process: age, sleep and modality.

              This article reviews the event-related potential (ERP) literature in relation to the P2 waveform of the human auditory evoked potential. Within the auditory evoked potential, a positive deflection at approximately 150-250 ms is a ubiquitous feature. Unlike other cognitive components such as N1 or the P300, remarkably little has been done to investigate the underlying neurological correlates or significance of this waveform. Indeed until recently, many researchers considered it to be an intrinsic part of the 'vertex potential' complex, involving it and the earlier N1. This review seeks to describe the evidence supportive of P2 being the result of independent processes and highlights several features, such as its persistence from wakefulness into sleep, the general consensus that unlike most other EEG phenomena it increases with age, and the fact that it can be generated using respiratory stimuli.

                Author and article information

                Journal
                Frontiers in Psychology (Front. Psychol.)
                Publisher: Frontiers Media S.A.
                ISSN: 1664-1078
                Published: 02 July 2015
                Volume: 6
                Article: 933
                Affiliations
                1. Department of Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
                2. Division of Psychology and Language Sciences, University College London, London, UK
                Author notes

                Edited by: Gabriella Vigliocco, University College London, UK

                Reviewed by: Horacio A. Barber, Universidad de La Laguna, Spain; Michiko Asano, Rikkyo University, Japan

                *Correspondence: Gwilym Lockwood, Department of Neurobiology of Language, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, Netherlands, gwilym.lockwood@mpi.nl

                This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology

                Article
                DOI: 10.3389/fpsyg.2015.00933
                PMCID: PMC4488605
                PMID: 26191031
                Copyright © 2015 Lockwood and Tuomainen.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 20 October 2014
                Accepted: 22 June 2015
                Page count
                Figures: 3, Tables: 1, Equations: 0, References: 72, Pages: 10
                Funding
                Funded by: Max Planck Institute for Psycholinguistics and University College London
                Categories
                Psychology
                Original Research

                Clinical Psychology & Psychiatry
                Keywords: sound-symbolism, Japanese, ideophone, event-related potential, P2, cross-modal correspondence, synaesthesia/synesthesia
