
      Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data

      research-article


          Abstract

          Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and 15 of the 20 participants produced left and right μ-components, respectively, localized to the precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word production yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
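          The ERSP measure central to the abstract (spectral power at each time and frequency expressed relative to a baseline period) can be illustrated on synthetic data. The following is a minimal NumPy/SciPy sketch, not the authors' pipeline: the sampling rate, window parameters, and the simulated post-stimulus amplitude drop are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic single-channel signal: a 10 Hz alpha-band oscillation whose
# amplitude drops after a simulated event at t = 2 s, mimicking
# event-related desynchronization (ERD).
fs = 250                                    # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
amp = np.where(t < 2, 1.0, 0.3)             # baseline vs. post-event amplitude
x = amp * np.sin(2 * np.pi * 10 * t)

# Time-frequency decomposition via short-time Fourier power
f, tt, P = spectrogram(x, fs=fs, nperseg=fs, noverlap=fs // 2)

# ERSP: power in dB relative to mean baseline power at each frequency
baseline = P[:, tt < 2].mean(axis=1, keepdims=True)
ersp = 10 * np.log10(P / baseline)

# Alpha-band (8-12 Hz) power after the event is strongly negative (ERD);
# baseline alpha power sits near 0 dB by construction.
alpha = (f >= 8) & (f <= 12)
erd = ersp[alpha][:, tt > 2.5].mean()
```

In a full analysis the same ratio would be computed per independent component and per trial, with significance assessed across trials (e.g., with an FDR correction as in the abstract); this sketch only shows the baseline-normalized power computation itself.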

          Related collections

          Most cited references (117)


          Alpha-band oscillations, attention, and controlled access to stored information

          Alpha-band oscillations are the dominant oscillations in the human brain and recent evidence suggests that they have an inhibitory function. Nonetheless, there is little doubt that alpha-band oscillations also play an active role in information processing. In this article, I suggest that alpha-band oscillations have two roles (inhibition and timing) that are closely linked to two fundamental functions of attention (suppression and selection), which enable controlled knowledge access and semantic orientation (the ability to be consciously oriented in time, space, and context). As such, alpha-band oscillations reflect one of the most basic cognitive processes and can also be shown to play a key role in the coalescence of brain activity in different frequencies.

            Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language.

            Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent developments in the cortical organization of vision, and on data from a variety of sources, we propose a new framework for understanding aspects of the functional anatomy of language which moves towards remedying this situation. The framework posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, which is involved in mapping sound onto meaning, and a dorsal stream, which is involved in mapping sound onto articulatory-based representations. The ventral stream projects ventro-laterally toward inferior posterior temporal cortex (posterior middle temporal gyrus) which serves as an interface between sound-based representations of speech in the superior temporal gyrus (again bilaterally) and widely distributed conceptual representations. The dorsal stream projects dorso-posteriorly involving a region in the posterior Sylvian fissure at the parietal-temporal boundary (area Spt), and ultimately projecting to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the proposed dorsal stream represents a very tight connection between processes involved in speech perception and speech production, it does not appear to be a critical component of the speech perception process under normal (ecologically natural) listening conditions, that is, when speech input is mapped onto a conceptual representation. We also propose some degree of bi-directionality in both the dorsal and ventral pathways. We discuss some recent empirical tests of this framework that utilize a range of methods. We also show how damage to different components of this framework can account for the major symptom clusters of the fluent aphasias, and discuss some recent evidence concerning how sentence-level processing might be integrated into the framework.

              Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources.

              An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
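              The switching rule described in this abstract can be sketched in NumPy. This is an illustrative reimplementation on synthetic two-source data, not the authors' code: the mixing matrix, learning rate, iteration count, and the excess-kurtosis sign estimate used for the sub/supergaussian switch are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# One supergaussian (Laplace) and one subgaussian (uniform) source,
# linearly mixed -- the setting the extended infomax rule targets.
s = np.vstack([rng.laplace(size=n), rng.uniform(-1.7, 1.7, size=n)])
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s

# Pre-whiten the mixtures (zero mean, identity covariance)
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
xw = (E * (1.0 / np.sqrt(d))) @ E.T @ x

# Extended infomax: natural-gradient updates with a per-component sign
# switch k based on the estimated excess kurtosis of each output, so the
# same rule handles sub- and supergaussian components.
W = np.eye(2)
lr = 0.05
for _ in range(1000):
    u = W @ xw
    k = np.sign(np.mean(u**4, axis=1) - 3.0 * np.mean(u**2, axis=1) ** 2)
    g = np.eye(2) - (k[:, None] * np.tanh(u)) @ u.T / n - (u @ u.T) / n
    W = W + lr * g @ W
y = W @ xw  # recovered sources, up to permutation and scale
```

After convergence each row of `y` should correlate strongly with a distinct true source; in EEG applications the same rule is run on many channels so that artifacts (eye blinks, line noise) land in their own components.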

                Author and article information

                Contributors
                Jenson, Bowers, Harkrider, Thornton, Cuellar, and Saltuklaroglu
                Journal
                Frontiers in Psychology (Front. Psychol.), Frontiers Media S.A.
                ISSN: 1664-1078
                Published: 10 July 2014
                Volume 5, Article 656
                Affiliations
                [1] Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
                [2] Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
                [3] Speech-Language Pathology Program, College of Health Sciences, Midwestern University, Chicago, IL, USA
                Author notes

                Edited by: Riikka Mottonen, University of Oxford, UK

                Reviewed by: Iiro P. Jääskeläinen, Aalto University, Finland; Anna J. Simmonds, Imperial College London, UK

                *Correspondence: Tim Saltuklaroglu, Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, 553 South Stadium Hall, UT, Knoxville, TN 37996, USA e-mail: tsaltukl@uthsc.edu

                This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology.

                Article
                DOI: 10.3389/fpsyg.2014.00656
                PMC: 4091311
                PMID: 25071633
                Copyright © 2014 Jenson, Bowers, Harkrider, Thornton, Cuellar and Saltuklaroglu.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 08 March 2014
                Accepted: 08 June 2014
                Page count
                Figures: 6, Tables: 0, Equations: 0, References: 140, Pages: 17, Words: 14038
                Categories
                Psychology
                Original Research Article

                Clinical Psychology & Psychiatry
                speech perception, speech production, EEG, mu rhythm, independent component analysis
