      Auditory evoked potentials to speech and nonspeech stimuli are associated with verbal skills in preschoolers

      Research article


          Highlights

          • ERPs to speech and matched nonspeech sounds were recorded in 63 preschoolers.

          • P1 and N2 were larger for nonspeech than speech sounds, the opposite being true for N4.

          • Differences between speech and nonspeech ERPs were associated with verbal skills.

          • ERP lateralization was associated with phonological and naming abilities.

          • The results suggest that ERPs are useful measures of children’s cortical functioning.

          Abstract

          Children’s obligatory auditory event-related potentials (ERPs) to speech and nonspeech sounds have been shown to be associated with reading performance in children at risk for or with dyslexia and in their controls. However, very little is known about the cognitive processes these responses reflect. To investigate this question, we recorded ERPs to semisynthetic syllables and their acoustically matched nonspeech counterparts in 63 typically developing preschoolers, and assessed their verbal skills with an extensive set of neurocognitive tests. P1 and N2 amplitudes were larger for nonspeech than speech stimuli, whereas the opposite was true for N4. Furthermore, left-lateralized P1s were associated with better phonological and prereading skills, and larger P1s to nonspeech than speech stimuli with poorer verbal reasoning performance. Moreover, left-lateralized N2s, and equal-sized N4s to both speech and nonspeech stimuli, were associated with slower naming. In contrast, children with equal-sized N2 amplitudes at left and right scalp locations, and larger N4s for speech than nonspeech stimuli, performed fastest. We discuss the possibility that children’s ERPs reflect not only neural encoding of sounds, but also sound quality processing, memory-trace construction, and lexical access. The results also corroborate previous findings that speech and nonspeech sounds are processed by at least partially distinct neural substrates.

          Related collections

          Most cited references (28)


          Lateralization of auditory-cortex functions.

          In the present review, we summarize the most recent findings and current views on the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is on hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. In addition, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to the rapid temporal information that is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of this functional specialization of sound processing: altered forms of lateralization may be caused by top-down and bottom-up effects, both inter- and intra-individually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding.

            Listening to language at birth: evidence for a bias for speech in neonates.

            The nature and origin of the human capacity for acquiring language is not yet fully understood. Here we uncover early roots of this capacity by demonstrating that humans are born with a preference for listening to speech. Human neonates adjusted their high-amplitude sucking to preferentially listen to speech, compared with complex nonspeech analogues that controlled for critical spectral and temporal parameters of speech. These results support the hypothesis that human infants begin language acquisition with a bias for listening to speech. The implications of these results for language and communication development are discussed. For a commentary on this article, see Rosen and Iverson (2007).

              Central auditory plasticity: changes in the N1-P2 complex after speech-sound training.

              To determine whether the N1-P2 complex reflects training-induced changes in neural activity associated with improved voice-onset-time (VOT) perception. Auditory cortical evoked potentials N1 and P2 were obtained from 10 normal-hearing young adults in response to two synthetic speech variants of the syllable /ba/. Using a repeated measures design, subjects were tested before and after training, both behaviorally and neurophysiologically, to determine whether there were training-related changes. Between the pre- and post-testing sessions, subjects were trained to distinguish the -20 and -10 msec VOT /ba/ syllables as being different from each other. Two stimulus presentation rates were used during electrophysiologic testing (390 msec and 910 msec interstimulus intervals). Before training, subjects perceived both the -20 msec and -10 msec VOT stimuli as /ba/. Through training, subjects learned to identify the -20 msec VOT stimulus as "mba" and the -10 msec VOT stimulus as "ba." As subjects learned to correctly identify the difference between the -20 msec and -10 msec VOT syllables, an increase in N1-P2 peak-to-peak amplitude was observed. The effects of training were most obvious at the slower stimulus presentation rate. As perception improved, N1-P2 amplitude increased. These changes in waveform morphology are thought to reflect increases in neural synchrony as well as strengthened neural connections associated with improved speech perception. These findings suggest that the N1-P2 complex may have clinical applications as an objective physiologic correlate of speech-sound representation associated with speech-sound training.

                Author and article information

                Journal
                Dev Cogn Neurosci (Developmental Cognitive Neuroscience)
                Publisher: Elsevier
                ISSN: 1878-9293; 1878-9307
                Published online: 14 April 2016
                Issue: June 2016
                Volume 19, pages 223–232
                Affiliations
                [a ]Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, Finland
                [b ]Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Denmark
                Author notes
                [* ]Corresponding author: soila.kuuluvainen@helsinki.fi
                Article
                PII: S1878-9293(15)30056-6
                DOI: 10.1016/j.dcn.2016.04.001
                PMC: PMC6988591
                PMID: 27131343
                © 2016 The Authors

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

                History
                Received: 14 July 2015
                Revised: 23 December 2015
                Accepted: 1 April 2016
                Categories
                Original Research

                Neurosciences
                Keywords: auditory, event-related potential, speech, nonspeech, verbal skills, children
