      Phoneme-free prosodic representations are involved in pre-lexical and lexical neurobiological mechanisms underlying spoken word processing

      Research article


          Highlights

          • We hypothesize that prosodic and phonemic information are processed independently.

          • We tested this by applying word onset priming and analyzing event-related potentials.

          • ERP effects of prosodic and phonemic overlap differed in timing and scalp distribution.

          • The two effects did not interact, suggesting that the two cues are used independently.

          • Our results are evidence for phoneme-free prosodic processing in speech recognition.

          Abstract

          We recently reported that spoken stressed and unstressed primes differently modulate event-related potentials (ERPs) elicited by spoken, initially stressed targets. This ERP stress priming was independent of prime–target phoneme overlap. Here we test whether phoneme-free ERP stress priming involves the lexicon. We used German target words with the same onset phonemes but different onset stress, such as MANdel (“almond”) and manDAT (“mandate”; capital letters indicate stress). The first syllables of those words served as primes. We orthogonally varied prime–target overlap in stress and in phonemes. ERP stress priming interacted neither with phoneme priming nor with the stress pattern of the targets. However, the polarity of ERP stress priming was reversed relative to that obtained previously. The present results are evidence for phoneme-free prosodic processing at the lexical level. Together with the previous results, they reveal that phoneme-free prosodic representations at both the pre-lexical and the lexical level are recruited by neurobiological spoken word recognition.
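
          To make the factorial design concrete, the minimal Python sketch below spells out how the four priming conditions follow from orthogonally crossing prime–target overlap in onset phonemes with overlap in onset stress. It is an illustration only, not the authors' stimulus set or analysis code; the unrelated prime syllables (KUR/kur) are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' materials or analysis code):
# the four priming conditions arise from orthogonally crossing
# prime-target overlap in onset phonemes with overlap in onset stress.
# The unrelated prime syllables below are hypothetical placeholders.

# Example target with stressed first syllable (capital letters mark stress).
target = {"word": "MANdel", "onset_phonemes": "man", "onset_stress": "stressed"}

# (phoneme overlap with the target, stress of the prime syllable) -> prime
primes = {
    ("match", "stressed"):      "MAN",  # first syllable of MANdel ("almond")
    ("match", "unstressed"):    "man",  # first syllable of manDAT ("mandate")
    ("mismatch", "stressed"):   "KUR",  # hypothetical unrelated stressed syllable
    ("mismatch", "unstressed"): "kur",  # hypothetical unrelated unstressed syllable
}

# List, for this target, which prime realizes which cell of the 2 x 2 design.
for (phoneme_overlap, prime_stress), prime in primes.items():
    stress_overlap = "match" if prime_stress == target["onset_stress"] else "mismatch"
    print(f"prime {prime!r:>6}: phoneme overlap = {phoneme_overlap:8s} "
          f"stress overlap = {stress_overlap}")
```

          For initially unstressed targets such as manDAT, the same crossing applies with the stress-overlap assignments flipped, which is what allows stress priming and phoneme priming to be assessed independently of one another.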


                Author and article information

                Journal
                Brain and Language (Brain Lang)
                Academic Press
                ISSN: 0093-934X (print); 1090-2155 (electronic)
                1 September 2014
                136(100): 31-43

                Affiliations
                [a] University of Tübingen, Developmental Psychology, Germany
                [b] University of Hamburg, Biological Psychology and Neuropsychology, Germany

                Author notes
                [*] Corresponding author. Address: University of Tübingen, Developmental Psychology, Schleichstraße 4, D-72076 Tübingen, Germany. Fax: +49 (0)7071 29 5219. ulrike.schild@uni-tuebingen.de

                Article
                PII: S0093-934X(14)00103-5
                DOI: 10.1016/j.bandl.2014.07.006
                PMC: 4159568
                PMID: 25128904
                Record ID: 268b4c82-8eff-48ce-b810-26b744722f73
                © 2014 The Authors

                This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

                History: 21 July 2014

                Categories
                Article
                Neurosciences

                Keywords: spoken word recognition, lexical access, lexical stress, ERPs, word fragment priming, lexical decision
