
      The role of segmental and durational cues in the processing of reduced words

      Research article
      Language and Speech, SAGE Publications
      Keywords: acoustic reduction, word recognition, speech perception, gating, phonetic detail


          Abstract

          In natural conversations, words are generally shorter and often lack segments. It is unclear to what extent such durational and segmental reductions affect word recognition. The present study investigates to what extent reduction in the initial syllable hinders word comprehension, which types of segments listeners rely on most, and whether listeners use word duration as a cue in word recognition. We conducted three experiments in Dutch in which we adapted the gating paradigm to study the comprehension of spontaneously uttered conversational speech, aligning the gates with the edges of consonant clusters or vowels. Participants heard the context and some segmental and/or durational information from reduced target words with unstressed initial syllables. The initial syllable varied in its degree of reduction, and in half of the stimuli the vowel was not clearly present. Participants gave answers that were too short when they were provided with only durational information from the target words, which shows that listeners are unaware of the reductions that can occur in spontaneous speech. More importantly, listeners required fewer segments to recognize target words if the vowel in the initial syllable was absent. This result strongly suggests that this vowel hardly plays a role in word comprehension, and that its presence may even delay this process. More important are the consonants and the stressed vowel.
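          As an illustration of the gating manipulation described above, the sketch below shows one way gated stimuli could be cut from a recording at annotated segment boundaries (for instance, boundaries taken from a phonetic annotation). The file name, boundary times, and the write_gates helper are hypothetical; this is not the authors' stimulus-preparation code, only a minimal sketch of the gating idea in Python.

          import wave

          def write_gates(wav_path, boundaries, out_prefix="gate"):
              # Write one WAV file per gate; each gate runs from the onset of the
              # recording up to a successive segment boundary (in seconds).
              with wave.open(wav_path, "rb") as src:
                  params = src.getparams()
                  rate = src.getframerate()
                  frames = src.readframes(src.getnframes())
                  bytes_per_frame = src.getsampwidth() * src.getnchannels()
              for i, t in enumerate(sorted(boundaries), start=1):
                  n_frames = int(t * rate)                    # gate ends at this boundary
                  cut = frames[: n_frames * bytes_per_frame]  # audio from onset to the gate
                  with wave.open(f"{out_prefix}_{i:02d}.wav", "wb") as dst:
                      dst.setparams(params)
                      dst.writeframes(cut)

          # Hypothetical boundary times, aligned with the edges of consonant
          # clusters or vowels in the target word.
          write_gates("target_word.wav", boundaries=[0.045, 0.110, 0.190, 0.260])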


                Author and article information

                Journal
                Language and Speech (Lang Speech), SAGE Publications (Sage UK: London, England)
                ISSN: 0023-8309; eISSN: 1756-6053
                Published online: 04 September 2017; issue: September 2018
                Volume 61, Issue 3, pp. 358-383
                Affiliations
                [1] Centre for Language Studies, Radboud University Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, the Netherlands
                [2] Centre for Language Studies, Radboud University Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, the Netherlands
                Author notes
                [*] Marco van de Ven, Centre for Language Studies, Radboud University Nijmegen, P.O. Box 9104, 6500 HD Nijmegen, The Netherlands. Email: Marco.vandeVen@pwo.ru.nl
                Article
                DOI: 10.1177/0023830917727774
                PMCID: PMC6099978
                PMID: 28870139
                © The Author(s) 2017

                This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (http://www.creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

                Funding
                Funded by: European Research Council, Starting Grant awarded to Mirjam Ernestus; Award ID: 284108
                Funded by: Netherlands Organization for Scientific Research, Vici grant awarded to Mirjam Ernestus; Award ID: 277-70-010
                Categories
                Articles

