      Perceptual Auditory Aftereffects on Voice Identity Using Brief Vowel Stimuli

Research article
Marianne Latinus 1,3,*, Pascal Belin 1,2
PLoS ONE, Public Library of Science


          Abstract

Humans can identify individuals from their voice, suggesting the existence of a perceptual representation of voice identity. We used perceptual aftereffects – shifts in perceived stimulus quality after brief exposure to a repeated adaptor stimulus – to further investigate the representation of voice identity in two experiments. Healthy adult listeners were familiarized with several voices until they reached a recognition criterion. They were then tested on identification tasks that used vowel stimuli generated by morphing between the different identities, presented either in isolation (baseline) or following short exposure to different types of voice adaptors (adaptation). Experiment 1 showed that adaptation to a given voice induced categorization shifts away from that adaptor’s identity, even when the adaptors consisted of vowels different from the probe stimuli. Moreover, original voices and caricatures resulted in comparable aftereffects, ruling out an explanation of identity aftereffects in terms of adaptation to low-level features. Experiment 2 showed that adaptors with a disrupted configuration, i.e., altered fundamental frequency or formant frequencies, failed to produce perceptual aftereffects, demonstrating the importance of a preserved configuration of these acoustical cues in the representation of voices. Together, these two experiments indicate a high-level, dynamic representation of voice identity based on the combination of several lower-level acoustical features into a specific voice configuration.
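To make the adaptation logic described in the abstract concrete, the sketch below models categorization along a voice-morph continuum with a logistic psychometric function and expresses the aftereffect as a shift of the point of subjective equality (PSE) away from the adaptor. All numbers (continuum steps, slope, shift size) are invented for illustration; this is not the authors' stimuli, data, or analysis code.

```python
import numpy as np

# Hypothetical illustration (not the authors' data or code): probe stimuli lie on
# a morph continuum between voice A and voice B (0.0 = 100% A, 1.0 = 100% B), and
# categorization is modelled with a logistic psychometric function. Adapting to
# voice A is assumed to shift the point of subjective equality (PSE) towards A,
# so ambiguous morphs are more often labelled B: the shift away from the adaptor.

def p_respond_b(morph_level, pse, slope=12.0):
    """Probability of categorizing a morph as voice B."""
    return 1.0 / (1.0 + np.exp(-slope * (morph_level - pse)))

morphs = np.linspace(0.0, 1.0, 7)   # a 7-step A-to-B continuum (illustrative)
baseline_pse = 0.50                 # no adaptation
adapted_pse = 0.38                  # hypothetical PSE after adapting to voice A

print("morph  P(B) baseline  P(B) after adapting to A")
for m in morphs:
    print(f"{m:5.2f}  {p_respond_b(m, baseline_pse):13.2f}  {p_respond_b(m, adapted_pse):24.2f}")

# A simple summary of the aftereffect is the PSE shift:
print("aftereffect (PSE shift):", round(baseline_pse - adapted_pse, 2))
```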


Most cited references (25)


          Thinking the voice: neural correlates of voice perception.

          The human voice is the carrier of speech, but also an "auditory face" that conveys important affective and identity information. Little is known about the neural bases of our abilities to perceive such paralinguistic information in voice. Results from recent neuroimaging studies suggest that the different types of vocal information could be processed in partially dissociated functional pathways, and support a neurocognitive model of voice perception largely similar to that proposed for face perception.

            Prototype-referenced shape encoding revealed by high-level aftereffects.

            We used high-level configural aftereffects induced by adaptation to realistic faces to investigate visual representations underlying complex pattern perception. We found that exposure to an individual face for a few seconds generated a significant and precise bias in the subsequent perception of face identity. In the context of a computationally derived 'face space,' adaptation specifically shifted perception along a trajectory passing through the adapting and average faces, selectively facilitating recognition of a test face lying on this trajectory and impairing recognition of other faces. The results suggest that the encoding of faces and other complex patterns draws upon contrastive neural mechanisms that reference the central tendency of the stimulus category.
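For readers unfamiliar with prototype-referenced coding, the short sketch below illustrates the geometric idea in a made-up two-dimensional "face space": adaptation biases a test stimulus away from the adaptor along the axis running through the adaptor and the average face. The coordinates and shift size are invented for illustration and do not reproduce the cited study's computational model.

```python
import numpy as np

# Toy geometric sketch of prototype-referenced coding in a "face space"
# (coordinates are made up for illustration; this is not the study's model).
# Adaptation to a face is treated as biasing a test stimulus away from the
# adaptor along the axis running through the adaptor and the average face.

prototype = np.array([0.0, 0.0])    # average face at the origin of face space
adaptor = np.array([1.0, 0.5])      # the adapting identity
test = np.array([0.2, 0.1])         # a near-average probe face

axis = adaptor - prototype
axis = axis / np.linalg.norm(axis)  # unit vector from prototype towards adaptor

shift = 0.3                         # hypothetical aftereffect magnitude
perceived = test - shift * axis     # perception biased away from the adaptor

print("physical test face: ", test)
print("perceived (adapted):", perceived)
# Here the perceived face ends up on the far side of the prototype relative to
# the adaptor (towards the "anti-face"), consistent with contrastive encoding
# referenced to the category's central tendency.
```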

              "Who" is saying "what"? Brain-based decoding of human voice and speech.

              Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.
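As a loose illustration of the decoding idea described above, the sketch below trains a nearest-centroid classifier on simulated, noisy "voxel" patterns for two speakers and tests it on held-out patterns. The data, dimensions, and classifier are invented stand-ins, not the cited study's fMRI analysis.

```python
import numpy as np

# Schematic illustration of pattern-based decoding of speaker identity using
# simulated data (no real fMRI involved; the study's analysis is more elaborate).
# Each "trial" is a noisy voxel-like activity pattern whose underlying template
# depends on which of two speakers was heard; a nearest-centroid classifier
# trained on some utterances is then tested on held-out ones.

rng = np.random.default_rng(0)
n_voxels, n_train, n_test = 50, 40, 20

templates = {"speaker_A": rng.normal(0.0, 1.0, n_voxels),
             "speaker_B": rng.normal(0.0, 1.0, n_voxels)}

def simulate(label, n):
    """Generate n noisy activity patterns for one speaker."""
    return templates[label] + rng.normal(0.0, 2.0, (n, n_voxels))

X_train = np.vstack([simulate("speaker_A", n_train), simulate("speaker_B", n_train)])
y_train = np.array(["speaker_A"] * n_train + ["speaker_B"] * n_train)
X_test = np.vstack([simulate("speaker_A", n_test), simulate("speaker_B", n_test)])
y_test = np.array(["speaker_A"] * n_test + ["speaker_B"] * n_test)

# "Training": one mean pattern (a crude neural fingerprint) per speaker.
centroids = {lab: X_train[y_train == lab].mean(axis=0) for lab in templates}

def classify(pattern):
    """Assign a pattern to the speaker with the closest fingerprint."""
    return min(centroids, key=lambda lab: np.linalg.norm(pattern - centroids[lab]))

predictions = np.array([classify(x) for x in X_test])
print("decoding accuracy on held-out utterances:", (predictions == y_test).mean())
```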

                Author and article information

Contributors
Role: Editor
Journal
PLoS ONE (Public Library of Science, San Francisco, USA)
ISSN: 1932-6203
Published: 23 July 2012
Volume 7, Issue 7, e41384
Affiliations
[1] Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
[2] International Laboratories for Brain, Music and Sound (BRAMS), Université de Montréal and McGill University, Québec, Canada
[3] Department of Psychological and Brain Sciences, Indiana University-Bloomington, Bloomington, Indiana, United States of America
Northwestern University, United States of America (Editor's affiliation)
                Author notes

                Conceived and designed the experiments: ML PB. Performed the experiments: ML. Analyzed the data: ML. Wrote the paper: ML PB.

Article
Manuscript number: PONE-D-12-03379
DOI: 10.1371/journal.pone.0041384
PMCID: PMC3402520
PMID: 22844469
Record ID: f0aa5c1f-4aeb-4170-836f-f6a0853792b6
Copyright © 2012 Latinus, Belin. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
History
Received: 1 February 2012
Accepted: 25 June 2012
                Page count
                Pages: 7
                Categories
                Research Article
                Biology
                Anatomy and Physiology
                Physiological Processes
                Computational Biology
                Natural Language Processing
                Neuroscience
                Cognitive Neuroscience
                Cognition
                Sensory Perception
                Psychoacoustics
                Psychophysics
                Sensory Systems
                Auditory System
                Learning and Memory
                Social and Behavioral Sciences
                Psychology
                Sensory Perception

