
      Vocal Imitations of Non-Vocal Sounds


          Abstract

Imitative behaviors are widespread in humans, in particular whenever two people communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use many imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations.
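
The abstract refers to the "acoustic distance" between a vocal imitation and its referent sound without detailing how it is computed. As a rough, purely illustrative sketch (not the study's actual measure), one common way to quantify such a distance is the Euclidean distance between time-averaged MFCC vectors of the two recordings; the file names and parameter values below are hypothetical placeholders.

```python
# Illustrative sketch only: a simple "acoustic distance" between a referent
# sound and a vocal imitation, computed as the Euclidean distance between
# their time-averaged MFCC vectors. This is an assumed stand-in, not the
# feature set or distance measure used in the study.
import numpy as np
import librosa  # assumes librosa is installed


def mean_mfcc(path, sr=22050, n_mfcc=13):
    """Load a mono recording and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)


def acoustic_distance(path_a, path_b):
    """Euclidean distance between the two recordings' mean MFCC vectors."""
    return float(np.linalg.norm(mean_mfcc(path_a) - mean_mfcc(path_b)))


# Example with hypothetical files: smaller values = more similar spectra.
# d = acoustic_distance("referent_car_passing.wav", "vocal_imitation_car.wav")
# print(f"acoustic distance: {d:.2f}")
```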

Most cited references (26)


          Imitation: is cognitive neuroscience solving the correspondence problem?

          Imitation poses a unique problem: how does the imitator know what pattern of motor activation will make their action look like that of the model? Specialist theories suggest that this correspondence problem has a unique solution; there are functional and neurological mechanisms dedicated to controlling imitation. Generalist theories propose that the problem is solved by general mechanisms of associative learning and action control. Recent research in cognitive neuroscience, stimulated by the discovery of mirror neurons, supports generalist solutions. Imitation is based on the automatic activation of motor representations by movement observation. These externally triggered motor representations are then used to reproduce the observed behaviour. This imitative capacity depends on learned perceptual-motor links. Finally, mechanisms distinguishing self from other are implicated in the inhibition of imitative behaviour.

            Rapid formation of robust auditory memories: insights from noise.

Before a natural sound can be recognized, an auditory signature of its source must be learned through experience. Here we used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. Results showed that repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds. The learning was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance became abruptly near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learned features were local in time. We propose that rapid sensory plasticity could explain how the auditory brain creates useful memories from the ever-changing, but sometimes repeating, acoustical world.

              The sound of round: evaluating the sound-symbolic role of consonants in the classic Takete-Maluma phenomenon.

Köhler (1929) famously reported a bias in people's matching of nonsense words to novel object shapes, pointing to possible naïve expectations about language structure. The bias has been attributed to synesthesia-like coactivation of motor or somatosensory areas involved in vowel articulation and visual areas involved in perceiving object shape (Ramachandran & Hubbard, 2001). We report two experiments testing an alternative that emphasizes consonants and natural semantic distinctions flowing from the auditory perceptual quality of salient acoustic differences among them. Our experiments replicated previous studies using similar word and image materials but included additional conditions swapping the consonant and vowel contents of words; using novel, randomly generated words and images; and presenting words either visually or aurally. In both experiments, subjects' image-matching responses showed evidence of tracking the consonant content of words. We discuss the possibility that vowels and consonants both play a role and consider some methodological factors that might influence their relative effects.

                Author and article information

                Contributors
                Role: Editor
Journal
PLoS ONE
Public Library of Science (San Francisco, CA, USA)
ISSN: 1932-6203
Published: 16 December 2016
Volume 11, Issue 12, e0168167
Affiliations
[1] Equipe Perception et Design Sonores, STMS-IRCAM-CNRS-UPMC, Institut de Recherche et de Coordination Acoustique Musique, Paris, France
Max Planck Institute for Human Cognitive and Brain Sciences, Germany
                Author notes

                Competing Interests: The authors have declared that no competing interests exist.

                • Conceptualization: GL OH PS NM.

                • Data curation: GL FV.

                • Formal analysis: GL.

                • Funding acquisition: GL PS.

                • Investigation: GL FV.

                • Methodology: GL OH FV PS NM.

                • Project administration: GL PS.

                • Software: GL FV.

                • Supervision: GL.

                • Validation: GL.

                • Visualization: GL.

                • Writing – original draft: GL.

                • Writing – review & editing: GL OH FV NM PS.

Author information
ORCID: http://orcid.org/0000-0001-5442-6857
Article
Manuscript number: PONE-D-16-27820
DOI: 10.1371/journal.pone.0168167
PMC: 5161510
PMID: 27992480
                © 2016 Lemaitre et al

                This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

History
Received: 12 July 2016
Accepted: 24 November 2016
                Page count
                Figures: 6, Tables: 4, Pages: 28
                Funding
                Funded by: Seventh Framework Programme (BE)
                Award ID: 618067
                This work was financed by the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission (grant number: 618067, SkAT-VG).
                Categories
                Research Article
Physical Sciences > Physics > Acoustics
Biology and Life Sciences > Behavior > Animal Behavior > Animal Signaling and Communication > Vocalization
Biology and Life Sciences > Zoology > Animal Behavior > Animal Signaling and Communication > Vocalization
Engineering and Technology > Signal Processing > Audio Signal Processing
Engineering and Technology > Signal Processing > Signal Filtering
Physical Sciences > Physics > Acoustics > Acoustic Signals
Physical Sciences > Mathematics > Applied Mathematics > Algorithms
Research and Analysis Methods > Simulation and Modeling > Algorithms
Social Sciences > Linguistics > Speech
Biology and Life Sciences > Behavior
                Custom metadata
                Data are available from Zenodo at the following URL: https://zenodo.org/record/57468#.V4T1a6uM67A.

