
      Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates




          Vocal learners such as humans and songbirds can learn to produce elaborate patterns of structurally organized vocalizations, whereas many other vertebrates, such as non-human primates and most other bird groups, either cannot or can do so only to a very limited degree. Various theories have been proposed to explain the similarities between humans and vocal-learning birds and their differences from other species. One set consists of motor theories, which underscore the role of the motor system as an evolutionary substrate for vocal production learning. For instance, the motor theory of speech and song perception proposes enhanced auditory perceptual learning of speech in humans and song in birds, which suggests a considerable level of neurobiological specialization. Another, a motor theory of vocal learning origin, proposes that the brain pathways that control the learning and production of song and speech were derived from adjacent motor brain pathways. Another set consists of cognitive theories, which address the interface between cognition and the auditory-vocal domains to support language learning in humans. Here we critically review the behavioral and neurobiological evidence for parallels and differences between the so-called vocal learners and vocal non-learners in the context of motor and cognitive theories. In doing so, we note that, behaviorally, vocal-production learning abilities are better described as distributed than categorical, as are the auditory-learning abilities of animals. We propose testable hypotheses on the extent of the specializations and cross-species correspondences suggested by motor and cognitive theories. We believe that how spoken language evolved is likely to become clearer through concerted efforts to test comparative data from many non-human animal species.


          Most cited references: 132


          Repetition and the brain: neural models of stimulus-specific effects.

          One of the most robust experience-related cortical dynamics is reduced neural activity when stimuli are repeated. This reduction has been linked to performance improvements due to repetition and also used to probe functional characteristics of neural populations. However, the underlying neural mechanisms are as yet unknown. Here, we consider three models that have been proposed to account for repetition-related reductions in neural activity, and evaluate them in terms of their ability to account for the main properties of this phenomenon as measured with single-cell recordings and neuroimaging techniques. We also discuss future directions for distinguishing between these models, which will be important for understanding the neural consequences of repetition and for interpreting repetition-related effects in neuroimaging data.

            Statistical learning by 8-month-old infants.

            Learners rely on a combination of experience-independent and experience-dependent mechanisms to extract information from the environment. Language acquisition involves both types of mechanisms, but most theorists emphasize the relative importance of experience-independent mechanisms. The present study shows that a fundamental task of language acquisition, segmentation of words from fluent speech, can be accomplished by 8-month-old infants based solely on the statistical relationships between neighboring speech sounds. Moreover, this word segmentation was based on statistical learning from only 2 minutes of exposure, suggesting that infants have access to a powerful mechanism for the computation of statistical properties of the language input.
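The statistical cue described above, the transitional probability between adjacent syllables, can be sketched in a few lines of code. This is an illustrative toy, not an implementation from the article: the syllable inventory, the three invented "words," and the boundary threshold are all assumptions chosen so that within-word transitions are perfectly predictable while cross-word transitions are not.

```python
# Toy sketch of statistical word segmentation: estimate transitional
# probabilities P(next syllable | current syllable) from a continuous
# stream, then place word boundaries where predictability dips.
from collections import Counter

def transitional_probabilities(stream):
    """P(b | a) for every adjacent syllable pair (a, b) in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(stream, tp, threshold=0.8):
    """Insert a word boundary wherever the transitional probability
    falls below the threshold (low predictability = likely word edge)."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp[(a, b)] < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Three invented "words" (tu-pi-ro, go-la-bu, pa-do-ti) concatenated
# with no pauses; statistics are the only boundary cue, as in the study.
stream = ("tu pi ro go la bu pa do ti tu pi ro pa do ti go la bu "
          "tu pi ro go la bu pa do ti tu pi ro pa do ti go la bu").split()
tp = transitional_probabilities(stream)
print(segment(stream, tp))
```

Within a "word" each syllable always predicts the next (transitional probability 1.0), while across word boundaries the probability drops to 0.5 or below, so thresholding recovers the three words without any pause or prosodic cue.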

              Statistical learning of tone sequences by human infants and adults.

              Previous research suggests that language learners can detect and use the statistical properties of syllable sequences to discover words in continuous speech (e.g. Aslin, R.N., Saffran, J.R., Newport, E.L., 1998. Computation of conditional probability statistics by 8-month-old infants. Psychological Science 9, 321-324; Saffran, J.R., Aslin, R.N., Newport, E.L., 1996. Statistical learning by 8-month-old infants. Science 274, 1926-1928; Saffran, J.R., Newport, E.L., Aslin, R.N., 1996. Word segmentation: the role of distributional cues. Journal of Memory and Language 35, 606-621; Saffran, J.R., Newport, E.L., Aslin, R.N., Tunick, R.A., Barrueco, S., 1997. Incidental language learning: listening (and learning) out of the corner of your ear. Psychological Science 8, 101-105). In the present research, we asked whether this statistical learning ability is uniquely tied to linguistic materials. Subjects were exposed to continuous non-linguistic auditory sequences whose elements were organized into 'tone words'. As in our previous studies, statistical information was the only word boundary cue available to learners. Both adults and 8-month-old infants succeeded at segmenting the tone stream, with performance indistinguishable from that obtained with syllable streams. These results suggest that a learning mechanism previously shown to be involved in word segmentation can also be used to segment sequences of non-linguistic stimuli.

                Author and article information

                Frontiers in Evolutionary Neuroscience (Front. Evol. Neurosci.)
                Frontiers Media S.A.
                16 August 2012
                Volume 4
                [1] Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
                [2] Centre for Behavior and Evolution, Newcastle University, Newcastle upon Tyne, UK
                [3] Department of Neurobiology, Howard Hughes Medical Institute, Duke University, Durham, NC, USA
                Author notes

                Edited by: Angela D. Friederici, Max Planck Institute for Human Cognitive and Brain Sciences, Germany

                Reviewed by: Josef P. Rauschecker, Georgetown University School of Medicine, USA; Kazuo Okanoya, The University of Tokyo, Japan

                *Correspondence: Christopher I. Petkov, Institute of Neuroscience, Newcastle University Medical School, Framlington Place, Newcastle upon Tyne, NE2 4HH, UK. e-mail: chris.petkov@ncl.ac.uk
                Erich D. Jarvis, Department of Neurobiology, Howard Hughes Medical Institute, Box 3209, Duke University Medical Center, Durham, NC 27710, USA. e-mail: jarvis@neuro.duke.edu
                Copyright © 2012 Petkov and Jarvis.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution, and reproduction in other forums, provided the original authors and source are credited, subject to any copyright notices concerning third-party graphics.

                Page count
                Figures: 5, Tables: 0, Equations: 0, References: 200, Pages: 24, Words: 22115
                Review Article

                Keywords: humans, neurobiology, avian, speech, monkeys, evolution, vertebrates, communication

