
      A role for onomatopoeia in early language: evidence from phonological development

      Language and Cognition
      Cambridge University Press (CUP)


Abstract

          Onomatopoeia appear in high quantities in many infants’ earliest words, yet there is minimal research in this area. Instead, findings from the wider iconicity literature are generalised to include onomatopoeia, leading to the assumption that their iconic status makes them inherently learnable, thereby prompting their early production. In this review we bring together the literature on onomatopoeia specifically and iconicity more generally to consider infants’ acquisition from three perspectives: perception, production, and interaction. We consider these findings in relation to Imai and Kita’s (2014) ‘sound symbolism bootstrapping hypothesis’ to determine whether their framework can account for onomatopoeia alongside other iconic forms.


Most cited references (21)


          Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study.

The aim of this study was (1) to provide behavioral evidence for multimodal feature integration in an object recognition task in humans and (2) to characterize the processing stages and the neural structures where multisensory interactions take place. Event-related potentials (ERPs) were recorded from 30 scalp electrodes while subjects performed a forced-choice reaction-time categorization task: At each trial, the subjects had to indicate which of two objects was presented by pressing one of two keys. The two objects were defined by auditory features alone, visual features alone, or the combination of auditory and visual features. Subjects were more accurate and rapid at identifying multimodal than unimodal objects. Spatiotemporal analysis of ERPs and scalp current densities revealed several auditory-visual interaction components that were temporally, spatially, and functionally distinct before 200 msec poststimulus. The effects observed were (1) in visual areas, new neural activities (as early as 40 msec poststimulus) and modulation (amplitude decrease) of the N185 wave to unimodal visual stimulus, (2) in the auditory cortex, modulation (amplitude increase) of subcomponents of the unimodal auditory N1 wave around 90 to 110 msec, and (3) new neural activity over the right fronto-temporal area (140 to 165 msec). Furthermore, when the subjects were separated into two groups according to their dominant modality to perform the task in unimodal conditions (shortest reaction time criteria), the integration effects were found to be similar for the two groups over the nonspecific fronto-temporal areas, but they clearly differed in the sensory-specific cortices, affecting predominantly the sensory areas of the nondominant modality. Taken together, the results indicate that multisensory integration is mediated by flexible, highly adaptive physiological processes that can take place very early in the sensory processing chain and operate in both sensory-specific and nonspecific cortical structures in different ways.

            Multisensory visual-auditory object recognition in humans: a high-density electrical mapping study.

Multisensory object-recognition processes were investigated by examining the combined influence of visual and auditory inputs upon object identification, in this case, pictures and vocalizations of animals. Behaviorally, subjects were significantly faster and more accurate at identifying targets when the picture and vocalization were matched (i.e. from the same animal) than when the target was represented in only one sensory modality. This behavioral enhancement was accompanied by a modulation of the evoked potential in the latency range and general topographic region of the visual evoked N1 component, which is associated with early feature processing in the ventral visual stream. High-density topographic mapping and dipole modeling of this multisensory effect were consistent with generators in lateral occipito-temporal cortices, suggesting that auditory inputs were modulating processing in regions of the lateral occipital cortices. Both the timing and scalp topography of this modulation suggest that there are multisensory effects during what is considered to be a relatively early stage of visual object-recognition processes, and that this modulation occurs in regions of the visual system that have traditionally been held to be unisensory processing areas. Multisensory inputs also modulated the visual 'selection-negativity', an attention-dependent component of the evoked potential that is typically elicited when subjects selectively attend to a particular feature of a visual stimulus.

              Common themes and cultural variations in Japanese and American mothers' speech to infants.

              This study explored both universal features and cultural variation in maternal speech. Japanese and American mothers' speech to infants at 6, 12, and 19 months was compared in a cross-sectional study of 60 dyads observed playing with toys at home. Mothers' speech in both cultures shared common characteristics, such as linguistic simplification and frequent repetition, and mothers made similar adjustments in their speech to infants of different ages. American mothers labeled objects more frequently and consistently than did Japanese mothers, while Japanese mothers used objects to engage infants in social routines more often than did American mothers. American infants had larger noun vocabularies than did Japanese infants, according to maternal report. The greater emphasis on object nouns in American mothers' speech is only partially attributable to structural differences between Japanese and English. Cultural differences in interactional style and beliefs about child rearing strongly influence the structure and content of speech to infants.

                Author and article information

Journal: Language and Cognition (Lang. Cogn.)
Publisher: Cambridge University Press (CUP)
ISSN: 1866-9808, 1866-9859
Published online: January 10 2019
Issue: June 2019
Volume 11, Issue 2: 173-187
DOI: 10.1017/langcog.2018.23
© 2019
Terms of use: https://www.cambridge.org/core/terms
