      Language by mouth and by hand

Editorial


          Abstract

What is the basis of the human capacity for language: Is language shaped only by sensorimotor constraints and experience, or are some aspects of language universal, abstract, and potentially amodal? The papers assembled in this collection represent state-of-the-art research on this age-old set of questions.

To gauge the universality of language structure and its abstraction, the first group of papers examines the grammatical organization of mature languages across modalities. The papers by Baus et al. (2014) and Guellaï et al. (2014) suggest that, despite marked differences in modality, the phonologies of signed and spoken languages share aspects of design. Specifically, Baus and colleagues demonstrate that signers extract syllable-like units automatically, even when the task does not demand it. Using a similar interference paradigm, Guellaï and colleagues show that speakers (of Italian) automatically extract prosodic structure and use manual gestures to help them do it; the cues to prosody found in co-speech gesture help disambiguate the syntactic structure of the speech they accompany. The typological survey by Napoli and Sutton-Spence (2014) extends the study of grammatical universals to syntax. Like spoken languages, sign languages overwhelmingly favor subject-first structures (i.e., SOV and SVO). Unlike spoken languages, however, sign languages show a strong preference for SOV over SVO order. This aspect of grammatical organization may thus be influenced by modality, although the fact that signed and spoken languages differ not only in modality but also in age (spoken languages are older than sign languages) makes it difficult to pinpoint the source of the difference.

Further insights into grammar and its origins come from papers on the genesis of sign languages in Deaf communities and in individual homesigners (deaf individuals who have not been exposed to an established sign language and who use their own homemade gestures to communicate with the hearing individuals in their worlds). Given the poverty of the linguistic input available to these individuals, and the fact that the manual modality affords iconic depiction, we might expect emerging sign languages to be overwhelmingly iconic. But the role of iconicity is actually far more constrained and nuanced than one might have presumed. Considering homesign, Coppola and Brentari (2014) find that the spontaneous emergence of morphophonology in an individual homesigning child mirrors the organization of mature sign languages (i.e., greater finger complexity in Object-handshapes than in Handling-handshapes). Remarkably, this abstract grammatical organization emerges prior to the arguably more iconic organization of morphosyntax (i.e., associating Object-handshapes with no-agent events and Handling-handshapes with agent events). Turning to another example, this time a sign language emerging in Nicaragua, Kocab et al. (2015) find that, contrary to naïve expectations, signers do not immediately rely on iconic spatial devices to mark referential shift; they rely instead on abstract lexical markers. Further glimpses into the spontaneous emergence of abstract syntactic organization come from Kastner et al. (2014), who document how prosody marks the kernels of syntactic embedding in Kafr Qasem Sign Language, a sign language emerging in Israel.
The possibility that signed and spoken languages both rely on abstract grammatical organization brings the ongoing debate between algebraic (symbolic, rule-based) and associationist accounts of spoken language into the domain of sign language: what computational mechanisms do signers use to support linguistic productivity? The papers by Caselli and Cohen-Goldberg (2014), on the one hand, and Berent et al. (2014), on the other, suggest that a full account of sign language computation (like spoken language computation) requires both systems, hence "words and rules" (Pinker, 1999). Considering first the evidence for associations, Caselli and Cohen-Goldberg trace lexical competition in sign language to the same set of dynamic associative principles proposed for spoken languages. Turning to the evidence for rules, Berent et al. find that signers can extend certain phonological generalizations across the board, in a rule-governed way, even to novel signs with features that are unattested in their language. Building on past computational work, Berent et al. suggest that generalizations of this sort are the hallmark of powerful algebraic rules that support the capacity for discrete infinity in the manual modality.

Our review has so far highlighted commonalities across language modalities and levels of experience. But the effects of modality and experience are undeniable and significant; the papers by Supalla et al. (2014) and Emmorey et al. (2014) underscore some of these effects. Considering first experience, Supalla and colleagues find that language experience shapes language fluency, which, in turn, shapes the quality of signers' working-memory storage: fluent signers retain global semantic structure, whereas less fluent signers focus on lexical detail and linear order. Considering modality, Emmorey and colleagues find that, even though signed and spoken languages share neural substrates, sign language comprehension and production engage a unique network of sensorimotor regions directly linked to the visual/manual channel; sign comprehension uniquely suppresses visual occipital activity, whereas sign production engages parietal regions involved in manual motor simulation.

The final four papers in this volume consider the development of sign languages and their evolution. Morgan (2014) argues that, across modalities, combinatorial structure emerges gradually out of a system that is initially holistic. Lillo-Martin et al. (2014) investigate the development of linguistic communication in bimodal bilingual children. Although these children are clearly sensitive to the language of their interlocutors and modulate their language choice accordingly, the findings nonetheless reveal an overwhelming preference for speech over sign. In contrast, when adult speakers are engaged in a communication game, Fay et al. (2014) find a strong advantage for gesture over speech (whether speech is used alone or in combination with gesture), a finding the authors attribute to the affordance of the manual modality for iconicity. The gesture advantage in adult speakers does not speak directly to language evolution in humans, but the results are in line with the possibility that proto-language was gestural. How could such a gestural system give rise to the evolution of spoken language? This question is addressed by Woll (2014), who suggests that echo-phonology might provide the missing link.
Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Most cited references


Creating a communication system from scratch: gesture beats vocalization hands down (Fay et al., 2014)

          How does modality affect people's ability to create a communication system from scratch? The present study experimentally tests this question by having pairs of participants communicate a range of pre-specified items (emotions, actions, objects) over a series of trials to a partner using either non-linguistic vocalization, gesture or a combination of the two. Gesture-alone outperformed vocalization-alone, both in terms of successful communication and in terms of the creation of an inventory of sign-meaning mappings shared within a dyad (i.e., sign alignment). Combining vocalization with gesture did not improve performance beyond gesture-alone. In fact, for action items, gesture-alone was a more successful means of communication than the combined modalities. When people do not share a system for communication they can quickly create one, and gesture is the best means of doing so.

How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language (Emmorey et al., 2014)

To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

Lexical access in sign language: a computational model (Caselli and Cohen-Goldberg, 2014)

              Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
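The spreading-activation account sketched in this abstract lends itself to a compact illustration. The Python snippet below is a minimal, hypothetical sketch, not the published Chen and Mirman (2012) or Caselli and Cohen-Goldberg (2014) model: the toy lexicon, the feature inventory (handshape, location), and all parameter values are invented for illustration. It shows the two ingredients the abstract highlights: facilitation among form-sharing neighbors and lateral competition among lexical candidates.

```python
# Minimal, hypothetical sketch of a spreading-activation lexical network.
# NOT the published model: the toy lexicon, features, and parameters are invented.

# Toy lexicon: each sign is described by two sub-lexical features.
LEXICON = {
    "sign_A": {"handshape": "B", "location": "chin"},
    "sign_B": {"handshape": "B", "location": "temple"},  # handshape neighbor of sign_A
    "sign_C": {"handshape": "5", "location": "chin"},    # location neighbor of sign_A
}

def shared_features(a, b):
    """Number of sub-lexical features that signs a and b share."""
    return sum(1 for k in LEXICON[a] if LEXICON[a][k] == LEXICON[b][k])

def recognize(target, steps=20, input_gain=0.5, spread=0.05,
              inhibition=0.15, decay=0.1):
    """Iterate the network: bottom-up support for the target,
    facilitation through shared features, lateral inhibition among words."""
    act = {w: 0.0 for w in LEXICON}
    for _ in range(steps):
        new_act = {}
        for w in LEXICON:
            bottom_up = input_gain if w == target else 0.0
            # Facilitation: neighbors pass on activation via shared features.
            facilitation = spread * sum(act[o] * shared_features(w, o)
                                        for o in LEXICON if o != w)
            # Competition: every other word inhibits w in proportion to its activation.
            competition = inhibition * sum(act[o] for o in LEXICON if o != w)
            new_act[w] = max(0.0, (1 - decay) * act[w]
                             + bottom_up + facilitation - competition)
        act = new_act
    return act

if __name__ == "__main__":
    # The target should win; its neighbors settle at intermediate activations.
    print(recognize("sign_A"))
```

Whether a given neighbor ultimately helps or hurts recognition in a network like this depends on the balance between the spread and inhibition parameters, and on which features carry the facilitation (handshape vs. location); these are exactly the degrees of freedom the abstract suggests must be tuned to the time course of sign perception and to sub-lexical frequency.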

                Author and article information

Journal
Frontiers in Psychology (Front. Psychol.), Frontiers Media S.A.
ISSN: 1664-1078
16 February 2015, Volume 6, Article 78
Affiliations
1. Phonology and Reading Lab, Department of Psychology, Northeastern University, Boston, MA, USA
2. Goldin-Meadow Laboratory, Department of Psychology, University of Chicago, Chicago, IL, USA
Author notes
*Correspondence: i.berent@neu.edu

                This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology.

                Edited and reviewed by: Manuel Carreiras, Basque Center on Cognition, Brain and Language, Spain

Article
DOI: 10.3389/fpsyg.2015.00078
PMCID: PMC4329806
                Copyright © 2015 Berent and Goldin-Meadow.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

History
Received: 13 January 2015; Accepted: 14 January 2015
                Page count
                Figures: 0, Tables: 0, Equations: 0, References: 15, Pages: 2, Words: 1541
Categories
Psychology; Editorial Article; Clinical Psychology & Psychiatry

Keywords
sign language, universal grammar, modality, language evolution, rules, home signs, emerging sign languages, lexical access
