
Ecological Language: A multimodal approach to language learning and processing in the brain (ESRC and H2020)

      Impact
      Science Impact, Ltd.


          Abstract

The human brain has evolved the ability to support communication in complex and dynamic environments. In such environments, language is learned and mostly used in face-to-face contexts, where processing and learning draw on multiple cues: linguistic cues (lexical, syntactic), but also discourse, prosody, the face, and the hands (gestures). Yet our understanding of how language is learned and processed, and of its associated neural circuitry, comes almost exclusively from reductionist approaches in which the multimodal signal is reduced to speech or text. Our current work provides a new way to study language comprehension and learning using a real-world approach in which language is analysed in its rich face-to-face multimodal environment (i.e., language's ecological niche). Experimental rigour is preserved through innovative technologies for data coding and state-of-the-art modelling and data analysis. We study how the different cues available in face-to-face communication dynamically contribute to processing and learning in adults, children and aphasic patients, in contexts representative of everyday conversation. One project focuses on language development and on iconic cues (i.e., cues that evoke imagery of properties of objects and actions in the world); a larger project considers all the cues available. For both projects, the starting point is to collect and annotate a corpus of naturalistic language (2-4-year-old children and adults). This corpus tells us how multimodal cues are distributed in real-world dyadic communication. We then derive quantitative informativeness measures for each cue and for their combination using computational models, which are tested and refined on the basis of behavioural and neuroscientific data. We use converging methodologies (behavioural measures, EEG, fMRI and lesion-symptom mapping) and investigate different populations (2-4-year-old children, healthy adults and adults with aphasia) in order to develop mechanistic accounts of multimodal communication at the cognitive and neural levels, accounts that can explain processing and learning (by both children and adults) and can inform the rehabilitation of language functions after stroke.
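The abstract does not specify how the informativeness measures are computed. As a purely illustrative sketch, one common choice would be the mutual information between a word's identity and the presence of a co-occurring cue (e.g., an iconic gesture), estimated from annotated corpus counts. The toy annotations and function below are hypothetical and not taken from the project.

```python
import math
from collections import Counter

# Hypothetical toy annotations: each utterance is tagged with the word produced
# and whether an iconic gesture co-occurred with it (True/False).
annotations = [
    ("ball", True), ("ball", True), ("ball", False),
    ("cup", False), ("cup", False), ("cup", True),
    ("dog", True), ("dog", False),
]

def mutual_information(pairs):
    """Mutual information (in bits) between word identity and cue presence,
    estimated from raw co-occurrence counts."""
    n = len(pairs)
    joint = Counter(pairs)                      # counts of (word, cue) pairs
    words = Counter(w for w, _ in pairs)        # marginal counts of words
    cues = Counter(c for _, c in pairs)         # marginal counts of cue presence
    mi = 0.0
    for (w, c), count in joint.items():
        p_joint = count / n
        p_word = words[w] / n
        p_cue = cues[c] / n
        mi += p_joint * math.log2(p_joint / (p_word * p_cue))
    return mi

print(f"MI(word; gesture) = {mutual_information(annotations):.3f} bits")
```

In practice, a measure like this would be computed over the full annotated corpus and combined across cue types; the project's own models may differ.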


          Author and article information

Journal: Impact
Publisher: Science Impact, Ltd.
ISSN: 2398-7073
Publication date: February 22, 2019
Volume: 2019
Issue: 1
Pages: 78-80
Article type: Article
DOI: 10.21820/23987073.2019.1.78
© 2019

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/


Earth & Environmental sciences, Medicine, Computer science, Agriculture, Engineering
