      The Parallels between the Study of Crossmodal Correspondence and the Design of Cross-Sensory Mappings

Electronic Visualisation and the Arts (EVA 2017)

11 – 13 July 2017

Keywords: Display and interaction design, Mappings, Perceptual learning, Crossmodal correspondence


          Abstract

          The aim of this paper is to examine how recent research findings and methodologies from the field of cognitive science could be utilised to inform the design and evaluation of cross-modal mappings for multimodal user interaction. In this paper we argue that by using empirical methods to enact embodied knowledge about cross-modal correspondences, we can form an adequate empirical framework for the design of multimodal mappings that successfully align with prior perceptual knowledge. This alignment can significantly improve the human-computer dialogue and the analytical, creative and pedagogical value of user interfaces.


          Most cited references (52)


          Crossmodal correspondences: a tutorial review.

          In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain "know" which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

            Benefits of multisensory learning.

            Studies of learning, and in particular perceptual learning, have focused on learning of stimuli consisting of a single sensory modality. However, our experience in the world involves constant multisensory stimulation. For instance, visual and auditory information are integrated in performing many tasks that involve localizing and tracking moving objects. Therefore, it is likely that the human brain has evolved to develop, learn and operate optimally in multisensory environments. We suggest that training protocols that employ unisensory stimulus regimes do not engage multisensory learning mechanisms and, therefore, might not be optimal for learning. However, multisensory-training protocols can better approximate natural settings and are more effective for learning.

              Infants deploy selective attention to the mouth of a talking face when learning speech.

              The mechanisms underlying the acquisition of speech-production ability in human infancy are not well understood. We tracked 4-12-mo-old English-learning infants' and adults' eye gaze while they watched and listened to a female reciting a monologue either in their native (English) or nonnative (Spanish) language. We found that infants shifted their attention from the eyes to the mouth between 4 and 8 mo of age regardless of language and then began a shift back to the eyes at 12 mo in response to native but not nonnative speech. We posit that the first shift enables infants to gain access to redundant audiovisual speech cues that enable them to learn their native speech forms and that the second shift reflects growing native-language expertise that frees them to shift attention to the eyes to gain access to social cues. On this account, 12-mo-old infants do not shift attention to the eyes when exposed to nonnative speech because increasing native-language expertise and perceptual narrowing make it more difficult to process nonnative speech and require them to continue to access redundant audiovisual cues. Overall, the current findings demonstrate that the development of speech production capacity relies on changes in selective audiovisual attention and that this depends critically on early experience.

                Author and article information

                Contributors
                Tsiros
                Conference
                July 2017
                Pages: 175-182
                Affiliations
                Centre for Interaction Design
                Edinburgh Napier University
                10 Colinton Road, EH10 5DT
                Scotland
                Article
                DOI: 10.14236/ewic/EVA2017.39
                © Tsiros. Published by BCS Learning and Development Ltd. Proceedings of EVA London 2017, UK

                This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

                Electronic Visualisation and the Arts (EVA 2017)
                London, UK
                11 – 13 July 2017
                Electronic Workshops in Computing (eWiC)
                Product Information
                ISSN: 1477-9358, BCS Learning & Development
                Journal page: https://ewic.bcs.org/
                Categories
                Electronic Workshops in Computing
