
      Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired

Research Article


          Abstract

          Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments or visual impairment), sensory substitution devices (SSDs) such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision, either integrated with each other or with self-motion. In Experiment 1, we compared The vOICe and BrainPort, alone and in combination, in an aerial maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both studies, 3D motion tracking data were used to determine the precision with which participants performed two different tasks (an egocentric and an allocentric task) under three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the two devices together. In Experiment 2, sighted performance with The vOICe was almost as good as with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared with either in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion, despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe as trials increased, only the VI group showed increased accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in attaining allocentric information.
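The "optimal combination" invoked in the abstract is the standard maximum-likelihood (reliability-weighted) model of cue integration. The sketch below is purely illustrative Python, not the authors' analysis code; the function name and all numbers are hypothetical. It shows why combining two cues, each weighted by its reliability (inverse variance), should reduce response variability relative to either cue alone:

```python
# Illustrative sketch of maximum-likelihood (MLE) cue integration,
# the standard model behind "optimal combination" of two senses.
# Names and example numbers are hypothetical, not from the article.

def mle_combine(est_a, var_a, est_b, var_b):
    """Combine two cue estimates, weighting each by its reliability (1/variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined_est = w_a * est_a + w_b * est_b
    # The MLE prediction: combined variance is always at or below
    # the smaller of the two single-cue variances.
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined_est, combined_var

# e.g. a noisy SSD-based estimate plus a more reliable self-motion estimate:
est, var = mle_combine(10.0, 4.0, 12.0, 1.0)
print(est, var)  # combined variance (0.8) falls below either single-cue variance
```

The "reduction in variability" tested in both experiments is exactly this predicted drop in combined variance; observing it is the usual behavioral signature of statistically optimal integration.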


          Most cited references: 53


          A Fast Algorithm for the Minimum Covariance Determinant Estimator


            Young children do not integrate visual and haptic form information.

            Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability [1, 2]. When does this capacity for crossmodal integration develop? Here, we show that prior to 8 years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions in which the dominant sense is far less precise than the other (as assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, whereas for orientation discrimination, vision dominates. By 8–10 years, integration becomes statistically optimal, as in adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important: using one sense to calibrate the other precludes useful combination of the two sources.

              An experimental system for auditory image representations.

              This paper presents an experimental system for the conversion of images into sound patterns. The system was designed to provide auditory image representations within some of the known limitations of the human hearing system, possibly as a step towards the development of a vision substitution device for the blind. The application of an invertible (1-to-1) image-to-sound mapping ensures the preservation of visual information. The system implementation involves a pipelined special purpose computer connected to a standard television camera. The time-multiplexed sound representations, resulting from a real-time image-to-sound conversion, represent images up to a resolution of 64 x 64 pixels with 16 gray-tones per pixel. A novel design and the use of standard components have made for a low-cost portable prototype conversion system having a power dissipation suitable for battery operation. Computerized sampling of the system output and subsequent calculation of the approximate inverse (sound-to-image) mapping provided the first convincing experimental evidence for the preservation of visual information in the sound representations of complicated images. However, the actual resolution obtainable with human perception of these sound representations remains to be evaluated.
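The mapping described above (a time-multiplexed, column-by-column scan in which vertical position becomes pitch and brightness becomes loudness) can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the actual vOICe implementation; the sample rate, frequency range, and scan duration are all made-up parameters:

```python
import numpy as np

def image_to_sound(image, duration_s=1.0, sample_rate=16000,
                   f_min=500.0, f_max=5000.0):
    """Map a grayscale image (rows x cols, pixel values 0..15) to audio.

    Columns are scanned left to right over time; each row contributes a
    sine tone whose frequency rises toward the top of the image and whose
    amplitude tracks pixel brightness. Parameters are illustrative only.
    """
    rows, cols = image.shape
    samples_per_col = int(duration_s * sample_rate / cols)
    # One frequency per row; row 0 (top of the image) gets the highest pitch.
    freqs = np.linspace(f_min, f_max, rows)[::-1]
    t = np.arange(samples_per_col) / sample_rate
    chunks = []
    for c in range(cols):
        col = image[:, c].astype(float) / 15.0  # 16 gray tones -> amplitude 0..1
        # Sum one sine per row, amplitude-weighted by that row's brightness.
        chunk = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(chunk)
    out = np.concatenate(chunks)
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out  # normalize to avoid clipping

# A 64 x 64 image with 16 gray levels, matching the prototype's resolution:
img = np.random.randint(0, 16, size=(64, 64))
audio = image_to_sound(img)
```

Note that because the mapping is one-to-one (each pixel contributes a distinct frequency in a distinct time slot), an approximate inverse sound-to-image mapping of the kind the paper describes is possible in principle, e.g. via a short-time spectral analysis of each column's time slot.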

                Author and article information

                Journal: Frontiers in Psychology (Front. Psychol.)
                Publisher: Frontiers Media S.A.
                ISSN: 1664-1078
                Published: 10 July 2020
                Volume: 11
                Article number: 1443
                Affiliations
                1. Department of Psychology, University of Bath, Bath, United Kingdom
                2. Department of Computer Science, University of Bath, Bath, United Kingdom
                3. School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
                Author notes

                Edited by: Luigi F. Cuturi, Italian Institute of Technology (IIT), Italy

                Reviewed by: Andrew Joseph Kolarik, Anglia Ruskin University, United Kingdom; Florina Ungureanu, Gheorghe Asachi Technical University of Iași, Romania

                *Correspondence: Karin Petrini, k.petrini@bath.ac.uk

                These authors share first authorship

                This article was submitted to Perception Science, a section of the journal Frontiers in Psychology

                Article
                DOI: 10.3389/fpsyg.2020.01443
                PMCID: PMC7381305
                PMID: 32754082
                Copyright © 2020 Jicol, Lloyd-Esenkaya, Proulx, Lange-Smith, Scheller, O'Neill and Petrini.

                This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

                History
                Received: 13 February 2020
                Accepted: 29 May 2020
                Page count
                Figures: 7, Tables: 1, Equations: 6, References: 60, Pages: 17, Words: 14551
                Funding
                Funded by: University of Bath Alumni
                Categories
                Psychology
                Original Research

                Clinical Psychology & Psychiatry
                Keywords: navigation, visual impairment and blindness, sensory substitution device, audiotactile, spatial cognition, egocentric, allocentric, multisensory integration
