
Mobile Navigation Using Haptic, Audio, and Visual Direction Cues with a Handheld Test Platform

Read this article at: ScienceOpen | Publisher


Most cited references (19)


Auditory-visual integration during multimodal object recognition in humans: a behavioral and electrophysiological study.

M. Giard, F. Péronnet (1999)
          The aim of this study was (1) to provide behavioral evidence for multimodal feature integration in an object recognition task in humans and (2) to characterize the processing stages and the neural structures where multisensory interactions take place. Event-related potentials (ERPs) were recorded from 30 scalp electrodes while subjects performed a forced-choice reaction-time categorization task: At each trial, the subjects had to indicate which of two objects was presented by pressing one of two keys. The two objects were defined by auditory features alone, visual features alone, or the combination of auditory and visual features. Subjects were more accurate and rapid at identifying multimodal than unimodal objects. Spatiotemporal analysis of ERPs and scalp current densities revealed several auditory-visual interaction components temporally, spatially, and functionally distinct before 200 msec poststimulus. The effects observed were (1) in visual areas, new neural activities (as early as 40 msec poststimulus) and modulation (amplitude decrease) of the N185 wave to unimodal visual stimulus, (2) in the auditory cortex, modulation (amplitude increase) of subcomponents of the unimodal auditory N1 wave around 90 to 110 msec, and (3) new neural activity over the right fronto-temporal area (140 to 165 msec). Furthermore, when the subjects were separated into two groups according to their dominant modality to perform the task in unimodal conditions (shortest reaction time criteria), the integration effects were found to be similar for the two groups over the nonspecific fronto-temporal areas, but they clearly differed in the sensory-specific cortices, affecting predominantly the sensory areas of the nondominant modality. Taken together, the results indicate that multisensory integration is mediated by flexible, highly adaptive physiological processes that can take place very early in the sensory processing chain and operate in both sensory-specific and nonspecific cortical structures in different ways.

Waypoint navigation with a vibrotactile waist belt


Driver reaction time to tactile and auditory rear-end collision warnings while talking on a cell phone.

              This study examined the effectiveness of rear-end collision warnings presented in different sensory modalities while drivers were engaged in cell phone conversations in a driving simulator. Tactile and auditory collision warnings have been shown to improve braking response time (RT) in rear-end collision situations. However, it is not clear how effective these warnings are when the driver is engaged in attentionally demanding secondary tasks, such as talking on a cell phone. Sixteen participants in a driving simulator experienced three collision warning conditions (none, tactile, and auditory) in three conversation conditions (none, simple hands free, complex hands free). Driver RT was captured from warning onset to brake initiation (WON2B). WON2B times for auditory warnings were significantly larger for simple conversations compared with no conversation (+148 ms), whereas there was no significant difference between these conditions for tactile warnings (+53 ms). For complex conversations, WON2B times for both tactile (+146 ms) and auditory warnings (+221 ms) were significantly larger than during no conversation. During complex conversations, tactile warnings produced significantly shorter WON2B times than no warning (-141 ms). Tactile warnings are more effective than auditory warnings during both simple and complex conversations. These results indicate that tactile rear-end collision warnings have the potential to offset some of the driving impairments caused by cell phone conversations.

Author and article information

Journal: IEEE Transactions on Haptics (IEEE Trans. Haptics)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1939-1412
Publication date: January 2012
Volume: 5
Issue: 1
Pages: 33-38
DOI: 10.1109/TOH.2011.58
Copyright: © 2012
