Auditory display designers are making increasingly effective and creative use of our ability to localise sound: to perceive particular auditory events as occurring at particular locations. Many applications in which spatial audio has been applied could also benefit from exploiting another important ability of the auditory system: the detection and identification of sound source motion. The display of moving sources could improve usability, provide additional variables in sonification, make virtual environments more perceptually realistic, and open new creative possibilities for designers. Transaural cancellation allows the creation of spatial audio with just two loudspeakers. These techniques are now extended to create the illusion of a sound source moving along an arbitrary trajectory at an arbitrary rate. This paper discusses the application of synthesised sound source movement to a number of practical applications in auditory display. We seek to extend the use of Head-Related Transfer Functions (HRTFs) from stationary sound spatialisation to movement synthesis. Because the perception of moving sources is not time-invariant, we propose and demonstrate the use of time-frequency spectrograms as a mechanism for characterising source movement. There are infinitely many such trajectory-related spectrograms, and we address the need for a continuous directional model to accommodate this.
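As a minimal illustration of the spectrogram idea, the sketch below synthesises a source sweeping from left to right using a crude time-varying interaural level difference (a stand-in for full HRTF filtering, which is not shown here), then computes a time-frequency spectrogram of one ear signal. The sample rate, tone frequency, and trajectory are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000  # sample rate in Hz (assumed for illustration)
t = np.arange(0, 2.0, 1.0 / fs)

# Stationary 1 kHz tone as the source signal.
source = np.sin(2 * np.pi * 1000 * t)

# Hypothetical trajectory: azimuth sweeps from -90 deg (left)
# to +90 deg (right) over the duration of the signal.
azimuth = np.linspace(-np.pi / 2, np.pi / 2, t.size)

# Crude binaural rendering: constant-power panning gains that
# vary with the instantaneous azimuth (not a true HRTF).
left = source * np.cos((azimuth + np.pi / 2) / 2)
right = source * np.sin((azimuth + np.pi / 2) / 2)

# Time-frequency spectrogram of the left-ear signal; the slowly
# decaying energy envelope across time encodes the trajectory.
freqs, times, Sxx = spectrogram(left, fs=fs, nperseg=512)

# Peak spectral energy sits at the 1 kHz source frequency, while
# the per-frame energy falls as the source moves away from the
# left ear.
peak_freq = freqs[np.argmax(Sxx.sum(axis=1))]
```

Because the movement is encoded in how the spectrum evolves over time, no single short-time frame characterises the trajectory; only the full spectrogram does, which motivates the paper's use of time-frequency analysis for moving sources.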
Author and article information
School of Computer Science, Cybernetics and Electronic Engineering, University of Reading, United Kingdom