Simulating an auditory environment amounts to audio signal processing governed by an acoustical model method and a given scene model. For the signal processing we use a set of signal processing objects that can be connected into a signal processing network. The structure of this network depends on the requested acoustical model method, taking into account the real-time requirements of an interactive virtual reality (VR). The final processing is executed on a standard computer system.
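The idea of connecting signal processing objects into a network can be illustrated with a minimal sketch. All names here (Processor, Gain, Delay, Chain) are illustrative assumptions for this example, not the actual API of the tool described in the paper.

```python
# Sketch: signal processing objects glued into a processing network.
# Class and method names are hypothetical, chosen for illustration only.

class Processor:
    """Base class: one signal processing object in the network."""
    def process(self, block):
        raise NotImplementedError

class Gain(Processor):
    """Scales each sample, e.g. for distance attenuation."""
    def __init__(self, factor):
        self.factor = factor

    def process(self, block):
        return [self.factor * x for x in block]

class Delay(Processor):
    """Integer-sample delay, e.g. for the propagation time of one path."""
    def __init__(self, samples):
        self.buffer = [0.0] * samples  # holds samples carried to the next block

    def process(self, block):
        combined = self.buffer + block
        out = combined[:len(block)]
        self.buffer = combined[len(block):]
        return out

class Chain(Processor):
    """Glues processing objects into a linear sub-network."""
    def __init__(self, *stages):
        self.stages = stages

    def process(self, block):
        for stage in self.stages:
            block = stage.process(block)
        return block

# One simulated sound path: delay by 2 samples, then attenuate by 0.5.
path = Chain(Delay(2), Gain(0.5))
print(path.process([1.0, 1.0, 1.0, 1.0]))  # → [0.0, 0.0, 0.5, 0.5]
```

In a real-time system such objects would process fixed-size sample blocks in C or C++ rather than Python lists; the sketch only shows how an acoustical model method (here, a single delayed and attenuated path) maps onto a network of processing objects.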
In this paper I briefly present the inner structure of this signal processing tool for 3D audio and its communication interface to the visual animation tool. I outline the structure and the generation of the signal processing network for two acoustical model methods used to simulate an auditory environment. Furthermore, I present concepts for improving these basic structures, considering the limited processing resources of real-time systems.