Audiovisual Installation: Interactive Imprinting of Sound in Water

The present project is an interactive audiovisual creation, a sound and vision installation where the user interactively activates sound in its analog form. The sound is used to influence a physical medium (in this case a tank full of water), producing an image through the turbulence on its surface. This image, properly illuminated, is reflected and visualised in space. The installation is designed in such a way as to allow, through an interactive application, the projection of sound on the water surface and the direct capture of that feedback, i.e. the sound waves formed on the water. At the same time, the reflected image is projected on a white surface ("imprint of sound" – the projection of sound as image). The innovation of the project lies in the interactional mechanism of the system developed, which allows the users to influence the final visual effect while they are moving in the room of the installation. Moreover, the interactive methodology chosen allowed the system to work successfully with low-budget sensors, without this being a burden on the operability of the system. The programs used for the interactive mechanism are open source software: Processing was used for the detection of blobs from the camera, and SuperCollider for the sound.


INTRODUCTION
The present project is an interactive audiovisual creation, a sound and vision installation where the user interactively activates sound in its analog form. The sound is used to influence a physical medium (in this case a tank full of water), producing an image through the turbulence on its surface. This image, properly illuminated, is reflected and visualised in space. The installation is designed in such a way as to allow, through an interactive application, the projection of sound on the water surface and the direct capture of that feedback, i.e. the sound waves formed on the water. At the same time, the reflected image is projected on a white surface ("imprint of sound" – the projection of sound as image).
Although, in the past, various artists have tried to transmit sound through various materials (liquids, solids or gases), the present project is innovative in that it has an interactive dimension. Creation has always been a synonym of innovation, and that is what the present project aimed at.
The Canadian artist and architect Thomas McIntosh, with his installation Ondulation [7][8][9], [web1-6], managed for the first time to produce what we call a sound imprint on water. He constructed a huge water tank for his installation and used sound to produce images on the water surface by means of light. His installation [web1-6] has been of great significance to our project, since we adopted and implemented a great part of his research, considering him a forerunner of our project. However, our technique differs significantly in that the sounds used are a product of interaction and not static sounds, as in his case. The innovation of the project is the interactional mechanism of the system developed, which allows the users to influence the final visual effect while moving in the room of the installation.

The installation takes place in a space where the sensor-camera and the "imprint" of the sound are combined at the same time. The room that the users visit is both a space of interaction and a space where the results are projected. In the room there is a shallow water tank. The calm water surface is disrupted by sound waves produced by loudspeakers hidden below the tank. The sound waves hit the bottom of the tank, creating concentric circles on the surface of the water, while beams of light hit the surface, provoking refraction. The reflections are projected on a white surface that is perpendicular to the water surface, as in cinema.

The project requires the design and programming of an application which manages this interaction. The camera functions as a motion detector, while the application "translates" changes in the camera input into a sound signal (output) with the use of the appropriate software. On the technical level, the motion sensor (camera) scans the space on two axes (x, y), providing exact coordinates of the moving persons. The movement of persons in space provokes changes to the sound, influencing the imprint. The architecture of the application consists of a camera whose data is processed using the appropriate libraries, producing a processed sound signal which is then reproduced by the loudspeakers (output).

ANALYSIS AND ARCHITECTURAL STRUCTURE OF THE INTERACTIVE APPLICATION
On a technical level, the motion sensor (camera) scans the space on two axes (x, y), providing accurate coordinates of the moving persons. The audience moving in the room provokes changes to the sound, which influence the sound imprint. The architecture of the application consists of a camera, whose data is processed with the appropriate libraries, producing a processed sound signal that is reproduced by the loudspeakers (output), as shown in figure 1 below.

The steps that follow are all necessary and must be carried out in the given order. Firstly, the libraries [web13] required for the creation and smooth functioning of the application are imported into the Processing environment [1][2][3][4]. Every library used has a specific role in the environment and serves different purposes. Initially we load the packages needed for the OpenCV library [web8]. The packages hypermedia.video and java.awt [web7-8,15] handle the real-time capture from the camera, video file import, and basic image corrections such as brightness and contrast, and also provide the user-interface classes and functions for editing graphics and images. Java, the programming language of Processing, became popular because it is user-friendly and allows users to create window applications and applets easily. java.awt includes classes for window components, which are used to create applications and various applets.
"These components include buttons, checkboxes, text areas, etc.The java.awt packadge contains classes which can be used to create objects of this type " Initially, is given the size of the window that appears on our computer once the application of Processing has been launched.The threshold window of this application corresponds to the field of view of the camera, according to its positioning in the room of the installation.Using the threshold filter, the image taken by the camera is transformed to black and white pixels, depending on the settings.Thus, Processing is able to perceive the changes made in the field of view of the camera and translate changes of pixels in movement on X and Y axis respectively.The OpenCV library realizes the differences in frames of the camera on X and Y axis.These differences are called panFromBlobXΗ for the X axis with a range of 0, 640 (camera capture dimensions 640*480) and values from -1, 1(pos -pan position, -1 for the left, +1 ifor the right loudspeaker), and pitchFromBlobY for y-axis with a range of 0, 480 and values 0, 4 (pitchRatio -The values of the increase of the frequency must be from 0.0 to 4.) 
Then, using the OscP5 library, the messages blobPosX and blobPosY are created for panFromBlobX and pitchFromBlobY respectively (audio panning for the x-axis and pitch-shifting frequency for the y-axis), and are sent to SuperCollider with the OscMessage method (the NetAddress of SuperCollider, 127.0.0.1, port 57120, was declared as myRemoteLocation). While this part of the application runs normally in Processing, we load the buffers onto the SuperCollider server, creating a buffer list (the buffer list holds the audio samples to be processed). The buffer list is loaded onto SuperCollider's localhost server and is then ready for processing. One or more SynthDefs (synth definitions) provide the simple functions for the production of sound in this environment [6], [web16-18]. SuperCollider's server recognizes neither plain functions nor object-oriented code (OOP), nor the client language itself, but only the information needed to create an audio output [6], [web16-23], thanks to a simple class named Synth Definition (SynthDef). A SynthDef contains information about the Unit Generators and about how they interlock to form the basic structural units of the server, used to produce or process sound or to control the signal. In other words, a SynthDef manages one or more UGens (unit generators), producing or processing the sound signal [6], [web16-23]. A SynthDef therefore contains information such as the choice of sound outputs, the frequency, the name of the buffer, the depth, the intensity, the looping and much more. Then, after declaring the SynthDef to the SuperCollider server, the OscResponders are created.
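For illustration, the shape of the OSC packets travelling between the two programs can be sketched without any library. Per the OSC 1.0 specification, a message is a null-padded address string, followed by a type-tag string and big-endian arguments; OscP5 and SuperCollider handle this encoding internally, so the following is only a language-neutral sketch in Python:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC 1.0 message with float32 arguments,
    e.g. a blob position sent to SuperCollider on 127.0.0.1:57120."""
    msg = osc_pad(address.encode("ascii"))            # address pattern
    msg += osc_pad(("," + "f" * len(args)).encode())  # type-tag string
    for a in args:
        msg += struct.pack(">f", a)                   # big-endian float32
    return msg
```

Such a packet can then be pushed to the server over a plain UDP socket, e.g. `sock.sendto(osc_message("/blobPosX", 0.5), ("127.0.0.1", 57120))`.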

The OscResponders handle the OSC communication between the two programs. OSC is used to send messages from one program to another; this type of communication is achieved in SuperCollider once a NetAddr has been created, as described above [6], [web14-15]. In our case there are two OscResponders, which concern Processing's blobPosX and blobPosY. For the x-axis, the responder pans between the two channels (pos: right/left), while the pitchRatio (pitch-shifting frequency) is influenced by the y-axis. Following this example, as many OscResponders as desired can be sent to the server, depending on the desired result [web16-23].

From the main menu of SuperCollider, in the Utils section, one can run the Osc Input Test to check whether the two programs communicate. First the option Toggle Osc Input Test is selected, and afterwards the option Start Osc Input Test. Having followed these steps, we get output in SuperCollider's post window containing information about the moving objects perceived by the camera in Processing, always on the x and y axes. Finally, a control panel has been created, permitting all variables to be managed without it being necessary to change the SynthDef every time. Moreover, there is the possibility to alter the sound result in real time (live control). The last step is to launch the application in the simplest way, having declared the Synth as x or y (it can be any letter of the Latin alphabet); typing ";" activates the Play option. The application runs normally unless the indication nil appears in SuperCollider's post window. At this stage the interactive application is complete and ready to use. The sound produced is a product of interaction, while the feedback is directly visible, since the vibrations (waves) on the surface of the water are translated in real time into shapes, thanks to the light.
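The responder logic described above amounts to a small dispatch table: each incoming OSC address updates one control of the running Synth, and unknown addresses are ignored. A toy, language-neutral sketch (Python; the class and control names are illustrative, not SuperCollider API):

```python
class SynthControls:
    """Stand-in for the running Synth: two responders update its pan
    position and pitch ratio, mirroring the two OscResponders above."""

    def __init__(self):
        self.controls = {"pos": 0.0, "pitchRatio": 1.0}
        self.responders = {
            "/blobPosX": lambda v: self.controls.update(pos=v),
            "/blobPosY": lambda v: self.controls.update(pitchRatio=v),
        }

    def receive(self, address, value):
        handler = self.responders.get(address)
        if handler is not None:   # unknown addresses are simply ignored
            handler(value)
```

More responders can be registered in the same table, just as more OscResponders can be declared on the server.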

SPECIAL DESIGN OF THE CONSTRUCTION
The construction used to support the application was built entirely by hand; the material used is marine plywood 1.6 mm thick. The construction was sealed with a high-quality white rubber sealant based on acrylic resins. For the holes made above the loudspeakers and their isolation we used 14-inch drum membranes of the Remo PS-1 PINSTRIPE type [web24-25], since they offer the right stability and flexibility, especially when the sound reaches low pitches [web24]. The use of such membranes was considered necessary, since the water created a dimple in every other material tried up until then, making it impossible to detect ripples on the surface of the water.
The construction was completed in three stages. The first stage involved the design and construction of the tank, with dimensions 1.7 m x 0.6 m x 0.05 m and two holes of 0.32 m diameter for the loudspeakers. The second stage involved the design of the loudspeakers and the construction of their cabinets [12], [web26-31]. The third stage involved the aesthetic aspect and lining the tank with plexiglass sheets of 1.7 m x 0.7 m for the front and 0.6 m x 0.7 m for the sides. The design of the construction was improved during stages 1 and 2. During the first stage, a smaller tank measuring 0.25 m x 0.08 m x 0.05 m was built in the centre, doubling the depth of that area. The loudspeakers are placed symmetrically to the left and right of this smaller tank. Thus, when the waves reach that point they are neutralized, since their amplitude declines to zero [10][11]. The simultaneous propagation of two waves in the same area of the medium is called interference. To observe interference effects the wave sources must be coherent, i.e. have exactly the same frequency, and monochromatic, i.e. transmitting exclusively a wave of one specific frequency and wavelength [12][13].
Two waves that propagate simultaneously in the same area of the medium can have a constructive or a destructive effect. If two waves with the same phase meet in the same area, the result is constructive interference. On the contrary, if two waves with a phase difference of 180° meet in the same area, the result is destructive interference. In general, constructive interference presupposes a phase difference between the two waves that is an integer multiple of 2π; conversely, for destructive interference the phase difference between the two waves must be an odd multiple of π. In other words, if two wave crests or two troughs meet, the amplitude of the crest or trough is doubled, giving constructive interference. If, at the same point, a crest meets a trough, the two waves counteract each other and the result is called destructive interference [10][11][12][13]. A problem that appeared at this stage was the lack of synchronization of the sound sources (phase difference), the mismatch of the frequencies, and the non-monochromatic sources, all of which depend on the operation of our interactive system. The solution was to construct the smaller tank in the centre of the main tank, with the loudspeakers placed symmetrically to its right and left.
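The two limiting cases can be checked numerically. A small illustrative sketch (Python) sums two equal-amplitude sinusoids offset by a given phase:

```python
import math

def superpose(amplitude, freq_hz, phase_diff, t):
    """Displacement at time t of two superposed waves of equal
    amplitude and frequency, offset by phase_diff radians."""
    w = 2 * math.pi * freq_hz * t
    return amplitude * math.sin(w) + amplitude * math.sin(w + phase_diff)
```

With phase_diff = 0 the amplitude doubles (constructive interference); with phase_diff = π the two terms cancel at every instant t (destructive interference), which is exactly the neutralization exploited at the central tank.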
At the second stage, designing and constructing the loudspeakers and their cabinets was a great challenge. To avoid problems, special emphasis was given to how the cabinets would be constructed in order to obtain the best possible quality of sound, as well as the best possible response with regard to the wave effect. The quality and the intensity depend on various factors. The intensity of a loudspeaker does not depend exclusively on its wattage but mostly on its sensitivity, measured with 1 watt of power applied from 1 meter away. Thus, the more sensitive the loudspeaker, the more acoustic power it produces [12]. The solution to this problem was the construction of a bass-reflex port, which allows the normal flow of air inside the loudspeaker cabinet. The loudspeaker has a resonant frequency (Fs) which must be correctly combined with the resonant frequency of the port to properly synchronize the back and front waves [12][13]. The bass-reflex port sets in motion the air inside the cabinet behind the loudspeaker, sending it directly to the membrane. Thus, the bass reflex strengthens the volume of the bass and the depth of the sound produced.
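The port tuning alluded to above is commonly estimated with the Helmholtz resonator formula. This is a standard textbook relation rather than something taken from the paper, and the numbers in the example are illustrative, not the actual cabinet's dimensions:

```python
import math

def port_resonance_hz(port_area_m2, box_volume_m3, port_length_m, c=343.0):
    """Helmholtz resonance of a bass-reflex port:
    f = c / (2*pi) * sqrt(A / (V * L)).
    Real designs also add an end correction to the port length L."""
    return (c / (2 * math.pi)) * math.sqrt(
        port_area_m2 / (box_volume_m3 * port_length_m))
```

The port dimensions are chosen so that this frequency lands near the driver's resonant frequency Fs, which is the "correct combination" the text refers to.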

LIGHTING DESIGN
Light is the key component of the present interactive application: it is the medium used for the projection of the sound in space. The interactively produced sound is transformed into an image thanks to light, whose relevant basic properties are refraction and reflection [11,14]. The water acts as a mirror for the deviated light beam. Thus, the light beam is reflected with an angle of reflection equal to the angle of incidence [web32-34]. To avoid any possible losses of the light beam, the use of a directional spotlight is suggested, as in the theatre, where the spotlight is adjusted and directed by a lighting technician.
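The law of reflection invoked here has a compact vector form, r = d - 2(d.n)n, for a unit surface normal n. A minimal, illustrative sketch (Python):

```python
def reflect(d, n):
    """Reflect an incoming direction d about a unit surface normal n:
    r = d - 2 * (d . n) * n  (angle of reflection = angle of incidence)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))
```

A beam travelling down-right, (1, -1), hitting a horizontal water surface with normal (0, 1), leaves up-right as (1, 1): the incidence and reflection angles are equal, which is why the calm water surface behaves as a mirror.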
The light beam must be concentrated exclusively on the surface of the construction, helping the visual effect and the correct functioning of the camera, especially as far as blobs are concerned (shadow problems). The solution was found when a projector was used as a spotlight, allowing the lighting of specific spots on the surface of the water, just above the sound sources. More specifically, a 1440x900-pixel video displayed, on a black background, a bright stripe along the projection surface (the water). In that way we managed to illuminate the source of the waves, which, in combination with the interactive practice followed, gave the user the sensation of sound painting.