Back to the Future and Forward to the Past: Zukunftsmusik and the aesthetics of immersive classical music experience

We showcase relevant aesthetic considerations of an interaction design for immersive experience informed by Zukunftsmusik. The 'Zukunftsmusiker' movement emerged in the mid-19th century and intended to encompass a holistic experience incorporating the active participation of all senses along with their corresponding actions. Amongst other features, we highlight aspects related to diegesis, agency, and attention centring of the visual and auditory field by examining two renditions of The Ravens: a 3D 360-degree guided experience and a 6-degrees-of-freedom multi-view experience of the eponymous piano composition by the first author. With their design rationale of participatory experience, these renditions aim to set off a renewed technological consideration of the Zukunftsmusik paradigm. Ultimately, this paradigm may reach out towards an audience demographic that was previously mostly unknown in contemporary music: the user.


INTRODUCTION
Arnold Schoenberg postulated the novel character of New Music with the provocative statement that it "must be music which, though it is still music, differs in all essentials from previously composed music. Evidently it must express something, which has not yet been expressed in music. Evidently, in higher art, only that is worth being presented which has never before been presented" (Schönberg 1950, p.39). In a similar vein, the term 'Zukunftsmusik' (transl. music of the future) was attributed to the novelty of works by composers such as Chopin, Liszt, and Berlioz, as well as its most widely known proponent, Richard Wagner, who cemented its footing in his essay of the same name. Although previously used by certain parties with a negative connotation (for example, by the music critic L. A. Zellner in reference to Wagner's music), the followers of Wagner soon adopted the term, starting to refer to themselves as 'Zukunftsmusiker', with the objective of redefining the music of their time, as well as the way it should evolve. Most notably, the doctrines of Zukunftsmusik foreshadow the aesthetic ideas of perspective that would only be considered decades later, when Luigi Russolo published his futurist manifesto The Art of Noises (1913), the purpose of which was to change the way we think about music, and what we regard as music in the first place.
Throughout history, composers have been keen on developing "new techniques that open up previously unheard instrumental and expressive qualities" (Kallionpää 2014, p.8). They have focused on imagining and shaping the future of their artform, to which instrument developers have responded by inventing tools that allow for increasingly demanding composition and performance techniques. The rapid evolution of instruments in the Baroque Era enabled the creation of the works of, for example, Bach, Corelli, Handel, Tartini, and many of their contemporaries.
Similarly, the development of the virtuoso repertoire of the Romantic Era was in great part linked to the simultaneous structural evolution of the concert instruments of the time, leading such composers as Liszt, Alkan, and Rachmaninoff to extend and explore the capacities of the modern piano, transforming it from a solo keyboard instrument into an orchestral entity (Kallionpää 2014). The same pursuit of extending the possibilities of musical expression and aesthetics still prevails among the composers, performers, and sound designers of today, whose technical vocabularies extend to the subdisciplines of algorithmic composition, machine learning, virtual reality (VR), augmented reality (AR), gamification, interactive music systems, extended instruments, and various combinations of mixed-media content. Many composers also write their own software according to their artistic needs, making them less dependent on the technological tools available, and freer to realize their creative visions.
Music is not an abstract art but a form of communication that relies on its reception among audiences. In order not to become stagnant, classical music culture needs to live in the present time as well as the future, constantly renewing itself. Such challenges lie at the essence of contemporary classical music: how to find new listeners whilst at the same time keeping the current ones involved, and how to navigate between tradition and the future (Kallionpää, Chamberlain & Gasselseder 2018)? The authors of this paper argue that, with the help of augmented and virtual reality, immersive recordings of classical music works can be created that may help to draw the attention of listeners who would not normally be interested in them. Thus, "implementing innovative technologies in performance, composition, and recording opens up new aesthetic paradigms that can also reciprocally inform the aesthetic disciplines and reach a wider demographic of audiences" (Gasselseder & Kallionpää 2019).
The technological options discussed above bring along new questions related to the aesthetics of the technological artefact per se, as well as the actual essence and identity of musical compositions and the circumstances of their creation (i.e. composer), performance (i.e. tape), and rendering (i.e. multichannel speaker system). In which direction is contemporary music heading at the moment, is there a common denominator to be considered as its aesthetic, and what is future music going to sound (or look) like?
This paper explores the idea of Zukunftsmusik in the framework of today's culture. We will investigate the meaning of the concept for composers, performers, producers, and audiences, as well as its challenges and benefits for the future. The scope of this discussion is limited to the case study The Ravens, based on the eponymous composition of the first author and its recording by the second author. This production takes advantage of a novel concept for an interactive concert experience in six degrees of freedom (6DoF) and significantly expands the aesthetic ambitions of traditional music performances. With this, users are enabled to take on the roles of experiential co-creators by interacting with volumetric, 360-degree video and 3rd-order Ambisonics spatial audio from multiple perspectives within a bespoke software interface. Notably, we review the points of contact between the interpretation of (deictic) perspective in Zukunftsmusik and its consideration in the presented case studies within the framework of situational context (see Gasselseder 2015).

ZUKUNFTSMUSIK
The longing for the idea of "the music of the future" existed long before any of the elaborate software and computers of today became available. What was it exactly that composers, philosophers, and artists were originally after? Although the concept is usually regarded as just a curiosity belonging to the history of music, it was also connected to the overall philosophical and artistic tendencies of the Romantic Era. Whereas composers had previously been considered mainly as skilful craftsmen of their art, this new era saw them as "geniuses who could conjure incredible sonic worlds and atmospheres from their instruments": for example, the virtuoso violinist Niccolò Paganini was considered to represent this idea of a wizard-like intermediary between human life and otherworldliness (Kallionpää 2014, p.8). Such a way of thinking did not only come from the concert audiences but was boosted by the writings of various notable personalities of the time. For example, on top of the essays by Richard Wagner, the concept was also incorporated in the works of Friedrich Nietzsche, who was a great admirer and a friend of the composer. Treating an artist as a divine genius figure resonated well with the idea of the 'Übermensch', one of the key topics that Nietzsche had introduced in his works. However, the philosopher later changed his stance on Wagner and Zukunftsmusik, even dedicating an entire chapter of his book The Case of Wagner (1888) to arguing why Wagner's music was reactionary and musty, naming it "A Music Without a Future".

Definition
Although there was never a clear and exact definition of the term 'Zukunftsmusik', the common denominator of most discussions seemed to be that such music had to represent a greater art than what a regular, even if somewhat talented, person could produce. The term did not simply refer to the kind of music that composers would perhaps end up writing in the future but transcended into a philosophical concept, reflecting the total renewal and enhancement of all mankind. The regular means of musical expression would obviously not suffice to convey ideas of such magnitude, which is why more spectacular aesthetics, ideas, and methods had to be developed. In the case of Wagner, that meant extending the concept of an opera beyond itself (on top of the extended macro levels of dramatic narrative and the actual musical contents, the composer created his own epic universe around them). However, Wagner and his German contemporaries were not the only composers working on such ideas. For example, the Russian composer Alexander Scriabin's composition Mysterium was intended to be a spectacle that would wipe out the original mankind, replacing it with "more noble beings". The work was not supposed to be just a musical performance but a ceremonial, synesthetic entity that would take place in the Himalayas in India. On top of the music and auditory stimuli, the work would also encompass the senses of smell and touch, as well as visual contents such as wonderful colours and dancers. According to the composer, there would not be any spectators; all the attendees would actively take part in the spectacle (and eventually be wiped out and destroyed). Scriabin completed an introductory part of the work, but it was left unfinished when he died in 1915.
Judging by the above examples, it is clear that the desideratum for augmenting and extending realities, as well as the limits of musical expression, already existed when the concept of Zukunftsmusik was first introduced. Unfortunately, we will never find out how exactly Scriabin intended to implement his Mysterium: with his concept, he was already "living in the future" ("…for there are always many who provide what is suitable for the needs of our time in a much more accessible form than can be offered by someone who already belongs to the future", Schoenberg 1950, p.12).
The technologies of today would allow for the realisation of a production like the one planned by Scriabin (excluding, of course, the annihilation of the spectators!). According to the experiences of the authors, the core ideas and pursuits of Zukunftsmusik are still the same as before: composers, sound designers, recording producers, and performers alike are all eager to extend the limits of artistic expression and human perception, as well as the whole concept of what constitutes a musical composition or a classical music recording. Moreover, the hierarchy between performers and listeners has become blurred: instead of expecting the audience to passively listen to concert performances, we argue that immersion will be a key component of future music compositions and recordings, both for performers and listeners. Instead of seeing musical compositions as fixed entities, performers and listeners will more often than not be allowed to customize their experience according to their wishes in real time. Nonlinear forms, techniques of gamification, audience participation, and the replacement of certain parts of a musical performance with AR or VR environments will become more and more standardised.

Why does it matter?
As part of his concept of Zukunftsmusik, Wagner introduced the idea of the 'Gesamtkunstwerk', which translates as a universal artwork, or a hybrid of arts. We argue that this concept transfers well to today's desideratum of interdisciplinary art, which was also the starting point of the authors when planning the recording project of The Ravens. Furthermore, instead of treating the audience as passive onlookers or listeners, we wanted to provide them with a more immersive experience, which was also the goal of some of the original 'Zukunftsmusiker'. By using multiple VR camera perspectives, as well as Ambisonics sound recordings, we allowed listeners to customize their own perception of the said artwork.

THE CONCERT WORK THE RAVENS
The recording of The Ravens is presented in two versions. The first, 'The Ravens 360', accounts for a traditional immersive format by presenting an equirectangular, 3-dimensional 360-degree video and spatial audio track for playback and control along the rotational axes in 3 degrees of freedom (pitch, yaw, roll). This footage format is highly compatible with established content delivery platforms (e.g. YouTube). While ensuring compatibility, our 360 version goes one step further and enhances the standard format by offering interactive online switching between camera perspectives via a bespoke player application implemented in Unity 5. Our second version, 'The Ravens VR', exploits a 6-degrees-of-freedom paradigm by adding translational movement to the feature set detailed above. This allows for a completely interactive experience in which users can roam freely inside the virtual environment and generate their own virtual perspectives in real time. The VR version relies on a bespoke player software implementation in Unity 5, which enables us to dynamically generate displacement maps from depth information and to blend between multiple Ambisonics microphone perspectives.
Overall, The Ravens made use of unique recording and post-production techniques to honour its deictic as well as musical dramaturgy. The intention behind the production was to establish multiple stage platforms with a unified main protagonist (the pianist).

About the composition
The piano solo work The Ravens was commissioned by the pianist Judith Engel to form part of her contemporary music concert series in Salzburg, Austria, in 2017. One of the focus points of the events was to celebrate the work of the Austrian poet Georg Trakl, as well as his pianist sister Grete Trakl. Trakl's poem The Ravens was selected as the basis of the structure of the composition because of its distinctively expressive and dark characteristics, which the composer underlined with the help of extended techniques (such as, for example, the bell-like sounds created by plucking the bass strings of the piano) and the overall dramatic arc of the piece.
Using the performer's voice (either sung or spoken) as part of the musical language of an instrumental solo composition is not unusual in the genre of contemporary music: Kaija Saariaho and Betsy Jolas, for example, are illustrative examples of composers integrating textual elements in their solo or chamber music works (see, for example, Saariaho's work Laconisme de l'aile, or Jolas' solo piano work Mon Ami). As in vocal works, the text can form and support the structure of the instrumental work, as the composer needs to take into consideration how to convey the message of a poem by musical means, as well as to plan the timing of the piece. The composer may also form their musical key motifs based on fragments of the text: such motifs may be constructed by quite literal means (for example, by imitating the rhythm or sonic image of a certain word or syllable) or they can be of a more abstract and symbolic nature (for example, a motif that reflects a concept or atmosphere of a certain aspect of the text). In the case of The Ravens, the relationship between the music and text is relatively abstract.
The Ravens begins with the pianist reading Trakl's poem of the same name, after which she proceeds to play a slow passage that consists of a combination of extended and standard piano techniques. The texture slowly intensifies, ending up in a cadenza-like virtuoso surface. The novelty of this project lies not so much in the nature of the musical material itself as in its VR interpretation, in which the spectator sees the English translation of the words projected on the wall and can absorb the mysterious elements of the piece in a more immersive manner than in a traditional concert performance.

About the recording and production setup
If an opera's allure is carried by its musical rendition of story and characters, who or what is to account for the appeal of a concert? Is it the performer, the piano, the composition, or the situation that takes up the centre of attention? Indeed, all these factors may contribute to the concert experience in equal amounts and, thus, must be made equally accessible within a VR setting that is to respect the aesthetic desiderata of the Zukunftsmusik doctrines. 'The Ravens 360' and 'The Ravens VR' distinguish themselves from a traditional concert experience in that they alter acoustic as well as visual perspectives in 360 degrees depending on the syntax and semantic pacing of the introductory poem and subsequent musical interpretation. With that outset in mind, the production followed a similar credo to its operatic relative by establishing a range of perspectives that, while truthful to the situational context of a concert, would otherwise be unattainable to an audience member in a live setting.
The 360 and VR versions share the same recording arrangement. For the visual domain, six 360-degree cameras were configured in stereoscopic pairs. The first pair, made of two Kandao QooCams (Kandao 2018), was positioned diagonally at approximately 0 degrees and 1.5 meters behind the performer. This angle offers a good view of the pianist as well as the finger action on the keyboard but largely occludes her facial expressions during the performance. The location of the second pair of Yi 360 (Yi 2017) cameras was to account for these expressive gestures and was set alongside the edge of the grand piano at about 30 degrees and 1 meter in front of the pianist. These camera positions were adjusted to be of similar vertical height (about 1.3 meters), whereas the third camera pair, made of two Kodak SP360-4K (Kodak 2015), was placed at a lower height of about 0.8 meters next to the tail edge of the grand piano. This perspective enables viewers to get a wider sense of the instrument's dimensions and an 'en face' angle towards the performer that puts her in direct eye contact with the user, alongside a distinctly more immediate interaction with the grand piano (due to the visibility of the string action).
On top of these camera options, the recording makes use of several microphone perspectives aiming to capture the immediacy of the instrument as well as the depth of the concert's location, the Bösendorfer Saal of the Mozarteum Salzburg, Austria. To this effect, a 3rd-order Ambisonics main microphone, the Zylia ZM-1e (Zylia 2018), was placed towards the edge between the stage and audience section at a distance from the piano and height of about 3 meters. Measurements of impulse responses at multiple positions throughout the chamber hall indicated a pleasant balance between early reflections and reverb tail at the selected position. This would allow the main microphone to act as the centrepiece for blending phase-aligned signals into an Ambisonics mix during post-production. These signals were derived from a close perspective of a Zoom H2n (Zoom 2011) 1st-order (horizontal-only) Ambisonics recording in immediate proximity to the piano, as well as two outrigger 1st-order configurations of Soundfield SPS200 (Soundfield 2008) microphones located in the middle of the audience section and towards the rear of the chamber hall.
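The position-dependent blending of such phase-aligned Ambisonics perspectives can be sketched as follows. The inverse-distance weighting function, the channel layout, and all numeric values are our illustrative assumptions for this sketch, not the exact production implementation.

```python
import numpy as np

def blend_ambisonics(signals, mic_positions, listener_pos, rolloff=1.0):
    """Blend several phase-aligned Ambisonics recordings into one mix using
    inverse-distance weights relative to the listener position, so the
    nearest microphone perspective dominates the rendered sound field.

    signals: list of (channels, samples) arrays, all of the same shape
    mic_positions: (n_mics, 3) array of microphone coordinates in meters
    listener_pos: (3,) array with the listener's coordinate
    """
    dists = np.linalg.norm(mic_positions - listener_pos, axis=1)
    weights = 1.0 / np.maximum(dists, 1e-3) ** rolloff  # avoid division by zero
    weights /= weights.sum()                            # normalize to unity gain
    mix = sum(w * s for w, s in zip(weights, signals))
    return mix, weights
```

When the listener stands at one of the microphone positions, that perspective receives nearly all the weight, which approximates the behaviour of blending towards the closest recorded acoustic perspective.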
In post-production, temporal and spatial synchronization of the footage was performed to create the stereoscopic video as well as phase-aligned audio materials. For the 360 experience, our configuration enables a change of perspectives during playback when rotating towards the camera hotspot and zooming into the footage recorded on the primary camera. After a threshold value has been exceeded, the currently active perspective blends over to the perspective of the adjacent camera rig. Due to the lack of available software allowing for the playback and control of different camera and audio perspectives in 360 degrees, we developed a custom player accounting for the aforementioned requirements within the game development platform Unity 5.
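The rotate-and-zoom trigger described above can be expressed as a simple predicate; the angle and zoom thresholds below are hypothetical placeholders rather than the values used in the production, and the function name is our own.

```python
def should_switch(view_yaw_deg, hotspot_yaw_deg, zoom_level,
                  angle_threshold_deg=15.0, zoom_threshold=1.5):
    """Return True when the viewer is rotated towards a camera hotspot and
    has zoomed in past the threshold, at which point the currently active
    perspective crossfades to the adjacent camera rig.
    Threshold values are illustrative placeholders."""
    # smallest signed angular difference on the 360-degree yaw circle
    diff = (view_yaw_deg - hotspot_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= angle_threshold_deg and zoom_level >= zoom_threshold
```

Wrapping the yaw difference onto the range of -180 to 180 degrees ensures the hotspot test behaves correctly across the 0/360 boundary of an equirectangular view.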
'The Ravens VR' posed a further challenge to the design of the custom player, as it was to facilitate user interaction in 6DoF in close to real time. Opting for a 2.5D paradigm that involves the mapping of flat surfaces onto 3D objects, the player software devises dynamic displacement maps to translate the camera footage into a 3D environment. This is carried out by exploiting the depth information derived from the disparity exhibited in the stereoscopic camera arrangement. According to the generated depth map, a custom shader displaces each vertex of the sphere on which the video is textured. While this opens up the possibility to explore the virtual chamber hall, the further a user moves away from the original camera position, the more artefacts of the displacement are revealed. Due to the nature of the flat capturing inherent to video, we are missing out on information that lies behind those objects facing the camera (i.e. the accuracy of textures is two-dimensional but displaced within a three-dimensional volumetric simulation). This behaviour may be partially desirable if it suits the deictic characteristics and staging of the scene (cf. CroakVR in Kallionpää & Gasselseder 2019). In the case of The Ravens, the intention was to minimize artefacts within the vicinity of the pianist. To realize this, we took advantage of the depth maps derived from the remaining stereoscopic camera configurations and blended between the respective displaced textures in correspondence to the user's current position in the virtual chamber hall. This way, we can ensure an acceptable level of artefacts surrounding the pianist, while the remainder of the chamber hall remains explorable but may turn slightly more abstract towards the last rows of the audience space.
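The 2.5D pipeline can be illustrated in two steps: the classic stereo relation (depth = focal length × baseline / disparity) recovers per-pixel depth from the stereoscopic pair, and the vertices of the projection sphere are then pulled radially according to that depth. Function names, parameter values, and the simplified geometry are our assumptions for this sketch, not the actual Unity shader code.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
    """Stereo relation: depth = focal_length * baseline / disparity.
    A larger disparity between the stereoscopic pair means a closer point."""
    return focal_px * baseline_m / np.maximum(disparity_px, eps)

def displace_sphere_vertices(vertices, depths, near=0.5, far=10.0):
    """Pull each vertex of the projection sphere radially towards the centre
    according to its per-vertex depth (clipped to a plausible range), turning
    the flat 360 texture into a 2.5D relief that can be explored in 6DoF."""
    d = np.clip(depths, near, far)
    # unit direction of each vertex as seen from the sphere centre
    dirs = vertices / np.linalg.norm(vertices, axis=1, keepdims=True)
    return dirs * d[:, None]
```

Clipping the depth range bounds the displacement, which mirrors the need described above to keep artefacts tolerable when the user strays far from the original camera position.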

DISCUSSION
'The Ravens 360' follows the rationale of a guided experience that gives users the ability to alter its dramaturgy by switching between different static perspectives at their discretion. Whereas the 360 version uses a non-interactive, pre-defined sequence of these perspectives, the VR version enables the user to take control over its progress on the basis of dynamically generated perspectives. Rather than succumbing to the directorial decision of the edit, the user is free to step outside the deictic dramaturgy and explore the currently active perspective within a 6DoF environment. Unlike in traditional narrative formats, these dynamically extrapolated perspectives are to suggest a wider experiential range of agency by expanding the spatial characteristics of the stage and its perceived possible actions. For the VR version, the latter aspect, which in itself poses a key component of immersive experience, is highlighted by the range of movement spanning from observing the actual happenings on stage (e.g. watching the pianist over the shoulder) to exploring the audience room (e.g. walking up the stairs to the last row) (see Gasselseder 2015). Alongside these positional changes, we also adapt the acoustic perspective by utilizing a recording arrangement that allows for spatial blending between the four Ambisonics microphone positions distributed over the room. An accurate representation of the spatial characteristics of an acoustic space not only facilitates the sense of participating in the events unfolding, but also enhances the user's sense of possible actions within such an environment.
Even though the musical direction of the work may not be under the control of the user, the elevated perception of possible actions by means of 6DoF interaction may contribute to shifting situational awareness about the pianist's actions from sheer acknowledgement and sentience to the attribution of intentionality and simulation (see Engelsted 2017 for a further discussion on human experience). The ability to see and listen to stage objects in a variety of ways involves future- and object-directed actions that shine a new light upon the presence and absence of its acting agents (which, apart from the pianist, also include the user).
With this frame of reference, we set particular emphasis on the first ("close") camera perspective, which intends to offer access to the mimetic qualities of the performance and support the sense of agency via the incentive of corporeal coarticulations towards the user. A series of empirical studies have shown embodied components of experience to be effective during passive music listening (see Gasselseder 2015; Leman 2010). These embodied experiences are believed to be mediated by the mirror neuron system (bottom-up pathway) as well as forward models of agency attribution (top-down pathway) that prepare the user for pre-reflective motor responses (see Gasselseder 2016; Decety & Grézes 2006; Jeannerod 1997). The abovementioned responses are assumed to rely on synchronization processes between audio-visual stimuli and the temporal relationships of perceived events, which pave the way for generating expectations about the embodied schemata detailing the syntax of environment interaction and, in the case of corporeal coarticulation, support the sense of social presence.
While 'The Ravens 360' essentially resorts to a passive act of music listening, we believe that introducing the option to alter static perspectives around the user's viewing axes adds a further component to the embodied experience by tapping into the proto-physical schematic representations that users devise during the interaction with a (virtual) environment. Taking advantage of this representational framework of physical properties, this perspective intends to establish a contextual link between the virtual and the physical by prompting embodied coarticulations at the user end in response to the actual exhibited body actions of the performer. Apart from agency-related components of experience, we might assert that a mimetic link between the virtual and physical may also contribute to the sense of spatial presence if the content is to be consumed on an immersive display. In this way, we propose a design rationale that recurs to the notion of the extended body, where partials of the performance's expressive characteristics are projected as syntax onto the current listening situation (Gasselseder 2015; Jauk 2012; Leman 2010). In a similar vein, we argue for the significance of 6DoF for music experience on the grounds of the perceptual upholding of illusions. Human perception is believed to be subject to a constant contrasting between expected and incoming sensory stimuli from our surroundings (see Bruner & Postman 1949). More recently, it has been proposed that veridical or believable perception is inherently linked to implicit sensorimotor understanding. In this view, the veracity of a percept is verified by contrasting its appearance with our knowledge of how its attributes persist, change, or disappear across transitions.
Thus, rather than ignoring seemingly irrelevant activities (such as walking alongside the piano), we decided to shift our design strategy for immersive content towards the activities that accompany and contextualize perceptual experience (see Gozli 2018). By enabling users to witness the movement pattern of the pianist as well as to explore and experience the acoustic dimensions of the room dynamically from multiple angles via movement, we expect to set an important first step towards a main desideratum of immersive design and Zukunftsmusik: to be absorbed within the intentionality of minds and present between the actions that the environment offers. Schoenberg (1950) quite provocatively argues that novelty is the most important feature of any composition that should be considered a valid contribution to the art of music: only something that has not been presented before deserves to be heard (p.39). There are various ways to approach the concept of Zukunftsmusik: in the end, the selection of tools, methods, and concepts depends on the composer (and sometimes also the performer). However, we argue that the use of special technologies should never be an end in itself, but a meaningful component of the artistic expression in the realm of each composition project. Moreover, a "future music composer" may use innovative technologies for generating their musical material. This can mean, for example, converting complex mathematical models or theories into notated music for acoustic instruments, or using computer-based analysis of large amounts of data as the basis of one's compositional structure (see, for example, the first author's work El Canto del Mar Infinito for the Tampere Biennale 2020 festival, which was themed around sea protection: the presence of the sea is apparent in the musical material and instrumentation of the work, some of which was generated from an analysis of underwater sound worlds carried out by the second author).

CONCLUSION
We argue that a commonality of the Zukunftsmusik doctrines refers to the connotative co-creation or simulation of alternate or suggested realities, beyond what is often attributed to a traditional music listening paradigm. Whereas listening alone may be inherently atomistic due to its reliance on a single domain, Zukunftsmusik intends to encompass a holistic experience incorporating the active participation of all senses with its corresponding action (cf. Wagner 1850). Robert Schumann reiterates this by comparing the aesthetics of one artform to another in the presence of differing materials (1854/1965, p.21). Following a more apt definition of aesthetics (the study of the senses rather than of beauty), this desideratum may be reframed as a translation between senses and their dominant artforms at the point of contact between the diegetic, mentalizing of the bodily (i.e. bottom-up), and the non-diegetic, embodiment of the mental (i.e. top-down) (cf. Grillparzer in Hanslick 1902, p.3). Schumann laid out a further desideratum pertaining to the consensus, interaction, and blending between creation and performance when he called upon the utopia of a music culture that will find a "(…) great fugue, to which various peoples will alternate in their singing" (Schumann 1965, p.34). Classical music recordings have traditionally focused on conveying two-dimensional representations of concert performances that remain the same every time one watches them. Virtual reality breaks up this linear paradigm by handing control over the hierarchies of diegesis to the user. Users can choose from which angles to watch the musical drama and which character to pay attention and listen to, thereby enabling them to engage more deeply with the artistic content. Similarly, the mode of recording does not dictate or limit a production's artistic or technical decisions. Typically, one may aim at mitigating artefacts arising from 6DoF post-conversion for the sake of perceptual realism.
The paradigm of deictic perception, however, can account for a higher tolerance of artefacts if the respective reference frame of the experience is to suggest a hybrid format that embraces both the stage presence and the artificial nature of a concert setting. If used in accordance with the diegesis of the display as well as the narrative-dramaturgic characteristics of the content, the experiential gains of naturalistic simulation as well as creative manipulation may emerge as a function of user interaction. As in the Zukunftsmusik understanding of deictic perspectives, these user-manoeuvrable angles of meaning can reach beyond the limits of a perceptually realistic staging, but, notwithstanding their creative expansions, are still to bear resemblance to the situational context of an audience setting. This balancing between non-diegetic and diegetic perspectives corresponds to dimensionalities commonly found in constructs of immersive experience, where imaginary components of absorption relate to a non-diegetic perspective on the narrative and sensory-spatial components of presence refer to a diegetic perspective of drama experience.
Our case study highlights a prospect to fulfil some of these desiderata by affording a congruent user-aware interaction on a within (i.e. absorption) and between (i.e. [social] presence) dimensionality of Zukunftsmusik. By moving from music absorption towards sensory and imaginary immersive experience, a new understanding of music aesthetics may arise beyond its resemblance to prototypes of set standards (i.e. widely accepted music culture). Furthermore, a renewed technological consideration of Zukunftsmusik may help to reach out towards the user as a new audience demographic that was previously mostly unknown in contemporary music. Chadabe (1996) underlines this notion by stressing that the evolution of electronic music hinged on "opening up to all sounds" (p.41). One may argue that this process is not exclusive to creators but extends to their (implicitly) interacting audiences. Hence, in answer to our introductory question about the evolvement of Zukunftsmusik, we may close our argument with the assertion that future audiences will reveal the new sounds as yet unheard of by their creators.