In our first submission we mostly developed a concept for the final performance. Because the whole Action Sound project has been highly dynamic, some of our practical and aesthetic preconceptions had to be partly rethought. Nevertheless, we are convinced that we can provide a largely consistent concept.
This text provides a short overview of three major aspects of the performance and of how they developed over time. As it is a continuation of our previous texts, we assume the reader is familiar with the philosophical and aesthetic premises explained under the tab ‘Submission 1’.
Firstly, we look at the new role of the piano. In order to achieve the conceptual symbiosis of music and noise (Clarke, 2005, 17), we needed to modify the piano in several respects. These modifications, however, still had to permit reciprocal forms of musical interaction. The use of specific tools enabled us to achieve both aims.
Secondly, the text discusses forms of aural and visual extension with the help of electronics. Here, the idea of creating a multi-modal interaction system is as relevant as maintaining an aesthetic based on the egalitarian symbiosis of visuals and sound.
Thirdly, the musical score is of special interest. As our way of interacting changed over the course of the rehearsals, its characteristics had to be adapted to the new circumstances.
The new role of the piano
Regarding the role of the piano, the main innovative impulse lay in the fact that not one but two people were to play it. One sat traditionally on a chair with the keys in front of them; the other stood, with their hands within reach of all the strings, and thus had a completely new sonic environment at their disposal. In order to obtain a wider variety of (atonal) noises, tones and timbres, we integrated highly versatile tools by alienating them from their original function. A comb, a head massager, chopsticks, an abductor, a shot glass, paper and blue tack were only some of the tools that, in contact with the strings, created sound experiences consisting of both tones and noises. In every case we wanted to honour the premise of not distinguishing between the two (Clarke, 2005, 17). The choice of tools, however, was anything but arbitrary.
Committed to Christian Wolff’s ideas, we tried to find ways to combine innovative sounds with specific forms of reciprocal interaction. The shot glass, for instance, fulfilled that demand perfectly: while one person played a particular set of keys on the piano, the other could place the shot glass directly on the vibrating strings. By sliding it along those strings, very interesting timbre changes could be achieved. Over time it was furthermore possible to develop certain playing conventions, for example linking the position of the shot glass to the rate of keystrokes on the keyboard.
Aural and visual extension
Compared to our first submission, we chose a rather similar light and noise setting for the final performance. Referring to aesthetic pioneers such as Walther Ruttmann and Oskar Fischinger (Emons, 2012, 53; 70), we still believed in a symbiotic approach.
Certain characteristics of the lights and the electronic sound were again determined by aural input parameters. Compared to our first performance, the Max/MSP processing patches were of course optimized to better capture the expected frequency and gain ranges.
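The original Max/MSP patches are not reproduced here, but the underlying mapping can be sketched in a few lines of Python. The function names and threshold values below are hypothetical illustrations rather than part of our actual patch: an RMS gain is measured per audio frame, clamped into an assumed working range, and scaled to a light intensity.

```python
import math

def rms_gain(frame):
    """Root-mean-square amplitude of one audio frame (list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def map_to_light(gain, lo=0.01, hi=0.5):
    """Clamp the gain into the assumed range [lo, hi] and scale it
    linearly to a 0-255 light intensity. The range values are
    hypothetical stand-ins for the calibrated gain range of a patch."""
    clamped = min(max(gain, lo), hi)
    return round(255 * (clamped - lo) / (hi - lo))

# Two synthetic 440 Hz test frames: one quiet, one loud.
quiet = [0.005 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]
loud = [0.8 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]

print(map_to_light(rms_gain(quiet)))  # quiet frame stays below lo -> 0
print(map_to_light(rms_gain(loud)))   # loud frame exceeds hi -> 255
```

In a real patch the same idea would run continuously on the live microphone signal, so that every performer (or audience) sound directly shapes the lighting.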
All kinds of output signals were treated as equal impulse generators. We kept the idea of a self-triggering network consisting of both electronic and human senders and receivers. The presence of an audience during the final performance, however, additionally increased the number of potential trigger generators. Indeed, because a few people arrived at the performance slightly late, door noises are clearly audible on the video recordings. As the electronic system cannot distinguish between noise made by the performers and noise made by the audience, the latter automatically became part of the performance. We performers, too, remember being aware of the audience’s noises, which of course means that we somehow reacted to them.
The musical score
We originally developed quite an explicit concept of the musical score in our first submission. The plan was to design it in a person-specific way and to draw activity levels on graph paper. Furthermore, we considered using a manipulated, more intuitive set of Christian Wolff’s action-score symbols. By rehearsing several times in an improvised manner, however, we developed certain interaction patterns that made a graphic score redundant. To provide both an adequate amount of freedom and a loose temporal order, we agreed to move through five different stages, each implying different uses of sound, noise and light settings.
Although we had to re-conceptualize certain aspects of the performance owing to various issues, we achieved our philosophical, aesthetic and performance-related goals. These were: to make music in an interactive and improvised way in the tradition of Christian Wolff; to conceptually equate music, lights and noises, whether produced by the audience or the performers; to create network-like trigger signals that interact with each other; to influence and create visual and sonic signals through the extended use of electronic patches; and, of course, to have a good time.
Clarke, E. F. (2005). Ways of listening: An ecological approach to the perception of musical meaning. Oxford: Oxford University Press.
Emons, H. (2012). Für Auge und Ohr. Musik als Film oder die Verwandlung der Komposition ins Lichtspiel. Berlin: Frank und Timme.