Submission 1: For 1, 2 or 3 People

Performance

Reflection of Performance & Description of Our Interpretation

During the performance, the team’s interpretation of Wolff’s piece used a number of methods and techniques discovered over a series of rehearsals and practice performances. One idea that emerged was to use our voices in place of instruments, and this became the focus of our interpretation, as we found the voice offered more immediacy than each individual’s instrument. We had also encountered network difficulties in Alison House when networking the Max MSP patch used for viewing the scores, and realised the performances were being affected because the performers needed to focus on the monitor. With no access to a foot switch, the performers had to turn the pages digitally by pressing the space bar, which became a major distraction when they also needed to coordinate with each other.

Here is a short list of the methods we used during the performance:

  1. Voice, with a single run-through of the first 28 gestures of the score, in two groups of three performers (with printed gestures).
  2. Each performer’s chosen instrument, with a single run-through of the first 28 gestures of the score, in two groups of three performers (with printed gestures).
  3. Joo’s solo interpretation with his Jitter patch in Max MSP, in a single run-through of the first 28 gestures of the score.
  4. Each performer’s chosen instrument, with a single run-through of the 28 gestures as divided and organised through rehearsals for three performers per group (approximately nine gestures per performer, displayed with John’s Max MSP patch).
  5. Experimental performance with voice, in two groups of three performers: each performer picks one of the 28 gestures from the table and flips it face down once it has been performed; once all 28 gestures are face down, any face-down gesture may be flipped back up and performed. This was done for two runs.
  6. Experimental performance with voice, involving the whole team (the same method as the fifth performance, but flipping through the gestures twice as many times).
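The card-flipping procedure in methods 5 and 6 amounts to repeated random selection without replacement over the 28 gestures. A minimal sketch of the logic (the function and variable names are ours, not part of the performance materials):

```python
import random

def gesture_rounds(num_gestures=28, rounds=2):
    """Yield gesture numbers for the requested number of run-throughs."""
    for _ in range(rounds):
        face_up = list(range(1, num_gestures + 1))  # all cards start face up
        random.shuffle(face_up)                     # random picks, no repeats
        for gesture in face_up:
            yield gesture                           # perform, then flip the card over

# two complete runs: every gesture appears once per run, in random order
sequence = list(gesture_rounds())
```

Each run-through is guaranteed to cover all 28 gestures exactly once, which matches the face-up/face-down bookkeeping done with the printed cards.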

All these methods explored the immediacy of each performer, and how we interact not only with each other’s sound but also in movement. The team decided that the initial voice interpretation delivered the performance best of all the methods, and it was selected as our main interpretation for this submission. With our voices, the team was able to perform immediately whichever of the 28 gestures came up at random, and the performers were able to focus more on the sound they wanted to produce and on how they would interact with each other.

Aside from the voice performance, the team also did a solo interpretation, with Joo and his Max MSP patch, to examine the differences from the other performances the team had done. The outcome was intriguing, as there were more pauses compared to the other runs through the 28 gestures.

 

Theoretical Context: Our Understanding of the Work

In the 1950s Christian Wolff, along with other members of the New York School, began to produce ‘open’ works that allowed for much more freedom in performance. Open works varied in style, though works such as For 1, 2 or 3 People focused on the interpretation of the performer, producing a result over which the original composer of the score had little control. Scores such as this could be described as ‘determinate with respect to their composition’ but ‘indeterminate with respect to their performance.’[1] This concept of the prompt, something given as a type of bait to be taken up by the subject and used in the formation of the artwork, is symptomatic of many aspects of twentieth-century art. What is interesting about this style of open score, though, is that the audience is not taken into consideration; it is the performer that Wolff wanted as the centre of focus: ‘…it’s written for performers. I never gave a thought to an audience. I was just interested in how it would be played, and what happened after that was out of my hands.’[2] The formalist art critic Clement Greenberg once said that what he liked best about music criticism was that it was all about the score, though this style of composition throws the weight off the score and onto the performance. One can take each gesture on the score sheet as a prompt. It suggests some parameters, though these are a departure point, and how they are played is often left to the instinct of the player. For example, the performer may be told to play a long note, but how long is determined not by the score but by the performer. Sometimes the performer is told to play ‘anything’, which could be their voice, the walls, anything. It must be stated, though, that this is not a case of ‘anything goes’: how a group interprets the piece must fit within the parameters that Wolff does set, and at times these are explicit. Though the flow of the piece is quite open, and its sequence can be decided by the performers, it is in the individual gestures that rules are set out; how these are executed, however, is very much in the court of the players.

What Wolff is doing here is allowing room for experimentation and, ultimately, serendipity. His interest lies in what the performer makes of the piece: ‘A composition must make possible the freedom and dignity of the performer. It should allow both concentration and release. No sound or noise is preferable to any other sound or noise.’[3] There is a strange atmosphere that can take place in a performance of this piece, or more appropriately, in the development of a performance of it. Wolff is interested in the working relationship between composer and performer through the medium of the score.[4] A score, he said, was ‘…a kind of a beginning, indicating directions and conditions under which music can be made.’[5] His score allows the performer to create the work. There is no right or wrong presentation of the performance; however, Wolff does want the performers to work intuitively in their execution of the score. Key to the work is that it is scored for 1, 2 or 3 people, not musicians. It is his intention that this score could free people from the restraints of training, allowing them to enjoy performance for the sake of its action; it is a ‘social activity.’


[1] Cox/Warner, 2004.

[2] Krukowski/Wolff, 1997.

[3] Cox/Warner, 2004.

[4] Chase/Thomas, 2010.

[5] Ibid.

Explorations of the Score: Instrument Choices

Instrument choices – John


During the rehearsals and preparations for the performance of Wolff’s “For 1, 2 or 3 people”, I investigated a number of different instruments. Each instrument presented differing challenges and advantages in realising the aural gestures of Wolff’s score.

Korg Monotron –

The Monotron has a very simple performance interface and offers easily accessible methods of producing both the timbral and non-timbral changes required by the score. The instrument itself is, however, very small, and this inhibits expressive playing to some degree, as does the one-control-knob-per-function layout.

I had considered addressing these issues by developing a larger control surface (using my 50cm ribbon controller, for example) and multiplexing the controls so that one controller would address multiple synthesis functions. The planned system for this was a Nintendo Wii Nunchuk controller producing the multiplexed control voltages for the Monotron via an Arduino board. The time remaining before the performance, and my experiences with other instruments, made this solution less suitable.

Piano (Yamaha Grand, Lecture Room A) –

The piano presented a very direct way of producing the scored pitches required to perform the piece. However, it was not easy, and often impossible, to produce timbral changes within the performed gestures, and non-timbral changes were limited to note length.

There was the possibility of physically interacting with the strings themselves while playing and this would have opened up a far greater range of sonic control. Ultimately, I feel that my lack of experience with the instrument restricted my ability to perform a broad enough range of gestural control to properly realise the piece. Given this and the lack of availability of a piano in the performance space, the instrument was not selected for the final performance.

Bowed Dulcimer

The bowed dulcimer is a self-designed and self-built instrument that offers the opportunity to pluck, bow or hammer its four strings, arranged in two distinct (upper and lower) registers. The instrument was capable of a wide range of gestural control, but as it was still a prototype and in the process of being completed, it was not used for the rehearsals or the final performance. It is, however, the instrument I plan to develop for submission two.

Korg Prophecy

The Prophecy is a DSP-based, first-generation physical modelling synthesiser, offering a number of instrument models and a highly performative playing interface. I focused on the brass instrument models (trombone and French horn) while practising the Wolff piece and found that the combination of keyboard (right hand for pitches) and log/ribbon controller (left hand for articulation) was well suited to producing the musical gestures required.

This instrument seemed by far the best suited, of those I had tried, to playing the piece and it was my intention to perform with it. My experiences in the last two rehearsals were to challenge this decision.

Upright Electric Bass (unbranded)

The upright bass offered a very direct connection with playing the musical gestures, and one I was very familiar with as a long-time plucked-string player (guitar and bass). The addition of the bow broadened the range of sound and modification possible. It required some exploration of extended technique to produce a range of timbral changes within the scored gestures, but the instrument seemed more than capable of providing these.

The bass, with its lack of user interface or control surface, provided a very immediate and instinctive instrument for playing Wolff’s work. The limitations on its possible note lengths, i.e. string decay and bow length, made the sonic output seem noticeably more consistent than that of many of the other instruments investigated. Given this, I chose to play the bass for the final performance.

Voice

In our final rehearsals we attempted to perform the piece using our voices alone and I feel that these were the most successful in terms of producing a cohesive realisation of the piece. The voice is a highly flexible instrument, capable of an almost infinite range of timbral and non-timbral variation.

It is the voice’s capacity for instantaneous expression of the gestures that allowed much tighter timing of the synchronisation elements of the score. This, combined with the restricted possible note lengths, in this case restricted by the range of human breath, made the timing and general feel of the performance seem more successful. We agreed to perform vocal versions of the piece, as well as our instrumental versions, for the final performance.

Personal Instrument

My plan for the instrument for submission two is to produce an acoustic instrument that can be augmented by sound controlled DSP processes within Max/MSP.

To this end I designed a four-string dulcimer that would allow a range of excitation possibilities: plucking, bowing and hammering. I also wanted the opportunity to extend the instrument’s aural range with various articulation options: hand contact with the strings, slides, string preparations, etc.

The instrument has two separate bridges and has four strings with a scale length of 27cm and four with a scale length of 68cm. The shorter string sections are pitched using individual sliding bridges and the longer sections are pitched using four guitar machine heads. Each section can be played independently and simultaneously.

Audio is produced by a series of piezo transducers: a larger pair positioned beneath the curved bridge (to allow easier bowing) of the long string section, and four smaller transducers below the individual bridges of the shorter section (allowing individual audio outputs for each string).

Instrument choices – Terence

For my instrument I designed a sampler inside Ableton Live, and also incorporated some synths using Native Instruments’ Reaktor. My intention was to use the sounds I would normally use in my own compositions in performing For 1, 2 or 3 People; from my reading of Wolff’s intentions, I think it was important for me to bring my own style of music to the piece. I created two white-noise channels in Reaktor, both with filter, compression and bitcrushing plugins. I also set up a sampler with the same plugins, using percussive elements of samples I have recorded, and included loops of recorded fire processed through distortion.

I was aware that the noise of my sounds would need to be controlled in order to play with the intensity of the other players, so I set parameters to control plugin levels and used a MIDI controller to trigger samples and control levels during performance. For performing my own music this kind of setup works well, though in the context of this piece I found we all performed better the less we were fixated on our instruments, which is why the vocal performances were so successful. For the next part of the project, I would like to look into developing a bowed guitar using piezo contact microphones fed through my Live setup, so that I can be more gestural in my execution of a score. Freedom from screens was something which appeared to free up our performance.

Instrument choices – Boss

After analyzing the 10 gestures from the first page of Christian Wolff’s “For 1, 2 or 3 People”, I decided to use the kazoo as my instrument for the performance because:

1. The kazoo is fairly easy to play: it does not have a complicated interface, and it does not require advanced movement skills, unlike instruments such as the saxophone, trumpet or guitar.

2. Although the composition itself does not require much advanced musical skill, the performer is required to be familiar with his or her instrument in order to trigger the different actions and details of sound demanded by the instructions of the score.

3. Without much mechanism within the structure of the instrument, the kazoo projects sound directly from my mouth, so it is easy to control sound gestures. With the kazoo I can perform and interact with other performers freely because of the instrument’s simple design; both pitch and dynamics can be controlled by humming, so as a performer I do not need to be concerned with physical technique and can focus on the detail of my own sound and on the sound gestures of the other performers.

4. With the kazoo I am able to follow the simple instructions of this piece, such as playing short and long notes, changing the direction of the sound, playing legato, or coordinating with the sound of other performers or of the environment I am in (pitch, dynamics, gesture, envelope).

5. I personally like the sound of the kazoo when performed with the other instruments. The acoustic sound of a kazoo has a very distinctive character, quite different from instruments built on sampled and synthesised sound, which creates diversity in the sound as a whole when performed alongside them.

Issues found with the instrument after rehearsals

However, one of the major weaknesses I encountered in playing the kazoo is that it is hard to pitch the notes indicated in the score unless I use a guiding sound to find the correct note, which I always did in rehearsal. This distracted me from coordinating with the other performers. The kazoo could be an effective choice for performing the first few pages of the score, as the musical gestures there are fairly simple, but it may become inappropriate towards the last few pages, which involve much more complicated gestural instructions.

Moreover, after rehearsing with the kazoo, I found it hard to change the instrument’s timbre: techniques such as covering and uncovering its hole with a finger, changing the position of the mouthpiece, or poking its membrane with a small stick do not change the timbre drastically.

Further progress

Regarding the relationship between action and sound (Alexander Refsum Jensenius, 2007), one challenge is how to design a sound model that matches the action of the performer, so that not only can the performer play the instrument intuitively, but the audience’s expectation of a relationship between performer and sound is also met; for example, we expect to hear an impulsive sound when someone plays a keyboard instrument.

I have been interested in building my own electronic instrument that gives me space for interaction between movement and sound. I am looking at the game controller called Gametrak (image_001), which has two movable strings, each tracked on x, y and z axes. I intend to match those flexible movements in Max MSP by mapping the values from the three axes to different parameters to control pitch, amplitude, envelope, oscillator or audio effect processing, or to trigger audio files. It could be an effective and interesting instrument for interactive musical performance, playing with unique sound models and approaching new musical gestures.
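The planned axis-to-parameter mapping can be sketched as a small translation function. This is a hypothetical Python illustration of the idea only; the axis ranges, parameter names and scaling are assumptions, and the actual mapping would be built in Max MSP:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp a raw axis value and map it linearly into a parameter range."""
    value = min(max(value, in_lo), in_hi)
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

def map_gametrak(x, y, z):
    """Map one string's x, y, z axes (normalised 0.0-1.0) to sound parameters.

    The parameter choices below are illustrative, not the finished design.
    """
    return {
        "pitch_hz": scale(x, 0.0, 1.0, 110.0, 880.0),  # x axis -> oscillator pitch
        "amplitude": scale(y, 0.0, 1.0, 0.0, 1.0),     # y axis -> output level
        "lfo_depth": scale(z, 0.0, 1.0, 0.0, 0.5),     # z axis -> vibrato depth
    }
```

The same `scale` helper would be reused for each of the six axes (two strings, three axes each), with each axis routed to a different synthesis parameter or trigger.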

Instrument choices – Terry

Choosing the instrument from the impression of the piece:
As the first submission of our DMSP project Action/Sound, we were asked to interpret a graphic score and examine the piece through our performance; in this instance, the piece was Christian Wolff’s For 1, 2 or 3 People (1964). After a first read through the score, my initial impression was that Wolff’s piece was not much different from other graphic scores I had come across in the past: graphically, the score seemed very open to interpretation, allowing enough freedom for the performer to perform the piece creatively. However, as we analysed the instruction pages in detail as a group, the piece soon revealed itself to be highly prescriptive, with its own unique language and fairly strict rules, these complex instructions being built around a language that Wolff created for the piece.

Although Wolff created a unique language for his piece, some of the notations share the same or similar meanings with traditional Western notation, such as the black and white notes indicating note length, the symbols for loud and quiet dynamics, and the use of the musical staff and the sharp (#) and flat (b) symbols, as demonstrated in the image below:

Some of these instructions also indicate that the piece was written for stringed instruments, such as the symbols instructing the performers to use plucking and pulling techniques, which I personally relate to string instruments.

Interpreting a complex piece such as For 1, 2 or 3 People, with its unique language, meant the performer would need to learn a new language from the beginning. Given the constrained time we had, it seemed wise to focus on learning this new language rather than on building a new instrument, especially as the goal was to understand the piece itself, so I decided to use an acoustic-electric guitar for this occasion.

The reason for choosing a guitar as my primary instrument was mostly my understanding of the piece as written for stringed instruments, as mentioned above, but the final decision also evolved from experience gained through practice and rehearsals. The initial idea was to create an augmented guitar; however, instead of augmenting the instrument by altering its hardware, I wanted to explore the performance itself and find new ways of creating sound with an instrument without adding or attaching objects to it, or removing any part from it, which I call an “organic approach”.

This idea of an “organic approach” was inspired by Marina Rosenfeld’s works Emotional Orchestra (2003) and Sheer Frost Orchestra (2006); video clips of both works are included below:

vimeo.com/27389650

www.youtube.com/watch?v=QfCtHNSUq-c

Both works avoid hacking the hardware of the instrument, instead discovering the possibility of different sounds by performing existing instruments differently. In the Emotional Orchestra, the performer line-up was a mixture of professional musicians and amateurs with no knowledge of the instruments they were playing, and this very mixture created a new sound for the instruments. The Sheer Frost Orchestra, on the other hand, explored the instruments by triggering their sound with objects without embedding anything into the instrument itself. So I decided to use an instrument I am comfortable with to experiment with methods of performing, one that is also able to cope with the instruments used by the others in the group, especially their volume and dynamics, and I found the acoustic-electric guitar a suitable choice for this submission. Beyond volume control, the acoustic-electric guitar also enables me to adjust the texture of its sound with the built-in equalizer, meaning I am able to react immediately to the score.

Instrument choices – Burhan

Using Projected Light to Interpret Christian Wolff’s For 1, 2 or 3 People

Christian Wolff’s major piece For 1, 2 or 3 People is an experimental work that allows performers to interact and perform with an instrument of their choice. Improvisation by each of the performers is at the heart of this piece. It has a dynamic flow and a free spirit, so that the combined effect of the performance can be of any energy level.

In order to make the performance visually appealing and to enhance its impact on the audience, lighting may be used. The main idea is to superimpose lighting onto specific sounds. Lighting allows three basic parameters: 1) colour, 2) illumination intensity (a fader), and 3) instant flash. A MIDI controller can be connected to a projector to control the three parameters, and a sound profile may be attached to each parameter controller, i.e. illumination intensity to sound pitch/loudness, colour to sound distortion, and instant flash to bangs/short notes. This may not allow the full piece to be played, as the piece is primarily aimed (as perceived by the group) at stringed instruments. However, certain notes may be identified as feasible for this setup, for example non-timbral, short and loud notes. Nevertheless, the piece is aimed at up to three people, and a couple of members of the team are looking into augmented guitars. Hence, the performer with the proposed instrument may take the specified notes; this will make the interaction between the performers more prominent and also add a visual impact.
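The proposed sound-to-light mapping can be sketched as a small translation step from sound features to controller messages. This is a hypothetical illustration of the idea only; the feature inputs, MIDI CC numbers and value ranges are our assumptions, not a finished setup:

```python
def light_messages(loudness, distortion, is_short_note):
    """Translate a sound profile into (cc_number, value) messages for the rig.

    CC assignments are illustrative: 1 = intensity fader, 2 = colour,
    3 = instant flash. Inputs are assumed normalised to 0.0-1.0.
    """
    messages = [
        (1, int(min(loudness, 1.0) * 127)),    # loudness drives illumination intensity
        (2, int(min(distortion, 1.0) * 127)),  # distortion amount drives colour
    ]
    if is_short_note:
        messages.append((3, 127))              # short/loud notes trigger an instant flash
    return messages
```

In practice these messages would be produced continuously from the live audio analysis and sent to whatever device drives the projector.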

Readily available Max/MSP/Jitter patches may be modified to achieve control over the two devices in parallel. Rehearsals and fine-tuning of the setup will determine the set of playable notes.

Instrument choices – Joo

Performing my instrument:

A performer should hold a red light bulb in one hand and a blue light bulb in the other. The performer must be careful to stay within the range of the Jitter program’s colour detection; detection can fail if performers move backwards or too far to either side. The performer should wear plain white or black clothes, gloves and a facial mask, for clearer detection of the red, blue and green colours. For the same reason, there should be a plain white or black back screen behind the performer.

A performer should turn on the red light bulb first, putting it in the right position for the desired pitch and amplitude. By moving the hands smoothly, the performer can express glissando, and can then make timbral changes (vibrato/LFO level and distortion level) by moving the blue light bulb. For example, if a performer holds the blue light bulb towards the top right and the red light bulb towards the bottom left, he can make a distorted, low-frequency droning sound. If a player wants to control noise, a green light bulb alone is enough; this produces a different texture of sound from that made by holding the red and blue light bulbs. Each player with a green light bulb can control the amplitude and the noise pattern by moving the light along the X-axis and Y-axis.

The “Keyb” mode is activated by pushing the “Freq” button. In this mode, two performers work better: one can play the external MIDI keyboard for the fixed notes while the other makes timbral changes and panning effects using the red and blue light bulbs. In this mode, the noise amplitude is controlled by the wheel on the external keyboard, while the noise pattern changes automatically by interpolation.

An Explanation of My Patch

I devised a way of using body motion to make sound: a body-motion-tracking method. Performers can make sound by moving their bodies along the X-axis and Y-axis of a frame defined by a camera. Moreover, to control more parameters of sound, I used a second method: colour tracking. Different colours are set to control different parameters, so one performer can control at least four different parameters of sound at one time with two differently coloured objects held in the hands. In this performance, red, blue and green are used because these are the basic elements of the RGB system, making them easier to detect. For the same reason, namely better colour detectability, light bulbs were chosen as the “colour instruments”.

To achieve this goal, I made a brief but multi-functional patch in Max/MSP/Jitter. The performers are detected by a camera, and colour motion is detected by the object ‘jit.findbounds’. To obtain clearer detection, the object ‘jit.brcosa’ enables performers to control the brightness, saturation and contrast of the incoming video signal, as well as the maximum and minimum range of colour detectability.

There are three types of sound available: the first is a sine wave generator, the second a distorted sine wave produced by waveshaping synthesis, and the last a noise sound.

The X-axis of the red object controls the pitch of the sine wave while its Y-axis controls the amplitude level; the X-axis of the blue object controls the level of the vibrato effect while its Y-axis controls the level of distortion. In addition, with the green object, a player can control the noise sound: X-axis for noise pattern and Y-axis for noise amplitude.
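As a sketch of this mapping, the bounding box reported for each colour can be reduced to a normalised centre point and scaled onto the sound parameters. This is a hypothetical Python illustration of the logic, not the Max/MSP patch itself; the frame size and parameter ranges are assumed:

```python
FRAME_W, FRAME_H = 640, 480  # assumed camera frame size

def centre(bounds):
    """Normalised (x, y) centre of a (left, top, right, bottom) bounding box."""
    left, top, right, bottom = bounds
    return ((left + right) / 2 / FRAME_W, (top + bottom) / 2 / FRAME_H)

def map_colours(red_bounds, blue_bounds):
    """Reduce the red and blue boxes to the patch's four main parameters."""
    rx, ry = centre(red_bounds)
    bx, by = centre(blue_bounds)
    return {
        "pitch_hz": 110.0 + rx * (880.0 - 110.0),  # red X  -> sine-wave pitch
        "amplitude": 1.0 - ry,                     # red Y  -> level (top of frame = loud)
        "vibrato": bx,                             # blue X -> vibrato (LFO) level
        "distortion": by,                          # blue Y -> waveshaping amount
    }
```

In the real patch the bounding boxes come from ‘jit.findbounds’, one detection pass per colour, and the green object would be handled the same way for noise pattern and amplitude.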

To compensate for the difficulty of finding a position that produces an exact pitch, I made a second mode, enabled by clicking the “Freq” button, which then turns into “Keyb”. When “Keyb” is activated, a performer can choose an exact pitch using an external keyboard or the internal virtual keyboard inside the patch. The X-axis of the red object then controls the stereo panning of the sine wave, while that of the blue object controls the stereo panning of the distorted sound; the Y-axis of the red object controls the amplitude, while that of the blue object controls the vibrato level.

The sound generated by the red and blue objects passes through a few sound effects, including delay, flanger, EQ and reverb. All of these effects can be bypassed.

Why I Chose the Colour-Tracking Motion Method

There is a dynamic element in Christian Wolff’s graphic score, focused on the interaction between the performers. Wolff gave performers a large selection of choices within fixed time spans by way of a ‘cueing’ technique, in which one player’s playing is determined or influenced by another’s; chains of action and reaction are thus made in the form of improvisation. In this piece, performers also have a large choice of pitches, with both limited and unlimited durations. The note intensity ranges widely, from pianissimo to forte. Performers are asked to make timbral and non-timbral changes at least once at a time, and are sometimes required to end a sound according to another player. So performers not only realise their own potential for making music, but must also stay alert to each other. This is why the dynamic energy and tension between performers can be sustained throughout a performance of the piece.

I focused on this dynamic energy in realising the graphic score. Even though performers do not have to make big gestures while playing their instruments, emotional expression through body motion accompanies every sound that is made and played. This is what led me to the body-motion and colour-tracking methods described above.

Explorations of the Score: Technical Aspects

The final performance took place in the atrium of Alison House on Thursday the 14th of February 2013.

Instruments played:

Acoustic guitar (internal pickup)

Kazoo (Shure SM58 dynamic microphone)

Colour motion sensor instrument (Max/MSP)

Sampler plus DSP effects (Ableton Live)

Korg Monotron

Upright electric double bass (internal pickup) + Line 6 Bass POD XT Live amplifier-modelling FX

Voices (Shure SM58 dynamic microphones)

The instruments were mixed using a Mackie 1202 VLZ mixer with a TC Electronic FireworX Multi Effects Processor for a little reverb.

The performances were recorded with Ableton Live.

Three laptop stands were used to hold the score playing laptops during the performances of the work.

Exploration of the Score: Score Tools


The score tools were developed to allow the work to be successfully rehearsed and performed within the given time frame.

Gesture sheets.

Each of page one’s 28 score gestures was edited into an individual A6 sheet complete with explanations of the performance instructions. The sheets allowed the gestures to be physically manipulated and sequenced, and also aided the group’s learning of the gesture symbols. Intrinsic instructions were displayed in bold, while the extrinsic instructions required for group synchronisation were displayed in italics.

Score players

The score players were developed to allow the score sequences to be automated and to allow experimentation with networked communication between players. Max/MSP was used to produce the score player patches.

The first score player allowed each performer to see their current gesture and also the current gestures being played by the other two performers. As a player selected their next gesture, this information was transmitted simultaneously to the other two players, along with the number of gestures performed.
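The messaging just described can be sketched as follows. This is a hypothetical reconstruction in Python; the original patches used Max/MSP networking objects, and the JSON message format, port number and function names here are illustrative assumptions only:

```python
import json
import socket

PORT = 9000  # assumed port shared by the three score-player laptops

def make_update(player_id, gesture, count):
    """Encode one score-player update as a JSON datagram payload."""
    return json.dumps({"player": player_id,
                       "gesture": gesture,
                       "count": count}).encode()

def send_update(sock, peers, player_id, gesture, count):
    """Send the newly selected gesture and running count to the other performers."""
    payload = make_update(player_id, gesture, count)
    for host in peers:  # the other two laptops
        sock.sendto(payload, (host, PORT))

# usage sketch:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_update(sock, ["192.168.0.11", "192.168.0.12"], "player1", 12, 5)
```

Each laptop would also listen on the same port and update its display whenever a peer’s datagram arrives.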

As this system was explored, it became apparent that the performers had barely enough time to concentrate on their own gesture symbols, and that displaying the other players’ symbols, while interesting, was not very useful during performance.

A second version of the player was developed, this time displaying only the other performers’ gesture numbers and positions within the score. By this time, however, our experimentation with the score players had led to the realisation that the need to watch the screen was inhibiting our performance of the piece. This was in part due to the highly technical and detailed nature of some of Wolff’s performance instructions, and might not have been so problematic with a more intuitive notation system. Following this, the group returned to experimenting with the gesture sheets and with performances based on improvisational selection of gestures.

The Final Performance:

After experimenting with many different types of performance, we came to the conclusion that the use of our voices produced the most effective performance. In using our voices we were really able to develop the social connection that Wolff wanted between the performers. While there were merits to the instruments we used, there was not enough time to develop our skills in using them as a cohesive group. Vocal performances freed us of the machines, allowing us to focus on the gestures and on each other.

59770607

Future Projections:

Group Networked Score

The group intends to produce a networked, collaborative notation system capable of allowing players of conventional and unconventional instruments to perform structured improvisations together. This first submission has given us a feel for how to perform together, where our strengths and weaknesses lie, and what is achievable in a small time frame, and there is now more scope for experimentation and development.

Resources/Appendix: 

Bibliography:

Works cited:

Licht, Alan. 2007. Sound Art: Beyond Music, Beyond Categories. Rizzoli International Publications.

Kelly, Caleb. 2011. Sound: Documents of Contemporary Art. MIT Press.

Cox, Christopher and Warner, Daniel. 2004. Audio Culture: Readings in Modern Music. Continuum International Publishing.

Krukowski, Damon and Wolff, Christian. 1997. ‘Christian Wolff’, BOMB, No. 59, pp. 48-51.

Chase and Thomas. 2010. Changing the System: The Music of Christian Wolff. Ashgate.

Works consulted:

Wolff, Christian / Patterson, David. 1994. ‘Cage and Beyond: An Annotated Interview with Christian Wolff’, Perspectives of New Music, Vol. 32, No. 2, pp. 54-87.

Chase, Stephen / Gresser, Clemens / Wolff, Christian. 2004. ‘Ordinary Matters: Christian Wolff on His Recent Music’, Tempo, Vol. 58, No. 229, pp. 19-27.

Walker Smith, Nicola. 2001. ‘Feldman on Wolff and Wolff on Feldman: Mutually Speaking’, The Musical Times, Vol. 142, No. 1876, p. 24-27.

Websites:

www.tate.org.uk/context-comment/articles/experimental-fields-light-and-shadow

www.technologyreview.com/view/428515/how-to-steer-sound-using-light/

www.photonics.com/Article.aspx?AID=36760

vimeo.com/27389650

www.youtube.com/watch?v=QfCtHNSUq-c