final performance: technical details

The performance I presented as the final project involves an audio/visual instrument composed of a few different systems.

Audio

The audio processes are essentially embodied in a Max (Cycling’74) patch. It is a rather complex system and makes use of many non-native objects as well as one proprietary process (a compressor), so anyone interested in obtaining it is invited to contact me by e-mail for instructions.

labelgen

The first sound heard in the piece is the signal of a microphone attached to a wooden box; this signal is then processed by two sets of eight resonant filter banks, each featuring eleven filters and each independent from the others (banks can keep ringing while other banks are excited).

Consider the first set: each bank has fixed gains for its filters and a fixed fundamental frequency assigned to the first filter, but this first filter has a very low gain, so it is hard to perceive compared to the others. The remaining ten filters have variable frequencies that are, most of the time, aliased beyond the Nyquist frequency and thus reflected symmetrically back into the audible spectrum; this helps make the fundamental frequency of each bank imperceptible and contributes to the overall timbral complexity. The currently active bank is selected after each attack exceeding 2.5 % of full digital scale, while the frequencies are shifted after a certain number of these attacks; this number is controlled by the height of the performer’s left shoulder.

The second set works in a slightly different way: here the fundamental frequency of each bank is lower and more audible, and there is no aliasing, so the result is closer to standard modal processes; however, the fundamental frequencies across the banks are more spread out (55 to 220 Hz), so variety is preserved. The current bank is selected after each attack exceeding 2.0 % of full digital scale, and the frequencies are constantly moving according to a random signal, though its rate and scale are controlled by the height of the performer’s left shoulder.

For both sets, the Q factor of the filters is scaled according to their index (the higher the index, the higher the Q, although because of the aliasing a higher index does not mean a higher frequency) and globally multiplied by the value of the height of the performer’s shoulder. Finally, bank selection is not abrupt but happens via a smooth routing with a ramp time of 5 milliseconds: this feature had to be introduced to cope with more continuous signals coming from the microphone; for similar reasons, a V-shaped envelope with a total time of 10 milliseconds multiplies the audio input so that, when the frequencies of the first bank are shifted, its level is zero.
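To illustrate the two mechanisms doing most of the work here, a minimal Python sketch (this is not part of the actual Max patch; the sample rate is assumed and the attack threshold is the 2.5 % value mentioned above):

```python
import numpy as np

SR = 44100.0                      # assumed sample rate
NYQUIST = SR / 2.0

def fold_to_audible(freq):
    """Reflect a filter frequency pushed beyond Nyquist symmetrically back
    into the audible spectrum, as described for the first set of banks."""
    f = freq % SR
    return SR - f if f > NYQUIST else f

def detect_attacks(signal, threshold=0.025):
    """Indices where |signal| first crosses the threshold (2.5 % of full
    scale); each such attack selects the next active filter bank."""
    above = np.abs(signal) > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# e.g. a filter nominally set to 30 kHz actually rings at 14.1 kHz
print(fold_to_audible(30000.0))   # -> 14100.0
```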

A second, slightly less “natural” voice is a live granulation of the reverberation of the first voice – incidentally, a signal-driven, attack-responsive reverberation: in effect, an ever-changing space. Although it is controlled by the performer’s right shoulder, its sonic nature is more abstract and its gestures much less “organic”, most of the time producing crescendos that culminate in evenly spaced short grains.
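For readers unfamiliar with granulation, the idea is simply to chop a (here, reverberated) signal into short windowed grains and scatter them in time. A minimal sketch, with grain length and density as placeholder values of mine rather than the patch’s settings:

```python
import numpy as np

def granulate(buffer, sr=44100, grain_ms=40, n_grains=200, stretch=1.0):
    """Overlap-add randomly picked, Hann-windowed grains from `buffer`.
    `stretch` scales the output length relative to the input."""
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out_len = int(len(buffer) * stretch) + grain_len
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = np.random.randint(0, len(buffer) - grain_len)
        dst = np.random.randint(0, out_len - grain_len)
        out[dst:dst + grain_len] += buffer[src:src + grain_len] * window
    return out / max(1e-9, np.max(np.abs(out)))   # normalise
```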
A third voice is the feedback system which can be heard towards the end of the first section of the performance.

Interface

As mentioned above, many parameters are controlled by the movement of the performer’s shoulders. This is done using a pair of stretch sensors (available here: www.adafruit.com/products/519) attached to my pants:

IMG_0184

The other end of the electrical cable is then connected to a voltage divider circuit connected to an Arduino board, as well explained in this tutorial: learn.adafruit.com/thermistor/using-a-thermistor.
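As in that tutorial, the sensor forms one leg of the divider and the Arduino reads the midpoint voltage. A minimal host-side sketch of the conversion, assuming the sensor sits between the analog pin and ground with a fixed series resistor to 5 V (the resistor value and scaling range below are placeholders, not my actual values):

```python
# Illustration only: convert a raw Arduino ADC reading (0-1023) into the
# stretch sensor's resistance and then into a 0..1 control value.
SERIES_RESISTOR = 10_000.0   # ohms, assumed fixed resistor

def sensor_resistance(adc_value):
    """Voltage-divider equation: R_sensor = R_series * ADC / (1023 - ADC)."""
    adc_value = max(1, min(1022, adc_value))       # avoid divide-by-zero
    return SERIES_RESISTOR * adc_value / (1023 - adc_value)

def to_control(adc_value, r_rest=1000.0, r_stretched=2000.0):
    """Map resistance between two calibration points to a 0..1 value for Max."""
    r = sensor_resistance(adc_value)
    return min(1.0, max(0.0, (r - r_rest) / (r_stretched - r_rest)))
```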

The Max patch I used to interface with the Arduino is available here: playground.arduino.cc/uploads/Interfacing/Arduino2Max_Nov2012.zip.

I found this solution very effective compared to other body motion tracking systems, first of all because those can cost thousands of pounds and secondly because this one naturally provides physical feedback of the stretching force.

Visuals

I worked on a set of four televisions fed with an audio signal (scaled up 10,000 times) to make them flicker. In most cases I did this through their SCART plug:

IMG_0232

To do that I soldered some odd cables with a TRS jack on one end and a SCART plug on the other:

IMG_0237

Here’s a video of a test:

For previous experiments, some of which ended up with nice videos, see my blog: https://dmsp.digital.eca.ed.ac.uk/blog/ave2014/category/blogs/marcosblog/

Lissajous and Forbidden Motion with PS3 Controller

The final version of the audio-visual performance system used for this performance expanded upon the Lissajous Organ presented in Submission 1. I developed a second audio-visual instrument named Forbidden Motion. By running distorted, beat-based noise through a subtractive synthesis process similar to the Convolution Brothers’ ‘forbidden-planet’ and finally through Audio Damage’s EOS reverb, a rich, interesting sound was generated.
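In Max, forbidden-planet is essentially a hand-drawn spectral filter applied to the incoming signal. Purely as an illustration of that kind of subtractive, FFT-domain process (not the actual patch; the block size and gain curve below are placeholders of mine):

```python
import numpy as np

def spectral_filter(signal, curve, n_fft=1024):
    """Block-wise FFT filtering: multiply each block's spectrum by a drawn
    gain curve of length n_fft//2 + 1, then overlap-add the result."""
    out = np.zeros_like(signal)
    window = np.hanning(n_fft)
    hop = n_fft // 2
    for start in range(0, len(signal) - n_fft, hop):
        frame = signal[start:start + n_fft] * window
        spectrum = np.fft.rfft(frame) * curve
        out[start:start + n_fft] += np.fft.irfft(spectrum) * window
    return out

# e.g. keep only a narrow band of noise, as if drawn by hand
sr = 44100
noise = np.random.uniform(-1, 1, sr)
curve = np.zeros(513); curve[40:80] = 1.0   # hand-drawn-style gain curve
filtered = spectral_filter(noise, curve)
```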

Spectral-Motion

The high frequency sounds in this clip are a result of this process:

vimeo.com/92948778

A simple ioscbank~ (an interpolating oscillator bank) was also implemented to generate dense clusters of sine waves. Lastly, the ability to degrade the audio signal allowed for dirty, crunchy sonorities in keeping with the aesthetics of our cave theme.
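For readers without Max, an oscillator bank simply sums many sine partials. A minimal sketch of the idea (the number of partials and frequency range are placeholders of mine):

```python
import numpy as np

def osc_bank(freqs, amps, duration=2.0, sr=44100):
    """Sum a bank of sine oscillators with the given frequencies and amplitudes."""
    t = np.arange(int(duration * sr)) / sr
    out = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
    return out / max(1e-9, np.max(np.abs(out)))

# e.g. 64 densely packed partials between 2 and 6 kHz
freqs = np.random.uniform(2000, 6000, 64)
amps = np.random.uniform(0.2, 1.0, 64)
cluster = osc_bank(freqs, amps)
```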

I chose to use visuals typical of static on analog TVs for this part of my system.

AnalogVisuals

By modulating brightness controls via audio input, these visuals responded to the audio output of this part of my system.
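In practice this mapping amounts to an envelope follower driving a brightness parameter. A minimal sketch (the smoothing coefficient and brightness range are assumptions of mine, not the values used in my patch):

```python
def follow_envelope(samples, smoothing=0.99):
    """One-pole envelope follower: track the audio level over time."""
    env, out = 0.0, []
    for s in samples:
        env = smoothing * env + (1.0 - smoothing) * abs(s)
        out.append(env)
    return out

def to_brightness(env_value, floor=0.1, ceil=1.0):
    """Map the envelope value (roughly 0..1) to a brightness control value."""
    return floor + (ceil - floor) * min(1.0, env_value)
```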

Analog-visuals

2 for 1 control

In developing a way to control my Lissajous Organ and Spectral Motion together, I stumbled upon a system of control with which I could control sixteen systems of equal or greater size than the ones I used.  By packaging the data coming out of a controller and routing this data efficiently, simple controls can be mapped to many layers of parameters.
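The actual implementation is in Max, shown in the screenshots below; purely as a sketch of the idea (every name and the routing table here are placeholders of mine), the pattern is a selector that chooses the active destination layer, after which the same few controls address whichever layer is active:

```python
from typing import Callable, Dict

class Router:
    """Fan one controller out to many parameter layers via a gated routing table."""
    def __init__(self):
        self.destinations: Dict[int, Dict[str, Callable[[float], None]]] = {}
        self.active = 0

    def register(self, layer: int, control: str, setter: Callable[[float], None]):
        self.destinations.setdefault(layer, {})[control] = setter

    def select(self, layer: int):
        self.active = layer                     # the "macro routing" step

    def send(self, control: str, value: float):
        handler = self.destinations.get(self.active, {}).get(control)
        if handler:                             # gated: only the active layer listens
            handler(value)

router = Router()
router.register(0, "stick_x", lambda v: print("Lissajous frequency ratio", v))
router.register(1, "stick_x", lambda v: print("Spectral Motion cutoff", v))
router.select(1)
router.send("stick_x", 0.42)
```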

packaging:

Packaging_HI_Data_For_Heirachy

Macro routing:

Example of routing

Inside R1 Buffer routes- Gated Buffer routing:

Lock-Level Buffer gates

An unexpected result of using a system structured in this fashion was the ability to combine both visual systems together on the fly.

VisualsTogether

An important feature of this system of control was freedom from my computer screen.  This allowed for more gesture-driven and intimate interactions with ensemble members, like the ones seen here:

vimeo.com/92936792

Unfortunately, the visuals I projected into the audience during this clip were not captured; they were of the analog TV type discussed earlier.

More detail about the structuring philosophy of this system can be found in my Submission 3 blog post.

Audio-visual spatialization

I chose to use a projector capable of movement so as to utilize the space in which we performed.  Here you can see it projecting onto the floor:

vimeo.com/92936795

In addition to the audio spatialization, visual space was also planned out so as to accentuate the performance space and leave room for each other’s visuals to stand out.

vimeo.com/92947352

Problems encountered

Unresolved differences regarding simple workplace etiquette led to overwhelming emotional stress and, finally, extreme verbal harassment the day before our final performance.  Although this course appears to be structured hierarchically so that supervisors can be consulted when needed, no coherent plan for conflict resolution resulted from that structure.  Even when approached multiple times with the same issue, the advice I repeatedly received from my supervisor regarding our difficulties with diversity was, “You cannot make anybody do anything.”  This advice efficiently dissolved the fragile bonds that existed between individuals from different backgrounds.  I lament not being able to overcome these issues and believe the paradigm of conflict resolution in the DMSP course needs to be reconsidered and restructured.

AVE Video Documentation


In our final performance in the University of Edinburgh’s “Inspace”, we chose to model our audio-visual narrative on Plato’s “Allegory of the Cave.”  Individual performers were allowed a large degree of freedom to interpret the story on a personal level. The ensemble placed particular emphasis on designing instruments that cogently connected audio and visual aspects in ways that would remain convincing and engaging for audience and performers alike.

Live Performance:

Also on YouTube: Audiovisual Ensemble Live Performance

Documentary:

Also on YouTube: A Documentary of Audiovisual Ensemble Project

The AVE 2014 ensemble aspired to avoid clichés associated with the theme and audio-visual performance in general. The “story” served mainly as a loose timeline to introduce transitions, build new textures, and to divide content more fluidly.  Strictly for rehearsal purposes, the following outline was developed.  Although not originally intended for this function, the outline can serve as a kind of “score” or road-map for examining our ideas on a more literal level.

Score_To_Follow

Bass guitar DMX instrument (Submission 2)
My goal was to construct a highly responsive, expressively dynamic and diverse system that could be improvised in real time with a “traditional” instrument (a standard bass guitar), playing electronic sounds that trigger and modulate specific DMX light movement in a way that directly connects sound and vision (light).

Timø's Instrument
Submission 1 and my blog describe at length my initial progress, observations, trials and tribulations.
dmsp.digital.eca.ed.ac.uk/blog/ave2014/category/blogs/timosblog/
Post-Submission 1 Progress
Besides handling the bulk of the administration work, organizing and booking equipment, coordinating with the venue, acting as liaison between DMSP groups, etc., I worked on refining my audio-visual instrument and rehearsing.

Lamps
To add diversity and flexibility to the visual aspects of my instrument, I designed a method to incorporate standard household lighting.  I purchased a 4-channel DMX dimmer pack, rewired five lamps to use Euro-plug IEC adaptors and added an additional par-can stage lamp.

Mapping Light
Although the lamps and lights were directly controlled by my bass guitar’s audio input signal, I required a way to make state changes unfold over time.  By automating envelope parameters, which could be triggered automatically or manually from my setup, I achieved greater control.  This in turn helped keep the lighting changes in our performance more varied.
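Purely as an illustration of the idea (the breakpoint values, channel number and the send_dmx stand-in are placeholders of mine, not my actual setup), a triggered envelope sweeping one dimmer channel might look like this:

```python
import time

def send_dmx(channel, value):
    print(f"DMX ch {channel} -> {value}")        # stand-in for the real DMX output

def run_envelope(channel, breakpoints, steps_per_seg=50):
    """breakpoints: list of (time_s, level 0-255); linear interpolation between them."""
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        for i in range(steps_per_seg):
            frac = i / steps_per_seg
            send_dmx(channel, int(v0 + (v1 - v0) * frac))
            time.sleep((t1 - t0) / steps_per_seg)
    send_dmx(channel, breakpoints[-1][1])        # land exactly on the final level

# e.g. fade a lamp up over 2 s, hold, then fall to a dim glow
run_envelope(channel=1, breakpoints=[(0, 0), (2, 255), (4, 255), (7, 30)])
```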

Mapped Envelopes

Refined DMX control rack

Mapping Sound
Sonically, my instrument needed to be flexible and able to produce a wide range of content over the course of the performance: sub-bass, low bass, glitched rhythmic passages, percussive events, angelic synth pads, thunder, abstract mid-range noise, and more.

Splitting the audio input across three tracks, each corresponding to a different frequency-banded instrument voicing, I built a series of multipurpose effect racks, which I mapped to several hardware controllers.
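As a rough sketch of the band split (the crossover frequencies are placeholders of mine; in practice the split happened inside the effect racks rather than in code):

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def three_band_split(signal, low_cut=120.0, high_cut=2000.0, order=4):
    """Split a signal into low / mid / high bands with Butterworth filters."""
    low = sosfilt(butter(order, low_cut, btype="lowpass", fs=SR, output="sos"), signal)
    mid = sosfilt(butter(order, [low_cut, high_cut], btype="bandpass", fs=SR, output="sos"), signal)
    high = sosfilt(butter(order, high_cut, btype="highpass", fs=SR, output="sos"), signal)
    return low, mid, high
```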

Audio Effect Rack

Additionally, since I was able to convert my audio input to MIDI data, I built a multilayer instrument rack that allowed me to select and switch between combinations of virtual instruments and modulate their parameters in real time.
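The audio-to-MIDI conversion itself was handled inside my setup; purely to illustrate the idea, a naive pitch-to-MIDI detector might look like this (the frame length, search range and periodicity threshold are assumptions of mine):

```python
import numpy as np

def detect_midi_note(frame, sr=44100, fmin=40.0, fmax=1000.0):
    """Naive autocorrelation pitch detector returning the nearest MIDI note,
    or None if the frame looks unpitched. Expects a frame of ~2048+ samples."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    if corr[lag] < 0.3 * corr[0]:          # weak periodicity: treat as noise
        return None
    freq = sr / lag
    return int(round(69 + 12 * np.log2(freq / 440.0)))
```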

Instrument Rack

Roadmap
After the group devised a theme, we divided the performance into sections.   This was helpful as I was able to automate and interpolate between states by launching scene changes flexibly as needed.

Scene Selection

Theoretical Context

Please refer to my essay on spectromorphology in Submission 3 for theoretical context and references to existing scholarship in the field.

Allegoric Irony of the Documentation
It is worth mentioning that, in my opinion, the video footage of the performance failed to capture its full scope. It is ironic and disappointing that the footage is only a ‘shadow’ of the live experience.  Although we were using three separate cameras, the stage, performers, lighting and live video exceeded the boundaries of what was recorded.

We were advised to have another group document the performance.  I had meetings days ahead of time to discuss how and what needed to be captured, but the results were, for the most part, unsatisfying.  Although I am grateful for the help, if we had known in advance, we could have made slight adjustments to compensate.  With more carefully chosen camera angles, a readjustment of props and a slight repositioning of the performers, the documentation could have captured a much better perspective and more completely conveyed what we were trying to achieve.

Video montage

In audio-visual ensemble work it is always hard to decide whether the sound designers or the visual designers should make the next decisive audiovisual step first. There were two options: the sound designers write the music first, having seen the relevant sections of unedited visual content, or they compose it once the visual designers have already edited it. Our AVE group decided to present the work as a live performance, which requires both sound and visual designers to manipulate their instruments in real time rather than stand aside while already-integrated work is projected. The first option therefore seemed more feasible for us, requiring the visual designers to develop the montage from the sound designers’ existing score.

This procedure involved the sound designers (Russell and Timo) drawing up the ‘architectural’ plan itself, which would then determine how the audiovisual montage by the visual designers (Jessamine and me) would operate. Being responsible for the video part in our group, I had to create a video montage to match Russell and Timo’s sound cues as well as the overall theme. Here are the main steps of this procedure.

1.  I made a series of video clips matching the cave theme, using software such as After Effects and Premiere.

video clips_01 Trapped in the cave  vimeo.com/92611381

video clips_02 Go out of the cave  vimeo.com/92611782

video clips_03 Outside of the cave  vimeo.com/92612143

video clips_04 Outside of the cave  vimeo.com/92612228

video clips_05 Outside of the cave  vimeo.com/92612279

video clips_06 Outside of the cave  vimeo.com/92612428

video clips_07 Go back to the cave  vimeo.com/92612493

video clips_08 Struggle in the cave  vimeo.com/92612641

video clips_09 Struggle in the cave  vimeo.com/92612759

video clips_10 The ending  vimeo.com/92612824

2.  I had to memorise and be very familiar with all the video clips, which could be manipulated and transformed at any point. With a sophisticated Max patch (thanks to Martin) and an external controller I could modify many parameters of the videos in real time, such as frame mangling, brightness, contrast, blur, colour, scale, etc. Moreover, all the parameters could be set to be audio-controlled, which is of crucial importance in an audio-visual ensemble live performance.

Control the video using Max

Max patch interface

3.  I paid attention to detail during rehearsals and listened again and again to the recorded sound of the rehearsal videos, until I could imagine a series of images that corresponded with the sound, and then realised the effect I had in mind in the next rehearsal. Through this process the matching of sound cues with the corresponding sections of visual material produced, to some extent, a satisfactory harmony.