Spectromorphology and Audio-Visual Performance

Real-time audiovisual performance is governed by a multitude of variables that include, but are not limited to, performance model, technology integration and aesthetic goals. During the course of our Audio Visual Ensemble’s digital media studio project, time and time again we were asked, “What makes a convincing audiovisual performance?” Although this is a multifaceted topic with no single definitive answer, I will offer suggestions as to how spectromorphology can be used to analyze strategies for developing more convincing real-time performances.

To start, it is important to realize that in performance situations involving interaction between performers, the audience and their medium, what viewers perceive is not necessarily the same experience as for those performing. What may seem entertaining to the performers may be misunderstood or uninteresting to the audience, especially if the performance lacks a clearly defined theme and significant dynamic change over time. Since my background is in sound composition and performance, I have been able to draw parallels with similar concerns in electroacoustic music that help direct this discussion.

Like language, audiovisual content is constructed from extrinsic-intrinsic building blocks, and the ability to interpret its message relies on the ability to communicate meaning effectively. Denis Smalley introduced the concept of spectromorphology as a tool to assess criteria for selecting sound material and organizing structural relationships that are linked to recognizable shared experiences outside of music (Smalley 1997). Smalley explains, “How composers conceive musical content and form - their aims, models, systems, techniques, and structural plans - is not the same as what listeners perceive in that same music.” This raises the question: what are universal shared experiences, and how might these ideas be conveyed in the realm of audio-vision?

Smalley believed that for performances which rely heavily on technology, where gesture and the corresponding results are not immediately apparent, the ability to adequately convey meaning is impaired by the inability to directly link action to a corresponding result (Smalley 1997). He suggests that we should try to ignore the technology used in making music, and in this case the performance, by recognizing that gesture and the relationships between source and cause are more substantial factors in communicating ideas effectively (Smalley 1997).

In a post-digital era, this is no easy feat as technology manifests on almost every level. Perhaps the paradigm has shifted since Smalley developed his initial ideas, and it is now more salient to balance technology in ways that convey universal shared experience. I propose that technology alone is not the underlying concern; the greater problem is relying on technology without having developed a well-defined theme or aesthetic goal tied to significant meaning or gesture.

Spectromorphology is concerned with motion and growth processes and points to the trans-contextual interactivity of intrinsic and extrinsic relationships for answers. These intrinsic-extrinsic relationships need not be limited to the sonic or visual events themselves but can be extended to the gesture that defines them (Smalley 1997). Smalley refers to this concept as source bonding, which relates sound structures to each other on the basis that they appear to have symbolic or literal shared origins. Applied at both higher and lower levels of structure, physical or metaphoric gestures that clearly exemplify corresponding sonic and visual counterparts can be used to more clearly convey meaning, purpose and intent (Smalley 1997). Translating these ideas to the audio-visual realm means linking sound and vision in ways that dynamically delineate cause and effect. By extrapolating intrinsic perspective and translating meaning into extrinsic constructs, both the performers and the audience gain deeper insight into the function and significance of not only the main theme but also the individual contributing factors that make up the performance.

Whilst meaning plays an important role in disclosing intention, it cannot be convincingly conveyed without placing it into a larger structural context.  Smalley explains, “Motion and growth have directional tendencies which lead us to expect possible outcomes.”  He in turn breaks these into seven characteristic motions that relate to spectral relationships of sound production (Smalley 1997).

Motion Chart

Although it is beyond the scope of this brief essay to elaborate on each of these, it is important to note that these structural relationships can be applied to help establish dynamic trajectory in ways that strengthen structure to better convey meaning over time. If applied to audio-visual performance, we can devise precise ways of mapping movement to states of tension and release, based on relationships dictated by transitioning events in conjunction with their starting points. Extrapolating on this idea, if we identify events according to gesture, movement and meaning, we can fit them into a larger dynamic structure and manipulate their placement to either fulfill or break expected outcomes. This in turn can be used to create a more systematic method and aid in building more elaborate and well-thought-out performance structures.

Stepping back from our performance and having time to contemplate its strengths and limitations, it has been beneficial to consider concepts pertaining to spectromorphology and how they could be applied to our work. To a degree, because combined audio-visual performance was new to us, the group was short-sighted in its approach, placing too great an emphasis on developing the technology used therein. Although we settled on a theme, specific details at times failed to correlate directly to any shared experience outside of the sounds and visuals themselves. The level to which our performance could grow, change and evolve may be improved with more attention to how we guide transitions through directed movement, energy and gesture. While spectromorphology does not provide a universal set of answers, it does afford some valuable tools for diagnosing structure and strategy.

References:

Smalley, Denis. 1997. “Spectromorphology: Explaining Sound-Shapes.” Organised Sound 2 (2): 107–26.

Bass guitar DMX instrument_Submission 2

Bass guitar DMX instrument
My goal was to construct a system that is highly responsive, expressively dynamic and diverse, which can be improvised in real time using a “traditional” instrument (standard bass guitar) to play electronic sounds that trigger and modulate specific DMX light movement in a way that directly connects sound and vision (light).

Timø's Instrument
Submission one and my blog describe in detail my initial progress, observations, trials and tribulations.
dmsp.digital.eca.ed.ac.uk/blog/ave2014/category/blogs/timosblog/
Post-Submission 1 Progress
Besides handling the bulk of the administration work, organizing and booking equipment, coordinating with the venue, acting as liaison between DMSP groups, etc., I worked on refining my audio visual instrument and rehearsing.

Lamps
To add diversity and flexibility to visual aspects of my instrument, I designed a method to incorporate standard household lighting.  I purchased a 4 channel DMX dimmer pack, rewired five lamps to use Euro plug IEC adaptors and added an additional par can stage lamp.

Mapping Light
Although the lamps and lights were directly controlled by my bass guitar’s audio input signal, I needed a way to map state changes over time. By automating envelope parameters, which could be triggered automatically or manually via my setup, I achieved greater control. This in turn helped keep the lighting changes in our performance more varied.
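For illustration only, the sketch below shows the kind of breakpoint-envelope logic involved (the actual automation lived in Max for Live devices inside Live); the breakpoint times, levels and the scaling to a DMX dimmer are hypothetical.

```python
# Minimal sketch of a triggerable breakpoint envelope driving a DMX dimmer.
# Times, breakpoint values and the scaling are illustrative only.

def envelope_value(breakpoints, t):
    """Linearly interpolate a list of (time, value) breakpoints at time t."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

def dmx_dimmer(level):
    """Scale a 0.0-1.0 envelope level to an 8-bit DMX value."""
    return max(0, min(255, int(level * 255)))

# A slow swell followed by a decay, expressed in seconds and 0.0-1.0 levels.
swell = [(0.0, 0.0), (4.0, 1.0), (10.0, 0.2)]

for t in (0.0, 2.0, 4.0, 7.0, 10.0):
    print(t, dmx_dimmer(envelope_value(swell, t)))
```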

Mapped Envelopes
Refined DMX control rack

Mapping Sound
Sonically, my instrument needed to be flexible and able to produce a wide range of content over the course of the performance, ranging from sub bass, low bass, glitched rhythmic passages and percussive events to angelic synth pads, thunder and abstract mid-range noise.

Splitting the audio input across three tracks, each corresponding to a different frequency-banded instrument voicing, I built a series of multipurpose effect racks, which I mapped to several hardware controllers.
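The actual splitting was done with effect racks inside Live; purely as a sketch of the idea, the snippet below divides one signal into three bands using assumed crossover frequencies (120 Hz and 1 kHz), which are not the settings used in the racks.

```python
# Sketch of splitting one bass signal into three frequency-banded voices.
# Crossover frequencies are illustrative, not the actual rack settings.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 55 * t) + 0.3 * np.random.randn(sr)  # stand-in bass input

def band(sig, lo=None, hi=None):
    """Return the portion of sig between lo and hi Hz (4th-order Butterworth)."""
    if lo is None:
        sos = butter(4, hi, btype='lowpass', fs=sr, output='sos')
    elif hi is None:
        sos = butter(4, lo, btype='highpass', fs=sr, output='sos')
    else:
        sos = butter(4, [lo, hi], btype='bandpass', fs=sr, output='sos')
    return sosfilt(sos, sig)

sub   = band(signal, hi=120)            # sub-bass voicing
mids  = band(signal, lo=120, hi=1000)   # rhythmic/distorted material
highs = band(signal, lo=1000)           # pads, leads, noise
```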

Audio Effect Rack

Additionally, since I was able to convert my audio input to MIDI data, I built a multilayer instrument rack that allowed me to select and switch between combinations of virtual instruments and modulate their parameters in real time.
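The audio-to-MIDI conversion itself was handled inside Live, but the underlying pitch-to-note mapping is standard; the sketch below shows only that mapping, with the test frequencies chosen as examples.

```python
# Sketch of the standard frequency-to-MIDI-note conversion; the pitch
# detection was done inside Live, so only the mapping step is shown.
import math

def freq_to_midi(freq_hz, a4=440.0):
    """Convert a detected fundamental frequency to the nearest MIDI note number."""
    return int(round(69 + 12 * math.log2(freq_hz / a4)))

print(freq_to_midi(41.2))   # low E on a bass guitar -> MIDI note 28
print(freq_to_midi(110.0))  # open A string (A2) -> MIDI note 45
```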

Instrument Rack

Roadmap
After the group devised a theme, we divided the performance into sections.   This was helpful as I was able to automate and interpolate between states by launching scene changes flexibly as needed.

Scene Selection
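The scene changes themselves were launched as Live scenes; purely as a sketch of the interpolation idea, the snippet below blends between two hypothetical lighting states.

```python
# Sketch of interpolating between two lighting "scenes" (lists of DMX channel
# values) over a transition; the scene data here is hypothetical.

def interpolate_scene(scene_a, scene_b, position):
    """Blend two equal-length DMX scenes; position runs from 0.0 (A) to 1.0 (B)."""
    return [round(a + (b - a) * position) for a, b in zip(scene_a, scene_b)]

dark_scene   = [0, 0, 0, 10]        # e.g. all lamps off, one dim par can
bright_scene = [255, 180, 90, 255]  # e.g. full wash for a climactic section

for step in range(5):
    print(interpolate_scene(dark_scene, bright_scene, step / 4))
```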

Theoretical Context

Please refer to my essay on spectromorphology in submission 3 for theoretical context and reference to existing scholarship in the field.

Allegoric Irony of the Documentation
It is worth mentioning that, in my opinion, the video footage of the performance failed to capture its full scope. It is ironic and disappointing that the footage is only a ‘shadow’ of the live experience. Although we were using three separate cameras, the stage, performers, lighting and live video exceeded the boundaries of what was recorded.

We were advised to have another group document the performance. I had meetings days ahead of time to discuss how and what needed to be captured, but the results, for the most part, were still unsatisfying. Although I am grateful for the help, had we known how it would be framed, we could have made slight adjustments to compensate. With more carefully chosen camera angles, readjustment of props and a slight repositioning of the performers, the documentation could have captured a much better perspective and more completely conveyed what we were trying to achieve.

Audio Visual Instrument- Bass Lamp

Concept

To develop sonic occurrences that communicate a direct correlation to their visual counterparts, perceptually inseparable in intent, and to acknowledge silence and the absence of visual stimuli as a necessary and effective contrast.

Gaining inspiration from Tim Ingold’s article “Against Soundscape” and discussions with Martin Parker, I became fascinated with the idea of manipulating light, rather than image, to see how this might prove compelling in its own right.

Tim Ingold observes, “It is of course to light, and not to vision, that sound should be compared. The fact however that sound is so often and apparently unproblematically compared to sight rather than light reveals much about our implicit assumptions regarding vision and hearing, which rest on the curious idea that the eyes are screens which let no light through, leaving us to reconstruct the world inside our heads, whereas the ears are holes in the skull which let the sound right in so that it can mingle with the soul.”

Even with our eyes closed it is still possible to perceive flashes, flickering or the presence of light and gain some indication that there is activity and movement. Light can be projected onto surfaces, broken by other objects, used to induce shadows, or add subtle touches begetting mood or ambience.

My Role

Construct a system that is highly responsive, expressively dynamic and diverse, which can be improvised in real time using a “traditional” instrument (standard bass guitar) to play electronic sounds that trigger and modulate specific DMX light movement in a way that directly connects sound and vision (light).  Submission one outlines my progress, observations, trials, tribulations, and aims to discuss plans for further development.

DMX setup

When I began this course, I had no previous experience with DMX and needed to conduct an extensive amount of research to overcome numerous technical issues and get my system working. Step one was to get four VISAGE 0493 LED lights controlled remotely through Ableton Live running DMX Max for Live devices. Most of the preliminary documentation is explained in detail on my AVE blog.

DMX First Run

DMX Progress

DMX… a bit further

Findings:

The easiest way to bridge the connection from Live to the DMX LED lights was to send MIDI data out of an audio/MIDI interface and into the school’s DMX console, which converts MIDI to DMX. After modifying Matthew Collings’ Max for Live patches to accommodate four lights, I was to some extent able to control them from within Ableton Live.

The easiest way is not necessarily the best way, as the DMX lights performed sluggishly and with considerable latency. The lights would at times remain on when switched off and were unpredictable and difficult to control precisely. Additionally, they would flicker intermittently and pulse on and off of their own accord. When I spoke with Collings, he confirmed having had the same issue, which he had not been able to resolve.

Matt M4L DMX Devices

Although I experienced limitations in only being able to control two channels with the DMAX devices via the Enttec DMX USB Pro, the setup was much more responsive, less latent, did not flicker, and handled more accurately. Seeking perfection, I went back to troubleshooting the Enttec box and, after much tinkering, discovered that the issue was with Olaf Matthes’s Max/MSP external ‘dmxusbpro’. I was able to overcome the channel limitations by using a beta abstraction by David Butler, imp.dmx, which uses jitter matrices to store, read and write data rather than reading straight MIDI values. Using the imp.dmx help file, I turned this into a 27-channel (four lights, 7 channels each) Max for Live device.

T-Ø_DMX M4L Device Presentation

T-Ø impdmx Max

Up to this point, the Enttec setup has been more stable and the device functions somewhat as intended. I did, however, need to limit the number of channels to 27 instead of 512 to accommodate a higher frame rate, so as not to overload the device when modulating large amounts of control data.
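I have not reproduced imp.dmx here, but the frame-based approach it encourages can be sketched roughly as follows: hold every light’s channels in one flat array and send the whole frame at a fixed rate, rather than emitting a message per channel change. The per-light channel layout and frame rate below are assumptions, not the VISAGE 0493’s documented channel map.

```python
# Illustration of the frame-based idea behind the reduced-channel device: keep
# every light's channels in one flat array and send the whole frame at a fixed
# rate. The 7-channel layout and frame rate are assumptions.
import time

NUM_LIGHTS = 4
CHANNELS_PER_LIGHT = 7
frame = bytearray(NUM_LIGHTS * CHANNELS_PER_LIGHT)  # one slot per DMX channel

def set_light(light_index, values):
    """Write one light's channel values (e.g. dimmer, R, G, B, ...) into the frame."""
    start = light_index * CHANNELS_PER_LIGHT
    frame[start:start + len(values)] = bytes(values)

def send_frame(frame):
    print(list(frame))  # stand-in for handing the frame to the DMX interface

set_light(0, [255, 255, 0, 0])  # light 1: full dimmer, red
set_light(2, [128, 0, 0, 255])  # light 3: half dimmer, blue

FRAME_RATE = 30  # frames per second; fewer channels make higher rates feasible
for _ in range(3):
    send_frame(frame)
    time.sleep(1 / FRAME_RATE)
```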

Audio Setup

The way in which a performer interacts in real-time performance adds another dimension to the visual component. I aim to hide my light-emitting, distracting computer from audience view and have toyed with the idea of performing behind a screen back-lit by DMX lights (see video on DMX improv with shadows). Although there is still much work to be done refining a setup that will allow me to do so, I have put together a working model that uses audio-to-MIDI conversion to control virtual instruments. The bass’s audio input can additionally be used as another voice, processed and manipulated in real time.

Equipment:

Ableton Live 9
PUSH Controller
Korg NanoKontrol
Roland EV-5 Foot controller
Max For Live
Bass Guitar
SoftStep Foot Controller
NI Virtual Instruments

Audio Setup

Instrument voicing:

CH 1- Bass Audio Input (Amp simulated bass sounds, Rhythmic Clicks/Beats, Distorted)
CH 2- Sub Bass (Sustained or Arpeggiated)
CH 3- Pads, Leads, Atmospheric Noise

T-Ø AVE instrument Live

Mapping Sound to Light

As a means to bridge visual and sonic events, I have experimented with different methods of mapping audio frequency to DMX control data.
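The mappings themselves were built with envelope follower devices in Live; the sketch below shows the general follower-to-DMX idea in Python, with the attack/release times, gain and test signal chosen arbitrarily.

```python
# Rough sketch of an envelope follower driving a 0-255 DMX value, analogous to
# the Live devices used; the smoothing times, gain and block size are assumptions.
import numpy as np

sr = 44100
attack, release = 0.01, 0.15   # seconds; illustrative smoothing times

def follow(audio_block, prev_level):
    """One-pole envelope follower over a block of samples."""
    a_coef = np.exp(-1.0 / (attack * sr))
    r_coef = np.exp(-1.0 / (release * sr))
    level = prev_level
    for x in np.abs(audio_block):
        coef = a_coef if x > level else r_coef
        level = coef * level + (1 - coef) * x
    return level

def to_dmx(level, gain=2.0):
    """Scale follower output (roughly 0.0-1.0) to an 8-bit DMX value."""
    return max(0, min(255, int(level * gain * 255)))

block = 0.5 * np.sin(2 * np.pi * 55 * np.arange(512) / sr)  # stand-in bass audio
print(to_dmx(follow(block, prev_level=0.0)))
```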

EX 1- Three Lights, Three Voices, Three Colors

In the first example, using the DMX console setup, I’ve daisy-chained and mapped three different colored lights (red, blue, white). Each responds to a different audio source in Live, routed through an envelope follower that is mapped to control an individual light’s DMX values (0-255).

Rhythmic- Red > Light 1 (right)
Sub- Blue > Light 2 (middle)
Noise Lead – White > Light 3 (left)


Findings: Although this scenario might be interesting for a short period of time, it did not communicate an expansive dynamic range of expression. However, projecting onto an object or wall might be worth further investigation.

EX 2- Screen, Lights, Proximity

Wanting to explore greater dimension and possibility, I brought in a huge 10 by 10 foot back projection screen borrowed from LTSTS (no easy feat to transport or assemble).

Screen

Due to its size, we were limited to conducting experiments in a bright, noisy atrium in Alison House. Below are two video examples of a three-light setup without audible instrumentation.

Lights_Screen_No Sound_Close

Lights_Screen_No Sound_Distant

Findings: In this well-lit environment, when the LEDs are off you can see a grey background caused by the screen itself. A dark space is needed for this to be optimally effective. Additionally, the effects of the lighting change with proximity. It could prove interesting to stagger light distances at different stages of the performance.

EX 3- Giving Sounds Color and Movement

Using a Sony handycam, we filmed in a lit atrium. I am using two synced lights and the improved Enttec DMX Pro setup, controlled by three different instrument voices. Each instrument voice is assigned a color, and the control is driven by its individual audio output, linked to a corresponding envelope follower and mapped to DMX color values.

Angelic Pad- White
Rhythmic Bass – Red
Sub- Blue

EX 4- Combined Voicing, New Permutations

The following is an example of how the basic colors and voicings work in conjunction with one another, generating new effects and color combinations, while still being able to return to their original states (red, white, blue) when the voices are played individually.

EX 5- Adding Shadows

Improvisation combining and switching between voices and lighting control while experimenting with effects produced by shadows.

Findings: Using the Enttec DMX Pro and a projection screen yields a higher-quality dynamic range of expression. The lights are significantly more responsive, but the setup still requires tweaking to generate a greater range of fade values (e.g. contrasting quiet to loud, and creating a pulsing effect for sustained sounds).
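One possible direction, sketched below, is to apply an exponential curve to the follower output to widen the quiet-to-loud contrast, and to add a slow LFO for the pulsing effect on sustained notes; the exponent and LFO rate are assumptions rather than settings from my rack.

```python
# One possible way to widen the fade range and add a pulse on sustained sounds;
# the exponent and LFO rate are assumptions, not settings from the actual rack.
import math

def fade_value(level, exponent=2.5):
    """Exponential scaling exaggerates the quiet-to-loud contrast of a 0.0-1.0 level."""
    return int(255 * min(1.0, level) ** exponent)

def pulsed_fade(level, t, rate_hz=2.0, depth=0.4):
    """Modulate a sustained level with a slow sine LFO to create a visible pulse."""
    lfo = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t))
    return int(fade_value(level) * lfo)

print(fade_value(0.3), fade_value(0.9))          # quiet vs. loud input
print([pulsed_fade(0.8, t / 10) for t in range(5)])  # pulsing sustained note
```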

Our camera distorts when recording sounds linked to the strobe parameter. This phenomenon creates an effect in itself and could possibly be captured, projected, and even fed back and looped, as it looks quite interesting.

Shadows cast from behind the screen create an intriguing result and may be useful for extending meaning, depth and character if carefully executed or choreographed. Additionally, experimenting with placing myself with my bass guitar, or another performer, behind the screen might help tie together ideas for a more integrated and engaging audio visual instrument.

Critical Analysis and Moving Forward:

Although these experiments show progress and potential, there is still much work to be done on both the audio and visual fronts. At this stage, as it stands and on its own, I do not envision my instrument being dynamic or compelling enough to sustain meaning and interest for extended periods of time. As my group has been working on experiments individually, it has been difficult to directly assess how this will function as part of a larger performance. Moving forward, I aim to develop my instrument further in its own right as well as work towards integrating it as a subsection of the ensemble.

Plans for further development:

Dial in specific sounds, effects and performance techniques that optimize sound generation and DMX response, and that work well standalone as well as with the rest of the ensemble.

The audio-to-MIDI conversion needs to be refined. Tracking bass frequencies is no easy task, and I often get unexpected and falsely triggered notes.

Set up foot controllers to aid in modulating sonic and visual elements. Up to this point, these have not been implemented.

Tweak and scale modulation sources and envelope followers to be more dynamic with mode, strobe and fade values.  Develop precise control mechanisms that will allow for better ways of expressing relationships between sound and silence.

Investigate incorporating a dimmer pack and setting up additional lighting sources (lamps) that can be placed around the stage or in the audience. One idea includes switching various audio effect processing on/off, or changing instrument voicing, to trigger corresponding light states (on, off, dim, bright, flicker).

Work as a group to quickly identify a unifying theme.  Create a map of how our performance will move throughout time and devise how we can keep it engaging throughout its duration. Schedule regular group rehearsals in an effort to better understand how we operate as a dynamic and cohesive unit.

DMX… a bit further.

I spent the last week developing a system for mapping DMX lighting based on MIDI automation and frequency content.

First, I made a concerted effort to get Martin’s Enttec DMX USB Pro working with the DMAX Max for Live objects. The Enttec installers and documentation are set up mostly to accommodate Windows computers and are not necessarily easy to navigate for Mac users. The DMAX Max for Live devices require that you install Olaf Matthes’s dmxusbpro external and an additional driver.

dmxusbpro external – “In order to work with this object you have to install the Virtual Com Port driver for the interface. It will not work with the FTD2XX driver. The latest driver can be found at www.ftdichip.com/.”

Additionally, you will need to know your computer’s com port to run the dmxpro helpfile. On a Macintosh system, it should look something like: /dev/cu.usbserial-ENT3IHSX.
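For anyone who prefers to sidestep the Max external entirely, the Enttec box can also be driven straight over that serial port. The sketch below uses pyserial and frames a “send DMX” packet according to Enttec’s published protocol as I understand it (start byte 0x7E, label 6, little-endian length, DMX start code 0, end byte 0xE7); the port path is the one quoted above and will differ on other machines.

```python
# Hedged sketch of talking to the Enttec DMX USB Pro directly over its serial
# port with pyserial, bypassing the Max external; framing per Enttec's
# published "send DMX packet" message as I understand it.
import serial  # pip install pyserial

PORT = "/dev/cu.usbserial-ENT3IHSX"  # the com port mentioned above; yours will differ

def send_dmx(port, channels):
    """Send one DMX frame (a list of 0-255 channel values) to the USB Pro."""
    payload = bytes([0]) + bytes(channels)  # DMX start code + channel data
    length = len(payload)
    packet = bytes([0x7E, 6, length & 0xFF, (length >> 8) & 0xFF]) + payload + bytes([0xE7])
    port.write(packet)

with serial.Serial(PORT, baudrate=57600) as port:
    send_dmx(port, [255, 0, 0, 0] + [0] * 24)  # e.g. first light full on channel 1
```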

Unfortunately, I was only able to get the first two channels, red and green, to work. Despite the DMAX hub monitor indicating that the other channels were receiving proper messages, these signals were not being transmitted to the lights. I’ve emailed David Butler (DMAX) to see if he has any suggestions.

Below is a video of what I was able to get working. Using one light, I mapped a single drum loop and split its audio frequency content: red for lows (kick) and green for mids/highs (snare).

At this point, it’s difficult to say why the other channels aren’t functioning correctly with DMAX. There are too many pieces to the puzzle: Max externals, Max version compatibility, 5-to-3-pin cables, Max for Live devices, etc. With looming deadlines, I reverted to using the DMX console and modified Matt Collings’ DMX Max MIDI patches to work with all four lights inside Ableton Live. In the video below, I’ve daisy-chained and mapped three lights. Each responds to different frequency content based on audio signals, which are then mapped to control values.

My first impression is that the DMX MIDI devices, routed through the console, are more sluggish than the Enttec DMX Pro. This makes sense, as latency plays a huge factor in timing: the DMX Pro allows a direct connection to the lights, whereas the console needs to receive MIDI and then translate those signals to DMX. For the time being, I’ll continue working with the console as it has proven to be the most reliable.

FYI, another option is to use the American DJ myDMX 2.0. Cameron MacNair has written an informative blog post about getting it up and running.

dmsp.digital.eca.ed.ac.uk/blog/actionsound2014/2014/02/18/dmx-setup-configuration-and-aesthetics/

 

DMX Progress

Russell, Shuman, and I met up with Christos on Wednesday to get the DMX rig working. Using the DMX console, standard MIDI out of my Fireface 400 and Matt Collings’ DMX Max Patches, we were able to successfully control the lights. Detailed info on how to set everything up can be found on the Studio wiki. (I haven’t totally given up on the previously mentioned USB DMX boxes but the console has proven to be very straightforward in comparison.)

Today I received Max for Live versions of Matt’s Max patches and, as a first experiment,  plan to use filters and envelope followers to map audio input based on frequency to drive the controls.  Other ideas include mapping events to change over time… function objects, pattrstorage, automation, clip envelopes?

***Special thanks to Christos for taking the time out to share his expertise.***


Creating a video control network…

 

An attempt to connect our computers wirelessly and through a router for video control. Once again, not as easy as we had hoped. Slight success with the help of Martin and the gang.

The lesson I learned: it will take additional time and effort to make this work properly in a way that is functionally meaningful. I’m not opposed to getting this to work but have some questions. Do we need this? If so, why do we need it? What specific parameters do we want to control? Is there a more straightforward way to do it?

DMX First Run

Russell, Marco and I met up last week to try to connect the DMX lighting rig. After three hours we still were not able to get much of anything working other than turning on the lights themselves. Challenged by 3- and 5-pin cable mismatches, driver issues, and an overall inability to figure out how best to direct our efforts, we made little progress and have decided to call in Christos next week to help give some perspective. Fingers crossed.

For the setup, I’d like to use the Enttec USB DMX Pro, 4 x VISAGE 0493 LED Lights, DMAX (Max For Live device) and dmxusbpro object by Olaf Matthes.

The DMX USB Pro is an industry-standard interface for connecting PCs and Macs to DMX512 lighting networks.

“DMaX is a system for controlling DMX devices using Ableton Live with Max For Live. It’s modular and expandable, consisting of a central hub device which collates information and communicates with hardware, and numerous fixture devices which contain interfaces for specific pieces of equipment. Support for the Enttec hardware device is provided through use of Olaf Matthes’ Max/MSP external ‘dmxusbpro’. It is available to purchase from the above link, although it is advisable to download the demo beforehand to ensure compatibility with your system.”

“The dmxusbpro external for Max gives access to the Enttec DMX USB Pro interface and allows to send or receive DMX 512 data.”

After reading through the manuals and support documentation, I found mixed signals as to which driver we need to install to make the external work. The driver installation process requires hacking into the Terminal in one instance, and if you don’t choose wisely, one driver overrides the other. Frightening. Additionally, the dmxusbpro object documentation states, “Some people have reported it works in Max6 as well, others says it doesn’t work in Max6.”

Are we on the right path or barking up the wrong tree?