Lissajous and Forbidden Motion with PS3 Controller

The final version of the audio-visual system used for this performance expanded upon the Lissajous Organ presented in Submission 1.  I developed a second audio-visual instrument named Forbidden Motion.  By running distorted, beat-based noise through a subtractive synthesis process similar to the Convolution Brothers' 'forbidden-planet' patch and finally through Audio Damage's EOS reverb, I generated a rich, interesting sound.

Spectral-Motion

The high-frequency sounds in this clip are a result of this process:

vimeo.com/92948778

A simple ioscbank~ was also implemented to generate dense clusters of sine waves. Lastly, the ability to degrade the audio signal allowed for dirty, crunchy sonorities in keeping with the aesthetics of our cave theme.
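For anyone curious what the oscillator bank is doing under the hood, it is simply a sum of sine oscillators with per-voice frequency and amplitude. Here is a minimal numpy sketch of the idea; the frequencies and amplitudes are arbitrary placeholders, not the values used in performance:

```python
import numpy as np

SR = 44100           # sample rate
DUR = 2.0            # seconds of output
N_VOICES = 64        # number of sines in the bank

t = np.arange(int(SR * DUR)) / SR
rng = np.random.default_rng(0)

# Per-voice frequency and amplitude, as ioscbank~ would receive them
freqs = rng.uniform(200.0, 8000.0, N_VOICES)
amps = rng.uniform(0.1, 1.0, N_VOICES)
amps /= amps.sum()   # normalise so the summed bank stays within [-1, 1]

bank = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
```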

I chose visuals resembling analog TV static for this part of my system.

AnalogVisuals

These visuals responded to the audio output of this part of my system: the audio signal modulated their brightness controls.
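The patch itself isn't reproduced here, but the underlying mechanism is an envelope follower: track the amplitude of the audio and use it as a brightness value. A minimal sketch of that idea (the attack and release times are illustrative guesses, not the patch's actual settings):

```python
import numpy as np

def envelope_follower(audio, sr, attack_ms=5.0, release_ms=120.0):
    """One-pole envelope follower: fast attack, slower release."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros(len(audio))
    level = 0.0
    for i, x in enumerate(np.abs(audio)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

# One brightness value in [0, 1] per video frame at 25 fps:
# brightness = np.clip(envelope_follower(sig, 44100)[::44100 // 25], 0, 1)
```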

Analog-visuals

2 for 1 control

In developing a way to control my Lissajous Organ and Spectral Motion together, I stumbled upon a control scheme that could handle sixteen systems of equal or greater size than the ones I used.  By packaging the data coming from a controller and routing it efficiently, simple controls can be mapped to many layers of parameters.
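The patches below show the actual Max implementation; as a rough illustration of the principle (the class and the lock-level mechanics here are invented for the sketch), a single physical control can address many parameter banks when a separate "lock level" gates where its packaged data lands:

```python
class HierarchicalRouter:
    """One controller, many subsystems: a lock level gates which
    parameter bank receives the packaged (control, value) data."""

    def __init__(self, n_levels=16):
        self.banks = [dict() for _ in range(n_levels)]
        self.level = 0                    # currently unlocked bank

    def set_level(self, level):
        self.level = level % len(self.banks)

    def route(self, control, value):
        # Gated buffer: the unlocked bank takes the new value,
        # every other bank simply holds its last state.
        self.banks[self.level][control] = value

router = HierarchicalRouter()
router.route("stick_x", 0.42)   # lands in bank 0
router.set_level(3)
router.route("stick_x", 0.90)   # same physical stick, different bank
```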

packaging:

Packaging_HI_Data_For_Heirachy

Macro routing:

Example of routing

Inside R1 Buffer routes- Gated Buffer routing:

Lock-Level Buffer gates

An unexpected result of using a system structured in this fashion was the ability to combine both visual systems on the fly.

VisualsTogether

An important feature of this system of control was freedom from my computer screen.  This allowed for more gesture-driven and intimate interactions with ensemble members, like the ones seen here:

vimeo.com/92936792

Unfortunately, the visuals I projected into the audience during this clip were not captured; they were of the analog TV type discussed earlier.

More detail about the structuring philosophies of this system can be found in my Submission 3 blog post.

Audio-visual spatialization

I chose to use a projector capable of movement so as to utilize the space in which we performed.  Here you can see it projecting onto the floor:

vimeo.com/92936795

In addition to the audio spatialization, visual space was also planned out so as to accentuate the performance space and leave room for each other's visuals to stand out.

vimeo.com/92947352

Problems encountered

Unresolved differences regarding simple workplace etiquette led to overwhelming emotional stress and, finally, extreme verbal harassment the day before our final performance.  Although this course appears structured hierarchically so that supervisors can be consulted when needed, no coherent plan for conflict resolution resulted from this structure.  Even when approached multiple times with the same issue, the advice I repeatedly received from my supervisor in regard to our issues of diversity was, "You cannot make anybody do anything."  This advice efficiently dissolved the fragile bonds that existed between individuals with different backgrounds.  I lament not being able to overcome these issues and believe the paradigm of conflict resolution in the DMSP course needs to be reconsidered and restructured.

Bass guitar DMX instrument_Submission 2

Bass guitar DMX instrument
My goal was to construct a system that is highly responsive, expressively dynamic and diverse, and that can be played improvisationally in real time using a "traditional" instrument (a standard bass guitar) to play electronic sounds that trigger and modulate specific DMX light movement in a way that directly connects sound and vision (light).

Timø's Instrument
Submission 1 and my blog describe at length my initial progress, observations, trials and tribulations.
dmsp.digital.eca.ed.ac.uk/blog/ave2014/category/blogs/timosblog/
Post-Submission 1 Progress
Besides handling the bulk of the administration work, organizing and booking equipment, coordinating with the venue, acting as liaison between DMSP groups, etc., I worked on refining my audio-visual instrument and rehearsing.

Lamps
To add diversity and flexibility to the visual aspects of my instrument, I designed a method to incorporate standard household lighting.  I purchased a four-channel DMX dimmer pack, rewired five lamps to use Euro-plug IEC adaptors and added an additional par-can stage lamp.

Mapping Light
Although the lamps and lights were directly controlled by my bass guitar's audio input signal, I needed a way to map state changes over time.  By automating envelope parameters, which could be triggered automatically or manually from my setup, I achieved greater control.  This in turn helped keep the lighting changes in our performance varied.
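The Max for Live envelopes aren't shown here, but the behaviour amounts to interpolating between lighting states over a set duration once triggered. A minimal sketch of that behaviour; `send_dmx` is a hypothetical stand-in for whatever transmits the frame:

```python
import time

def ramp_states(start, end, duration, steps=50):
    """Yield DMX frames interpolated between two lighting states."""
    for i in range(steps + 1):
        mix = i / steps
        yield [round(a + (b - a) * mix) for a, b in zip(start, end)]
        time.sleep(duration / steps)

# Fade two dimmer channels from dark to a bright/dim pairing over 4 s:
# for frame in ramp_states([0, 0], [255, 96], duration=4.0):
#     send_dmx(frame)          # hypothetical transmit function
```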

Mapped Envelopes

Refined DMX control rack

Mapping Sound
Sonically, my instrument needed to be flexible and able to produce a wide range of content over the course of the performance, from sub bass and low bass to glitched rhythmic passages, percussive events, angelic synth pads, thunder, abstract mid-range noise, etc.

Splitting the audio input across three tracks, each corresponding to a different frequency-banded instrument voicing, I built a series of multipurpose effect racks, which I mapped to several hardware controllers.
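The split in Live was done with its own devices; conceptually it is a three-way crossover. A rough scipy sketch of the same split (the crossover points are illustrative, not the values used):

```python
from scipy.signal import butter, sosfilt

SR = 44100

def band_split(audio, low_cut=120.0, high_cut=1200.0):
    """Split a mono signal into low, mid and high bands."""
    low = butter(4, low_cut, btype="lowpass", fs=SR, output="sos")
    mid = butter(4, [low_cut, high_cut], btype="bandpass", fs=SR, output="sos")
    high = butter(4, high_cut, btype="highpass", fs=SR, output="sos")
    return sosfilt(low, audio), sosfilt(mid, audio), sosfilt(high, audio)
```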

Audio Effect Rack

Additionally, since I was able to convert my audio input to MIDI data, I built a multilayer instrument rack that allowed me to select and switch between combinations of virtual instruments and modulate their parameters in real time.
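Live 9 handled the audio-to-MIDI conversion itself; for a sense of what that entails, here is a crude autocorrelation pitch tracker reduced to its core (the thresholds and ranges are guesses tuned for bass, not Live's actual algorithm):

```python
import numpy as np

SR = 44100

def detect_midi_note(frame, fmin=30.0, fmax=400.0):
    """Estimate the MIDI note of a mono frame, or None if unpitched.
    The frame must be longer than SR/fmin samples (~1470 here)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(SR / fmax), int(SR / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    if corr[lag] < 0.3 * corr[0]:      # weak periodicity: treat as noise
        return None
    return int(round(69 + 12 * np.log2((SR / lag) / 440.0)))
```

This is also why bass tracking misfires: low fundamentals need long analysis frames, which adds latency and blurs fast playing.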

Instrument Rack

Roadmap
After the group devised a theme, we divided the performance into sections.  This was helpful, as I was able to automate and interpolate between states by launching scene changes flexibly as needed.

Scene Selection

Theoretical Context

Please refer to my essay on spectromorphology in Submission 3 for theoretical context and references to existing scholarship in the field.

Allegoric Irony of the Documentation
It is worth mentioning that, in my opinion, the video footage failed to capture the full scope of the performance. It is ironic and disappointing that the footage is only a 'shadow' of the live experience.  Although we were using three separate cameras, the stage, performers, lighting and live video exceeded the boundaries of what was recorded.

We were advised to have another group document the performance.  I held meetings days ahead of time to discuss how and what needed to be captured, but the results were still, for the most part, unsatisfying.  Although I am grateful for the help, had we known, we could have made slight adjustments to compensate.  With more carefully chosen camera angles, readjustment of props and a slight repositioning of the performers, the documentation could have captured a much better perspective and more completely conveyed what we were trying to achieve.

Draft of "Score" for Performance

Hello team!

Here is the tentative plan for our performance.  Shuman and Timø specifically expressed an interest in having a solid plan so they can 'dial in' and/or make presets for different sections.  Let me know if there are any glaring mistakes, and I will fix them.  Otherwise, read it over and come to the start of our next rehearsal with thoughts and/or concerns about how we can improve the performance.

Gameplan_March18

Thank you!

Russell

New System With Passage Readings

After a breakdown with my PS3 controller (the £2 car boot sale controller stopped responding! 🙁), I'm back in action with a video game controller that has a gyroscope in it!  Now I really don't need a Gametrak to get gestural control.

As I mentioned before, I’m going to incorporate three different audio types with corresponding visuals.

The first will be what you saw in Submission 1, with slight modifications for better spatialization and more rhythmic possibilities.

The second will be the audio patch I made for Jessamine's project, but controlled with the PS3 controller.  I'll use minimal black-and-white visuals (similar to Marco's TVs) for this one, as I am hoping to respond to Jessamine and Shuman's visuals in real time and do not want to detract from them.  It will incorporate some of Timø's sound files and Marco's IRs.  This one will have a little of everybody!

The third will be synthesized speech readings of Plato's "Allegory of the Cave", our decided theme.  As discussed in the group, it would be interesting to go from digital to analog, or analog to digital, so I'm hoping to record myself reading passages and transition between the two.  I don't know yet if I want the speech to be recognizable or just noise.  I think I will use solid colors for this one, leading to blinding white as we leave the cave.

Here is an example of some of the digital speech.  It's using aka.speech, so it sounds EXTRA digital (which I am going for).

13_BIG2.aif      

14_BIG3.aif
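Since aka.speech drives Apple's built-in synthesizer, a roughly equivalent render can be scripted straight from the macOS `say` command. A minimal sketch, with the voice and passage as placeholder assumptions:

```python
import subprocess

passage = ("And now, let me show in a figure how far our nature "
           "is enlightened or unenlightened.")
# "Fred" is one of the classic robotic-sounding macOS voices
subprocess.run(["say", "-v", "Fred", "-o", "allegory.aiff", passage])
```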

What are your thoughts?


Audio Visual Instrument - Bass Lamp

Concept

To develop sonic occurrences that communicate a direct correlation to their visual counterparts, perceptually inseparable in intent, and to acknowledge silence and the absence of visual stimuli as a necessary and effective contrast.

Gaining inspiration from Tim Ingold's article "Against Soundscape" and discussions with Martin Parker, I became fascinated with the idea of manipulating light, rather than image, to see how this might prove compelling in its own right.

Tim Ingold observes, “It is of course to light, and not to vision, that sound should be compared. The fact however that sound is so often and apparently unproblematically compared to sight rather than light reveals much about our implicit assumptions regarding vision and hearing, which rest on the curious idea that the eyes are screens which let no light through, leaving us to reconstruct the world inside our heads, whereas the ears are holes in the skull which let the sound right in so that it can mingle with the soul.”

Even with our eyes closed it is still possible to perceive flashes, flickering or the presence of light and gain some indication that there is activity and movement. Light can be projected onto surfaces, broken by other objects, used to induce shadows, or add subtle touches begetting mood or ambience.

My Role

Construct a system that is highly responsive, expressively dynamic and diverse, which can be improvised in real time using a "traditional" instrument (standard bass guitar) to play electronic sounds that trigger and modulate specific DMX light movement in a way that directly connects sound and vision (light).  Submission 1 outlines my progress, observations, trials and tribulations, and discusses plans for further development.

DMX setup

When I began this course, I had no previous experience with DMX and needed to conduct an extensive amount of research to overcome numerous technical issues and get my system working. Step one was to get four VISAGE 0493 LED lights controlled remotely through Ableton Live running DMX Max for Live devices. Most of the preliminary documentation is explained in detail on my AVE blog.

DMX First Run

DMX Progress

DMX… a bit further

Findings:

The easiest way to bridge the connection from Live to the DMX LED lights was to send MIDI data out of an audio/MIDI interface and into the school's DMX console, which converts MIDI to DMX.  After modifying Matthew Collings' Max for Live patches to accommodate four lights, I was to some extent able to control them from within Ableton Live.

The easiest way, however, is not necessarily the best way: the DMX lights performed sluggishly and with considerable latency.  The lights would at times remain on when switched off and were unpredictable and difficult to control precisely. Additionally, they would flicker intermittently and pulse on and off of their own accord.  When I spoke with Collings, he confirmed having had the same issue, which he was not able to resolve.

Matt M4L DMX Devices

Although I experienced limitations in only being able to control two channels with the DMAX devices via the Enttec DMX USB Pro, the setup was much more responsive, less latent, did not flicker, and handled more accurately.  Seeking perfection, I went back to troubleshooting the Enttec box and, after much tinkering, discovered that the issue was with Olaf Matthes' Max/MSP external 'dmxusbpro'.  I was able to overcome the channel limitations by using a beta abstraction by David Butler, imp.dmx, which uses jitter matrices to store, read and write data rather than reading straight MIDI values. Using the imp.dmx help file, I turned this into a 27-channel (four lights, each 7 channels) Max for Live device.

T-Ø_DMX M4L Device Presentation

T-Ø impdmx Max

Up to this point, the Enttec setup has been more stable, and the device functions somewhat as intended.  I did, however, need to limit the number of channels to 27 instead of 512 to accommodate a higher frame rate, so as not to overload the device when modulating large amounts of control data.
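For anyone rebuilding this outside Max, the Enttec DMX USB Pro speaks a simple framed serial protocol as documented by Enttec (label 6 is "Output Only Send DMX"). A pyserial sketch of a throttled 27-channel frame loop; the com port path is the one mentioned in the "DMX… a bit further" post below, and the frame rate and ten-second duration are illustrative choices:

```python
import time
import serial  # pyserial

PORT = "/dev/cu.usbserial-ENT3IHSX"   # virtual com port of the USB Pro
N_CHANNELS = 27
FPS = 40                              # throttle so the device isn't flooded

def dmx_packet(channels):
    """Enttec USB Pro message: 0x7E, label 6, length LSB/MSB,
    DMX start code 0x00 + channel data, 0xE7."""
    data = bytes([0]) + bytes(channels)
    n = len(data)
    return bytes([0x7E, 6, n & 0xFF, n >> 8]) + data + bytes([0xE7])

with serial.Serial(PORT, baudrate=57600) as link:
    frame = [0] * N_CHANNELS              # all channels dark
    for _ in range(FPS * 10):             # ten seconds of frames
        link.write(dmx_packet(frame))
        time.sleep(1.0 / FPS)
```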

Audio Setup

The way in which a performer interacts in real-time performance adds another dimension to the visual component.  I aim to hide my light-emitting, distracting computer from the audience's view and have toyed with the idea of performing behind a screen backlit by DMX lights (see the video on DMX improv with shadows).   Although there is still much work to be done refining a setup that will allow me to do so, I have put together a working model that uses audio-to-MIDI conversion to control virtual instruments. The bass's audio input can additionally be added as another voice, processed and manipulated in real time.

Equipment:

Ableton Live 9
PUSH Controller
Korg NanoKontrol
Roland EV-5 Foot controller
Max For Live
Bass Guitar
SoftStep Foot Controller
NI Virtual Instruments

Audio Setup

Instrument voicing:

CH 1- Bass Audio Input (Amp-simulated bass sounds, Rhythmic Clicks/Beats, Distorted)
CH 2- Sub Bass (Sustained or Arpeggiated)
CH 3- Pads, Leads, Atmospheric Noise

T-Ø AVE instrument Live

Mapping Sound to Light

As a means to bridge visual and sonic events, I have experimented with different methods of mapping audio frequency to DMX control data.

EX 1- Three Lights, Three Voices, Three Colors

In the first example, using the DMX console setup, I daisy-chained and mapped three different colored lights (Red, Blue, White). Each responds to a different audio source in Live, routed through an envelope follower that is mapped to control an individual light's DMX values (0-255).

Rhythmic- Red > Light 1 (right)
Sub- Blue > Light 2 (middle)
Noise Lead – White > Light 3 (left)


Findings: Although this scenario might be interesting for a short period of time, it did not communicate an expansive dynamic range of expression. However, projecting onto an object or wall might be worth further investigation.

EX 2- Screen, Lights, Proximity

Wanting to explore greater dimension and possibility, I brought in a huge 10-by-10-foot back-projection screen borrowed from LTSTS (no easy feat to transport or assemble).

Screen

Due to its size, we were limited to conducting experiments in a bright, noisy atrium in Alison House. Below are two video examples of a three-light setup without audible instrumentation.

Lights_Screen_No Sound_Close

Lights_Screen_No Sound_Distant

Findings: In this well-lit environment, a grey background caused by the screen itself is visible when the LEDs are off.  A dark space is needed for this to be optimally effective.  Additionally, the effects of the lighting change with proximity; it could prove interesting to stagger light distances at different stages of the performance.

EX 3- Giving Sounds Color and Movement

Using a Sony Handycam, we filmed in a lit atrium. Here I am using two synced lights and the improved Enttec DMX Pro setup controlled by three different instrument voices. Each instrument voice is assigned a color, and its control is driven by the voice's audio output, linked to a corresponding envelope follower and mapped to DMX color values.

Angelic Pad- White
Rhythmic Bass – Red
Sub- Blue

EX 4- Combined Voicing, New Permutations

The following is an example of how the basic colors and voicings work in conjunction with one another, generating new effects and color combinations while still being able to return to their original states (red, white, blue) when played individually.

EX 5- Adding Shadows

An improvisation combining and switching between voices and lighting control while experimenting with the effects produced by shadows.

Findings: Using the Enttec DMX Pro and a projection screen yields a higher-quality dynamic range of expression. The lights are significantly more responsive, but the setup still requires tweaking to generate a greater range of fade values (i.e. contrasting quiet with loud, and creating a pulsing effect for sustained sounds).
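One way to widen the fade range (a sketch of a possible approach, not what the rack currently does) is to push the envelope value through a gamma curve before it becomes a DMX value, so quiet playing reads as visibly dim rather than everything clustering near full brightness:

```python
def env_to_dmx(env, gamma=2.2, floor=0.02):
    """Map an envelope value in [0, 1] to a DMX fade value in [0, 255].
    gamma > 1 stretches the low end, smoothing perceived fades."""
    if env < floor:          # gate residual noise down to darkness
        return 0
    return round(255 * (env ** gamma))
```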

Our camera distorts when recording sounds linked to the strobe parameter.  This phenomenon creates an effect in itself and could possibly be captured, projected, and even fed back and looped, as it looks quite interesting.

Shadows created from behind the screen produce an intriguing result and may be useful for extending meaning, depth and character if carefully executed or choreographed.  Additionally, placing myself with my bass guitar, or another performer, behind the screen might help tie these ideas together into a more integrated and engaging audio-visual instrument.

Critical Analysis and Moving Forward:

Although these experiments show progress and potential, there is still much work to be done on both the audio and visual fronts.  At this stage, as it stands and on its own, I do not envision my instrument being dynamic or compelling enough to sustain meaning and interest for extended periods of time.  As my group has been working on experiments individually, it has been difficult to assess directly how this will function as part of a larger performance.  Moving forward, I aim to develop my instrument further in its own right as well as work towards integrating it as a subsection of the ensemble.

Plans for further development:

Dial in specific sounds, effects and performance techniques that optimize sound generation and DMX response, and that work well standalone as well as with the rest of the ensemble.

The audio-to-MIDI conversion needs to be refined.  Tracking bass frequencies is no easy task, and I often get unexpected, falsely triggered notes.

Set up foot controllers to aid in modulating sonic and visual elements. Up to this point, these have not been implemented.

Tweak and scale modulation sources and envelope followers to be more dynamic with mode, strobe and fade values.  Develop precise control mechanisms that will allow for better ways of expressing relationships between sound and silence.

Investigate incorporating a dimmer pack and setting up additional lighting sources (lamps) that can be placed around the stage or in the audience. One idea is to have switching various audio effects on/off, or changing instrument voicing, trigger corresponding light states (on, off, dim, bright, flicker).

Work as a group to quickly identify a unifying theme.  Create a map of how our performance will move throughout time and devise how we can keep it engaging throughout its duration. Schedule regular group rehearsals in an effort to better understand how we operate as a dynamic and cohesive unit.

Video Recording at Cramond on 16.02.14

Marco and I went to the beach last Sunday to record some interesting material related (maybe not) to audiovisual work. We shot the road surface and the forest from a moving bus. As there were no big waves as expected, we recorded the small ones lapping the shore, and some objects on the sea surface moving along with the waves. We also made use of the sand, leaves, shells, stones and dead trees to make different kinds of sounds.

I thought these video materials would not be used for Submission 1, so I haven't edited them. Maybe I will work on them in the next few days. I will bring them to our meeting this Friday; feel free to get them then!

The linked video is from one of these recordings, in which Marco gives a live performance with a dead tree and leaves ^_^ The original 1080p files are too large, so I have had to upload them to Vimeo. As I write this post the video is still waiting in line to be converted; I hope you can successfully get access to it.

vimeo.com/87137256


DMX… a bit further.

I spent the last week developing a system for mapping DMX lighting based on MIDI automation and frequency content.

First, I made a concerted effort to get Martin's Enttec DMX USB Pro working with the DMAX Max for Live objects. The Enttec installers and documentation mostly accommodate Windows computers and are not necessarily easy to navigate for Mac users.  The DMAX Max for Live devices require that you install Olaf Matthes' dmxusbpro external and an additional driver.

dmxusbpro external – “In order to work with this object you have to install the Virtual Com Port driver for the interface. It will not work with the FTD2XX driver. The latest driver can be found at www.ftdichip.com/.”

Additionally, you will need to know your computer’s com port to run the dmxpro helpfile. On a Macintosh system, it should look something like: /dev/cu.usbserial-ENT3IHSX.

Unfortunately, I was only able to get the first two channels, Red and Green, to work.  Despite the DMAX hub-monitor indicating that the other channels were receiving proper messages, these signals were not being transmitted to the lights.  I've emailed David Butler (DMAX) to see if he has any suggestions.

Below is a video of what I was able to get working.  Using one light, I mapped a single drum loop and split its audio frequency content: red for lows (kick) and green for mids/highs (snare).

At this point, it's difficult to say why the other channels aren't functioning correctly with DMAX.  There are too many pieces to the puzzle: Max externals, Max version compatibility, 5-to-3-pin cables, Max for Live devices, etc.  With looming deadlines, I went back to using the DMX console and modified Matt Collings' DMX Max MIDI patches to work with all four lights inside Ableton Live. In the video below, I've daisy-chained and mapped three lights. Each responds to different frequency content in the audio signals, which is then mapped to control values.

My first impression is that the DMX MIDI devices, through the console, are more sluggish than the Enttec DMX Pro.  This makes perfect sense, as latency plays a huge factor in timing: the DMX Pro allows a direct connection to the lights, whereas the console has to receive MIDI and then translate those signals to DMX.  For the time being, I'll continue working with the console, as it has proven to be the most reliable.

FYI… another option is to use American DJ's myDMX 2.0.  Cameron MacNair has written an informative blog post about getting it up and running.

dmsp.digital.eca.ed.ac.uk/blog/actionsound2014/2014/02/18/dmx-setup-configuration-and-aesthetics/


Analog Study #1- Making Progress

It was hard to sleep last night because I was so excited by the progress Marco and I made using thrown-out TVs and speakers to bring art to a world that otherwise views them as trash.  We had no problem hooking up a very old TV, but with the newer(ish) TVs (no idea what date; we could not find a manual anywhere!) we had to do a significant amount of research to figure out how to hack a SCART cable.  21 pins!  Way more than MIDI….

After a couple of hours of shooting loud audio signals into various pins, we discovered that pins 20 and 21 worked for our two digital TVs.  We were able to get 2D patterns to occur by touching one audio signal to ground and the other to the pin.
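For reference, the probing signals were nothing fancy. A throwaway numpy sketch of the kind of loud sine tones we were feeding into the pins (these particular frequencies are illustrative; we swept by ear):

```python
import numpy as np

SR = 44100

def test_tone(freq, dur=3.0, amp=0.8):
    """A loud sine test signal; different frequencies lock into
    different patterns on the hacked set."""
    t = np.arange(int(SR * dur)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

# A sequence of probe frequencies to run while touching a pin
tones = np.concatenate([test_tone(f) for f in (60, 120, 250, 500, 1000)])
```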

This morning, we tried to connect three TVs together using a patch I built, hacked from the last chapter of "Electronic Music and Sound" book one (book 2 just released!!).  This patch basically allows you to cycle through channels using a waveform like a phasor.  In this way, sounds can be circled around large spaces in a systematic and improvisational way instead of by physically changing tracks in a DAW.
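In spirit, the patch is a phasor sweeping a gain window across the outputs. A rough numpy sketch of that idea for three channels (the window shape and sweep rate are illustrative, not the patch's settings):

```python
import numpy as np

SR = 44100
N_CH = 3                      # three TVs/speakers

def circle_gains(dur=6.0, rate=0.25):
    """Phasor-driven panning: a 0..1 ramp at `rate` Hz sweeps a
    triangular gain window around the channels."""
    t = np.arange(int(SR * dur)) / SR
    pos = ((rate * t) % 1.0) * N_CH          # position among channels
    gains = np.zeros((len(t), N_CH))
    for ch in range(N_CH):
        d = np.abs(pos - ch)
        d = np.minimum(d, N_CH - d)          # wrap around the circle
        gains[:, ch] = np.clip(1.0 - d, 0.0, 1.0)
    return gains

# multichannel = mono_signal[:, None] * circle_gains()
```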

Unfortunately, after hacking apart our only SCART cable (we chopped it in half and stripped it to expose the wires), we could not get half of it to work.  There is still a lot about grounding that I do not understand, and perhaps SCART cables only work in one direction?  Marco managed to make his half of the SCART cable work, but we will buy a fresh one to use for Submission 1.

Demo Video

It is important to note that the audio that goes into this setup can be anything.  I'm hoping that audio from another submission will be imported into this setup to bring more unity to the Audio-Visual Ensemble.  For now, we used a sine wave of various frequencies.  Marco and I discussed making friends with a welder and possibly making a structure that can be interacted with and potentially incorporated into Jessamine's color/shadow-tracing set-up.  Anybody know a good (and cheap) welder?

That’s it for now.  Might have gotten farther, but Marco had to scramble to get to Italy straight from school.  Until Thursday!

See my insane cable hack attached:

SCART hack

SCART hack