Integrating DMX Lighting Configurations in Real Time Performance Systems – Cameron MacNair, s1300547


This article investigates various ways of controlling DMX lighting configurations in a real time musical performance environment. Integrating a system of DMX lighting into musical performances allows opportunities to represent performance elements and gestural content in a creative way. When the lighting system is controlled by digital audio signals, mapped visual cues and parameters can realize performance action events as categorized audio/visual objects.

The methods described here were incorporated into the Action Sound 2014 group performance for the Digital Media Studio Project course at The University of Edinburgh. For a video capture of this performance, documentation of our system, and Max patches, please see the corresponding media available on the website.

Aesthetic advantages

In what way can we use perception to represent sensory information? How can this be creatively utilized in a real time musical performance?

To introduce aesthetic advantages of a DMX lighting system in a real time musical performance environment, I would like to share this quote by Albert Bregman from his text Auditory Scene Analysis [1].

“The best way to begin is to ask ourselves what perception is for . . . The job of perception, then, is to take the sensory input and to derive a useful representation of reality from it.

An important part of building a representation is to decide which parts of the sensory stimulation are telling us about the same environmental object or event (p. 3).”

Creating meaningful representation in any creative environment requires an awareness of the two-part system mentioned above. Sensory information is how we, as humans, relate to one another and to our environment. If we utilize this highly complex ability as a creative platform for performance, we can represent abstract ideas and actions in new ways. Experimenting with various combinations of sensory data in an ecological way can cultivate an intimate creative practice.

Incorporating DMX lighting into a musical performance allows a “scene” to be constructed by reinforcing and contextualizing the attention of the audience. Rather than working against their perception, provide a focus of visual information to represent the scene. Ambient and sporadic lighting techniques allow a rich visual texture to accompany the auditory events, representing any desired detail of (or relationship between) performance actions and gestures. Bregman identifies this method of utilizing audience awareness, which reinforces the utility of sensory categorization in performance.

“Apart from the role of effort there are other signs by which we recognize the presence of attention. One is that we have a more detailed awareness of things that are the objects of attention than of things that are not (p. 399).”


Understanding the technical integration process of a DMX lighting configuration is essential to creating a meaningful scene representation of the performance actions. I will not go into technical details in this article; however, there is a guide available on the Action Sound website that outlines our specific technical setup for the final performance.

Gestural information in a performance environment can be represented, abstracted, and magnified with visual cues. A musical performance inherently contains many gestural definitions, and identifying these definitions as control data lets them be used to create the visual scene. Alexander Refsum Jensenius introduces three categories that can be used to investigate gestural definition in his PhD thesis, Action Sound: Developing Methods and Tools to Study Music-Related Body Movement [2].

“Communication: using gestures to denote aspects of human communication, focusing on how they work as vehicles of social interaction.

Control: investigating gestures as a system, focusing on computational models and the control possibilities of gestures in interactive systems.

Mental Imagery: studying gestures as mental processes, which may be the result of physical movement, sound, or other types of perception (p. 36).”

How can a DMX lighting configuration become a meaningful representation of the gestural definitions in the musical performance system? In what ways can this be constructed to identify auditory and gestural data? Intelligent mapping techniques can act as a vehicle of communication, control, and mental imagery when designing this system.


Parameter mapping in a digital system can be used to identify relationships within the scene in a creative way. John Croft’s Theses on Liveness [3] considers two different types of mapping, procedural and aesthetic. Creating this connection in a digital system can allow the computer’s responsiveness to become ecological and creatively biased, to develop an intimate relationship between attention and content within the performance structure.

“The onus of justification of liveness is shifted to the causal link between the performer’s action and the computer’s response (p. 61).”

When defining this connection between the computer’s response and the audio environment, choosing performance relationships with streams of data is the heartbeat of the scene’s construction. D. Wessel and M. Wright make this consideration in their article Problems and Prospects for Intimate Musical Control of Computers [4], featured in the Computer Music Journal.

“All music, in the end, exists as sound—that is, continuous variations in air pressure. However, the notion of discrete events is a very powerful and effective metaphor for musical control, providing much-simplified reasoning about rhythms, entrances and exits, notes, and many other aspects of music.

Our solution is to represent continuous control gestures as audio signals . . . We can multiplex lower-rate control gestures into a single audio channel (p. 13).”
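As a rough illustration of the idea in this quote, a low-rate control gesture can be promoted to a higher, audio-like rate by interpolation before being multiplexed into an audio channel. The sketch below is a minimal, hypothetical Python version of that resampling step; the function name and the 4x factor are my own, not from the article.

```python
def upsample_control(values, factor):
    """Linearly interpolate a low-rate control stream up to a higher
    (audio-like) rate, so gestures can travel in an audio channel."""
    out = []
    for a, b in zip(values, values[1:]):
        for i in range(factor):
            # interpolate between consecutive control values
            out.append(a + (b - a) * i / factor)
    out.append(values[-1])  # keep the final control value
    return out

# A slow fader gesture, resampled at 4x the control rate
print(upsample_control([0.0, 1.0, 0.5], 4))
# -> [0.0, 0.25, 0.5, 0.75, 1.0, 0.875, 0.75, 0.625, 0.5]
```

In practice this per-sample stream would be written into an audio signal (e.g. with sig~/line~ style objects in Max), but the arithmetic is the same.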

Creative mapping techniques allow the scene to be a synchronized, elaborate, and metaphorical representation of a gestural system. Pairing the visual stimulus and auditory environment is a way to grab the attention of the audience, shift their awareness, and propagate deeper meaning into the performance. Roger Dannenberg researched the connections between visual and audio scenes in his article Interactive Visual Music: A Personal Perspective [5] featured in the Computer Music Journal.

“Make connections between deep compositional structure and images. . . By tying visuals to this deep, hidden information, the audience may perceive that there is some emotional, expressive, or abstract connection, but the animation and music can otherwise be quite independent and perhaps more interesting (p. 28).”


DMX lighting configurations can be used to develop a strong representation of hidden information, gestural definitions, and performance actions in a real time musical performance environment. Integrating this system with sophisticated mapping techniques and perceptual awareness allows a scene to be constructed that utilizes sensory information and the complex association that the human mind creates from it. Pairing these systems in an ecological way is a method of integrating a new dimension of meaning, representation, and gestural content into the performance environment.


[1] Bregman, A.S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, Mass.: Bradford Books, MIT Press.

[2] Jensenius, Alexander. (2007). Action – Sound: Developing Methods and Tools to Study Music-Related Body Movement. University of Oslo, Department of Musicology. Retrieved from

[3] Croft, John. (2007). Theses on liveness. School of Arts, Brunel University. Retrieved from

[4] Wessel, D., & Wright, M. (2002). Problems and Prospects for Intimate Musical Control of Computers. Computer Music Journal, 26 (3). pp. 11-22.

[5] Dannenberg, R. (2005). Interactive Visual Music: A Personal Perspective. Computer Music Journal, 29 (4). pp. 25-35.

Visuals in a way…

A visual can capture the viewer’s imagination; it can set boundaries for them or direct them toward another dimension. At first sight, without any music, Wolff’s composition on paper gave me a sense of rush and fullness. When I tried to find visuals connecting to the composition, they were full of stress and contrast. However, because I had never performed alongside a composition before, it was very hard to decide on the dimension of the visuals.

At our first meeting, the music we made was far too crowded, and none of us was sure whether it would be our interpretation of Wolff. From my point of view, the questions were how, and in what sense, I would balance the composition with visuals, and how the composition itself would need to be transcribed: by creating a new dimension, a contrast, or an extension? I decided to make visuals with different structures, but I needed to decide their position and role. They needed to create a notion of tension, direct a path for the performance itself without exaggerating its motion, and contrast with the other instruments while expressing the space and the whole of the piece.

But while translating each note or act into a visual, I needed to classify the footage. So I shot some footage and separated it into two basic segments: still (grass, sky) and moving (road, pathway). At the simplest level, the visuals would work like this: grass for silence, sky for lower and short notes, pathway for long notes, and road for long and high notes.

However, when we examined the composition itself, we realized we needed minimal visuals rather than something so crowded. Visuals with that many transitions and effects did not relate to our interpretation of the composition. They also carried different meanings, which created a contradiction with the music; relating to the performance in that way, the link between visuals and music lost its effect, acting in a different transmission and dimension from the composition. So we decided to use lights, which create tension within the performance itself. Each light responded to a different instrument and tension; in a sense, the lights transmitted the structure of the notes through colour and intensity. They were minimal and created a relevant ambiance for the performance. Using a dark space gives the performers the opportunity to express the unity and expression of the composition in a free space.

The lights created that tension for the performance. In particular, using darkness and the silhouettes of the instruments as visuals creates a space suspended in time. When all the lighting relates to a single point, rather than interpreting itself as a visual, it becomes a response to the music. But from my perspective, I am still questioning the dimension of the lights and their role in the performance. After listening to our interpretation, I agreed that we cannot use any footage references carrying a different meaning, but we also need to direct the visuals as a part of it. It seems that we need to explore colour palettes, the sense of darkness, and emotional responses, and order them in an intelligent way.

Relationships of relationships

Questions that remain of concern and trouble are questions of understanding performance relationships. I don’t want to make comparisons to ‘traditional’ music interpretation or music-making (I don’t find that necessary for understanding indeterminate or improvisation practices), but as most of us do compare it to these traditions, I will use this blog post to outline substantial differences and certain frameworks that need to be broken in order to ‘experience an understanding’ of improvisation and the aesthetics of indeterminate scores.

The questions still in the room concerned action-taking: what choice one has in taking an action, and why this action becomes some kind of ‘value’ (a word that should be avoided, but was used). The actions of traditional interpreters or musicians are often linear and pre-determined on almost all levels (i.e. practice, knowledge, socio-cultural, etc.); it is a fixed framework of knowledge that offers a vocabulary and toolbox people can ‘safely’ choose from. This box-ticking attitude has, of course, created a certain value-based idea of what is ‘good’ and what is ‘bad’ within a given performance situation. Neither of these experiences is relational, nor are they momentary experiences of instability; by default they can’t be. So for interpreters of traditional music, a score by Wolff offers and outlines (and I think that is what one should focus on) the problem of preserved tools and knowledge (i.e. tonality, performance practice, the socio-political establishment of music). Rather than trying to find a certain relation to tradition and learned music-making, I’d say erase that idea completely: there is no need for the comparison, it is completely outdated, too many people have written about it, and the time of Adorno has passed.

Looking at Wolff from a current perspective, this score offers momentary events; that is all one needs to be aware of. What does a momentary event mean? How is it related to what I tried to say earlier about instability versus knowledge? A momentary event has to do with the relationships I tried to outline briefly in my first blog post. Further, momentary means that these relationships occur only once in that time and space. This is precisely what is written down in Wolff: he shapes a time concept of listening and action-taking, but it is never knowledgeable (i.e. never preserved, never a value). When we play Wolff, then (after having studied the instructions and the actions we are ‘allowed’ to take), our focus should be on experiencing the occurring relationships and acting with, through, and on them via the ‘learned’ events we embodied by studying Wolff. Those relationships have no value, nor can they be presented in a traditional form (a score); this is where instability enters. Instability is the place, or time, in which one will act and create some Thing or Other. Instability doesn’t get rid of tools or embodied practices; it will still emerge into some kind of form, but this form is not related to the traditional idea of form, nor is it a product that can be reused and put into preserved form (i.e. making the experience of instability into knowledge, the way an opera event is a preserved form that will always, in one way or another, present a certain way of performance).

Those experiences within these unstable relations still seem to be something not palpable for most of us. This certainly has to do with not having been exposed to these kinds of relationships before; nor will this course be able to break these ‘old’ habits (and that is fine; I personally like the clashes of different aesthetics meeting in a constrained improvisation, something that will lead to the final submission).
In order to move a little bit out of that transcendental state, I will try to make it a bit clearer by applying certain strategies into this.
If, for example, someone has trouble acting ‘freely’ within a given space and time, then we should go back to what I said earlier: Wolff actually constrains the performer more than he gives freedom. If we have a rehearsal (as we did yesterday, for example) where each of us constrains ourselves to a minimal amount of sonic material and we allow the space to breathe, then listening will undeniably and automatically emerge as the focus of the performance.
As our tools or vocabulary (the constrained sonic material) are limited, we can only act through the emerging relationships of listening. Constraining actually leads to freeing, which is something nice to explore and see emerge, because what is actually being freed in these situational practices is the performer him/herself (alongside the form and structure). What I am trying to outline here is that if we constrain ourselves to limited sonic material, then our action becomes a contribution to the performance and shapes the overall form.
This is something very important when thinking of the second assignment, where we will have to map all these relationships into a self-acting Max/MSP system, because all these actions need to be successfully mapped, in some way or another, into a system that should act as an independent performer rather than a linear DSP system. I will post about mapping in a bit.

DMX Setup, Configuration and Aesthetics

DMX Setup

As we have discussed performance relationships in the context of the Action Sound project, I would like to extend these relationship definitions with visual accompaniment. I will be using a DMX lighting configuration for our first project.

As of now, we are planning to use four lights in total: one for each performer and one flood light for the ensemble. We can use this setup to highlight specific performance relationships in the project.

The lights for the individual performers can be characterized according to the way each performer interacts with their instrument. Since we are performing the piece For 1, 2 or 3 people by Christian Wolff, we need to make aesthetic considerations that are directly related to our behaviors while reading. These can be unique from person to person, instrument to instrument, and so on, presenting an opportunity to highlight any number of parameters or performance habits.

I think the way to get the most out of the lighting would be to apply constraints that are derived from our performance relationships. For example, we can assign one color to each performer, activating a strobe effect as the amplitude goes over a certain threshold. Over time, you can get an idea of the sonic performance just by seeing the activity of the lights.
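A minimal sketch of this mapping, assuming three performers and a normalized amplitude input; the color assignment, threshold value, and names below are my own placeholders, not fixed decisions:

```python
# Hypothetical per-performer colors and a strobe threshold; fader values
# are 0-127 to match the MIDI range used to drive myDMX.
PERFORMER_COLOR = {1: "red", 2: "green", 3: "blue"}  # assumed assignment
STROBE_THRESHOLD = 0.6  # normalized amplitude above which the strobe fires

def light_state(performer, amplitude):
    """Map a performer's amplitude (0.0-1.0) to a simple light state."""
    brightness = round(amplitude * 127)               # dimmer fader value
    strobe = 127 if amplitude > STROBE_THRESHOLD else 0
    return {"color": PERFORMER_COLOR[performer],
            "brightness": brightness,
            "strobe": strobe}

print(light_state(2, 0.8))
# -> {'color': 'green', 'brightness': 102, 'strobe': 127}
```

In the actual performance this logic would live in the Max patch, with the three output values feeding the color, dimmer, and strobe faders for that performer’s light.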

Now I’ll explain our technical setup for integrating the lights.

4x Visage LED par64 Flat 12x8W 4in1 Black Lights
4x DMX cables (3-pin)
4x IEC cables
1x ADJ myDMX 2.0 Unit
1x USB connector

To properly connect all the equipment, there is a protocol that needs to be followed. I am using a MacBook Pro running OS X 10.8.3.

Each light operates on 7 “channels” of data: one channel for each color, plus brightness and effect. This means each light occupies a block of 7 consecutive channels.

Light 1 – Channels 1-7
Light 2 – Channels 8-14
Light 3 – Channels 15-21
Light 4 – Channels 22-28
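This addressing scheme is simple arithmetic: light n starts at channel (n − 1) × 7 + 1. A small Python sketch (my own illustration, not part of the performance setup) reproduces the table above:

```python
CHANNELS_PER_LIGHT = 7  # each fixture claims 7 consecutive DMX channels

def channel_range(light):
    """Return the (first, last) DMX channels claimed by a given light (1-4)."""
    first = (light - 1) * CHANNELS_PER_LIGHT + 1
    return first, first + CHANNELS_PER_LIGHT - 1

# Reproduce the addressing table
for light in range(1, 5):
    first, last = channel_range(light)
    print(f"Light {light} - Channels {first}-{last}")
```

The same formula tells you the starting address to dial in on each fixture’s MODE display.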

To assign a light to its starting channel, press MODE on the back until you see something like “d.001.” This indicates the starting channel; use the UP or DOWN buttons to move it to a different channel.

Once the lights are assigned to their appropriate channels, make sure that they are DAISY CHAINED together. This means that the output of the interface connects to DMX IN on light 1. DMX OUT of Light 1 connects to DMX IN of light 2. Connect all 4 lights in this way. Once this is set up, connect the interface (MYDMX2.0) to the computer.

In order to control the lights via the MYDMX2.0 interface, you need its control software. Download it here:

It’s quite tricky software to fully understand. Since we’re using MIDI to control the lights, we’re going to be using a standard minimal file for basic control. This is the file I’m working with:
Referenced on this University of Edinburgh Wiki article:

Fire up the software and open the file.

At the bottom, you have faders for each channel going to each light. As the lights are each assigned to 7 channels, each group of 7 channels (indicated by color) corresponds with one of the four lights. Read the Wiki article for details on each fader control.

Now, in order to control these faders with MIDI, you need to activate your IAC Driver in Audio MIDI Setup (Mac).

In Audio MIDI Setup, go to Window>Show MIDI Window. This brings up a window with all of the MIDI devices recognized by your computer. Double click the IAC Driver device and click the box “Device is online.” (This is necessary as the MYDMX2.0 interface is NOT recognized as a MIDI device).

Now exit Audio MIDI Setup.

Fire up your MIDI controller; I’ll be using Max 6. Using the noteout object, assign the output device as IAC Driver. This is your connection to myDMX 2.0. Prepend each stream of MIDI data (0 to 127) with a number to differentiate each channel to be received in myDMX 2.0.

To accomplish this, I connected a slider with values 0 to 127 to a “pak 1 0” object, changing the value in the second inlet. This produces a list of two numbers, with the first always being 1. Change this first number to create different “channels,” or ports, for your data to flow through when communicating with the MYDMX2.0 software.
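For reference, the two-number list from pak maps onto a raw MIDI note-on message: the first number becomes the note (identifying the fader) and the slider value becomes the velocity. The Python sketch below shows the bytes involved, assuming note-on on MIDI channel 1 (which is how I read the patch; myDMX only cares that the messages are distinguishable for MIDI Learn):

```python
def fader_message(fader_number, value):
    """Build a raw MIDI note-on (status, data1, data2) tuple where the
    note number distinguishes the fader, mirroring the "pak 1 0" list."""
    if not 0 <= value <= 127:
        raise ValueError("MIDI data bytes must be 0-127")
    NOTE_ON = 0x90  # note-on status byte, MIDI channel 1
    return (NOTE_ON, fader_number, value)

print(fader_message(1, 64))  # -> (144, 1, 64)
```

In Max, noteout assembles these bytes for you; the sketch just makes explicit what travels over the IAC bus.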

In the MYDMX2.0 software, find the fader you’d like to control with MIDI. Right click it and select MIDI Learn. Once you move the slider (connected to the pak object), the software will recognize it as the control for that fader.

Now you should have a basic understanding of how to go about assigning MIDI controls to the DMX unit. I found it fairly simple – despite having to use the proprietary software.

Cameron MacNair

Videos of Christian Wolff’s piece For 1, 2 or 3 people

Here are a couple of videos of performances of Christian Wolff’s piece “For 1, 2, or 3 people.” I found them insightful, as they demonstrate the concepts we have discussed.

I would like to point out the definitions of timbre, space, and sound environment relationships that these musicians are displaying. How can we create our own unique definition when we perform the piece?

Cameron MacNair

Developing Performance Relationships

Action Sound is becoming a project that challenges my performance techniques and interpretation of gestural based performances. As we’re currently realizing the work For 1, 2 or 3 people by Christian Wolff, discussions and rehearsals have broadened my sensibility in order to understand different types of performance relationships. I think these relationships are important when developing an idea or concept, with emphasis towards listening and responding to the sound environment within various constraints.

The performance relationships between the players, instruments, space, and environment are projected as actions, and each element needs to be treated with many considerations. Applying constraints is one way to get the most out of these relationships.

The process of developing the Action Sound project has led us to many different ideas and possible outcomes. We had originally considered realizing the Christian Wolff piece as a real-time audio system. The thought process behind the system – rather than the aesthetics itself – has helped me understand how I can get the most out of a “gestural” stream of data. Capturing sound in a way that allows amplification of the data, or creative bias, in consideration of how the digital system will respond, is a pipeline that I think is at the root of any digital system we decide to incorporate in this project. Refining this system will take a lot of time, because we need to understand how the system responds in order to get the most out of it.

As of now, I am focused on understanding the Christian Wolff piece. This means rehearsing, researching previous performances, and experimenting with constraints. It means stepping out of my performance comfort zone and developing a new skill set and level of understanding.

-Cameron MacNair

First encounters into action

Action and Sound. How are both related to each other? What is action? How is action mapped into a successful performance using digital means?
This blog post will just summarise my current thoughts on the various paths this project may lead down; to that end, I am going to write a few bullet points and elaborate on them throughout the project.

1) ACTION: Art as political action, as protest as socio-political relations
2) ACTION: as performers relationship
3) ACTION: as ‘notated’ instructions
4) ACTION: as mapping in digital systems
5) ACTION: as generative/listening/independent systems

For today I would like to talk a bit about indeterminate action scores and what I have actually realised this week about the relations the score itself can offer. Some of us are not familiar with open or graphic notation, nor have we all explored musicking in terms of purely listening to each other and acting only when certain events occur. We tried to discuss the score, some of us completely lost with the instructions Christian Wolff gives, some seeing the score as a kind of guide. What I realised during the process of ‘analysing’ the piece is that the analysis, or the ‘getting familiar’ with the score, already shapes a certain identity in the interpreter’s head. What we are essentially (maybe) facing are pre-performance instructions which, similar to a learned instrument, will be manifested in our consciousness and re-used during a performance. I therefore interpret Wolff as a learning process, a process of understanding distinct actions. Essentially we are not dealing with an indeterminate score that would be a momentary instruction, i.e. live notation, but with a determined, action-led performance. Funnily enough, this means that our actions are incredibly constrained (as also outlined by the score and performance instructions). As this is an interest of mine in general (constrained sets of mind), we will hopefully achieve a situation where our minds can really focus on listening to everyone’s actions and re-actions (as actions toward or against other actions). For now, and for the purpose of getting to know each other as well as the instructions by Wolff, it is fine to limit ourselves to discussing only the ‘score’ itself. Further, I think that in general we should remain within that limited space of focused musicking, rather than just ‘going and making some noise’, in order to learn and understand something from this way of musicking.
Also then we can put that experience into a broader understanding of what action means as a socio-political factor and how art and protest can be in fact one and the same thing, and not separated.

Hello DMSPers!

Welcome to the home of Action Sound 2014. I’m leaving it blank as it’s your space to present your work, ideas, influences or anything else that may be useful for you. Look at the About tab for details on the project. I look forward to working with you!

Jess Aslan