Lissajous and Forbidden Motion with PS3 Controller

The final version of the audio-visual performance system used for this performance expanded upon the Lissajous Organ presented in Submission1.  I developed a second audio-visual instrument named Forbidden Motion.  By running distorted, beat-based noise through a subtractive synthesis process similar to the Convolution Brothers' 'forbidden-planet' and finally through Audio Damage's EOS reverb, a rich, interesting sound was generated.


The high-frequency sounds in this clip are a result of this process:

A simple ioscbank was also implemented to generate dense clusters of sine waves. Lastly, the ability to degrade the audio signal allowed for dirty, crunchy sonorities in keeping with the aesthetics of our cave theme.
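The oscillator-bank idea is easy to sketch outside Max. This pure-Python fragment is an illustration of the principle, not the ioscbank external itself; the partial spacing and count are my own choices:

```python
# A sketch (not the actual ioscbank~ external) of an oscillator bank:
# many sine partials summed into one dense mono buffer.
import math

def oscbank(freqs, amps, sr=44100, dur=0.1):
    """Sum a bank of sine oscillators into a single mono buffer."""
    n = int(sr * dur)
    out = [0.0] * n
    for f, a in zip(freqs, amps):
        for i in range(n):
            out[i] += a * math.sin(2 * math.pi * f * i / sr)
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]   # normalize so the sum never clips

# 40 partials packed 7 Hz apart around 1 kHz give the dense sine-cloud character
buf = oscbank([1000 + 7 * k for k in range(40)], [1.0] * 40)
```

Degrading the result afterwards (bit reduction, downsampling) would then supply the crunchy sonorities mentioned above.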

I chose to use visuals typical of static on analog TVs for this part of my system.


Brightness controls were modulated by audio input, so these visuals responded directly to the audio output of this part of my system.


2 for 1 control

In developing a way to control my Lissajous Organ and Forbidden Motion together, I stumbled upon a system of control capable of handling 16 systems of equal or greater size than the ones I used.  By packaging the data coming from a controller and routing it efficiently, simple controls can be mapped to many levels of parameters.
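The package-and-route idea can be sketched in a few lines. This hypothetical Python fragment (the function, gate, and destination names are mine, not from the Max patch) shows one controller surface fanned out to many parameter destinations, with gates deciding which destination actually listens:

```python
# Hypothetical sketch of gated routing: one surface, many destinations,
# gates choose the active ones. All names are illustrative.
def route_gated(surface, value, gates, mappings):
    """Deliver one controller value to every open-gated destination."""
    return {dest: value
            for dest, src in mappings.items()
            if src == surface and gates.get(dest, False)}

gates = {"lissajous.freq": True, "noise.cutoff": False}
mappings = {"lissajous.freq": "left_stick_x", "noise.cutoff": "left_stick_x"}
print(route_gated("left_stick_x", 0.5, gates, mappings))  # → {'lissajous.freq': 0.5}
```

Flipping the gates remaps the same stick to a different subsystem, which is the "many levels of parameters" trick in miniature.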



Macro routing:

Example of routing

Inside R1 Buffer routes- Gated Buffer routing:

Lock-Level Buffer gates

An unexpected result of using a system structured in this fashion was the ability to combine both visual systems on the fly.


An important feature of this system of control was freedom from my computer screen.  This allowed for more gesture-driven and intimate interactions with ensemble members, like the ones seen here:

Unfortunately, the visuals I projected into the audience during this clip were not captured, but they were of the analog-TV type discussed earlier.

More detail about the structuring philosophies of this system can be found in my Submission3 blog post.

Audio-visual spatialization

I chose to use a projector capable of movement so as to utilize the space in which we performed.  Here you can see it projected onto the floor:

In addition to the audio spatialization, visual space was also planned out so as to accentuate the performance space and leave room for each other's visuals to stand out.

Problems encountered

Unresolved differences regarding simple workplace etiquette led to overwhelming emotional stress and finally extreme verbal harassment the day before our final performance.  Although this course appears structured hierarchically so that supervisors can be consulted when needed, no coherent plan for conflict resolution emerged from this structure.  Even when approached multiple times with the same issue, the repetitive advice I received from my supervisor regarding our issues of diversity was, “You cannot make anybody do anything.”  This advice efficiently dissolved the fragile bonds that existed between individuals from different backgrounds.  I lament not being able to overcome these issues and believe the paradigm of conflict resolution in the DMSP course needs to be reconsidered and restructured.

Hierarchical System Design in Live Audio-Visual Improvisation

This article explores the use of a hierarchical system to interact with audio-visual systems via digital mappings interfaced through a gaming controller.  It investigates a system I developed called HLSC (Hierarchical Lindenmayer-inspired Structure of Control), featured in the Audio Visual Ensemble 2014 project as part of the Digital Media Studio Project course at The University of Edinburgh.  It will not investigate hardware controllers in general, but will acknowledge the unique capabilities that arise when pairing gaming devices with real-time audio-visual systems.  The device used for the Audio Visual Ensemble project is a PS3 DualShock controller.  The methodology of controller-assisted performance used by the HLSC infrastructure can be applied to any comparable hardware controller.

A note on gaming devices

Most modern gaming devices are designed for spontaneous, multi-dimensional interactions in a virtual gaming world.  For this reason, they are naturally suited to controlling similarly complex interactions in the audio-visual realm. Todd Winkler (2001) writes about computer controllers:

“Thought should be given to the kinds of physical gestures used to send data to the computer, and how specific movements can best serve a composition….Playing new sounds with old instruments only makes sense if the old technique is valid for the composition” (p.37)

With the computer's ability to create sound without a physical medium, controller decisions can and should be tailored to specific performance situations.  There are documented ways of using gaming controllers within the Max/MSP programming environment (Jensenius 2007, p. 106).  However, most of these utilize simple, linear systems aimed at users with a “plug-and-play” mentality.  While convenient for immediate use as real-time performance tools, these systems have limitations when used to control multiple systems in creative and improvisatory ways. Jensenius elaborates:

“A common solution to master such a complex system, i.e. consisting of many different and connected parts, is to create presets that define a set of parameters that work well together. These presets may then be used for further exploration of the sound synthesis model. However, the result is often that many people only use the presets, and never actually control the sound model more than changing from one preset to another. This again makes for a static and point-based type of control” (p. 101).

This type of static control which relies heavily on presets was not conducive to the interactive aesthetic of the Audio Visual Ensemble 2014.  I developed the HLSC system to separate myself from the computer screen, control complex systems, and facilitate gesture-driven interaction with ensemble members.
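For contrast, the linear "plug-and-play" baseline amounts to scaling each raw axis straight into one parameter. In this sketch I assume the common case of an unsigned-byte axis (0-255) with a small deadzone; both values are my illustrative assumptions, not taken from a specific driver:

```python
# Baseline one-to-one mapping: each raw axis byte scales directly to one
# parameter, with a small center deadzone. The 0-255 range is assumed.
def axis_to_unit(raw, deadzone=0.05):
    """Map a raw 0-255 axis byte to the range -1.0..1.0."""
    x = (raw - 127.5) / 127.5
    return 0.0 if abs(x) < deadzone else x
```

An HLSC-style system would still need this scaling stage, but inserts hierarchical routing between it and the parameter destinations.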

Inspiration for model

The HLSC system was originally inspired by the structures produced by Lindenmayer systems; these tree-like hierarchies are a good way to visualize how the control system works.

Courtesy of Wikipedia:

Starting from level n=0, the diagram shows that each level contains more and more elements.  Although Lindenmayer systems were originally designed to model organic growth, I imagined these levels as a parent-child system of control.  For example, the A and B in level n=1 control the elements in level n=2, which in turn control the elements in level n=3, and so on.

To apply this structure to a system of control, the ability to go from level to level was paramount.  By moving through levels, macro and micro controls can be accessed by the same device.  Further, the self-similar nature of the Lindenmayer structure allows different levels to have familiar shapes of control.  For the application of this structure to the PS3 controller, I utilized button combinations and bumpers to access and gate data flow.
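For reference, the tree diagram above comes from Lindenmayer's original rewriting system (A → AB, B → A). A few lines of Python, separate from the performance patch, reproduce its levels and make the parent-child reading concrete:

```python
# Lindenmayer's "algae" system: each pass rewrites every symbol in parallel.
def lsystem(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

for n in range(4):
    print(n, lsystem("A", {"A": "AB", "B": "A"}, n))
# 0 A / 1 AB / 2 ABA / 3 ABAAB — each symbol at level n is the
# "parent" of its expansion at level n+1
```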

Controller application

Please refer to this diagram from the official PS3 website for exact controller surfaces referenced in this section.

On this hardware controller, one stream of the HLSC system taken to its end can be represented as such:



Only one element of each level is expanded in this diagram, but each element of the same level could be expanded in an identical way.  A more conceptual representation of this structure is as follows:


This representation shows a multi-level, hierarchical structure similar to the Lindenmayer system seen earlier.  To move between levels and gate information for specific settings, a combination of home and bumper is used at the Top-Level, bumpers gate information at the Lock-Level, and either alt or start at the Surface-Level accesses alternative, lower-level parameters.
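The modifier logic just described boils down to a small decision function. This Python sketch uses my own names for the buttons and levels and leaves out the real patch's gating objects:

```python
# Resolve held modifier buttons into the level a surface event routes to.
# Button and level names are illustrative, not the patch's actual labels.
def current_level(home, l_bumper, r_bumper, alt):
    if home and (l_bumper or r_bumper):
        return "top"          # home + bumper: Top-Level branch selection
    if l_bumper or r_bumper:
        return "lock"         # bumper alone: Lock-Level gating
    if alt:
        return "surface-alt"  # alt/start: alternative Surface-Level params
    return "surface"          # no modifier: plain Surface-Level control
```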

For pictures of the patch and more details about its use in performance, please refer to my Submission2 blog post.  To download the interface used to connect and package data from my PS3 DualShock controller, download the patches from this link:

Complexity and creativity

By using HLSC, levels of one-to-one and one-to-many interactions can be accomplished without physically touching a computer.  With the aesthetic goal of live interactivity for my performance with the Audio Visual Ensemble 2014, I needed to interact with multi-dimensional parameters of two visual systems, two musical systems, and combinations of these systems in real-time.

Despite working with so many systems and controls simultaneously, I used less than 10% of the possible controls in the HLSC system (only two of the “Top-Level” branches were partially utilized).  However, the vastness of this system allowed me to make mapping decisions based on artistic intent instead of system limitations.  For example, I mapped orientation data to gestural control during more intimate sections where dialogue was especially important.

Although parameters were scrupulously mapped in a logical arrangement, at times I found myself forgetting which exact surfaces of my controller accessed which parameters.  However, the resulting unpredictability was often aesthetically rewarding.  Electronic composer Brian Eno dreams of getting lost in a complex, self-generating system similar to HLSC:

“But what if the synthesizer just ‘grew’ programs? If you pressed a ‘randomize’ button which then set any of the thousand ‘black-box’ parameters to various values and gave you sixteen variations. You listen to each of those and then press on one or two of them—your favourite choices. Immediately, the machine generates 16 more variations based on the ‘parents’ you’ve selected. You choose again. And so on. The attraction of this idea is that one could navigate through very large design spaces without necessarily having any idea at all of how any of these things were being made” (Dahlstedt, 2007)

This passage demonstrates the creative potential of a complex system even when a performer is lost.  With simpler one-to-one mapping, all combinations of interaction can be quickly exhausted, but with a complex system like HLSC, the possibility for unexpected interactions is much greater and therefore can foster creative results.


The HLSC system is one that I will be utilizing and further developing in the future. Its multifunctional, high-level control allows for more meaningful audio-visual interactions than static presets.  The ability to negotiate and improvise with many nested levels of control in real time makes it a valuable performance tool.  Future application to the API of DAWs such as Ableton Live 9 or Bitwig could allow an average game controller to be used as a powerful production tool.

An interesting study for the future would be to develop rules for Lindenmayer systems of control based on a particular controller's specifications.  By formulating and testing equations based on the number of surfaces and the dimensions in which those surfaces function, perhaps an HLSC-like system could be generated autonomously without any prior experience with a specific controller.
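As a toy version of such an equation (my own illustration, not a result from the project): if a controller exposes S usable surfaces and every element of one level can branch into S elements of the next, an L-level hierarchy already addresses S**L leaf parameters.

```python
# Toy capacity formula for a hypothetical HLSC-like hierarchy:
# S surfaces, each level branching S ways, gives S**L leaf parameters.
def leaf_parameters(surfaces, levels):
    return surfaces ** levels

print(leaf_parameters(16, 3))  # → 4096
```

Even three levels on sixteen surfaces outgrow what any performer could exhaust, which matches the under-10%-utilization figure reported above.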



Dahlstedt, Palle. 2007. Evolution in Creative Sound Design. In Evolutionary Computer Music. Eduardo Reck Miranda and John Al Biles, eds. Pp. 79–99. London: Springer. Accessed April 24, 2014.

Jensenius, Alexander Refsum. 2007. Action-Sound: Developing Methods and Tools to Study Music-Related Body Movement. Accessed April 24, 2014.

L-System. 2014. Wikipedia, the Free Encyclopedia. Accessed April 25, 2014.

Sony. n.d. [Diagram of PS3 controller]. Retrieved from

Winkler, Todd. 2001. Composing Interactive Music: Techniques and Ideas Using Max. New Ed edition. Cambridge, Mass.: MIT Press.

Audio-Visual Instrument: Lissajous Beat Organ

Study:  Color, Sound, Light, and Lissajous Figures


My goal was to create an instrument that artistically actualizes the phenomenon of phase interference, which can be heard as auditory beats and seen as Lissajous curves.  I initially added color in a way historically described by one of my favorite composers, Alexander Scriabin. After experimentation, however, I found a color and shape palette that is more true to my own musical-visual experiences.

Because of our group's discussions about the importance of a final performance that is more interactive than passive, I decided to use controllers to modify audio and visual components in real time.  In this version, a Gametrak (piloted by Jessmine XinYan Zhang) is used to control the camera angles and object rotation (based on a patch from ), and a knock-off “Air-Flow” PS3 controller (bought for 2 pounds from a car-boot sale in the Omni-Center in Edinburgh) is used to control sonic elements and visual palettes.

Audio-Visual Interpretation

Here is the screen capture and audio from the computer from the February 26, 2014 performance:

Overall I am pleased with the interaction between sound and visuals.  The live performance aspects, however, have room for improvement.  Here is the live performance from the Alison House Atrium from February 26, 2014:

Overall, the sound/visual interaction was successful, but the performers are too dark to be seen!  It’s nearly impossible from this video to see the intimate interactions between performers, sounds, and visuals.  For a next performance, better lighting will be used.

Technical Information

After searching the C74 blog (the Max/MSP website) for a way to create Lissajous curves, I found an efficient way to render a two-dimensional figure using Jitter and OpenGL.  After carefully studying a patch by Oli Larkin, I managed to make this shape three-dimensional, with the Z dimension and brightness modified by live sound input.  This is my hacked version, not in presentation mode:


You can see the Gametrak patch mentioned earlier in the top left corner.
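For reference, the figure itself follows the standard parametric Lissajous equations. This Python sketch, separate from the Jitter patch, computes the 2-D points; the Z dimension and brightness are left to live audio input as described:

```python
# Standard parametric Lissajous curve: x and y are sines at two
# frequencies; the frequency ratio and phase set the figure's shape.
import math

def lissajous(freq_x, freq_y, phase=math.pi / 2, n=1000):
    """Return n (x, y) points of x = sin(2π·fx·t + φ), y = sin(2π·fy·t)."""
    pts = []
    for i in range(n):
        t = i / n
        pts.append((math.sin(2 * math.pi * freq_x * t + phase),
                    math.sin(2 * math.pi * freq_y * t)))
    return pts

circle = lissajous(1, 1)   # equal frequencies at 90° phase trace a circle
```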

Some data mapping was necessary to allow for an aesthetically pleasing sound-visual interaction, but most of these mapping decisions were based on psychoacoustic boundaries.  For example, Lissajous figures look most interesting (to me) when the frequency drawing them is under 20 Hz, but human hearing only starts at 20 Hz.  In addition to this perceptual consideration, I noted that Lissajous figures become fabric-like as the two sine waves generating the figure separate by more than about 5-10 Hz (depending on the initial frequency).  But with frequency separation this great, human hearing segregates the sounds into separate tones instead of a single timbre.  I mapped data accordingly so as not to detract from either the sonic or the visual aesthetic of the instrument.
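As one illustration of this kind of perceptual mapping (the exact ranges and scaling below are my own sketch, not the patch's actual curves), a single control value can drive both an audible sine pair and a separate sub-20 Hz drawing frequency, with the detune capped near the fusion boundary discussed above:

```python
# Sketch of a dual mapping: one 0-1 control drives an audible sine pair
# (kept above the 20 Hz hearing limit, detune capped at ~5 Hz so the pair
# fuses into one beating timbre) and a sub-20 Hz visual drawing frequency.
def map_control(x, max_detune=5.0):
    audio_f1 = 20.0 + x * 480.0           # audible carrier, 20-500 Hz
    audio_f2 = audio_f1 + x * max_detune  # beat rate grows with the control
    visual_freq = x * 19.0                # figures look best under 20 Hz
    return audio_f1, audio_f2, visual_freq
```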

As mentioned above, the sonic and visual components are based on the phenomenon of phase cancellation that occurs between two slightly out-of-tune sine waves.  There are three sound generators in this instrument, each of which controls the visuals via shared data input or mapped envelope followers.  This is my performance patch:
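The beating itself is nothing more than the sum of two slightly detuned sines, whose amplitude envelope rises and falls at the difference frequency. A minimal sketch:

```python
# Two detuned sines sum to a carrier at the mean frequency whose amplitude
# beats at |f1 - f2| Hz: sin(a) + sin(b) = 2·cos((a-b)/2)·sin((a+b)/2).
import math

def beat_signal(f1, f2, sr=8000, dur=1.0):
    n = int(sr * dur)
    return [math.sin(2 * math.pi * f1 * i / sr) +
            math.sin(2 * math.pi * f2 * i / sr) for i in range(n)]

sig = beat_signal(220.0, 223.0)   # beats three times per second
peak = max(abs(s) for s in sig)   # envelope approaches 2.0 when in phase
```

An envelope follower tracking that rise and fall is what drove the visual brightness in my patch.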


Here is the patch that is processing the audio:


You can see the PS3 controller patch that I am using at top of the screen.

The only extra sound effect used in this version (other than the exploitation of phase cancellation in sine waves triggering low-pass, high-resonance filters in a rhythmic way) is reverb.  Specifically, I am using a series of Max/MSP externals called HISSTools.  These externals allow me to take incoming sound and convolve it through multiple reverbs.  Although many impulses are loaded, the main impulse response I am using is one that I personally recorded at the University of Edinburgh pool in the fall.
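Convolution reverb reduces to a single operation: the dry signal convolved with the recorded impulse response. A direct-form pure-Python sketch of that operation (the patch itself relies on the HISSTools externals for efficient real-time convolution):

```python
# Direct-form convolution: out[n] = sum_k dry[k] * ir[n - k].
# A unit click passed through the IR returns the IR itself, i.e. the
# recorded reverb tail of the space.
def convolve(dry, ir):
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n, d in enumerate(dry):
        for k, h in enumerate(ir):
            out[n + k] += d * h
    return out

wet = convolve([1.0, 0.0, 0.0], [0.5, 0.25, 0.125])
```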


Final Notes

By using controllers, the Lissajous Beat Organ takes on an exciting life of its own!  Even though I sometimes forget my own controls, the way in which the PS3 controller is mapped allows for new sounds to be created from a wide array of gestures.  The sonic and visual textures that can be created in real time would be nearly impossible to achieve with only a mouse or track pad.

For future versions, controls will be mapped without using global send and receive objects.  In this way, I will have more freedom to change my sounds and visuals throughout the piece.  In addition, as rehearsals take place, the sonic and visual content of this instrument will be modified to meet the needs of the group as a whole.  Perhaps the Gametrak will control sonic elements, or audio generated by another performer will be modified in my instrument.  I look forward to seeing how this instrument will work to create a dialogue and eventual performance with the other members of the Audio-Visual Ensemble in the coming weeks!

Beats and Lissajous Figures

Had a breakthrough last night!  I have been trying to figure out how to recreate the analog audio-visual systems explored by F.C. Judd (UK audio-visual artist) in Chromasonics.

While reading a book that Prof Michael Edwards lent me last week, I figured out that many of the shapes are Lissajous figures that move through space.

This links up with the audio-visual synthesizer I have been building that explores beats, because the most beautiful (in my opinion) Lissajous figures are 2D/3D representations of beats!  The colors of the figures can be painted using the Scriabin color palette or any other palette representing an artist's color vision.

Max patches soon to come!  I'll be rendering everything in OpenGL so that we can potentially incorporate these structures into a group visualization.