Lissajous and Forbidden Motion with PS3 Controller

The final version of the audio-visual system used for this performance expanded upon the Lissajous Organ presented in Submission 1.  I developed a second audio-visual instrument named Forbidden Motion.  By running distorted, beat-based noise through a subtractive synthesis process similar to the Convolution Brothers’ ‘forbidden-planet’ and finally through Audio Damage’s EOS reverb, I generated a rich, interesting sound.
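As a rough sketch of this signal chain (not the actual Max patch: the reverb stage is omitted, and every parameter value here is my own illustrative choice), the idea looks something like this in Python:

```python
# Minimal sketch of the Forbidden Motion chain: beat-based noise bursts,
# soft-clip distortion, then a subtractive (low-pass) stage. The plain
# Butterworth here stands in for the patch's high-resonance filters.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # sample rate in Hz

def noise_bursts(bpm=120.0, beats=8, duty=0.25):
    """White-noise bursts on a beat grid (duty = fraction of each beat)."""
    beat_len = int(SR * 60.0 / bpm)
    burst_len = int(beat_len * duty)
    out = np.zeros(beat_len * beats)
    for b in range(beats):
        start = b * beat_len
        out[start:start + burst_len] = np.random.uniform(-1, 1, burst_len)
    return out

def subtractive(x, cutoff=800.0):
    """Carve the noise down with a 4th-order low-pass filter."""
    b, a = butter(4, cutoff / (SR / 2), btype="low")
    return lfilter(b, a, x)

dirty = np.tanh(3.0 * noise_bursts())  # distortion via soft clipping
dark = subtractive(dirty)              # subtractive stage (reverb would follow)
```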

Spectral-Motion

The high-frequency sounds in this clip are a result of this process:

vimeo.com/92948778

A simple ioscbank~ was also implemented to generate dense clusters of sine waves. Lastly, the ability to degrade the audio signal allowed for dirty, crunchy sonorities suited to the aesthetics of our cave theme.
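Conceptually, an ioscbank~-style bank just sums many sine partials, each with its own frequency and amplitude. A minimal sketch, with made-up frequencies rather than values from the patch:

```python
# Sum a dense cluster of sine oscillators, normalized to avoid clipping.
import numpy as np

SR = 44100

def oscbank(freqs, amps, dur=1.0):
    """freqs and amps are equal-length sequences of partials."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        out += a * np.sin(2 * np.pi * f * t)
    return out / max(len(freqs), 1)

# Fifty partials scattered around 1 kHz make a dense, shimmering cluster:
freqs = 1000 + np.random.uniform(-200, 200, 50)
cluster = oscbank(freqs, np.ones(50))
```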

I chose to use visuals typical of static on analog TVs for this part of my system.

AnalogVisuals

By modulating brightness controls via audio input, these visuals responded to the audio output of this part of my system.

Analog-visuals

2 for 1 control

In developing a way to control my Lissajous Organ and Spectral Motion together, I stumbled upon a system of control with which I could manage 16 systems of equal or greater size than the ones I used.  By packaging the data coming out of a controller and routing it efficiently, simple controls can be mapped to many levels of parameters.

packaging:

Packaging_HI_Data_For_Heirachy

Macro routing:

Example of routing

Inside R1 Buffer routes- Gated Buffer routing:

Lock-Level Buffer gates
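As a hedged sketch of the packaging-and-gating idea pictured above (the bumper names follow the DualShock; the branch and parameter names are my own illustration, not code from the patch):

```python
# Every control surface is packed into one labelled (surface, value) stream;
# Lock-Level bumper gates decide which part of the hierarchy receives it.
current_branch = "lissajous"            # chosen at the Top-Level
gates = {"R1": False, "L1": False}      # Lock-Level bumper gates

def on_bumper(name, pressed):
    """Bumpers open and close the Lock-Level gates."""
    gates[name] = pressed

def send(branch, parameter, value):
    print(f"{branch} <- {parameter} = {value}")

def route(surface, value):
    """Packaged data only reaches a level whose gate is currently open."""
    if gates["R1"]:
        send(current_branch, "macro/" + surface, value)
    if gates["L1"]:
        send(current_branch, "micro/" + surface, value)

on_bumper("R1", True)
route("left_stick_x", 0.42)   # -> lissajous <- macro/left_stick_x = 0.42
```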

An unexpected result of using a system structured in this fashion was the ability to combine both visual systems on the fly.

VisualsTogether

An important feature of this system of control was freedom from my computer screen.  This allowed for more gesture-driven and intimate interactions with ensemble members, like the ones seen here:

vimeo.com/92936792

Unfortunately, the visuals I projected into the audience during this clip were not captured, but they were the analog-TV style discussed earlier.

More detail about the structuring philosophies of this system can be found in my Submission3 blog post.

Audio-visual spatialization

I chose to use a projector capable of movement so as to utilize the space in which we performed.  Here you can see its image projected onto the floor:

vimeo.com/92936795

In addition to the audio spatialization, visual space was also planned out so as to accentuate the performance space and leave room for each other’s visuals to stand out.

vimeo.com/92947352

Problems encountered

Unresolved differences regarding simple workplace etiquette led to overwhelming emotional stress and, finally, extreme verbal harassment the day before our final performance.  Although this course appears structured hierarchically so that supervisors can be consulted when needed, no coherent plan for conflict resolution emerged from this structure.  Even when approached multiple times with the same issue, the repeated advice I received from my supervisor regarding our issues of diversity was, “You cannot make anybody do anything.”  This advice efficiently dissolved the fragile bonds that existed between individuals of different backgrounds.  I lament not being able to overcome these issues and believe the conflict-resolution paradigm of the DMSP course needs to be reconsidered and restructured.

Hierarchical System Design in Live Audio-Visual Improvisation

This article explores the use of a hierarchical system to interact with audio-visual systems via digital mappings interfaced through a gaming controller.  It investigates a system I developed called HLSC (Hierarchical Lindenmayer-inspired Structure of Control), featured in the Audio Visual Ensemble 2014 project as part of the Digital Media Studio Project course at The University of Edinburgh.  It does not survey hardware controllers in general, but it acknowledges the unique capabilities that emerge when pairing gaming devices with real-time audio-visual systems.  The device used for the Audio Visual Ensemble project is a PS3 DualShock controller, but the methodology of controller-assisted performance used by the HLSC infrastructure can be applied to any comparable hardware controller.

A note on gaming devices

Most modern gaming devices are designed for spontaneous, multi-dimensional interactions in a virtual gaming world.  For this reason, they are naturally suited to controlling similarly complex interactions in the audio-visual realm. Todd Winkler (2001) writes about computer controllers:

“Thought should be given to the kinds of physical gestures used to send data to the computer, and how specific movements can best serve a composition….Playing new sounds with old instruments only makes sense if the old technique is valid for the composition” (p.37)

With the computer’s ability to create sound without a physical medium, controller decisions can and should be tailored to specific performance situations.  There are documented ways of using gaming controllers within the Max/MSP programming environment (Jensenius 2007, p. 106).  However, most of these employ simple, linear systems aimed at users with a “plug-and-play” mentality.  While convenient for immediate use as real-time performance tools, these systems have limitations when used to control multiple systems in creative and improvisatory ways. Jensenius elaborates:

“A common solution to master such a complex system, i.e. consisting of many different and connected parts, is to create presets that define a set of parameters that work well together. These presets may then be used for further exploration of the sound synthesis model. However, the result is often that many people only use the presets, and never actually control the sound model more than changing from one preset to another. This again makes for a static and point-based type of control” (p. 101).

This type of static control, which relies heavily on presets, was not conducive to the interactive aesthetic of the Audio Visual Ensemble 2014.  I developed the HLSC system to separate myself from the computer screen, control complex systems, and facilitate gesture-driven interaction with ensemble members.

Inspiration for model

The control system was originally inspired by the structures produced by Lindenmayer systems; these tree-like hierarchies are a good way to visualize how it works.

Courtesy of Wikipedia: http://en.wikipedia.org/wiki/L-system

Starting from level n=0, we see from this diagram that each level contains more and more elements.  Although L-systems were originally designed to model organic growth, I imagined these levels as a parent-child system of control.  For example, the A and B in level n=1 control the elements in level n=2, which in turn control the elements in level n=3, and so on.
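To make the levels concrete, here is the algae L-system from the referenced Wikipedia page (A → AB, B → A) expanded in a few lines of Python:

```python
# Rewrite the axiom once per level; each level "contains" the next.
rules = {"A": "AB", "B": "A"}

def expand(axiom, levels):
    s = axiom
    for n in range(levels + 1):
        print(f"n={n}: {s}")
        s = "".join(rules.get(c, c) for c in s)

expand("A", 3)
# n=0: A
# n=1: AB
# n=2: ABA
# n=3: ABAAB
```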

To apply this structure to a system of control, the ability to go from level to level was paramount.  By moving through levels, macro and micro controls can be accessed by the same device.  Further, the self-similar nature of the Lindenmayer structure allows different levels to have familiar shapes of control.  For the application of this structure to the PS3 controller, I utilized button combinations and bumpers to access and gate data flow.

Controller application

Please refer to this diagram from the official PS3 website for exact controller surfaces referenced in this section.

PS3_Controller

On this hardware controller, one stream of the HLSC system taken to its end can be represented as follows:

DMSP_Micro


Only one element of each level is expanded in this diagram, but every element at the same level could be expanded in an identical way.  A more conceptual representation of this structure is as follows:


This representation shows a multi-level, hierarchical structure similar to the Lindenmayer system seen earlier.  To move between levels and gate information for specific settings, a combination of home and a bumper is used at the Top-Level, bumpers gate information at the Lock-Level, and either alt or start at the Surface-Level accesses alternative, lower-level parameters.
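A minimal sketch of that navigation logic, using my own button labels for the controller surfaces rather than identifiers from the patch:

```python
# Home + bumper steps between Top-Level branches, a held bumper opens the
# Lock-Level gate, and alt/start flips the Surface-Level parameter set.
class HLSCNavigator:
    def __init__(self, branches):
        self.branches = branches    # Top-Level systems (audio, visual, ...)
        self.top = 0                # index of the active branch
        self.lock_open = False      # Lock-Level gate state
        self.alt = False            # Surface-Level alternative set

    def press(self, *buttons):
        held = set(buttons)
        if "home" in held and "R1" in held:
            self.top = (self.top + 1) % len(self.branches)
        elif "R1" in held:
            self.lock_open = True
        if "alt" in held or "start" in held:
            self.alt = not self.alt

    def release(self, button):
        if button == "R1":
            self.lock_open = False

    def state(self):
        return (self.branches[self.top], self.lock_open, self.alt)

nav = HLSCNavigator(["lissajous", "spectral", "combined"])
nav.press("home", "R1")   # step to the "spectral" branch
nav.press("R1")           # open the Lock-Level gate
print(nav.state())        # ('spectral', True, False)
```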

For pictures of the patch and more details about its use in performance, please refer to my Submission2 blog post.  To download the interface used to connect and package data from my PS3 DualShock controller, download the patches from this link:

www.dropbox.com/s/vwqj59aahujeccm/ControlSys.zip

Complexity and creativity

By using HLSC, levels of one-to-one and one-to-many interactions can be accomplished without physically touching a computer.  With the aesthetic goal of live interactivity for my performance with the Audio Visual Ensemble 2014, I needed to interact with multi-dimensional parameters of two visual systems, two musical systems, and combinations of these systems in real time.

Despite working with so many systems and controls simultaneously, I used less than 10% of the possible controls in the HLSC system (only two of the “Top-Level” branches were even partially utilized).  However, the vastness of this system allowed me to make mapping decisions based on artistic intent instead of system limitations.  For example, I mapped orientation data to gestural control during more intimate sections, where dialogue is especially important.

Although parameters were scrupulously mapped into a logical arrangement, at times I found myself forgetting which surfaces of my controller accessed which parameters.  However, these unpredictable results were often aesthetically rewarding.  Electronic composer Brian Eno dreams of getting lost in a complex, self-generating system similar to HLSC:

“But what if the synthesizer just ‘grew’ programs? If you pressed a ‘randomize’ button which then set any of the thousand ‘black-box’ parameters to various values and gave you sixteen variations. You listen to each of those and then press on one or two of them—your favourite choices. Immediately, the machine generates 16 more variations based on the ‘parents’ you’ve selected. You choose again. And so on. The attraction of this idea is that one could navigate through very large design spaces without necessarily having any idea at all of how any of these things were being made” (Dahlstedt, 2007)

This passage demonstrates the creative potential of a complex system even when a performer is lost.  With simpler one-to-one mapping, all combinations of interaction can be quickly exhausted, but with a complex system like HLSC, the possibility of unexpected interactions is much greater and can therefore foster creative results.
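A toy version of the scenario Eno describes, with an assumed parameter count and mutation width, might look like this:

```python
# Breed sixteen variations of a parameter vector from the performer's picks.
import random

N_PARAMS = 12   # stand-in for the "thousand 'black-box' parameters"

def randomize():
    return [random.random() for _ in range(N_PARAMS)]

def mutate(parent, width=0.1):
    """Child = parent plus small random offsets, clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + random.uniform(-width, width)))
            for p in parent]

def next_generation(parents, size=16):
    return [mutate(random.choice(parents)) for _ in range(size)]

generation = [randomize() for _ in range(16)]
favourites = [generation[3], generation[11]]   # the performer's two choices
generation = next_generation(favourites)       # sixteen new variations
```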

Conclusion

The HLSC system is one that I will continue to utilize and develop in the future. Its multifunctional, high-level control allows for more meaningful audio-visual interactions than static presets.  The ability to negotiate and improvise with many nested levels of control in real time makes it a valuable performance tool.  Future application to the APIs of DAWs such as Ableton Live 9 or Bitwig could allow an average game controller to be used as a powerful production tool.

An interesting future study would be to develop rules for Lindenmayer systems of control based on a particular controller’s specifications.  By formulating and testing equations based on the number of surfaces and the dimensions in which those surfaces operate, perhaps an HLSC-like system could be generated autonomously, without any prior experience with a specific controller.


References

Dahlstedt, Palle. 2007. Evolution in Creative Sound Design. In Evolutionary Computer Music. Eduardo Reck Miranda and John Al Biles, eds. Pp. 79–99. London: Springer. link.springer.com/chapter/10.1007/978-1-84628-600-1_4, accessed April 24, 2014.

Jensenius, Alexander Refsum. 2007. Action-Sound: Developing Methods and Tools to Study Music-Related Body Movement. www.duo.uio.no//handle/10852/27149, accessed April 24, 2014.

L-System. 2014. Wikipedia, the Free Encyclopedia. en.wikipedia.org/w/index.php?title=L-system&oldid=605470900, accessed April 25, 2014.

Sony. n.d. [Diagram of PS3 controller]. Retrieved from support.us.playstation.com/app/answers/detail/a_id/960/related/1/session/L2F2LzEvdGltZS8xMzk4MzgzOTUwL3NpZC9XUUNrQ0RTbA%3D%3D

Winkler, Todd. 2001. Composing Interactive Music: Techniques and Ideas Using Max. New ed. Cambridge, Mass.: MIT Press.

Draft of “Score” for Performance

Hello team!

Here is the tentative plan for our performance.  Shuman and Timø specifically expressed interest in having a solid plan so they can ‘dial in’ and/or make presets for different sections.  Let me know if there are any glaring mistakes, and I will fix them.  Otherwise, read it over and come to the start of next rehearsal with some thoughts and/or concerns about how we can improve our performance.

Gameplan_March18

Thank you!

Russell

New System With Passage Readings

After a breakdown with my PS3 controller (the £2 car-boot-sale controller stopped responding! 🙁), I’m back in action with a video game controller that has a gyroscope in it!  Now I really don’t need a gametrak to get gestural control.

As I mentioned before, I’m going to incorporate three different audio types with corresponding visuals.

The first will be what you saw in Submission 1, with slight modifications for better spatialization and more rhythmic possibilities.

The second will be the audio patch that I made for Jessamine’s project, but I will be controlling it with the PS3 controller.  I’ll use minimal black-and-white visuals (similar to Marco’s TVs) for this one, as I am hoping to respond to Jessamine and Shuman’s visuals in real time and do not want to detract from them.  It will incorporate some of Timø’s sound files and Marco’s IRs.  This one will have a little of everybody!

The third will be some synthesized speech readings of Plato’s “Allegory of the Cave,” as that is our decided theme.  As discussed in the group, it would be interesting to go from digital to analog, or analog to digital, so I’m hoping to record myself reading passages and transition between the two.  I don’t know yet if I want the speech to be recognizable or just noise.  I think I will have solid colors for this one, leading to blinding white as we leave the cave.

Here is an example of some of the digital speech.  It’s using aka.speech, so it sounds EXTRA digital (which is what I am going for).

13_BIG2.aif      

14_BIG3.aif

What are your thoughts?


Audio-Visual Instrument: Lissajous Beat Organ

Study:  Color, Sound, Light, and Lissajous Figures

Concept

My goal was to create an instrument that artistically actualizes the phenomenon of phase interference, which can be heard as auditory beats and seen as Lissajous curves.  I initially added color in a way historically described by one of my favorite composers, Alexander Scriabin. After experimentation, however, I found a color and shape palette truer to my own musical-visual experiences.

Because of our group’s discussions about the importance of a final performance that is more interactive than passive, I decided to use controllers to modify audio and visual components in real time.  In this version, a gametrak (piloted by Jessmine XinYan Zhang) controls the camera angles and object rotation (based on a patch from x37v.com), and a knock-off “Air-Flow” PS3 controller (bought for £2 at a car-boot sale in the Omni Centre in Edinburgh) controls sonic elements and visual palettes.

Audio-Visual Interpretation

Here are the screen capture and computer audio from the February 26, 2014 performance:

www.youtube.com/watch?v=x3x4SWRVFM0&feature=youtu.be

Overall, I am pleased with the interaction between sound and visuals.  The live performance aspects, however, have room for improvement.  Here is the live performance from the Alison House Atrium on February 26, 2014:

www.youtube.com/watch?v=YZDpTjCPmsQ&feature=youtu.be

Here too, the sound/visual interaction was successful, but the performers are too dark to be seen!  It’s nearly impossible in this video to see the intimate interactions between performers, sounds, and visuals.  For the next performance, better lighting will be used.

Technical Information

After searching for a way to create Lissajous curves on the C74 blog (the Max/MSP website), I found an efficient way to render a two-dimensional figure using Jitter and OpenGL.  After carefully studying a patch from Oli Larkin, I managed to make this shape three-dimensional, with the Z dimension and brightness modulated by live sound input.  This is my hacked version, not in presentation mode:

Visuals

You can see the gametrak patch mentioned earlier in the top left corner.

Some data mapping was necessary to allow for an aesthetically pleasing sound and visual interaction, but most of these mapping decisions were based on psychoacoustic boundaries.  For example, Lissajous figures look most interesting (to me) when the frequencies drawing them are under 20 Hz, but human hearing only starts at 20 Hz.  In addition to this perceptual consideration, I noted that Lissajous figures become fabric-like as the two sine waves generating them separate by more than about 5-10 Hz (depending on the initial frequency).  But with frequency separation this great, human hearing segregates the sounds into separate tones instead of a single timbre.  I mapped data accordingly so as not to detract from either the sonic or the visual aesthetic of the instrument.
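A hedged sketch of that compromise: one control value drives both the audible sine pair and a scaled-down pair for drawing, with the detune capped inside the fusion range mentioned above. The ranges follow the text; the specific scaling is my own illustration.

```python
def map_control(x):
    """x in [0, 1] -> ((audio freqs in Hz), (drawing freqs in Hz))."""
    base = 100.0 + x * 900.0   # audible base tone between 100 and 1000 Hz
    detune = 0.5 + x * 7.5     # beat rate capped near 8 Hz so the pair
                               # fuses into one beating timbre
    audio = (base, base + detune)
    draw = (base / 100.0, (base + detune) / 100.0)  # kept under 20 Hz
    return audio, draw

audio, draw = map_control(0.5)
print(audio)  # (550.0, 554.25) -> heard as one tone beating at ~4 Hz
print(draw)   # (5.5, 5.5425)   -> slow enough to draw a stable figure
```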

As mentioned above, the sonic and visual components are based on the phenomenon of phase cancellation that occurs between two slightly out-of-tune sine waves.  There are three sound generators in this instrument, each of which controls the visuals via shared data input or mapped envelope followers.  This is my performance patch:

Performance_Patch

Here is the patch that is processing the audio:

SineWave_BeatExploit

You can see the PS3 controller patch that I am using at top of the screen.

The only extra sound effect used in this version (other than the exploitation of phase cancellation in sine waves triggering low-pass, high-resonance filters in a rhythmic way) is reverb.  Specifically, I am using a set of Max/MSP externals called HISSTools, which allow me to take incoming sound and convolve it through multiple reverbs.  Although many impulses are loaded, the main impulse response I am using is one that I personally recorded at the University of Edinburgh pool in the fall.
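What the convolution stage does can be sketched in a few lines, with an exponentially decaying noise tail standing in for the pool impulse response:

```python
# Convolution reverb: dry signal convolved with an impulse response.
import numpy as np
from scipy.signal import fftconvolve

SR = 44100
dry = np.random.uniform(-1, 1, SR)        # one second of input

t = np.arange(int(SR * 2.0)) / SR         # two-second synthetic IR
ir = np.random.uniform(-1, 1, t.size) * np.exp(-3.0 * t)

wet = fftconvolve(dry, ir)                # the reverberated signal
wet /= np.max(np.abs(wet))                # normalize to full scale
```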

Reverb_Unit

Final Notes

By using controllers, the Lissajous Beat Organ takes on an exciting life of its own!  Even though I sometimes forget my own controls, the way the PS3 controller is mapped allows new sounds to be created from a wide array of gestures.  The sonic and visual textures that can be created in real time would be nearly impossible to achieve with only a mouse or track pad.

For future versions, controls will be mapped without using global send and receive objects.  In this way, I will have more freedom to change my sounds and visuals throughout the piece.  In addition, as rehearsals take place, the sonic and visual content of this instrument will be modified to meet the needs of the group as a whole.  Perhaps the gametrak will control a sonic element, or audio generated by another performer will be routed through and modified in my instrument.  I look forward to seeing how this instrument will help create a dialogue and, eventually, a performance with the other members of the Audio-Visual Ensemble in the coming weeks!

Analog Study #1- Making Progress

It was hard to sleep last night because I was so excited by the progress Marco and I made in using thrown-out TVs and speakers to bring art to a world that otherwise views them as trash.  We had no problem hooking up a very old TV, but with the newer(ish) TVs (no idea of their date; could not find a manual anywhere!) we had to do a significant amount of research to figure out how to hack a SCART cable.  21 pins!  Way more than MIDI….

After a couple of hours of shooting loud audio signals into various pins, we discovered that pins 20 and 21 worked for our two digital TVs.  We were able to get 2D patterns to occur by touching one audio signal to the ground and the other to the pin.

This morning, we tried to connect three TVs together using a patch I hacked together from the last chapter of “Electronic Music and Sound” book one (book 2 just released!!).  This patch basically allows you to cycle through channels using a waveform like a phasor.  In this way, sounds can be circled around large spaces in a systematic yet improvisational way instead of by physically changing tracks in a DAW.
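A rough sketch of that phasor-cycling idea, with an assumed channel count and rate:

```python
# Quantize a rising 0-to-1 ramp into N steps; each step picks which output
# (TV) receives the signal, so the sound circles the room automatically.
import numpy as np

SR = 44100
N_CHANNELS = 3   # three TVs

def cycle(signal, rate_hz=0.5):
    n = signal.size
    phasor = (np.arange(n) * rate_hz / SR) % 1.0   # ramp from 0 to 1
    which = (phasor * N_CHANNELS).astype(int)      # active channel index
    out = np.zeros((N_CHANNELS, n))
    out[which, np.arange(n)] = signal              # route sample by sample
    return out

tone = np.sin(2 * np.pi * 220 * np.arange(SR * 4) / SR)
routed = cycle(tone)   # the tone visits all three TVs once every two seconds
```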

Unfortunately, after hacking apart our only SCART cable (we chopped it in half and stripped it to expose the wires), we could not get half of it to work.  There is still a lot about grounding that I do not understand; perhaps SCART cables only work in one direction?  Marco managed to make his half of the SCART cable work, but we will buy a fresh one to use for Submission 1.

Demo Video

It is important to note that the audio that goes into this setup can be anything.  I’m hoping that audio from another submission might be imported into this setup to bring more unity to the Audio-Visual Ensemble.  For now, we used a sine wave at various frequencies.  Marco and I discussed making friends with a welder and possibly building a structure that can be interacted with and potentially incorporated into Jessamine’s color/shadow-tracing set-up.  Anybody know a good (and cheap) welder?

That’s it for now.  We might have gotten further, but Marco had to scramble to get to Italy straight from school.  Until Thursday!

See my insane cable hack attached:

SCART hack

SCART hack


Beats and Lissajous Figures

Had a breakthrough last night!  I have been trying to figure out how to create the analog audio-visual systems explored by F.C. Judd (the UK audio-visual artist) in Chromasonics.

Reading a book Prof Michael Edwards lent me last week, I figured out that many of the shapes are Lissajous figures that move through space.

This links up with the audio-visual synthesizer I have been building that explores beats, because the most beautiful (in my opinion) Lissajous figures are 2D/3D representations of beats!  The figures can be painted using the Scriabin color palette or any other palette that represents an artist’s color vision.
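For a quick picture of the link, two sines a fraction of a hertz apart, drawn against each other, sweep through the whole Lissajous family as their phase drifts:

```python
# Plot the Lissajous figure traced by two slightly detuned sines.
import numpy as np
import matplotlib.pyplot as plt

f1, f2 = 5.0, 5.1                 # two "drawing" frequencies, 0.1 Hz apart
t = np.linspace(0, 10, 20000)     # ten seconds: one full phase drift

x = np.sin(2 * np.pi * f1 * t)
y = np.sin(2 * np.pi * f2 * t)

plt.plot(x, y, linewidth=0.3)
plt.axis("equal")
plt.title("Lissajous figure from two sines 0.1 Hz apart")
plt.show()
```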

Max patches soon to come!  I’ll be rendering everything in OpenGL so that we can potentially incorporate these structures into a group visualization.


From the Conference this weekend

Hello World!  Here are all the links to topics and VJ software that were used at the conference.  I learned a lot and am ready to jump into the seemingly massive rabbit hole that is audio-visual art!

Free stuff first:

For connecting computers people were using this:

syphon.v002.info/ 

For mapping software, this guy presented his own package and seemed open to communication and troubleshooting:

robotized.arisona.ch/mpm/

This is a database of plugins for various software packages that do visual effects.  Some are free, and I think they can work in Max/PD:

community.freeframe.org/plugindatabase

I bought this guy’s CD and it is AMAZING!  Some of the best chiptunes I’ve ever heard.  He gave a very comprehensive lecture about using PD with visuals….BUT….you can do everything in Jitter too…just putting that out there….I don’t see any of his visual stuff on here, but he said he only started doing visuals two months ago!

videogameaudio.com/main.htm#patches

There was a more theatrical performance using iPads that was interesting.  A nice man named Nuno (email: sf29@gre.ac.uk) told me that he used the software linked below, plus the EpocCam multi-cam software to link the iPads to it.  EpocCam turns iPhones/iPads into wireless webcams!  Nuno is based in Greenwich but seemed happy to collaborate.

troikatronix.com/

itunes.apple.com/gb/app/epoccam-wireless-computer/id449133483?mt=8

This one is expensive, but it seems to do literally everything.  The Light Surgeons use it, so it is legit:

vidvox.net/

This looks like a comprehensive list of everything out there!  I see Resolume in there too, Timø:

www.pearltrees.com/#/N-u=1_513396&N-play=0&N-f=1_4411192&N-s=1_4411192&N-p=35814160&N-fa=4227476

As far as learning the software goes, people spoke super highly of this website for learning literally any software.  It was interesting how many people taught themselves these seemingly complex programs.  This website might be a key….

www.lynda.com/

Here are links to performers that we saw this weekend for more ideas of what this software can do!

Biology and data influenced:

www.wiretrace.net/

Raw lights with sound (and a friend of Martin’s [could be useful…]):

heavyside.net/

The Light Surgeons’ use of pre-made video with semi-improvised music, linked with After Effects:

www.thelightsurgeons.co.uk/

There was a heated discussion about opera vs. live cinema vs. multimedia musical theatre….It looks like live cinema is more open to collaboration!  Follow them on FB; there is a lot of info about things happening in London.  Non-profit = cheap good times!

www.facebook.com/Livecinemafoundation

There was a weird contrast among the visual artists I spoke to between what has happened and what is happening now.  One guy talked about “9 Evenings,” which happened in NYC in 1966, as the pinnacle of audio-visual connections in music.  It’s crazy they were doing this in 1966!

www.9evenings.org/

And this is a project retouching things from the 70s.  The guy I spoke to is based in Dundee.

In terms of data mapping, this guy Jer Thorp is basically a professional data-mapper for the New York Times.  He uses all their information to create art, so pretty cool.

www.brainpickings.org/index.php/2012/03/01/jer-thorp-tedxvancouver/

For shadows and effects like that, these people seem to be top notch:

www.youtube.com/watch?v=ViZBwbzOcC8

We could do live visuals like this…..real analog:

www.youtube.com/watch?v=TgjDAIqu6-8

Oh, and this is the piece that Marco just showed me, but it’s not the studio version.  Will you post the studio version for me? Cheers!

www.youtube.com/watch?v=N5Qm86PNyCo

Ok that’s enough for now!  Cheers!