Audio Visual Instrument/Control System – Colour Tracking

Concept

At the starting point, my aim was to build a digital instrument that could tie the screen and the space of the stage together, so that the audience would experience an audio-visual theatre performance rather than a real-time music video. To avoid the awkward situation in which performers simply stand on a dark stage operating computers, with no visible interaction with the content they show to the audience, I attempted to develop a system that could smoothly embed the performer into the stage and make him/her an inseparable part of the whole performance.

Inspired by the Electronic Theatre performance “Oedipus – The Code Breaker” at the Real Time Visuals Conference on 24th January 2014, I realized that one way to connect the performer and the screen was to record the actions of the performer on stage and add real-time feedback to the video on the screen. This led me to the idea of making a live video tracking system. The system captures, in real time, the motion of objects controlled by the performer, or even of the performer himself/herself, and feeds the data into programs that generate sound and graphics as feedback. In this way the system can also be considered an instrument.

Method

From the Jitter tutorial documents in Max/MSP, I found that one way to track motion is to follow the trace of a colour. The jit.findbounds object provides the function of finding the position of visual elements within a specific colour range in a video, which can also be the real-time video from a camera. It then outputs a set of data that can be used to manipulate or generate audio and video for output.
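The Max patch itself is not text, but as a rough illustration of what jit.findbounds does, here is a small Python/NumPy sketch that finds the bounding box of the pixels falling inside a given colour range in one frame (the function name and colour-range convention are my own assumptions, not part of the patch):

```python
import numpy as np

def find_bounds(frame, low, high):
    """Rough analogue of jit.findbounds: return the top-left and
    bottom-right corners of the region whose colour values fall
    inside [low, high], or None if no pixel matches.

    frame     : (height, width, 3) uint8 array, e.g. one video frame
    low, high : length-3 sequences giving the colour range per channel
    """
    low = np.asarray(low, dtype=np.uint8)
    high = np.asarray(high, dtype=np.uint8)

    # Boolean mask of pixels inside the colour range on all channels.
    mask = np.all((frame >= low) & (frame <= high), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None

    top_left = (int(xs.min()), int(ys.min()))
    bottom_right = (int(xs.max()), int(ys.max()))
    return top_left, bottom_right
```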

Here is a screenshot of the whole Max patch:

[Screenshot of the full Max patch]

This patch consists of three sections: the colour tracking part, the graphics-generating part and the sound-generating part. The same set of data is sent to both the audio and the video sections at the same time to manipulate the parameters of the different effects.

The colour tracking section can itself be divided into three parts: video input, colour picker and position tracker. The video input accepts data from different sources such as cameras, webcams or video files. The colour picker can be set either by clicking directly on the colour pad or with the Suckah object masked over the video preview window. The position tracker finds the top-left and bottom-right points of that colour range and outputs them. In this patch I use some mathematical expressions to transform these data into the centre of the tracked colour and its size, so that the position can be obtained more precisely.
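The exact expressions are not listed here, but a plausible sketch of the transformation, assuming the two corner points coming from the tracker, is:

```python
def centre_and_size(top_left, bottom_right):
    """Turn the two corner points into a centre position and a size,
    roughly what the mathematical expressions in the patch compute."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    size = (x2 - x1, y2 - y1)
    return centre, size
```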

The audio part of this patch was made by Russell Snyder. He built an audio-adjusting section and a sound-generating patch based on the concept of a river. Using these functions, the colour position data are used to adjust the panning and volume of the sound and, at the same time, are mapped into the sound generation.
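The details of Russell's audio section are not documented here, so the following is only a hypothetical sketch of how a tracked centre position might drive panning and volume; the names and scaling are my assumptions rather than the actual mapping in the patch:

```python
def position_to_pan_volume(centre, frame_width, frame_height):
    """Hypothetical mapping: horizontal position controls stereo panning
    (-1 = left, +1 = right) and vertical position controls volume (0-1).
    The real scaling used in the patch may differ."""
    x, y = centre
    pan = 2.0 * (x / frame_width) - 1.0   # 0..width  ->  -1..+1
    volume = 1.0 - (y / frame_height)     # top of the frame = loudest
    return pan, volume
```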

Previously I built a section that drew rectangles using the colour position data, but it did not seem very coherent with the sound effects, so I looked for other graphics. The Jitter Recipes offer some examples of generating stunning visual effects, and I adapted one of them, Party Light by Andrew Benson, in this patch to make the demo video.

Execution

Attached here is a demo video of experiments with audio-visual effects using this colour tracking instrument:

Colour Tracking Experiments – AVE 2014 – submission 1 – jz

We did three takes to explore different possibilities under different audio and video settings. There is still room to discover new possibilities with this patch.

Conclusion

At this stage we have developed an effective motion tracking system by tracking the movement of colour. It can either be played on its own as an independent instrument or be combined with the work of other group mates to develop possibilities for the final performance. With larger coloured objects, performers would be able to perform on the stage while being tracked by the system at the same time. In this way it is possible to combine gestures and digital audio-visual effects into a coherent performance.

However, there are still some things to be improved:

1. Sometimes the position data still flickers, which can cause some noise; I should improve the patch to smooth the changes in the data (see the sketch after this list).

2. The brightness level strongly influences the performance of the colour tracking system; it seems to perform much better in a brighter environment. As the stage for the final performance will be quite dark, I have to find a way to improve the video recognition in poor lighting.

3. Up to now the diversity of the graphics has been rather limited, so more options for interaction will be added later.
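For the flickering position data mentioned in point 1, one possible fix is a simple one-pole low-pass filter (an exponential moving average). This is only a sketch in Python; in Max a similar effect could be achieved with objects such as slide:

```python
class Smoother:
    """Exponential moving average to smooth flickering position values."""
    def __init__(self, factor=0.2):
        self.factor = factor  # 0 < factor <= 1; smaller = smoother but slower
        self.value = None

    def update(self, new_value):
        # First sample initialises the filter; later samples are blended in.
        if self.value is None:
            self.value = float(new_value)
        else:
            self.value += self.factor * (float(new_value) - self.value)
        return self.value
```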
