
Bass guitar DMX instrument
My goal was to construct a highly responsive, expressively dynamic and diverse system that can be improvised in real time: a “traditional” instrument (a standard bass guitar) plays electronic sounds that trigger and modulate specific DMX light movement, directly connecting sound and vision (light).

Timø's Instrument
Submission one and my blog describe in lengthy detail my initial progress, observations, trials and tribulations.
dmsp.digital.eca.ed.ac.uk/blog/ave2014/category/blogs/timosblog/
Post-Submission 1 Progress
Besides handling the bulk of the administration work, organizing and booking equipment, coordinating with the venue, acting as liaison between DMSP groups, etc., I worked on refining my audio visual instrument and rehearsing.

Lamps
To add diversity and flexibility to the visual aspects of my instrument, I devised a method to incorporate standard household lighting.  I purchased a 4-channel DMX dimmer pack, rewired five lamps to use Euro-plug IEC adaptors, and added an additional par-can stage lamp.

Mapping Light
Although the lamps and lights were directly controlled by my bass guitar’s audio input signal, I needed a way to map state changes over time.  By automating envelope parameters, which could be triggered automatically or manually from my setup, I achieved finer control.  This in turn helped keep the lighting changes in our performance more varied.
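To make the mechanism concrete, here is a minimal Python sketch of a triggered envelope driving one DMX channel. It is a sketch under stated assumptions, not my actual setup: the send_dmx helper is a hypothetical stand-in for whatever DMX interface is in use.

    import time

    def send_dmx(channel: int, value: int) -> None:
        """Stub: replace with the real call for your DMX interface."""
        print(f"DMX ch {channel} -> {value}")

    def ramp_envelope(channel: int, start: int, end: int,
                      seconds: float, steps: int = 50) -> None:
        """Interpolate one DMX channel (0..255) from start to end over time."""
        for i in range(steps + 1):
            value = int(start + (end - start) * i / steps)
            send_dmx(channel, value)
            time.sleep(seconds / steps)

    # Triggered manually, or automatically when the bass crosses a threshold:
    # ramp_envelope(channel=1, start=0, end=255, seconds=4.0)  # slow fade up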

Mapped Envelopes

Refined DMX control rack

Mapping Sound
Sonically, my instrument needed to be flexible and able to produce a wide range of content over the course of the performance: sub bass, low bass, glitched rhythmic passages, percussive events, angelic synth pads, thunder, abstract mid-range noise, and so on.

Splitting the audio input across three tracks, each corresponding to a different frequency-banded instrument voicing, I built a series of multipurpose effect racks and mapped them to several hardware controllers.
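As a rough illustration of that three-way split, here is a minimal Python sketch assuming the scipy package; the crossover frequencies are illustrative assumptions, and the actual racks were effect chains in my audio software rather than code.

    import numpy as np
    from scipy.signal import butter, sosfilt

    SR = 44100  # sample rate (assumed)

    # Illustrative crossover points, not the ones used in the performance.
    sub = butter(4, 120, btype="low", fs=SR, output="sos")
    mid = butter(4, [120, 2000], btype="band", fs=SR, output="sos")
    high = butter(4, 2000, btype="high", fs=SR, output="sos")

    def split(block: np.ndarray):
        """Return three frequency-banded copies of one input block."""
        return sosfilt(sub, block), sosfilt(mid, block), sosfilt(high, block)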

Audio Effect Rack

Additionally, since I was able to convert my audio input to MIDI data, I built a multilayer instrument rack that allowed me to select and switch between combinations of virtual instruments and modulate their parameters in real time.
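The audio-to-MIDI step could be sketched along these lines, assuming the aubio and mido Python packages; in my setup the conversion happened inside the audio software, so this is only an approximation of the idea.

    import aubio
    import mido
    import numpy as np

    SR, HOP = 44100, 512
    detector = aubio.pitch("yin", 2048, HOP, SR)
    detector.set_unit("midi")       # report pitch as a MIDI note number
    detector.set_tolerance(0.8)

    out = mido.open_output()        # default MIDI output port
    last_note = None

    def process(block: np.ndarray) -> None:
        """Turn one hop of bass-guitar audio into note-on/off messages."""
        global last_note
        note = int(round(detector(block.astype(np.float32))[0]))
        if note != last_note:
            if last_note:
                out.send(mido.Message("note_off", note=last_note))
            if note > 0:
                out.send(mido.Message("note_on", note=note, velocity=100))
            last_note = note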

Instrument Rack

Roadmap
After the group devised a theme, we divided the performance into sections.   This was helpful as I was able to automate and interpolate between states by launching scene changes flexibly as needed.

Scene Selection

Theoretical Context

Please refer to my essay on spectromorphology in submission 3 for theoretical context and reference to existing scholarship in the field.

Allegoric Irony of the Documentation
It is worth mentioning that, in my opinion, the video footage of the performance failed to capture its full scope. It is ironic and disappointing that the footage is only a ‘shadow’ of the live experience.  Although we used three separate cameras, the stage, performers, lighting and live video exceeded the boundaries of what was recorded.

We were advised to have another group document the performance.  I met with them days ahead of time to discuss how and what needed to be captured, but the results were still, for the most part, unsatisfying.  Although I am grateful for the help, had we known what the footage would look like, we could have made slight adjustments to compensate.  With more carefully chosen camera angles, a readjustment of props and a slight repositioning of the performers, the documentation could have captured a much better perspective and more completely conveyed what we were trying to achieve.

Video montage

In an audiovisual ensemble it is always hard to decide whether the sound designers or the visual designers should take the next decisive audiovisual step. There are two options: the sound designers write the music first, having seen the relevant section of unedited visual content, or they compose once the visual designers have already edited it. Our AVE group decided to present our work as a live performance, which requires both sound and visual designers to manipulate their instruments in real time rather than stand aside while an already-integrated work is projected. The first option therefore seemed more feasible for us: the visual designers develop the montage from the sound designers’ existing score.

This procedure involved the sound designers (Russell and Timo) drawing up the ‘architectural’ plan, which would then determine how the audiovisual montage by the visual designers (Jessamine and I) would operate. Being responsible for the video part of our group, I had to create a video montage to match Russell and Timo’s sound cues as well as the overall theme. Here are the main steps of this procedure.

1. I made a series of video clips matching the theme Cave, using software such as After Effects and Premiere.

video clips_01 Trapped in the cave  vimeo.com/92611381

video clips_02 Go out of the cave  vimeo.com/92611782

video clips_03 Outside of the cave  vimeo.com/92612143

video clips_04 Outside of the cave  vimeo.com/92612228

video clips_05 Outside of the cave  vimeo.com/92612279

video clips_06 Outside of the cave  vimeo.com/92612428

video clips_07 Go back to the cave  vimeo.com/92612493

video clips_08 Struggle in the cave  vimeo.com/92612641

video clips_09 Struggle in the cave  vimeo.com/92612759

video clips_10 The ending  vimeo.com/92612824

2. I had to memorise and be very familiar with all the video clips, which could be manipulated and transformed at any point. With a sophisticated Max patch (thanks to Martin) and an external controller I could modify many parameters of the videos in real time, such as frame mangling, brightness, contrast, blur, colour and scale. Moreover, all the parameters could be set to be audio-controlled, which is of crucial importance in a live audiovisual ensemble performance (a sketch of this kind of audio-to-parameter mapping follows this list).

Control the video using Max

Max patch interface

3. I paid attention to details during rehearsals and listened again and again to the recorded sound of the rehearsal video, until I could imagine a series of images corresponding with the sound, and then realised the effect I had in mind at the next rehearsal. Through this process, matching the sound cues with the corresponding sections of visual representation produced a reasonably satisfying harmony.
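To make the audio-controlled parameters of step 2 concrete, here is a minimal Python sketch of one such mapping (input level driving brightness), assuming the opencv-python and sounddevice packages and a placeholder clip name; the real mappings ran inside the Max patch.

    import cv2
    import numpy as np
    import sounddevice as sd

    level = 0.0  # smoothed input level, updated by the audio callback

    def audio_callback(indata, frames, time, status):
        global level
        rms = float(np.sqrt(np.mean(indata ** 2)))
        level = 0.9 * level + 0.1 * rms   # smooth to avoid flicker

    cap = cv2.VideoCapture("clip_01.mov")  # placeholder clip name
    with sd.InputStream(channels=1, callback=audio_callback):
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gain = 1.0 + 4.0 * min(level * 10, 1.0)  # level -> brightness
            out = cv2.convertScaleAbs(frame, alpha=gain, beta=0)
            cv2.imshow("audio-controlled video", out)
            if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
                break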

Plan for our documentary

The goal of the documentary is to briefly introduce our project and explain how we execute it. The documentary will consist of the following parts:

–  The concept of an audiovisual ensemble (what we are trying to demonstrate in this specific performance);

–  The narrative line (the allegory of the cave) and audiovisual metaphors;

–  The role of each group mate and an introduction to their “instruments” (from both technical and aesthetic aspects);

–  Footage of rehearsals (to show some ensemble effects) and the experience of the whole process;

–  Audience feedback (if possible).

 

And the contents could be organized as:

 

1. Concept:

Cross-cut clips from rehearsals and the performance showing some audiovisual effects

Interview 1: introducing the project concept

Full view of the group performing together (interview continues as voice-over), cross-cut with discussion and setup.

2. Narrative:

Specific parts of the performance most directly related to the story of the cave

Interview 2: the allegory of the cave (keep it brief, the version we are using)

Picture (or simple animation if possible) showing the cave and prisoners (voice-over from interview)

3. Metaphor:

Interview 3: the visual metaphor (shadow as illusion and video as the realistic world);

With details of the visual aspect (interview as voice-over)

Interview 4: the audio metaphor

With details of the instruments being played as well

4. Instruments:

(Possibly cross cut with the metaphor part)

Everyone playing his/her role in the rehearsal/performance, details of instruments (close-up shots), interaction with others, setup…

With interviews about: your role, your instrument, your feelings…

(This section is for each person in the group. Although it seems quite short in this draft, it is actually the main substance of the whole documentary, as it shows how and why we are doing these things.)

5. Feedback:

Shots of a prepared stage

Effects of the final performance

Feedback from the audience

Feedback about the final performance from the performers

6. Ending

 

This is only a draft for the documentary. I hope it will help us prepare and collect relevant materials. If you are confused about any of the points or have suggestions for improving it, please do not hesitate to tell me. :)

Here are also some questions for you to prepare for. Please try to keep your answers brief, because the documentary will only be around 5 minutes long.

For everyone:

  1. What are you doing in the performance, and how do your outcomes relate to the metaphors?
  2. Introduce your instrument.
  3. How do you feel about doing this project?
  4. Which part of the performance do you like most?
  5. Whatever else you would like to talk about…

Besides these, here are some questions that should be addressed in the documentary but not necessarily discussed by everyone. I’ve attached my suggestions about who might talk about each one. If you feel uncomfortable with an assignment, please just let me know and we can try another arrangement.

Introduction to the concept of audiovisual ensemble (Russell?)

Allegory of the cave (Timo?)

How to achieve ensemble automatically (Marco?)

How to achieve ensemble manually (Shuman?)

Draft of “Score” for Performance

Hello team!

Here is the tentative plan for our performance.  Shuman and Timø specifically expressed an interest in having a solid plan so they can ‘dial in’ and/or make presets for different sections.  Let me know if there are any glaring mistakes, and I will fix them.  Otherwise, read it over and come in with some thoughts and/or concerns about how we can improve our performance for the start of next rehearsal.

Gameplan_March18

Thank you!

Russell

New System With Passage Readings

After a breakdown with my PS3 controller (the £2 car-boot-sale controller stopped responding! 🙁 ), I’m back in action with a video game controller that has a gyroscope in it!  Now I really don’t need a Gametrak to get gestural control.

As I mentioned before, I’m going to incorporate three different audio types with corresponding visuals.

The first will be what you saw in submission 1, with slight modifications for better spatialization and more rhythmic possibilities.

The second will be the audio patch that I made for Jessamine’s project, but I will be controlling it with the PS3 controller.  Minimal black and white visuals (similar to Marco’s TVs) for this one, as I am hoping to respond to Jessamine and Shuman’s visuals in real time and do not want to detract from them.  It will incorporate some of Timø’s sound files and Marco’s IRs.  This one will have a little of everybody!

The third will be some synthesized speech readings of Plato’s “Allegory of the Cave”, as that is our decided theme.  As discussed in the group, it would be interesting to go from digital to analog, or analog to digital.  So I’m hoping to potentially record myself reading passages and transition between the two.  I don’t know yet if I want the speech to be recognizable or just noise.  I think I will have solid colors for this one, leading to blinding white as we leave the cave.

Here is an example of some of the digital speech.  It’s using aka.speech, so it sounds EXTRA digital (which I am going for).

13_BIG2.aif      

14_BIG3.aif
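For reference, aka.speech wraps the macOS speech synthesizer, which the system `say` command also exposes, so a comparable file can be sketched in Python; the voice and output name below are illustrative assumptions, not the settings used for the clips above.

    import subprocess

    PASSAGE = "Behold! human beings living in an underground den..."

    # Render the passage to an AIFF file with the macOS `say` command.
    subprocess.run(
        ["say", "-v", "Fred", "-o", "allegory_excerpt.aiff", PASSAGE],
        check=True,
    )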

What are your thoughts?

 

 

an improvised solo

After having refined (redesigned almost all the control messages of) my instrument, I started to explore possible gestures and short forms. I think I achieved a good variety of sounds and found a few interesting gestures, so I recorded a demo of about 8 minutes. Great inspiration came, again, from Di Scipio. It’s a one-take improvisation, but I think there’s some “storytelling” in it. Having built this instrument from scratch, being able to paint a beginning and an end is a big result for me. The left channel is much hotter than the right one (I love asymmetry).

stretching sound

This time I’ve been using one of the stretch sensors. I like its feel a lot, especially because it gives absolutely clear feedback, allowing really fine control. This is quite visible around 1:05 in the video. There are still several issues, though. First and foremost, I’d like to use its own sounds and noises as the processing material, but I can’t do that using an Arduino (as I’m doing now) because its 5V power is so noisy that it completely drowns out the sensor. I read that the 3.3V source is less noisy, but I’m running out of time to continue experimenting (sadly). For the DMSP, I might just use a few sensors to control the processing of the sound(s) coming from a contact mic placed on a “sculpture” (and that is another issue). I like the sounds in this new video, but they’re coming from the laptop mic and, to mimic a contact mic, I had to tap and scratch its surface. I did this both with my “wired” arm, achieving perfect sync between the stretching and the impacts (though unfortunately that’s out of the camera’s field of view), and with my other hand, but it turned out I was not syncing them; I’m not sure why (maybe it felt insipid while playing). Finally, I’m not entirely sure where and how I can place the sensors on my body, but I’ll figure that out soon. Good night, and good luck.
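For anyone curious, the reading-and-smoothing side can be sketched as follows, assuming the pyserial package, an Arduino sketch that prints one analogRead() value per line, and a placeholder port name:

    import serial

    PORT = "/dev/tty.usbmodem1411"  # placeholder port name
    arduino = serial.Serial(PORT, 9600, timeout=1)

    smoothed = 0.0
    ALPHA = 0.15  # lower = heavier smoothing

    while True:
        line = arduino.readline().strip()
        if not line:
            continue
        try:
            raw = int(line)  # 0..1023 from analogRead
        except ValueError:
            continue
        smoothed += ALPHA * (raw - smoothed)  # one-pole low-pass filter
        stretch = smoothed / 1023.0           # normalised 0..1 control value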

something else

This short video was born after experimenting with sound generation processes. The only source of all the sounds is the laptop microphone, as I was trying to expand the palette of one of the systems I’ve been using for my A/V instrument. In particular, I focused on the interaction between a modal synthesis unit and a granulator I recently built. The latter does not yet seem very flexible and probably needs more work, but since it is built entirely in the signal domain it has an interesting sound, at times almost analog. I experimented with the fundamental pitches of the modal synthesis, but I’ll get more variety as soon as I start dynamically modifying the mutual relationships between the filters, that is to say, modifying the timbre. As I said, this is just one portion of the system I’m working on: this video doesn’t feature the actual contact mic, the electromagnetic feedback from the television screen, or the background resonances/feedback running in Pure Data. Nevertheless, it shows some of the work done on the sounds that I want to be the centerpiece of my “thing”.
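In spirit, the modal unit is a bank of resonant two-pole filters excited by an input signal. Here is a minimal Python sketch, assuming numpy/scipy; the mode frequencies and decays are illustrative, not the ones in the video, and the actual unit runs inside my Pure Data system.

    import numpy as np
    from scipy.signal import lfilter

    SR = 44100
    MODES = [(180, 0.9995), (273, 0.9992), (411, 0.999), (620, 0.998)]  # (Hz, r)

    def modal_bank(excitation: np.ndarray) -> np.ndarray:
        """Sum of resonators: y[n] = x[n] + 2r cos(w) y[n-1] - r^2 y[n-2]."""
        out = np.zeros_like(excitation)
        for freq, r in MODES:
            w = 2 * np.pi * freq / SR
            out += lfilter([1.0], [1.0, -2 * r * np.cos(w), r * r], excitation)
        return out / len(MODES)

    # Excite the bank with a decaying noise burst (a stand-in for mic input).
    burst = np.random.randn(SR) * np.exp(-np.linspace(0, 40, SR))
    tone = modal_bank(burst)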

I thought the general mood matched this phone-resolution video, which I shot some time ago, pretty well.

Audio Visual Instrument/Control System – Colour Tracking

Concept

At the starting point, my aim was to build a digital instrument that could tie the screen and the stage space together, so that the audience would experience an audiovisual theatre performance rather than a real-time music video. To avoid the awkward situation in which performers simply stand on a dark stage operating computers, with no visible interaction with the content they show to the audience, I attempted to develop a system that could smoothly embed the performer in the stage and make him/her an inseparable part of the whole performance. Inspired by the electronic theatre performance “Oedipus – The Code Breaker” at the Real Time Visuals Conference on 24th January 2014, I realized that one way to connect the performer and the screen was to record the actions of the performer on stage and add real-time feedback into the video on the screen. This led me to the idea of making a live video tracking system. The system captures, in real time, the motion of objects controlled by the performer, or even of the performer himself/herself, and feeds that data into programs generating sound and graphics as feedback. In this way the system can also be considered an instrument.

Method

From the Jitter tutorial documents in Max/MSP, I found that one way to do motion tracking is to follow a trace of colour. The jit.findbounds object finds the position of visual elements within a specific colour range in a video, which can also be the real-time video from a camera. It then outputs a set of data that can be used to manipulate or generate audio and video for output.

Here is a screenshot of the whole Max patch:

Screen Shot 2014-02-27 at 9.40.40 PM

This patch consists of three sections: the colour tracking part, the graphic generating part and the sound generating part. The same set of data is sent into both the audio and video sections at the same time to manipulate the parameters for different effects.

The colour tracking section can itself be divided into three parts: video input, colour picker and position tracker. The video input accepts data from different sources such as cameras, webcams or video files. The colour picker allows the colour to be set either by clicking directly on a colour pad or via the Suckah object masked over the video preview window. The position tracker finds the top-left and bottom-right points of that colour range and outputs them. In this patch I use some mathematical expressions to transform this data into the centre and the size of the coloured region, so that we can get the position more precisely.
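A rough Python/OpenCV equivalent of this tracking stage, for illustration only (the actual patch is built in Max/Jitter, and the HSV colour range below is an arbitrary assumption):

    import cv2
    import numpy as np

    LOWER = np.array([0, 120, 120])   # example lower bound (HSV)
    UPPER = np.array([10, 255, 255])  # example upper bound (HSV)

    cap = cv2.VideoCapture(0)         # real-time feed from the default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        ys, xs = np.nonzero(mask)
        if xs.size:
            # Top-left and bottom-right bounds, as jit.findbounds outputs...
            x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
            # ...transformed into centre and size, as in the patch.
            cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
            w, h = x1 - x0, y1 - y0
        cv2.imshow("tracked colour mask", mask)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break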

The audio part of this patch was made by Russell Snyder. He built an audio-adjusting section and a sound-generating patch based on the concept of a river. Using these functions, the colour position data is used to adjust the panning and volume of the sound, and is at the same time mapped into sound generation.
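One plausible form of that position-to-sound mapping, written out as plain functions (the actual mapping lives in Russell's patch and may differ):

    def to_pan(cx: float, frame_width: float) -> float:
        """Map horizontal centre to a pan position in -1 (left)..+1 (right)."""
        return 2.0 * (cx / frame_width) - 1.0

    def to_volume(cy: float, frame_height: float) -> float:
        """Map vertical centre to a volume in 0..1 (top of frame = loudest)."""
        return 1.0 - (cy / frame_height)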

Previously I built a section that drew rectangles using the colour position data, but it did not seem coherent with the sound effects, so I looked for other graphics. The Jitter Recipes show some examples of generating stunning visual effects, and I adapted one of them, Party Light by Andrew Benson, into this patch to make the demo video.

Execution

Attached here is a demo video of experiments with audiovisual effects using this colour tracking instrument:

Colour Tracking Experiments – AVE 2014 – submission 1 – jz

We did three takes exploring different possibilities under different audio and video settings. There is still room to discover new possibilities in this patch.

Conclusion

At this stage we did develop an effective motion tracking system based on tracking the movement of colour. It can either be played on its own as an independent instrument or combined with the work of other group mates to open up possibilities for the final performance. Using larger coloured objects, performers would be able to perform on stage while being tracked by the system at the same time. By this means it is possible to combine gestures and digital audiovisual effects into a coherent performance.

However, there are still some things to be improved:

1. Sometimes the position data still flickers, which can cause noise; I should improve the patch to smooth the changes in the data (one common fix is sketched after this list).

2. The brightness level strongly influences the performance of the colour tracking system; it performs much better in brighter environments. As the stage for the final performance will be quite dark, I have to find a way to improve the video recognition in low light.

3. Up to now the diversity of the graphics has been rather limited, so more options for interaction will be added later.
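As a sketch of the smoothing fix mentioned in point 1, here is a small Python class mimicking the behaviour of Max's [slide] object, which eases toward each new reading instead of jumping:

    class Slide:
        """Smooth a value stream: y[n] = y[n-1] + (x[n] - y[n-1]) / slide."""

        def __init__(self, slide: float = 5.0):
            self.slide = slide  # higher = smoother, slower response
            self.value = 0.0

        def __call__(self, new: float) -> float:
            self.value += (new - self.value) / self.slide
            return self.value

    smooth_x, smooth_y = Slide(), Slide()
    # cx, cy = smooth_x(raw_cx), smooth_y(raw_cy)   # once per video frame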

From Audio/Vision to Transmedial Experience

Many different ways of approaching the idea of an Audio/Visual Ensemble are possible, and it might be argued that vision and music were originally united in performative and ritual practices: for instance, archaeological studies of prehistoric art deal with the same time scale in dating the birth of both painting and music. Moreover, many ritual practices around the globe involve this unity: a big fire, a circle of people playing musical instruments, other people dancing, clapping hands, singing along in colourful dresses, casting shadows around: all of that is food for eyes and ears. Actually, for nose and skin too (think of the fire’s heat, of contact with others’ skin and with the ground…). All of that is not so different from “occidental” disco clubs.

The way our ensemble approaches audio/visual projects comes directly from the idea of preserving this original unity: we do not aim at recomposing it a posteriori, but rather at making it the starting point from which to grow different fronds, all belonging to the audio/visual realm. We therefore understand the “/” sign as a continuum consisting of a range of experiences (including the strictly auditive and visual ones) that can be freely navigated. Free navigation, however, may well lead us off the map: that is to say, other expressive forms may at some point be included in our projects, for example spoken words or the performative arts. To be ready to greet them, we should start to think of our group as a Transmedial Ensemble.

As a matter of fact, some of our projects already include performative elements. Timo’s instrument, for example, consists of a system whose output massively involves light and sound; at the same time, though, it can only be appreciated by considering its performance-driven nature. The source of it all is Timo playing a bass guitar, an action that is indeed one of the most ancient and well recognised ways of performing. Its performative nature becomes even clearer as Timo makes use of unconventional gestures. Russell and Jessamine have been working together on two different projects that go in an even more abstract direction, leaving behind the idea of playing a musical instrument and freeing the performative element. Marco’s system can be placed somewhere in between, as the physical interaction that “plays” it can vary from a percussive-instrument style to more “theatrical” gestures.