Optical Audio Explorations – Submission 2

Team: Ian Hynd, Andreas Miranda, Martha Winther, Fiona Keenan, Jim Pritchard, Raz Ullah and Marie-Claude Codsi.

Project Supervisor: Dr Sean Williams

 

Project Introduction and Concept:

LCD is a sound installation that amplifies the secret world of light around us and manifests it as a material that can be directly manipulated and interacted with.

Informed by the work of early experimenters with Optical Audio such as Daphne Oram, Oskar Fischinger and others, LCD uses specially designed optical circuits and repurposed computer parts as components in an array of six sound sculptures that emit electronic tones when activated by LED torches.

The installation comprises two elements. Firstly, we use handheld LED torches to stimulate a series of electro-mechanical oscillators and generate base tones, while a second LED light source modifies those tones through sonic granulation and pitch shifting, carried out in Max/MSP via an Arduino, creating a rich and pervasive sound world determined by user interaction.

Secondly, data from background and transient light-level variations was integrated into an audio composition that forms the background to the installation, representing the urban landscape, over which the interactive elements are layered to represent the way in which people influence their environment.

The interactive element is laid on top of the background composition to reflect the complexity of the modern urban lightscape.

DMSP Optical Audio Group Performance

[soundcloud url="http://api.soundcloud.com/tracks/89437812" params="" width="100%" height="90" iframe="true" /]

DMSP Optical Audio Presentation Night

[soundcloud url="http://api.soundcloud.com/tracks/89438288" params="" width="100%" height="90" iframe="true" /]

 

Installation Composition

[soundcloud url="http://api.soundcloud.com/tracks/83976612" params="" width="100%" height="90" iframe="true" /]

Having gathered such a wide range of alluring sounds using our RGB Recorder, we decided it would be interesting to combine these various sonic atmospheres into one musical composition. The idea was to craft the various recorded environments into a unified piece. The underlying layer of the composition is the output recorded from a solar panel: the constant sound of the sun characterizes the omnipresent light source and supports the various samples recorded in urban environments. These include recordings we made from ambient lights, bike lights, television screens, exit lights and so on.

The following image shows how these various samples were organized to create the piece:

This piece was also used as background material during the installation, in order to include environmental elements alongside the real-time audience performance.

Hardware Topology

Initial testing revealed the DC fan front end to be an excellent generator of square-wave-type noise.

We decided to produce a piece of hardware that would incorporate a DC fan, an LDR and perhaps a light source on an ‘arm’ that could be used to stimulate a second unit. The hardware itself would be mountable on a stand in a performance situation.

A couple of working prototypes of this design were built for evaluation, although aspects of the LED ‘arms’ were omitted for ease of manufacture.

The prototype drones were workhorse development platforms that allowed us to reject various details and consider including others. We had particular difficulty sourcing light sources strong enough to stimulate them into operation.

 

Initial fan test with Max/MSP

[soundcloud url="http://api.soundcloud.com/tracks/82184511" params="" width="100%" height="90" iframe="true" /]

A cumulative series of additional control inputs was considered for the installation. The word ‘interactive’ was paramount, and great weight was given to what fitted with this ideal.

Our 12V fan-and-LDR unit developed from our first-submission design into six colour-coded units mounted on microphone stands, each with a coloured plastic bowl housing the LDR input to the Arduino controller and a dedicated loudspeaker for audio output.

The units were arranged in a semi-circular configuration, together with the control station (laptop, Mackie mixer and MOTU soundcard), a projector, and a pair of stereo speakers for playback of the separate composition.

 

Software

The software component of the installation was built around ‘Maxuino’, a collaborative open-source project that allows Max/MSP to interface with an Arduino microcontroller. ‘Maxuino’ allows Max to read data arriving at the Arduino’s analog and digital pins, write to digital pins, control servos and sensors, and more.

Six light-dependent resistors were connected to the Arduino via a breadboard, each paired with a 1 kOhm resistor to form a voltage divider. Activating the analog pins allowed Maxuino to start receiving the resistance changes, which arrived as floating-point numbers in the range 0 to 1:
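As an illustration of the electrical arrangement only (in the installation these values were read through Maxuino rather than custom firmware), a minimal Arduino-style sketch reading one LDR/resistor junction and normalising the 10-bit value to the same 0-to-1 range might look like the following; the pin choice and update rate are assumptions:

```cpp
// Hypothetical sketch: read one LDR/1 kOhm voltage divider on analog pin A0
// and print a 0-1 level over serial, analogous to the floats Maxuino reports.
const int LDR_PIN = A0;   // junction of the LDR and the 1 kOhm resistor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(LDR_PIN);   // 10-bit reading, 0-1023
  float level = raw / 1023.0;      // normalise to 0.0-1.0
  Serial.println(level, 3);        // rises when a torch is pointed at the LDR
  delay(50);                       // ~20 control-rate readings per second
}
```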

The values registered by Maxuino were then routed to the audio-processing and visualizer elements of the patch. First, however, they needed to be passed through a conditional ‘if’ object, to ensure that audio processing and graphics generation only happened when a specific threshold had been exceeded. The threshold was set to the resistance value produced by the ambient light in the space: once an LED torch was directed towards an LDR, the value increased, which in turn activated the granulation and graphics elements. A ‘scale’ object was also used to make more meaningful use of the data coming from the LDRs: the 0-to-1 numbers were scaled to different ranges to allow for a wider spread of grain size and pitch:
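The following C++ fragment sketches the same gating and scaling logic; the threshold value and output ranges are placeholders, not the figures used in the installation:

```cpp
// Sketch of the per-LDR gating ('if') and mapping ('scale') stages.
const float AMBIENT_THRESHOLD = 0.25f;   // placeholder: value produced by room light

// Linear rescale from one range to another, like Max's 'scale' object.
float scaleRange(float x, float inLo, float inHi, float outLo, float outHi) {
  return outLo + (x - inLo) / (inHi - inLo) * (outHi - outLo);
}

// Fills the control values and returns true only when a torch lifts the reading
// above the ambient threshold, so granulation and graphics stay idle otherwise.
bool processLdr(float ldr, float& grainSizeMs, float& pitchRangeSemis) {
  if (ldr <= AMBIENT_THRESHOLD) return false;
  grainSizeMs     = scaleRange(ldr, 0.0f, 1.0f, 20.0f, 500.0f);  // placeholder grain-size range
  pitchRangeSemis = scaleRange(ldr, 0.0f, 1.0f, 0.0f, 12.0f);    // placeholder pitch spread
  return true;
}
```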

A simple granulator was used to process the incoming audio; the parameters that could be controlled were granulation on/off, grain size and random pitch-shift range:
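For readers unfamiliar with granulation, the sketch below shows one way the technique works: an input buffer is chopped into overlapping, windowed grains, each resampled by a random pitch ratio. The window shape, 50% overlap and nearest-neighbour resampling are arbitrary simplifications rather than a transcription of the Max patch:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

// Offline mono granulator sketch exposing the same three controls as the patch:
// granulation on/off, grain size (samples) and a random pitch-shift range (semitones).
std::vector<float> granulate(const std::vector<float>& in, bool enabled,
                             int grainSize, float pitchRangeSemis)
{
    if (!enabled || in.empty() || grainSize < 2) return in;   // bypass when off

    const float PI = 3.14159265f;
    const int hop = std::max(1, grainSize / 2);               // 50% grain overlap
    std::vector<float> out(in.size(), 0.0f);

    for (std::size_t start = 0; start + grainSize < in.size(); start += hop) {
        // Random pitch ratio within +/- pitchRangeSemis semitones for this grain.
        float semis = ((std::rand() / (float)RAND_MAX) * 2.0f - 1.0f) * pitchRangeSemis;
        float ratio = std::pow(2.0f, semis / 12.0f);

        for (int i = 0; i < grainSize; ++i) {
            // Hann window keeps the overlap-added grain edges click-free.
            float w = 0.5f * (1.0f - std::cos(2.0f * PI * i / (grainSize - 1)));
            // Read the grain at the shifted rate (nearest-neighbour resampling).
            std::size_t src = start + (std::size_t)(i * ratio);
            if (src >= in.size()) break;
            out[start + i] += in[src] * w;
        }
    }
    return out;
}
```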

Expanded view of granulator:

The feed from each granulator was routed to a Jitter system to generate graphics from the incoming audio. Using the Jitter objects ‘jit.catch’ and ‘jit.gl.graph’ allowed us to generate a solid coloured line which changed appearance according to the intensity of the incoming audio stream:
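Jitter handles this graphically in the patch; as a loose analogue only, the per-block intensity that shapes each line could be computed as an RMS value:

```cpp
#include <cmath>
#include <vector>

// Loose analogue of the intensity measure driving each drawn line:
// the RMS of one captured block of audio. Louder granulated output
// yields a larger value, and hence a more active line.
float blockIntensity(const std::vector<float>& block)
{
    if (block.empty()) return 0.0f;
    float sum = 0.0f;
    for (float s : block) sum += s * s;
    return std::sqrt(sum / block.size());
}
```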

Each instance of the visualizer was then combined to produce the final six-column matrix for projection. As each visualizer rendered to its own window, we needed a way to combine them, which was achieved using a ‘jit.gl.asyncread’ object. This object allows us to read from the OpenGL domain and create a synchronous video stream that can be displayed in a standard ‘jit.pwindow’. Once we had the visual data in matrix form, it was a simple case of using ‘jit.glue’ to combine them all together:

 

Projected visualizer:

Overall handling of the audio streams was managed by two instances of a ‘dac’ object: one to route the unprocessed audio and another to route the granulated/pitch-shifted audio. Different multiplication values were used on each ‘dac’ to create an appropriate balance between the two streams, with the unprocessed audio set at a lower level than the processed audio. The levels were decided upon after extensive listening tests:
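In effect the balance is a weighted sum of the two streams; the gain values in the sketch below are placeholders rather than the levels settled on in the listening tests:

```cpp
#include <algorithm>
#include <vector>

// Weighted sum of the dry (unprocessed) and wet (granulated/pitch-shifted)
// streams; the gains are placeholder values, with the dry signal kept lower.
std::vector<float> mixStreams(const std::vector<float>& dry,
                              const std::vector<float>& wet,
                              float dryGain = 0.4f, float wetGain = 0.8f)
{
    std::vector<float> out(std::min(dry.size(), wet.size()), 0.0f);
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = dry[i] * dryGain + wet[i] * wetGain;
    return out;
}
```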

 

Rejected Installation Design Ideas

A succession of design ideas was rejected, and the scale of possible front-end devices was reduced, due to several factors. These were:

  • Time considerations
  • Budgetary considerations
  • Technical ability
  • Interactivity shortfall
  • Deviation from the concept of optical synthesis

Concepts which were rejected include:

  •  Optical card reader

A device which would have taken a series of ‘programming’ cards, each with holes cut into them (possibly filled with RGB-coloured filters). These cards would have been read by an array of LDRs to derive an instruction for an aspect of control.

Rejected due to lack of interactivity or dynamic control.

  •  Spectrum tuning box

A device which would have held a glass prism in line with a diffraction grating and a pure white light source. The prism would have been manually turnable by a user, thus allowing the real-time creation and refraction of a visible spectrum. The concept was to place a series of LDRs in line with the area where the spectrum emerged from the prism, allowing a user to ‘tune’ which area of the visible spectrum to listen to. A further variation, with RGB-filtered LDRs acted upon by a series of prisms, was also considered.

Rejected due to the difficulty of achieving refraction and the very small area in which the spectrum fell.

  •  Mobile phone screen LDR reception array

A device whereby an array of LDRs would be available for individuals attending the exhibit to present their mobile handsets to, so that the system could derive a sound or control input from individual screen choices. Very tempting, because ‘nearly’ everybody has a phone with a screen, and as such a soundtrack could instantly be made from people’s individual choice of screensaver or background.

Rejected due to the low-fidelity output of a group of LDRs: they are not particularly colour-sensitive en masse and are unlikely to give a perceptible variation between screen types. There was also the financial consideration of the 150-plus LDRs required to create a receptive surface.

  •  A live daylight input working perhaps through Ian’s oscillator reception circuit.

This would have been an opportunity for attendees to manually augment, or at least audition, live daylight as it entered the exhibition space. Tempting because it would be an entirely external analogue input which would vary over time.

Rejected because the performance space was best served in darkness, to allow more accurate reception of the other light sources within the exhibit, and also because of the unpredictable nature of daylight levels and the possibility of only slow variation in the resulting audio output.

  •  Producing monochrome images on a computer screen to be scanned

A publicly viewable display surface would show light and dark patterns and shapes to be scanned by users with LDR wands, providing a control input to the system.

Rejected because of the difficulty of making the concept a group-participatory resource, and because the almost completely open-ended control inputs might have yielded more data than could be meaningfully used.

  •  LED/LDR plug & socket table

A system whereby a table or raised surface, perhaps inside a tent to shield it from extraneous light, is fitted with a series of illuminated LED sockets into which a wired LDR wand can be plugged. Tempting because of its interactivity and visual impact within the space.

Rejected because of cost, the difficulty of building it within the timescale, and its compatibility and application within the rest of the system.

 

LCD in exhibition and after:

Firstly, the project overall proved a challenge in terms of working together to create a piece of sonic art that combines technology and aesthetics. As with any exhibited artwork, however, the creator cannot know how the viewer will perceive and experience the work, no matter the intention. Looking back at LCD as exhibited and interacted with by spectators, the following aspects come to our attention:

-Implementing a stereo pair to reveal the tonality of the overall soundscape (background and individual fans)

-Introducing flashing light sources to accentuate the detailed changes in the sound

-Lowering the overall sound level

-Altering the arrangement of the separate modules so that the ‘audience’ can interact by being able to see each other

-Placing stronger emphasis on the relationship between each specific sound and its visual source

-Having the creators perform with the installation to amplify its diversity in sound and to bridge the relationship between audience and interactivity

In terms of implementing a stereo pair of speakers, we believe it would have been useful for gathering the entire soundscape into one solid wall of sound. By doing so, each participant would potentially gain a feeling of composing and playing in unison with the others. This would also eliminate any need to alter the position of the fans, as we placed great emphasis on the physical layout to achieve the aesthetic we felt the piece needed.

The idea of using flashing light sources has the potential to expand the overall aesthetic of LCD further; this could be implemented using flashbulbs, strobes and other intermittent light sources. As to the relationship between each specific sound and its visual source, we agree that this should be strengthened by increasing the size of the projection surface or by exploring other options for projecting visuals, for example 3D mapping.

As a group, we understand that, as with all things, there is the potential to develop LCD further and improve various aspects. However, given the time constraints and the cross-disciplinary engagement involved, we agree that LCD was a cohesive piece of sonic art, both aesthetically and technologically.

 

Task Credits:

All members of the team contributed to research, the design process, hardware setup and testing, and project documentation.

Andreas Miranda: Light recordings, composition.

Jim Pritchard: Fan unit design and fabrication, meeting minutes.

Ian Hynd: Sunlight data recording with oscillator circuit, bookkeeping (accounts).

Raz Ullah: Max/MSP programming, promo videos, poster design, booklet printing, ECA space booking.

Martha Winther: Equipment sourcing, installation in ECA space.

Marie-Claude Codsi: Event promotion/advertising on social media and elsewhere, equipment sourcing.

Fiona Keenan: Unit fabrication for light recording, sound file recording.