Wooden Fish

The wooden fish, or temple block, is a wooden percussion instrument used in religious ceremonies. Here, a toy motor rotates a small piece of wood inside a temple block to produce an organic yet frantic and unusual sound. This output is processed through a pitch-shifter, adding low-frequency content without removing any of the organic, real-world tonal quality. The wooden fish is controlled using a pressure-sensitive foot-switch.


Laser Mic

A laser pointer incident on a small solar cell creates a potential difference across the panel's positive and negative leads. The laser mic/turntable object uses this to produce sound: users shine light onto the panel, and the panel's output is sent to an amplifier. Bouncing the beam off rotating reflective surfaces before it reaches the panel can produce various pitches depending on the speed of rotation; a platter carrying, say, eight reflective strips and spinning at 50 revolutions per second interrupts the beam around 400 times per second, giving a tone near 400 Hz. Using flashing light can also produce a very irritating sonic output.


Singing Bowl

The singing bowl sits on a rotating turntable platter, while a stationary mic stand holds a beater/paddle against the side of the bowl. After several rotations the bowl begins to resonate due to friction. A singing bowl is a type of bell typically used in conjunction with meditation and prayer; it has been used here in an attempt to evoke similar emotional states in the performer.


Wind Chimes

The wind chimes are set into motion by a servo motor. They were chosen for their high-frequency, metallic quality (adding a new sonic element to the ensemble) and for their ability to evoke emotion. Wind chimes are often associated with tranquillity, as an ornamental feature in a garden or other quiet space. They have also been associated with the opposite: one person stated that the sound is reminiscent of a horror film in the “it’s quiet… too quiet” sense. The chimes are processed using a spectral harmonizer, which plays a delayed (and spectrally different) version of the sound through various surfaces using the GEL speakers.


Water Pump

The rumbling, rattling sound of a small electric water pump is picked up and amplified by a self-built electromagnetic microphone. Sonically, it has a fairly jarring quality, especially when played for longer periods of time.

Light Sensor Phasing Theremin

An Arduino with a light sensor is connected to a self-built speaker made out of a plastic pretzel jar, mounted fairly high up in the room.
By moving one's hand over the light sensor, a square-wave sound can be frequency-modulated. The resulting sound has a slightly obnoxious and very electronic character.
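As a rough illustration, a minimal Arduino sketch for this kind of light-controlled square wave might look like the following. The pin assignments and frequency range are assumptions, not necessarily the values used in the actual object.

```cpp
// Minimal light-theremin sketch (assumed wiring): an LDR voltage
// divider on A0 sets the frequency of a square wave generated with
// tone() on pin 8, which drives the self-built speaker via an amp.
const int sensorPin = A0;
const int speakerPin = 8;

void setup() {
  pinMode(speakerPin, OUTPUT);
}

void loop() {
  int light = analogRead(sensorPin);          // 0-1023
  int freq = map(light, 0, 1023, 100, 2000);  // Hz range chosen arbitrarily
  tone(speakerPin, freq);                     // square wave at mapped pitch
  delay(10);                                  // small step for smooth sweeps
}
```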

Piano Soundboard 2

An E-bow (electromagnetic bow), controlled by a remote potentiometer, sets a string in constant vibration, creating a droning sound. The drone is further augmented by a piece of guitar string bouncing off the vibrating piano string, and by the E-bow lightly touching the latter, creating a fair amount of overtones.
The sound is further augmented by the same pitch shifter/GEL speaker setup used for Piano Soundboard 1.

Piano Soundboard 1

We use a potentiometer attached to an Arduino as a turning knob, controlling a servo motor that moves a metal arm across some of the strings of an upright piano soundboard. The resulting sound is augmented by a pitch shifter and sent to two GEL surface speakers, attached to a metal sheet and a piano body respectively, which are spread apart spatially.
Because the strings are untuned, the sound is fairly unpredictable in terms of pitch and density. Sonically, the soundboard itself does not resemble a traditional piano sound by virtue of being detached from the piano body and mechanism.
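A minimal sketch of this knob-to-servo mapping, in the spirit of the standard Arduino “Knob” example, could look like this; the pin numbers are assumptions.

```cpp
#include <Servo.h>

// Hypothetical sketch: the potentiometer position directly sets the
// angle of the servo arm resting on the soundboard strings.
Servo arm;
const int potPin = A0;

void setup() {
  arm.attach(9);  // servo signal on pin 9 (assumed)
}

void loop() {
  int reading = analogRead(potPin);            // 0-1023
  int angle = map(reading, 0, 1023, 0, 180);   // degrees
  arm.write(angle);                            // move arm across the strings
  delay(15);                                   // give the servo time to travel
}
```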

Piano Soundboard 1; GEL speakers

Early Sound Object Experiments

The images below show experiments with creating sound objects that took place within the initial few weeks of the project. Some of these objects can be seen in pictures from an earlier post.

The first image shows a guitar to which Wolfgang attached contact microphones. The initial idea was to use a servo motor to move a violin bow over the strings; a guitar slide was also tested. It was discovered that the servo was not powerful enough to move the bow, and that the required set-up would be more complex than initially anticipated.

The next image shows the servo attached to the soundboard of the piano. The servo powers an arm with plastic tassels which ‘strum’ the piano strings as the servo wheel rotates back and forth. The code used to run the Arduino board is based on the ‘Sweep’ example provided in the ‘learning’ section of the Arduino website (arduino.cc/en/Tutorial/Sweep#.UwecDPR_t6Q). Qianqian added a potentiometer to the Arduino circuit, allowing the user to control the angle and timing of the arm movement simply by turning the knob, roughly as in the sketch below.
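A hedged sketch of what such a modified ‘Sweep’ might look like, with the potentiometer scaling both the sweep width and the step delay (the actual code may differ):

```cpp
#include <Servo.h>

// Variant of the Arduino "Sweep" example described above: the servo
// strums back and forth, with the pot scaling both the sweep angle
// and the per-step delay. Pins and ranges are assumptions.
Servo strummer;
const int potPin = A0;

void setup() {
  strummer.attach(9);  // servo signal pin (assumed)
}

void loop() {
  int reading = analogRead(potPin);
  int maxAngle = map(reading, 0, 1023, 30, 180);  // sweep width
  int stepDelay = map(reading, 0, 1023, 5, 30);   // sweep speed (ms/degree)

  for (int pos = 0; pos <= maxAngle; pos++) {     // strum one way
    strummer.write(pos);
    delay(stepDelay);
  }
  for (int pos = maxAngle; pos >= 0; pos--) {     // and back
    strummer.write(pos);
    delay(stepDelay);
  }
}
```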

Wind chimes were used in the final image. A string attached to the servo wheel at one end is tied to the wind chimes at the other; the rotation of the wheel sets the chimes moving and produces their distinctive sound. Qianqian added a button to this object, allowing the user to start and stop the rotation of the servo wheel (see the sketch below).
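A minimal sketch of such a button toggle, assuming the button is wired to ground with the internal pull-up enabled (the actual wiring and code may differ):

```cpp
#include <Servo.h>

// Hypothetical button control: each press toggles whether the servo
// keeps tugging the chime string back and forth.
Servo winder;
const int buttonPin = 2;
bool running = false;
bool lastButton = HIGH;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);  // button to ground (assumed)
  winder.attach(9);
}

void loop() {
  bool button = digitalRead(buttonPin);
  if (lastButton == HIGH && button == LOW) {  // falling edge = press
    running = !running;
    delay(50);                                // crude debounce
  }
  lastButton = button;

  if (running) {                // tug the string back and forth
    winder.write(45);
    delay(400);
    winder.write(135);
    delay(400);
  }
}
```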

Guitar with contact microphones

Piano sweep with Arduino (early version)

Wind chimes with servo motor

Max MSP First Prototypes

The two screenshots below show Max MSP patches for performing signal processing on various audio inputs in order to produce output to be used as stimuli for the performer wearing the neuro-headset. The patches were built for the first installation demo on Wednesday 12th February.

The first image shows a patch named “BrainDrainFX1”, built around a bandpass filter based on the “biquad~” object help file provided with the Max software. The output from this is passed through the subpatch “combFilt”, which uses the “teeth~” object to create various audio FX (vibrato, phaser, ring modulator etc.), again based on that object’s help file. This output passes through a gain slider before reaching a panning section, which sends the output to only the left speaker, then only the right, alternating at regular or random time intervals (depending on user-defined parameters). This was designed to create a disorienting effect for the performer, as the two speakers were to be placed on opposite sides of the performance space. The way the sound shifts from one side of the room to the other in quick succession can confuse the listener or create a sense of discomfort.
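The control logic of that panning section, abstracted out of Max into a short C++ sketch for clarity; the names and parameter ranges here are illustrative, not those of the patch itself:

```cpp
#include <cstdlib>

// Alternating hard pan: every interval the signal jumps fully to the
// other channel; with randomize on, interval lengths are drawn at random.
struct AutoPan {
  bool leftActive = true;   // which speaker currently carries the signal
  bool randomize  = false;  // regular vs. random interval lengths
  int  intervalMs = 500;    // user-defined base interval
  int  elapsedMs  = 0;
  int  currentMs  = 500;

  void process(float in, float& outL, float& outR, int blockMs) {
    elapsedMs += blockMs;
    if (elapsedMs >= currentMs) {          // time to swap sides
      leftActive = !leftActive;
      elapsedMs = 0;
      currentMs = randomize ? 50 + std::rand() % intervalMs : intervalMs;
    }
    outL = leftActive ? in : 0.0f;         // hard pan: one speaker at a time
    outR = leftActive ? 0.0f : in;
  }
};
```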

The input for “BrainDrainFX1” could be previously recorded audio (via the “wave~” and “buffer~” objects) or live audio (via a microphone and the “ezadc~” object, visible on the far left of the image as a microphone icon in a grey box). A range of user-controlled parameters can be modulated via an external MIDI controller, which sends data to the “ctlin” objects. These controls are labeled in the image.

The aim of this patch was to distort audio to the point that it caused some discomfort and confusion, hopefully evoking some emotion (perhaps frustration, excitement or engagement) in the performer. However, the output was deemed too artificial as a result of the heavy processing, which removed most of the real-world qualities from the audio and made it less effective for its intended purpose (as briefly discussed at www.neuroacoustic.com/methods.html – more on the topic of real vs. synthetic sounds will be discussed in another post).

The second patch, named “Granulator”, is based around Michael Edwards’ “mdeGranulator~” object. Granulation is a type of synthesis that takes an audio file and splits it into short sections, referred to as grains. These are played back in random order to produce an output that retains some of the sonic characteristics of the input while providing new and unique textures. Using this type of synthesis was an attempt to give the audio output more real-world characteristics. The human voice was to be used as input: the muddling of words and various vocal sounds was intended to, again, disorient the performer, but with more ‘human’ sounding output, resulting in a stronger effect on the performer’s brain activity.
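In C++ terms, the core of granulation can be sketched in a few lines. This toy version simply copies grains from random positions; real granulators, mdeGranulator~ included, also apply envelopes, overlap and pitch control, all omitted here:

```cpp
#include <cstdlib>
#include <vector>

// Copy short grains from random positions in a source buffer into the
// output, one after another, so the result keeps the source's timbre
// but scrambles its order.
std::vector<float> granulate(const std::vector<float>& src,
                             size_t grainLen, size_t numGrains) {
  std::vector<float> out;
  if (src.size() <= grainLen) return out;      // source too short to granulate
  out.reserve(grainLen * numGrains);
  for (size_t g = 0; g < numGrains; ++g) {
    size_t start = std::rand() % (src.size() - grainLen);  // random grain start
    for (size_t i = 0; i < grainLen; ++i)
      out.push_back(src[start + i]);           // copy one grain verbatim
  }
  return out;
}
```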

The input to the granulator comes from an audio input (again via a mic and the “ezadc~” object) before being filtered through the “biquad~” object, again copied from the object’s help file. The signal is then sent to a simple delay line using the “tapin~” and “tapout~” objects. A scaled version of the delay line’s output is fed back to its input to create a slowly decaying delay, with the amount of signal fed back into the system set by a dial the user controls via the MIDI controller (“ctlin 6”).
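This delay-with-feedback structure amounts to y[n] = x[n] + g·y[n−D], so each echo decays by a factor g. Below is a minimal C++ sketch of the idea; the buffer handling is simplified and this is not the tapin~/tapout~ implementation:

```cpp
#include <vector>

// Feedback delay line: output is the input plus a scaled copy of the
// output from D samples ago. Keep |feedback| < 1 so the echoes decay.
struct FeedbackDelay {
  std::vector<float> buf;   // circular buffer of past output samples
  size_t pos = 0;
  float  feedback;          // the user-controlled dial ("ctlin 6" in the patch)

  FeedbackDelay(size_t delaySamples, float g)   // delaySamples must be >= 1
      : buf(delaySamples, 0.0f), feedback(g) {}

  float process(float x) {
    float y = x + feedback * buf[pos];  // add the delayed, scaled output
    buf[pos] = y;                       // store for D samples in the future
    pos = (pos + 1) % buf.size();
    return y;
  }
};
```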

The delayed signal then passes through the granulator before being scaled by a gain slider and sent through the same autopan/random-pan section used in the previous patch.

Again, the overall ‘realism’ of the sounds produced by this patch was in question, but the textures produced were rich and sonically interesting. Perhaps tweaking the structure to produce audio more closely related to real-world sounds could make it more useful for the project; but as we are moving towards more physical sound-making objects and fewer digital ones, these patches may have to go down simply as experimentation. The sounds produced, however, helped us understand what is more and less effective in evoking a reaction from the performer.

"BrainDrainFX1" Max Patch Prototype
“BrainDrainFX1” Max Patch Prototype


"Granulator" Max Patch Prototype
“Granulator” Max Patch Prototype