Max MSP First Prototypes
The two screenshots below show Max MSP patches that perform signal processing on various audio inputs to produce output used as stimuli for the performer wearing the neuro-headset. The patches were built for the first installation demo on Wednesday 12th February.
The first image shows a patch named “BrainDrainFX1”, built around a bandpass filter based on the “biquad~” object help file provided with the Max software. The output from this is passed through the subpatch “combFilt”, which uses the “teeth~” object to create various audio effects (vibrato, phaser, ring modulator, etc.), again based on the help file for that object. This output is passed through a gain slider before reaching a panning section. This section creates a panning effect that sends the output to only the left speaker, then only the right, alternating at regular or random time intervals (depending on user-defined parameters). This was designed to disorient the performer, as the speakers were to be placed on opposite sides of the performance space. The way the sound shifts from one side of the room to the other in quick succession can confuse the listener or create a sense of discomfort.
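The alternating/random pan logic can be sketched in code as follows. This is a minimal illustration of the idea rather than the patch itself; the function name and parameters are assumptions, and the patch's actual timing is set by the user via MIDI.

```python
import random

def autopan(samples, sr, min_ms=100, max_ms=400, randomize=True, seed=0):
    """Alternate a mono signal between hard left and hard right.

    Returns (left, right) channel lists. Each segment lasts either a
    fixed min_ms milliseconds or, if randomize is True, a duration
    drawn uniformly from [min_ms, max_ms] -- the "regular or random
    time intervals" described above.
    """
    rng = random.Random(seed)
    left, right = [], []
    i, to_left = 0, True
    while i < len(samples):
        ms = rng.uniform(min_ms, max_ms) if randomize else min_ms
        n = max(1, int(sr * ms / 1000))
        for s in samples[i:i + n]:
            left.append(s if to_left else 0.0)
            right.append(0.0 if to_left else s)
        i += n
        to_left = not to_left  # hard-switch to the other speaker
    return left, right
```

Because each sample goes entirely to one channel, the sum of the two channels always reconstructs the input; the disorienting effect comes purely from the abrupt left/right switching.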
The input for “BrainDrainFX1” can come either from previously recorded audio (via the “wave~” and “buffer~” objects) or from live audio (via a microphone and the “ezadc~” object, shown on the far left of the image as a microphone icon in a grey box). A range of user-controlled parameters can be modulated via an external MIDI controller, which sends data to the “ctlin” objects. These controls are labeled in the image.
The aim of this patch was to distort audio to the point that it would cause some discomfort and confusion, hopefully evoking an emotional response (perhaps frustration, excitement or engagement) in the performer. In practice, the output was deemed too artificial: the heavy processing removed most of the real-world qualities from the audio, making it less effective for its intended purpose (as briefly discussed at www.neuroacoustic.com/methods.html – more on the topic of real vs. synthetic sounds will be discussed in another post).
The second patch, named “Granulator”, is based around Michael Edwards' “mdeGranulator~” object. Granulation is a type of synthesis that splits an audio file into short sections, referred to as grains, which are then played back in random order to produce an output that retains some of the sonic characteristics of the input but provides new and unique textures. Using this type of synthesis was an attempt to keep more real-world characteristics in the audio output. The human voice was to be used as input: the muddling of words and various vocal sounds was intended, again, to disorient the performer, but with more ‘human’-sounding output, hopefully producing a stronger effect on the performer's brain activity.
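The grain-shuffling idea can be illustrated with a naive sketch. This is not how “mdeGranulator~” is implemented internally — the function, its parameters, and the simple triangular window are all assumptions for illustration:

```python
import random

def granulate(samples, grain_len, n_grains, seed=0):
    """Naive granular resynthesis: cut the input into fixed-length
    grains, then concatenate grains chosen in random order.

    A triangular amplitude window is applied to each grain so that
    every grain fades in and out, avoiding clicks at the joins.
    """
    rng = random.Random(seed)
    grains = [samples[i:i + grain_len]
              for i in range(0, len(samples) - grain_len + 1, grain_len)]
    out = []
    half = grain_len / 2
    for _ in range(n_grains):
        g = rng.choice(grains)
        # window: 0 at the grain edges, 1 at its centre
        out.extend(s * (1 - abs(j - half) / half) for j, s in enumerate(g))
    return out
```

Short grains (tens of milliseconds) preserve the timbre of the source voice while destroying word order, which is exactly the muddling effect described above.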
The input to the granulator comes from a live audio source (again via a microphone and the “ezadc~” object) and is filtered through the “biquad~” object, again based on the object's help file provided with Max. The signal is then sent to a simple delay line built with the “tapin~” and “tapout~” objects. A scaled copy of the delay line's output is fed back to its input to create a slowly decaying delay, with the amount of signal fed back into the system set by a dial controlled by the user via the MIDI controller (“ctlin 6”).
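The decaying-delay behaviour can be sketched as a one-tap feedback delay line. This is a simplified stand-in for the tapin/tapout arrangement in the patch, with hypothetical names and a per-sample loop in place of Max's signal-vector processing:

```python
def feedback_delay(samples, delay_samples, feedback):
    """Feedback delay line: out[n] = in[n] + feedback * out[n - D].

    With 0 < feedback < 1, each echo is a scaled copy of the previous
    one, giving the slowly decaying repeats described above. In the
    patch this feedback amount is the dial mapped to "ctlin 6".
    """
    out = []
    for n, x in enumerate(samples):
        delayed = out[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out
```

Feeding an impulse through shows the geometric decay: each repeat arrives one delay time later at `feedback` times the previous amplitude. Keeping the feedback below 1.0 is essential, otherwise the loop grows without bound.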
The signal then passes through the granulator, is scaled by a gain slider, and finally runs through the same autopan/random-pan section used in the previous patch.
Again, the overall ‘realism’ of the sounds produced by this patch was in question, but the textures it produced were rich and sonically interesting. Tweaking the structure to produce audio more closely related to real-world sounds could make it more useful for the project, but as we are moving towards more physical sound-making objects and fewer digital ones, these patches may have to go down simply as experimentation. The sounds produced, however, helped in the process of understanding what is more and less effective at evoking a reaction from the performer.