Immersive Audio-Vision

Part 2: Sound Design Documentation for Submission 1

For our immersive audio-visual project Mind at Large, we all agreed that we wanted sound to play a key part in the experience. Over several weeks of group discussions, we settled on a concept and visual aesthetic, which then shaped the types of sounds we decided to incorporate. The concept was based on a specific paragraph from Aldous Huxley’s The Doors of Perception. As part of the visual designers’ meetings, a working storyboard was devised, visually representing six key scenes from the paragraph. For sound-related research, we took some sonic inspiration from Ryoji Ikeda, a Japanese experimental sonic artist. He tends to build his works from recursive, patterned elements, and the minimal sounds he uses complement this, creating a feeling of deceptive complexity. We aimed to mirror this in our overall sound when working with the low-poly visuals.

For Submission 1 specifically, we decided to focus on just one of the six scenes in the storyboard; ‘Scene 4: Sumptuous red surfaces’. This would allow us to integrate visuals and sounds within a realistic scope, and create a simple initial prototype for people to experience. We divided the scene into three sonic elements and allocated each of them to a member of the sound team. These consisted of: a) Natural soundscape elements, b) Synth layers, to represent ‘bright nodes of energy that vibrated’, and c) Organic textures to bridge the gap between the two. We felt that having this contrast between sound types would represent the different stages of mescaline effects described in Huxley’s text. Each member of the sound team created their own FMOD project and all relevant sound events were then compiled into an overall project for the scene.

Natural Soundscape Elements
This section of sounds was covered by Gabrielle and aimed to create part of the ambient sound bed present throughout the scene. She used field recording to capture organic sounds mostly found in forests, such as birds, running water, various footsteps and leaves rustling in the wind. Careful recording conditions were needed for the foley to be clean enough to use in the project. Several effects such as reverb, tremolo and low-pass filters were applied dynamically in FMOD to create sonic variation in different ways. Overall, this section of the sound represented a more standard state of perception. A more detailed description of this portion and audio examples can be found here.

Vibrating Synth Layers
This section of sounds was covered by me and represented the ‘bright nodes of energy that vibrated’ which Huxley mentions. Since I wanted it to stand out against the more natural, organic sounds, I used a mixture of oscillating synths, VST plugins in Reaper and Logic synth instruments. I adjusted the settings in each of them to achieve a good balance between buzzing, gritty sawtooth waves and softer, more pleasant-sounding synths. With these, I made layers of sustained notes and transitioned between them in FMOD, so that the sound changed gradually as a distance parameter increased. I added reverb and low-pass filter envelopes along this transition, so that the sound feels distant and diffuse far from the emitter, while more synthetic, ear-catching sounds come through as the user draws closer. In addition to this, I also had other sound events for more general ambient synth beds, and kept their levels low to complement the organic sounds in the scene. (Password for Video: dmsp)
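As a rough illustration of how this distance-driven transition could be controlled from Unity, the sketch below drives a user-defined FMOD parameter from the listener’s distance to the emitter. The event path, parameter name and API calls assume a recent FMOD Unity integration and are placeholders rather than the exact setup in our project; FMOD’s own built-in distance parameter can also handle this without any scripting.

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Hypothetical sketch: feeds the listener-to-emitter distance into a
// user-defined "Distance" parameter, so layer crossfades and the
// reverb/low-pass envelopes authored in FMOD follow the user's movement.
public class SynthDistanceDriver : MonoBehaviour
{
    // Placeholder names; substitute whatever exists in the FMOD project.
    [SerializeField] private string synthEventPath = "event:/Scene4/VibratingSynths";
    [SerializeField] private string distanceParameter = "Distance";
    [SerializeField] private Transform listener; // usually the VR camera transform

    private EventInstance synthInstance;

    private void Start()
    {
        synthInstance = RuntimeManager.CreateInstance(synthEventPath);
        // Position the event at this emitter (static emitter assumed here).
        synthInstance.set3DAttributes(RuntimeUtils.To3DAttributes(transform.position));
        synthInstance.start();
    }

    private void Update()
    {
        // Straight-line distance mapped onto the parameter that the
        // layer transitions and filter envelopes respond to.
        float distance = Vector3.Distance(listener.position, transform.position);
        synthInstance.setParameterByName(distanceParameter, distance);
    }

    private void OnDestroy()
    {
        synthInstance.stop(STOP_MODE.ALLOWFADEOUT);
        synthInstance.release();
    }
}
```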


Organic Textures: Bridging the Gap
This section of sounds was covered by Richard and added to the organic element, while also acting as a bridge between the natural and the synthetic, or ‘unnatural’. This tied in with the altered state of perception we hoped to portray. He created additional textures for the sound world, such as insect sounds and vocal samples, applying filters to some of them so they would not be too identifiable. Many of them were designed to trigger randomly throughout the scene, or in specific locations. Additional wind sounds were also incorporated to mirror the desert wind of Mexico, where mescaline originates. As with the other sections, parameters were used in FMOD to change sounds dynamically; for example, additional gusts of wind fading in and out. A detailed description of this portion can be found here.
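A minimal sketch of how that random triggering might look on the Unity side is given below: it fires a one-shot gust event at random intervals and positions around the emitter. The event path and timing values are illustrative placeholders, since the actual randomisation in our project was authored inside FMOD itself.

```csharp
using UnityEngine;
using FMODUnity;

// Hypothetical sketch: plays a one-shot wind-gust event at random intervals,
// each time from a random point around this object, so gusts arrive from
// different directions rather than on a fixed schedule.
public class RandomGustTrigger : MonoBehaviour
{
    [SerializeField] private string gustEventPath = "event:/Scene4/WindGust";
    [SerializeField] private float minInterval = 8f;   // seconds between gusts (placeholder)
    [SerializeField] private float maxInterval = 20f;
    [SerializeField] private float spawnRadius = 6f;   // how far from the emitter gusts appear

    private float nextGustTime;

    private void Start()
    {
        ScheduleNextGust();
    }

    private void Update()
    {
        if (Time.time >= nextGustTime)
        {
            // Pick a point on a circle around this object for the gust's position.
            Vector2 offset = Random.insideUnitCircle.normalized * spawnRadius;
            Vector3 position = transform.position + new Vector3(offset.x, 0f, offset.y);
            RuntimeManager.PlayOneShot(gustEventPath, position);
            ScheduleNextGust();
        }
    }

    private void ScheduleNextGust()
    {
        nextGustTime = Time.time + Random.Range(minInterval, maxInterval);
    }
}
```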

What we aim to do sound-wise for the next submission is to create a more interactive and intuitive experience by mapping the user’s actions more directly onto the sounds being heard. For example, we will experiment with Unity scripting that uses the XYZ position co-ordinates and the direction the user is looking, and connects these with the ambiences and sounds being triggered. FFT frequency band analysis will also be applied to the sounds in Unity, directly driving the visuals. This should allow the experience to feel more like a composition or audio-visual art piece.
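As a first sketch of the gaze-driven idea, the script below maps how directly the user is facing a sound emitter onto an FMOD parameter, so an ambience can swell in as it comes into view. The event path and ‘Focus’ parameter are hypothetical names for illustration, not part of our current FMOD project.

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Hypothetical sketch: uses the camera's forward vector to work out how
// directly the user is looking at this emitter, and writes that value
// (0 = looking away, 1 = looking straight at it) into an FMOD parameter.
public class GazeAmbienceController : MonoBehaviour
{
    [SerializeField] private string ambienceEventPath = "event:/Scene4/RedSurfaceAmbience";
    [SerializeField] private string focusParameter = "Focus";
    [SerializeField] private Transform gazeSource; // VR camera / head transform

    private EventInstance ambience;

    private void Start()
    {
        // Fall back to the main camera if no gaze source has been assigned.
        if (gazeSource == null && Camera.main != null)
        {
            gazeSource = Camera.main.transform;
        }

        ambience = RuntimeManager.CreateInstance(ambienceEventPath);
        ambience.set3DAttributes(RuntimeUtils.To3DAttributes(transform.position));
        ambience.start();
    }

    private void Update()
    {
        // Dot product of the view direction and the direction to the emitter:
        // 1 when the user looks straight at it, 0 at 90 degrees or more away.
        Vector3 toEmitter = (transform.position - gazeSource.position).normalized;
        float focus = Mathf.Clamp01(Vector3.Dot(gazeSource.forward, toEmitter));
        ambience.setParameterByName(focusParameter, focus);
    }

    private void OnDestroy()
    {
        ambience.stop(STOP_MODE.ALLOWFADEOUT);
        ambience.release();
    }
}
```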
The Oculus Rift will of course also be integrated, and with some hands-on experience of it we will have a better idea of what works and what doesn’t in a VR environment. The Xbox One Kinect could also potentially be used for physical movement, as could a heart-rate monitor to influence the sound and visuals. Our hope is that each of the six scenes in the project will have slightly different forms of user interaction, and also different audio-visual interaction. Several sound events were abandoned when migrating to the final FMOD project, but we anticipate reusing them, and creating many more for each scene, in the next submission.
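For the FFT frequency-band analysis mentioned above, the sketch below shows one possible approach, assuming the analysed audio plays through Unity’s built-in AudioListener rather than FMOD (tapping FMOD’s output would instead require an FMOD analysis DSP). It averages the energy in one low frequency band and scales a visual object with it; the band range and scaling values are arbitrary placeholders.

```csharp
using UnityEngine;

// Hypothetical sketch of FFT-driven visuals: samples the current mix from
// Unity's AudioListener, averages the energy in a chosen band of FFT bins,
// and maps that energy onto the scale of a target object in the scene.
public class SpectrumToVisual : MonoBehaviour
{
    [SerializeField] private Transform visualTarget;  // e.g. a pulsing mesh in the scene
    [SerializeField] private int bandStart = 2;       // first FFT bin of the band (placeholder)
    [SerializeField] private int bandEnd = 16;        // last FFT bin of the band (exclusive)
    [SerializeField] private float scaleAmount = 40f; // how strongly the band drives the scale

    private readonly float[] spectrum = new float[512]; // must be a power of two

    private void Update()
    {
        // Grab the spectrum of whatever the listener is currently hearing.
        AudioListener.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        float energy = 0f;
        for (int i = bandStart; i < bandEnd; i++)
        {
            energy += spectrum[i];
        }
        energy /= (bandEnd - bandStart);

        // Let the visuals breathe with the sound by scaling the target object.
        visualTarget.localScale = Vector3.one * (1f + energy * scaleAmount);
    }
}
```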


Additional Credits

  • Multi-surface footsteps audio script from the official FMOD Unity Integration Tutorials (can be found here).
