Interactivity in Music and Sound

As other team members begin to delve into visual interactivity, I’ve spent time researching the seemingly endless ways to apply this concept to auditory material – both sound design and music. Wwise, the audio middleware we’ll use to connect musical elements to the game engine, offers a comprehensive introduction to interactive music through its Wwise 201 course. The course begins by outlining the differences between a conventional music-making approach and the gameplay-driven method I’ll be using for this project, then works through the finer details of linking game events to additions and alterations in the score.
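
To make that link concrete, the sketch below shows roughly how a gameplay event might be wired to the score once the Wwise Unity integration is in place. This is only an illustration: the event name and state group are placeholders, not assets I’ve actually authored yet.

```csharp
using UnityEngine;

// Minimal sketch: posting a Wwise event and changing a music state when a
// gameplay event fires. "Play_Scene2_Music" and the "ScenePhase" state group
// are placeholder names, not final Wwise assets.
public class MusicEventHook : MonoBehaviour
{
    void Start()
    {
        // Start the interactive music container for this scene.
        AkSoundEngine.PostEvent("Play_Scene2_Music", gameObject);
    }

    // Called by our gameplay code when the player reaches a new phase.
    public void OnPhaseReached(string phaseName)
    {
        // A state change tells Wwise to transition the score,
        // e.g. by adding or muting layers defined in the authoring tool.
        AkSoundEngine.SetState("ScenePhase", phaseName);
    }
}
```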

On a broader conceptual level, a priority is to start considering which creative techniques would fit the vision of our experience. This paper from the Vienna University of Technology, while over ten years old, offers some relevant definitions for music interactivity methodology:

Active Score: players build their own environment of sound
Quantisation: the player’s actions can occur at any time, but the resulting sounds are quantised to the music
Synaesthesia: musical imitation and exaggeration of the sounds that physical objects in the game would produce
Sound Agents: visual elements that exist primarily to affect, emit, or accompany sound

Of the several definitions the paper explores, these four are the most usable for our purposes in doubleLink. Another takeaway from the above paper is the following remark: “The predominant technique for embedding sound in video games is to have a layer of background music and to synchronise the onscreen action with fitting samples.” While this is the most common technique, I feel we can create a more fluid and evolving score that doesn’t rely on a steady background track.
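
To make the Quantisation definition concrete, here’s a minimal sketch of the idea in Unity: the player can trigger an action at any moment, but the resulting sound is scheduled onto the next beat of the score. The tempo value and audio source are placeholders for illustration.

```csharp
using UnityEngine;

// Minimal sketch of the Quantisation idea: the player can act at any moment,
// but the triggered sound is scheduled to land on the next beat.
// The tempo and audio source are placeholders for illustration.
public class QuantisedTrigger : MonoBehaviour
{
    public AudioSource source;   // plays the action's sound
    public float bpm = 96f;      // tempo of the underlying score

    double songStartDsp;         // dspTime when the music began

    void Start()
    {
        songStartDsp = AudioSettings.dspTime;
    }

    // Call this from the player's input handler.
    public void Trigger()
    {
        double beatLength = 60.0 / bpm;
        double elapsed = AudioSettings.dspTime - songStartDsp;

        // Round up to the next beat boundary and schedule playback there.
        double nextBeat = System.Math.Ceiling(elapsed / beatLength) * beatLength;
        source.PlayScheduled(songStartDsp + nextBeat);
    }
}
```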

Another idea briefly mentioned above, and one requiring more discussion, is procedural audio (audio generated in real time) and the audio-induced visual changes it can drive. In his article Procedural Audio, Made in Unity3D, designer Konstantinos Sfikas shared code on GitHub for linking Unity-sourced sounds to a visualizer that responds to frequency and amplitude. This code could potentially be connected to a mesh in our environment for the second scene, with its shape-shifting triggered by music cues. I’ve tested it in the screen capture below:

Screen Capture of Procedural Audio Script
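
For reference, here is a minimal sketch along the same lines as that visualizer (not Sfikas’s actual script): Unity’s AudioSource.GetSpectrumData supplies the frequency data each frame, and the low-frequency energy drives the scale of a target object. The band split and sensitivity values are assumptions to be tuned against our music cues.

```csharp
using UnityEngine;

// Minimal sketch, along similar lines to the referenced visualiser rather than
// the actual GitHub code: read the playing audio's spectrum each frame and use
// the low-frequency energy to scale a target object. Band split and
// sensitivity are placeholder values.
[RequireComponent(typeof(AudioSource))]
public class SpectrumDrivenShape : MonoBehaviour
{
    public Transform target;          // object to scale in response to the music
    public float sensitivity = 40f;   // how strongly amplitude maps to scale

    AudioSource source;
    readonly float[] spectrum = new float[256];

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Fill 'spectrum' with the current FFT of the playing audio.
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Average the lowest bins as a rough measure of bass energy.
        float low = 0f;
        for (int i = 0; i < 16; i++) low += spectrum[i];
        low /= 16f;

        // Ease the object's scale toward a value driven by that energy.
        float scale = Mathf.Lerp(target.localScale.x, 1f + low * sensitivity, 0.2f);
        target.localScale = Vector3.one * scale;
    }
}
```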

With some initial feedback in place, I’ve composed a standard, linear piece that will serve as the foundation for the second scene’s interaction. It will be manipulated, dissected and reformed to match the plot and actions of the scene as they unfold.
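
One likely mechanism for that reshaping, assuming we stay with the Wwise Unity integration, is a real-time parameter control (RTPC) that tracks how far the scene has progressed and lets Wwise crossfade, filter, or re-layer the piece accordingly. The sketch below is illustrative only; “SceneProgress” is a placeholder RTPC name.

```csharp
using UnityEngine;

// Minimal sketch of how the linear piece could be reshaped at runtime:
// an RTPC tracks the scene's progress and Wwise uses it to crossfade,
// filter, or re-layer the score as authored in the tool.
// "SceneProgress" is a placeholder RTPC name, not a final asset.
public class ScoreProgressDriver : MonoBehaviour
{
    [Range(0f, 100f)]
    public float sceneProgress;   // set by gameplay as the scene unfolds

    void Update()
    {
        // Push the current progress value to Wwise every frame.
        AkSoundEngine.SetRTPCValue("SceneProgress", sceneProgress, gameObject);
    }
}
```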
