Essay for Submission 1 – “Aspects of sound perception in the absence of visual stimuli in relation to the ‘Brain Drain’ installation”

1. General prerequisites of the installation

The basic principle of the installation is to use sound as an input into a performer’s brain, picking up his reactions by means of a portable EEG device – which translates brainwaves into four channels of emotional states – and then to output these states as projected visuals.

In order to obtain the least amount of brain data distortion – where the performer would be distracted by stimuli other than sound – and so as not to feed visual impressions back into the brain, the person wearing the headset needs to be placed in a fairly dark room, and be blindfolded.
By nature, this alludes to the principles of acousmatics (Cp. Schaeffer 2004, p.76ff), and in turn to Chion’s “Listening Modes” (Chion 1994, p.25ff).

Aspects of both phenomena need to be investigated in the following, as do general observations relating to auditory perception and cognition, and their relevance to the installation.

2. Specifics of auditory perception and how they relate to the installation setup

In principle, our auditory sense allows us to perceive signals located at any point on or in a sphere, i.e. we are able to hear sound on three axes. Further, it provides us with the ability to gauge sound source distance. Having said that, there are various limitations to each of these capabilities, especially when we are deprived of simultaneous additional visual information. (Cp. Raffaseder 2010, p.125/Farnell 2010, p.81)

Generally, our ears provide us with more precise information where the x-axis (left-right) is concerned, and less precise information where the y- (above-below) and z- (front-back) axes are concerned. (Cp. Raffaseder 2010, p.125) Head movement is employed to augment the process of spatial localisation. (ibid)

Judging distance fundamentally works by way of assessing loudness or sound volume, and to a lesser extent frequency content. However, the absence of comparative values, based on our expectations of how loud a specific sound would be (Cp. Farnell 2010, p.80), can easily confuse our judgement in this regard. Consequently, not being able to gather any visual information on sound sources further distorts our assessment of sound source distance.
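The physical basis of the loudness cue described above can be made concrete: for a point source in free field, sound pressure level falls by roughly 6 dB per doubling of distance. A minimal sketch in plain Java (the reference level of 80 dB at 1 m and the example distances are assumed values, not measurements of the installation space):

```java
public class DistanceCue {
    // Free-field level drop for a point source relative to a reference
    // distance: L = L_ref - 20 * log10(d / d_ref)
    public static double levelAt(double refLevelDb, double refDistance, double distance) {
        return refLevelDb - 20.0 * Math.log10(distance / refDistance);
    }

    public static void main(String[] args) {
        // 80 dB at 1 m: each doubling of distance costs about 6 dB
        System.out.printf("2 m: %.1f dB%n", levelAt(80.0, 1.0, 2.0));
        System.out.printf("4 m: %.1f dB%n", levelAt(80.0, 1.0, 4.0));
    }
}
```

Without known reference values for a given sound – the “expectations” Farnell describes – the listener cannot invert this relationship, which is exactly why distance judgement becomes unreliable.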

Of equal importance is the fact that we are able (and constantly use this ability) to gather a multitude of further information about sound sources: we make assessments of their size, material, and even their cause and thus their situative context, deriving meaning from them in the process. (Cp. Raffaseder 2010, p.46ff) In effect, a large part of our everyday listening practice consists of what Chion famously classified as “Causal Listening”. (Chion 1994, p.25ff)

Knowing about these phenomena of course provides us with the possibility of consciously influencing the listener’s perception in an installation context such as the one we are working on.
Possible means of doing this, and routes we are exploring, include creating various sounds that are in themselves limited to relatively narrow frequency spectra and pitch ranges – while between them covering a wider range of frequencies, pitches, and timbres – and experimenting with spatial placement as well as distance to the performer.

Further, employing surface loudspeakers provides us with the opportunity to change the reverberation characteristics of a given space, by means of adding material reverberations to those prevalent in the room where the installation takes place. A practical example would be the application of such speakers to metal sheets, which give an impression of a fairly large space, even when set up in a small room.

Another area we are exploring is that of acousmatics, which by definition is very much a core element of the installation by virtue of the performer being blindfolded, and thus only able to hear but not see. (Cp. Schaeffer 2004, p.77)

Building on this concept, we have so far deliberately tried to create sounds that are not readily classifiable as coming from an easily identifiable source, by altering the way sound objects are ‘played’, or by building pukka ‘instruments’.
Indeed, the augmentation of spatial and timbral characteristics as explained above might help in intensifying the acousmatic effect.

The idea is that, when exposed to it long enough, the listener might begin to concentrate purely on the sonic features of any given sound, in the process being less and less concerned with questions of cause and the implications and connotations thereof.

This alludes to what Chion calls “Reduced Listening” (Chion 1994, p.29), and we are hoping to derive from it insight into how sound characteristics, as opposed to their source and meaning, affect the emotional state of the listener. An interesting sub-aspect of this will be to find out whether it is actually possible to deliberately cause a listener to enter a state of Reduced Listening.

However, this would likely not be discernible from the emotional response alone, but would have to be investigated by having the performer verbally describe the listening experience.

3. Concluding remarks

The techniques explained above will naturally have an effect on the performer’s sensory perception, simply by way of him/her being exposed to sensory input.

How they specifically affect his/her emotional state is what we are looking to find out about, and represent. Having said that, it needs to be taken into consideration that factors other than sound will affect the performer’s emotional state. Even just being deprived of sight will have a considerable effect, more so when combined with auditory input, which is likely to create – at least initially – a feeling of unease, as no visual confirmation of sound sources is possible. (Cp. Connor in: Coyne 2010, p.55)

Still, we are trying to obtain as much information as possible about the performer’s emotional state when exposed to sound, which will on the one hand enrich the visual part of the installation, and on the other hopefully provide us with usable data and experience for future work.

Word count: 964
Reference list:

CHION, M. (1994) Audio-Vision: Sound on Screen. Edited and translated by C. Gorbman, with a foreword by W. Murch. Columbia University Press, New York.
COYNE, R. (2010) The tuning of place: sociable spaces and pervasive digital media, The MIT Press Cambridge, Massachusetts, London.
FARNELL, A. (2010) Designing Sound, The MIT Press Cambridge, Massachusetts, London.
RAFFASEDER, H. (2010) Audiodesign, Carl Hanser Verlag, München.
SCHAEFFER, P. (2004) Acousmatics. In: Cox, C. & Warner, D. (eds.), Audio Culture – Readings in Modern Music, Bloomsbury Academic, New York, London, pp. 76-81.

Music and Emotion – Submission 1 Essay

The purpose of this year’s Brain Drain DMSP project is to use electroencephalography (EEG) readings to monitor an individual’s emotional reaction to stimuli – the stimuli in this case being sound. An EEG headset will measure brain activity, which is decoded into numerical values between 0 and 1 for four separate moods: engagement, meditation, excitement, and frustration. These ‘mood’ readings are based on the strength of the alpha, beta, theta, and delta brainwaves. The numerical values will then be used to control visual output, hopefully providing insight into the individual’s emotional response to sound. The sound will be ‘composed’ in real time, with a level of user control given to the audience. The conclusion of the project will be an interactive installation combining each of these elements, aiming to create an immersive and enjoyable, yet informative, experience.
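As an illustration of how the four mood channels might feed the visual output, here is a minimal sketch in plain Java (the installation itself uses Processing). The four channel names follow the project; the helper `dominantMood` and the sample values are assumptions for illustration only:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MoodReading {
    // Pick the strongest of the four 0-1 mood channels, which the
    // visualization could then use to select its dominant motion.
    public static String dominantMood(Map<String, Double> moods) {
        String best = null;
        double bestValue = -1.0;
        for (Map.Entry<String, Double> e : moods.entrySet()) {
            if (e.getValue() > bestValue) {
                bestValue = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> moods = new LinkedHashMap<>();
        moods.put("engagement", 0.42);
        moods.put("meditation", 0.10);
        moods.put("excitement", 0.77);
        moods.put("frustration", 0.25);
        System.out.println(dominantMood(moods)); // prints "excitement"
    }
}
```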

This report will discuss the main theoretical concepts underlying the project, with a focus on sound and its intended emotional effect on the user. Decision-making regarding sound will be supported by studies in the field linking music/sound and emotional/cognitive response. Installation setting and audience participation will also be discussed, considering a way to include an element of user-control that allows for freedom and enjoyment for the audience, but ensures an interesting and effective installation.

“All musical emotions occur in a complex interplay between the listener, the music, and the situation.” (I. Deliège, p. 118)

Before analysing sound and how it may be used to evoke emotion, consider the point made in the quote above. Emotional response to music or sound cannot simply be thought of as the direct result of a specific timbre, phrase, or sequence of notes, but as the product of a range of factors. Therefore, to make the stimulus effective in this project, looking beyond the sound itself is essential. Although sound is the main stimulus for the performer, the location and audience participation may play as vital a role.

The size and shape of the performance space will determine how the sound travels and reverberates. Size and shape, along with building materials, determine the dampening/attenuation of specific frequencies within the audible range, due to standing waves formed between hard, parallel surfaces and absorption by softer surfaces. Audience participation may interfere with the sound or distract the performer, preventing meditation and engagement or evoking frustration. “A significant proportion (approximately 40%) of musical emotion episodes seem to occur when the listener is alone” (I. Deliège, p. 119), perhaps suggesting that separating the audience and the performer may be the best way to obtain a strong emotional response in the installation. Deliège also suggests that “musical emotion episodes are most prevalent in the evening” and “more frequent during weekend days than weekdays”, highlighting time as another important factor in perception and emotional response to sound.
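The standing waves mentioned above occur at predictable frequencies: between two hard, parallel walls a distance L apart, the axial room modes fall at f_n = n·c/(2L). A small sketch in plain Java (the 5 m wall-to-wall distance is an assumed example, not a measurement of the actual space):

```java
public class RoomModes {
    static final double SPEED_OF_SOUND = 343.0; // m/s at roughly 20 °C

    // Axial standing-wave (room mode) frequency between two parallel
    // walls a distance L apart: f_n = n * c / (2 * L)
    public static double axialMode(double lengthMetres, int n) {
        return n * SPEED_OF_SOUND / (2.0 * lengthMetres);
    }

    public static void main(String[] args) {
        // First three axial modes for a 5 m wall-to-wall distance
        for (int n = 1; n <= 3; n++) {
            System.out.printf("mode %d: %.1f Hz%n", n, axialMode(5.0, n));
        }
    }
}
```

Frequencies near these modes will be boosted or cancelled depending on listener position, which is one concrete way the room shapes which sonic characteristics actually reach the performer.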

Undeniably, we must tread carefully when designing sound with the intention of inducing a reaction in a setting that has largely been undefined. This said, there are a number of sonic features that we can look to emphasize in order to lead the performer towards a certain mind state.

“Changes in basic acoustic attributes such as loudness, tempo, and pitch height can give rise to dramatic changes in arousal” (W.F. Thompson, p. 124)

Thompson’s observation, along with Juslin and Västfjäll’s BRECVEM model, suggests that surprise, or a sudden change in sonic attributes, is a valuable tool for producing abrupt mood changes in the user, allowing for a more dramatic, or less stagnant, visual output in the case of this project. Providing opportunity for fluctuations in sonic characteristics such as rhythm, timbre, dynamics, and pitch would help ensure similar fluctuations in the EEG data. The next thing to consider is how to restrict the entire soundscape, ensuring that individual characteristics are not lost in a sea of sound. This is where setting and audio interface design come into play.

Clearly, the interface design will play a vital role in the mood management of both the performer and the audience, but the setting of the event must be chosen and arranged with as much consideration. The way the sound objects are positioned around the room, the lighting, the distance between the audience and the objects, and the location of the performer, amongst other attributes, are of vital importance in shaping the right mood for all involved in the exhibition. Atmosphere plays a major role, and being in an unfamiliar location surrounded by strangers is likely to have an effect on the audience’s willingness to interact freely with the sound objects.

The idea of arranging the performance space as if it were a living room has been raised amongst the group. This could fit in with some of the objects already being used (piano, turntable, speakers, laptop, tables), allowing for a more inviting and relaxing atmosphere. The introduction of lamps for illumination of the individual sound objects would also tie in with this theme. The individual illumination of objects in a dark room could help in attracting members of the audience to interact with these objects; something that required some encouragement during testing in a cramped, well-lit room.

Aside from all that has been mentioned, there is a range of design issues that play a pivotal role in the success of the installation, the visualization being one of the main aesthetic concerns. The visuals must be impressive on their own, but also correlate with the audio. They must be visibly linked to the output sound, not only to connect these main aspects of the installation but to provide gratification to the audience for their interaction. Providing a visible link between their actions and the sonic and visual output is the most likely way to evoke a sense of curiosity and excitement. This is seen as the best way to make the entire installation flow fluently, providing a truly immersive environment for both the performer and the audience.


I. Deliège & J.W. Davidson, Music and the Mind. Oxford University Press, 2011.

S. Feld & K. Basso, Senses of Place. School of American Research Press, 1996.

S. Feld & C. Keil, Music Grooves. Fenestra Books, 2005.

Y.-H. Yang & H.H. Chen, Music Emotion Recognition. CRC Press, 2011.

D. Howes, Empire of the Senses: The Sensual Culture Reader. Berg Publishers, 2004.

W.F. Thompson, Music, Thought and Feeling: Understanding the Psychology of Music. OUP USA, 2008.

Brain as a language in the future


After the Tower of Babel
One of the most ambitious approaches to communication without words and languages is messaging extraterrestrial life. In 1977, NASA launched the Voyager Golden Record, a symbolic message to space. This enigmatic disk contains more than 100 symbolic pictures of human beings, animals and human anatomy, as well as multilingual greetings and natural sounds. As humans did not know the communication style of the addressee, they prepared non-literal messages. Most communication, on the other hand, is composed of languages and words. People use various communication tools such as Facebook chat and Skype video messages, and for these messages there is a significant limiting condition: the language barrier. However, there are various types of communication using different senses, such as odor and temperature (Classen, 2005). This installation idea focuses on wordless communication among people in a cerebral and synesthetic way.

Concepts of the installation
The narrative of the installation is based on three concepts. The first is “brain as a language”: in the future, human brains will be able to communicate with each other without words. Science fiction novelists such as Arthur C. Clarke, Robert A. Heinlein and Stanisław Lem have proposed future humans or extraterrestrial lives who can share their emotions and thoughts without uttering a word. The second concept is “coding and decoding in non-oral communication”: for communication between brains, it is essential to define a way of coding and decoding messages. The last idea is “synesthesia”: the brain behaves as a switch between different senses. As invisible feedback from the brain, unlike the five senses, has no receiver, the installation needs to convert it into cognitive stimuli.

This one-way communication transfers messages from previous participants to the next, as long as there is an audience. It is similar to a game of telephone. Some messages would convey the same emotions, and others would evoke totally different ones. Brains continue to transfer messages to other brains, changing them along the way.

Step 1 : Searching for stimuli
The first exploration starts with searching for stimuli which evoke shareable emotions or meanings. For instance, scratching sounds on a blackboard tend to irritate people, while bell sounds in temples seem to make them meditate. All input and output are saved in order to clarify the relationship between specific external stimuli and emotions. To achieve this, research on non-oral communication might be useful.

Communication without language
Throughout human history, people have tried to communicate with each other without languages. Morse code represents the alphabet using just two sounds. Lighthouses have communicated with ships using lights. Aboriginal Australians have “songlines” or “dreaming tracks”, routes relating to the creators of creatures. Songlines are a system for travelling across the harsh Australian wilderness without the risk of getting lost. Songs describing the landmarks, dangers and characteristics of each path have been passed down by word of mouth. Sound pitch is one of the most significant elements of songlines, because specific pitches evoke scary or frustrating feelings. It is hard for strangers to understand such wordless and enigmatic messages. However, these signals evoke feelings such as briskness and spookiness even in those who do not know their actual meanings. If there are stimuli which allow people to share the same or similar feelings, they will be a wordless word.

Relationships between specific external stimuli and emotions are recorded in a comparison table, used in Step 3 to translate feedback into the next stimuli.

Step 2 : Representation of feedback from the brain
The next step is the conversion of brain reactions into cognitive output such as visualization and sounds. The EEG headset reads the strength of four different emotions: excitement, engagement, frustration and meditation. To demonstrate these emotions effectively, the installation draws on research into synesthesia.

Synesthesia world
External stimuli such as sound, visuals and smell are normally applied to specific sensory receptors. However, some people’s brains route stimuli to a different modality and cognitive pathway than the one through which they were received. This neurological phenomenon, called synesthesia, shows that sensory discordance can give people an unusually impressive experience. Daniel Tammet, well known as an autistic savant and “the Brain Man”, reports feeling color, emotion and personality when he sees words and numbers.

Even though synesthesia is a controversial phenomenon in terms of verification difficulties, the basic idea remains that one stimulus can provoke unusual sensory reactions. For instance, Wassily Kandinsky painted musical compositions on canvas. This feedback needs to be cognitive and suggestive for the rest of the audience (visualization and sounds might be feasible). Furthermore, the kind of emotion and the size of its value are significant for the next step.

Step 3 : Coding the brain’s feedback into stimuli
The final step is coding the brain’s feedback into stimuli for the next participants. It is similar to translation between different languages. In this step, the installation team uses the comparison table of stimuli and emotions. According to the table, feedback messages are translated into stimuli for the next audience. As the table is based on hypotheses, it needs to be modified and improved continuously over numerous experiments. After translation, the new message made from the previous brain’s feedback will be transferred to the next audience.
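The translation step can be pictured as a simple lookup, sketched here in plain Java. The two example entries (blackboard scratch, temple bell) come from Step 1; the class name, the fallback stimulus and the exact mapping are assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ComparisonTable {
    // Hypothetical table entries: each recorded emotion maps to the
    // stimulus hypothesised to evoke it in the next participant.
    private final Map<String, String> emotionToStimulus = new HashMap<>();

    public ComparisonTable() {
        emotionToStimulus.put("frustration", "blackboard scratch");
        emotionToStimulus.put("meditation", "temple bell");
    }

    // Translate the strongest recorded emotion into the next stimulus;
    // emotions not yet in the table fall back to a neutral default.
    public String translate(String emotion) {
        return emotionToStimulus.getOrDefault(emotion, "white noise");
    }

    public static void main(String[] args) {
        ComparisonTable table = new ComparisonTable();
        System.out.println(table.translate("meditation")); // prints "temple bell"
    }
}
```

Because the table is hypothetical, each experiment can update the map in place, matching the continuous modification described above.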


  • Chatwin, B. 1987. The Songlines. London. Jonathan Cape.
  • Classen, C. 2005. McLuhan in the rainforest: the sensory world of oral cultures. In D. Howes (ed.), Empire of the Senses: The Sensual Culture Reader: 147-163. Oxford. Berg.
  • Harrison, J. 2001. Synaesthesia: The Strangest Thing. Oxford. Oxford University Press.
  • Ingold, T. 2000. The Perception of the Environment. London and New York. Routledge.
  • NASA. 2014. Golden Record [Online] Available from: [Accessed: 26th February 2014].

LED/6 V bulbs serial code (Processing + Arduino)


Processing (calculate maximum value and send to Arduino):

import processing.serial.*;
import oscP5.*;
import netP5.*;

Serial port;
OscP5 oscP5;
float meditation = 0;
float frustration = 0;
float engagement = 0;
float excitement = 0;
boolean start = false;
float max = 0;

void setup() {
  // size(256, 150);
  oscP5 = new OscP5(this, 7400);
  println("Available serial ports:");
  println(Serial.list());
  port = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // No OSC data yet: all four emotion values are still at their defaults
  if (meditation == 0 && frustration == 0 && engagement == 0 && excitement == 0) {
    port.write(0); // LED fading from corner to center
    start = true;
  } else if (start == true) {
    port.write(1); // LED all off for 3 sec
    start = false;
  }
}

Visualization prototype: Geometric visualization




This Processing code focuses on generating dynamic, quick transitions between emotions.


A significant issue with the previous code was that the visualization stalled easily if a participant felt one emotion for a long time. In this code, the boxes, which are the main objects in the installation, transform quickly in proportion to value size. The values are used not only to define the biggest value but also to change the objects’ size, speed and range of vibration. I also prepared a threshold value for each motion: if an emotion value exceeds its threshold, the corresponding motion is exaggerated.

  • Excitement: generating multiple boxes in proportion to value size
  • Engagement: changing the objects’ rotation speed in proportion to value size
  • Frustration: changing the range of vibration in proportion to value size
  • Meditation: changing the objects’ opacity
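The threshold behaviour behind these four mappings can be sketched as follows, in plain Java rather than Processing; the threshold of 0.7 and the doubling factor are assumptions for illustration, not the values used in the sketch itself:

```java
public class MotionMapping {
    // Hypothetical threshold above which a motion is exaggerated.
    static final double THRESHOLD = 0.7;

    // Map an emotion value (0-1) to a motion magnitude: proportional
    // to the value, but doubled once the value exceeds the threshold.
    public static double motionMagnitude(double value) {
        double magnitude = value;
        if (value > THRESHOLD) {
            magnitude *= 2.0; // exaggerate the motion above the threshold
        }
        return magnitude;
    }

    public static void main(String[] args) {
        System.out.println(motionMagnitude(0.5)); // below threshold: unchanged
        System.out.println(motionMagnitude(0.8)); // above threshold: doubled
    }
}
```

The same function can drive box count, rotation speed, vibration range or opacity, one instance per emotion channel.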

Feedback: quick transition, but complicated

The transition became much more responsive and quicker than in the previous version. However, the visualization seemed complicated because multiple motions appeared at the same time. If all motion values exceeded their thresholds, opaque boxes were rotating, vibrating and multiplying simultaneously. The audience could see quick transitions, but it was hard for them to recognize which motion was dominant.

Further improvement

We removed the previous logic that defines the biggest value from the code. However, we need to put it back into this new version. The next version will be simpler.