Visual effects - Marina Papagianni

Marina Papagianni

Brain Drain Project

Submission 1

Individual assessment:

My focus in this project is on the visual effects that arise from the artist’s brain activity while he or she is in a certain emotive state. In my opinion, the visual metaphor is a very important element in an installation like this, whose aim is to demonstrate the activity of the brain: an organ that is not externally visible and whose function is very difficult to understand without scientific tools and technologies. Brain activity can be visualised in many ways, and the emotive state of the performer is a very good territory for artistic experimentation.

To create a visual metaphor we first need to understand how the brain’s activity is translated by mobile electroencephalography (EEG) technology, and of course how this technology works. The Emotiv EPOC headset is a multi-channel, wireless device used for human–computer interaction. It uses a set of 14 sensors plus 2 references to tune into the electrical signals produced by the brain and detect the user’s thoughts, feelings and expressions in real time. When the brain is in a specific emotive state, the neurons communicate with each other, exchanging signals that produce a significant amount of electrical activity. This activity can be detected by the technology, which measures electrical waves at specific areas of the scalp. The brain waves are known as Beta, Alpha, Theta and Delta, and each of them is emitted when we are in a particular state. The software that accompanies the device translates these waves visually into an electroencephalogram. In addition, it connects to the Processing software, which is our main tool for visualising the brain activity in a more creative and aesthetic way.
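
To make this connection concrete, the minimal sketch below shows one hypothetical way the four emotion values could be received in Processing using the oscP5 library. It is an illustration only: the listening port (12000) and the OSC address names are assumptions that depend entirely on how the headset data is bridged into Processing.

    // Minimal Processing sketch: receive four emotion values over OSC (oscP5 library).
    // Port and address patterns are assumed, not taken from the Emotiv documentation.
    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    float excitement, engagement, frustration, meditation;  // assumed range 0..1

    void setup() {
      size(800, 600);
      osc = new OscP5(this, 12000);  // listen on port 12000 (assumption)
    }

    void oscEvent(OscMessage m) {
      if (m.checkAddrPattern("/emotiv/excitement"))  excitement  = m.get(0).floatValue();
      if (m.checkAddrPattern("/emotiv/engagement"))  engagement  = m.get(0).floatValue();
      if (m.checkAddrPattern("/emotiv/frustration")) frustration = m.get(0).floatValue();
      if (m.checkAddrPattern("/emotiv/meditation"))  meditation  = m.get(0).floatValue();
    }

    void draw() {
      background(0);
      // the four values are now updated in real time and can drive any visual behaviour
    }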

With this knowledge in mind, it is clear that the visual metaphor will consist of four different states, with transitions that make the performer’s emotive state obvious in real time. The visualisation will therefore have to change in some way whenever the performer stops feeling a certain way; for example, if he is excited at the beginning and then moves from excitement to engagement, the visualisation will change as well.

The change of emotions can be represented in several ways. The main effect is the behaviour of the object or objects displayed on screen, which includes rotation, scaling (expanding and shrinking), vibration and change of location. This behaviour can be controlled by variables such as displacement, speed, velocity and acceleration. Another option is for the number of objects to increase or decrease according to the value of a specific emotion. Colour is yet another effect that can display the state change quite clearly, although in some cases, for example with a colour-blind person in the audience, it might not be effective.
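
The sketch below illustrates this idea in Processing: a single emotion value drives rotation speed, scale and vibration at the same time. The value is hard-coded here and the mapping ranges are illustrative guesses, not the parameters we finally used.

    // One emotion value (0..1) mapped to three behaviours: rotation, scaling, vibration.
    float excitement = 0.5;  // would normally come from the headset data

    void setup() {
      size(800, 600);
      rectMode(CENTER);
    }

    void draw() {
      background(0);
      translate(width/2, height/2);

      float spin   = map(excitement, 0, 1, 0.005, 0.2);  // rotation speed
      float side   = map(excitement, 0, 1, 40, 160);     // expanding/shrinking
      float jitter = map(excitement, 0, 1, 0, 15);       // vibration amplitude

      rotate(frameCount * spin);
      translate(random(-jitter, jitter), random(-jitter, jitter));
      noFill();
      stroke(255);
      rect(0, 0, side, side);
    }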

Even though a transition can help us understand the change of emotive state, it is important to choose an effect that is representative and suitable for each emotion. For example, a slow and steady motion of an object is not very suitable for emotions like excitement or frustration; most of us would associate such feelings with an intense effect, quick motion and a specific range of colours. In the case of colours especially, there are many theories that can guide us towards the most representative choice. If we look at Kandinsky’s colour theory, for example, we have the following options for the emotions we are working with:

  • Excitement: Red

  • Engagement: Green

  • Frustration: Yellow

  • Meditation: Blue

Taking this theory as a reference point, we used these colours in some of our visualisations to indicate the emotive change. In our first attempts to visualise the brain activity we used mainly colour to show the transition from one state to another, while the motion stayed the same for every state. In the end, however, we decided to focus more on the behaviour of our visualisation rather than on the colour.
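
The sketch below is a minimal, hypothetical version of that first colour-driven approach, assuming the Kandinsky mapping above: lerpColor() fades the background from the colour of the previous dominant emotion to the colour of the current one.

    // Colour transition between two emotive states (illustrative values only).
    color excitementC  = #FF0000;  // red
    color engagementC  = #00C800;  // green
    color frustrationC = #FFDC00;  // yellow
    color meditationC  = #0050FF;  // blue

    float blend = 0;               // 0 = previous state, 1 = current state
    color fromC = excitementC;     // e.g. the performer was excited...
    color toC   = engagementC;     // ...and is now moving towards engagement

    void setup() {
      size(800, 600);
    }

    void draw() {
      blend = min(blend + 0.01, 1);              // fade over roughly 100 frames
      background(lerpColor(fromC, toC, blend));  // gradual colour transition
    }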

Another thing to consider in a visual metaphor is the shape, and whether it will be symbolic or abstract. Since the project is based on brain activity, we were inspired to create visualisations that flow, that include objects expanding and shrinking, and that are close to the neural activity of the brain. At the beginning we wanted to create something abstract, because we thought that simple geometries would give an effect that was not very interesting. The result was interesting and pleasant to the eye; however, it did not show the change between the emotions very clearly. In many cases the emotions were mixed together and the transitions were not legible at all. After that we decided to work on changing the motion and behaviour of objects with simpler geometries.

Finally, another important aspect of the visualisation is its dimensionality. Throughout our project we worked both in 2D and in 3D. In my opinion, a visualisation in three dimensions is definitely more impressive and realistic, and it gives the creator more potential to enrich the behaviour of the objects. However, it is more difficult to create truly unique and abstract geometries in 3D.

In general, working on visualisations is a very interesting and creative process. First of all, it gives the creator the chance to explore theories of colour, shape and many other areas of art theory. Secondly, it is a good opportunity to learn more about psychology and how to express yourself through a visual metaphor. It also gives the audience the chance to connect with the performer and literally see what is going on inside his head. In the future I would like to create a visualisation based on original artwork from my personal portfolio, to give the installation a more personal character. I also expect to see my visualisation accompanied by sound effects that are likewise influenced by the performer’s brain activity.

References:

 emotiv.com/epoc/features.php

facstaff.gpc.edu/~pgore/PhysicalScience/motion-ofobjects.html

dmsp.digital.eca.ed.ac.uk/blog/braindrain2014/starting-point-and-inspiration/

lettersfrommunich.wikispaces.com/Kandinsky%27s+Color+Theory

HOWES, D. (ed.) Empire of the Senses: The Sensual Culture Reader, Berg, Oxford and New York.

Essay for Submission 1 – “Aspects of sound perception in the absence of visual stimuli in relation to the ‘Brain Drain’ installation”

1. General prerequisites of the installation

The basic principle of the installation is to use sound as an input into a performer’s brain, picking up his reactions by means of a portable EEG device – which translates brainwaves into four channels of emotional states – and then to output these states as projected visuals.

In order to obtain the least amount of distortion in the brain data – where the performer would otherwise be distracted by stimuli other than sound – and so as not to feed visual impressions back into the brain, the person wearing the headset needs to be placed in a fairly dark room and be blindfolded.
By nature, this alludes to the principles of acousmatics (Cp. Schaeffer 2004, p.76ff), and in turn to Chion’s “Listening Modes” (Chion 1994, p.25ff).

Aspects of both phenomena need to be investigated in the following, as do general observations relating to auditory perception and cognition, and their relevance to the installation.

2. Specifics of auditory perception and how they relate to the installation setup

In principle, our auditory sense allows us to perceive signals located at any point on or in a sphere, i.e. we are able to hear sound on three axes. Further, it provides us with the ability to gauge the distance of a sound source. Having said that, there are various limitations to each of these capabilities, especially when we are deprived of simultaneous additional visual information. (Cp. Raffaseder 2010, p.125/Farnell 2010, p.81)

Generally, our ears provide us with more precise information on the horizontal x-axis (left-right), and less precise information on the y- (above-below) and z- (front-back) axes. (Cp. Raffaseder 2010, p.125) Head movement is employed to augment the process of spatial localisation. (ibid)

Judging distance fundamentally works by assessing loudness or sound volume, and to a lesser extent frequency content. However, the absence of comparative values, based on our expectations of how loud a specific sound should be (Cp. Farnell 2010, p.80), can easily confuse this judgement. Consequently, not being able to gather any visual information about sound sources further distorts our assessment of their distance.
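
To put a rough number on this (a textbook free-field idealisation, not a measurement of the installation space): for a point source, the level heard at distance d_2 relates to that at distance d_1 as

    L_2 = L_1 - 20 \log_{10}(d_2 / d_1)

so a source twice as far away arrives about 6 dB quieter. Without an expectation of how loud the source “should” be, “quiet and near” is therefore hard to distinguish from “loud and far”.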

Of equal importance is the fact that we are able (and constantly use this ability) to gather a multitude of further information about sound sources: we make assessments of their size, material, and even their cause and thus their situational context, deriving meaning from them in the process. (Cp. Raffaseder 2010, p.46 ff) In effect, a large part of our everyday listening practice consists of what Chion famously classified as “Causal Listening”. (Chion 1994, p.25ff)

Knowing about these phenomena of course provides us with the possibility to consciously influence the listener’s perception in an installative context such as the one we are working on.
Possible means of doing this, and routes we are exploring, include creating various sounds that are in themselves limited to relatively narrow frequency spectra and pitch ranges – while between them covering a wider range of frequencies, pitches and timbres – and experimenting with spatial placement as well as distance to the performer.
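
As a loose illustration of the “narrow spectra, spread placement” idea (a hypothetical sketch using Processing’s sound library, with arbitrary frequencies and pan positions, rather than a description of our actual sound-design setup):

    // Two spectrally narrow tones at widely spaced pitches, panned apart.
    import processing.sound.*;

    SinOsc low, high;

    void setup() {
      size(200, 200);
      low  = new SinOsc(this);
      high = new SinOsc(this);

      low.freq(110);    // narrow, low-pitched component
      low.pan(-0.8);    // placed towards the listener's left
      low.amp(0.3);
      low.play();

      high.freq(1760);  // narrow, high-pitched component
      high.pan(0.8);    // placed towards the listener's right
      high.amp(0.2);
      high.play();
    }

    void draw() {
      background(0);
    }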

Further, employing surface loudspeakers provides us with the opportunity to change the reverberation characteristics of a given space, by means of adding material reverberations to those prevalent in the room where the installation takes place. A practical example would be the application of such speakers to metal sheets, which give an impression of a fairly large space, even when set up in a small room.

Another area we are exploring is that of acousmatics, which by definition is very much a core element of the installation by virtue of the performer being blindfolded, and thus only able to hear but not see. (Cp. Schaeffer 2004, p.77)

Building on this concept, we have so far deliberately tried to create sounds that are not readily classifiable as coming from an easily identifiable source, by altering the way sound objects are ‘played’, or by building pukka ‘instruments’.
Indeed, the augmentation of spatial and timbral characteristics as explained above might help in intensifying the acousmatic effect.

The idea is that, when exposed to it long enough, the listener might begin to concentrate purely on the sonic features of any given sound, in the process being less and less concerned with questions of cause and the implications and connotations thereof.

This alludes to what Chion calls “Reduced Listening” (Chion 1994, p.29), and we are hoping to derive from it insight into how sound characteristics, as opposed to their source and meaning, affect the emotional state of the listener. An interesting sub-aspect of this will be to find out whether it is actually possible to deliberately cause a listener to enter a state of Reduced Listening.

However, this would not likely be discernible from his emotional response but would have to be investigated by means of having the performer verbally describe the listening experience.

3. Concluding remarks

The techniques I explained above will naturally have an effect on the performer’s sensual perception, simply by way of him/her being exposed to sensual input.

How they specifically affect his/her emotional state is what we are looking to find out about, and represent. Having said that, it needs to be taken into consideration that factors other than sound will affect the performer’s emotional state – even just being deprived of sight will have a considerable effect, more so when combined with auditory input, which is likely to create – at least initially – a feeling of unease, as no visual confirmation of sound sources is possible. (Cp. Connor in: Coyne 2010, p.55)

Still, we are trying to obtain as much information as possible about the performer’s emotional state when exposed to sound, which will on the one hand enrich the visual part of the installation, on the other hand will hopefully provide us with usable data and experience for future work.

Word count: 964
Reference list:

CHION, M. (1994) Audio-Vision: Sound on Screen. Edited and translated by Claudia Gorbman; with a foreword by Walter Murch. Columbia University Press, New York.
COYNE, R. (2010) The tuning of place: sociable spaces and pervasive digital media, The MIT Press Cambridge, Massachusetts, London.
FARNELL, A. (2010) Designing Sound, The MIT Press Cambridge, Massachusetts, London.
RAFFASEDER, H. (2010) Audiodesign, Carl Hanser Verlag, München.
SCHAEFFER, P. (2004) Acousmatics. in Cox, C., Warner, D. (eds.), Audio culture – readings in modern music, Bloomsbury Academic, New York, London, pp 76-81.

Brain as a Language in the Future

Introduction

After the Tower of Babel
One of the most ambitious approaches to communication without words or language is sending messages to extraterrestrial life. In 1977, NASA launched the Voyager Golden Record, a symbolic message to space. This enigmatic disc contains more than 100 images of human beings, animals and human anatomy, together with multilingual greetings and natural sounds. Since humans did not know the communication style of the addressee, they prepared non-textual messages. Most communication, on the other hand, is composed of language and words: people use various communication tools such as Facebook chat and Skype video messages. For these messages there is a significant limiting condition, the language barrier. However, there are many types of communication that use different senses, such as odour and temperature (Classen, 2005). This installation idea focuses on wordless communication among people in a cerebral and synesthetic way.

Concepts of the installation
The narrative of the installation is based on three concepts. The first is “brain as a language”: the idea that in the future human brains will be able to communicate with each other without words. Science fiction novelists such as Sir Arthur C. Clarke, Robert A. Heinlein and Stanisław Lem have imagined future humans or extraterrestrial beings who share their emotions and thoughts without uttering a word. The second concept is “coding and decoding in non-oral communication”: for communication between brains, it is essential to define how messages are coded and decoded. The last idea is “synesthesia”, where the brain behaves as a switch between different senses. Since invisible feedback from the brain has no dedicated receiver, unlike the five senses, the installation needs to convert it into perceivable stimuli.

This one-way communication transfers messages from previous participants to the next participants for as long as there is an audience; it is similar to a game of telephone. Some messages would convey the same emotions, and others would evoke totally different ones. Brains keep passing messages on to other brains, changing those messages as they go.

Step 1: Searching for stimuli
The first exploration starts with searching for stimuli that evoke shareable emotions or meanings. For instance, scratching sounds on a blackboard tend to irritate people, while the sound of temple bells seems to induce meditation. All input and output is saved so that the relationship between specific external stimuli and emotions can be clarified. For this purpose, research on non-oral communication might be useful.

Communication without language
Throughout human history, people have tried to communicate with each other without language. Morse code represents the alphabet using just two signal lengths. Lighthouses have communicated with ships using light. Aboriginal Australians have “songlines” or “dreaming tracks”: routes that relate to the creator beings of living things. Songlines are a system for travelling across the harsh Australian wilderness without the risk of getting lost; songs describing the landmarks, dangers and characteristics of each path have been passed down by word of mouth. Pitch is one of the most significant elements of a songline, because specific pitches can evoke feelings such as fear or frustration. It is hard for strangers to understand these wordless, enigmatic messages, yet the signals still evoke feelings such as briskness or spookiness even for listeners who do not know their actual meanings. If there are stimuli that allow people to share the same or similar feelings, they can serve as a wordless word.

The relationships between specific external stimuli and emotions are recorded in a comparison table, which is used in Step 3 to translate feedback into the next stimuli.
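
A hypothetical sketch of this comparison table as a simple data structure in Processing/Java follows; the stimulus names and emotion labels are illustrative placeholders, not entries from the actual table.

    // Comparison table: external stimulus -> emotion it is hypothesised to evoke.
    import java.util.HashMap;

    HashMap<String, String> stimulusToEmotion = new HashMap<String, String>();

    void setup() {
      // working hypotheses, to be revised after each experiment
      stimulusToEmotion.put("blackboard_scratch", "frustration");
      stimulusToEmotion.put("temple_bell",        "meditation");
      stimulusToEmotion.put("rising_pitch_sweep", "excitement");
      stimulusToEmotion.put("steady_drone",       "engagement");

      println(stimulusToEmotion.get("temple_bell"));  // prints: meditation
    }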

Step 2: Representation of feedback from the brain
The next step is converting the brain’s reaction into perceivable output such as visuals and sound. The EEG headset reads the strength of four different emotions: excitement, engagement, frustration and meditation. To demonstrate these emotions effectively, the installation draws on research into synesthesia.

Synesthesia world
External stimuli such as sound, images and smell are normally processed by their own specific sensory receptors. In some people’s brains, however, a stimulus is routed to a different modality and cognitive pathway from the one that received it. This neurological phenomenon, called synesthesia, shows that such sensory crossover can produce unusually vivid experiences. Daniel Tammet, for example, who is well known as an autistic savant and “the Brain Man”, perceives colour, emotion and personality when he sees words and numbers.

Even though synesthesia is a controversial phenomenon because it is difficult to verify, the basic idea remains: one stimulus can provoke unusual sensory reactions in another modality. Wassily Kandinsky, for instance, painted musical compositions on canvas. The feedback from the brain needs to be perceivable and suggestive for the rest of the audience (visuals and sound seem the most feasible). Furthermore, the kind of emotion and the size of its value are significant for the next step.

Step 3: Coding the brain’s feedback into stimuli
The final step is coding the brain’s feedback into stimuli for the next participants; it is similar to translation between different languages. In this step the installation team uses the comparison table of stimuli and emotions: according to the table, feedback messages are translated into stimuli for the next audience. As the table is based on a hypothesis, it needs to be modified and improved continuously over numerous experiments. After translation, the new message created from the previous brain’s feedback is transferred to the next audience.
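
One hypothetical way this translation could be implemented in Processing/Java is sketched below, reusing the illustrative table from Step 1 in inverted form. The dominant-emotion rule and the stimulus names are assumptions made for the sake of the example, not the team’s specification.

    // Translate the previous participant's feedback into a stimulus for the next one.
    import java.util.HashMap;

    HashMap<String, String> emotionToStimulus = new HashMap<String, String>();

    String translateFeedback(float excitement, float engagement,
                             float frustration, float meditation) {
      // pick the dominant emotion in the feedback...
      String dominant = "excitement";
      float best = excitement;
      if (engagement  > best) { dominant = "engagement";  best = engagement; }
      if (frustration > best) { dominant = "frustration"; best = frustration; }
      if (meditation  > best) { dominant = "meditation";  best = meditation; }
      // ...and look up the stimulus to play to the next audience
      return emotionToStimulus.get(dominant);
    }

    void setup() {
      // inverse of the Step 1 table; these pairings are a working hypothesis
      emotionToStimulus.put("frustration", "blackboard_scratch");
      emotionToStimulus.put("meditation",  "temple_bell");
      emotionToStimulus.put("excitement",  "rising_pitch_sweep");
      emotionToStimulus.put("engagement",  "steady_drone");

      println(translateFeedback(0.2, 0.1, 0.7, 0.3));  // prints: blackboard_scratch
    }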

References

  • Chatwin, B. 1987. The Songlines. London. Jonathan Cape.
  • Classen, C. 2005. McLuhan in the rainforest: the sensory world of oral cultures. In D. Howes (ed.), Empire of the Senses: The Sensual Culture Reader: 147-163. Oxford. Berg.
  • Harrison, J. 2001. Synaesthesia: The Strangest Thing. Oxford. Oxford University Press.
  • Ingold, T. 2000. The Perception of the Environment. London and New York. Routledge.
  • NASA. 2014. Golden Record [Online] Available from: voyager.jpl.nasa.gov/spacecraft/sounds.html [Accessed: 26th February 2014].