Brain Drain First Submission Report
s1204053 Zechao Li
In this project, we plan to use an EEG headset as an intermediary between a stimulus source and a visualization. The audience can interact with a blindfolded performer by choosing different kinds of sounds that affect the performer's emotional state. We then visualize the performer's real-time emotion values, gathered by the EEG headset, and show them to the audience.
In the original design, we considered two options for the installation space. The first uses two rooms: room A houses the input equipment and the performer wearing the EEG headset, while room B is where the audience sits, interacting with the performer through a control panel and viewing a visualization of the performer's emotional changes. The second option uses a single room, placing the audience and the performer together to allow more direct interaction.
The input source is a set of different sounds, which affect the performer's emotions to different degrees and in different directions. These sounds are produced with equipment such as a piano, a bowl on a turntable, and a wooden fish. The equipment is connected to and controlled by an Arduino. At this stage, the rotary knobs and buttons are all operated by our team members, but in the final stage they will be connected to a control panel operated by the audience, together with sensors triggered by the audience.
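As a minimal sketch of how a rotary knob could select among the sounds, the following Python fragment maps a 10-bit Arduino ADC reading (0-1023) to a sound index. The sound labels and the 10-bit range are our assumptions for illustration, not part of the installation code.

```python
def knob_to_sound(adc_value, sounds):
    """Map a 10-bit ADC reading (0-1023) to one of the available sounds."""
    if not 0 <= adc_value <= 1023:
        raise ValueError("ADC reading must be in 0..1023")
    # Divide the 0..1023 range into equal bands, one per sound.
    index = min(adc_value * len(sounds) // 1024, len(sounds) - 1)
    return sounds[index]

sounds = ["piano", "bowl", "wooden_fish"]  # hypothetical labels
print(knob_to_sound(0, sounds))     # lowest band -> piano
print(knob_to_sound(1023, sounds))  # highest band -> wooden_fish
```

In the final stage, the same mapping would simply take its input from the audience-facing control panel instead of a team member's knob.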
Different sounds produce different changes in the performer's emotions. To figure out which sounds are most effective at triggering changes in meditation, frustration, engagement, and excitement, we surveyed our group members and several other people about which kinds of sounds trigger specific emotions in them. This survey helped the sound team classify the sounds and decide the order in which they should be played.
Performer with EEG Headset
An EEG headset is used to receive signals from the performer's brain. The headset captures 16 brain-wave values, which are converted into four emotion values ranging from 0 to 1. To make the headset work, the USB dongle must be connected to the computer running the Processing sketch, and two pieces of software, the Emotiv Developer Control Panel and Mind Your OSCs, must keep running throughout the installation. The first receives the 16 brain-wave values from the EEG headset; the second converts the 16 values into the 4 emotion values and sends them to the listener event in the Processing sketch.
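On the Processing side, the listener ultimately receives four floats in [0, 1]. Because live EEG-derived values are noisy, a small smoothing step before driving the animation can help; the sketch below illustrates this in Python rather than Processing, and the smoothing factor alpha is our assumed tuning constant, not part of the Emotiv pipeline.

```python
class EmotionSmoother:
    """Clamp and exponentially smooth incoming emotion values in [0, 1]."""

    def __init__(self, alpha=0.2):  # alpha: assumed tuning constant
        self.alpha = alpha
        self.state = None

    def update(self, values):
        clamped = [min(max(v, 0.0), 1.0) for v in values]
        if self.state is None:
            self.state = clamped  # first frame passes through unchanged
        else:
            self.state = [(1 - self.alpha) * s + self.alpha * v
                          for s, v in zip(self.state, clamped)]
        return self.state

smoother = EmotionSmoother(alpha=0.5)
print(smoother.update([0.0, 1.0, 0.5, 0.2]))  # first frame: unchanged
print(smoother.update([1.0, 1.0, 0.5, 0.2]))  # second frame: moves halfway
```

The equivalent Processing code would perform the same update inside the OSC listener event before the values reach the drawing loop.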
As for the visualization, the visualization team works on receiving the 4 emotion values from the EEG headset and using them as parameters that drive the animations shown on the wall.
In the beginning, each member of the visualization team tried to create animations in a different form. For example, we used several lines, made up of points, to display the 16 real-time brain-wave values dynamically. One member also tried using triangles to show the 16 parameters.
At the second stage, the visualization team made two different visualizations. In the first, we divided the black background into four parts, each containing a fluid emitter of a different color. All emitters start from the center of the screen, and each one represents one of the four emotion parameters: an emitter staying close to the center means that emotion's value is low, while an emitter drifting far from the center means it is high. The speed at which smoke is produced is also tied to the emotion value. However, because the fluid library was built by a third party and the result looked very similar to the library's own example even though most of the code was new, this plan was rejected. The second visualization took the form of flickering dynamic triangles, using different colors for different parameters, but the shapes stayed only at the edges of the screen, which did not give the audience a good experience.
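The emitter placement rule described above can be sketched as follows, in Python rather than the original Processing code; the maximum radius and emission rate are assumed constants chosen for illustration.

```python
import math

MAX_RADIUS = 300.0  # assumed maximum distance from screen center, in pixels
MAX_RATE = 60.0     # assumed maximum particles emitted per second

def emitter_params(emotion_value, angle):
    """Place one fluid emitter on its quadrant's axis: a low emotion value
    keeps it near the center, a high value pushes it outward and
    increases its emission rate."""
    v = min(max(emotion_value, 0.0), 1.0)
    r = v * MAX_RADIUS
    x = r * math.cos(angle)
    y = r * math.sin(angle)
    rate = v * MAX_RATE
    return x, y, rate

print(emitter_params(0.0, 0.0))  # stays at the center, no emission
print(emitter_params(1.0, 0.0))  # far from the center, full rate
```

Each of the four emitters would call this with its own angle (one per quadrant) and its own emotion value.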
In the first two stages, the plan was to show the visualization animation on a screen. During experiments, however, we found that the result on a screen was not very good and did not make the audience feel they were participating in the visualization. We therefore decided to use a projector to project the animation directly onto the wall, and to use abstract shapes and a simple concept so the audience can easily understand what is happening when the shapes change.
We then came up with two ideas for improvement. The first is to develop dynamic abstract shapes, such as a nerve-system-like structure, as the visualization on the wall, using explosion and collapse effects of the shape to show the degree of value change. For this part, we developed a hair-ball effect and a cube-array effect. For instance, when frustration is the highest of the four emotion values, the hair ball keeps flickering its size slightly; when excitement is the highest, it radiates outward from the center. The cube array changes the number, shape, and flicker frequency of its cubes in 3D space. The second idea is to use LED lights, changing their color or flicker frequency to alter the room atmosphere so the audience can feel the mood corresponding to the performer's feelings. To test the feasibility of this idea, we connected an LED stage light to a DMX box and used an Arduino to send commands to the box. However, the result looked too much like a stage effect, and this brand of DMX box could not be connected to the laptop's COM port, which Processing and the Arduino use to send commands, so we abandoned this experiment. We then experimented with a small LED array showing flickering and other effects; the result looked better than the LED stage light, so we continued the experiment using 6 V bulbs instead of small LEDs to make the effect clearer and more vivid.
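The "highest value wins" rule described above, where whichever emotion value is largest selects the behavior, can be sketched like this (a Python illustration; the effect names are hypothetical labels, not taken from the installation code):

```python
EMOTIONS = ["meditation", "frustration", "engagement", "excitement"]

# Hypothetical mapping from the dominant emotion to a hair-ball behavior.
EFFECTS = {
    "meditation": "slow_drift",
    "frustration": "small_flicker",   # slight size flickering
    "engagement": "steady_pulse",
    "excitement": "radial_burst",     # radials coming from the center
}

def pick_effect(values):
    """Choose the behavior driven by whichever emotion value is highest."""
    if len(values) != len(EMOTIONS):
        raise ValueError("expected one value per emotion")
    dominant = EMOTIONS[values.index(max(values))]
    return dominant, EFFECTS[dominant]

print(pick_effect([0.1, 0.8, 0.3, 0.2]))  # frustration dominates
```

The cube-array effect could reuse the same selection step, with the dominant emotion instead controlling cube count, shape, and flicker frequency.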
In the future, we will look for a better way to visualize the emotion values from the EEG headset, one that is more understandable and has greater visual impact. We will also set up the control panel so the audience can control the sounds and interact more closely with the performer. In addition, we will gather more feedback on this project to help us improve the system.