The ‘Hairy Ball’ is one of the visual effects we created for the project ‘Brain Drain’; it aims to interpret the performer’s emotional state in a vivid yet plausible way. I am the main designer of this effect.
This visualization builds on the Processing example ‘Noise Sphere’ by David Pena (Ref: Location/Processing/File/Examples/Topics/Geometry/NoiseSphere), which I substantially extended and re-created with my own code.
Here is the work in progress on the design.
26 Feb 2014
Created basic visual effects for ‘Hairy Ball’
·Excitement: hairs grow randomly longer, making the ball shimmer
·Engagement: the Hairy Ball rotates 360°
·Meditation: hairs move steadily from the centre of the ball to its edge and back again
·Frustration: the Hairy Ball trembles
27 Feb 2014
Tried displaying four duplicates of the ‘Hairy Ball’ together, but eventually dropped the idea
12 March 2014
·Added an alarm function with a TV-noise effect to indicate that the headset is not working properly
·Experimented with colouring the hairs according to the emotional state, but eventually dropped the idea
·Added a fifth parameter, Boredom: the Hairy Ball stays fixed and breathes
20 March 2014
·Modified the meditation motion: sped up the movement to make the transition neat and clean
·Added a text field for displaying real-time data
27 March 2014
·Came up with the idea of projecting the ‘Hairy Ball’ visualization onto a real sphere such as a yoga ball
1 April 2014
·Combined the Processing code with the Arduino code
·Connected the Processing code with the other visuals over IP
Excitement: expanding and shrinking boxes / orange
Engagement: falling boxes / light green
Boredom: hovering box / yellow
Frustration: trembling boxes / pink
Meditation: breathing box / blue
The intensity of each motion reflects the magnitude of that emotion’s value.
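The emotion/motion/colour mapping above can be written out as a simple lookup. This is only an illustrative sketch in plain Java; the class name is made up, and the actual project does not necessarily store the mapping this way.

```java
// Illustrative lookup for the mapping listed above (names taken from the post).
public enum Emotion {
    EXCITEMENT("expanding and shrinking boxes", "orange"),
    ENGAGEMENT("falling boxes", "light green"),
    BOREDOM("hovering box", "yellow"),
    FRUSTRATION("trembling boxes", "pink"),
    MEDITATION("breathing box", "blue");

    public final String motion;
    public final String colour;

    Emotion(String motion, String colour) {
        this.motion = motion;
        this.colour = colour;
    }
}
```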
I changed the motions for engagement, boredom, and meditation because they were not very active. The motions for excitement and frustration are exaggerated in the latest visualization, which seems more successful than the previous version in terms of audience engagement.
I got the idea for the engagement motion from the links below.
I received feedback that the boredom visual is not very successful because the motion does not relate well to boredom. There is also an issue that engagement alone shows flat rectangles instead of 3D boxes; I tried to fix it, but the falling function did not work well.
This “generative Polygon” is an improved version of “geometric boxes”. The most noticeable modification is the object shape: I changed the objects from boxes to polygons because polygons make it easy to vary the shape for each emotion. This code also distinguishes engagement from boredom, which previous versions could not do because the headset reads the two emotions on a single channel. In the code, output values 0~50 are treated as engagement and values 51~100 as boredom. The motions for the five emotions are as below.
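The engagement/boredom split can be sketched like this. The 0~50 and 51~100 ranges come from the post; the class and method names are illustrative, not the project’s actual code.

```java
// Sketch of the engagement/boredom split: the headset reports both emotions
// on one 0~100 channel, so the code splits the range in half.
public class EngagementBoredomSplit {
    // 0~50 is treated as engagement.
    public static boolean isEngagement(int value) {
        return value >= 0 && value <= 50;
    }
    // 51~100 is treated as boredom.
    public static boolean isBoredom(int value) {
        return value >= 51 && value <= 100;
    }
}
```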
We got some suggestions to play with colours in our visualization. Because of colour-blindness concerns, we had focused on creating more active motions rather than on colour; in this code I added a colour function as a complementary cue.
The colour indicates the strongest of the four emotion parameters.
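A minimal sketch of that colour rule: the displayed colour follows whichever of the four emotion parameters currently has the highest value. The emotion order and names here are illustrative assumptions, not the project code.

```java
// Pick the strongest of the four emotion parameters; the colour function
// then uses this result. Ties go to the earlier entry in the array.
public class DominantEmotion {
    static final String[] NAMES = {"excitement", "engagement", "frustration", "meditation"};

    public static String strongest(int[] values) {
        int best = 0;
        for (int i = 1; i < values.length; i++) {
            if (values[i] > values[best]) best = i;
        }
        return NAMES[best];
    }
}
```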
The EEG headset often does not work properly; the most common problem is a disconnected sensor. To detect these problems, I added an alert function that shows a distinct motion when the headset misbehaves.
Every 10 seconds, the code saves the four emotion values as temporary values so it can compare the previous reading with the current one. If a value has not changed at all, the headset is assumed not to be working properly.
Every 10 seconds: check each value.
-excitement 11, engagement 22, frustration 33, meditation 44
Save as temporary values.
-excitementTemp 11, engagementTemp 22, frustrationTemp 33, meditationTemp 44
After 10 seconds, check the current values.
-excitement 44, engagement 88, frustration 33, meditation 66
Compare each temporary value (saved 10 seconds ago) with the current value:
excitement==excitementTemp? engagement==engagementTemp? …
If a previous and current value are the same, the alert function starts:
frustration == frustrationTemp == 33 (same!) → the headset is not sending values!
In the prototype, the alert draws noisy lines on the screen.
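The stall check above can be sketched in plain Java rather than a full Processing sketch (class and method names are illustrative). Following the worked example, a single channel that has not changed after 10 seconds is enough to trigger the alert:

```java
// Watchdog for the EEG headset: compare the current emotion values with the
// values saved on the previous check (10 s earlier). An unchanged channel is
// taken as a sign of a frozen/disconnected sensor.
public class HeadsetWatchdog {
    private int[] temp; // values saved on the previous check (the "Temp" values)

    // Call once per 10-second interval; returns true when the alert
    // (the TV-noise / noisy-lines effect) should be shown.
    public boolean check(int[] current) {
        boolean stalled = false;
        if (temp != null) {
            for (int i = 0; i < current.length; i++) {
                if (current[i] == temp[i]) { // this channel did not change in 10 s
                    stalled = true;
                    break;
                }
            }
        }
        temp = current.clone(); // save as temporary values for the next check
        return stalled;
    }
}
```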
As for the visualization part, we aim to deliver an interpretation of the performer’s response to different sounds based on brainwave readings. To achieve an artistic yet reasonable visual effect, we experimented with the Processing programming language. Generally, the visualization features …
This Processing code focuses on generating dynamic, quick transitions between emotions.
A significant issue with the previous versions is that the visualization easily stalls if a participant feels one emotion for a long time. In this code the boxes, the main objects in the installation, transform quickly in proportion to the value. The values are used not only to find the biggest value but also to change each object’s size, speed, and range of vibration. I also prepared a threshold for each motion: when an emotion value exceeds its threshold, the corresponding motion is exaggerated.
Excitement: generates multiple boxes in proportion to the value
Engagement: changes the object’s rotation speed in proportion to the value
Frustration: changes the range of vibration in proportion to the value
Meditation: changes the object’s opacity
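The scaling-plus-threshold idea above can be sketched as follows. The threshold of 70 and the 1.5× exaggeration factor are made-up numbers for illustration; the post does not give the actual constants.

```java
// Map an emotion value (0~100) onto a motion parameter (box count, rotation
// speed, vibration range, opacity): scale continuously with the value, then
// exaggerate the motion once the value passes its threshold.
public class MotionScale {
    static final int THRESHOLD = 70;      // assumed threshold, not from the post

    public static float scale(int value, float base) {
        float amount = base * value / 100.0f; // proportional to the value
        if (value > THRESHOLD) {
            amount *= 1.5f;               // exaggerate past the threshold
        }
        return amount;
    }
}
```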
Feedback: quick transitions, but complicated
Transitions became much quicker and more responsive than in the previous version. However, the visualization looked complicated because multiple motions appeared at the same time: when all the emotion values exceeded their thresholds, opaque boxes were rotating, vibrating, and multiplying at once. The audience could see quick transitions, but it was hard for them to recognize which motion was dominant.
We had removed the earlier logic that picks the biggest value; we need to put it back in the next version, which will be simpler.
This visualization generates neuron-like objects according to the biggest emotion value. After seeing the results of the previous experiment, “Emotive particle”, we developed more generative code.
The basic logic is the same as in the previous version: Processing finds the biggest value and shows the motion for that emotion. This code uses not only colour but also expansion and vanishing. The four motions are as follows.
Excitement: green neurons expand in every direction from the centre point
Engagement: a horizontal neuron line appears
Frustration: black neurons eat the previous neurons
Meditation: blue neurons expand from random points across the whole screen
Feedback: still static, also complicated
To address the issue that colour changes alone are not effective, this version applied transformations and different patterns of motion. However, the visualization still stalled when a participant continued to feel one particular emotion: even though the neurons generate new branches every second, the motion seemed static. The code is also much heavier than the previous one; after two or three minutes the screen was full of neurons and the audience could not recognize any changes.
Although the neuron-like objects suited the concept better, the output was neither generative nor responsive. To improve this, we will change the object shapes dramatically and focus on responsive scaling and transformation in proportion to the values.
This week we are experimenting with a new visualization that looks more organic and is closer to the image of a neuron. We tested the code with the headset and the result was not what we expected, so we changed several things in the code, such as the radius (strength of line), the colours (four different colours, one per parameter), and the velocity. The visualization will be developed further by adding vibration and other effects applicable to this code. The attached video is a recording of the visualization without the headset, produced only by running the Processing script.