
Submission 2

Exploring Entropy

 

Introduction

Entropy is a ubiquitous process: it refers to the way matter unstoppably decays and degrades over time. From physics to art, entropy is present, unfailingly bringing unplanned behaviours, unexpected patterns and sudden chaos to both the analog and digital worlds. Seen under certain scopes, the action of entropy can turn into extraordinary displays of sensorial information.

This project aims to portray the aforementioned concept through audiovisual means. By generating an appealing sensorial experience, Exploring Entropy looks to offer the audience a space to reflect on the small, yet ever-present, details of our surroundings.


Development

Processes
The setup’s layout and functionality are based on the following six processes or ‘levels’. These procedures encompass both the technical and artistic approaches taken to reflect on this particular entropic experience:

Level 1 – The Rust

The process of entropy in nature was explored through more natural and analog means. The team interacted with the cycle and witnessed the previously conceptualised idea of change over time.

Prepared metal plates

Rusting after the chemicals were sprayed

Chemical reaction documented from different angles

Level 2 – The curves

A system of curves was used to represent the evolution of the rusting process, as modelled empirically in formal oxidation measurements (Clarelli, De Filippo & Natalini, 2012). This aimed to emphasise the cyclical nature of decay and to examine, through multiple stages of transformation, how matter finds its way back to an original balanced state, only to be submitted to alterations again. It also allowed us to trigger different audio and visual reactions of degradation, directing this evolution towards a climax while highlighting how natural processes are perceived over time.
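The exact curves were drawn from the corrosion model cited above; as a loose, purely illustrative sketch of the idea (constants invented, not fitted), a cyclical sigmoid can stand in for one ‘rusting curve’ driving the effects:

```python
import math

def rust_curve(t, cycle=60.0, steepness=8.0):
    """Illustrative cyclical 'rusting' curve in [0, 1].

    Each cycle rises sigmoidally towards full oxidation, then resets,
    echoing the decay-and-renewal idea described above. The constants
    are placeholders, not values fitted from the corrosion model.
    """
    phase = (t % cycle) / cycle              # position within the cycle, 0..1
    return 1.0 / (1.0 + math.exp(-steepness * (phase - 0.5)))

# Sample the curve over one cycle to drive effect parameters.
values = [rust_curve(t) for t in range(0, 60, 5)]
```

Four such curves, offset in time, would each drive one plate's chain of effects.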

Rusting curves


Visual patch

Level 3 – The feedback system

The sonic scheme used is inspired by the notion of audible ecosystems, as described by composer Agostino Di Scipio, most notably in his series «Modes of Interference». This approach reflects on the complexity and interrelations that musical and sonic experiences have with their surrounding environment. It also relates to the concept of ‘planning for the unplanned’: being aware that even though the group was responsible for the organisation of the system, it would ultimately function on its own, potentially exhibiting an unpredictable, autonomous and self-ruling behaviour (Annex 1).

System’s layout

Level 4 – Audio and visual processes

The processes applied to sounds and visuals are tightly related. As shown in the graph below, all of them are associated with degradation and use this feature as an aesthetic medium. Consequently, entropy creates various ‘colorations’ displayed both visually and sonically. As a close analogy to chaos in the natural world, these processes also point to the unapparent but nonetheless entropic and decaying nature of the digital realm (loss of information, change in ‘qualities’, glitches).

Effects’ progression

Projection software

 

Level 5 – The human interaction

As in any other entropic process, elements of interference are also an important part of the cycle. By having people interact with the installation, unpredictable effects could be discovered. The human agent is also part of an audible ecosystem (Meric & Solomos, 2009), and this results in more emphasis on the procedure itself rather than on the result (refer to section ‘Performance or installation’).

Team with plates

Level 6 – Material

To highlight the idea of entropy, the objects chosen for the setup had their primary use compromised, such as the broken and mismatched speakers which provided audio reinforcement. Each of these speakers presented a different sonic character, which we didn’t want to hide but rather display as yet another form of degradation. Entropy as ‘transformation’ was also present in the materials used, such as the plates and screen frames, which were made from discarded scraps and other low-cost sources. Additionally, the frames have a simple design which allows for easy alteration, dismantling and storage should they be required in the future.

Set-up ready


Set-up testing

 

Audiovisual development

1. Audio

Feedback-based system and self-regulation

Feedback systems based on Larsen tones were chosen for this project as they present an evolving and reciprocal relationship between every sonic component in a system (microphone – speaker – space – listener). However, feedback tends to become unstable, so ways to maintain a certain degree of control had to be devised. The ‘self-gating’ system, developed in Agostino Di Scipio’s “Modes of Interference III”, served as the proof of concept for a self-regulating feedback system. In it, the incoming tone is modulated in amplitude through an inverted control signal of the same tone (smoothed and delayed). Therefore, when the tone rises in amplitude, it regulates itself, providing space for other tones to appear.
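The self-gating principle can be sketched outside Max/MSP. The following Python fragment illustrates the idea only and is not a reconstruction of Di Scipio’s patch; the smoothing and delay times are invented for the example:

```python
import numpy as np

def self_gate(signal, sr=44100, smooth_ms=50.0, delay_ms=100.0):
    """Sketch of the self-gating idea: a tone's own smoothed, delayed
    envelope is inverted and used to modulate its amplitude, so loud
    passages duck themselves. Parameter values are illustrative, not
    those of 'Modes of Interference III'."""
    # Envelope follower: rectify, then one-pole low-pass smoothing.
    alpha = 1.0 - np.exp(-1.0 / (sr * smooth_ms / 1000.0))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(signal):
        level += alpha * (abs(x) - level)
        env[i] = level
    # Delay the envelope, then invert it into a gain in [0, 1].
    d = int(sr * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(d), env[:-d]]) if d else env
    gain = 1.0 - np.clip(delayed, 0.0, 1.0)
    return signal * gain
```

A sustained loud tone therefore attenuates itself after the delay time, leaving headroom for other feedback tones.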


Self-gating system used in ‘Modes of Interference III’

The audio system had to avoid arbitrary decisions as far as possible. Pre-defined data was restricted to a few key items such as tonality, envelopes and durations, and even then, that data had to be conceptually coherent.

Tonality

During the testing stage, it was proposed that the system’s tonality should correspond to each plate’s resonant frequencies. These were determined empirically using sweep tones, surface transducers, and an omnidirectional microphone arrangement connected to a Max/MSP patch that would store the frequencies with the highest amplitude.
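The storing step can be pictured as follows. This Python sketch stands in for the Max patch: it FFTs the recorded response to the sweep and keeps the bins with the highest magnitude, much as the patch stored them in a coll. The function name and parameters are illustrative:

```python
import numpy as np

def strongest_frequencies(recording, sr, n=4):
    """Sketch of the resonance-finding step: analyse the microphone's
    response to a sweep and keep the n bins with the highest magnitude.
    A real measurement would average several windows; one FFT is shown
    for brevity."""
    spectrum = np.abs(np.fft.rfft(recording * np.hanning(len(recording))))
    freqs = np.fft.rfftfreq(len(recording), 1.0 / sr)
    top = np.argsort(spectrum)[-n:]          # indices of the strongest bins
    return sorted(freqs[i] for i in top)
```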

Patch produces sweep and stores the highest values in coll

Testing setup

In practice, however, this arrangement created many instabilities and excessive bursts of sound output, so it had to be discarded. Instead, the final system incorporated the resonant frequencies of the room where the installation was set up (ECA room B28), calculated using Lord Rayleigh’s modal equations for rectangular rooms (Everest & Pohlmann, 2009).
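For a rectangular room, those modal frequencies follow f = (c/2)·√((p/L)² + (q/W)² + (r/H)²). A small sketch of that calculation follows; room B28’s actual measurements are not reproduced here, so the dimensions are left as inputs:

```python
import itertools
import math

def room_modes(L, W, H, c=343.0, max_order=2):
    """Axial, tangential and oblique mode frequencies of a rectangular
    room, f = (c/2) * sqrt((p/L)**2 + (q/W)**2 + (r/H)**2), as given in
    Everest & Pohlmann (2009). Pass the measured dimensions in metres."""
    modes = []
    for p, q, r in itertools.product(range(max_order + 1), repeat=3):
        if (p, q, r) == (0, 0, 0):
            continue
        f = (c / 2.0) * math.sqrt((p / L) ** 2 + (q / W) ** 2 + (r / H) ** 2)
        modes.append(((p, q, r), round(f, 1)))
    return sorted(modes, key=lambda m: m[1])
```

The lowest entries of the returned list would supply the system’s tonality.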


This approach proved to have the most aesthetically pleasing results, and it also established a conscious relationship between sound emitter and space.

Sound of the metal plates

With the objective of getting the material to “speak” on its own, the arrangement of surface transducers on top of the metal plates brought new levels of sonic expression to the installation. Being small in dimension, the transducers could be repositioned and manipulated, creating unique harmonic sweet spots and playability. However, positioning was delicate, as it was clear that the flat metal material could introduce an excess of mid-high to high resonances. This kind of transduction method also worked better with a limited audio band, as very low frequencies would make the transducers move around in unpredictable ways. While this created an interesting view of the speaker being “alive” and travelling across the plates, it also produced extreme imbalances in the feedback system as well as a detachment of attention from the visual aspect. The effect, while interesting, was discarded from the project.
Finally, a band limit was set on each plate’s output (from 120 to 2500 Hz) while the transducers were placed on corner areas, since the resonances there were dampened by each plate’s wood supports, making the overall sound smoother and less harsh.
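As an illustration of the band limit, a crude one-pole low-pass/high-pass cascade can mimic the idea; the installation itself used Max/MSP filters, so this sketch is only indicative:

```python
import math

def band_limit(samples, sr=44100, lo=120.0, hi=2500.0):
    """Rough sketch of the 120-2500 Hz band limit on each plate's
    output: a one-pole low-pass at `hi` cascaded with a one-pole
    high-pass at `lo`. Steeper filters would be used in practice."""
    a_lo = 1.0 - math.exp(-2.0 * math.pi * hi / sr)   # low-pass coefficient
    a_hi = 1.0 - math.exp(-2.0 * math.pi * lo / sr)   # high-pass coefficient
    lp = hp_state = 0.0
    out = []
    for x in samples:
        lp += a_lo * (x - lp)          # keep content below `hi`
        hp_state += a_hi * (lp - hp_state)
        out.append(lp - hp_state)      # subtract the low band -> high-pass
    return out
```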

Final recordings of the installation can be heard below. The first and second clips were recorded with an A/B configuration, while the third one is the direct input from the DPA microphones underneath each plate:

March 30th:


March 31st:


April 1st:

 

2. Visuals

Projected Time-Lapse Animations

The process of oxidisation was documented with a two-camera set-up taking photos of the plates at intervals of 20 seconds over approximately five hours (using intervalometers). This allowed for a top-down and a low-angle view of the process over the allotted time.
A solution of hydrogen peroxide, white vinegar and salt was sprayed, splattered or dripped at periods of around 15 minutes. This kept the reaction constant and created further movement and tonal changes within the final animations.
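The schedule above gives a sense of the material generated: at one photo every 20 seconds, each camera yields roughly 900 frames over five hours. A quick, illustrative check of that arithmetic:

```python
# Shooting schedule described above: 5 hours at one photo every
# 20 seconds, played back at 24 fps (figures from the text).
shoot_hours = 5
interval_s = 20
fps = 24

frames_per_camera = shoot_hours * 3600 // interval_s   # photos per camera
playback_seconds = frames_per_camera / fps             # animation length per pass
```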

The resulting shots from each camera were then processed using a series of Photoshop actions and combined into individual videos running at 24 fps. Automated processing was essential given the large number of images, nearly 6,000 in total. Each of the images from the top-down camera was cropped twice, ensuring as much of the surface as possible was documented and creating further levels of variation within the final video projection. The animations from the top-camera crops and the bottom camera were combined to create one complete video for each metal plate. Further compression and resizing of the videos allowed for optimal playback. The equipment used for the final projection was the following:

–  Sanyo DLP 1500-lumen short-throw projector
–  4 custom-built screens for back projection
–  MacBook Pro laptop


The projector was set up at a distance of approximately 3 metres from the back wall to ensure each screen was covered with an acceptable margin on the sides. Also, an additional control slider was added to desaturate the images during the planned performative elements of the presentation. When running as an ‘automated’ system, this wouldn’t be enabled.


Additional details on software manipulation can be found under Annex 2

 

Results

Rusted metal plates, wood and fabric screens, surface transducers, speakers and microphones were arranged to initialise the entropic experience. The setup was located in the middle of a large, dark concrete room and responded to the following structure:

Complete setup

The software initialised a process which quickly became independent. Entropy came into play, creating constant morphing in the feedback system. The lack of light contrast highlighted the engaging visuals while the sound created a spacious atmosphere, resulting in a very positive experience for the whole team:

Full-length video:

Observations

On the sonic side of the installation, remarks were made on the following aspects:

– A lack of energy in the low-frequency spectrum was noted.
– People were keen to interact with the installation after seeing the performers touch the plates.
– Performers could include more gestural statements, such as silent contemplation.
– A sense of the plates as a performing quartet. This perception could give way to more daring compositional arrangements, without losing the touch of self-regulation and envelopment.

In respect to the visual design and set-up, there are various improvements which could be made to the installation:

– A brighter projector would strengthen the visuals and reduce pollution from ambient light sources.
– Thinner screen material would also enhance the brightness of the projected images.
– While the wood used for the framing came at no cost, some of the beams were slightly warped, limiting the size of the final screens.

Additional documentation

In addition to audio, video and photographs, the team sought an alternative form of documentation that would reflect the nature of entropy in this work. Consequently, a Microsoft Kinect camera was the device of choice. The Kinect’s working principles are based on readings of ever-changing particles of information (depth and colour), a concept which proved coherent and exciting for this project.

The installation in action was captured in several takes during the two presentation days. The purpose of having multiple takes was to be able to arrange them to recreate the experience over time from a different sensorial perspective. This change of perspective meant having the Kinect data undergo a degradation process to output something new yet tightly related. Ultimately, the depth and colour information from the device were manipulated to create textures which were later used to build a navigable Unity scene. Such a space would allow users to walk through this process of transformation:

2D Kinect progressions:


Obtained from manipulation in Meshlab. Point-cloud data (depth and colour) was gathered by the camera and then converted into flat textures.

3D Kinect navigation:


The Unity scene shown in the video above can be downloaded as a standalone application here.

Insights

Planned for the unplanned

The unpredictable features of this entropic system were another recurring concept. The project had been prepared and thought through in detail; nevertheless, from day one the team was aware that its outcome couldn’t be predicted.
The fact that the ‘creators’ couldn’t totally control the ‘creation’ became the centre of the system’s functionality. In embracing this idea, the team stopped being outside observers and instead became part of the cycle.

On the programming side of things, this became an immediately conflicting but nonetheless exciting idea: in designing a chaotic interaction system, notions such as expected behaviour, functionality, intuitiveness and aesthetic balance constantly contributed to and opposed each other. Ultimately, decisions were made to expect an unexpected yet aesthetically interesting result.

Additionally, the environment’s influence, the software’s malleability, the audience’s reactions and the team’s conscious (and unconscious) acts regarding the system made it potentially unstable, which in itself became a vivid example of entropy. From every angle, elements were somehow interacting with the plates.



Performance or installation

As the assemblage process took place, an ambivalent question kept arising: was this system being prepared as an installation or as a performance?

Initially, the concept seemed to lean towards the first option. Nevertheless, a common behaviour was noticed during audio tests: third parties tended to approach the plates after hearing the sounds generated. This observation led the team to consider the interaction of other systems within a primary one.

Hence, some of the arguments that encouraged us to interact with the installation are stated in the article Hybrid Resonant Assemblages: Rethinking Instruments, Touch and Performance in New Interfaces for Musical Expression by Bowers and Haas (2014). The article defines a type of work referred to as hybrid resonant assemblages, providing examples from artists such as David Tudor, Nicolas Collins, and Bowers and Archer. Our work, just as much as theirs, fits the criteria of these assemblages, as they consist of «varied materials excited by sound transducers, feeding back to themselves via digital signal processing».

According to the authors, the making of a resonant assemblage is a performance in itself, a «process of creating transient, ephemeral situations in contrast to performing a work with pre-existing instruments or interfaces». Furthermore, the following passage eloquently phrases how such human interaction can be relevant to the aesthetics and the philosophy of a resonant assemblage:

«The transience and occasional fragility of assemblages encourages an orientation to performance, and indeed a deportment of the performer’s body, which emphasizes care, deliberation, attentive listening and judicious touching. This form of auditory-tactile exploration makes for a notable performance aesthetic which is often characterized by moments of withholding and hesitating to touch the assemblage.»

The phenomenon of touch, in this case, is considered part of the musical expression, and is experienced not so much as a gesture as a matter of «tension between expressive and destructive potentialities». In other words, the performer’s gesture, either active or withheld, lets new sonic behaviour emerge as the system responds to its environment.


Audience approaching the system


Audience-system interaction

 


Annex 1

Audible Ecosystems and Emergent Sound Structures in Di Scipio’s Music: Music Philosophy Helps Musical Analysis

Our installation explores Agostino Di Scipio’s ecosystemic approach by creating an «audio system that interacts with the environment, i.e. space» (Meric & Solomos, 2009). We thought it interesting to rely on the unpredictability and instability of such a system as a way to explore entropy. What we created, like some of Di Scipio’s pieces, consists of an ephemeral moment in direct relation to a specific space and the listener, as the feedback system is influenced by the characteristics of the environment in which it is created. This relates to Barry Truax’s communicational approach to sound (Acoustic Communication, 1984).

Our system also captures self-organisation, which, according to Di Scipio, is the «main peculiarity of social and living systems». This brings dynamical behaviour and change, which is what relates to entropy and its occurrence over time.

The idea of the wanted and the unwanted, as exploited by artists such as Christian Marclay and echoed in the Wabi-Sabi philosophy, also fits the concept elaborated by Di Scipio which we abided by. The composer, in his compositional approach, performs «a shift from creating wanted sounds via interactive means, towards creating wanted interactions having audible traces». This can lead to unwanted and unpredictable sounds and behaviours, whether enjoyable or not, as the process is more important than the result; the process itself can even be considered the result.




Annex 2

Additional software details

 

Final patch for the audio system

Both commented patches referred to in the following analysis can be downloaded here.

Audio patching and stages

The audio system consists of four microphone inputs that are processed through a Max/MSP patch and then output via the surface transducers and speakers. This creates the feedback loop necessary for the installation to evolve in time.

Because of the group’s own ambivalent views on installation and performance, the patch was designed to serve as both at any point. By defining a time constraint, the software could be either an ‘infinite’ installation or a ‘finite’ performance.
Independently of this, the four rusting curves and their evolution in time are the software’s primary basis. Depending on their cycle stage, the system activates several audible effects, which served as different stages of a whole decomposition and recomposition process. As the four curves completed a full cycle, the system would enter another stage representing a different digital decaying process:

Stage 1 – Mp3 Rusting

This effect takes a discrete FFT window of the incoming signal and rearranges its FFT bins in real time using the framerank~ external developed by Alex Harker. This creates glitchy sound artifacts reminiscent of extreme mp3 compression.
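A very loose stand-in for this process, for readers without the external: rank the bins of one FFT frame by magnitude and discard the weakest, which already produces compression-like artifacts (framerank~ itself is far more flexible than this sketch):

```python
import numpy as np

def rank_scramble_frame(frame, keep=0.25):
    """Loose illustration of the 'mp3 rusting' idea: rank one FFT
    frame's bins by magnitude and keep only the strongest fraction,
    zeroing the rest. This is a crude stand-in, not a reimplementation
    of the framerank~ external."""
    spectrum = np.fft.rfft(frame)
    order = np.argsort(np.abs(spectrum))           # weakest..strongest
    cut = int(len(spectrum) * (1.0 - keep))
    spectrum[order[:cut]] = 0.0                    # discard weak bins
    return np.fft.irfft(spectrum, n=len(frame))
```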

Stage 2 – Spectral Chemistry

This effect uses FFT graphic filters to mirror the spectroscopic patterns of different chemical elements that are a part of the rusting process. Iron is the base material, and sulfur and nitrogen are elements present in the air and pollution. The incoming signal is filtered with the base material (plus several other effects) and the others are added progressively as the rusting curve increases in value. This process creates an accumulated sound texture with a gliding effect.
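The idea can be sketched as an FFT-domain mask that only passes energy near a set of line frequencies; the line list and Gaussian shape below are purely hypothetical, not the spectroscopic data used in the actual patch:

```python
import numpy as np

def element_filter(frame, sr, lines, width=50.0):
    """Sketch of the 'spectral chemistry' stage: a filter that only
    passes energy near a set of line frequencies. The real patch used
    drawn FFT graphic filters; the Gaussian bumps here are illustrative."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    mask = np.zeros_like(freqs)
    for line in lines:                      # one Gaussian bump per line
        mask += np.exp(-0.5 * ((freqs - line) / width) ** 2)
    return np.fft.irfft(spectrum * np.clip(mask, 0.0, 1.0), n=len(frame))
```

Accumulation, as described above, would mean filtering first with the ‘iron’ line set and then progressively adding further element line sets as the rust curve rises.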

Stage 3 – Destroy

This effect is a basic audio down-sampler (bit-crusher) with a self-regulating amplitude mechanism. In this stage the sound is reduced to a lower resolution, tending to stay on a single tone for a considerable amount of time, creating a unique stage in the installation’s progress.

Stage 4 – All

In this final stage, every effect is activated to serve as a full climax and culmination before the beginning of a new cycle.

Video patching and stages

The visual setup consists of four video players with effects linked to incoming data from the audio curves’ system. The udpsend / udpreceive objects in the patch link the audio and visual systems via an ethernet connection, creating a direct link between their responses and the corresponding stage the installation is in.
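The link can be pictured with a plain UDP datagram; the port number and text message format below are invented for illustration, and the Max objects use their own message framing:

```python
import socket

def send_stage(stage, curve_value, host="127.0.0.1", port=7400):
    """Sketch of the audio-to-visuals link: send the current stage and
    curve value as a small UDP datagram, mirroring the patch's
    udpsend/udpreceive pair. Port and format are hypothetical."""
    msg = f"{stage} {curve_value:.3f}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))
```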


Stage 1 – Over saturation (jit.brcosa)

Manipulates the video saturation to extreme values. Results vary from bright yellow and red to a strong blue and black.

Stage 2 – Pixel sliding (jit.scanslide)

Uses envelope-following on the matrix cells to take every pixel and slide its value downwards, creating a gliding, pixel-sorting type of effect.
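A very rough take on the effect in Python; jit.scanslide works differently internally, so this only illustrates the gliding idea on a single frame:

```python
import numpy as np

def slide_pixels(frame, strength=0.5):
    """Illustrative pixel-slide: roll each column downwards by an
    amount that follows its own brightness, giving a smeared,
    pixel-sorting-like look. Not a port of jit.scanslide."""
    out = frame.copy()
    for col in range(frame.shape[1]):
        shift = int(frame[:, col].mean() * strength * frame.shape[0])
        out[:, col] = np.roll(frame[:, col], shift)
    return out
```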

Stage 3 – Scanline wrapping (jit.scanwrap, spew mode)

Rewraps an input matrix to different dimensions and outputs as many matrices as needed until it voids itself of all data. This creates a continuous sequence of smaller but distorted versions of the video signal which, when pushed to a minimum value, creates moiré patterns reminiscent of low-bit glitch textures.

Stage 4 – All effects running except ‘scanline wrapping’

Produces the final visual stage, parallel to the audio climax.


 


A jit.xfade object is in place for each projected screen to create an alpha transparency when there is no sound being emitted from the corresponding plate.

Pushing brightness and contrast is utilised both as a visual effect and as compensation for light lost through back projection.

Minor adjustments could be made to each video within the jitter patch to compensate for differences in the contrast of the original time-lapse animations.


Mapping

Although the projection mapping required for this installation was minimal, it was still important to create clean projections on the four screens. Various mapping software packages were investigated, such as VPT, but MadMapper was by far the most user-friendly and suitable for the task. Each layer is combined in the Jitter patch within a jit.glue object, creating one video source. Syphon output from the patch to MadMapper also allowed for easy connectivity and subsequent mapping of the visuals.


MadMapper

CPU-usage

A two-computer setup was necessary to alleviate heavy CPU usage (almost 40% for the audio section alone). In the visuals section, many tweaks had to be made to achieve the required speed, from downgrading the video files themselves to running Max in low-resolution mode (given performance issues in Retina mode).


Annex 3

Invitation poster

 


 

 

References

Bowers, J., & Haas, A. (2014). Hybrid Resonant Assemblages: Rethinking Instruments,
Touch and Performance in New Interfaces for Musical Expression. Proceedings of the
International Conference on New Interfaces for Musical Expression.
Retrieved March 1, 2015, from nime2014.org/proceedings/papers/438_paper.pdf

Clarelli, F., De Filippo, B., & Natalini, R. (2012). A Mathematical Model of Copper
Corrosion. Retrieved from arxiv.org/pdf/1211.6938.pdf

Everest, F., & Pohlmann, K. (2009). Modal Resonances. In Master handbook of acoustics
(5th ed., p. 230). New York: McGraw-Hill.

Meric, R., & Solomos, M. (2009). Audible Ecosystems and Emergent Sound Structures in
Di Scipio’s Music: Music Philosophy Helps Musical Analysis. Journal of Interdisciplinary
Music Studies, 3(1 & 2), 57-76. Retrieved March 1, 2015, from
citeseerx.ist.psu.edu/viewdoc/download;jsessionid=19DC9E4CF5DFFE42FBA07193A5F4E708?doi=10.1.1.154.1201&rep=rep1&type=pdf

Truax, B. (1984). Acoustic Communication. Norwood, New Jersey: Ablex Pub.

Winkler, T. (1998). Composing interactive music: Techniques and ideas using Max.
Cambridge, Mass.: MIT Press.
