Here is the tentative plan for our performance. Shuman and Timø specifically expressed an interest in having a solid plan so they can ‘dial in’ and/or make presets for different sections. Let me know if there are any glaring mistakes, and I will fix them. Otherwise, read it over and come to the start of next rehearsal with some thoughts and/or concerns about how we can improve our performance.
After having a breakdown with my PS3 controller (my £2 car-boot-sale controller stopped responding! 🙁 ), I’m back in action with a video game controller that has a gyroscope in it! Now I really don’t need a Gametrak to get gestural control.
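For anyone curious how raw gyro data becomes a usable gestural control, here is a minimal sketch (all names and values are hypothetical, not from my actual patch): an exponential moving average that tames jittery gyroscope readings before they get mapped to a synthesis parameter.

```python
# Hypothetical sketch: smoothing raw gyroscope readings into a stable
# control value before mapping gesture to sound. The `readings` list
# stands in for whatever values the controller actually delivers.

def smooth(readings, alpha=0.2):
    """Exponential moving average: higher alpha = faster response, more jitter."""
    out = []
    value = readings[0]
    for r in readings:
        value = alpha * r + (1 - alpha) * value
        out.append(value)
    return out

noisy = [0.0, 1.0, 0.9, 1.1, 1.0, 0.95, 1.05]
smoothed = smooth(noisy)  # settles toward the underlying gesture value
```

The `alpha` parameter is the usual trade-off: closer to 1 tracks fast gestures but passes sensor noise through; closer to 0 is smoother but laggier.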
As I mentioned before, I’m going to incorporate three different audio types with corresponding visuals.
The first will be what you have seen in submission 1, with slight modifications for better spatialization and more rhythmic possibilities.
The second will be the audio patch that I made for Jessamine’s project, but I will be controlling it with the PS3 controller. It will have minimal black-and-white visuals (similar to Marco’s TVs), as I am hoping to respond to Jessamine’s and Shuman’s visuals in real time and do not want to detract from them. It will also incorporate some of Timø’s sound files and Marco’s IRs. This one will have a little of everybody!
The third will be some synthesized speech readings of Plato’s “Allegory of the Cave”, since that is our decided theme. As discussed in the group, it would be interesting to go from digital to analog, or analog to digital, so I’m hoping to record myself reading passages and transition between the two. I don’t know yet whether I want the speech to be recognizable or just noise. I think I will use solid colors for this one, leading to blinding white as we leave the cave.
Here is an example of some of the digital speech. It’s using aka.speech, so it sounds EXTRA digital (which I am going for).
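The digital-to-analog transition between the synthesized reading and my recorded voice could be as simple as a crossfade. Here is a hedged sketch (the buffers are plain placeholder lists, not actual audio): an equal-power crossfade from one signal into another.

```python
# Hypothetical sketch of the digital-to-analog transition: an equal-power
# crossfade from a synthesized-speech buffer into a recorded-voice buffer.
# Both buffers here are stand-ins (constant sample lists), not real audio.
import math

def crossfade(digital, analog, length):
    """Fade from `digital` into `analog` over `length` samples."""
    out = []
    for i in range(length):
        t = i / (length - 1)              # 0 -> 1 across the fade
        g_out = math.cos(t * math.pi / 2) # equal-power gain curves
        g_in = math.sin(t * math.pi / 2)
        out.append(g_out * digital[i] + g_in * analog[i])
    return out

synth = [1.0] * 100   # placeholder for the aka.speech signal
voice = [0.5] * 100   # placeholder for a recorded passage
mix = crossfade(synth, voice, 100)
```

Equal-power curves (cosine/sine) keep the perceived loudness roughly constant through the middle of the fade, which a plain linear crossfade does not.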
What are your thoughts?
After having refined (redesigned almost all the control messages of) my instrument, I started to explore possible gestures and short forms. I think I achieved a good variety of sounds and found a few interesting gestures, so I recorded a demo of about 8 minutes. Great inspiration comes, again, from Di Scipio. It’s a one-take improvisation, but I think there’s some “storytelling” in it. Having built this instrument from scratch, being able to paint a beginning and an end is, for me, a big result. The left channel is much hotter than the right one (I love asymmetry).
This time I’ve been using one of the stretch sensors. I really like how it feels, especially because it gives absolutely clear feedback, allowing for really fine control. This is quite visible around 1:05 in the video.

There are still several issues, though. First and foremost, I’d like to use the sensor’s own sounds and noises as the processing material, but I can’t do that with the Arduino (as I’m doing now) because its 5V supply is so noisy that it completely swamps the sensor signal. I’ve read that the 3.3V source is less noisy, but I’m (sadly) running out of time to keep experimenting. For the DMSP, I might just use a few sensors to control the processing of the sound(s) coming from a contact mic placed on a “sculpture” (and that is another issue).

I like the sounds in this new video, but they’re coming from the laptop mic, and to mimic a contact mic I had to tap and scratch its surface. I did this both with my “wired” arm, getting a perfect sync between the stretching and the impacts (though unfortunately that’s out of the camera frame), and with my other hand, but it turned out I wasn’t syncing them; I’m not sure why (maybe it felt insipid while playing). Finally, I’m not entirely sure where and how I can place the sensors on my body, but I’ll figure it out soon. Good night, and good luck.
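Besides moving to the 3.3V supply, some of that supply noise can also be tamed in software. Here is a hedged sketch (the readings and window size are made up, not from my setup): a median filter over a short window, which rejects the kind of brief spikes a noisy supply injects into `analogRead()`-style values.

```python
# Hypothetical sketch: taming a noisy analog sensor stream in software.
# A median filter over a short window rejects brief supply-noise spikes
# before the value drives any sound processing. The readings are stand-in
# integers in the 0-1023 range an Arduino analogRead() would give.

def median_filter(readings, window=5):
    """Slide a window over the readings and keep each window's median."""
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        chunk = sorted(readings[lo:i + 1])
        out.append(chunk[len(chunk) // 2])
    return out

raw = [512, 514, 1023, 511, 513, 0, 515, 512]   # two supply spikes
filtered = median_filter(raw)                    # spikes are gone
```

Unlike averaging, a median filter removes outliers completely instead of smearing them into neighboring samples, at the cost of a little latency from the window.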
This short video was born from experimenting with sound generation processes. The only source of all the sounds is the laptop microphone, as I was trying to expand the palette of one of the systems I’ve been using for my A/V instrument. In particular, I focused on the interaction between a modal synthesis unit and a granulator I recently built. The latter doesn’t seem very flexible and probably needs some more work, but since it is built entirely in the signal domain, it has an interesting sound, at times almost analog. I experimented with the fundamental pitches of the modal synthesis, but I’ll get more variety once I start to dynamically modify the mutual relationships between the filters (that is, modifying the timbre). As I said, this is just one portion of the system I’m working on: this video doesn’t feature the actual contact mic, the electromagnetic feedback from the television screen, or the background resonances/feedback running in Pure Data. Nevertheless, it shows some of the work done on the sounds that I want to be the centerpiece of my “thing”.
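For readers unfamiliar with modal synthesis, the core idea can be sketched in a few lines (frequencies and decay rates below are illustrative, not the values from my patch): each "mode" is an exponentially decaying sinusoid, and an excitation makes the whole bank ring at once. Changing the relationships between the modes' frequencies and decays is exactly the timbre modification mentioned above.

```python
# Hypothetical sketch of modal synthesis: each mode is an exponentially
# decaying sinusoid; exciting the bank and summing the modes yields a
# bell- or object-like tone. Parameters here are illustrative only.
import math

SR = 44100  # sample rate

def mode(freq, decay, n_samples):
    """One resonant mode: a sine at `freq` Hz decaying at `decay` per second."""
    return [math.exp(-decay * i / SR) * math.sin(2 * math.pi * freq * i / SR)
            for i in range(n_samples)]

def strike(modes, n_samples):
    """Excite every mode at once and mix the bank down to one signal."""
    banks = [mode(f, d, n_samples) for f, d in modes]
    return [sum(b[i] for b in banks) / len(banks) for i in range(n_samples)]

# Inharmonic partials give the metallic, object-like character
out = strike([(220.0, 3.0), (446.0, 4.5), (913.0, 6.0)], SR // 10)
```

In a real-time patch the same structure is usually a bank of resonant filters fed by an exciter signal (here, the contact mic), rather than precomputed sinusoids.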
I thought the general mood was a pretty good match for this phone-resolution video I shot some time ago.