This week we are experimenting with networked performance: essentially, having individual computers work together and share information to enable more diverse performances.
This work uses the Microsoft Kinect for tracking, drawing on the dancer's gestures and position within the performance space. Sarah will be controlling a bed track and one-shot sounds that can be triggered. The bed track(s) are placed throughout the room, with amplitude mapped to the dancer's proximity to them in virtual space. The dancer's hands also control a pitch bend on the bed track. Lastly, by tracking the velocity of the hands, we are able to trigger one-shot samples (in this case bell/celeste sounds) when her hands move above a certain speed. The pitch of the samples is controlled by the Y position of her head, meaning that low-to-high pitch is mapped to low-to-high vertical position.
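The mappings above can be sketched in code. This is a minimal, hypothetical outline (the actual performance patch and its ranges differ); the thresholds, room radius, and pitch range below are invented for illustration:

```python
import math

# Hypothetical constants; the real patch is tuned to the performance space.
ROOM_RADIUS = 5.0          # metres: distance at which a bed track fades to silence
VELOCITY_THRESHOLD = 1.5   # m/s: hand speed above which a one-shot fires
HEAD_Y_RANGE = (0.8, 2.0)  # metres: assumed low/high head positions
PITCH_RANGE = (48, 84)     # MIDI notes spanning the head's vertical travel

def bed_track_amplitude(dancer_pos, track_pos):
    """Amplitude falls off linearly with distance to the track's virtual position."""
    dist = math.dist(dancer_pos, track_pos)
    return max(0.0, 1.0 - dist / ROOM_RADIUS)

def hand_velocity(prev_pos, curr_pos, dt):
    """Speed of a tracked hand between two Kinect frames."""
    return math.dist(prev_pos, curr_pos) / dt

def one_shot_pitch(head_y):
    """Map head height (low-high) to sample pitch (low-high), clamped to range."""
    lo_y, hi_y = HEAD_Y_RANGE
    lo_p, hi_p = PITCH_RANGE
    t = min(1.0, max(0.0, (head_y - lo_y) / (hi_y - lo_y)))
    return round(lo_p + t * (hi_p - lo_p))

def maybe_trigger(prev_hand, curr_hand, dt, head_y):
    """Return a one-shot's pitch only when the hand exceeds the speed threshold."""
    if hand_velocity(prev_hand, curr_hand, dt) > VELOCITY_THRESHOLD:
        return one_shot_pitch(head_y)
    return None
```

In practice each of these values would be smoothed over several frames before being sent to the audio engine, since raw Kinect skeleton data is jittery.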
Beyond this, Byron will be controlling Alec's video with his horn, with the amplitude of the horn mapped to the brightness of the video playback. Because the horn has a wide dynamic range and can produce short, long, and sustained attacks with different dynamic shapes, the video can be manipulated in many ways. Byron's horn is also being processed, with the processed sound serving as the main controller for the video. By using delay and reverb effects, we can control video processing in a more dynamic way than simple note-on/note-off.
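One way to realize the amplitude-to-brightness mapping is an envelope follower on the processed horn signal. This is a sketch under assumed parameters (the `attack`/`release` coefficients and the brightness `floor` are invented for illustration, not taken from the actual setup); the asymmetric smoothing shows why delay and reverb tails keep the image lit after a note stops:

```python
def make_envelope_follower(attack=0.5, release=0.05):
    """One-pole envelope follower: rises quickly on attacks, decays slowly,
    so reverb/delay tails sustain the control signal after the note ends."""
    level = 0.0
    def follow(sample_amplitude):
        nonlocal level
        coeff = attack if sample_amplitude > level else release
        level += coeff * (sample_amplitude - level)
        return level
    return follow

def brightness(envelope_level, floor=0.1):
    """Map the envelope (0-1) to brightness, with a dim floor so the
    video never cuts to full black between phrases."""
    return floor + (1.0 - floor) * min(1.0, max(0.0, envelope_level))
```

Running per-frame amplitudes through `follow` and feeding the result to `brightness` gives a control curve whose shape tracks the horn's dynamic contour rather than its individual note onsets.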
Right now we are still very much testing and experimenting with this. We have run into a number of issues with the networking, but have otherwise laid the groundwork for a diverse and exciting performance environment.
We ride together, we die together, bad boys for life.