Jocelyn Ma (GoGlobal Student Safety Abroad Advisor) and Lillian Lau gave us a presentation on safety, travel documents, preparations and then ran us through an evaluation exercise. Here’s the class trying to figure out their personalities in stressful situations. (That was stressful for some!)
An interactive environment for dancer, cello, and vocalist, with motion tracking and Max/MSP.
A depiction of sorrow for interactive dance and video, with French horn, and Max/MSP/Jitter.
A depiction of fear, using real time processing of trombone and piano, with game controllers and Max/MSP patching in 8 channel surround sound.
A lovely evening of performances in the studio tonight — videos to be posted soon!
This week we are experimenting with network performances; essentially, having individual computers work together and share information for more diverse performances.
This work uses the Microsoft Kinect for tracking, making use of the dancer's gestures and position within the performance space. For this, Sarah will be controlling a bed track and one-shot sounds that can be triggered. The bed track(s) are placed throughout the room, with amplitude mapped to the dancer's proximity to them in virtual space. As well, the dancer's hands control a pitch bend on the bed track. Lastly, by tracking the velocity of the hands we are able to trigger one-shot samples (in this case bell/celeste sounds) when her hands move above a certain speed. The pitch of those samples is controlled by the Y position of her head, so that low-to-high pitch is mapped to low-to-high vertical space.
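For anyone curious about the logic, here's a rough sketch of the mappings described above. This is not the actual Max/MSP patch — it's an illustrative Python outline, and all the function names, thresholds, and ranges are made up for the example:

```python
def distance(a, b):
    """Euclidean distance between two 3D points (virtual space)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def bed_track_amplitude(dancer_pos, track_pos, max_dist=5.0):
    """Closer to a bed track's virtual position = louder.
    max_dist is an illustrative room-scale constant."""
    d = distance(dancer_pos, track_pos)
    return max(0.0, 1.0 - d / max_dist)

def maybe_trigger_one_shot(hand_speed, head_y, threshold=1.5):
    """Fast hand motion triggers a bell/celeste one-shot;
    head height (normalized 0..1) picks the pitch, low-to-high."""
    if hand_speed > threshold:
        return 48 + int(head_y * 36)  # illustrative mapping to C3..C6
    return None
```

The nice thing about a threshold on hand velocity (rather than position) is that slow, sustained gestures shape the bed track without accidentally firing one-shots.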
Beyond this, Byron will be controlling Alec's video with his horn, where the amplitude of the horn is mapped to the brightness of the video playback. Because the horn has a wide dynamic range and can produce short, long, and sustained attacks with different dynamic shapes, the video can be manipulated in many ways. As well, Byron's horn is being processed, with the processed sound serving as the main controller for the video. By using delay and reverb effects, we can control the video processing in a more dynamic way than simple note-on/note-off.
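The amplitude-to-brightness idea boils down to an envelope follower on the (processed) horn signal. Again, this is just a hedged sketch in Python, not the actual patch — the coefficients and the brightness floor are invented for illustration:

```python
def envelope_follower(samples, attack=0.2, release=0.05):
    """One-pole follower: rises quickly on attacks, decays slowly.
    With reverb/delay tails feeding it, the brightness lingers and
    swells instead of snapping on and off with each note."""
    env, out = 0.0, []
    for s in samples:
        level = abs(s)
        coeff = attack if level > env else release
        env += coeff * (level - env)
        out.append(env)
    return out

def brightness(env_value, floor=0.1):
    """Map envelope 0..1 to video brightness, keeping a dim floor
    so the image never fully blacks out between phrases."""
    return floor + (1.0 - floor) * min(env_value, 1.0)
```

This is why the effects matter: the reverb tail keeps the follower's input alive after the note ends, giving the video a gradual fade rather than a hard cut.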
Right now we are still very much testing and experimenting with this. We’ve been running into a number of issues with the networking, but have otherwise laid the groundwork for a diverse and exciting performance environment.
We ride together, we die together, bad boys for life.
We’re trying to bring “fear” to life for project #4. By “we,” I mean Michael, Janine, Hannah and me (George).
WE’VE GOT FILTERS. WE’VE GOT TROMBONES. WE’VE GOT FANTÔMES. WE’VE GOT PIANOS. WE’VE GOT GAME CONTROLLERS. WE’VE GOT HARMONIC SPECTRA. WE’VE GOT GRANULATION. WE’VE GOT A SCORE WITH DYNAMIC “FFFFFFF.” WE’VE GOT A WHITEBOARD MARKER. MAN, WE’VE GOT IT ALL.
Is it scary? That’s for you to decide. It’s certainly sounding interesting (that’s just objective fact).