As any performer who has ever worked with technology knows, interfacing human and machine elements is a time-consuming process. Our experiment with motion capture was no exception.
Fortunately, we had wonderful people to work with: our dancers, Professor Roger Good and his students and staff from the OU School of Digital Media Arts, and Nathan Berger and Rakesh Kashyap from the OU Aesthetic Technology Lab. The latter two were responsible for the motion capture recording, using a portable MOCAP suit. The suit had to be fitted and calibrated on each dancer to produce a clean recording, which often involved painstaking recalibrations between performances.
It was a very long day, but we managed to record the scales we wanted to capture, along with a dance class exercise, an improvisation, and a section of one of Jean Erdman’s choreographies. Then even harder work followed, for Madeleine and I had to learn to read the MOCAP recordings.
While we had certainly captured the trace-forms, they were merely white lines against a black background. The recording had no visual depth; that is, we could not easily discern which lines represented motion in the front part of the kinesphere and which represented movements in the dancer’s back space.
Berger and Kashyap came to our rescue here, creating a skeletal icosahedron that could be superimposed to help us decode the MOCAP tracery. But this introduced other problems: should the icosahedron turn when the dancer turned, or remain stationary? Should it tilt if the dancer did, or not? MOCAP reintroduced issues around systems of reference that have already been dealt with in Labanotation.
As with all pilot research projects, Madeleine and I discovered how much we still had to learn. Nevertheless, a few intriguing findings emerged. More on those findings in the next blog posts.