We are all equipped with two extremely expressive instruments for performance: the body and the voice. By using computer systems to sense and analyze human movement and voices, artists can take advantage of technology to augment the body's communicative powers. However, the sophistication, emotional content, and variety of expression possible through these original physical channels are often not captured by or addressed in the technologies used to analyze them, and thus cannot be transferred from body to digital media. To address these issues, we are developing systems that use machine learning to map continuous input data, whether of gesture, voice, or biological/physical states, to a space of expressive, qualitative parameters. We are also developing a new framework for expressive performance augmentation, allowing users to easily create clear, intuitive, and comprehensible mappings by using high-level qualitative movement descriptions rather than low-level descriptions of sensor data streams. This work is also a major component of other projects, including Vocal Vibrations.
Through this work, I hope to explore a variety of questions. How can raw sensor data be abstracted into more meaningful descriptions of physical and vocal expression? How can we describe the quality of a movement, or of a vocal gesture? What features of movement convey particular expressive and emotional content? And how can we create evocative high-level descriptions of all of these so that they can be used intuitively and creatively in the process of choreographing, composing, and performance-making?
This research draws from my experience with projects such as the Disembodied Performance System and the Vocal Augmentation and Manipulation Prosthetic. Additionally, as part of my master's thesis work at the MIT Media Lab, I created a prototype system that could learn specific gestures from a performer and track qualities of their movement based on Rudolf Laban's concept of Effort. I choreographed and designed a set of four performance works, Four Asynchronicities on the Theme of Contact, which used this system to map dancers' movements to control sound and visual elements, including video projection.
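To make the idea of abstracting raw sensor data into quality descriptions concrete, here is a minimal illustrative sketch, not the thesis system itself: it estimates two Laban Effort-like qualities, Weight (strong vs. light) and Time (sudden vs. sustained), from a stream of 2-D position samples using simple finite-difference features. The feature choices and scaling constants are hypothetical assumptions for illustration.

```python
import math

def effort_qualities(positions, dt=1.0 / 60):
    """Estimate two Laban Effort-like qualities from 2-D position samples.

    Returns continuous values in [0, 1] (mapping chosen for illustration):
      "weight" -- strong (1) vs. light (0), from mean acceleration magnitude.
      "time"   -- sudden (1) vs. sustained (0), from variability of speed.
    """
    # Finite-difference velocity and acceleration from the raw samples.
    vel = [((x2 - x1) / dt, (y2 - y1) / dt)
           for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    acc = [((vx2 - vx1) / dt, (vy2 - vy1) / dt)
           for (vx1, vy1), (vx2, vy2) in zip(vel, vel[1:])]
    speed = [math.hypot(vx, vy) for vx, vy in vel]
    accel = [math.hypot(ax, ay) for ax, ay in acc]

    # "Weight": mean acceleration magnitude, squashed into [0, 1].
    weight = 1 - math.exp(-(sum(accel) / len(accel)) / 100)
    # "Time": burstiness of speed; constant speed reads as sustained.
    mean_s = sum(speed) / len(speed)
    std_s = math.sqrt(sum((s - mean_s) ** 2 for s in speed) / len(speed))
    time_q = 1 - math.exp(-std_s / 10)
    return {"weight": weight, "time": time_q}

# A steady glide vs. a stop-and-jump gesture at 60 samples per second.
sustained = [(0.1 * i, 0.0) for i in range(60)]
sudden = [(float(i // 5), 0.0) for i in range(60)]
print(effort_qualities(sustained))  # both qualities 0: smooth, even motion
print(effort_qualities(sudden))     # both qualities near 1: abrupt, forceful
```

A full system would replace these hand-tuned heuristics with learned mappings, which is what allows choreographers to work in terms of quality descriptions rather than sensor values.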
Related publication: Jessop, E. "A Gestural Media Framework: Tools for Expressive Gesture Recognition and Mapping in Rehearsal and Performance." M.S. Thesis. MIT Media Laboratory, 2010.