Modeling Laban Effort qualities

3D motion capture + machine learning + movement sonification

Collaboration with Jules Françoise and Frédéric Bevilacqua (IRCAM), Thecla Schiphorst (SIAT – SFU), Karen Studd (LIMS), Diego Silang Maranan, Pattarawut Subyen, and Lyn Bartram (SIAT – SFU)

Human movement has historically been approached as a functional component of interaction within HCI. Yet movement is not only functional: it is also highly experiential. In this research project, we explored how movement expertise, as articulated in Laban Movement Analysis, can contribute to the design of computational models of movement's expressive qualities as defined in the framework of Laban Effort theory.

Effort Detect is a prototype of a wearable system for the real-time detection and classification of movement quality from acceleration data. The system applies Laban Movement Analysis (LMA) to recognize Laban Effort qualities from acceleration input, using machine learning software that generates classifications in real time. Existing LMA-recognition systems rely on motion capture and video data and can only be deployed in controlled settings. Our single-accelerometer system is portable and can be used under a wide range of environmental conditions. We evaluated the performance of the system, presented two applications of the system in the digital arts, and discussed future directions.
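
The sketch below illustrates the general shape of such a pipeline: windowing a 3-axis accelerometer stream, extracting simple statistical features, and training a classifier on labeled Effort examples. It is a minimal illustration, not the published Effort Detect implementation; the window size, feature set, and classifier are assumptions.

```python
# Minimal sketch (not the published Effort Detect pipeline): window a 3-axis
# accelerometer stream, extract simple statistical features, and train a
# classifier that labels each window with an Effort quality.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, win=64, hop=32):
    """acc: (n_samples, 3) acceleration, assumed longer than one window.
    Returns one feature vector per sliding window."""
    feats = []
    for start in range(0, len(acc) - win + 1, hop):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)        # acceleration magnitude
        jerk = np.diff(mag)                    # rate of change of magnitude
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),     # per-axis statistics
            [mag.mean(), mag.std(), np.abs(jerk).mean()],
        ]))
    return np.array(feats)

def train(recordings, labels):
    """recordings: list of (n_samples, 3) arrays; labels: one Effort label each."""
    feats = [window_features(r) for r in recordings]
    X = np.vstack(feats)
    y = np.concatenate([[lab] * len(f) for f, lab in zip(feats, labels)])
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```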

https://www.youtube.com/watch?v=2ZevHqXErOc

In a second publication, we included LMA experts in our design process in order to select a set of suitable multimodal sensors, including Vicon motion capture, Kinect, accelerometers, and electromyograms. We also computed high-level features that closely correlate with the definitions of Efforts in LMA. We evaluated the selected sensors and the high-level features for the recognition of the Weight, Time, and Space Efforts using a machine learning algorithm based on Hierarchical Hidden Markov Models. Our results showed that the best Weight and Time Effort recognition was achieved by the high-level features informed by LMA expertise, applied to the multimodal combination of accelerometers and electromyograms.
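
As a rough illustration of HMM-based Effort recognition, the sketch below trains one Gaussian HMM per Effort quality and classifies a sequence by maximum log-likelihood. This flat, per-class setup is a stand-in for the Hierarchical Hidden Markov Models used in the paper, and the feature vectors are assumed to be the LMA-informed descriptors computed from accelerometer and electromyogram streams.

```python
# Minimal sketch of Effort recognition with one HMM per class (a flat stand-in
# for the paper's Hierarchical Hidden Markov Models). Input sequences are
# assumed to be high-level, LMA-informed feature frames.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_effort_models(sequences_by_class, n_states=3):
    """sequences_by_class: {effort_label: [ (n_frames, n_features) arrays ]}"""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                  # concatenated training sequences
        lengths = [len(s) for s in seqs]     # sequence boundaries for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=50)
        models[label] = m.fit(X, lengths)
    return models

def classify(models, sequence):
    """Return the Effort label whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```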

We then investigated the use of interactive sound feedback for dance pedagogy, building on the practice of vocalizing while moving. Our goal was to allow dancers to access a greater range of expressive movement qualities through vocalization. We proposed a methodology for the sonification of the Effort Factors, as defined in Laban Movement Analysis, based on vocalizations performed by movement experts: Certified Laban Movement Analysts vocalized while performing movements with the various Effort Factors, and these recordings formed the basis of the sonification.

We developed a sonification system based on a method called Mapping-by-Demonstration, where mapping refers to the relationship between movement features and sound parameters. Rather than formulating an analytical description of the relationship between movement and sound, Mapping-by-Demonstration trains a machine learning model based on Multimodal Hidden Markov Models (MHMMs) on example performances of movements with the various Laban Effort qualities, paired with the associated vocalization sounds, in order to learn the motion-sound interaction model. The system is then used for interactive sonification: during the performance phase, the participants' live movement qualities are interpreted by the system, which continuously estimates the associated voice features in order to re-synthesize the pre-recorded vocalizations using corpus-based sound synthesis. Based on the experiential outcomes of an exploratory workshop, we proposed a set of design guidelines that can be applied to interactive sonification systems for learning to perform Laban Effort Factors in a dance pedagogy context.
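
The sketch below conveys the mapping-by-demonstration idea using Gaussian Mixture Regression as a simpler stand-in for the Multimodal HMMs of the actual system: a joint density over [movement features, voice features] is learned from the expert demonstrations, and at performance time the expected voice features are estimated conditionally from live movement features, which would then drive corpus-based synthesis. Feature names and dimensions are assumptions for illustration.

```python
# Minimal sketch of mapping-by-demonstration using Gaussian Mixture Regression
# (GMR) in place of the Multimodal HMMs described in the paper.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def train_mapping(movement, voice, n_components=8):
    """movement: (n_frames, d_m), voice: (n_frames, d_v), time-aligned demos.
    Learns a joint density over [movement, voice] features."""
    joint = np.hstack([movement, voice])
    gmm = GaussianMixture(n_components=n_components, covariance_type='full').fit(joint)
    return gmm, movement.shape[1]

def estimate_voice(gmm, d_m, x):
    """Conditional expectation E[voice | movement = x] under the joint GMM."""
    mu_m = gmm.means_[:, :d_m]
    mu_v = gmm.means_[:, d_m:]
    S_mm = gmm.covariances_[:, :d_m, :d_m]
    S_vm = gmm.covariances_[:, d_m:, :d_m]
    # responsibility of each component given the movement features only
    log_w = np.log(gmm.weights_) + np.array([
        multivariate_normal.logpdf(x, mu_m[k], S_mm[k]) for k in range(gmm.n_components)
    ])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # per-component conditional means, blended by responsibility
    cond = np.array([
        mu_v[k] + S_vm[k] @ np.linalg.solve(S_mm[k], x - mu_m[k])
        for k in range(gmm.n_components)
    ])
    return w @ cond
```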

Publications:

Diego Silang Maranan, Sarah Fdili Alaoui, Thecla Schiphorst, Philippe Pasquier, Pattarawut Subyen, Lyn Bartram. “Designing For Movement: Evaluating Computational Models using LMA Effort Qualities”, In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), Toronto, 2014.

Sarah Fdili Alaoui, Jules Françoise, Thecla Schiphorst, Karen Studd, Frédéric Bevilacqua. “Seeing, Sensing and Recognizing Laban Movement Qualities”, In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), Denver, 2017.

Jules Françoise, Sarah Fdili Alaoui, Frédéric Bevilacqua, Thecla Schiphorst. “Vocalizing Movement for Laban Effort Sonification in Dance Pedagogy”, In Proceedings of the ACM Conference on Designing Interactive Systems (DIS), Vancouver, 2014.