Traces of Expressivity: high-level score of data-stream for interdisciplinary works
Updated: Aug 21, 2019
Keywords: Interdisciplinarity, Communication, Symbols
IRCAM Artistic Research Residency, UCA, UAntwerp
Collaborating with great researchers at Ircam on a subject I am passionate about has always been rich and stimulating for me.
This year I started a project on the conception of an electronic score (software) which provides a real-time data stream as a source of formalised sound and gestural information. “Traces of Expressivity: high-level score of data-stream for interdisciplinary works” is the subject that I am currently carrying out at Ircam (Institut de recherche et coordination acoustique/musique) in Paris, within the artistic research residency program, in collaboration with the Musical Representations Team and the Sound Music Movement Interaction Team. See the proposal here.
This project aims to formalize a technique tailored for score creation in the context of music-based interdisciplinary works. In multidisciplinary works, the significance of communication between artists from the different artistic disciplines led me to think about the conception of a hybrid, universal, high-level score. This new paradigm should allow us to transmit the intentions and ideas of a composer to choreographers, set designers and other artists involved in the dramatic, performing, visual or digital arts. This hybrid score consists of a notation of gestures (graphic notation), as well as a data stream score (the subject of this residency) that provides a real-time data stream as a source of formalized sound and gestural information. The data stream score should be able to convert the audio signal of the music being performed, as well as the performers’ physical movements (gestures), into data. In this project, our attention will be focused on defining the relevant semiological parameters, which lies at the heart of this research.
Early in the project I realized that this research cannot aim to produce universal software that suits different types of composition. That is why I changed my strategy and focused my work on my own pieces. The data stream score will instead be conceived for works in which gesture is the main element of the musical discourse; at this stage, the score cannot claim to be universal.
Although the data stream score provides data in real time, a huge quantity of information must be prepared and formalised before the performance in order to obtain an accurate score as output. In this sense the project is a combination of CAC (Computer-Assisted Composition) and real-time processing.
On December 11 a meeting took place with Jean Bresson and Frédéric Bevilacqua, head of the Sound Music Movement Interaction team at IRCAM. We discussed possible techniques for building a gesture-recognition interface that allows us to recognize and classify certain types of gestures in an audio file, according to a set of rules and instructions (a learning process). In order to observe the behavior of gestures in the piece, we decided to use audio descriptors.
The patch below takes a sound as input and gives us a graphic representation of its content. As output we get a compilation of this information as an SDIF (Sound Description Interchange Format) file.
Zamyad (2015), a piece for cello and electronics, is the subject of this operation. The work is composed from gestures that come from traditional Iranian music. After analyzing the piece by listening to the recording, we annotated the score with 17 different types of gestures and micro gestures.
Classified gestures in Zamyad
The sound file is segmented into many micro sound files according to the analysis. A gesture can be played in different ways depending on the instrument, the performer and the acoustic conditions of the recording. It is important that the descriptors capture these variations without being confused by them.
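To illustrate this segmentation step, here is a minimal Python sketch (not the actual patch; the boundary times and sample rate below are invented for the example) that slices a recording into micro sound files from annotated time boundaries:

```python
import numpy as np

# Hypothetical annotation: gesture boundaries in seconds, as produced
# by the manual analysis of the recording (values are illustrative).
SR = 44100                                # sample rate of the recording (assumed)
boundaries = [0.0, 1.3, 2.1, 3.8, 5.0]    # start/end times of micro gestures

def segment(audio, sr, times):
    """Slice a mono audio array into micro sound files at the given times."""
    samples = [int(t * sr) for t in times]
    return [audio[a:b] for a, b in zip(samples[:-1], samples[1:])]

# Silent placeholder signal standing in for the Zamyad recording.
audio = np.zeros(int(5.0 * SR))
segments = segment(audio, SR, boundaries)
print(len(segments))   # 4 micro sound files
```

Each resulting array would then be written out as its own sound file and passed to the descriptor analysis.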
Sometimes a gesture is a combination of two or more micro gestures (Figure below).
Let’s go back to the structure of the patch. In the patch we used an object from the IAE (Interactive Audio Engine) library. After the audio file is segmented according to the analysis, descriptors define the features of each segment. The descriptors analyze the sound in real time and provide features such as centroid, spread, skewness, kurtosis, roll-off, fundamental frequency, noisiness, inharmonicity, odd-to-even energy ratio, deviation, loudness, roughness, etc.
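For readers unfamiliar with these descriptors, the following Python sketch (not Ircam's implementation) computes four of them, centroid, spread, skewness and kurtosis, from a single magnitude spectrum using the standard spectral-moment formulas:

```python
import numpy as np

def spectral_moments(mag, freqs):
    """Spectral centroid, spread, skewness and kurtosis of one magnitude
    spectrum, computed as moments of the normalized spectral distribution."""
    p = mag / mag.sum()                    # treat the spectrum as a distribution
    centroid = (freqs * p).sum()           # mean frequency
    spread = np.sqrt(((freqs - centroid) ** 2 * p).sum())
    skewness = ((freqs - centroid) ** 3 * p).sum() / spread ** 3
    kurtosis = ((freqs - centroid) ** 4 * p).sum() / spread ** 4
    return centroid, spread, skewness, kurtosis

# One analysis frame of a toy 440 Hz sine at 44.1 kHz.
sr, n = 44100, 4096
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 440 * t) * np.hanning(n)
mag = np.abs(np.fft.rfft(frame))
freqs = np.fft.rfftfreq(n, 1 / sr)
c, s, sk, ku = spectral_moments(mag, freqs)
print(round(c))   # centroid close to 440 Hz for a pure tone
```

In the real patch these values are produced frame by frame by the IAE descriptor objects and collected in the SDIF output.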
The goal consists in building a gesture recognizer, based on an artificial neural network or Gaussian Mixture Models, in order to identify the gestures in the piece and to extract all the parameters that can be used for further artistic operations.
In parallel I worked with Diemo Schwarz on the XMM library, a “portable, cross-platform C++ library that implements Gaussian Mixture Models and Hidden Markov Models for recognition and regression. The XMM library was developed for movement interaction in creative applications and implements an interactive machine learning workflow with fast training and continuous, real-time inference.” <http://ircam-rnd.github.io/xmm/>.
We made a patch that takes all the sound files of the gestures (the segmented recording of Zamyad) as training data. It should then be able to recognize every gesture while the recording of the whole piece is playing.
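To give an idea of this train-then-recognize workflow, here is a much-simplified Python sketch (not the XMM patch: one Gaussian per gesture class instead of a full mixture, and invented descriptor values in place of the real training data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 2-D descriptor vectors (e.g. centroid, loudness)
# for two hypothetical gesture classes. Real training would use the
# descriptor streams of the segmented Zamyad recording.
classes = {
    "tremolo":   rng.normal([2000.0, 0.8], [300.0, 0.1], size=(20, 2)),
    "glissando": rng.normal([900.0, 0.3], [300.0, 0.1], size=(20, 2)),
}

# Training: fit one Gaussian (mean + diagonal variance) per class,
# a single-component simplification of XMM's Gaussian Mixture Models.
models = {name: (x.mean(0), x.var(0)) for name, x in classes.items()}

def log_likelihood(frame, mean, var):
    """Log-density of a frame under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frame - mean) ** 2 / var)

def recognize(frame):
    """Return the gesture class whose model best explains the frame."""
    return max(models, key=lambda n: log_likelihood(frame, *models[n]))

print(recognize(np.array([1950.0, 0.75])))   # -> tremolo
print(recognize(np.array([880.0, 0.25])))    # -> glissando
```

During the performance, the same `recognize` step would be applied continuously to the incoming descriptor frames, which is what XMM's real-time inference provides.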