
Notation of Gesture and Modeling: The process of composition of Mots de jeu


International Conference on Technologies for Music Notation and Representation (TENOR 2020), Hamburg, Germany


Keywords: Modeling, Gesture, Notation, Formant Synthesis


Jean-François Trubert, University of Côte d’Azur

Alireza Farhang, University of Côte d’Azur (IDEX UCA-JEDI, ANR15-IDEX-01) / University of Antwerp


This article presents the process of composition of a piece whose musical material integrates the syntactic and semantic dimensions of a poetic language. It also explains how the semantics of speech gestures are modeled through graphical notation, and how formant synthesis is used to generate the electronic sounds, in order to outline the way poetry and prosody contribute to the microstructure and macrostructure of a vocal piece.


Introduction

Mots de jeu[i] was born of the challenge to compose a musical work built on the sensitive and emotional content of a text, and thereby to create what could be called a speech gesture. Beyond the abstract meaning of words (the signified), the composition seeks to capture the vocal gestural imprint of poetry and use it as both a morphological and structural model for the entire work, acting as a counterpoint to the signifier.


From a technical perspective, the piece involves reproducing phonemes and generating sounds using formant synthesis[ii] in the OpenMusic programming environment (OM-Chant library),[iii] and producing a sound that gives the illusion of an augmented human voice. The ambiguous quality produced by formant synthesis echoes the challenges specific to the language of several poetic texts drawn from the collection L’Espace du dedans by Henri Michaux. The language of these poems constantly reveals new discoveries, and their original and emerging poetic content can never be fully grasped,[iv] opening the door to multiple points of view in a process of composition that is constantly moving back and forth between the microscopic and local dimension of the language and its models (phonemes, syllables and words) and the macroscopic dimension (syntactic and semantic). The result is a superposition of formal structures organized according to the vocal gestures selected for their paradigmatic function as form generators. As we will explain below, this has an impact not only on the sound structures but also on the spectral morphology of the whole piece, as the work on vocal formants necessarily has an incidence on the harmonic and textural dimension of the synthesis process and its combination with the voices.


The composition process of this piece will therefore reveal what happens upstream of the sound matter where, through graphic schemes, the prosody of Henri Michaux’s text gives birth to an acoustic matter. How is the poetic environment perceived and analyzed? What vocal gestures have been combined and how? How can a gesture or vocal gestures lead to more macroscopic musical structures and what notation signs could be used to convey them? How do we bring human vocal matter in contact with synthesized vocal matter?


Modeling prosody

To meet the challenge of creating a formal structure for this piece based on the voice and of mobilizing all the vocal resources of the poem, it was essential to work from a text that could be segmented into small speech units while retaining the poetic expression of the whole. The poetic material also needed to be malleable to a certain extent, to allow us to move the words, the syllables and the phonemes, for instance. This full potential is immediately apparent on reading the poem collection L’Espace du dedans by Henri Michaux, where these very aspects reveal a tremendous musical power.[v]


Regarding the composition of Mots de jeu, the goal was not simply to reflect the text in the music,[vi] but rather to follow the principle of a geometric translation: to model a form of contamination between verbal matter and musical matter, where the texture of the words – both their meaning and the bodily form of an intention or of a logos[vii] – would then serve as reference. To achieve this goal, a graphic representation of vocal gestures was first imagined, as an abstraction of speech and of its acoustic temporal form (see Figure 1).


Figure 1. Graphic representation of a speech gesture.


The preliminary process therefore involved modeling speech gestures and defining graphic schemes. The value of these schemes is strictly subjective. The graphic sign is placed in a space (on the page) that is neither ordered nor homogeneous, where it combines several functions conveying the strength, the energy and the internal movement of the gestures that arise from the words, the letters, the interjections, etc. In some cases, relationships are established between the size of the letters and the dynamics; in others, correlations are made between durations, etc. (see Figure 2).


Figure 2. The strength, energy and internal movement of the phonemic gestures.


Graphic design of vocal gestures

A graphic scheme is a visual bridge that seeks to lead the observer from a concrete unit to an abstract one. The unit becomes an element of material in the Schaefferian sense of the term: from each unit, a character (essentially of a structural and morphological nature) emerges and crystallizes as it adopts different variables and becomes “a structure of value variation”, to use Michel Chion’s own words.[viii] Each graphic sign can be used multiple times, in different situations and with different parameter settings. We therefore consider that there are several levels of modeling of a source vocal gesture. Each is determined by the degree of distance (i.e. the degree of fidelity) with respect to the source gesture, from level 1, where there are many similarities between the phonology of the text and its spectral and transcribed translation, to levels 3 and 4, where elements from several source gestures are combined in such a complex way that the origin of the gestures can no longer be identified. At this level, the process of abstraction grants the freedom to translate the prosody into the sound matter illustrated in the graphic notation and materialized in music. This approach is applied to both the sound synthesis part and the vocal part.


Level 1

A) The first level represents the morphology of words derived directly from their phonetic analysis. In this case, the graphic schemes represent the succession of phonemes and take into account the transitions or morphing (passage from one state to another without a noticeable transition) between them. The first task was to group phonemes into classes according to their spectral content (see the table below).

In order to represent phonemes graphically, we grouped them in a slightly different, more subjective way. Categories were defined according to the rate of periodicity and the articulatory form of their spectral content. The black color represents periodicity (voicing) and the gray color breath noise (voiceless or devoiced phonemes). Some phonemes, such as vowels, have a simple and stable structure, while others, such as liquids, have a shape that undergoes micro-evolutions. The phoneme [R] presents a more complex spectral structure: it is a vibrating phoneme (trill) whose shape varies depending on the context.[ix] The table below shows the outline of the phoneme shapes.
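That table is graphical; purely as an illustration of the kind of grouping involved, the short Python sketch below lists a few simplified classes. The class names and memberships shown are our own assumptions for the sake of the example, not the exact categories used for the piece.

# Simplified grouping of some French phonemes by periodicity (voicing)
# and articulatory behavior; illustrative only.
PHONEME_CLASSES = {
    "vowels (periodic, stable)":           ["a", "e", "i", "o", "u", "y"],
    "nasals (periodic, soft contours)":    ["m", "n"],
    "liquids / trills (micro-evolving)":   ["l", "R"],
    "voiceless fricatives (breath noise)": ["f", "s", "ch"],
    "plosives (abrupt onset and offset)":  ["p", "t", "k", "b", "d", "g"],
}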


B) These profiles are used as a graphic alphabet, a type of abstraction designed to show the acoustic behavior of each phoneme. Obviously, the progressive transition from one phoneme to another must also be represented in the graphic scheme. Figure 3 shows the progression of the acoustic structure of the word “étrange”.

Figure 3. Graphic scheme representing the profile of the acoustic structure of the word “étrange”.


The word begins with the vowel [é], which has a hard attack, followed by the plosive consonant [t], which has an abrupt onset and offset, hence the short silence that precedes the phoneme [R], which has a granular character, and so on. The scheme also expresses the intonation of the French language: the upward movement of the word emphasizes the second syllable, while the tone descends on the last two letters.


Level 2

This level is considered an intermediate step in the transformation of the word gesture into a musical gesture. Thus, the graphic schemes that represent “phonemes” and “words” free themselves from their linguistic content (the signified) to produce a musical gesture (a kind of virtual signifier). It can be quite short, limited to a single phoneme, or as long as a fraction of a word. The gesture is obtained by observing the spectral behavior of the profile and serves to form the composition material of the piece (see Figure 4).

Figure 4. The profile of the vocal gestures derived from the words “étrange”, “plus” and “chose”.


Level 3

At the third level, simple gestures are superimposed in order to build more complex gestures. At this level, the morphology moves away from its original state to the extent that the source words can no longer be identified.


Voice processes can be included, such as vibratos, pitch changes, staccato or damped movements, which are not usual in spoken language. This level corresponds to purely vocal gestures, sometimes quite complex, but without identifiable words. At this stage, the macro-gestures are almost ready to be used as structural elements of the musical discourse.


The example below represents the gesture for the entry of voice V, which is derived from the word “étrange”. The vocal gesture is electronically augmented. The phoneme [R], which has a granular character, is first pronounced by the voice. Then a granular synthesized sound similar to the phoneme [R] transforms the beginning by stretching it out not only in time, but also in space. The spatialized synthesized sound, which lasts around 4 seconds, undergoes micro-variations in timbre. It is then enriched by another synthesized sound or gesture, which consists of a succession of the vowels [a] and [U] and a fairly fast vibrato. The result is represented graphically in Figure 5.

Figure 5. Profile of a complex gesture.


Level 4

The fourth level appears in the final score, which can be considered as a hybrid medium where conventional notation meets graphic notation. We note that several complex gestures are superimposed to give a wider dimension to the vocal part. When this level is reached, the formal and textural progression of the work can be seen (see Figure 6).

Figure 6. Graphic representation of gestures in the score (measures 24 to 26).


The abstract aspect of these schemes generates a malleable material which, depending on the context, can be adapted to new musical situations in which the profiles adopt a new duration, a new spectrum, a new pitch, and a new temperament. Gestures can be segmented into shorter units or, on the contrary, combined with other micro-gestures to create a complex profile while sharing certain parameters such as temperament, etc.


Thus, the vocal gestures – sung but also electronically simulated – go through a series of derivations which produce variations in their parameters and particularly in their temporalities. The gestures can be very localized (limited to a phoneme or a syllable) or more extended (to a sentence or even a structural unit – see Figure 6).


The drawings also have a poetic and semantic function. They do not replace the traditional notation but complement it. They reflect not only the microstructures, but also a macrostructure which integrates the meaning of the text without using it in its original state. Consider for example the introduction and the first section of the piece, composed from the following text:


Oh! Quelle étrange chose au début, ce courant qui se révèle, cet inattendu liquide, ce passage porteur, en soi, toujours et qui était.

On ne reconnait plus d’entourage (le dur en est parti).

On a cessé de se heurter aux choses. On devient capitaine d’un FLEUVE…


In this excerpt, Michaux talks about music, time and the fact that music bends time and resists the flow of time. For the poet, “to make music […] is to practice the art of drifting, which is not only to let oneself be carried along where the currents lead, but to modify the perceived movement of music.”[x] The introduction of the piece presents only synthesized sounds, accompanied by the singers’ body gestures, without voice. The synthesized sounds are modeled from the combination of fricative and plosive phonemes[xi] such as [r] (a voiceless alveolar r), [∫], [k] and [p]. This opening represents both resistance and current (as in the image of the river). The modeled gestures therefore contain an accumulated energy which is suddenly released before coming up against other obstacles (see Figure 7).

Figure 7. Sonogram of the opening of the piece (before the first entry of the singers) and the scheme distribution. This extract lasts a total of 12 seconds. The gestures result from the superimposed graphic schemes. In the temporal evolution of the spectrum we observe the sudden change in the formants produced by the occlusive phoneme “k”. The short, damped breaths are also visible.


The same text generates the entire next section (the first section), where singing first appears: Figure 8 represents a rough drawing of a large gesture of affirmative speech which is the inspiration for the composition of the first section of the work (measures 2 to 43). This section, which lasts about 3 minutes, has a latent character, interrupted by the rhythm of short caesuras such as “OH”, a vocal gesture of exclamation (measures 2, 9, 12 and 19). The short cadence beginning at measure 24 concludes the four caesuras that precede it.

Figure 8. Graphic representation of a long gesture (the gesture that opens the work).


Different types of gestures (voiced, voiceless, breathed, etc.) can be superimposed to create even more complex profiles (see Figure 9). In measures 43 and 44 for example, the fricative or plosive phonemes are combined with the constantly evolving vowels and nasals. Here again, a flow of vocal gestures is generated by the singers’ voices combined with the synthesized sounds. The texture of the piece is quite dense, unlike the beginning.

Sometimes, when two formant synthesis voices are superimposed while going through constant pitch and resonator modifications, a third virtual voice can be heard. Its material is a succession of harmonics from a fundamental tone, resembling East Asian overtone singing (measures 44 and 45). This further increases the density of the texture of this section.


Figure 9. The combination of voiced and voiceless gestures generates more complex gestures.

The textural progression and the harmonic outline of the work

Although the progression of the piece is based on changes in texture and timbre, rather than on harmonic relationships in the classical sense of the term, in some sections of the work the textural progression can be described through chords and relationships between intervals. The table below describes the formal progression of the piece.

The contrapuntal structure of the piece, quite discreet at first, progresses around a central pitch (B flat) and becomes increasingly dense. Starting from measure 25, the vocal part covers an interval that extends to an augmented fifth (see Figure 9). The process continues: the gestures become shorter and shorter, leading to reduced phrases, while the density of the texture increases and the gestural or phonetic units multiply.


Figure 10 is an example of complex matter in which the percussive phonemes of voice II are combined with the damped and filtered breath-like noise (pulse-train) of the synthesized voice, while the voiced gestures are augmented by a succession of synthesized vowels, simulated using resonators and morphing techniques.


The first brief appearance of a large chord, which becomes the dominant color in the finale of the piece, occurs in measure 75 (see Figure 11). In measure 82, voice II is in the high register, and in measures 81 and 82 (see Figure 12) the range of the vocal part reaches a climax in the fortissimo dynamic. In measure 83, voice III (spoken) makes the transition from the loaded and tumultuous texture of measure 82 to a calmer and more static texture. In measure 84, as in the previous measures, voice III pronounces “taine” with an inhaled breath, prolonged by the sustained breath of the electronic sound. The word “d’un” in measure 85 puts an end to the voiced gestures of the synthesized sound and begins a new discourse whose material is the whispered voice and the damped breaths produced by electronic means.

Figure 10. Measures 77 to 80, the sung voice combined with the synthesized sounds.


At the end of the piece, with the return of the chord (see Figure 13) resembling a chorale, a vertical temporality finally replaces the horizontal temporality. The linguistic elements become more concrete and the text begins to surface in an intelligible way.

Figure 11. First appearance of the chord, a combination of voiced and voiceless gestures generating more complex gestures.



Figure 12. Measures 81 to 86, the sung voice combined with the synthesized sounds.


Figure 13. With the emergence of chords, the vertical temporality replaces the horizontal temporality.


Technical aspects of the sung voice and of the synthesized sounds

Voice description systems are usually based on a source-filter model of acoustic production. The source produces a sound, giving it pitch or noise and power. The vocal tract acts as an acoustic filter and defines the timbre of the sound originating from the source. Each of these contributions plays an important role in the spectral content, and therefore in the phonetic information, of each sound.
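As a minimal numeric sketch of this source-filter idea (written here in Python with NumPy/SciPy purely for illustration, not the Chant implementation itself), a pulse train at the fundamental frequency can be passed through a few two-pole resonators standing in for the formants of the vocal tract. The formant values below are rough textbook approximations of an [a]-like vowel, chosen only as an example.

import numpy as np
from scipy.signal import lfilter

SR = 44100  # sample rate in Hz

def pulse_train(f0, dur):
    """Glottal-like source: one impulse per period of the fundamental f0."""
    n = int(SR * dur)
    src = np.zeros(n)
    src[::int(SR / f0)] = 1.0
    return src

def resonator(x, freq, bw):
    """Two-pole resonant filter approximating a single formant region."""
    r = np.exp(-np.pi * bw / SR)
    a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * freq / SR), r * r]
    return lfilter([1.0 - r], a, x)

# Approximate formant targets (center frequency Hz, bandwidth Hz) for an [a]-like vowel
formants = [(700, 80), (1220, 90), (2600, 120)]

source = pulse_train(f0=220.0, dur=1.0)
vowel = sum(resonator(source, f, bw) for f, bw in formants)
vowel /= np.max(np.abs(vowel))  # normalize to avoid clipping

Swapping the pulse source for noise changes the voiced/voiceless quality, while the resonator settings determine the phonetic color.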


For the composition of Mots de jeu, the Chant synthesizer was used for voice synthesis, controlled via the OM-Chant library in OpenMusic. OM-Chant provides continuous control of the parameter settings for the different formants. Synthesis events can be fundamental frequency values of the pulse train (f0), FOF parameter matrices, noise generators or formant filters.


The role of electronics is to extend the timbre and technical possibilities of the human voice. From a compositional perspective, however, the synthetic sounds must remain linked to the characteristics of the human voice, hence the use of formant synthesis. The modeled gestures therefore always remain linked to the vocal gestures of the singers. Different types of sounds and processing were used to produce the synthesized sounds: filtered or pulsed white noise to produce a colored breath (continuous, damped or granular), and periodic formants to produce the vowels. Further processing was added, such as vibrato and morphing.


Filtered breath

Figure 14 shows an OpenMusic patch that generates a continuous noise. The noise parameters are controlled by an amplitude envelope and by inputs which can be used to adjust the total duration of the noise and its onset. Three filters can be applied to the source signal. Each filter contains two formants with two center frequencies. The parameter settings for the number of formants, the onset, the duration and the amplitude of each frequency can be adjusted separately.

Figure 14. OpenMusic patch generating a continuous breath.


At the patch output, a synthesis engine generates the sound based on the input values. The source can produce a damped noise or a kind of staccato breath whose damping frequency is controlled by an envelope. The three filters mentioned above can be applied to the pulsed noise.


When the frequency is low, the sound generated resembles a pulsation. When the frequency value is increased, the rendered sound is similar to a voiceless alveolar [r], a jeté, or flutter-tonguing on the flute, filtered along independent formant trajectories.
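The sketch below illustrates these two behaviors outside of OpenMusic (a rough Python approximation, not the patch itself): white noise shaped by an amplitude envelope gives the continuous colored breath, and chopping that envelope into exponentially damped puffs at a given rate gives the pulsation or staccato breath.

import numpy as np
from scipy.signal import lfilter

SR = 44100

def resonator(x, freq, bw):
    """Two-pole resonant filter coloring the noise (one formant region)."""
    r = np.exp(-np.pi * bw / SR)
    a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * freq / SR), r * r]
    return lfilter([1.0 - r], a, x)

def breath(dur, pulse_rate=0.0):
    """White noise under a global envelope; pulse_rate > 0 chops it into
    damped puffs (staccato / pulsed breath)."""
    n = int(SR * dur)
    noise = np.random.uniform(-1.0, 1.0, n)
    env = np.interp(np.linspace(0.0, 1.0, n), [0.0, 0.1, 0.8, 1.0], [0.0, 1.0, 0.7, 0.0])
    if pulse_rate > 0.0:
        phase = (np.arange(n) * pulse_rate / SR) % 1.0
        env *= np.exp(-8.0 * phase)  # exponential damping within each pulse period
    return noise * env

continuous = resonator(breath(2.0), freq=1500, bw=300)                # continuous colored breath
pulsed = resonator(breath(2.0, pulse_rate=20.0), freq=1500, bw=300)   # ~20 Hz pulsation

Raising pulse_rate moves the result from a slow pulsation towards the flutter-tongue-like texture described above.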


This effect was often used to prolong the gestures resulting from the singers’ pronunciation of [r] (measure 50), or as a pulsed breath passing through the whispered voice in measures 87 to 90. Figure 15 shows the patch and the spectral content for this gesture. On the sonogram, the independence of the formant trajectories is clearly visible. We can observe the change in frequency of the pulsation, which follows the frequency envelope of the patch.

Figure 15. Patch generating a continuous breath with the settings for each formant. The sonogram below shows how the formants of the produced sound evolve.


Vowels

As a general rule, the different vowels are simulated by adjusting the frequency, amplitude, and bandwidth values of three groups of simple sine waves (three formants). The trajectory of each of these values determines how the formant of the sound produced evolves, and therefore how the formant of the vocal gesture evolves.
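As a point of reference, the small table below (a Python dictionary with approximate textbook values, not the exact settings used in the piece) shows the kind of three-formant description involved: for each vowel, three (frequency, amplitude, bandwidth) triplets whose trajectories are then drawn over time.

# Approximate (center frequency Hz, linear amplitude, bandwidth Hz) triplets
# for three formants of a few vowels; illustrative values only.
VOWEL_FORMANTS = {
    "a": [(700, 1.0, 80), (1220, 0.5, 90), (2600, 0.25, 120)],
    "i": [(270, 1.0, 60), (2300, 0.4, 90), (3000, 0.25, 120)],
    "u": [(300, 1.0, 60), (870, 0.4, 90), (2250, 0.20, 120)],
}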


Transition between phonemes

Compared with other signals, the speech signal requires more elaborate transitions. These sections show significant formant evolutions. Consonants and articulations (staccato, legato, etc.), for example, are expressive states which cause spectral micro-evolutions that are essential to their phonetic and semantic profile. The word “ama”, for example, represents the transition between two stable states (two vowels) via the nasal “m”. Using the graphic representation of the formant paths (sonogram), we can observe the temporal behavior of the frequencies, amplitudes, bandwidths, etc. between the phonemes. Figure 16 shows the evolution of the voice formants pronouncing “ana” (left) and “ama” (right). As can be seen in the sonogram, the profile of “M” is straighter, which can be explained by the fact that “M” has a more abrupt onset and offset than the phoneme “N”, as can also be heard.

Figure 16. The sonogram resulting from the analysis of the phonemes “N” (left) and “M” (right) surrounded by the vowel “A”.


The transition and morphing between voiced vowels and phonemes require a more elaborate technical operation. The CH-TRANSITION function[xii] is used for the transition between phonemes. It adjusts the behavior of intervals when they overlap. Figure 17 represents a patch where the vowels (here O, EOE, I) make the transition between phonemes such as B or L. In the patch, the frequency band filters, the transition envelope, the fundamental frequency and the amplitude were used to adjust the parameter settings which determine the spectral content of the phonemes. By manipulating the envelopes, phonemes are produced which cannot be pronounced by the human speech production system.


Figure 17. Phoneme succession patch with the transition controlled by frequency band filters.
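The idea of such a controlled transition can be sketched outside of OM-Chant as a simple interpolation of formant parameters between two target states. The Python fragment below is only a conceptual analogue of what the patch does, with illustrative formant values and a curve exponent standing in for the transition envelope.

import numpy as np

# Formant targets (center frequency Hz, amplitude, bandwidth Hz) for two
# vowel-like states; illustrative values only.
STATE_A = np.array([(700.0, 1.0, 80.0), (1220.0, 0.5, 90.0), (2600.0, 0.25, 120.0)])
STATE_O = np.array([(400.0, 1.0, 70.0), (800.0, 0.5, 80.0), (2600.0, 0.20, 120.0)])

def morph(start, end, n_frames, curve=1.0):
    """Interpolate every formant parameter from start to end.
    curve != 1 bends the transition envelope (slower start, faster end, etc.)."""
    t = np.linspace(0.0, 1.0, n_frames) ** curve
    return [(1.0 - x) * start + x * end for x in t]

frames = morph(STATE_A, STATE_O, n_frames=50, curve=2.0)  # 50 intermediate formant frames

Because nothing constrains the interpolated values to trajectories a vocal tract can actually follow, this is also where “impossible” phonemes appear.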


Vibratos

A vocal vibrato also requires a complex formant trajectory. The OM-Chant library allows us to adjust different parameter settings that generate vibratos that cannot be achieved by the human voice, while maintaining the voice’s characteristics. Two of these parameters are speed (frequency) and amplitude.
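A hedged illustration of these two controls (again in Python, not the OM-Chant patch): the fundamental-frequency trajectory is simply modulated by a sine whose rate and depth can be pushed far beyond what a singer can produce.

import numpy as np

SR = 44100

def vibrato_f0(base_f0, dur, rate, depth):
    """Fundamental-frequency curve with sinusoidal vibrato.
    rate is the vibrato speed in Hz, depth a fraction of base_f0."""
    t = np.arange(int(SR * dur)) / SR
    return base_f0 * (1.0 + depth * np.sin(2.0 * np.pi * rate * t))

natural = vibrato_f0(220.0, 2.0, rate=6.0, depth=0.02)      # within the range of a singer
extreme = vibrato_f0(220.0, 2.0, rate=30.0, depth=0.2)      # far beyond vocal capability

As long as the formant structure of the filter is kept vocal, even the extreme curve retains a voice-like identity.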


Sound triggering

The electronic component of Mots de jeu was created entirely in a computer-assisted composition environment; none of the processing is carried out in real time. Close to 500 micro-gestures were generated in OM-Chant, then edited and spatialized (in stereo) in Logic Pro,[xiii] reducing the number of edited sounds to 48. To trigger the sounds, a simple Max patch was designed. It can be activated either by the sound engineer in the sound room or by the computer music designer using a pedal or a computer keyboard, in which case the electronic sounds follow the temporality of the musicians relatively closely.
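The triggering logic itself is very small; as a rough stand-in for the Max patch (the file names and the play() call below are hypothetical placeholders), each trigger simply advances through the ordered list of the 48 edited sound files.

from pathlib import Path

cues = sorted(Path("sounds").glob("cue_*.wav"))  # hypothetical cue_01.wav ... cue_48.wav
index = 0

def trigger():
    """Advance to the next cue; bound to a pedal, a key press, or the engineer's console."""
    global index
    if index < len(cues):
        print("Triggering", cues[index].name)  # an actual play(cues[index]) call would go here
        index += 1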


Conclusion

Thanks to the human capacity to perceive the slightest changes in the spectral content of the voice, and thanks to formant synthesis and visual control technologies, composers have a wide range of possibilities for exploring material produced by the voice. L’Espace du dedans[xiv] is a series that draws on formant synthesis techniques while focusing on the man-machine relationship, on poetry and on new technologies. The composition Mots de jeu is an attempt to illustrate the poetry of Henri Michaux by conveying what language paradoxically is unable to express. Henri Michaux does not hesitate to express his frustration that words are trapped in what we are taught and what others would like to impose on us. For him, language imposes limits on beings and things; it forces the world into a grid and freezes meanings and identities.[xv] Or, to use Gaston Bachelard’s words, Mots de jeu somehow attempts to illustrate that “poetry is a metaphysics of the present moment […]. It is the principle of essential simultaneity where the more dispersed and disunited being achieves unity”.[xvi]


While Michaux did not hesitate to invent new words which acoustically remained familiar to the ear of a French speaker, he would probably have been tempted to invent new phonemes and include them in his poetry, had he had access to the technologies we have today. In this sense, Mots de jeu shares Michaux’s approach.



Footnotes

[i] Mots de jeu for 5 female voices and electronics is the first of a series of pieces entitled L’Espace du dedans, based on the text by the Franco-Belgian poet Henri Michaux (1899-1984). Commissioned by the French Center of Musical Creation (CIRM), it was premiered on December 9, 2018 by the vocal ensemble Mora Vocis at the Marc Chagall museum in Nice, as part of the Manca festival, with the assistance of Camille Giuglaris.


[ii] Rodet, X. “Time-domain Formant-wave Function Synthesis”, Computer Music Journal, 8 (3), 1984.


[iii] OpenMusic is a visual programming environment for musical composition. To find out more:

- Bresson, J., Agon, C. and Assayag, G. (2011). OpenMusic. Visual Programming Environment for Music Composition, Analysis and Research. In ACM MultiMedia 2011 (OpenSource Software Competition), Scottsdale, AZ, USA.

- Assayag, G., Agon, C., Fineberg, J. and Hanappe, P. (1997). An Object Oriented Visual Environment for Musical Composition. In Proceedings of the International Computer Music Conference (ICMC’97), Thessaloniki, Greece.

- Agon, C. (1998). OpenMusic: Un langage visuel pour la composition musicale assistée par ordinateur. Doctoral thesis, Pierre and Marie Curie University (Paris 6), Paris, France.

For OM-Chant:

- Bresson, J. and Michon, R. (2011). Implémentations et contrôle du synthétiseur CHANT dans OpenMusic. In Proceedings of the Journées d’Informatique Musicale (JIM’11), Saint-Étienne, France. <hal-01157083>

- Bresson, J. and Stroppa, M. (2011). The Control of the Chant Synthesizer in OpenMusic: Modeling Continuous Aspects in Sound Synthesis. In Proceedings of the International Computer Music Conference (ICMC’11), Huddersfield, UK.

- Foulon, R. and Bresson, J. (2013). Un modèle de contrôle pour la synthèse par fonctions d’ondes formantiques avec OM-Chant. In Proceedings of the Journées d'Informatique Musicale (JIM’13), Saint-Denis, France.

[iv]“[…] while ordinary language tends to vanish, as soon as it is understood, to make room for the ideas, impressions, acts, etc. that it evokes, poetry tends, in its very form, to persist in our mind; the poem is something that lasts, it is par excellencememorable.” RUWET, Nicolas, Language, music, poetry. Paris: Éditions du Seuil, 1972, p. 152.


[v] Consider the example below:

« Comme une cloche sonnant un malheur, une note, une note n’écoutant qu'elle-même, une note à travers tout, une note basse comme un coup de pied dans le ventre, une note âgée, une note comme une minute qui aurait à percer un siècle, une note tenue à travers le discorde des voix, une note comme un avertissement de mort, une note, cette heure durant m’avertit. »

This passage is used in the final section of the work (measure 98 until the end of the piece). The text revolves around the words “une note”, repeated 9 times. With this repetition, the poet uses a circular form to express the multiple facets of “une note”. As indicated in the score, this forms a counterpoint of words or from a certain point of view, a counterpoint of vocal gestures produced by juxtaposing the sung and breathed gestures of two groups of singers (voices I and II against voices IV and V). The counterpoint is possible because of the circular and repetitive quality of the text that does not necessarily unfold in a linear fashion. Voice III then operates as a contact surface between the two groups of voices and gives a dramatic aspect to the musical discourse. The text also speaks of the independent quality of the music, of the fact that the only meaning of a note is the note itself without referring to an external meaning. In several respects, the act of reading the text is already music in its own right because it speaks about music in the same way that music speaks for itself.


[vi] … and not necessarily by amplifying the meaning that music could add. Michaux was very reluctant to have his poetry set to music, and even categorically opposed to it, to the extent that he wrote to Robert Bréchon, the French poet and essayist, to tell him: “I am looking for a secretary who knows forty or fifty different ways of writing ‘no’ for me.”

In 1966, weary of the requests he had received for adaptations, he wrote to René Bertelé, the great specialist of 20th-century literature, to tell him: “Would you be so kind as to answer no to this lady on my behalf. I am not taking any more risks, and songs do not seem to me to be a good preparation for a composer who wants to capture ‘exorcisms’.”

In 1957, Pierre Boulez wrote to the poet: “I would like to set your Poésie pour pouvoir to music, and I would like you to read the text.”

To which Henri Michaux responded:

“Pierre Boulez, who is not just anyone, believed, even though I had warned him, that a musical composition with a powerful orchestration would add something or at least translate more directly (!) what this poem is about. After two auditions, the work was removed from the composer’s catalogue.”

MICHAUX (Henri), Lettre à Michel Mathieu, dated January 24, 1979, cited in MARTIN (Jean-Pierre), op. cit., p. 543.


[vii] BEAUFILS, Marcel, Musique du son, musique du verbe, Paris: Klincksieck, 2000 [new ed.], pp. 12-20.


[viii] CHION, Michel, Le Guide des objets sonores, Paris: Buchet/Chastel, 1983, p. 67.


[ix] “It is produced with a vibration ([R] in the IPA table) only when in a consonant group preceded by a plosive or a voiced fricative. This [R] then consists of three parts: a vowel followed by a tap followed by another vowel (for a complete description see Meunier, 1994). It is produced as a fricative in consonant groups where the plosives or fricatives are voiceless. In the latter case, the [R] is totally devoiced (voicing is assimilated with the preceding consonant) and takes the status of a fricative consonant. Finally, between vowels, the ‘r’ is usually an approximant: there is no tap and it is not a fricative. The place of articulation remains uvular in all cases.”

MEUNIER, Christine, « Phonétique acoustique », in Auzou, P. (ed.), Les dysarthries, Solal, 2007, pp. 164-173. <https://hal.archives-ouvertes.fr/hal-00250272>


[x] VILLARD, Marie-Aline, Poétique du geste chez Henri Michaux : mouvement, regard, participation, danse. Doctoral thesis, Université de Grenoble, 2012. <NNT : 2012GRENL009>. <tel-00919040>


[xi] A plosive occurs when a blocked airflow in the mouth, pharynx or glottis is suddenly released.


[xii] A function is a programming tool in the OpenMusic environment.


[xiii] Logic Pro is a computer-assisted music software program distributed by the American company Apple. This software can be used to edit multi-track sounds and add studio effects, among other things.


[xiv] L’Espace du dedans includes Mots de jeu (2018) for 5 female voices and electronics, Chuchotements burlesques (2019) for ensemble, an actor and electronics, and a piece for vibraphone and electronic chorus (2019).


[xv] <http://www.fabula.org/colloques/document1083.php>, accessed September 21, 2018.


[xvi] BACHELARD, Gaston, Paris: Stock, 1992, p. 103.
