This is a research corpus of 100 improvised musical duets recorded for the paper “Musical friends and foes: The social cognition of affiliation and control in improvised interactions” by JJ Aucouturier and Clément Canonne, Cognition, vol. 161, 94-108, 2017. http://www.sciencedirect.com/science/article/pii/S0010027717300276
A recently emerging view in music cognition holds that music is social and participatory not only in its production, but also in its perception, i.e. that music is in fact perceived as the sonic trace of social relations between a group of real or virtual agents. To investigate whether this is the case, we asked a group of free collective improvisers from the Conservatoire National Supérieur de Musique et de Danse de Paris (CNSMDP) to try to communicate a series of social attitudes (being dominant, arrogant, disdainful, conciliatory or caring) to one another, using only their musical interaction. Both musicians and non-musicians were able to recognize these attitudes from the recorded music.
The study, a collaboration between Clément Canonne and JJ Aucouturier, was published today in Cognition. The corpus of 100 improvised duets used in the study is also available online: http://cream.ircam.fr/?p=575
CREAM is looking for talented master students for two research internship positions, for a duration of 5–6 months in the first half of 2017 (e.g. Feb–June ’17). UPDATE (Jan. 2017): The positions have now been filled.
The first position mainly involves experimental, psychoacoustic research: it examines the link between increased/decreased pitch in speech and singing voice and the listener’s emotional response. It will be supervised by Louise Goupil & JJ Aucouturier, and is suitable for a student with a strong interest in experimental psychology and psychoacoustics, and good acoustic/audio signal processing/music production skills. See the complete announcement here: [pdf]
The second position is a lot more computational: it involves building audio pattern recognition tools in order to datamine a large corpus of audio recordings of human infant cries for acoustical patterns informative of the babies’ development of linguistic/communicative abilities. It will be supervised by JJ Aucouturier, in collaboration with Kazuo Okanoya and Yulri Nonaka from the University of Tokyo in Japan. It is suitable for a student with strong audio machine learning/music information retrieval skills and programming experience in Matlab or (preferably) Python. See the complete announcement here: [pdf]
Applications: send a CV and cover letter by email (see announcement). Interviews for selected applicants will be held in December’16-January’17.
C.L.E.E.S.E. (Combinatorial Expressive Speech Engine) is a tool designed to generate an infinite number of natural-sounding, expressive variations around an original speech recording. More precisely, C.L.E.E.S.E. creates random fluctuations around the file’s original contour of pitch, loudness, timbre and speed (i.e. roughly defined, its prosody). One of its foreseen applications is the generation of very many random voice stimuli for reverse correlation experiments, or whatever else you fancy, really.
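Conceptually, each C.L.E.E.S.E. variation can be pictured as a random break-point function (BPF) drawn over one of the recording’s contours. The Python sketch below illustrates the idea for pitch — note that this is an illustrative toy, not C.L.E.E.S.E.’s actual API, and the window size and deviation values are made up for the example:

```python
import random

def random_pitch_bpf(duration_s, win_s=0.1, sd_cents=200, seed=None):
    """Generate a random pitch break-point function (BPF): one
    (time, pitch-shift-in-cents) pair per analysis window, with the
    shifts drawn from a Gaussian centered on the original contour
    (0 cents = no change)."""
    rng = random.Random(seed)
    n_points = round(duration_s / win_s) + 1
    return [(round(i * win_s, 3), rng.gauss(0.0, sd_cents))
            for i in range(n_points)]

# e.g. a 0.5 s recording segmented into 100 ms windows:
bpf = random_pitch_bpf(0.5, win_s=0.1, sd_cents=200, seed=1)
```

Drawing thousands of such BPFs (and the analogous ones for loudness, timbre and speed) and applying each to the same original recording is what yields the large, random stimulus sets needed for reverse correlation.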
Exciting news: Laura, Pablo, Emmanuel and JJ from the lab will be “touring” (the academic version thereof, at least) Japan this coming week, with two events planned in Tokyo:
- We’ll present our voice transformation software DAVID and our related research on emotional vocal feedback at the “Europa Science House” of the Science Agora event at Miraikan – Dates: Thursday 3 – Sunday 6 Nov, 10:00-17:00 / Venue: Miraikan 1F Booth Aa. This is at the kind invitation of the European Union’s Delegation in Japan.
- We co-organize a public workshop on Music cognition, emotion and audio technology with our friend Tomoya Nakai from University of Tokyo (he did all the organizing work, really), on Monday November 7th. This is hosted by Kazuo Okanoya’s laboratory.
If you’re around and want to chat, please drop us a line. 「日本にきてとてもうれしい！！」 (“We’re so happy to be coming to Japan!!”)
CREAM is looking for a talented master student interested in EEG and speech for a spiffy research internship combining Mismatch Negativity and some of our fancy new voice transformation technologies (here and there). The intern will work under the supervision of Laura Rachman and Jean-Julien Aucouturier (CNRS/IRCAM) & Stéphanie Dubal (Brain & Spine Institute, Hôpital de la Pitié-Salpêtrière, Paris).
See the complete announcement here: internship-annonce
Duration: 5–6 months, first half of 2017 (e.g. Feb–June ’17).
Applications: send a CV and cover letter by email to Laura Rachman & JJ Aucouturier (see announcement). Interviews for selected applicants will be held in November‐December 2016.
A corpus of free pop music recordings, mixed by a professional sound engineer in several variants used to experiment with listeners’ feelings of social cohesion. Each track is available in five variants (+ / – indicate elements added to or removed from the mix):
- Single lead voice + orchestra (PBO) – drums
- Single lead voice + vocal overdub (double-tracking) + orchestra (PBO)
- Single lead voice – overdub + orchestra + drums
- Single lead voice + randomly detuned PBO – drums
- Single lead voice + randomly desynchronized PBO
The corpus is made available under a Creative Commons licence, from archive.org.
Mixed by Sarah Hermann (CNSMDP, Paris) at IRCAM, July 2016.
Our team published a new paper this week, in which we test the influence of patients’ tone of voice on medical decisions.
In line with our recent real-time emotional voice transformations, we manipulated the voices of (fake) patients calling a 911 phone simulator used for training (real) medical doctors, to make them sound more, or less, physically dominant (deeper, more masculine voices corresponding to lower pitch and greater formant dispersion; see e.g. Sell, A. et al., Adaptations in humans for assessing physical strength from the voice, Proceedings of the Royal Society of London B: Biological Sciences, 277(1699), 3509–3518 (2010) – link).
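The two acoustic levers named above can be sketched as small helper functions. This is an illustrative reimplementation, not the actual manipulation code used in the study, and the frequency values in the usage lines are arbitrary:

```python
import math

def shift_pitch_hz(f0_hz, shift_cents):
    """Shift a fundamental frequency by an amount in cents
    (negative = lower, i.e. toward the deeper voices described
    in the post)."""
    return f0_hz * 2 ** (shift_cents / 1200)

def rescale_formants(formants_hz, dispersion_factor):
    """Rescale the spacing of formant frequencies around the first
    formant: a factor > 1 increases formant dispersion, which the
    post pairs with lower pitch for the 'dominant' manipulation."""
    f1 = formants_hz[0]
    return [f1 + (f - f1) * dispersion_factor for f in formants_hz]

# lower a 220 Hz voice by 3 semitones, and spread example formants:
low_f0 = shift_pitch_hz(220.0, -300)
spread = rescale_formants([500, 1500, 2500], 1.2)
```

A real voice transformation would of course apply these parameter changes to the audio signal itself (e.g. via resampling and spectral-envelope warping), rather than to scalar frequency values.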
We found that patients whose voice signaled physical strength obtained a higher grade of response, a higher evaluation of medical emergency and longer attention from medical doctors than callers with strictly identical medical needs whose voice signaled lower physical dominance.
The paper, a collaboration with Laurent Boidron M.D. and his colleagues at the Department of Emergency Medicine of the Dijon CHU Hospital/Université de Bourgogne, was published last Tuesday in Scientific Reports (link, pdf).
Jan 2016 – Humbled by the amazing media response to our voice feedback study. Some great write-ups from science outlets such as Science News, New Scientist and Science Friday, as well as general-public news outlets such as HuffPo, Vox, … and even Glamour! Last but not least, it was great fun to use our voice-changing tool DAVID to process a good deal of announcer voices on radio in France, Germany and Canada. See a selection of links below, and the complete round-up on Altmetric.
Period: April to May 2016
The CREAM team (“Cracking the Emotional Code of Music” – http://cream.ircam.fr) of the STMS lab (UMR 9912, CNRS/IRCAM/UPMC – http://www.ircam.fr) in Paris is looking to hire, on a fixed-term contract (full- or part-time), a research assistant to run several psychology and cognitive neuroscience experiments on music and emotions, several of which use EEG, over the period of April to May 2016. The successful candidate will work in collaboration with the researchers who designed the experiments, and will be responsible for data collection, working autonomously.
We created a digital audio platform to covertly modify the emotional tone of participants’ voices toward happiness, sadness, or fear while they talked. Independent listeners perceived the transformations as natural examples of emotional speech, but the participants remained unaware of the manipulation, indicating that we are not continuously monitoring our own emotional signals. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed. This result is the first evidence, to our knowledge, of peripheral feedback on emotional experience in the auditory domain.
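One of the platform’s transformations modulates the voice’s pitch with a slow oscillation (vibrato, used for the “fear” manipulation). The toy sketch below applies such a modulation to a pure tone rather than to speech, and its modulation rate and depth are illustrative values, not the exact parameters of the actual software:

```python
import math

def sine_with_vibrato(f0, rate_hz, depth_cents, dur_s, sr=16000):
    """Synthesize a sine tone whose instantaneous frequency is
    modulated by a slow sinusoid -- a minimal stand-in for a vibrato
    effect. Real speech processing would modulate the voice's pitch
    contour instead of a pure tone."""
    depth = 2 ** (depth_cents / 1200) - 1     # peak relative deviation
    phase, out = 0.0, []
    for n in range(int(dur_s * sr)):
        t = n / sr
        f_inst = f0 * (1 + depth * math.sin(2 * math.pi * rate_hz * t))
        phase += 2 * math.pi * f_inst / sr    # integrate frequency
        out.append(math.sin(phase))
    return out

# half a second of a 220 Hz tone with a slow, shallow vibrato:
samples = sine_with_vibrato(220, rate_hz=8.5, depth_cents=40, dur_s=0.5)
```

The happy and sad manipulations work along the same principle, shifting pitch up or down and reshaping the spectral envelope rather than oscillating the pitch.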
The study is a collaboration between the CREAM team in Science and Technology of Music and Sound Lab (STMS), (IRCAM/CNRS/UPMC), the LEAD Lab (CNRS/University of Burgundy) in France, Lund University in Sweden, and Waseda University and the University of Tokyo in Japan.
Aucouturier, J.J., Johansson, P., Hall, L., Segnini, R., Mercadié, L. & Watanabe, K. (2016) Covert Digital Manipulation of Vocal Emotion Alter Speakers’ Emotional State in a Congruent Direction, Proceedings of the National Academy of Sciences, doi: 10.1073/pnas.1506552113
Article is open-access at : http://www.pnas.org/content/early/2016/01/05/1506552113
See our press-release: http://www.eurekalert.org/pub_releases/2016-01/lu-twy011116.php
A piece in Science Magazine reviewing the work: http://news.sciencemag.org/brain-behavior/2016/01/how-change-your-mood-just-listening-sound-your-voice
Download the emotional transformation software at: http://cream.ircam.fr/?p=44