Gregory A. Bryant (UCLA) – Animal signals and emotion in music

IRCAM, June 9th, 2016

Speaker: Prof. Gregory A. Bryant (University of California, Los Angeles)

Gregory A. Bryant is an Associate Professor in the Department of Communication at the University of California, Los Angeles. He received his Ph.D. in cognitive psychology from the University of California, Santa Cruz in 2004. His research focuses primarily on vocal communication, and on how acoustic features of the voice interact with language and communicative intentions. He is also a musician and sound engineer, and his work has been included in sound art exhibitions in the Americas and Europe.



PNAS press roundup

Jan 2016 – Humbled by the amazing media response to our voice feedback study. Some great write-ups from science outlets such as Science News, New Scientist and Science Friday, as well as general-public news outlets such as HuffPo, Vox, … and even Glamour! Last but not least, it was great fun to use our voice-changing tool DAVID to process announcer voices on radio stations in France, Germany and Canada. See a selection of links below, and the complete round-up on Altmetric.



CREAM is looking for an RA! (fixed-term, 2 months)

EEG research assistant for running psychology/neuroscience experiments

Period: April to May 2016

The CREAM team (“Cracking the Emotional Code of Music”) of the STMS laboratory (UMR 9912, CNRS/IRCAM/UPMC) in Paris is looking to hire, on a fixed-term contract (full- or part-time), a research assistant to run several psychology and cognitive-neuroscience experiments on the theme of music and emotion, several of them using EEG, over the period April to May 2016. The person recruited will work in collaboration with the researchers who designed the experiments and will be responsible for data collection, working autonomously.

[PDF] Announcement



The way you sound affects your mood – new study in PNAS

We created a digital audio platform to covertly modify the emotional tone of participants’ voices toward happiness, sadness, or fear while they talked. Independent listeners perceived the transformed voices as natural examples of emotional speech, yet the participants themselves remained unaware of the manipulation, indicating that we do not continuously monitor our own emotional signals. Instead, as a consequence of listening to their altered voices, participants’ emotional states changed in congruence with the emotion portrayed. This result is the first evidence, to our knowledge, of peripheral feedback effects on emotional experience in the auditory domain.
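Transformations of this kind build on standard voice-processing operations such as raising or lowering pitch (a raised pitch being a common acoustic cue to happiness). As a purely illustrative sketch, and not the actual algorithm used in the DAVID tool described above, the idea can be demonstrated with a naive resampling-based pitch shift on a synthetic tone; the function name `pitch_shift` and the 220 Hz test signal are assumptions for the example:

```python
import numpy as np

def pitch_shift(signal, semitones):
    """Naively shift pitch by resampling with linear interpolation.
    Raising pitch this way also shortens the signal; real-time tools
    use duration-preserving methods instead."""
    factor = 2 ** (semitones / 12)          # frequency ratio per semitone
    idx = np.arange(0, len(signal), factor)  # read positions in the input
    return np.interp(idx, np.arange(len(signal)), signal)

sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)      # 220 Hz tone standing in for a voice
happier = pitch_shift(voice, +1)         # one semitone up, ~6% higher pitch
```

Note that a covert real-time manipulation additionally requires preserving the duration and naturalness of the speech (e.g. via phase-vocoder or overlap-add techniques), which this toy resampling approach does not do.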

The study is a collaboration between the CREAM team in Science and Technology of Music and Sound Lab (STMS), (IRCAM/CNRS/UPMC), the LEAD Lab (CNRS/University of Burgundy) in France, Lund University in Sweden, and Waseda University and the University of Tokyo in Japan.

Aucouturier, J.J., Johansson, P., Hall, L., Segnini, R., Mercadié, L. & Watanabe, K. (2016) Covert Digital Manipulation of Vocal Emotion Alter Speakers’ Emotional State in a Congruent Direction, Proceedings of the National Academy of Sciences, doi: 10.1073/pnas.1506552113


The article is open-access at:

See our press-release:

A piece in Science Magazine covering the work:

Download the emotional transformation software at:

