Gregory Beller (IRCAM) – Voice Synthesis Technologies in Contemporary Music Creation

Speaker: Gregory Beller (IRCAM)

Greg Beller works as an artist, researcher, teacher, and computer designer for the contemporary arts. He defended a PhD thesis in computer science on generative models of expressivity and their applications to speech and music, especially in performance. While developing new ideas for signal analysis, processing, synthesis, and control, he takes part in a range of artistic projects. He is currently the director of IRCAM's department for Research/Creativity Interfaces, where he coordinates the work of researchers, developers, computer music designers, and artists in the creation, design, and performance of artistic moments.

Aniruddh D. Patel – Music, Language, Emotion, and the Brain: a Cognitive Neuroscience Perspective

IRCAM, June 9th, 2016

Speaker: Aniruddh D. Patel (Tufts University)

Aniruddh (Ani) Patel is a cognitive neuroscientist who studies the relationship between music and language. He uses a range of methods in this research, including brain imaging, theoretical analyses, acoustic measurements, and comparative work with other species. Patel has served as president of the Society for Music Perception and Cognition, and has published over 70 research articles and a scholarly book, Music, Language, and the Brain (Oxford, 2008). He is a Professor of Psychology at Tufts University.

Philippe Schlenker (ENS) – Prolegomena to Music Semantics

IRCAM, June 9th, 2016

Speaker: Philippe Schlenker (École Normale Supérieure, Paris)

Philippe Schlenker is a Senior Researcher (DR1) at Institut Jean-Nicod (CNRS) and a Global Distinguished Professor at New York University. He was educated at École Normale Supérieure (Paris), and obtained a Ph.D. in Linguistics from MIT, and a Ph.D. in Philosophy from EHESS (Paris). His research has been devoted to the semantics and pragmatics of spoken and signed languages, to philosophical logic and the philosophy of language, to primate communication, and more recently to some aspects of music cognition.

Gregory A. Bryant (UCLA) – Animal signals and emotion in music

IRCAM, June 9th, 2016

Speaker: Prof. Gregory A. Bryant (University of California, Los Angeles)

Gregory A. Bryant is an Associate Professor in the Department of Communication at the University of California, Los Angeles. He received his Ph.D. in cognitive psychology from the University of California, Santa Cruz in 2004. His research focuses primarily on vocal communication and how acoustic features of the voice interact with language and communicative intentions. He is also a musician and sound engineer, and his work has been included in sound art exhibitions in the Americas and Europe.

PNAS press roundup

Jan 2016 – Humbled by the amazing media response to our voice feedback study. Some great write-ups from science outlets such as Science News, New Scientist, and Science Friday, as well as general-public news outlets such as HuffPo, Vox, … and even Glamour! Last but not least, it was great fun to use our voice-changing tool DAVID to process a good deal of announcer voices on radio stations in France, Germany, and Canada. See a selection of links below, and see the complete round-up on Altmetric.

CREAM is looking for an RA! (fixed-term, 2 months)

EEG research assistant for running psychology/neuroscience experiments

Period: April to May 2016

The CREAM team ("Cracking the Emotional Code of Music" – http://cream.ircam.fr) of the STMS lab (UMR 9912, CNRS/IRCAM/UPMC – http://www.ircam.fr) in Paris is looking to hire a research assistant on a fixed-term contract (full- or part-time) to run several psychology and cognitive neuroscience experiments on music and emotion, several of them using EEG, over the period April to May 2016. The recruited person will work in collaboration with the researchers who designed the experiments and will be responsible for data collection, working autonomously.

[PDF] Announcement

The way you sound affects your mood – new study in PNAS

We created a digital audio platform that covertly modified the emotional tone of participants' voices toward happiness, sadness, or fear while they talked. Independent listeners perceived the transformations as natural examples of emotional speech, but the participants remained unaware of the manipulation, indicating that we do not continuously monitor our own emotional signals. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed. This result is the first evidence, to our knowledge, of peripheral feedback effects on emotional experience in the auditory domain.

The study is a collaboration between the CREAM team at the Science and Technology of Music and Sound Lab (STMS, IRCAM/CNRS/UPMC) and the LEAD Lab (CNRS/University of Burgundy) in France, Lund University in Sweden, and Waseda University and the University of Tokyo in Japan.

Aucouturier, J.J., Johansson, P., Hall, L., Segnini, R., Mercadié, L. & Watanabe, K. (2016) Covert Digital Manipulation of Vocal Emotion Alter Speakers' Emotional State in a Congruent Direction, Proceedings of the National Academy of Sciences, doi: 10.1073/pnas.1506552113

The article is open-access at: http://www.pnas.org/content/early/2016/01/05/1506552113

See our press release: http://www.eurekalert.org/pub_releases/2016-01/lu-twy011116.php

A piece in Science Magazine reviewing the work: http://news.sciencemag.org/brain-behavior/2016/01/how-change-your-mood-just-listening-sound-your-voice

Download the emotional transformation software at: http://cream.ircam.fr/?p=44