CREAM Lab in Japan 1~8/11

Exciting news: Laura, Pablo, Emmanuel and JJ from the lab will be “touring” (the academic version thereof, at least) Japan this coming week, with two events planned in Tokyo:

If you’re around and want to chat, please drop us a line. 「日本にきてとてもうれしい!!」 (“We’re so happy to come to Japan!!”)

Read More

[CLOSED] Research internship (EEG/Speech/Emotion)

CREAM is looking for a talented master’s student interested in EEG and speech for a spiffy research internship combining Mismatch Negativity and some of our fancy new voice transformation technologies (here and there). The intern will work under the supervision of Laura Rachman and Jean-Julien Aucouturier (CNRS/IRCAM) & Stéphanie Dubal (Brain & Spine Institute, Hôpital de la Salpêtrière, Paris).

See the complete announcement here: internship-annonce

Duration: 5-6 months, first half of 2017 (e.g. Feb-June ’17).

Applications: send a CV and cover letter by email to Laura Rachman & JJ Aucouturier (see announcement). Interviews for selected applicants will be held in November-December 2016.

Read More

The overdub corpus

A corpus of free pop music recordings, mixed by a professional sound engineer in several variants, used to experiment with listeners’ feelings of social cohesion. Each track is available in five variants:

  1. Single lead voice + orchestra (PBO) – drums
  2. Single lead voice + vocal overdub (double-tracking) + orchestra (PBO)
  3. Single lead voice – overdub + orchestra + drums
  4. Single lead voice + randomly detuned PBO – drums
  5. Single lead voice + randomly desynchronized PBO

The corpus is made available under a Creative Commons licence, from archive.org.

Mixed by Sarah Hermann (CNSMDP, Paris) at IRCAM, July 2016.
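
If you want to fetch the whole corpus programmatically, something like the sketch below should work with the internetarchive Python package. Note that the item identifier, file pattern and destination folder are placeholders, not the real ones — check the corpus’s actual archive.org page.

```python
# Hypothetical sketch: bulk-download the corpus from archive.org with the
# `internetarchive` package (pip install internetarchive).
# "overdub-corpus" is a placeholder identifier, and "*.wav" assumes WAV
# files -- check the actual archive.org item page for both.
from internetarchive import download

download(
    "overdub-corpus",       # placeholder item identifier
    glob_pattern="*.wav",   # assumed audio format
    destdir="./overdub",    # local destination folder
    verbose=True,
)
```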

Read More

Bullying the doc: stronger-sounding patients get more 911 attention

Our team published a new paper this week, in which we test the influence of patients’ tone of voice on medical decisions.

In line with our recent real-time emotional voice transformations, we manipulated the voices of (fake) patients calling a 911 phone simulator used for training (real) medical doctors, to make them sound more, or less, physically dominant (with deeper, more masculine voices corresponding to lower pitch and greater formant dispersion; see e.g. Sell, A. et al., Adaptations in humans for assessing physical strength from the voice, Proceedings of the Royal Society of London B: Biological Sciences, 277(1699), 3509-3518, 2010 – link).
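
For readers curious what such a manipulation involves in practice, here is a minimal sketch (not the actual real-time tool used in the study) that deepens a recorded voice by lowering its pitch median and scaling its formants, using Praat’s “Change gender” command through the parselmouth Python bindings; the file names and parameter values are purely illustrative.

```python
# Illustrative sketch only -- not the transformation tool used in the study.
# Deepen a recorded voice by lowering its pitch median and scaling its
# formants, via Praat's "Change gender" command (parselmouth bindings).
# File names and parameter values are hypothetical.
import parselmouth
from parselmouth.praat import call

sound = parselmouth.Sound("caller.wav")  # hypothetical input recording

# "Change gender" arguments: pitch floor (Hz), pitch ceiling (Hz),
# formant shift ratio, new pitch median (Hz), pitch range factor,
# duration factor.
deeper = call(
    sound, "Change gender",
    75, 600,   # pitch analysis range (Hz)
    0.9,       # formant shift ratio < 1 lowers formants (longer vocal tract)
    100,       # new pitch median (Hz): lower-pitched, deeper-sounding voice
    1.0,       # leave pitch range unchanged
    1.0,       # leave duration unchanged
)
deeper.save("caller_deepened.wav", "WAV")
```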

We found that patients whose voice signalled physical strength obtained a higher grade of response, a higher evaluation of medical emergency, and longer attention from medical doctors than callers with strictly identical medical needs whose voice signalled lower physical dominance.

The paper, a collaboration with Laurent Boidron M.D. and his colleagues at the Department of Emergency Medicine of the Dijon CHU Hospital/Université de Bourgogne, was published last Tuesday in Scientific Reports (link, pdf).

Read More

Elvira Brattico – Automatic processing of musical emotions in the brain

Ircam, 9th June 2016

Speaker: Elvira Brattico (Aarhus University)

Elvira Brattico (PhD in Psychology, University of Helsinki, 2007) is Professor of Neuroscience, Music and Aesthetics at the Center for Music in the Brain (MIB), Aarhus University and Royal Academy of Music, Aarhus/Aalborg, Denmark. She is a pioneer in combining computational music information retrieval methods with neurophysiological and neuroimaging methods to answer questions about music processing, such as how the brain represents musical features and why we enjoy music. Prof. Brattico has published more than 100 papers, of which 68 appear in peer-reviewed international journals or conference proceedings.

(more…)

Read More

Gregory Beller (IRCAM) – Voice Synthesis Technologies in Contemporary Music Creation

Speaker: Gregory Beller (IRCAM)

Greg Beller works as an artist, a researcher, a teacher and a computer designer for the contemporary arts. He defended a PhD thesis in Computer Science on generative models of expressivity and their applications to speech and music, especially in performance. While developing new ideas for signal analysis, processing, synthesis and control, he takes part in a range of artistic projects. He is currently the director of IRCAM’s Research/Creativity Interfaces department, where he coordinates the work of researchers, developers, computer music designers and artists in the creation, design and performance of artistic moments.

(more…)

Read More

Aniruddh D. Patel – Music, Language, Emotion, and the Brain: a Cognitive Neuroscience Perspective

Ircam, 9th June 2016

Speaker: Aniruddh D. Patel (Tufts University)

Aniruddh (Ani) Patel is a cognitive neuroscientist who studies the relationship between music and language. He uses a range of methods in this research, including brain imaging, theoretical analyses, acoustic measurements, and comparative work with other species. Patel has served as president of the Society for Music Perception and Cognition, and has published over 70 research articles and a scholarly book, Music, Language and the Brain (2008, Oxford). He is a Professor of Psychology at Tufts University.

(more…)

Read More

Philippe Schlenker (ENS) – Prolegomena to Music Semantics

IRCAM, June 9th, 2016

Speaker: Philippe Schlenker (Ecole Normale Supérieure, Paris)

Philippe Schlenker is a Senior Researcher (DR1) at Institut Jean-Nicod (CNRS) and a Global Distinguished Professor at New York University. He was educated at École Normale Supérieure (Paris), and obtained a Ph.D. in Linguistics from MIT, and a Ph.D. in Philosophy from EHESS (Paris). His research has been devoted to the semantics and pragmatics of spoken and signed languages, to philosophical logic and the philosophy of language, to primate communication, and more recently to some aspects of music cognition.

(more…)

Read More

Gregory A. Bryant (UCLA) – Animal signals and emotion in music

IRCAM, June 9th, 2016

Speaker: Prof. Gregory A. Bryant (University of California, Los Angeles)

Gregory A. Bryant is an Associate Professor in the Department of Communication at University of California, Los Angeles. He received his Ph.D. in cognitive psychology at the University of California, Santa Cruz in 2004. His research focuses primarily on vocal communication, and how acoustic features of the voice interact with language and communicative intentions. He is also a musician and sound engineer, and his work has been included in sound art exhibitions in the Americas and Europe.

(more…)

Read More

PNAS press roundup

Jan 2016 – Humbled by the amazing media response to our voice feedback study. Some great write-ups from science outlets such as Science News, New Scientist and Science Friday, as well as general-public news outlets such as HuffPo, Vox, … and even Glamour! Last but not least, it was great fun to use our voice-changing tool DAVID to process a good deal of announcer voices on radio stations in France, Germany and Canada. See a selection of links below, and the complete round-up on Altmetric.

(more…)

Read More

CREAM is looking for an RA! (fixed-term, 2 months)

EEG research assistant for running psychology/neuroscience experiments

Period: April to May 2016

The CREAM team (“Cracking the Emotional Code of Music” – http://cream.ircam.fr) of the STMS laboratory (UMR 9912, CNRS/IRCAM/UPMC – http://www.ircam.fr) in Paris is looking to hire a research assistant on a fixed-term contract (full- or part-time) to run several psychology and cognitive neuroscience experiments on music and emotion, several of them using EEG, over the period April to May 2016. The successful candidate will work in collaboration with the researchers who designed the experiments and will be responsible for collecting the data autonomously.

[PDF] Announcement

(more…)

Read More