Voice transformation tool DAVID now available on the IRCAM Forum!

Exciting news! As of March 2017, DAVID, our emotional voice transformation tool, is available as a free download on the IRCAM Forum, the online community of science and art users of audio software developed at IRCAM. This new platform will provide updates on the latest releases of the software, as well as better user support. In addition, we’ll demonstrate the software at the IRCAM Forum days in Paris on March 15-17, 2017. Come say hi (and sound very realistically happy/sad/afraid) if you’re around!

Upcoming: Two invited talks on reverse-correlation for high-level auditory cognition

CREAM Lab is hosting a small series of distinguished talks on reverse-correlation this month:

  • Wednesday 22nd March 2017 (11:00) – Prof. Frédéric Gosselin (University of Montreal)
  • Thursday 23rd March 2017 (11:30) – Prof. Peter Neri (École Normale Supérieure, Paris).

These talks are organised in the context of a workshop on reverse-correlation for high-level auditory cognition, to be held at IRCAM on the same days (invitation only). Both talks are free and open to all, at IRCAM (1 Place Stravinsky, 75004 Paris). Details (titles, abstracts) are below.

Is musical consonance a signal of social affiliation? Our new study of musical interactions, out today in Cognition

A recently emerging view in music cognition holds that music is not only social and participatory in its production, but also in its perception; i.e., music is in fact perceived as the sonic trace of social relations between a group of real or virtual agents. To investigate whether this is indeed the case, we asked a group of free collective improvisers from the Conservatoire National Supérieur de Musique et de Danse de Paris (CNSMDP) to try to communicate a series of social attitudes (being dominant, arrogant, disdainful, conciliatory or caring) to one another, using only their musical interaction. Both musicians and non-musicians were able to recognize these attitudes from the recorded music.

The study, a collaboration between Clément Canonne and JJ Aucouturier, was published today in Cognition. The corpus of 100 improvised duets used in the study is also available online: http://cream.ircam.fr/?p=575

[CLOSED] CREAM is looking for a new RA! (part-time, 5 months)

UPDATE (Feb. 2017): THE POSITION HAS BEEN FILLED. 

Research assistant for running cognitive psychology experiments

Period: March to July 2017

We are looking to hire a research assistant on a part-time, fixed-term (CDD) contract to run several cognitive psychology and neuroscience experiments on voice, self-perception, music, and emotion, over the period March to July 2017.

The successful candidate will work in collaboration with the researchers who designed the experiments and will be independently responsible for data collection. Testing will take place at the Centre Multidisciplinaire des Sciences Comportementales Sorbonne Universités-INSEAD (6 rue Victor Cousin, 75005 Paris).

Desired profile

  • Bachelor’s or master’s degree in experimental psychology/neuroscience, or equivalent.
  • REQUIRED: experience collecting experimental data in psychology/neuroscience (having run at least N = 20 adult participants during your studies, an internship, or a previous position), including obtaining signed participant consent, explaining the protocol to participants, making sure the experiment runs smoothly from a technical standpoint, properly saving the various data files, and debriefing participants.
  • STRONGLY DESIRED: experience in psychoacoustics and/or audio signal processing.
  • DESIRED: an interest in cognitive psychology/neuroscience research, in particular musical neuroscience, self-awareness and emotion, and/or an interest in music and sound technologies.

Conditions

Contract: fixed-term (CDD) CNRS contract at the assistant ingénieur (AI) or ingénieur d’étude (IE) level, depending on degree and experience.

Duration: 5 months over the period March – July 2017.

Working time: part-time preferred (50 or 60%).

Location: Paris (France).

Salary: from €1,821.83 gross per month (AI) to €2,465.67 gross per month (IE) full-time equivalent, depending on degree and experience.

Send a CV and a cover letter detailing your previous data collection experience by email to Louise Goupil (lougoupil@gmail.com) & Jean-Julien Aucouturier (aucouturier@gmail.com).

Job posting – CREAM Research Assistant – PDF

[CLOSED] Two new research internships for 2017: psychoacoustics of the singing voice, and datamining of baby cries

CREAM is looking for talented master’s students for two research internship positions, for a duration of 5–6 months in the first half of 2017 (e.g. Feb–June ’17). UPDATE (Jan. 2017): The positions have now been filled.

The first position mainly involves experimental, psychoacoustic research: it examines the link between increased/decreased pitch in speech and singing voice and the listener’s emotional response. It will be supervised by Louise Goupil & JJ Aucouturier, and is suitable for a student with a strong interest in experimental psychology and psychoacoustics, and good acoustic/audio signal processing/music production skills. See the complete announcement here: [pdf]

The second position is a lot more computational: it involves building audio pattern recognition tools in order to datamine a large corpus of audio recordings of human infant cries for acoustical patterns informative of the babies’ development of linguistic/communicative abilities. It will be supervised by JJ Aucouturier, in collaboration with Kazuo Okanoya and Yulri Nonaka from the University of Tokyo in Japan. It is suitable for a student with strong audio machine learning/music information retrieval skills and programming experience in Matlab or (preferably) Python. See the complete announcement here: [pdf]
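
For a rough flavour of the kind of pipeline this position could involve, here is a minimal sketch of one possible approach, assuming Python with librosa and scikit-learn: summarize each cry recording with MFCC statistics, then cluster the corpus to look for recurring acoustical patterns. The file layout, feature choice, and number of clusters are purely illustrative assumptions, not the actual project specification.

```python
# Illustrative sketch only: extract MFCC features per recording, then
# cluster recordings. Paths, features, and k are hypothetical choices.
import glob
import numpy as np
import librosa
from sklearn.cluster import KMeans

features = []
for path in glob.glob("cries/*.wav"):  # hypothetical corpus layout
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Summarize each recording by its mean and std over time
    features.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))

X = np.stack(features)
labels = KMeans(n_clusters=5, random_state=0, n_init=10).fit_predict(X)
print(labels)  # cluster assignment for each recording
```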

Applications: send a CV and cover letter by email (see announcement). Interviews for selected applicants will be held in December ’16 – January ’17.

CREAM Lab in Japan, Nov. 1–8

Exciting news: Laura, Pablo, Emmanuel and JJ from the lab will be “touring” (the academic version thereof, at least) Japan this coming week, with two events planned in Tokyo.

If you’re around and want to chat, please drop us a line. 「日本にきてとてもうれしい!!」 (“We’re so happy to be coming to Japan!!”)

[CLOSED] Research internship (EEG/Speech/Emotion)

CREAM is looking for a talented master’s student interested in EEG and speech for a spiffy research internship combining Mismatch Negativity and some of our fancy new voice transformation technologies (here and there). The intern will work under the supervision of Laura Rachman and Jean-Julien Aucouturier (CNRS/IRCAM) & Stéphanie Dubal (Brain & Spine Institute, Hôpital de la Salpêtrière, Paris).

See the complete announcement here: internship-annonce

Duration: 5–6 months in the first half of 2017 (e.g. Feb–June ’17).

Applications: send a CV and cover letter by email to Laura Rachman & JJ Aucouturier (see announcement). Interviews for selected applicants will be held in November–December 2016.

PNAS press roundup

Jan 2016 – Humbled by the amazing media response to our voice feedback study. Some great write-ups from science outlets such as Science News, New Scientist and Science Friday, as well as general-public news outlets such as HuffPo, Vox, … and even Glamour! Last but not least, it was great fun to use our voice-changing tool DAVID to process a good deal of announcer voices on radio stations in France, Germany, and Canada. See a selection of links below, and the complete round-up on Altmetrics.

The way you sound affects your mood – new study in PNAS

We created a digital audio platform to covertly modify the emotional tone of participants’ voices while they talked, shifting it toward happiness, sadness, or fear. Independent listeners perceived the transformations as natural examples of emotional speech, yet the participants themselves remained unaware of the manipulation, indicating that we do not continuously monitor our own emotional signals. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed. This result is the first evidence, to our knowledge, of peripheral feedback on emotional experience in the auditory domain.
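
For a rough idea of what such a manipulation looks like acoustically, here is a minimal offline sketch, assuming Python with librosa and soundfile: a small upward pitch shift of the kind used (among other cues, such as inflection and spectral filtering) in the “happy” transformation. The filenames and the exact shift size are illustrative assumptions; the actual platform (DAVID) works in real time while the participant speaks.

```python
# Minimal offline sketch of a small "happy"-direction voice manipulation:
# an upward pitch shift of about half a semitone, subtle enough to go
# unnoticed by the speaker. Filenames are hypothetical.
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=None)  # hypothetical input file

# Shift pitch up by 0.5 semitone (~50 cents); the study's real-time
# platform combined such shifts with inflection and spectral cues.
y_happy = librosa.effects.pitch_shift(y, sr=sr, n_steps=0.5)

sf.write("speech_happy.wav", y_happy, sr)
```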

The study is a collaboration between the CREAM team in the Science and Technology of Music and Sound Lab (STMS, IRCAM/CNRS/UPMC) and the LEAD Lab (CNRS/University of Burgundy) in France, Lund University in Sweden, and Waseda University and the University of Tokyo in Japan.

Aucouturier, J.J., Johansson, P., Hall, L., Segnini, R., Mercadié, L. & Watanabe, K. (2016) Covert Digital Manipulation of Vocal Emotion Alter Speakers’ Emotional States in a Congruent Direction, Proceedings of the National Academy of Sciences, doi: 10.1073/pnas.1506552113

The article is open access at: http://www.pnas.org/content/early/2016/01/05/1506552113

See our press release: http://www.eurekalert.org/pub_releases/2016-01/lu-twy011116.php

A piece in Science Magazine reviewing the work: http://news.sciencemag.org/brain-behavior/2016/01/how-change-your-mood-just-listening-sound-your-voice

Download the emotional transformation software at: http://cream.ircam.fr/?p=44
