(S)CREAM! An impromptu workshop on screams

The CREAM lab is organizing a short, impromptu workshop on the biology, cultural history, musicality and acoustics of !screams!, to be held at IRCAM, Paris, on Thursday 22nd June 2017, 2-5pm. The workshop will consist of four invited talks, followed by a discussion over drinks and cakes.

Date: Thursday 22nd June 2017, 2-5pm

Place: Stravinsky Room, IRCAM, 1 Place Stravinsky, 75004 Paris.

Attendance: free, subject to seat availability.

Local organizers: Louise Goupil (louise.goupil@ircam.fr), JJ Aucouturier (aucouturie@gmail.com)

Voice transformation tool DAVID now available on the IRCAM Forum!

Exciting news! As of March 2017, DAVID, our emotional voice transformation tool, is available as a free download on the IRCAM Forum, the online community of science and art users of audio software developed at IRCAM. This new platform will provide updates on the latest releases of the software, as well as better user support. In addition, we'll demonstrate the software at the IRCAM Forum days in Paris on March 15-17, 2017. Come say hi (and sound very realistically happy/sad/afraid) if you're around!

Upcoming: Two invited talks on reverse-correlation for high-level auditory cognition

CREAM Lab is hosting a small series of distinguished talks on reverse-correlation this month:

  • Wednesday 22nd March 2017 (11:00) – Prof. Frédéric Gosselin (University of Montreal)
  • Thursday 23rd March 2017 (11:30) – Prof. Peter Neri (Ecole Normale Supérieure, Paris)

These talks are organized in the context of a workshop on reverse-correlation for high-level audio cognition, to be held at IRCAM on the same days (invitation only). Both talks are free and open to all, at IRCAM (1 Place Stravinsky, 75004 Paris). Details (titles and abstracts) are below.

Is musical consonance a signal of social affiliation? Our new study of musical interactions, out today in Cognition

A recently emerging view in music cognition holds that music is not only social and participatory in its production, but also in its perception: music is perceived as the sonic trace of social relations between a group of real or virtual agents. To investigate whether this is indeed the case, we asked a group of free collective improvisers from the Conservatoire National Supérieur de Musique et de Danse de Paris (CNSMDP) to communicate a series of social attitudes (being dominant, arrogant, disdainful, conciliatory or caring) to one another, using only their musical interaction. Both musicians and non-musicians were able to recognize these attitudes from the recorded music.

The study, a collaboration between Clément Canonne and JJ Aucouturier, was published today in Cognition. The corpus of 100 improvised duets used in the study is also available online: http://cream.ircam.fr/?p=575

[CLOSED] Two new research internships for 2017: psychoacoustics of singing voice, and datamining of baby cries

CREAM is looking for talented master students for two research internship positions, for a duration of 5-6 months in the first half of 2017 (e.g. Feb-June '17). UPDATE (Jan. 2017): The positions have now been filled.

The first position mainly involves experimental, psychoacoustic research: it examines the link between increased/decreased pitch in speech and singing voice and the listener's emotional response. It will be supervised by Louise Goupil & JJ Aucouturier, and is suitable for a student with a strong interest in experimental psychology and psychoacoustics, and good acoustic/audio signal processing/music production skills. See the complete announcement here: [pdf]
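
As a rough illustration of the kind of pitch manipulation at stake, here is a minimal sketch (not the lab's actual stimulus pipeline; it assumes the open-source librosa and soundfile libraries, and the file name and one-semitone step are placeholders) of raising and lowering the pitch of a voice recording:

```python
# Minimal sketch: create pitch-raised and pitch-lowered variants of a
# voice recording. "voice.wav" is a placeholder file name.
import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None)  # keep the original sample rate

# Shift pitch by +/- 1 semitone without changing the duration
for n_steps, label in [(1, "up"), (-1, "down")]:
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(f"voice_{label}.wav", y_shifted, sr)
```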

The second position is a lot more computational: it involves building audio pattern recognition tools in order to datamine a large corpus of audio recordings of human infant cries for acoustical patterns informative of the babies’ development of linguistic/communicative abilities. It will be supervised by JJ Aucouturier, in collaboration with Kazuo Okanoya and Yulri Nonaka from the University of Tokyo in Japan. It is suitable for a student with strong audio machine learning/music information retrieval skills and programming experience in Matlab or (preferably) Python. See the complete announcement here: [pdf]
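
To give a flavour of what such a datamining pipeline could look like, here is a minimal sketch under stated assumptions (not the project's actual tools: librosa and scikit-learn are assumed installed, the file path is a placeholder, and the cluster count is arbitrary), computing one feature vector per recording and clustering the corpus for recurring acoustical patterns:

```python
# Minimal sketch: one MFCC-based feature vector per cry recording,
# then a simple clustering step to look for recurring acoustical patterns.
# "cries/*.wav" is a placeholder path.
import glob

import numpy as np
import librosa
from sklearn.cluster import KMeans

features = []
for path in sorted(glob.glob("cries/*.wav")):
    y, sr = librosa.load(path, sr=16000)                # resample for consistency
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
    features.append(mfcc.mean(axis=1))                  # average over time

# Group recordings by spectral similarity (5 clusters is an arbitrary choice)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(np.array(features))
print(labels)  # one cluster label per recording
```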

Applications: send a CV and cover letter by email (see announcement). Interviews for selected applicants will be held in December '16-January '17.
