Speaker: Beau Sievers (Dartmouth College, NH)
CREAM is looking for two talented master's students for research internships of 5-6 months during the first half of 2017 (e.g. Feb-June '17). UPDATE (Jan. 2017): The positions have now been filled.
The first position mainly involves experimental, psychoacoustic research: it examines the link between increased/decreased pitch in speech and singing voice and the listener's emotional response. It will be supervised by Louise Goupil & JJ Aucouturier, and is suitable for a student with a strong interest in experimental psychology and psychoacoustics, and good acoustic/audio signal processing and music production skills. See the complete announcement here: [pdf]
The second position is a lot more computational: it involves building audio pattern recognition tools to data-mine a large corpus of audio recordings of human infant cries for acoustical patterns informative of the babies' developing linguistic/communicative abilities. It will be supervised by JJ Aucouturier, in collaboration with Kazuo Okanoya and Yulri Nonaka from the University of Tokyo, Japan. It is suitable for a student with strong audio machine learning/music information retrieval skills and programming experience in Matlab or (preferably) Python. See the complete announcement here: [pdf]
Applications: send a CV and cover letter by email (see announcement). Interviews for selected applicants will be held in December '16 - January '17.
C.L.E.E.S.E. (Combinatorial Expressive Speech Engine) is a tool designed to generate an infinite number of natural-sounding, expressive variations around an original speech recording. More precisely, C.L.E.E.S.E. creates random fluctuations around the original file's contours of pitch, loudness, timbre and speed (i.e., roughly defined, its prosody). One of its foreseen applications is the generation of very many random voice stimuli for reverse correlation experiments, or whatever else you fancy, really.
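To give a feel for the principle, here is a toy sketch in plain NumPy (not C.L.E.E.S.E.'s actual API; the function name and parameter choices are ours) of how one might draw random piecewise-linear pitch fluctuations, one breakpoint function per stimulus:

```python
import numpy as np

def random_pitch_contour(duration_s, n_windows=6, sd_cents=200, rng=None):
    """Draw one random pitch-shift contour: a breakpoint function giving
    a pitch shift (in cents) for each time window of the recording."""
    rng = np.random.default_rng() if rng is None else rng
    times = np.linspace(0.0, duration_s, n_windows)               # breakpoint times (s)
    shifts = rng.normal(loc=0.0, scale=sd_cents, size=n_windows)  # Gaussian shifts (cents)
    return np.column_stack([times, shifts])

# e.g. 500 random contours for a 2-second recording; each contour would
# then drive a pitch shifter (e.g. a phase vocoder) to produce one
# stimulus variant, and listeners' responses are later regressed
# against the contours (reverse correlation).
contours = [random_pitch_contour(2.0) for _ in range(500)]
```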
Exciting news: Laura, Pablo, Emmanuel and JJ from the lab will be “touring” (the academic version thereof, at least) Japan this coming week, with two events planned in Tokyo:
- We’ll present our voice transformation software DAVID and our related research on emotional vocal feedback at the “Europa Science House” of the Science Agora event at Miraikan – Dates: Thursday 3 – Sunday 6 Nov, 10:00-17:00 / Venue: Miraikan 1F Booth Aa. This is at the kind invitation of the European Union’s Delegation in Japan.
- We're co-organizing a public workshop on Music cognition, emotion and audio technology with our friend Tomoya Nakai from the University of Tokyo (he did all the organizing work, really), on Monday, November 7th. This one is hosted by Kazuo Okanoya's laboratory.
If you're around and want to chat, please drop us a line. 「日本にきてとてもうれしい！！」 ("We're so happy to come to Japan!!")
CREAM is looking for a talented master's student interested in EEG and speech for a spiffy research internship combining Mismatch Negativity and some of our fancy new voice transformation technologies (here and there). The intern will work under the supervision of Laura Rachman and Jean-Julien Aucouturier (CNRS/IRCAM) & Stéphanie Dubal (Brain & Spine Institute, Hôpital de la Salpêtrière, Paris).
See the complete announcement here: internship-annonce
Duration: 5-6 months during the first half of 2017 (e.g. Feb-June '17).
Applications: send a CV and cover letter by email to Laura Rachman & JJ Aucouturier (see announcement). Interviews for selected applicants will be held in November‐December 2016.
A corpus of free pop music recordings, mixed by a professional sound engineer in several variants, used in experiments on listeners' feelings of social cohesion. Each track is available in five variants:
- Single lead voice + orchestra (PBO), without drums
- Single lead voice + vocal overdub (double-tracking) + orchestra (PBO)
- Single lead voice + orchestra + drums, without overdub
- Single lead voice + randomly detuned PBO, without drums
- Single lead voice + randomly desynchronized PBO
The corpus is made available under a Creative Commons licence, from archive.org.
Mixed by Sarah Hermann (CNSMDP, Paris) at IRCAM, July 2016.
Our team published a new paper this week, in which we test the influence of patients’ tone of voice on medical decisions.
In line with our recent real-time emotional voice transformations, we manipulated the voices of (fake) patients calling a 911 phone simulator used for training (real) medical doctors, to make them sound more, or less, physically dominant (with deeper, more masculine voices corresponding to lower pitch and greater formant dispersion; see e.g. Sell, A. et al., Adaptations in humans for assessing physical strength from the voice, Proceedings of the Royal Society of London B: Biological Sciences, 277(1699), 3509–3518 (2010) – link).
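For the technically curious, this kind of manipulation can be sketched offline with Praat's "Change gender" resynthesis via the parselmouth Python library. This is only an illustrative approximation, not the real-time pipeline used in the study, and the file name and parameter values below are hypothetical:

```python
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("caller.wav")  # hypothetical input recording

# Praat's "Change gender" resynthesis: lower the pitch median and scale
# formant frequencies down (a ratio < 1 mimics a longer vocal tract),
# making the caller sound larger and more physically dominant.
dominant = call(snd, "Change gender",
                75, 600,  # pitch analysis floor and ceiling (Hz)
                0.9,      # formant shift ratio (< 1 lowers formants)
                100,      # new pitch median (Hz)
                1.0,      # pitch range factor (unchanged)
                1.0)      # duration factor (unchanged)
dominant.save("caller_dominant.wav", "WAV")
```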
We found that patients whose voice signalled physical strength obtained a higher grade of response, a higher evaluation of medical emergency, and longer attention from medical doctors than callers with strictly identical medical needs whose voice signalled lower physical dominance.
The paper, a collaboration with Laurent Boidron M.D. and his colleagues at the Department of Emergency Medicine of the Dijon CHU Hospital/Université de Bourgogne, was published last Tuesday in Scientific Reports (link, pdf).
IRCAM, June 9th, 2016
Speaker: Elvira Brattico (Aarhus University)
Elvira Brattico (PhD in Psychology, University of Helsinki, 2007) is Professor of Neuroscience, Music and Aesthetics at the Center for Music in the Brain (MIB), Aarhus University and the Royal Academy of Music, Aarhus/Aalborg, Denmark. She is a pioneer in combining computational music information retrieval methods with neurophysiological and neuroimaging methods to address questions about music processing, such as how the brain represents musical features and why we enjoy music. Prof. Brattico has published more than 100 papers, of which 68 appear in peer-reviewed international journals or conference proceedings.
Speaker: Gregory Beller (IRCAM)
Greg Beller works as an artist, researcher, teacher and computer designer for the contemporary arts. He defended a PhD thesis in Computer Science on generative models of expressivity and their applications to speech and music, especially in performance. While developing new ideas for signal analysis, processing, synthesis and control, he takes part in a range of artistic projects. He is currently the director of IRCAM's Research/Creativity Interfaces department, where he coordinates the work of researchers, developers, computer music designers and artists in the creation, design and performance of artistic works.
IRCAM, June 9th, 2016
Speaker: Aniruddh D. Patel (Tufts University)
Aniruddh (Ani) Patel is a cognitive neuroscientist who studies the relationship between music and language. He uses a range of methods in this research, including brain imaging, theoretical analyses, acoustic measurements, and comparative work with other species. Patel has served as president of the Society for Music Perception and Cognition, and has published over 70 research articles and a scholarly book, Music, Language and the Brain (2008, Oxford). He is a Professor of Psychology at Tufts University.
IRCAM, June 9th, 2016
Speaker: Philippe Schlenker (École Normale Supérieure, Paris)
Philippe Schlenker is a Senior Researcher (DR1) at Institut Jean-Nicod (CNRS) and a Global Distinguished Professor at New York University. He was educated at École Normale Supérieure (Paris), and obtained a Ph.D. in Linguistics from MIT, and a Ph.D. in Philosophy from EHESS (Paris). His research has been devoted to the semantics and pragmatics of spoken and signed languages, to philosophical logic and the philosophy of language, to primate communication, and more recently to some aspects of music cognition.