Symposium: The Cultural Evolution of Music

On the occasion of Dr Mehr's visit to IRCAM this month, the CREAM and Analyse des Pratiques Musicales (APM) teams are happy to organize an art-and-science mini-symposium on the Cultural Evolution of Music. Dr Sam Mehr (Dept. of Psychology, Harvard, MA) will report on his recent Natural History of Song project, Dr Nicolas Baumard (Dept. of Cognitive Science, Ecole Normale Supérieure, Paris) will present his research on the evolution of emotions using portraits, and contemporary composer Pascal Dusapin will describe his ongoing Lullaby Experience project. The symposium is free and open to all, subject to seat availability.

Update (18/4): The participation of Pascal Dusapin has been cancelled due to unforeseen circumstances.

Date: Thursday April 18th

Hours: Afternoon, 15h-18h

Place: Salle Stravinsky, Institut de Recherche et Coordination en Acoustique/Musique (IRCAM), 1 Place Stravinsky 75004 Paris. [access]

Local organizers: Clément Canonne (APM), Jean-Julien Aucouturier (PDS/CREAM), IRCAM/CNRS/Sorbonne Université.


Additional lecture: As a companion to the event, Dr Sam Mehr will also give a lecture on the origins and functions of music in infancy at Ecole Normale Supérieure on the morning of Friday April 19th (see details below).


Thursday April 18th, 15h-18h, IRCAM

The Cultural Evolution of Music

15h-16h – Dr. Sam Mehr (Harvard University, MA, USA)
A natural history of song

Theories of the origins of music claim that the music faculty is shaped by the functional design of the human mind. On this view, musical behavior and musical structure are expected to exhibit species-wide regularities: music should be characterized by human universals. Many cognitive and evolutionary scientists intuitively accept this idea, but no one has any good evidence for it. Most scholars of music, in contrast, intuitively accept the opposite position, citing the staggering diversity of the world’s music as evidence that music is shaped mostly by culture. I will present two papers that attempt to resolve this debate. The first, a pair of experiments, shows that the musical forms of songs in 86 cultures are shaped by their social functions (Mehr & Singh et al., 2018, Current Biology). The second, a descriptive project, applies tools of computational social science to the recently created Natural History of Song corpora (http://naturalhistoryofsong.org) to demonstrate universals and dimensions of variation in musical behaviors and musical forms (Mehr et al., working paper, https://psyarxiv.com/emq8r).

Samuel Mehr is a Research Associate in the Department of Psychology at Harvard University, where he directs the Music Lab. Originally a musician, Sam earned a B.M. in Music Education from the Eastman School of Music before diving into science at Harvard, where he earned an Ed.D. in Human Development and Education under the mentorship of Elizabeth Spelke, Howard Gardner, and Steven Pinker. 


16h-17h – Dr Nicolas Baumard (Ecole Normale Supérieure, Paris)
Psychological Origins of Cultural Revolutions

Social trust is linked to a host of positive societal outcomes, including improved economic performance, lower crime rates and more inclusive institutions. Yet the origins of trust remain elusive, partly because social trust is difficult to document over time. Building on recent advances in social cognition, we designed an algorithm that automatically generates trustworthiness evaluations from the facial action units (smile, eyebrows, etc.) of portraits in large historical databases. Our results show that trustworthiness in portraits increased over the period 1500-2000, paralleling the decline of interpersonal violence and the rise of democratic values observed in Western Europe. Further analyses suggest that this rise in trustworthiness displays is associated with increased living standards.
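To make the approach concrete, here is a toy sketch of such a scoring rule, not the study's actual algorithm: action-unit intensities, assumed already extracted from a portrait by some face-analysis toolkit, are combined in a weighted sum. The AU names follow the Facial Action Coding System, but the intensities and weights below are invented placeholders.

```python
# Toy trustworthiness score: weighted sum of facial action-unit intensities.
# Weights are placeholders for illustration, not values from the study.

# Hypothetical AU intensities (0-1) measured on one portrait
aus = {
    "AU12_lip_corner_puller": 0.8,   # smiling
    "AU01_inner_brow_raiser": 0.3,
    "AU04_brow_lowerer": 0.1,        # frowning
}

weights = {
    "AU12_lip_corner_puller": 1.2,   # smiles raise perceived trustworthiness
    "AU01_inner_brow_raiser": 0.4,
    "AU04_brow_lowerer": -0.9,       # lowered brows reduce it
}

score = sum(weights[au] * value for au, value in aus.items())
print(f"trustworthiness ~ {score:.2f}")   # higher = more trustworthy-looking
```

Applied to every portrait in a dated database, scores like this one can then be averaged per period to trace displays of trustworthiness over time.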

Nicolas Baumard is a CNRS researcher in the Department of Cognitive Science at the École Normale Supérieure in Paris, working in the Evolution and Social Cognition team at the Institut Jean-Nicod. His work applies evolutionary and psychological approaches to the social sciences, in particular economics and history. More specifically, his recent work has used reciprocity theory (in particular partner choice) to explain why moral judgments and cooperative behaviors are based on considerations of fairness, and life-history theory to explain behavioral variability across cultures, history, social classes and developmental stages.


17h-18h – Pascal Dusapin – The Lullaby Experience Project

Update (18/4): The participation of Pascal Dusapin has been cancelled due to unforeseen circumstances.

Lullaby Experience is a participatory project conceived by composer Pascal Dusapin, open to everyone, children and adults alike, anywhere in the world. Each of us carries, inscribed deep within, a melody that marked our childhood. Often, this tune has been distorted by time and memory. It is this memory that participants are asked to sing, or to whisper. The recordings collected will provide the sound material used by the composer for the musical work Lullaby Experience. Transformed and assembled, they will draw a sound portrait of each city where the piece is presented. The French premiere of Lullaby Experience will take place at the CENTQUATRE in Paris in June 2019.

Pascal Dusapin studied visual arts, science and aesthetics at the Université de Paris-Sorbonne. Between 1974 and 1978 he attended the seminars of Iannis Xenakis, and from 1981 to 1983 he was a fellow of the Villa Médicis in Rome. He has received numerous distinctions since the beginning of his career as a composer, among them the Prix symphonique de la Sacem in 1994, the Grand prix national de musique of the French Ministry of Culture in 1995, and the Grand prix de la ville de Paris in 1998. He was awarded a Victoire de la musique in 1998 for his recording with the Orchestre national de Lyon, and again in 2002 as "composer of the year". In 2005 he received the Prix Cino del Duca from the Académie des Beaux-arts. He is a Commandeur des Arts et des Lettres. He was elected to the Bayerische Akademie der Schönen Künste in July 2006, and the same year was appointed professor at the Collège de France, holding the chair of artistic creation. In 2007 he was awarded the Dan David Prize, an international prize recognizing scientific and artistic achievement, which he shared with Zubin Mehta for contemporary music. In 2014 he was made Chevalier of the Ordre national de la Légion d'honneur.


Friday April 19th, 11h, Ecole Normale Supérieure.

Salle séminaire du pavillon jardin, 29 rue d’Ulm.

Additional lecture by Sam Mehr: The origins and functions of music in infancy

In 1871, Darwin wrote, “As neither the enjoyment nor the capacity of producing musical notes are faculties of the least use to man in reference to his daily habits of life, they must be ranked among the most mysterious with which he is endowed.” Infants and parents engage their mysterious musical faculties eagerly, frequently, across most societies, and for most of history. Why should this be? In this talk I propose that infant-directed song functions as an honest signal of parental investment. I support the proposal with two lines of work. First, I show that the perception and production of infant-directed song are characterized by human universals, in cross-cultural studies of music perception run with listeners on the internet; in isolated, small-scale societies; and in infants, who have much less experience with music than adults. Second, I show that the genomic imprinting disorders Prader-Willi and Angelman syndromes, which cause an altered psychology of parental investment, are associated with an altered psychology of music. These findings converge on a psychological function of music in infancy that may underlie more general features of the human music faculty.


Research internship: voice transformations for surgical anxiety

The CREAM team is looking for a French-speaking student for an M2-level research internship on the influence of an acoustic manipulation of the voice on preoperative anxiety.

Period: March to July 2019

Supervisors: Gilles Guerrier (physician, surgical anaesthesia and intensive care, Hôpital Cochin, APHP, Paris) & Jean-Julien Aucouturier (CNRS research scientist, IRCAM, Paris)

Context: The internship is offered as part of a clinical study conducted in collaboration between Hôpital Cochin and the Institut de Recherche et Coordination en Acoustique/Musique (IRCAM) in Paris. It is funded by the ANR project REFLETS (“Rétroaction Faciales et Linguistiques et Etats de Stress Traumatiques”), which aims to study the impact of the voice on emotions.

Project description: Preoperative anxiety is a long-documented phenomenon in patients about to undergo surgery [Corman Am J Surg 1958]. Anxiety affects how well patients retain the information and instructions provided by the various people they interact with (surgeon, anaesthetist, paramedical and administrative staff) [Johnston Ann Behav Med 1993]. Few non-pharmacological means of limiting this anxiety currently exist. Our recent work has shown that it is possible to transform the sound of a spoken voice in real time, over the course of a conversation, to give it emotional characteristics [Arias IEEE Trans. Aff. Comp. 2018]. In healthy volunteers, we have shown that a voice transformed to sound more smiling has a positive emotional impact on the listener [Arias Cur. Biol. 2018]. We propose to evaluate the impact of this voice-modulation device on the quality of care of anxious patients before surgery.

The student's role in the project: The intern will first take part in enrolling participants in the outpatient surgery department of Hôpital Cochin, under the supervision of Dr. Gilles Guerrier. They will be responsible for explaining the protocol to participants, obtaining their consent, setting up the necessary equipment (headset, microphone), and ensuring that data are properly collected and recorded before and after the procedure. The intern will then take part in analysing the collected data, interpreting them, and writing up a report or scientific article.

Desired profile: We are looking for a student with a medical or paramedical background (e.g. clinical research associate, speech therapy), or with training in experimental cognitive science oriented towards clinical research and health innovation. The ideal candidate will have experience of, and interest in, interacting with patients in a hospital setting, good knowledge of clinical-trial procedures (randomisation, consent, etc.), and excellent abilities both to manage a patient enrolment and follow-up schedule and to ensure the traceability of recorded data using computer tools. Familiarity with sound recording or the human voice will be a plus.

Conditions:
The internship will be covered by a three-party agreement between the student, the M2 degree-granting institution, and IRCAM. It will be paid at the standard rate of approximately €500 per month.

How to apply: Send a CV and a cover letter addressing the points above to Gilles Guerrier (guerriergilles@gmail.com) & Jean-Julien Aucouturier (aucouturier@gmail.com).


Symposium: the evolution of vocal and facial expressions

On the occasion of the PhD defense of Pablo Arias on Dec. 18th, the CREAM lab is happy to organize a mini-symposium on recent results on the evolution and universality of vocal and facial expressions, with two prominent researchers in the field, Dr. Rachael Jack (School of Psychology, University of Glasgow) and Prof. Tecumseh Fitch (Department of Cognitive Biology, University of Vienna). The two talks will be followed in the afternoon by the PhD viva of Pablo Arias on “auditory smiles”, which is also open to the public.

Date: Tuesday December 18th

Hours: 10h30-12h (symposium), 14h (PhD viva)

Place: Salle Stravinsky, Institut de Recherche et Coordination en Acoustique/Musique (IRCAM), 1 Place Stravinsky 75004 Paris. [access]


Tuesday Dec. 18th, 10h30-12h

Symposium: The evolution of facial and vocal expressions (Dr. Rachael Jack, Prof. Tecumseh Fitch)


10h30-11h15 – Dr. Rachael Jack (University of Glasgow, UK)

Modelling Dynamic Facial Expressions Across Cultures

Facial expressions are one of the most powerful tools for human social communication. However, understanding facial expression communication is challenging, owing to the sheer number and complexity of the expressions involved. Here, I present a program of work designed to address this challenge using a combination of social and cultural psychology, vision science, data-driven psychophysical methods, mathematical psychology, and 3D dynamic computer graphics. Across several studies, I will present work that precisely characterizes how facial expressions of emotion are signaled and decoded within and across cultures, and shows that cross-cultural emotion communication comprises four, not six, main categories. I will also highlight how this work has the potential to inform the design of socially and culturally intelligent robots.



11h15-12h – Prof. Tecumseh Fitch (University of Vienna, Austria)

The evolution of voice formant perception

Abstract t.b.a.




Tuesday Dec. 18th, 14h-16h30

PhD Defense: Auditory smiles (Pablo Arias)

At 14h on the same day, Pablo Arias (PhD candidate, Sorbonne-Université) will defend his PhD thesis, conducted in the CREAM Lab/ Perception and Sound Design Team (STMS – IRCAM/CNRS/Sorbonne Université). The viva is public, and all are welcome.

14h-16h30 – Mr. Pablo Arias (IRCAM, CNRS, Sorbonne Université)

The cognition of auditory smiles: a computational approach

Emotions are the fuel of human survival and social development. Not only do we undergo primitive reflexes mediated by ancient brain structures, but we also consciously and unconsciously regulate our emotions in social contexts, affiliating with friends and distancing ourselves from foes. One of our main tools for emotion regulation is facial expression and, in particular, smiles. Smiles are deeply grounded in human behavior: they develop early and are used across cultures to communicate affective states. The mechanisms that underlie their cognitive processing involve interactions not only with the visual system, but also with emotional and motor systems. Smiles trigger facial imitation in their observers, a reaction thought to be a key component of the human capacity for empathy. Smiles, however, are not only experienced visually: they also have audible consequences. Although visual smiles have been widely studied, almost nothing is known about the cognitive processing of their auditory counterpart.

Filling this gap is the aim of this dissertation. In this work, we characterise and model the smile's acoustic fingerprint, and use it to probe how auditory smiles are processed cognitively. We give evidence that (1) auditory smiles can trigger unconscious facial imitation, that (2) they are cognitively integrated with their visual counterparts during perception, and that (3) the development of these processes does not depend on pre-learned visual associations. We conclude that the embodied mechanisms associated with the visual processing of facial expressions of emotion are equally found in the auditory modality, and that their cognitive development is at least partially independent of visual experience.

Download link: Thesis manuscript

Thesis Committee:

  • Prof. Tecumseh Fitch – Reviewer – Department of Cognitive Biology, University of Vienna
  • Dr. Rachael Jack – Reviewer – School of Psychology, University of Glasgow
  • Prof. Julie Grèzes – Examiner – Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris
  • Prof. Catherine Pelachaud – Examiner – Institut des Systèmes Intelligents et de Robotique, Sorbonne Université/CNRS, Paris.
  • Prof. Martine Gavaret – Examiner – Service de Neurophysiologie, Groupement Hospitalier Sainte-Anne, Paris.
  • Dr. Patrick Susini – Thesis Director – STMS, IRCAM/CNRS/Sorbonne Université, Paris
  • Dr. Pascal Belin – Thesis Co-director – Institut des Neurosciences de la Timone, Aix-Marseille Université.
  • Dr. Jean-Julien Aucouturier – Thesis Co-director – STMS, IRCAM/CNRS/Sorbonne Université, Paris


Symposium: Recent voice research from the Netherlands

On the occasion of the PhD defense of Laura Rachman on Dec. 7th, the CREAM lab is happy to organize a mini-symposium on recent affective science and neuroscience of the voice, with two prominent researchers from the Netherlands, Prof. Disa Sauter (Department of Social Psychology, Universiteit van Amsterdam) and Prof. Sonja Kotz (Department of Neuropsychology & Psychopharmacology, Maastricht University). The two talks will be followed in the afternoon by the PhD viva of Laura Rachman, which is also open to the public.

Date: Friday December 7th

Hours: 10h30-12h (symposium), 14h (PhD viva)

Place: Salle Stravinsky, Institut de Recherche et Coordination en Acoustique/Musique (IRCAM), 1 Place Stravinsky 75004 Paris. [access]



Friday Dec. 7th, 10h30-12h

Symposium: Recent voice research from the Netherlands (Prof. Disa Sauter, Prof. Sonja Kotz)


10h30-11h15 – Prof. Disa Sauter (Universiteit van Amsterdam, NL)

Preparedness for emotions: Evidence for discrete negative and positive emotions from vocal signals

We all have emotions, but where do they come from? Functional accounts of emotion propose that emotions are adaptations which have evolved to help us deal with recurring challenges and opportunities. In this talk, I will present evidence of preparedness from studies of emotional vocalisations like laughs, screams, and sighs. This work suggests that a number of negative and positive emotional states are associated with discrete, innate, and universal vocal signals.


11h15-12h – Prof. Sonja Kotz (Maastricht University, NL)

Prediction in voice and speech

Prediction in voice and speech processing is determined by “when” an event is likely to occur (regularity), and “what” type of event can be expected at a given point in time (order). In line with these assumptions, I will present a cortico-subcortical model that involves the division of labor between the cerebellum and the basal ganglia in the predictive tracing of acoustic events. I will discuss recent human electrophysiological and fMRI data in line with this model.



Friday Dec. 7th, 14h-16h30

PhD Defense: The “other-voice” effect (Laura Rachman)

At 14h on the same day, Laura Rachman (PhD candidate, Sorbonne-Université) will defend her PhD thesis, conducted in the CREAM Lab/ Perception and Sound Design Team (STMS – IRCAM/CNRS/Sorbonne Université). The viva is public, and all are welcome.


14h-16h30 – Ms. Laura Rachman (IRCAM, CNRS, Sorbonne Université)

The “other-voice” effect: how speaker identity and language familiarity influence the way we process emotional speech

The human voice is a powerful tool to convey emotions. Humans hear voices on a daily basis and are able to rapidly extract relevant information to successfully interact with others. The theoretical aim of this dissertation is to investigate the role of familiarity in emotional voice processing. A set of behavioral and electrophysiological studies investigated how self- versus non-self-produced voices influence the processing of emotional speech utterances. By contrasting self and other, familiarity is here assessed at a personal level. The results of a first set of studies show a dissociation between explicit and implicit processing of the self-voice. While explicit discrimination of an emotional self-voice and other-voice was somewhat impaired, implicit self-processing prompted a self-advantage in emotion recognition and speaker discrimination. The results of a second set of studies show a prioritization of the non-self voice in the processing of emotional and low-level acoustic changes, reflected in faster electrophysiological (EEG) and behavioral responses. In a third set of studies, the effect of voice familiarity on emotional voice perception is assessed at a larger sociocultural scale by comparing speech utterances in the native and a foreign language. Taken together, this dissertation highlights some of the ways in which the ‘otherness’ of a voice, whether that of a non-self speaker or of a foreign-language speaker, is processed with higher priority on the one hand, but with less acoustic precision on the other.

Download link: Thesis manuscript

Thesis Committee:

  • Prof. Sonja Kotz – Reviewer – Department of Neuropsychology and Psychopharmacology, Maastricht University
  • Prof. Pascal Belin – Reviewer – Institut de Neurosciences de la Timone, CNRS, Aix-Marseille Université
  • Prof. Disa Sauter – Examiner – Department of Social Psychology, Universiteit van Amsterdam
  • Dr. Marie Gomot – Examiner – Centre de Pédopsychiatrie, INSERM, Université de Tours
  • Prof. Mohamed Chetouani – Examiner – Institut des Systèmes Intelligents et de Robotique, Sorbonne Université
  • Dr. Stéphanie Dubal – Thesis Co-director – Institut du Cerveau et de la Moelle épinière, CNRS, Sorbonne Université
  • Dr. Jean-Julien Aucouturier – Thesis Co-director – STMS, IRCAM/CNRS/Sorbonne Université


ANGUS: the Highway to Yell

ANGUS is a real-time voice transformation tool able to simulate cues of arousal/roughness on arbitrary voice signals with a high degree of realism. Vocal roughness is generated by highly unstable modes of vibration of the vocal folds and tract, which produce subharmonics and nonlinear components that are not present in standard phonation. We propose to simulate this physiological mechanism using multiple amplitude modulations driven by the fundamental frequency of the incoming sound.
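For intuition, here is a minimal offline sketch of that mechanism, not the ANGUS implementation itself: it tracks the fundamental frequency of a recording, then amplitude-modulates the voiced portions at half of f0, which injects a subharmonic one octave below the voice. The file names, modulation depth, and the choice of f0/2 as modulator frequency are all illustrative assumptions; the sketch requires numpy, librosa and soundfile.

```python
# Toy roughness effect: amplitude modulation driven by the voice's own f0.
# Illustrative sketch only -- ANGUS itself runs in real time and combines
# multiple modulations; names and parameter values here are assumptions.
import numpy as np
import soundfile as sf
import librosa

y, sr = sf.read("voice.wav")                 # hypothetical input file
if y.ndim > 1:
    y = y.mean(axis=1)                       # mix down to mono

# Frame-wise f0 track (NaN on unvoiced frames); pyin's default hop is 512
f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
t_frames = librosa.frames_to_time(np.arange(len(f0)), sr=sr, hop_length=512)
f0 = np.nan_to_num(f0)                       # unvoiced frames -> 0 Hz

# One f0 value per sample, then a modulator oscillating at f0/2
f0_samp = np.interp(np.arange(len(y)) / sr, t_frames, f0)
phase = 2 * np.pi * np.cumsum(f0_samp / 2) / sr
depth = 0.6                                  # 0 = clean voice, 1 = maximal roughness
mod = 1.0 - depth * 0.5 * (1.0 + np.sin(phase))

# Apply the modulation to voiced samples only, leave the rest untouched
rough = y * np.where(f0_samp > 0, mod, 1.0)
sf.write("voice_rough.wav", rough, sr)
```

Because the modulator sits at an exact sub-multiple of f0, the sidebands it creates fall between the voice's harmonics, which is what listeners hear as subharmonic roughness.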



Cracking the social code of speech prosody

New paper out this month in PNAS, in which we use new audio software (CLEESE) to deploy reverse correlation in the space of speech prosody, and uncover robust and shared mental representations of trustworthiness and dominance in a speaker’s voice. The paper is open access; data and analysis code are freely available at https://zenodo.org/record/1186278, and the CLEESE software is open source and available as a free download here.

Ponsot, E., Burred, JJ., Belin, P. & Aucouturier, JJ. (2018) Cracking the social code of speech prosody using reverse correlation, Proceedings of the National Academy of Sciences. [html] [pdf]
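For readers unfamiliar with the method, here is a minimal sketch of first-order reverse correlation in prosody space, under toy assumptions of our own (random placeholder data, a task where listeners pick the more trustworthy of two random pitch-contour variants); the actual analysis code is in the Zenodo archive above.

```python
# First-order reverse correlation on pitch contours (toy data).
# Real experiments use hundreds of trials per listener; the responses
# below are random placeholders, so this kernel will hover near zero.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_segments = 700, 6    # e.g. 6 pitch breakpoints per utterance

# Stimuli: random pitch shifts (in cents) applied to each segment
contours = rng.normal(0.0, 70.0, size=(n_trials, n_segments))
# Responses: 1 if the listener judged this variant more trustworthy
chosen = rng.integers(0, 2, size=n_trials)

# Kernel = mean contour of chosen trials minus mean of rejected ones;
# its peaks reveal which pitch movements drive the social judgment
kernel = contours[chosen == 1].mean(axis=0) - contours[chosen == 0].mean(axis=0)
print(np.round(kernel, 2))
```

With real listener responses instead of random ones, the kernel recovers the pitch contour that the listener's internal representation of, say, a trustworthy voice is tuned to.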

Press:

Science: http://www.sciencemag.org/news/2018/03/want-sound-someone-people-can-trust-new-software-could-help

Australia Science Channel: https://australiascience.tv/how-to-make-a-good-impression-its-not-what-you-say-its-how-you-say-it/



(S)CREAM ! An impromptu workshop on screams

The CREAM lab is organizing a short, impromptu workshop on the biology, cultural history, musicality and acoustics of !screams!, to be held at IRCAM, Paris, on Thursday 22nd June, 2-5pm. The workshop will consist of four invited talks, followed by a discussion over drinks and cakes.



Date: Thursday 22nd June 2017, 2-5pm

Place: Stravinsky Room, IRCAM, 1 Place Stravinsky, 75004 Paris.

Attendance: free, subject to seat availability.

Local organizers: Louise Goupil (louise.goupil@ircam.fr), JJ Aucouturier (aucouturie@gmail.com)




Ministry of Silly Talks: Infinite numbers of prosodic variations with C.L.E.E.S.E.

C.L.E.E.S.E. (Combinatorial Expressive Speech Engine) is a tool designed to generate an infinite number of natural-sounding, expressive variations around an original speech recording. More precisely, C.L.E.E.S.E. creates random fluctuations around the file’s original contour of pitch, loudness, timbre and speed (i.e. roughly defined, its prosody). One of its applications is the generation of very many random voice stimuli for reverse correlation experiments, or whatever else you fancy, really.
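To illustrate the principle rather than C.L.E.E.S.E.'s actual interface (see the download link above for the real thing), here is a sketch that draws a random breakpoint function over a recording and applies it as piecewise pitch shifts, with librosa's pitch shifting standing in for CLEESE's own processing engine; file names and parameter values are assumptions.

```python
# Generate N random pitch variants of a base recording, CLEESE-style.
# Simplification: CLEESE interpolates smoothly between breakpoints,
# whereas this sketch applies one constant shift per time window.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("bonjour.wav", sr=None)     # hypothetical base recording
rng = np.random.default_rng()

n_windows = 6                                    # breakpoints along the file
bounds = np.linspace(0, len(y), n_windows + 1, dtype=int)

for k in range(10):                              # 10 random variants
    bpf = rng.normal(0.0, 70.0, size=n_windows)  # Gaussian shifts, in cents
    out = np.concatenate([
        librosa.effects.pitch_shift(y[a:b], sr=sr, n_steps=cents / 100.0)
        for a, b, cents in zip(bounds[:-1], bounds[1:], bpf)
    ])
    sf.write(f"bonjour_variant_{k}.wav", out, sr)
```

Scaling the loop count from 10 to several hundred yields the stimulus sets needed for reverse-correlation experiments such as the PNAS study above.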

