Symposium: the evolution of vocal and facial expressions

On the occasion of the PhD defense of Pablo Arias on Dec. 18th, the CREAM lab is happy to organize a mini-symposium on recent results on the evolution and universality of vocal and facial expressions, with two prominent researchers in the field: Dr. Rachael Jack (School of Psychology, University of Glasgow) and Prof. Tecumseh Fitch (Department of Cognitive Biology, University of Vienna). The two talks will be followed in the afternoon by the PhD viva of Pablo Arias on “auditory smiles”, which is also public.

Date: Tuesday December 18th

Hours: 10h30-12h (symposium), 14h (PhD viva)

Place: Salle Stravinsky, Institut de Recherche et Coordination en Acoustique/Musique (IRCAM), 1 Place Stravinsky 75004 Paris. [access]

Tuesday Dec. 18th, 10h30-12h

Symposium: The evolution of facial and vocal expressions (Dr. Rachael Jack, Prof. Tecumseh Fitch)

10h30-11h15 – Dr. Rachael Jack (University of Glasgow, UK)

Modelling Dynamic Facial Expressions Across Cultures

Facial expressions are one of the most powerful tools for human social communication. However, understanding facial expression communication is challenging due to their sheer number and complexity. Here, I present a program of work designed to address this challenge using a combination of social and cultural psychology, vision science, data-driven psychophysical methods, mathematical psychology, and 3D dynamic computer graphics. Across several studies, I will present work that precisely characterizes how facial expressions of emotion are signaled and decoded within and across cultures, and shows that cross-cultural emotion communication comprises four, not six, main categories. I will also highlight how this work has the potential to inform the design of socially and culturally intelligent robots.

11h15-12h – Prof. Tecumseh Fitch (University of Vienna, Austria)

The evolution of voice formant perception

Abstract t.b.a.

Tuesday Dec. 18th, 14h-16h30

PhD Defense: Auditory smiles (Pablo Arias)

At 14h on the same day, Pablo Arias (PhD candidate, Sorbonne-Université) will defend his PhD thesis, conducted in the CREAM Lab / Perception and Sound Design Team (STMS – IRCAM/CNRS/Sorbonne Université). The viva is public, and all are welcome.

14h-16h30 – M. Pablo Arias (IRCAM, CNRS, Sorbonne Université)

The cognition of auditory smiles: a computational approach

Emotions are the fuel of human survival and social development. Not only do we undergo primitive reflexes mediated by ancient brain structures, but we also consciously and unconsciously regulate our emotions in social contexts, affiliating with friends and distancing from foes. One of our main tools for emotion regulation is facial expression and, in particular, smiles. Smiles are deeply grounded in human behavior: they develop early, and are used across cultures to communicate affective states. The mechanisms that underlie their cognitive processing involve interactions not only with visual, but also emotional and motor systems. Smiles trigger facial imitation in their observers, a reaction thought to be a key component of the human capacity for empathy. Smiles, however, are not only experienced visually, but also have audible consequences. Although visual smiles have been widely studied, almost nothing is known about the cognitive processing of their auditory counterpart.

This is the aim of this dissertation. In this work, we characterise and model the smile acoustic fingerprint, and use it to probe how auditory smiles are processed cognitively. We give here evidence that (1) auditory smiles can trigger unconscious facial imitation, that (2) they are cognitively integrated with their visual counterparts during perception, and that (3) the development of these processes does not depend on pre-learned visual associations. We conclude that the embodied mechanisms associated with the visual processing of facial expressions of emotion are in fact equally found in the auditory modality, and that their cognitive development is at least partially independent of visual experience.

Download link: Thesis manuscript

Thesis Committee:

  • Prof. Tecumseh Fitch – Reviewer – Department of Cognitive Biology, University of Vienna
  • Dr. Rachael Jack – Reviewer – School of Psychology, University of Glasgow
  • Prof. Julie Grèzes – Examiner – Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris
  • Prof. Catherine Pelachaud – Examiner – Institut des Systèmes Intelligents et de Robotique, Sorbonne Université/CNRS, Paris.
  • Prof. Martine Gavaret – Examiner – Service de Neurophysiologie, Groupement Hospitalier Sainte-Anne, Paris.
  • Dr. Patrick Susini – Thesis Director – STMS, IRCAM/CNRS/Sorbonne Université, Paris
  • Dr. Pascal Belin – Thesis Co-director – Institut des Neurosciences de la Timone, Aix-Marseille Université.
  • Dr. Jean-Julien Aucouturier – Thesis Co-director – STMS, Ircam/CNRS/Sorbonne Université, Paris


Bullying the doc: stronger-sounding patients get more 911 attention

Our team published a new paper this week, in which we test the influence of patients’ tone of voice on medical decisions.

In line with our recent real-time emotional voice transformations, we manipulated the voices of (fake) patients calling a 911 phone simulator used to train (real) medical doctors, to make them sound more, or less, physically dominant (deeper, more masculine voices corresponding to lower pitch and greater formant dispersion; see e.g. Sell, A. et al., Adaptations in humans for assessing physical strength from the voice. Proceedings of the Royal Society of London B: Biological Sciences, 277(1699), 3509–3518 (2010) – link).
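As a toy illustration of this kind of manipulation (not the team's actual real-time transformation tool, and with an arbitrary scaling factor), the crudest way to "deepen" a voice in code is to rescale all of its frequencies by resampling. Note that this shifts the fundamental (pitch) and the formants together, whereas dedicated voice-transformation tools control pitch and formant frequencies independently:

```python
import numpy as np

def deepen_voice(signal: np.ndarray, sr: int, factor: float = 0.9) -> np.ndarray:
    """Crudely 'deepen' a voice: resampling so the signal plays back at
    `factor` < 1 of its original rate scales every frequency component
    down by `factor`, lowering pitch and formants at once.
    This is only a toy sketch, not a proper phase-vocoder transformation."""
    n_out = int(len(signal) / factor)            # slowed-down playback is longer
    old_t = np.arange(len(signal)) / sr          # original sample times
    new_t = np.arange(n_out) / sr * factor       # output sample i reads input time i*factor/sr
    return np.interp(new_t, old_t, signal)

# A 200 Hz tone standing in for a voice's fundamental frequency:
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
deep = deepen_voice(tone, sr, factor=0.9)        # dominant frequency is now ~180 Hz
```

The 0.9 factor here is purely illustrative; a real transformation would also need to preserve duration (e.g. via a phase vocoder) so that the caller's speech rate is unchanged.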

We found that patients whose voice signalled physical strength obtained a higher grade of response, a higher evaluation of medical emergency, and longer attention from medical doctors than callers with strictly identical medical needs whose voice signalled lower physical dominance.

The paper, a collaboration with Laurent Boidron M.D. and his colleagues at the Department of Emergency Medicine of the Dijon CHU Hospital/Université de Bourgogne, was published last Tuesday in Scientific Reports (link, pdf).


CREAM is looking for an RA! (fixed-term, 2 months)

EEG research assistant for running psychology/neuroscience experiments

Period: April to May 2016

The CREAM team (“Cracking the Emotional Code of Music”) of the STMS lab (UMR 9912, CNRS/IRCAM/UPMC, Paris) is looking to hire, on a fixed-term contract (full- or part-time), a research assistant to run several psychology and cognitive neuroscience experiments on the theme of music and emotions, several of which use EEG, over the period April to May 2016. The successful candidate will work in collaboration with the researchers who designed the experiments and will be responsible for data collection, working autonomously.

[PDF] Announcement

