Upcoming: Two invited talks on reverse-correlation for high-level auditory cognition

CREAM Lab is hosting a small series of distinguished talks on reverse-correlation this month:

  • Wednesday 22nd March 2017 (11:00) – Prof. Frédéric Gosselin (University of Montreal)
  • Thursday 23rd March 2017 (11:30) – Prof. Peter Neri (Ecole Normale Supérieure, Paris).

These talks are organised in the context of a workshop on reverse-correlation for high-level audio cognition, held at IRCAM on the same days (invitation-only). Both talks are free and open to all, at IRCAM (1 Place Stravinsky, 75004 Paris). Details (titles, abstracts) are below.


Wednesday 22nd March 2017 – 11:00 – Salle Stravinsky, IRCAM.

Prof. Frédéric Gosselin, University of Montreal

Title: Fifteen years of research in high-level vision with classification images

Update: the talk video is now online.

Abstract: Fifteen years ago, I invented a brute-force experimental technique—called Bubbles—capable of revealing the visual information that drives a measurable response. Bubbles and other classification image techniques have since met with considerable success in high-level vision. I will present what I consider to be the landmark studies, focussing on the face recognition literature. For example, I will discuss the findings that early occipito-temporal EEG activity correlates with the eye on the contralateral side of the face (Schyns et al., 2003; Smith, Gosselin & Schyns, 2004; Rousselet et al., 2014). I will discuss the discovery that the static facial features are sampled dynamically by the visual system (Vinette, Gosselin & Schyns, 2004; Blais et al., 2013). I will talk about evidence that face inversion alters face processing quantitatively, not qualitatively (Sekuler et al., 2004; Willenbockel et al., 2010; Royer et al., 2017). I will present the discovery that damage to the amygdala affects spontaneous eye processing (Adolphs et al., 2005; Gosselin et al., 2010). I will discuss the findings that culture impacts the representation of faces (Jack, Caldara & Schyns, 2012; Jack et al., 2012; Tardif et al., 2017). I will discuss the discovery of an abnormally high number of mouth cells in the amygdala of ASD individuals (Rutishauser et al., 2011; Rutishauser et al., 2013; Rutishauser, Mamelak & Adolphs, 2015; Wang et al., 2014).


Thursday 23 March 2017 – 11:30 – Salle Stravinsky, IRCAM.

Prof. Peter Neri, Ecole Normale Supérieure

Update: the talk video is now online.

Title: Classified noise: neurons versus people, blobs versus scenes, tones versus speech – can we put it all together?

Abstract: Answer: no, or at least not yet. I will discuss a wide range of applications for reverse-correlation methods, spanning an equally wide range of daunting interpretational challenges. Although very substantial progress has been made over the past 20 years in characterizing and understanding results from reverse-correlation experiments, I will highlight how much is still unresolved when attempting conceptual leaps between circuits and behaviour, artificial stimuli and natural statistics, vision and audition. It is tempting to make these connections on the basis of data-rich and seemingly transparent characterizations of the sensory process such as those typically returned by reverse correlating noise, but a closer look at the technicalities involved prompts caution and the need for more experimental as well as theoretical work.


About reverse-correlation:

The reverse-correlation technique was first introduced in neurophysiology to characterize neuronal receptive fields (Eggermont et al., 1983). It was then extended by psychophysicists to characterize sensory systems, using behavioral choices (e.g., yes/no responses) instead of neuronal spikes as the systems' output variables, with applications to a wide variety of low-level tasks in the auditory domain, e.g. detection of tones in noise (Ahumada & Lovell, 1971), discrimination of frequency distributions (Berg, 1989), or spectro-temporal loudness weighting in fluctuating noises and tones (e.g. Oberfeld et al., 2012; Ponsot et al., 2013). In the visual domain, these techniques have quickly been extended to address not only low-level sensory processes, but full-fledged cognitive mechanisms: face recognition (Mangini & Biederman, 2004), facial emotional expression (Jack et al., 2009, 2012a; Gosselin & Schyns, 2001) or social traits (Dotsch & Todorov, 2012).
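The core logic of a behavioral reverse-correlation (classification-image) experiment can be illustrated in a few lines. The sketch below is a toy simulation, not code from any of the cited studies: a hypothetical observer answers "yes" whenever a random noise stimulus resembles an internal template, and averaging the noise on "yes" trials minus "no" trials recovers an estimate of that template. All names and parameters (the Gaussian bump template, trial counts, internal-noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a 1-D "stimulus" of 64 features and an assumed
# internal template (a Gaussian bump centred on feature 32).
n_trials, n_features = 5000, 64
template = np.exp(-0.5 * ((np.arange(n_features) - 32) / 4) ** 2)

# Each trial presents a fresh Gaussian noise stimulus; the simulated
# observer responds "yes" when the stimulus-template correlation,
# perturbed by internal noise, exceeds zero.
noise = rng.standard_normal((n_trials, n_features))
internal_noise = rng.standard_normal(n_trials)
responses = (noise @ template + internal_noise) > 0

# Classification image: mean noise on "yes" trials minus "no" trials.
classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# With enough trials, the estimate correlates with the true template.
r = np.corrcoef(classification_image, template)[0, 1]
```

The same yes/no averaging logic underlies the auditory and visual studies cited above; what changes is the stimulus space (tones in noise, noisy face images) and the observer, which is a human participant rather than a simulated template matcher.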

Unlike in vision, the use of reverse-correlation in high-level auditory cognition of complex, natural stimuli like speech or music is still very much an emerging theme, with only a handful of very recent studies. By bringing together the main practitioners of this emerging community, these two days in Paris aim to move beyond the stage of relatively isolated proofs-of-concept, to pool the community's experience and create a roadmap for future research applying reverse-correlation techniques to high-level audio cognition.