Project Details
The origin of interindividual differences in audiovisual speech perception, combining neuroimaging, eye movements and behavior
Applicant
Johannes Rennig, Ph.D.
Subject Area
Human Cognitive and Systems Neuroscience
Cognitive, Systems and Behavioural Neurobiology
Term
from 2016 to 2019
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 319069319
Speech is the most common form of human communication and is fundamentally multisensory: observing the mouth of the talker allows us to identify otherwise ambiguous auditory information. However, recent discoveries indicate wide interindividual differences in the use of visual information during speech processing. This is easily demonstrated with the well-known McGurk illusion, in which combining two different auditory and visual syllables causes the percept of a third, completely different syllable. While some subjects are strongly influenced by the visual speech and always experience the illusion, others never do. Recently, two possible explanations for this variability were discovered. First, frequent perceivers of the McGurk effect were more likely to fixate the mouth of the talker, with a significant correlation between McGurk frequency and mouth-looking time. Second, on a neuronal level, individuals who perceived the McGurk effect were observed to have greater activity in the superior temporal sulcus (STS), a brain structure mediating multisensory integration and audiovisual speech perception. These two observations raise an obvious question: do eye movements drive STS activity, or is it vice versa? For the first time, we will measure audiovisual speech perception (measured with the McGurk effect), neural activity in the STS (measured with fMRI) and eye movements (measured with infrared eye tracking), and use structural equation modeling to untangle the causal relationships between them.
DFG Programme
Research Fellowships
International Connection
USA