Project Details
Auditory scene analysis and attentional focusing in speech perception under complex dynamic listening conditions in younger and older adults
Applicant
Privatdozent Dr. Stephan Getzmann
Subject Area
Biological Psychology and Cognitive Neuroscience
Term
from 2018 to 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 398995238
Speech comprehension under adverse (cocktail-party) listening conditions deteriorates with age. It is assumed that, in addition to genuine hearing deficits, age-related declines in cognitive abilities contribute to these difficulties. In the first phase of the project, we demonstrated that declines in attentional and inhibitory control affect speech perception, especially in highly dynamic auditory scenes and after speaker changes. Deficits observed in behavioural performance were associated with age-related changes in electrophysiological measures, reflecting increased distraction, less flexible attention allocation, and reduced preparation for the processing of relevant speech information in older adults. However, previous studies also showed that older adults benefit from cues indicating changes in the auditory scene and in speaker settings to a similar degree as younger adults. A quite natural type of cue is the visual speech information provided by the speaker's face. It has long been known that visual speech can substantially improve (auditory) speech comprehension. This bimodal audio-visual (AV) facilitation is based on the fact that visual speech carries specific predictive value for the auditory utterance. The integration of auditory and visual speech information reduces signal uncertainty and supports the extraction of relevant speech information, especially under adverse listening conditions. Interestingly, analyses of event-related potentials (ERPs) revealed that older adults benefit even more from bimodal speech information than younger adults. The degree to which audio-visual speech information may also enhance speech comprehension in highly dynamic cocktail-party situations is as yet unclear. Based on our previous results, however, it can be expected that visual speech also has a positive effect on speech comprehension when changes in the auditory scene and in speaker settings occur.
In the second phase of the project, this will be investigated in four experiments using realistic listening scenarios and combined behavioural and electrophysiological measures. By combining auditory speech with (a) congruent visual speech, (b) faces producing non-speech-specific lip movements, and (c) static faces, the specific benefits of AV speech under dynamic listening conditions will be tested in younger and older adults. The experiments focus on changing speaker settings and on the switch costs associated with change-related attentional switching. The underlying cortical processes will be analysed using ERP methodology. The analysis will focus on ERPs that proved sensitive to speaker changes (N400diff and LPCdiff), to attentional switching (N2ac and LPCpc), and to age (N2 and CNV) in the first phase of the project. In sum, the aim of the new experiments is to learn more about the neurophysiological mechanisms of audio-visual speech processing and their relation to benefits in speech comprehension under adverse listening conditions.
DFG Programme
Research Grants