Project Details

Audiovisual speech perception in cochlear implant recipients

Subject Area Otolaryngology, Phoniatrics and Audiology
Term from 2019 to 2024
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 416867313
 
Speech perception is multimodal in nature: besides auditory cues, visual features provide important information. This is especially evident in cochlear implant (CI) users, who may show particularly good speechreading ability. Nevertheless, speech perception in CI recipients is typically assessed purely auditorily, both in clinical routine and in research, which may at least in part be due to a lack of suitable audiovisual test materials. Considering both auditory and visual speech would increase the ecological validity of such measurements.

This project proposes a unique approach, namely the assessment of audiovisual speech perception by presenting visual cues via a computer-based articulation model ("talking head"). This approach has two important advantages. First, conventional speech audiometric materials can still be used, since they are merely supplemented by the visual cues provided by the talking head. Second, the articulation model makes it possible to present arbitrary visual cues (congruent, incongruent, or of different salience) and thus allows the use of highly controlled audiovisual stimuli.

The proposed project aims to establish various audiovisual speech materials suited for both clinical and scientific assessment of speech perception in CI recipients. Based on these materials, several topics will be addressed, such as the development of multimodal speech perception after implantation. Cross-modal reorganization will be examined by presenting congruent and incongruent audiovisual information and comparing interactions in the auditory cortex across different levels of speech processing. Multi-talker speech recognition will be examined in detail by considering the effects of talker characteristics and of visual speech cues that allow segregation of competing speech streams. Prosody perception will be investigated by addressing the role of visual speech cues in the perception of sentence stress. Moreover, fitting aspects such as bilateral versus bimodal CI use will be taken into account.

Since multimodal speech perception in CI recipients has not yet been comprehensively examined, and given the unique approach of presenting highly controlled and standardized visual cues via an articulation model, the project is expected to have significant clinical and scientific impact, allowing detailed insight into CI-mediated speech perception under ecologically valid conditions.
DFG Programme Research Grants
Co-Investigator Professor Dr. Martin Walger