Project Details
TRR 169: Crossmodal Learning: Adaptivity, Prediction and Interaction
Subject Area
Social and Behavioural Sciences
Humanities
Computer Science, Systems and Electrical Engineering
Medicine
Term
from 2016 to 2024
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 261402652
The term “crossmodal learning” refers to the adaptive, synergistic integration of complex perceptions from multiple sensory modalities, such that learning within any individual sensory modality can be enhanced by information from one or more other modalities. Crossmodal learning is crucial for human understanding of the world, and examples are ubiquitous: learning to grasp and manipulate objects, learning to read and write, learning to understand language, and so on. In all these cases, visual, auditory, somatosensory, or other modalities have to be integrated.
The long-term goal of our research is to develop an interdisciplinary understanding of the neural, cognitive, and computational mechanisms of crossmodal learning. This understanding will allow us to pursue the following primary sub-goals of the research programme: (1) to enrich our current understanding of the multisensory processes underlying the human mind and brain, (2) to create detailed formal models that describe crossmodal learning in both humans and machines, and (3) to build artificial systems for tasks requiring a crossmodal conception of the world.
The purpose of this project is to continue the Transregional Collaborative Research Centre (TRR 169) as an interdisciplinary cooperation between the existing fields of computer science, psychology, and neuroscience, focused on strengthening the newly established discipline of crossmodal learning. Our aim is therefore to continue our collaborative centre as the primary research vehicle at the focal point of this new discipline. Building on a successful first funding period and extensive groundwork of collaborative research between Germany and China, the second phase of the centre is to be jointly funded by the DFG (Deutsche Forschungsgemeinschaft) and the NSFC (National Natural Science Foundation of China) as an international collaboration between the University of Hamburg, the University Medical Center Hamburg-Eppendorf, and three leading universities in China (Tsinghua University, Beijing Normal University, and Peking University) as well as the Institute of Psychology of the Chinese Academy of Sciences, all located in Beijing, China.
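As an illustration of sub-goal (2), one minimal formal sketch of crossmodal integration (not taken from the project description itself, but a standard textbook model) is reliability-weighted cue combination: unimodal estimates, e.g. a visual estimate $\hat{s}_V$ and an auditory estimate $\hat{s}_A$ with noise variances $\sigma_V^2$ and $\sigma_A^2$, are fused into a single percept by weighting each cue by its relative reliability:

$$
\hat{s} = w_V\,\hat{s}_V + w_A\,\hat{s}_A,\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2},\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_V^2 + 1/\sigma_A^2},\qquad
\sigma^2 = \frac{\sigma_V^2\,\sigma_A^2}{\sigma_V^2 + \sigma_A^2}.
$$

Under Gaussian noise assumptions the combined estimate is at least as reliable as either unimodal estimate ($\sigma^2 \le \min(\sigma_V^2, \sigma_A^2)$), which captures, in the simplest possible form, how integrating information across modalities can enhance processing within any single modality.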
DFG Programme
CRC/Transregios
International Connection
China
Current projects
- A01 - Adaptation of multisensory processing to changing priors and sensory evidence (Project Heads Bruns, Patrick ; Fu, Xiaolan ; Hong, Bo ; Röder, Brigitte )
- A02 - Neural circuits for crossmodal memory (Project Heads Guan, Ji-Song ; Hilgetag, Claus Christian )
- A03 - Crossmodal learning in health and neurological disease: neurocomputational representation and therapeutic application (Project Heads Gerloff, Christian ; Hummel, Friedhelm Christoph ; Xue, Gui )
- A04 - Crossmodal representation facilitating robust robot behaviour (Project Heads Wang, Yizhou ; Zhang, Jianwei ; Zhang, Changshui )
- A05 - Neurorobotic models for crossmodal joint attention and social interaction (Project Heads Liu, Xun ; Wermter, Stefan )
- A06 - Deep learning for robust audio-visual processing (Project Heads Frintrop, Simone ; Gerkmann, Timo ; Hu, Xiaolin ; Weber, Cornelius )
- B01 - Modulation of neural mechanisms underlying crossmodal predictions (Project Heads Engel, Andreas K. ; Hu, Xiaolin ; Zhang, Dan )
- B03 - Neurocognitive mechanisms for implicit learning of crossmodal predictions (Project Heads Fu, Qiufang ; Gao, Xiaorong ; Rose, Michael )
- B04 - Brain dynamics of top-down control on crossmodal congruency (Project Heads Engel, Andreas K. ; Liu, Xun ; Nolte, Guido )
- B05 - Crossmodal transfer of dexterous manipulation skills (Project Heads Sun, Fuchun ; Zhang, Jianwei )
- C04 - Neurocognitive models of crossmodal language learning (Project Heads Liu, Zhiyuan ; Weber, Cornelius ; Wermter, Stefan )
- C07 - Crossmodal learning for improving human reading (Project Heads Biemann, Christian ; Li, Xingshan ; Qu, Qingqing )
- C08 - Crossmodal bindings and plasticity during visual-haptic interaction for novel forms of therapy (Project Heads Chen, Lihan ; Kühn, Simone ; Steinicke, Frank ; Wei, Kunlin )
- C09 - The role of mental models and sense of agency in learning crossmodal communicative acts (Project Heads Fu, Xiaolan ; Gläscher, Jan )
- MGKZ02 - Integrated research training group (Project Heads Engel, Andreas K. ; Fu, Xiaolan ; Zhang, Jianwei )
- Z01 - Central activities of the Collaborative Research Centre (Project Heads Sun, Fuchun ; Zhang, Jianwei )
- Z03 - Integration initiatives for model software and robotic demonstrators (Project Heads Sun, Fuchun ; Wermter, Stefan ; Zhang, Jianwei )
Completed projects
- B02 - Bayesian analysis of the interaction of learning, semantics and social influence with crossmodal integration (Project Heads Gläscher, Jan ; Zhu, Jun )
- C05 - Combining active vision with multimodal speech comprehension (Project Heads Li, Xingshan ; Menzel, Wolfgang ; Qu, Qingqing )
- C06 - Multisensory perception, learning and action for crossmodal tele-operations (Project Heads Chen, Lihan ; Fang, Fang ; Steinicke, Frank )
Applicant Institution
Universität Hamburg
Co-Applicant Institution
Tsinghua University
Spokesperson
Professor Dr. Jianwei Zhang