Project Details

Mechanisms of spatial context learning through touch and vision

Subject Area General, Cognitive and Mathematical Psychology
Term from 2018 to 2023
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 405949458
 
In everyday scenes, searched-for targets do not appear in isolation but are embedded within configurations of non-target, or distractor, items. If the position of the target relative to the distractors is invariant, such spatial contingencies are learned and come to guide environmental scanning (the “contextual cueing” effect; Chun & Jiang, 1998; Geyer, Shi, & Müller, 2010). Importantly, such context learning is not limited to the visual modality: using a novel tactile search task, we recently showed that repeated tactile contexts aid tactile search (Assumpção, Shi, Zang, Müller, & Geyer, 2015, 2017). However, the guidance of search by context memory has almost exclusively been investigated within the individual modalities of vision and touch, so relatively little is known about crossmodal plasticity in spatial context learning. The present proposal will employ behavioural and neuroscience methods, together with mathematical modelling, to investigate context learning in unimodal search tasks (consisting of only visual or only tactile search items) and in multimodal search tasks (consisting of both visual and tactile search items). In doing so, it should reveal the specific learning mechanisms (representations) commonly referred to as crossmodal plasticity. The three leading research questions concern (1) the factors that may give rise to crossmodal context memory (for example, whether the development of crossmodal spatial context memory requires practice on a multimodal search task); (2) how context memory acquired in uni- or multimodal search tasks is reflected in ERP waveforms (and whether the electrophysiological components differ between these tasks); and (3) how the beneficial effects of context memory on reaction-time performance in uni- and multimodal search can be quantified using stochastic Bayesian models.
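
The stochastic Bayesian modelling named in question (3) is left at a high level in the abstract. Purely for illustration, the minimal Python sketch below shows one generic way a contextual-cueing benefit on reaction times could be quantified in a Bayesian manner, via a conjugate normal-normal update on the mean RT difference between novel and repeated displays. All data are simulated, the pairing of trials and all parameter values (prior width, RT means) are hypothetical, and this is not the modelling approach of the project itself.

```python
import math

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated reaction times (ms): repeated (old) contexts are
# assumed to be searched faster than novel ones (contextual-cueing benefit).
rt_repeated = rng.normal(loc=880.0, scale=120.0, size=120)
rt_novel = rng.normal(loc=950.0, scale=120.0, size=120)

# Per-trial RT difference (novel minus repeated); positive values indicate
# a cueing benefit. The pairing here is arbitrary and purely illustrative.
diff = rt_novel - rt_repeated

# Conjugate normal-normal update on the mean benefit, treating the
# observation variance as known (plugged in from the sample for simplicity).
prior_mean, prior_var = 0.0, 200.0 ** 2   # weakly informative prior (ms)
obs_var = diff.var(ddof=1)
n = diff.size

post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + diff.sum() / obs_var)
post_sd = math.sqrt(post_var)

# Posterior probability that the cueing benefit is greater than zero.
p_benefit = 0.5 * (1.0 + math.erf(post_mean / (post_sd * math.sqrt(2.0))))

print(f"posterior mean benefit: {post_mean:.1f} ms (sd {post_sd:.1f})")
print(f"P(benefit > 0) = {p_benefit:.3f}")
```

In a full analysis one would typically place hierarchical priors over participants and conditions and compare models with and without a context-memory effect; the sketch above only conveys the basic idea of expressing the RT benefit as a posterior distribution rather than a point estimate.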
DFG Programme Research Grants
 
 
