
Cross-modal contextual memory guides selective attention in visual-search tasks

Abstract

Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items ("contextual cueing"). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized event-related potential (ERP) waveforms with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel, cross-modal search task: search for a visual feature singleton, with repeated (and nonrepeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus nonrepeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (crossmodal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of ERP components indexing attentional allocation (PCN) and postselective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and crossmodal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and crossmodal context cues benefit the same visual processing stages related to the selection and subsequent analysis of the search target.

The article is published at Psychophysiology: https://onlinelibrary.wiley.com/doi/10.1111/psyp.14025

Citation:

Chen, S., Shi, Z., Zinchenko, A., Müller, H. J., & Geyer, T. (2022). Cross-modal contextual memory guides selective attention in visual-search tasks. Psychophysiology, n/a(n/a), e14025. https://doi.org/10.1111/psyp.14025