
Statistical context learning in tactile search

When searching for an object, our brains learn to associate its location with the surrounding objects, a phenomenon known as contextual cueing: a repeated, invariant contextual configuration helps us find the target faster. Contextual cueing has been demonstrated in visual search as well as in tactile search (e.g., in our previous study, Assumpção et al., 2015). However, whether visual context can also aid tactile search remained unclear. Chen et al. (2023) tested this with visuo-tactile multisensory learning and found no redundancy gains from crossmodally redundant, visuo-tactile contexts. This suggests that the task-critical modality determines the reference frame for contextual learning (i.e., somatotopic coordinates for tactile search).

References

  • Assumpção, L., Shi, Z., Zang, X., Müller, H. J., & Geyer, T. (2015). Contextual cueing: implicit memory of tactile context facilitates tactile search. Attention, Perception, & Psychophysics, 77(4), 1212–1222. https://doi.org/10.3758/s13414-015-0848-y
  • Chen, S., Shi, Z., Vural, G., Müller, H. J., & Geyer, T. (2023). Statistical context learning in tactile search: Crossmodally redundant, visuo-tactile contexts fail to enhance contextual cueing. Frontiers in Cognition, 2. https://doi.org/10.3389/fcogn.2023.1124286