Summary
Experiments have shown that attention can influence neuronal representations. When human subjects are warned that an object will appear at a certain location in the visual field, the firing rate of neurons in visual cortex that correspond to that location increases slightly. Such modulations of the neuronal representations of visual percepts are believed to lead to faster reaction times and lower search effort in complex visual scenes. This space-based form of attention is relatively well understood.
A second form is feature-based attention: when subjects are primed for a feature (such as colour), neuronal representations coding for that feature are affected across the entire visual field. This form of attention and the neuronal mechanisms behind it are less well understood. Yet another form is object-based attention, in which neurons that contribute to specific visual shapes are primed, regardless of where those shapes appear in the visual field. In a system for translation-invariant object recognition, it is hard to understand how this form of attention could be implemented. In earlier work, we have modelled each of these forms of attention in isolation. Using the simulator we built for that work, you will create an integrated model that combines all of these neuronal mechanisms in a single network. You will apply the resulting model to publicly available visual search data and to 'pop-out' phenomena. There is also the possibility of contributing to psychological experiments in this area.
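The kinds of modulation described above are often modelled as a multiplicative gain on neuronal firing rates. The sketch below is an illustrative toy model only (not the simulator mentioned in the project); the array sizes, gain value, and function names are all hypothetical. It shows how space-based attention boosts all neurons coding one location, while feature-based attention boosts neurons tuned to one feature at every location:

```python
import numpy as np

# Toy population: neurons indexed by (location, preferred feature).
# All values here are hypothetical and chosen for illustration.
rng = np.random.default_rng(0)
n_locations, n_features = 5, 3
baseline = rng.uniform(0.5, 1.0, size=(n_locations, n_features))  # baseline firing rates

def spatial_attention(rates, attended_location, gain=1.2):
    """Space-based attention: boost all neurons coding the attended location."""
    out = rates.copy()
    out[attended_location, :] *= gain
    return out

def feature_attention(rates, attended_feature, gain=1.2):
    """Feature-based attention: boost neurons tuned to the attended feature
    at every location in the visual field."""
    out = rates.copy()
    out[:, attended_feature] *= gain
    return out

# Both modulations can act together on the same population.
modulated = feature_attention(
    spatial_attention(baseline, attended_location=2),
    attended_feature=0,
)
```

In this picture, a neuron at the attended location that also prefers the attended feature receives both gains, which is one simple way the separate mechanisms could combine in an integrated model.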
