Moving Image - Moving Eyes: Active Vision in the Real World
The role of eye movements in real world human navigation
Szonya Durant and Johannes Zanker
Recovering our heading direction from visual information requires interpreting optic flow, the pattern of motion caused by our movement through the world. This pattern is affected by head stability and by the direction of eye gaze. We investigated how eye movements interact with head movements whilst walking forward. An observer navigated through a variety of environments around the university campus using a head-mounted device that simultaneously recorded the scene ahead and tracked eye movements, allowing us to determine the gaze direction in each frame. This yielded an image sequence as recorded by the camera; by realigning the images so that the eye fixation location remained at the same point, we could mimic the input received by the retina. We found that eye movements were usually directed towards the heading direction when the observer was not scanning the scene. Local motion direction and magnitude were calculated for the two types of image sequences to analyze the optic flow patterns. In some scenes eye movements appeared to compensate, to some extent, for head movement, challenging the general view that eye movements complicate optic flow retrieval. Our results suggest that compensatory eye movements might play an important role in the calculation of heading direction.
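The realignment and local-motion analysis described above can be sketched in a few lines: each camera frame is shifted so that the tracked gaze point lands at a fixed image location (approximating the retinal input), and local motion is then estimated between consecutive frames by simple block matching. This is a minimal illustration under stated assumptions, not the authors' actual pipeline; all function names, the integer-pixel shifting, and the SAD block-matching criterion are choices made here for brevity.

```python
import numpy as np

def stabilize_to_gaze(frames, gaze, centre):
    """Shift each frame so the tracked gaze point (gx, gy) lands at a
    fixed location 'centre', approximating the retinal input.
    Hypothetical helper: integer-pixel shifts only, edges wrap."""
    out = []
    for frame, (gx, gy) in zip(frames, gaze):
        dy, dx = centre[1] - gy, centre[0] - gx
        out.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return out

def local_motion(prev, curr, block=8, search=3):
    """Coarse block-matching flow: for each block of 'prev', find the
    integer displacement (within +/- search px) into 'curr' that
    minimises the sum of absolute differences (SAD).
    Returns an array of (dy, dx) motion vectors, one per block."""
    H, W = prev.shape
    flow = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = prev[y0:y0 + block, x0:x0 + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue
                    cand = curr[y:y + block, x:x + block]
                    sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
                    if sad < best:
                        best, best_d = sad, (dy, dx)
            flow[by, bx] = best_d
    return flow
```

Applying `local_motion` to the raw camera sequence versus the gaze-stabilized sequence gives the two flow fields whose comparison the abstract describes.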
Eye guidance in natural vision
The human behavioural repertoire is intricately linked to the gaze control system: many behaviours require visual information at some point in their planning or execution. Moreover, the spatial and temporal restrictions imposed by foveal vision and saccadic eye movements mean that high acuity vision needs to be allocated appropriately in both space and time. How we allocate vision when viewing complex static scenes has been researched extensively and there exist effective computational models of fixation selection for such circumstances. However, it is not clear whether understanding from static scene-viewing paradigms generalizes to more natural behavioural settings. General principles that appear to underlie targeting decisions during natural behaviour are evident across a range of behaviours. These principles identify the components of eye movement behaviour that any models of fixation selection in natural behaviour must be able to explain. Reward maximization provides a powerful potential framework for explaining eye movement behaviour, but formal models of this are in their infancy.
Eye movements in reading as the expression of distributed spatial coding in oculomotor-centre maps
Eye movements in natural perceptual tasks are classically considered to reflect ongoing cognitive processes as well as pre-established visuo-motor scanning routines aimed at optimizing visual-information intake and/or motor action. Here, I will argue against this assumption for the particular case of reading, providing empirical evidence for the alternative view that eye behaviour in reading is in large part the expression of distributed spatial coding in oculomotor-centre maps (i.e. the Superior Colliculus). First, I will show that the general tendency for the eyes to land near the centre of long words, as well as the variability around this preferred landing position, comes from the more basic tendency to land at the centre of gravity of the visual configuration in the periphery, also referred to as the global effect [Findlay, 1982, Vision Research, 22, 1033-1045]. Second, I will present recent data from our group showing that the deformation of visual space in oculomotor-centre maps constrains both the metrical properties of saccades in simple saccade-target tasks and eye movements in reading.
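The centre-of-gravity ("global effect") prediction mentioned above amounts to an intensity-weighted centroid of the peripheral configuration. The sketch below is an illustrative computation only, not the author's model; the saliency-map representation and the function name are assumptions.

```python
import numpy as np

def centre_of_gravity(saliency):
    """Intensity-weighted centroid (row, col) of a 2-D saliency map.
    A minimal sketch of the global-effect prediction that saccades
    land near the centre of gravity of the peripheral configuration;
    real oculomotor-map models are considerably more complex."""
    ys, xs = np.indices(saliency.shape)
    total = saliency.sum()
    return (float((ys * saliency).sum() / total),
            float((xs * saliency).sum() / total))
```

For example, two equally salient targets produce a predicted landing position midway between them, consistent with the classic global-effect finding.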
Eye movements in interception
Eli Brenner and Jeroen B. J. Smeets
People generally try to keep their eyes on a moving target that they intend to catch or hit. I will discuss several reasons why they may want to do so. We studied this issue by designing interception tasks that promote different eye movements. When the task was to hit a moving target, we found that people's hits were less precise if they did not pursue the target. If the task was to hit the target at a certain position, they were better at getting the position right if they did not pursue the target. Comparing these two tasks, after matching them in their overall perceptual requirements, showed that pursuing the target has an additional benefit. We ascribe this additional benefit to information about the pursuit eye movements themselves. Thus, improving the resolution of visual information gathered during the movement, in order to continuously refine predictions about critical aspects of the task (such as where the target will be at some time in the future), may not be the only reason for keeping one's eyes on the target. I will discuss some other possible benefits.
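The simplest form of the prediction referred to above (anticipating where the target will be at some future time) is constant-velocity extrapolation. The sketch below is a hypothetical illustration of that idea, not the authors' model of interception.

```python
def extrapolate_position(p, v, dt):
    """Constant-velocity prediction of a target's 2-D position after
    dt seconds, given current position p and velocity v (both (x, y)
    tuples). Illustrative only: real interception likely involves
    richer, continuously updated predictions."""
    return (p[0] + v[0] * dt, p[1] + v[1] * dt)
```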
Learning to use the lightfield for shape and lightness perception
Julie M. Harris, P. George Lovell, Glen Harding and Marina Bloj
To infer shape and lightness from illumination gradients, the visual system must understand the relationship between the illumination and the environment in which the object is located (dubbed "the lightfield"). Here we explored the importance of actively learning the lightfield. Realistically rendered scenes depicted objects with complex illumination gradients. We explored two learning paradigms: in one, the object moved through a number of shape configurations before shape perception was tested; in the other, observers actively moved objects within a lightfield before making lightness judgments. Our results suggested that observers are able to use illumination gradients to make consistent shape judgments, provided they are given a short learning period in which they experience the object moving through all possible shape configurations. In the lightness study, we found that lightness constancy could best be achieved when observers experienced the lightfield during a systematic learning period. In sum, our work suggests that active learning of the environment is important in the interpretation of lightness and shape via gradient cues.