Encoding and decoding

Tracking Eye Movements While Encoding Faces

Chantal Lemieux, Elizabeth Nelson and Charles Collin

While many studies have shown that a middle band of spatial frequencies (SF) is most useful for face recognition, others have pointed out that the most informative SF ranges vary depending on location on the face. In this study, we examined similar issues by measuring 32 subjects' eye movements during the encoding phase of an old/new face recognition task. Stimuli were 16 faces filtered to preserve each of 11 SF bands across the spectrum, plus an unfiltered baseline condition. Twelve areas of interest (AOIs) were defined for each face, and total fixation time was analyzed across AOI and SF. Results showed that low SFs elicited more fixations on medial AOIs such as the nose, forehead, and chin. This may indicate a tendency towards holistic processing, whereby fixation on these features represents an attempt to take in the entire face. In contrast, high SFs elicited more fixations on inner features, such as the eyes and mouth, suggesting greater featural processing. Our results are compatible with previous work suggesting that low and high SFs support holistic and featural processing, respectively. The advantage of middle SFs may arise because they allow for sufficient analysis of both the holistic and featural aspects of face processing.
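[Editor's note] As an illustration of the stimulus manipulation described above, the sketch below band-pass filters an image in the Fourier domain to preserve a single SF band. It is not the authors' stimulus code; the cutoff values and pixels-per-degree scale are hypothetical.

```python
# Illustrative sketch, not the authors' code: band-pass filtering an image
# in the Fourier domain to preserve one spatial-frequency band.
# Cutoffs and the pixels-per-degree scale are hypothetical.
import numpy as np

def bandpass(image, low_cpd, high_cpd, px_per_deg):
    """Keep only spatial frequencies between low_cpd and high_cpd (cycles/deg)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * px_per_deg        # cycles/deg along y
    fx = np.fft.fftfreq(w) * px_per_deg        # cycles/deg along x
    radius = np.sqrt(fy[:, None]**2 + fx[None, :]**2)
    keep = (radius >= low_cpd) & (radius <= high_cpd)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * keep))

face = np.random.rand(256, 256)                # stand-in for a face image
band = bandpass(face, low_cpd=2.0, high_cpd=4.0, px_per_deg=32)
```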

The effects of background noise on microsaccade generation in humans

Halim Hicheur, Steeve Zozor, Aurélie Campagne and Alan Chauvin

Microsaccades are miniature saccades occurring once or twice per second during visual fixation. While microsaccades and saccades share similarities at the oculomotor level (the same kinematic properties, e.g. duration, and a specific relationship between peak velocity and amplitude), the functional roles of microsaccades are still debated. In this study, we specifically examined the possibility that microsaccadic activity is affected by the type of background across which fixation is maintained. Using a forced-choice task paradigm adapted from Rucci and colleagues [2007 Nature 447(14), 851-855], we found that subjects' performance in discriminating the tilt of high-spatial-frequency stimuli (textured ellipses) was, on average, significantly better in the dynamic condition than in the static condition (though not for every one of the 30 tested subjects). This was associated with a systematic and significant effect of the background noise on the microsaccade rate: microsaccades occurred more frequently (by up to 25%) in the static condition. These experimental findings led us to incorporate the signal-to-noise ratio (between the stimulus and the visual environment) as a critical parameter in a simple model of microsaccade generation. Taken together, our experimental findings and preliminary computational predictions provide new insights into the potential roles of microsaccades in visual perception.
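[Editor's note] As a purely illustrative reading of the modelling claim, the toy sketch below drives a Poisson microsaccade generator whose rate rises as the stimulus-to-background signal-to-noise ratio (SNR) falls. The functional form, parameter values, and SNR figures are our assumptions, not the authors' model.

```python
# Toy sketch, NOT the authors' model: a Poisson microsaccade generator
# whose rate rises as the stimulus-to-background SNR drops, qualitatively
# reproducing the higher rate observed on static backgrounds.
import numpy as np

rng = np.random.default_rng(0)

def microsaccade_onsets(snr, base_rate=1.0, gain=0.6, duration=10.0):
    """Simulated microsaccade onset times (s) over `duration` seconds.
    The rate dependence on SNR is hypothetical; the ~1-2/s baseline
    matches the fixation rates cited in the abstract."""
    rate = base_rate + gain / (1.0 + snr)   # Hz; assumed functional form
    n = rng.poisson(rate * duration)
    return np.sort(rng.uniform(0.0, duration, n))

static = microsaccade_onsets(snr=0.5)    # low SNR: more microsaccades
dynamic = microsaccade_onsets(snr=2.0)   # higher SNR: fewer microsaccades
print(len(static), len(dynamic))
```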

Representation of component motion in V1 predicts perceptual thresholds for pattern motion

Stephanie Lehmann, Alexander Pastukhov and Jochen Braun

We study the perceptual representation of pattern motion and model its neural representation. Lacking quantitative characterizations of neuronal responsiveness in area MT, we derive theoretical predictions from the responsiveness of motion-selective neurons in area V1, which are comparatively well characterized. Given a quantitative model of responsiveness (tuning and variability) to component motion, we have previously predicted the responsiveness (tuning and variability) to pattern motion, assuming statistically efficient integration of Fisher information (ECVP 2011). These theoretical results predict that sensitivity to the speed and direction of pattern motion should vary with the constituent component motions. We now report psychophysical threshold measurements that quantitatively confirm these predictions. Specifically, five observers viewed composite arrays of two types of component motion wavelets. Thresholds for the direction of pattern motion decrease, whereas thresholds for the speed of pattern motion increase, with the angle between component wavelets. In conclusion, we can predict neural responsiveness at the level of pattern motion (presumably in area MT) from psychophysical measurements and from neural responsiveness at the level of component motion (area V1).
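[Editor's note] The direction of these threshold predictions can be recovered from a textbook Fisher-information argument. In the minimal sketch below (our simplification, not the authors' full model), each component wavelet measures the projection of the 2-D pattern velocity onto its own motion axis with independent noise, so information adds across components and thresholds scale as the inverse square root of the information along each axis.

```python
# Minimal sketch (our simplification, not the authors' model): components at
# +/- angle/2 about the pattern direction each constrain the velocity along
# their own axis; Fisher information adds, J = sum_i u_i u_i^T / sigma^2.
import numpy as np

def pattern_thresholds(angle_deg, sigma=1.0):
    """Return (speed threshold, direction threshold), arbitrary units."""
    a = np.deg2rad(angle_deg) / 2.0
    u1 = np.array([np.cos(a),  np.sin(a)])
    u2 = np.array([np.cos(a), -np.sin(a)])
    J = (np.outer(u1, u1) + np.outer(u2, u2)) / sigma**2
    speed_thr = 1.0 / np.sqrt(J[0, 0])    # along the pattern-motion axis
    dir_thr = 1.0 / np.sqrt(J[1, 1])      # orthogonal (direction) axis
    return speed_thr, dir_thr

for angle in (30, 90, 150):
    print(angle, pattern_thresholds(angle))
# Speed thresholds rise, and direction thresholds fall, as the angle grows,
# matching the sign of the psychophysical effect reported above.
```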

The Role of Neural Noise in Perceiving Looming Objects and Eluding Predators

Matthias Keil

For many organisms, escaping from predators and avoiding collisions is of paramount importance for survival. Looming-sensitive neurons reveal similar properties across species. The ETA function describes one class of such neurons, and asserts a multiplicative interaction and an exponential nonlinearity [Gabbiani et al, 2002, Nature 420, 320-324]. Recently, I showed that a new function ('PSI') could implement ETA in a biophysically plausible fashion without multiplication [MS Keil, NIPS 2012]. Instead of ETA's exponential nonlinearity, PSI incorporates a power law, which also agrees better with the neurophysiological properties of the locust lobula giant movement detector (LGMD). This neuron receives a large number of inputs, and noise levels increase from the photoreceptors to the LGMD [Jones and Gabbiani, 2012, J. Neurophysiol. 107, 1067-1079]. Provided that some of these input channels have threshold properties, and assuming independent noise in each channel, a power law emerges from pooling. As PSI represents a biophysical implementation of ETA, the groundbreaking idea is that no nonlinearities besides thresholding are necessary to explain the properties of collision-sensitive neurons. I will thus present corresponding simulation results with a modified PSI model and discuss the implications.
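[Editor's note] The pooling argument can be demonstrated in a few lines. The simulation below (our sketch of the argument, not the PSI model itself) averages many channels that threshold-rectify a shared input corrupted by independent Gaussian noise; near threshold, the mean pooled response is well fit by an expansive power law of the input.

```python
# Illustrative simulation of the pooling argument (not the PSI model):
# many channels threshold-rectify a shared input plus independent Gaussian
# noise; the mean pooled response near threshold follows an expansive
# power law of the input drive.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 200_000
theta, noise_sd = 1.0, 0.2            # hypothetical threshold and noise level
drive = np.linspace(0.8, 1.4, 25)     # inputs straddling the threshold

noise = rng.normal(0.0, noise_sd, n_channels)
pooled = np.array([np.maximum(0.0, s + noise - theta).mean() for s in drive])

# Fit log(pooled) = alpha*log(drive) + c; alpha >> 1 signals expansion
alpha, c = np.polyfit(np.log(drive), np.log(pooled), 1)
print(f"fitted power-law exponent: {alpha:.1f}")   # roughly 4-6 here
```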

Encoding Space in Time during Active Fixation

Xutao Kuang, Martina Poletti, Jonathan D. Victor and Michele Rucci

It has long been known that humans and other species continually perform microscopic eye movements, even when attending to a single point. However, the impact of these movements on retinal stimulation and on the neural encoding of visual information remains unclear. Here, we examine the spatiotemporal stimulus on the retina of human observers while they freely view pictures of natural scenes. We show that the spectral density of the retinal image during normal intersaccadic fixation differs sharply from that of the external scene: whereas low spatial frequencies predominate in natural images, fixational eye movements equalize the power of the retinal image over a wide range of spatial frequencies. This reformatting of the visual input prior to any neural processing attenuates sensitivity to redundant information and enhances responses to luminance discontinuities, outcomes long advocated as fundamental goals of early visual processing. Our results link microscopic eye movements to the characteristics of natural environments and indicate that neural representations are intrinsically sensory-motor from the very first processing stages. (Supported by NIH EY18363, EY07977 and EY09314, and NSF BCS-1127216)
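[Editor's note] The whitening effect has a compact back-of-envelope reading, sketched below under our simplifying assumptions (a pure 1/f amplitude spectrum and small-amplitude jitter); it is not the authors' analysis of the actual retinal input.

```python
# Back-of-envelope sketch (our simplification of the whitening argument):
# natural-image amplitude falls roughly as 1/k, so spatial power goes as
# 1/k^2. Small jitter delta(t) modulates each spatial frequency k by about
# A(k) * k * delta, so the temporal-fluctuation power gain goes as k^2,
# and (1/k^2) * k^2 = constant: equal power across spatial frequencies.
import numpy as np

k = np.logspace(0, 2, 50)              # spatial frequency (arbitrary units)
scene_power = 1.0 / k**2               # ~1/f^2 spectrum of natural scenes
jitter_gain = k**2                     # small-jitter temporal modulation gain
retinal_dynamic_power = scene_power * jitter_gain
print(np.allclose(retinal_dynamic_power, 1.0))   # flat spectrum: True
```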

Smothered by the scene: When context interferes with memory for objects

Karla Evans and Jeremy Wolfe

How tightly are object representations in our memory bound to scene context? Arbitrary objects, presented in isolation, are remembered very well when tested in a subsequent old/new task (Brady et al., 2008, PNAS, 105, 14325-14329; d'=2.27 in our data). When we asked observers to memorize the same objects embedded in a scene, but clearly placed in an outlined box, performance dropped (d'=1.40). When observers memorized objects in one scene context and had to make an old/new judgment about those objects and foils presented in new contexts, performance dropped still further (d'=0.62) even though the scene contexts were completely irrelevant to the task. The negative effect of scene context was more pronounced if the scene was present during the encoding phase (d'=0.53) than during the retrieval phase (d'=1.44). In other circumstances, context information has been shown to facilitate retrieval of object details (Hollingworth, 2006, J Exp Psychol Learn, 32, 58-69; Hollingworth, 2007, J Exp Psychol Human, 33, 31-47). Here, however, the findings suggest that scene information is involuntarily encoded with object information in a manner that disrupts memory for the object alone, in context, or, especially, in a novel context.
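[Editor's note] For readers unfamiliar with the sensitivity measure used throughout, d' for an old/new task is the standard signal-detection quantity z(hit rate) − z(false-alarm rate). The snippet below computes it; the hit and false-alarm rates are hypothetical values chosen only to land near the reported d' levels, not the authors' data.

```python
# Standard signal-detection sensitivity for an old/new recognition task:
# d' = z(hit rate) - z(false-alarm rate). The rates below are hypothetical,
# chosen only to illustrate the scale of the reported effects.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Sensitivity index from hit and false-alarm proportions."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.89, 0.11))  # ~2.45, near the isolated-object condition
print(d_prime(0.62, 0.38))  # ~0.61, near the novel-context condition
```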