Perisaccadic broadening of receptive fields predicts compression of space and time
Guido Marco Cicchini and Maria Concetta Morrone
Saccades cause profound transient changes to vision, both to the spatial properties of receptive fields in the parietal cortex of the macaque monkey and to human perception. In particular, the apparent separation of stimuli is heavily compressed, in both space and time. Here we modelled saccadic compression and its dynamics by assuming that neuronal receptive fields undergo an enlargement of their spatial and temporal impulse responses. The front stage of the model is a battery of parallel linear filters whose impulse responses broaden on command from a corollary-discharge signal. The crucial stage is a decision/classification of stimulus separation, which measures the spatio-temporal overlap of the neuronal activity (redundancy). In fixation, the model performs as an optimal detector of separation, reproducing Weber's law well. Assuming a perisaccadic receptive-field broadening by a factor of 3, we can successfully model both spatial and temporal compression, simulating well the dynamics for space and time. Remarkably, the model also predicts the preservation of Weber's law for perisaccadic stimuli [Morrone et al, 2005, Nature Neuroscience, 8(7):950-954]. Overall, this model quantitatively simulates a large battery of psychophysical data (which have proven very resistant to modelling attempts), as well as many physiological findings.
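The compression mechanism described above can be sketched numerically. In the hypothetical sketch below (the Gaussian response profiles, the 2° fixation width and the physical separation are all illustrative choices, not parameters of the authors' model), a decoder inverts the fixation overlap-to-separation mapping while the true perisaccadic responses are three times broader, so the perceived separation is compressed by the same factor:

```python
import numpy as np

def response(x, center, sigma):
    """Gaussian population response to a stimulus at `center` (deg)."""
    return np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def overlap(sep, sigma):
    """Normalised spatial overlap (redundancy) of the responses to two
    stimuli separated by `sep` degrees."""
    x = np.linspace(-40.0, 40.0, 4001)
    r1, r2 = response(x, -sep / 2, sigma), response(x, +sep / 2, sigma)
    return np.sum(r1 * r2) / np.sqrt(np.sum(r1 ** 2) * np.sum(r2 ** 2))

sigma_fix, broadening = 2.0, 3.0   # illustrative fixation width, factor-3 broadening
true_sep = 9.0                     # physical separation, degrees

# neuronal activity produced perisaccadically, with 3x broader receptive fields
o = overlap(true_sep, sigma_fix * broadening)

# the decision stage inverts the *fixation* overlap->separation curve
seps = np.linspace(0.0, 20.0, 2001)
curve = np.array([overlap(s, sigma_fix) for s in seps])
perceived = seps[np.argmin(np.abs(curve - o))]
# apparent separation is compressed by the broadening factor
```

For Gaussian profiles the normalised overlap is exp(−sep²/4σ²), so inverting the fixation curve returns exactly sep/3, consistent with the factor-3 compression the abstract invokes.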
Acoustic modulation of perisaccadic visual detection
Michela Panichi, Francesco Guidotti and Stefano Baldassi
Around the time of saccades, space and time are misperceived. Using psychophysical reverse correlation, we investigated the perceptual mechanisms subserving visual detection when a brief sound was played at different times relative to saccadic onset and the display of the visual target. In a 2AFC task, observers detected the presence of a near-threshold white bar embedded in a 15°x1.5° strip of white dynamic noise, briefly flashed either during fixation or immediately after the onset of a 15° horizontal saccade. Each trial was either silent (unimodal) or accompanied by a 21 ms sound (bimodal), presented either simultaneously with the visual target or 43 or 105 ms after it. We computed classification images (CIs) for each condition and found that they depended both on eye position and on the temporal separation between the visual and acoustic stimuli. The sound increased the amplitude of the CI during fixation only when presented simultaneously, whereas during saccades it did so only when delayed. The results suggest that cross-modal interaction facilitates perisaccadic detection by providing temporal references to the visual system, thus reducing uncertainty; they also reinforce studies (Binda et al, 2009, J. Neuroscience; Panichi et al, 2012, JoV) suggesting that visual processing is delayed during saccades.
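The classification-image analysis follows a standard reverse-correlation recipe: average the noise fields separately by stimulus class and response, then difference them. A minimal sketch with a simulated linear observer (the one-dimensional stimulus, bar position, signal contrast and decision criterion below are all illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_px = 20000, 64
template = np.zeros(n_px)
template[28:36] = 1.0                        # illustrative bar target

noise = rng.normal(0.0, 1.0, (n_trials, n_px))
present = rng.random(n_trials) < 0.5         # signal on half the trials
stimuli = noise + 0.3 * present[:, None] * template

# simulated linear observer: respond "yes" when the template match
# (plus internal noise) exceeds a fixed criterion
yes = stimuli @ template + rng.normal(0.0, 2.0, n_trials) > 1.2

# classification image: combine the four stimulus-response classes
ci = (noise[present & yes].mean(0) + noise[~present & yes].mean(0)
      - noise[present & ~yes].mean(0) - noise[~present & ~yes].mean(0))
# ci recovers the observer's template: large at the bar pixels, near 0 elsewhere
```

The amplitude of such a CI is what the abstract reports being modulated by the sound's timing relative to the target and the saccade.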
The influence of perceptual grouping by proximity and good continuation on saccadic eye movements
Tandra Ghose, Frouke Hermens and Johan Wagemans
Previous research [Ghose, Hermens & Wagemans, VSS, 2012] suggested that saccade latencies can be used as an indirect measure of the strength of perceptual grouping. In those experiments, circles formed by a set of dots embedded in a background of randomly placed dots reduced the time to initiate a saccade to a target when the circle appeared in a location congruent with the target. While these findings suggest a role for perceptual grouping in saccade latency, the stimuli did not distinguish between the effects of grouping by proximity and by good continuation. Here, we disentangle the effects of the two grouping factors. Fields of oriented Gabor elements were rendered using the GERT toolbox [Demeyer and Machilsen, 2011, BRM]. The circles were defined by proximity and good continuation together, by proximity only (circle elements with random orientations but spaced more closely than the background), or by good continuation only (properly oriented circle elements with the same spacing as the background). We found that the combination of grouping by proximity and good continuation produced significant differences in saccade latencies between congruent and incongruent trials, but neither proximity nor good continuation alone showed any significant effect.
Scene context and object information interact during the first epoch of scene inspection
Sara Spotorno, George Malcolm and Benjamin Tatler
Are the target template and scene context used simultaneously or sequentially during the first epoch of visual search? We independently manipulated the specificity of the template (a picture or the name of the target) and the plausibility of the target's position in real-world scenes. The availability of a specific visual template facilitated search initiation mainly when the target was in an unexpected location, while a plausible target position facilitated search initiation mainly when the template was verbal. Especially with a verbal template, participants were more likely to launch the first saccade toward the expected target location when it was occupied by a distractor object than when it was empty. Perceptual salience, evaluated by independent judges, also influenced first-saccade direction: the probability of saccading toward the target was greater when it was of higher salience than the distractor. Our findings show that both target-template information and contextual guidance are used to guide eye movements from the beginning of scene inspection. Moreover, they indicate that the visual and semantic properties of objects are used as sources of local information during the first epoch of visual search.
Spatiotopic maps take time to construct
Eckart Zimmermann, David Burr and Maria Concetta Morrone
Many imaging and psychophysical studies suggest that spatiotopic neural maps exist in the human brain. We investigated the temporal build-up of spatiotopic representations using two techniques. In both cases, while subjects fixated a point, a target appeared, to which they saccaded on cue after a variable exposure duration (0, 500 or 1000 ms). We first measured the tilt aftereffect: subjects adapted to a tilted grating in one part of the screen, then after the saccade judged the orientation of a grating presented in either the same retinotopic or the same spatiotopic position as the adapter. For short durations of saccadic-target display, the adaptation was primarily retinotopic; but for longer durations (allowing more time to encode position), spatiotopic adaptation increased and retinotopic adaptation decreased, saturating at about 1 s. The second experiment was a variant of "saccadic suppression of displacement", in which a target is displaced during the saccade. Threshold performance improved considerably with the exposure duration of the saccade target. Both experiments suggest that encoding in spatiotopic coordinates builds up over time, over about one second. The data also account for many of the apparent inconsistencies in the literature about the existence of spatiotopic encoding.
Optimal integration of afferent and efferent signals in spatial localization
Martina Poletti, David Burr and Michele Rucci
As we explore a scene, the visual system can rely on multiple signals to keep track of where objects are in space: the retinal input, a corollary discharge, and extraocular proprioception. Statistically, the optimal strategy for combining multiple estimates is a linear weighted average based on their reliability. Here, we used a novel gaze-contingent display procedure to investigate whether spatial localization in humans conforms to the rules of such an ideal observer. In complete darkness, participants searched for two hidden targets that were briefly displayed at the current gaze position, each after a predetermined number of saccades. Upon finding the second target, subjects reported the remembered location of the first. As predicted by an ideal-observer model, the localization error increased linearly over 1-3 saccades and then began to saturate. Extraocular proprioception contributed 20% of the estimate after just one saccade, increasing to 40% after 3 saccades. Our results show that afferent and efferent signals are optimally combined in the representation of space, thus reconciling a large body of conflicting results in the literature.
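The ideal-observer rule invoked here is standard inverse-variance weighting of independent Gaussian estimates. A minimal sketch (the position values and variances are made up for illustration; a proprioceptive variance four times that of the corollary-discharge estimate yields the ~20% proprioceptive weight reported after one saccade):

```python
import numpy as np

def combine(estimates, variances):
    """Reliability-weighted (maximum-likelihood) fusion of independent
    Gaussian estimates of the same location."""
    variances = np.asarray(variances, dtype=float)
    w = (1.0 / variances) / np.sum(1.0 / variances)   # weights sum to 1
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / variances)         # never worse than the best cue
    return fused, fused_var, w

# illustrative numbers: after one saccade the corollary-discharge (CD)
# estimate is 4x more reliable than extraocular proprioception
pos, var, w = combine(estimates=[10.2, 9.5], variances=[1.0, 4.0])
# w[1] == 0.2: proprioception carries 20% of the weight
```

As the CD estimate accumulates noise over successive saccades, its variance grows and the proprioceptive weight rises, matching the 20% to 40% shift the abstract reports.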