Eye movements

Probability effects in antisaccade performance

Omar Johannesson, H. Magnus Haraldsson and Arni Kristjansson

Probability manipulations between the left and right visual fields have been reported to influence antisaccade performance of healthy observers, such that latencies of antisaccades to low-probability landing points do not show the widely reported latency cost for antisaccades compared to prosaccades (Liu et al., 2010, Journal of Neurophysiology, 103(3), 1438-1447). On this account, antisaccade costs are modulated by probabilistic contextual information, such as information about target location. This probability effect was observed in a paradigm where the necessity for attentional disengagement from a saccade target was eliminated. We investigated such probability manipulations for horizontal saccades in a number of different antisaccade paradigms, with and without gaps between fixation point and target and with 2 to 4 different target positions. No effects of the probability manipulation were found upon latency costs for antisaccades in our paradigms. The latencies observed by Liu et al. were considerably longer than those typically seen for antisaccades, which raises the possibility that decision times for saccades were very long because of task difficulty. The disappearance of the antisaccade cost may therefore only apply to very difficult saccade tasks involving a challenging decision stage. Experiments are currently under way in our laboratory to explore this more thoroughly.

Distribution of gaze points during natural viewing under a wide field-of-view condition

Kazuteru Komine and Nobuyuki Hiruma

Eye tracking data during free viewing of natural scenes on a wide field-of-view (WFOV: 80 degrees) screen were collected using our newly developed gaze-tracking system, and the distribution of the data was compared with that acquired under a relatively narrow FOV (NFOV: 40 degrees) condition. The eye tracking system achieved an 80-degree horizontal FOV with an estimation error of around two degrees over almost the entire measurable area, without adding constraints to the subjects' viewing. We widened its measurable area by utilizing five eye-sensing cameras. The mean values of the entropy within each scene (5-30 sec) were derived from an analysis applying a Gaussian mixture model to the collected data frame by frame, and these were compared for the two conditions. The number of salient areas in each frame was also extracted with a saliency model and blob analysis. We found that the mean entropy under the WFOV condition was considerably higher than that under NFOV for scenes with a moderate number (5-15) of salient regions. On the other hand, converse results were obtained for scenes with fewer or more salient areas. This suggests that the number of salient areas is a critical feature differentiating viewers' eye movements under different FOV conditions.
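
A rough illustration of this per-frame entropy analysis: the Python sketch below fits a Gaussian mixture model to one frame's gaze points and estimates the differential entropy by Monte Carlo sampling (a GMM's entropy has no closed form). The use of scikit-learn, the number of components, and all parameter values are assumptions for illustration, not details taken from the study.

```python
# Sketch: quantify the spread of gaze points in one video frame by fitting a
# Gaussian mixture model and estimating its differential entropy.
# Assumed details (not from the abstract): scikit-learn's GaussianMixture,
# three components, and Monte Carlo entropy estimation.
import numpy as np
from sklearn.mixture import GaussianMixture

def frame_gaze_entropy(gaze_xy, n_components=3, n_samples=10_000, seed=0):
    """Estimate the differential entropy (nats) of one frame's gaze points.

    gaze_xy: (n_points, 2) array of gaze coordinates pooled over subjects.
    """
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(gaze_xy)
    samples, _ = gmm.sample(n_samples)     # draw from the fitted mixture
    log_p = gmm.score_samples(samples)     # log density at those samples
    return -log_p.mean()                   # H ~ -E[log p(x)]

# Scattered gaze (wide FOV scene) yields higher entropy than clustered gaze.
rng = np.random.default_rng(1)
scattered = rng.uniform(0, 80, size=(200, 2))   # spread over an 80-deg field
clustered = rng.normal(40, 2, size=(200, 2))    # tight central cluster
print(frame_gaze_entropy(scattered) > frame_gaze_entropy(clustered))  # True
```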

Importance of the position of face presentation for studying gaze and perceptual biases

Helene Samson, Nicole Fiori, Karine Doré-Mazars, Christelle Lemoine and Dorine Vergilino-Perez

Previous studies have demonstrated a left perceptual bias when looking at faces, with observers mainly using information from the left side of a face to make a judgment. Such a bias is consistent with right hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e. more and longer fixations on the left side of the face. Here, in several experiments, we recorded eye movements during a gender judgment task, using normal and chimeric faces presented at the top, bottom, left or right relative to the central fixation point, or at the center. Participants performed the judgment task while remaining fixated on the fixation point or after executing one, two or three saccades. Overall, we observed that one saccade was sufficient to perform the judgment task: the percentage of correct responses did not improve with further saccades. The left perceptual bias was a function of the number of saccades that were performed, while the gaze bias was a function of face position. No apparent link between gaze and perceptual biases was found in any experiment, meaning that the perceptual bias was not systematically coupled to saccades made toward the side of the face that was used to perform the gender judgment.

Dynamic characteristics of saccadic eye movements are affected by ocular size

Mohammed Alhazmi, Lyle S. Gray and Dirk Seidel

Purpose: To determine the effect of eye size and ocular rigidity upon the characteristics of saccadic eye movements (SEM). Methods: 33 subjects (mean age 23.52±5.11 years) participated with informed consent. Axial length was measured using partial coherence interferometry (IOLMaster, Carl Zeiss, UK) and ranged from 21.3 mm to 27.7 mm. Ocular rigidity coefficients were determined using Schiotz tonometry and ranged from 0.013 mm⁻³ to 0.019 mm⁻³. SEM were stimulated randomly up to 40 degrees right and left, in 10 degree steps, using high contrast targets presented upon a widescreen monitor at 40 cm viewing distance. Eye movements were recorded continuously at a sampling rate of 60 Hz using the Viewpoint video-eyetracker (Arrington Research, USA). Subjects were grouped equally by axial length into short (22.63±0.66 mm), medium (24.10±0.41 mm) and long (25.90±0.10 mm) groups. Results: Axial length was significantly negatively correlated with ocular rigidity (r²=0.78, p<0.001). Peak velocity was significantly faster for the short axial length group (F=20.825, df=2,263, p<0.001), although initial SEM accuracy was significantly worse (F=2.954, df=2,263, p=0.048). Peak velocity was significantly greater with increasing stimulus magnitude (F=741.759, df=3,263, p<0.001) and was also significantly greater for abductive movements (F=3.940, df=1,263, p=0.048). Conclusion: Eyes with short axial length and higher ocular rigidity generate significantly higher SEM velocities.

Visual and virtual pursuit in movies: The oculomotor and memorial consequences

Yoriko Hirose, Benjamin W. Tatler and Alan Kennedy

The spatial coding of object position in movies is relatively poor compared to the processing of position information in static scenes or natural behaviour. During a movie sequence that tracks an actor's progress, the background is in relative motion whereas the actor remains near-stationary. This virtual pursuit is in marked contrast to the situation when pursuing a moving target, where the background is stationary but the target is in relative motion (visual pursuit). Is virtual pursuit (the phenomenal experience of pursuit without the concomitant eye movements) responsible for poor position memory in movies? Manipulating both camera movement (static/tracking) and actor presence (present/absent), we compared the oculomotor and memorial consequences of viewing movies with visual and virtual pursuit. The results showed that camera movement influenced fixation allocation on objects, whereas actor presence influenced both fixation allocation and fixation durations. In contrast, memory for object position and identity was explained by total fixation durations irrespective of camera movement or actor presence, while memory for colour was influenced by actor presence. The results indicate that camera movement and actor presence have oculomotor consequences and that the latter also affects memory for colour, but these two factors do not influence information extraction for object position and identity.

Static gaze direction detection in children with autism: A developmental perspective

Roberta Fadda, Giuseppe Doneddu, Sara Congiu, Azzurra Salvago, Giovanna Frigo and Tricia Striano

Gaze direction detection in Autism Spectrum Disorders (ASDs) is controversial. Accurate gaze judgement appears problematic for individuals with ASDs (e.g. Webster & Potter, 2008), while the basic geometric understanding of gaze direction seems to be preserved (Swettenham et al., 2003). However, it is not clear whether possible impairments in ASDs reflect an immature pattern of visual attention or a specific deficit. Using eye tracking technology, we investigated gaze direction detection in children with ASDs, in comparison with three groups of typically developing (TD) individuals: 20 chronological age-matched TD children, 20 TD toddlers and 20 TD adults. The results showed that children with ASDs were as accurate as controls across ages in gaze direction detection (F(3,76) = 3.006, p = 0.035). However, they focused significantly less upon the eyes, the most relevant region of the picture for task completion, and they did not show any preference for the gaze target overall (F(6,152) = 5.02, p < 0.05). Their pattern of attention was different from that of TD children and adults but similar to that of TD toddlers, and it could therefore be considered the result of an immature pattern of visual attention. References: Webster, S., & Potter, D. D. (2008). Brief report: Eye direction detection improves with development in autism. Journal of Autism and Developmental Disorders, 38(6), 1184-1186. Swettenham, J., Condie, S., Campbell, R., Milne, E., & Coleman, M. (2003). Does the perception of moving eyes trigger reflexive visual orienting in autism? Philosophical Transactions of the Royal Society, Series B, 358, 325-334.

Spatial localization of sounds under free-field and dichotic presentation: The role of eye movements

Stefano Targher, Alessio Fracasso, Massimiliano Zampini and Francesco Pavani

In a previous study, Pavani and coworkers [Pavani et al., 2008, Experimental Brain Research, 189, 435-449] found interfering effects of an eye movement intervening between two consecutive free-field sounds when participants were asked to compare the relative spatial positions of the sounds (same/different task). By contrast, Kopinska and Harris [Kopinska and Harris, 2003, Canadian Journal of Experimental Psychology, 57, 23-37] found no eye movement interference using dichotic sounds delivered through headphones, in a task in which participants had to indicate (i.e., reposition) the remembered location of a sound after the intervening eye movement. Free-field sounds appear in a visible external space, whereas intracranial sounds do not. Here we adopted the paradigm of Pavani et al. (2008) to examine the impact of eye movements on auditory spatial cognition when sounds were delivered, across blocks, either in the free field or intracranially. Although a significant eye movement effect emerged regardless of sound presentation, the spatial interaction between eye movement direction and auditory change direction depended on whether sounds were free-field or intracranial. This indicates different interactions between the spatial coding of eye movements and sounds, as a function of whether the sounds appeared in a visible external space or not.

Comparing eye movements during impression judgment of faces in different personality traits: analysis of fixation locations and durations

Natsuko Nakamura, Ayumi Maruyama, Yoshinori Inaba, Hanae Ishi, Jiro Gyoba and Shigeru Akamatsu

We intentionally manipulated the attributes and social impressions evoked by facial images based on a statistical face model (Kobayashi et al., 2004; Walker & Vetter, 2009). However, it remains unresolved which parameters dominantly contribute to the local features of facial appearance that are specifically salient for particular personality traits. To clarify this issue, we investigated whether different features are gazed at while making impression judgments, using synthesized face images generated by a statistical model as stimuli. Pairs of synthesized face images, both generated by a statistical model of faces with parameters indicating different salience of a given personality trait (e.g., seniority, proficiency), were sequentially presented at normalized positions. For each pair of faces, subjects decided which one was more extreme with respect to the trait in question, while their eye movements were measured by a rapid eye-movement measurement system (EyeLink-C). The eye-movement results were represented in 2D histograms indicating the spatial distribution of the cumulative duration of gaze at each fixation point; positions corresponding to the mode of each histogram were analyzed by two-way ANOVA (Nakamura et al., 2011). The results suggest that attention is drawn to different features in impression judgments. References: [Kobayashi et al., 2004] Proc. Int'l Conf. on Automatic Face and Gesture Recognition, pp. 711-716; [Walker & Vetter, 2009] Journal of Vision, 9(11):12, pp. 1-13; [Nakamura et al., 2011] Perception, 40, ECVP Abstract Supplement, p. 113.
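
To make the histogram step concrete, here is a minimal Python sketch that accumulates fixation durations into a duration-weighted 2D histogram and extracts its mode; the bin size, image dimensions, and function name are illustrative assumptions, not the authors' implementation.

```python
# Sketch: duration-weighted 2D histogram of fixations and its modal position.
# Bin size and image dimensions are assumed values for illustration.
import numpy as np

def duration_histogram(fix_x, fix_y, fix_dur_ms, img_w=512, img_h=512, bin_px=16):
    """Sum fixation durations into spatial bins; return histogram and mode."""
    xedges = np.arange(0, img_w + bin_px, bin_px)
    yedges = np.arange(0, img_h + bin_px, bin_px)
    hist, _, _ = np.histogram2d(fix_x, fix_y, bins=[xedges, yedges],
                                weights=fix_dur_ms)  # summed duration per bin
    ix, iy = np.unravel_index(np.argmax(hist), hist.shape)
    mode_xy = (xedges[ix] + bin_px / 2, yedges[iy] + bin_px / 2)  # bin center
    return hist, mode_xy

# Example with made-up fixations (x, y in pixels; durations in ms):
hist, mode = duration_histogram(np.array([250.0, 260.0, 400.0]),
                                np.array([200.0, 210.0, 100.0]),
                                np.array([300.0, 450.0, 120.0]))
print(mode)  # center of the bin holding the largest summed duration
```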

Sarkozy: left or right? How early can we choose?

Marie A Mathey, Gabriel Besson, Gladys Barragan-Jason, Pierre-Marie Garderes, Emmanuel J Barbeau and Simon J Thorpe

When a face and a vehicle are flashed left and right of fixation, reliable saccades to the face can be triggered from 100-110 ms (Crouzet et al, J Vis, 2010). But can subjects make selective saccades that depend on face identity? Using a manual go/no-go paradigm, our lab has already shown that subjects can detect photographs of Nicolas Sarkozy with an accuracy of 94.6 ± 3.0%, mean RTs of 364 ± 20 ms, and the fastest reliable responses occurring as early as 260 ms (Besson et al, ECVP, 2011). Here, we used the same material in a saccadic choice task. On each trial a photograph of Nicolas Sarkozy was paired with a photograph of another male face of roughly the same age and appearance. Over the complete set, the images were matched for pose and expression. With the saccade task, accuracy was lower, but still well above chance (range: 60-75%). Above all, the responses were very fast (median RT < 190 ms and minimum RT < 140 ms). These results imply that information about identity can impact behaviour much faster than had previously been suspected. How such a feat can be achieved is currently under investigation.

Saccadic mislocalization: related to eye movement amplitude or visual target location?

Maria Matziridi, Jeroen B. J. Smeets and Eli Brenner

A stimulus that is flashed around the time of a saccade tends to be mislocalized in the direction of the saccade target. Our question is whether the mislocalization is towards the visual target location or towards the gaze position at the end of the saccade. We separated the two with a visual illusion that influences the perceived distance to the target. We asked participants to make horizontal saccades from the left to the right end of the shaft of a Müller-Lyer figure. Around the time of the saccade, we flashed a bar at one of five possible positions. As expected, participants made shorter saccades for the fins-in (<-->) configuration than for the fins-out (>--<) configuration of the figure. The illusion also influenced the mislocalization pattern, with flashes presented on the fins-out configuration being perceived beyond flashes presented on the fins-in configuration. During the saccade, the effect of the illusion on the perceived location of the flash was similar to its effect on saccade amplitude (~22%). We conclude that the mislocalization is related to the eye movement amplitude rather than to the visual target location.

The effect of the Müller-Lyer illusion on reflexive, delayed, and memory-guided saccades

Anouk J. de Brouwer, Eli Brenner, W. Pieter Medendorp and Jeroen B.J. Smeets

The amplitude of saccadic eye movements is affected by size illusions such as the Müller-Lyer illusion. The magnitude of the effect of the Müller-Lyer illusion on saccades varies considerably between studies (2.5-28.7%; Bruno et al, 2010, Vision Research, 50, 2671-2682). Our goal is to further clarify this variability by testing the influence of a delay on the effect of the Müller-Lyer illusion. According to the 'two visual systems hypothesis' (Milner & Goodale, 1995, The Visual Brain in Action, Oxford, Oxford University Press), responses to memorized target positions rely on a perceptual representation coming from the ventral 'perception' pathway, which is affected by visual illusions. Reflexive actions depend on the dorsal 'action' pathway, which does not have access to this perceptual representation. The proposed distinction between perception and action therefore implies that while reflexive actions are (largely) immune to visual illusions, memory-guided responses are influenced by them. In the present study, subjects performed two reflexive saccade tasks (with and without a gap) and two delayed saccade tasks (delayed and memory-guided responses) with the Müller-Lyer illusion. Contrary to the prediction of the two visual systems hypothesis, the effect of the illusion was not smallest for reflexive responses.

Saccade targeting of spatially extended objects: A Bayesian model

André Krügel and Ralf Engbert

Visual perception in scene viewing, visual search, or reading is based on saccadic selection of spatially extended objects or patches from the environment for foveal analysis. Saccade planning, however, requires the computation of a localized target position. For example, word centers are the functional target locations within words during reading. An open problem is how readers transform the distributed spatial information into a localized target position for a saccade. Recently, Engbert and Krügel (Psychological Science, 21(3), 366-371, 2010) presented a Bayesian model for optimal saccade planning during reading. The model assumes that readers apply an efficient strategy for the computation of the center of target words. Here we present an extended Bayesian model that includes the probabilistic computation of the word center. Using this example from saccade generation in reading, we analyze the general problem of how two visual cues (i.e., word boundaries in reading) can be used to derive optimal estimates of the center of spatially extended targets. We demonstrate that our Bayesian model is compatible with well-established oculomotor effects. We expect that the model might help to explain saccade planning across a broad range of tasks from scene perception to visual search.
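
As a toy illustration of this kind of Bayesian computation, the sketch below combines two noisy boundary cues into a center estimate and then shrinks it toward a prior over saccade amplitude, producing a range-error-like undershoot. The Gaussian assumptions and every numerical value are invented for illustration; this is not the authors' fitted model.

```python
# Sketch: Gaussian Bayesian estimate of a target center from two boundary
# cues plus a prior over saccade amplitude (units: letter positions).
# All parameter values are assumptions for illustration.
import numpy as np

def posterior_center(left_edge, right_edge, sigma_edge=0.5,
                     prior_mean=7.0, prior_sd=2.0):
    """Midpoint of two noisy edge cues, shrunk toward the prior mean."""
    center_obs = 0.5 * (left_edge + right_edge)   # likelihood mean
    sigma_obs = sigma_edge / np.sqrt(2)           # noise of the midpoint
    # Conjugate Gaussian update: precision-weighted mix of data and prior.
    w = prior_sd**2 / (prior_sd**2 + sigma_obs**2)
    return w * center_obs + (1 - w) * prior_mean

# A far word center (13) is pulled back toward the prior (7): undershoot,
# qualitatively like the classic saccadic range error.
print(posterior_center(left_edge=10.0, right_edge=16.0))  # slightly < 13
```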

A simulation study of retinal enhancement effects caused by fixation eye movements

Takeshi Kohama, Takuya Fukuoka and Hisashi Yoshida

In this study, we performed simulation experiments using a mathematical model of the retina to evaluate the effects of fixation eye movements on the peripheral retina. Recent studies indicate that drifts and tremor enhance particular spatial frequency components [Rucci et al., 2007, Nature, 447, 852-855], and that microsaccades emphasize contrast differences present in the visual stimuli [Donner and Hemilä, 2007, Vision Res., 47, 1166-1177]. However, the effects of fixation eye movements on the retina, especially on the peripheral retina, are not well understood. The proposed model of the retina considers the distribution function of cone cells, and reproduces the increasing size of the peripheral receptive fields of ganglion cells, based on the mathematical model of the retinal network [Hennig and Wörgötter, 2007, Frontiers Comp. Neurosci., 1, 1-12]. The simulation results showed that drifts and tremor enhanced the responses of ganglion cells to high spatial frequency input, and microsaccades enhanced the responses to low spatial frequency input. These trends were more prominent for M-type ganglion cells at some distance from the fovea. Furthermore, the simulation suggested that fixation eye movements might generate synchronous fluctuations in the responses of ganglion cells.
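
For readers who want the gist of such a simulation, here is a deliberately simplified 1D Python sketch: difference-of-Gaussians receptive fields whose size grows with eccentricity, probed with a grating whose phase is jittered by a random-walk drift. Every parameter value and the response measure are assumptions for illustration; the published model is far richer.

```python
# Sketch: response modulation of difference-of-Gaussians (DoG) receptive
# fields under fixational drift. A 1D toy with assumed parameters, not the
# authors' retinal network model.
import numpy as np

x = np.linspace(-10, 10, 2001)                 # retinal positions (deg)
dx = x[1] - x[0]

def dog_rf(center_deg, ecc_deg):
    """DoG profile; width grows with eccentricity (assumed growth rate)."""
    sc = 0.1 + 0.05 * abs(ecc_deg)             # center Gaussian width (deg)
    ss = 3.0 * sc                              # broader antagonistic surround
    g = lambda s: np.exp(-0.5 * ((x - center_deg) / s) ** 2) / s
    return g(sc) - 0.9 * g(ss)

def response(rf, freq_cpd, phase_deg):
    """Linear response of the RF to a grating at a given spatial phase."""
    stim = np.sin(2 * np.pi * freq_cpd * (x + phase_deg))
    return np.sum(rf * stim) * dx

rf_fovea = dog_rf(0.0, 0.0)                    # small foveal RF
rf_periph = dog_rf(6.0, 6.0)                   # larger peripheral RF
drift = 0.02 * np.cumsum(np.random.default_rng(0).standard_normal(200))
for freq in (0.5, 4.0):                        # low vs high spatial frequency
    mod_fov = np.std([response(rf_fovea, freq, p) for p in drift])
    mod_per = np.std([response(rf_periph, freq, p) for p in drift])
    print(f"{freq} c/deg: foveal modulation {mod_fov:.4f}, peripheral {mod_per:.4f}")
```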

Computational mechanisms of visual stability

Fred Hamker and Arnold Ziesche

Cells in many visual areas are retinotopically organized and thus shift with the eyes, posing the question of how we construct our subjective experience of a stable world. While predictive remapping (Duhamel et al., 1992, Science, 255, 90-92) and the corollary discharge (CD) accompanying eye movement commands (Sommer & Wurtz, 2006, Nature, 444, 374-377) have been proposed to provide a potential solution, there exists no clear theory, let alone a computational model, of how CD and predictive remapping contribute. Based on a recent model of area LIP (Ziesche & Hamker, 2011, Journal of Neuroscience, 31, 17392-17405) that focused on spatial mislocalization of brief flashes in total darkness, we show that predictive remapping emerges within a model of coordinate transformation by means of the interaction of feedback and CD. Moreover, we demonstrate the influence of predictive remapping on visual stability, as objectified by a saccadic suppression of displacement task (Deubel et al., 1996, Vision Research, 36, 985-996), using the same model. Remapping introduces a feedback loop which stabilizes perisaccadic activity and thus leads to the typical increase in displacement detection threshold, thereby preventing misperceptions of stimulus displacements which might otherwise arise from motor errors in saccade execution.
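
A toy Python simulation conveys one piece of the intuition: if the brain reconstructs a stimulus's world position as retinal position plus an eye-position estimate, a CD signal that leads the eye yields a much smaller perisaccadic error than a sluggish proprioceptive signal. All timings and amplitudes below are invented for illustration and are not taken from the LIP model.

```python
# Sketch: perisaccadic localization error with a lagging eye-position signal
# versus a leading corollary-discharge (CD) estimate. Toy numbers throughout.
import numpy as np

t = np.arange(0, 300, 5)                       # time (ms)
onset, dur, amp = 150, 50, 10.0                # 10-deg saccade at t = 150 ms
eye = np.clip((t - onset) / dur, 0, 1) * amp   # actual eye position (deg)
world_target = 10.0                            # stationary stimulus (deg)
retinal = world_target - eye                   # stimulus position on retina

proprio = np.interp(t - 60, t, eye)            # sluggish signal, 60 ms lag
cd = np.clip((t - onset + 20) / dur, 0, 1) * amp   # CD leads the eye by 20 ms

# Reconstructed world position = retinal position + eye-position estimate.
err_proprio = np.abs(retinal + proprio - world_target).max()
err_cd = np.abs(retinal + cd - world_target).max()
print(f"max perisaccadic error: lagging signal {err_proprio:.1f} deg, "
      f"CD {err_cd:.1f} deg")                  # CD error is much smaller
```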

Motion direction integration following the onset of multistable stimuli (I): dynamic shifts in both perception and eye movements depend on signal strength

Andrew Meso, James Rankin, Olivier Faugeras, Pierre Kornprobst and Guillaume Masson

We used an obliquely oriented moving luminance grating within a square aperture as a stimulus. We probed how the aperture problem (determining the direction of a contour within an aperture) is solved following stimulus onset. The grating is perceived to move horizontally (H), diagonally (D) or vertically (V), with perception shifting during extended presentation. The initial percept (D) gives way to two competing 'stable' solutions, H and V, driven by the orthogonal 2D cues around the edges. During brief stimulus presentations of 200-500 ms, participants reported perceived direction (H, D or V) in a 3-alternative forced choice task while eye movements were recorded. As expected when solving the aperture problem, integration took time: reported direction was predominantly 1D (D) at 200 ms, shifting to 2D (H/V) by 500 ms. Eye direction traces converge to an average direction (H, D or V) that corresponds to participant decisions. The latency of this separation of averaged traces depends on input signal strength parameters such as contrast. The relationship between input signal, distributions of perceived direction and forced choice decision thresholds is well described by a neural fields model in our companion abstract (II). The onset dynamics of multistable direction representation are thus shown to be well captured by ocular following eye movements.

Perceived motion blur and spatial-interval acuity during smooth pursuit eye movements and fixation

Harold E. Bedell, Michael J. Moulder and Jin Qian

Retinal image motion during smooth pursuit eye movements results in a smaller extent of perceived motion blur than when similar image motion occurs during steady fixation (e.g., Bedell & Lott, 1996, Current Biology, 6: 1032-1034). We asked if this reduction of perceived motion blur during pursuit influences spatial-interval acuity. Observers pursued a target moving horizontally at 4 or 8 deg/s and judged whether the horizontal separation between two physically stationary lines, presented for 167 ms, was larger or smaller than a previously viewed standard. Three different line separations were tested for each pursuit velocity. The observers also made spatial-interval judgments during fixation, for pairs of lines moving horizontally to generate the same distribution of retinal image speeds as those during pursuit. Spatial-interval acuity was better during pursuit than fixation, especially for smaller separations between the lines of the spatial-interval stimulus. The results indicate that the reduction of perceived motion blur during pursuit eye movements is associated with improved visual spatial performance.

What can eye movements tell us about object recognition in the normal and impaired visual system? The case of integrative agnosia

Charles Leek, Candy Patterson, Robert Rafal and Filipe Cristino

In a recent study (Leek et al., 2012, Journal of Vision, 12(1), 7) we showed how eye movement patterns may be used to elucidate shape information acquisition mechanisms during object recognition. Here we report some of the first evidence examining eye movement patterns during object recognition in visual agnosia. The eye movements of an integrative visual agnosic patient (IES) and controls were recorded during two object recognition tasks: object naming and novel object recognition memory. Differences in the spatial distributions of IES's fixations, and in fixation dwell times, were correlated with recognition performance in object naming. In addition, in both object naming and novel object recognition memory, the patient showed abnormal saccade amplitudes with a bias towards shorter saccades. In contrast, the patient showed normal directional biases and sensitivity to low-level visual saliency. It is suggested that this bias towards low amplitude saccades, and the aberrant spatial distribution of fixations in common object naming, reflect a breakdown in the functional link between bottom-up and top-down guidance of eye movements during shape perception.

Instrumental activities of daily life in Age-Related Macular Degeneration (AMD)

Céline Delerue, Miguel Thibaut, Thi Ha Chau Tran and Muriel Boucart

Questionnaires of quality of life indicate that people with AMD report difficulties in performing vision-related daily activities, such as reading, writing, and cooking. Studies on visual perception in AMD classically use photographs of objects. However, images differ from the natural world in several ways, including task demands and the dimensionality of the display. Our study was designed to assess whether central vision loss affects the execution of natural actions. We recorded eye movements in people with AMD and age-matched normally sighted participants while they accomplished a familiar task (sandwich-making) and an unfamiliar task (model-building). The scenes contained both task-relevant and task-irrelevant objects. Temporal and spatial characteristics of gaze were compared for each group and task. The results show that patients were able to perform both tasks, though they were slower and less accurate than controls in copying the display model in the unfamiliar task. Patients exhibited longer gaze durations than controls on irrelevant objects in both tasks. They also looked longer at task-relevant objects and needed to manipulate the objects more in order to identify them. People with AMD thus exhibit difficulties in natural actions, but they seem to establish compensatory strategies (e.g., object manipulation) to accomplish the tasks correctly.

Object and scene exploration in people with Age-Related Macular Degeneration (AMD)

Miguel Thibaut, Céline Delerue, Thi Ha Chau Tran and Muriel Boucart

People with lesions of the macula develop a central scotoma and must rely on their peripheral vision. Most studies in people with central field loss have focused on reading, which constitutes the main complaint of patients. Some studies have shown impairments in face or object detection or recognition tasks. Little is known about how people with central visual field loss explore realistic images. Could it be that deficits in object perception result from impaired eye movements? We recorded scan paths, saccades and fixations, and naming times in 20 patients with wet AMD (mean acuity 3/10) and 15 age-matched normally sighted controls (mean acuity 9.3/10). Photographs of isolated objects, natural scenes and objects in scenes were centrally displayed for 2 sec. The mean angular size was 30° horizontally × 20° vertically. On average, accuracy was 23% higher for controls than for patients. This difference was equivalent for isolated objects and for objects in scenes. The proportion of fixations in regions of interest was lower in people with AMD, and their number of saccades was larger than that of controls. These results suggest that abnormal patterns of exploration might contribute to deficits in object and scene recognition in people with AMD.

Predicting which objects will be named in an image

Alasdair Clarke, Moreno Coco and Frank Keller

Object detection and identification are fundamental visual tasks. Einhäuser et al [2008, Journal of Vision, 8(14):18] found that objects are a better predictor of fixation locations than early saliency models. Spain and Perona [2011, IJCV, 91(1)] constructed a model of 'object importance' (defined as the probability of naming an object in a scene) based on object position, size and saliency. For the present study we carried out an eye-tracking experiment involving 24 participants and 132 natural scenes. In each trial, an image was displayed for 5 seconds. Once the image had been removed from the screen, participants had to name as many objects from the scene as possible. In our analysis we examine the extent to which an individual's naming behaviour can be predicted from their scan-path. We test how well scan-path similarity metrics, such as ScanMatch [Cristino et al, 2010, Behavior Research Methods, 42(3)], identify participants who named a similar sequence of objects. In particular, we investigate cases in which an object is correctly named without being fixated, and fixated without being named. We aim to model this behaviour using a combination of visual (object saliency), contextual and linguistic (word frequency) information.
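
ScanMatch-style comparison rests on sequence alignment, so as a rough sketch of the underlying idea, the Python below computes a normalized Needleman-Wunsch score between a fixated-object sequence and a named-object sequence. The scoring values and object labels are invented for illustration and do not reproduce the ScanMatch toolbox.

```python
# Sketch: normalized Needleman-Wunsch alignment score between two label
# sequences (the core idea behind ScanMatch-style scan-path comparison).
# Match/mismatch/gap scores are assumed values, not the toolbox defaults.
def nw_score(seq_a, seq_b, match=1.0, mismatch=-1.0, gap=-0.5):
    """Global alignment score, normalized by the longer sequence length."""
    n, m = len(seq_a), len(seq_b)
    S = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap                      # leading gaps in seq_b
    for j in range(1, m + 1):
        S[0][j] = j * gap                      # leading gaps in seq_a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + sub,   # match / substitution
                          S[i - 1][j] + gap,       # gap in seq_b
                          S[i][j - 1] + gap)       # gap in seq_a
    return S[n][m] / max(n, m)

# Fixated-object sequence vs named-object sequence for one trial:
print(nw_score(["mug", "lamp", "sofa", "door"], ["mug", "sofa", "door"]))  # high
print(nw_score(["mug", "lamp", "sofa", "door"], ["tree", "car"]))          # low
```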

Illusory perceptual mislocalization of a spatially extended target does not affect eye movements

Dhanraj Vishwanath

In order to judge the relative perceptual location of a spatially extended object, or to make a saccadic eye movement to it, an abstract central reference position such as the center of gravity has to be computed. I demonstrate a perceptual illusion in which the center of a spatially extended target is mislocalized as a function of its 2D orientation. Observers visually aligned a small dot with the center of an oriented target (an elongated elliptical shape). There was a large misalignment when the target axes were oriented away from the cardinal directions; the size of the misalignment was a function of object orientation and as large as 15% of the major object axis length. However, average landing positions for single saccades made to the object from the comparison dot showed no similar mislocalization bias. Furthermore, perceptual mislocalization varied as a function of the eye-movement/fixation task: fixating the comparison dot when doing the alignment yielded the largest bias, followed by free scanning. The lowest errors were obtained when the oriented target itself was fixated. Taken together, the results suggest that eye movement programming has access to the true center of an object even in situations where its perceptual location cannot be accurately determined. The results bear on the debate regarding differing representations for visual judgements and visually-guided motor actions.

Why you are more likely to see a snail move if it is surrounded by grasshoppers: Influence of the prior assumption of stationarity on saccadic suppression

Marianne Duyck, Mark Wexler and Thérèse Collins

Object displacement often goes unnoticed when it happens around the beginning of a saccade. It has been suggested that this phenomenon, saccadic suppression, results from a combination of three sources of information: retinal, extra-retinal, and the assumption that the world is stable during eye movements. Here, we investigated this issue experimentally using a classic saccadic suppression paradigm. We manipulated the prior assumption of stationarity by varying the probability of target displacement around the saccade in two conditions: in the unstable condition, the target jumped on every trial, whereas in the stable condition, those jumps occurred on only 25% of the trials. The subjects' task was to maintain fixation on the target and to follow its first displacement, then to report the direction of the second displacement in a 2AFC procedure. Results indicate that saccadic suppression increases in the stable condition compared to the unstable one: subjects are worse at detecting target displacements around the beginning of the saccade when target displacement is rare than when the target always shifts during saccades. Thus, visual stability mechanisms seem to take into account knowledge regarding the stability of the world, knowledge that can be updated during the course of a two-hour experiment.

Peripheral spatial frequency processing affects timing and metrics of saccades

Jochen Laubrock, Anke Cajar and Ralf Engbert

The visual acuity gradient with eccentricity and cortical magnification suggest a specialization of foveal vision for high-acuity analysis and of peripheral vision for orienting and target selection. How does this specialization affect eye movements? We used Gaussian-envelope gaze-contingent filtering of spatial frequencies in natural scenes and colored-noise images to investigate influences of level of detail on saccade amplitudes and fixation durations in free-viewing and search tasks. Filter characteristic (low-pass vs. high-pass), filter location (foveal vs. peripheral), and filter size were independently varied. Spatial results reveal a preference for targeting saccades toward the region that was unfiltered during the current fixation, with an additional influence of filter characteristic. Surprisingly, temporal results indicate that peripheral information does play a role in controlling fixation duration. Task influences are mainly visible in how the filter characteristic affects the evolution of durations and amplitudes over the course of the trial. We conclude that both foveal and peripheral vision contribute to how and when we move our eyes.
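
One plausible way to implement such filtering, sketched in Python below: blend a low-pass filtered copy of the image with the original through a Gaussian window centered at gaze, blurring either the foveal or the peripheral region. scipy's gaussian_filter, the window and blur widths, and the function name are assumptions; the study's actual implementation (including its high-pass condition) is not reproduced here.

```python
# Sketch: gaze-contingent low-pass filtering with a Gaussian blending window.
# Parameter values are assumed; a high-pass variant would substitute a
# high-pass filtered image for the blurred one.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_contingent_frame(img, gaze_xy, filter_loc="peripheral",
                          window_sigma=60.0, blur_sigma=4.0):
    """Blur either the foveal or the peripheral region around gaze.

    img: 2D grayscale array; gaze_xy: (x, y) gaze position in pixels.
    """
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    d2 = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2
    fovea = np.exp(-0.5 * d2 / window_sigma**2)    # 1 at gaze, ~0 far away
    blurred = gaussian_filter(img, blur_sigma)     # low-pass version
    if filter_loc == "peripheral":                 # sharp fovea, blurred surround
        return fovea * img + (1 - fovea) * blurred
    return fovea * blurred + (1 - fovea) * img     # blurred fovea, sharp surround

# Example: peripheral low-pass filtering around gaze at (320, 240).
frame = gaze_contingent_frame(np.random.rand(480, 640), (320, 240))
```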

Effect of perceptual grouping by similarity on eye movements in processing simple visual stimuli

Ivars Lacis, Ilze Laicane and Jurgis Skilters

In the present study we analyzed the impact of grouping by similarity on saccadic processing of simple sequential stimuli. We used four sets of stimuli: (1) points; (2) points, triangles, squares, and rhombs (two or more figures of the same shape were followed by two or more figures of a different shape); (3) the 2nd set of stimuli colored according to shape; and (4) the 2nd set of stimuli with a background colored according to shape. Stimuli were placed at equal distances from each other and were of equal size. In measuring saccadic amplitudes and the stability of gaze during fixations, we observed an increased processing load for grouped stimuli. This is reflected in (a) an increase in the standard deviation of saccadic amplitudes in the tasks with grouping effects (additionally, the distribution of saccadic amplitudes was significantly higher for stimuli with grouping effects); (b) an increase in small saccades in the grouping tasks; and (c) a systematic increase in the amplitude of microsaccades over fixation time. These observations enable us to confirm two general assumptions: (a) grouping significantly influences saccadic processing of simple visual stimuli; (b) the complexity of grouping is reflected in the complexity of saccadic processing.

Eye movement deficits in neglect dyslexia

Silvia Primativo, Lisa Saskia Arduino, Maria De Luca, Roberta Daini and Marialuisa Martelli

Neglect Dyslexia (ND) is an acquired reading disorder often associated with right-sided brain lesions and Unilateral Spatial Neglect (USN). In reading aloud single words, patients with ND produce left-sided errors. The reported dissociations between USN and ND suggest that the latter can be interpreted as a selective reading deficit distinct from USN. We analyzed eye movements in USN patients with and without ND (ND+ and ND-, respectively) and in a group of controls (right brain-damaged patients without USN), comparing a reading aloud task and a saccadic task (left-right saccade test). Only ND+ patients made left-sided errors and showed impaired saccade execution in both the reading and saccadic tasks. Finally, in a speeded reading-at-threshold experiment, which does not allow for eye movements, ND- patients, but not controls, made left-sided errors. Our results indicate that ND+ patients have an impaired eye movement pattern in addition to their spatial attention disorder, which exposes the neglect gradient in reading; ND- patients show the same gradient in reading errors when eye movements are prevented. We conjecture that ND, rather than being a dissociated disorder, is the result of the USN syndrome when the fine eye movements required for reading are compromised.

Reading parallel texts - augmentation and eye movements

Gavin Brelstaff, Francesca Chessa, Ludovica Lorusso and Enrico Grosso

The experience of reading translated texts can be augmented by presenting parallel texts, of the source and translation, side-by-side on the page. On screen, those texts can be made to dynamically reveal correspondences between their words and phrases (Chessa & Brelstaff, Proc. of CHItaly, ACM, 2012). We attempt to assess the utility of such methods by monitoring readers' eye movements, with and without augmentation, using a Tobii TX300 tracker. The literature on eye movements and reading (e.g. Rayner, 1998, Psychological Bulletin; Schultz et al, 2011, JOV) has little to say on parallel text appreciation. Nevertheless, a greater number of fixation regressions is a likely consequence of switching gaze between columns of parallel texts. Our results indicate whether regression rates improve with augmentation via on-screen dynamic colour highlighting. We also report variations occurring in the task of reading-while-listening (Lévy-Schoen, 1981) to the text in either language.

Post-saccadic location judgements after presentation of multiple target-like objects

Sven Ohl, Stephan Brandt and Reinhold Kliegl

In the present study we examine how the interplay between oculomotor error, secondary (micro-)saccades and available visual information affects post-saccadic location judgements when multiple target-like objects are presented during post-saccadic fixation. During the saccade, the screen presenting the target was replaced by a screen with 63 target-like objects presented horizontally side by side. Subjects were asked to fixate the target and to indicate via mouse click the object they assumed to be identical to the pre-saccadic target. Each subject participated in two sessions, which differed with respect to whether or not a blank period of 200 ms was inserted between saccade initiation and presentation of the target array. Contrary to our expectation, preliminary analyses did not reveal significantly different location judgements between the blank and no-blank conditions. Nevertheless, inserting a blank significantly influenced oculomotor behavior by decreasing the number of secondary (micro-)saccades during an early time window. When subjects generate a post-saccadic eye movement, they strongly tend to choose the object fixated after that eye movement. Results are discussed with respect to mechanisms currently proposed to underlie the perception of a stable visual world.

The effect of compensatory scanning training on visual scanning in hemianopia patients

Gera de Haan, Joost Heutink, Bart Melis-Dankers, Oliver Tucha and Wiebo Brouwer

Homonymous hemianopia, the most common form of Homonymous Visual Field Defects (HVFD), refers to a loss of perception for half the visual field, affecting both eyes, due to acquired postchiasmatic brain injury. This partial blindness may lead to a disorganized visual search strategy and particular difficulties with visual exploration. A new Compensatory Scanning Training (CST) protocol has been developed, which aims to improve awareness, scanning and mobility in daily life. The main focus of this training is to teach patients to adopt a systematic scanning rhythm. Among other tests, we administered three visual scanning tests before and after CST in a group of 50 hemianopia patients. The eye movements were registered during a dot counting task, a search task with a parallel and a serial search condition, and a hazard perception task, in which subjects watched photos of traffic situations from the perspective of the car driver. The eye movements on the different scanning tasks before and after CST will be compared, as well as the relationship between scanning strategy and performance, as measured by reaction times and accuracy scores. The hypotheses are that CST has a beneficial effect on scanning and that different search tasks require different scanning strategies.