Recognition errors in the crowding effect in central vision
Valeriy Chikhman, Valeria Bondarko, Marina Danilova and Sergei Solnushkin
In central vision we compared the recognition of test stimuli presented in isolation or in the presence of surrounding objects. The stimuli were stylized low-contrast letters of size 1.1, 2.1 or 4.3 deg, and the surrounding objects were digits 1 - 9 of size 1.3 deg. The digits were presented at various distances from the test image. In different experimental sets, participants were asked to identify either the central test stimulus alone or both the test and the additional peripheral digit. We found that small separations produced a crowding effect in central vision. Recognition errors in the presence or absence of surrounding objects were compared using correlation analysis and modeling. At small separations, recognition errors were less correlated with the errors to the isolated test (test stimuli presented alone) than were the errors recorded at large separations. In the modeling, the 'distances' between stimuli were calculated using a template model, or a model in which difference spectra were calculated with the images' centroids superposed. The best correlation with the experimental data was obtained with the template model. Our results support the hypothesis that a failure of feature integration occurs in the presence of crowding.
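The image-based 'distance' computations described above can be illustrated with a minimal sketch. This is one plausible reading of a centroid-superposed pixelwise comparison, not the authors' actual implementation: the function names, the integer-pixel shift, and the root-mean-square distance measure are assumptions for illustration, with images taken to be 2-D NumPy arrays.

```python
import numpy as np

def centroid(img):
    # Intensity-weighted centre of mass (row, column) of a 2-D image.
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return ys.ravel() @ img.ravel() / total, xs.ravel() @ img.ravel() / total

def template_distance(a, b):
    # Superpose the two images' centroids (integer shift for simplicity),
    # then take the root-mean-square pixel difference as the 'distance'.
    (ay, ax), (by, bx) = centroid(a), centroid(b)
    shifted = np.roll(np.roll(b, round(ay - by), axis=0), round(ax - bx), axis=1)
    return np.sqrt(np.mean((a - shifted) ** 2))
```

Superposing the centroids before differencing removes position offsets, so that only shape differences between the two stimuli contribute to the distance.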
Shifts of visuospatial attention in perihand space
Marc Grosjean and Nathalie Le Bigot
Research on shifts of visuospatial attention in perihand space has apparently led to conflicting findings. In particular, whereas some Posner cueing studies have found that hand proximity modulates the size of cueing effects, others have not. One reason for such discrepancies may be related to the types of cues (uninformative and informative) that have been used, as they are known to induce different types of shifts of attention (involuntary and voluntary, respectively). To address this question systematically, two experiments were performed in which an uninformative peripheral cue (Experiment 1) or an informative central cue (Experiment 2) preceded a peripheral target with a short (100-150 ms) stimulus-onset asynchrony. Participants performed the tasks under four hand positions: left hand, right hand, both hands, or no hands near the display. Cueing effects were obtained in both experiments; however, in contrast to Experiment 2, Experiment 1 also revealed an interaction between cue validity and hand position, reflecting a larger cueing effect in the right- and both-hands conditions than in the other conditions. These findings suggest that involuntary shifts of attention are affected by hand proximity, while voluntary shifts are not.
Top-down control in contour grouping: An EEG study.
Gregor Volberg, Andreas Wutz and Mark W. Greenlee
Human observers tend to group oriented line segments into full contours following the Gestalt rule of 'good continuation'. It is commonly assumed that contour grouping emerges automatically in early visual cortex. In contrast, recent work in animal models suggests that contour grouping requires learning and thus involves top-down control from higher brain structures. Here we explore mechanisms of top-down control in perceptual grouping by investigating synchronicity within EEG oscillations. Human participants saw two micro-Gabor arrays in random order, with the task of indicating whether the first (S1) or the second stimulus (S2) contained a contour of collinearly aligned elements. A contour S1, compared to a non-contour S1, produced a larger posterior post-stimulus beta power (15-21 Hz). A contour S2 was associated with a pre-stimulus decrease in posterior alpha power (11-12 Hz) and in fronto-posterior theta (4-5 Hz) phase couplings, but not with a post-stimulus increase in beta power. The results indicate that subjects used prior knowledge from S1 processing for S2 contour grouping. Expanding previous work on theta oscillations, we propose that long-range theta synchrony shapes neural responses to perceptual groupings by either up- or down-regulating lateral inhibition in early visual cortex.
Core Characteristics Defining the Nature of Unilateral Neglect Syndrome
Marina Pavlovskaya, Nachum Soroker, Yoram Bonneh and Shaul Hochstein
We are studying the perceptual-cognitive capabilities of patients with unilateral neglect. Traditionally, neglect is defined as an inability to perceive left-side stimuli that is not due to a lack of sensation. Deficits may include failure at cancellation and line-bisection tasks and extinction. Results from our study series put strong constraints on the underlying mechanisms of neglect. We determine neglect/extinction rigorously by measuring left-side Gabor-patch contrast thresholds in the presence/absence of right-side stimuli. Left-side perception is statistical, with accurate contrast determination for perceived stimuli. Thus, the reduction in contrast sensitivity is not low-level attenuation, but reflects a heightened threshold at higher processing levels, preventing conscious awareness of sensory events. We find great differences between the impacts of neglect on tasks requiring focused attention (conjunction search) versus those requiring spread attention (feature search), suggesting that neglect only affects the process of focusing attention, and that spread-attention deficits only arise from extinction-like effects. Similar results derive from experiments on patients' assessing statistical properties of clouds of elements. USN patients compute full-field means, giving reduced weight to left-side stimuli, largely due to extinction. This confirms the conclusion that neglect is a high-level effect, which prevents conscious integration of left-side elements with those on the right, especially in cases where stimulus elements are present on both sides. This research was supported by the Israel Science Foundation (ISF) and the US-Israel Bi-national Science Foundation (BSF), as well as the National Institute for Psychobiology in Israel (NIPI) to author MP.
Attentional preference for attractive faces
Nadine Kloth, Lindsey Short and Gillian Rhodes
Faces are stimuli of outstanding social relevance, and our visual system seems to dedicate 'special' mechanisms and separate neural resources to their processing. In line with this, faces also receive privileged attentional resources when presented amongst non-face objects, possibly from a separate face-specific attentional system with a capacity limit of one face at a time. It is therefore plausible to assume that some faces stand out more than others when presented within a group. Prior research indeed has shown that faces displaying certain emotional expressions or direct gaze are preferentially attended. However, the effect of other facial characteristics on attentional selection has hardly been explored. Here, we used a modified dot-probe paradigm to investigate the existence and nature of attentional preferences for highly attractive faces relative to less attractive faces. We found some evidence of an attentional bias towards attractive faces. Factors such as SOA, participant gender, and stimulus gender modulated this effect, suggesting that facial attractiveness biases attention in a complex manner, interacting with other characteristics of both the stimulus and the observer.
Development of visual working memory and its relation to academic performance in elementary school children
Hiroyuki Tsubomi and Katsumi Watanabe
Visual working memory (VWM) enables active maintenance of visual information. It is also crucial to exclude distractors in order to keep once-stored items in VWM. Here, we investigated how VWM develops and becomes distractor-proof in Japanese elementary school children, and how these abilities relate to academic performance in the classroom. A total of 123 Japanese children (7 to 12 years old) were instructed to remember the positions of 4 colored squares over a 2-sec retention period filled with a blank screen or visual distractors. In the blank retention condition, VWM capacity reached the average adult level (i.e., three objects) at 10 years of age. However, VWM capacity in the visual distractor condition did not reach the adult level until 12 years of age. Children with high VWM capacity in the visual distractor condition tended to show higher academic performance (Japanese language, arithmetic, science, and social studies) than those with low VWM capacity. These results suggest that the capacity of VWM matures earlier than the process of excluding visual distractors from VWM, and that both may be related to academic achievement in elementary school children.
Does oculomotor preparation have a functional role in social attention?
Daniel Smith and Emma Morgan
Observing a change in gaze direction triggers a reflexive shift of attention and appears to engage the eye-movement system. However, the functional relationship between social attention and oculomotor activation is unclear. One extremely influential hypothesis is that the preparation of an eye movement is necessary and sufficient for a shift of attention (the Premotor theory of attention; Rizzolatti et al., 1987, Neuropsychologia, 25, 31-40). In order to test this hypothesis for social attention, we examined gaze-cueing under conditions where the preparation of some eye movements was not possible. Contrary to the Premotor theory, we observed significant and robust gaze-cueing at locations to which observers could not prepare an eye movement. However, although gaze-cueing was unaffected by eye abduction overall, participants were much poorer at detecting changes that occurred in the temporal hemifield when the eye was abducted. This finding is consistent with previous reports that changes in posture can elicit attentional biases. These data demonstrate that motor preparation is functionally dissociated from social attention, and may be problematic for theories of social cognition that propose a link between the ability to make inferences about the intentions of others and the ability to activate the corresponding motor plan in one's own action system.
Preventing oculomotor preparation disrupts spatial but not visual or verbal working memory.
Keira Ball, David Pearson and Daniel Smith
We used the eye-abduction paradigm [Craighero, Nascimben & Fadiga, 2004, Current Biology, 14, 331-333] to separate the contributions of covert attention and oculomotor processes to working memory. Stimuli were presented wholly in the nasal or temporal hemifield, and the participant's dominant eye was either in the centre of its orbit (frontal condition) or abducted by 40° as a result of turning the head and body while maintaining central fixation (abducted condition). Spatial memory was assessed using either the Corsi blocks task or arrow span, and visual memory was assessed using the visual patterns task or a size-estimation task. Verbal memory was measured using digit span. We found a significant interaction between field of presentation and eye position in the Corsi blocks task, with eye abduction reducing spatial memory span for temporally presented stimuli. This effect was not seen in the arrow span task, or for visual and verbal memory. As abduction makes it physically impossible both to execute and to plan eye movements to locations further into the temporal hemifield, we conclude that the oculomotor system is involved in visuospatial working memory for specified locations, but not when directional information indirectly cues these locations.
The use of motion information in multiple object tracking
Markus Huff and Frank Papenmeier
In multiple-object tracking, participants track several moving target objects among identical distractor objects. Recently it was shown that the human visual system uses motion information for keeping track of targets: a texture on an object that moved in the opposite direction to the object impaired tracking performance. In this study, we examined the temporal interval over which motion information is integrated in dynamic scenes. In three multiple-object tracking experiments, we manipulated the texture motion on the objects: the texture moved either in the same direction, in the opposite direction, or alternated between same and opposite direction at varying interval lengths. We show that motion integration can occur over intervals as short as 100 ms. Further, we show that there is a linear relationship between the proportion of opposite motion and tracking performance: increasing the proportion of opposite motion within the alternating conditions decreased tracking performance. We suggest that texture motion might cause shifts in perceived object locations, thus influencing tracking performance.
Texture segregation and contour integration depend on right-hemisphere attention-related brain areas
Kathleen Vancleef, Johan Wagemans and Glyn Humphreys
Whether perceptual organization requires attention is still uncertain. Extinction patients, who have problems attending to a contralesional stimulus when two competing stimuli are presented, provide us with the opportunity to study the role of attention-related brain areas in the presence of intact low-level visual areas. Although we know that a wide range of perceptual grouping processes are unimpaired in these patients, texture segregation and contour integration are unexplored. In this study, four right and five left extinction patients, as well as twelve healthy controls, were presented with texture and contour stimuli consisting of oriented elements. We induced regularity in the stimuli by manipulating the element orientations, resulting in an implicit texture border or an explicit contour. Subjects had to discriminate curved from straight shapes without making eye movements while stimulus presentation time was varied according to a QUEST procedure. Results show that for both textures and contours, only the left extinction patients needed a longer presentation time to determine the shape of the border/contour on the contralesional side. These results indicate that texture segregation and contour integration are modulated by attention-related brain areas in the right hemisphere, such as the right temporo-parietal junction (TPJ), which is typically damaged in extinction.
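The QUEST procedure used to vary presentation time is a Bayesian adaptive staircase: after each trial the posterior over candidate thresholds is updated, and the next stimulus level is placed at the current estimate. A simplified sketch of that update follows; the Weibull parameters, the intensity grid, and the example trial data are hypothetical, not those of the study.

```python
import numpy as np

def weibull_p(intensity, threshold, slope=3.5, guess=0.5, lapse=0.01):
    # Psychometric function: probability of a correct response at a given
    # stimulus intensity (log units) for a candidate threshold.
    return guess + (1 - guess - lapse) * (1 - np.exp(-10 ** (slope * (intensity - threshold))))

def quest_update(prior, thresholds, test_intensity, correct):
    # Bayesian update of the posterior over candidate thresholds.
    p = weibull_p(test_intensity, thresholds)
    likelihood = p if correct else 1 - p
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Candidate thresholds (log presentation time, illustrative) and a flat prior.
grid = np.linspace(-2, 2, 401)
posterior = np.ones_like(grid) / grid.size

# After each (intensity, correct?) trial, update the posterior;
# the next trial would be placed at the posterior-mean estimate.
for intensity, correct in [(0.0, True), (-0.5, False), (-0.2, True)]:
    posterior = quest_update(posterior, grid, intensity, correct)
estimate = grid @ posterior  # posterior-mean threshold estimate
```

Placing each trial near the current threshold estimate is what makes the procedure efficient: most presentations are informative about the observer's threshold.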
Configural Effects on Positional Priming of Pop-out
Ahu Gökce and Thomas Geyer
This study investigated facilitatory and inhibitory positional priming using a variant of Maljkovic and Nakayama's priming of pop-out task (Maljkovic & Nakayama, 1996, Perception & Psychophysics, 58(7), 977-991). In three experiments, the singleton target and the distractors could appear within variable (illusory) configurations (triangle, square, etc.) across trials. This manipulation was intended to disentangle the relative contributions of configural (i.e., a leftward-pointing triangle followed by a rightward-pointing triangle display) and categorical (i.e., a diamond followed by a leftward-pointing triangle display) information to positional priming. The results showed significant facilitatory and inhibitory priming effects. However, while facilitation was contingent on configural information, inhibition relied on the repetition of item categories across trials. These results suggest that facilitatory and inhibitory priming are distinct phenomena (Finke et al., 2009, Psychological Research, 73, 177-185) and that positional memory traces include subtle information about the arrangement of the items.
Interactions between top-down colour and bottom-up luminance signals during sustained visual attention
Jasna Martinovic, Justyna Mordal and Soren Andersen
We examined the interplay between bottom-up luminance contrast and top-down colour-selection biases in sustained visual attention. This EEG experiment consisted of an S-(L+M) block, with bluish and yellowish dots, and an L-M block, with reddish and greenish dots. Two fully overlapping, flickering random dot kinematograms (RDKs) were presented, with the dots being either at the same or at different luminance levels, both brighter than the background. On each trial, participants were colour-cued to attend to one RDK in order to detect brief coherent-motion targets, whilst ignoring any events in the unattended RDK. Performance was lowest for bluish dots. Reaction-time differences between colours were observed at low target luminance levels only, with bluish being the slowest. Furthermore, steady-state visual evoked potential (SSVEP) amplitude for bluish did not depend on target luminance levels, while for the other colours amplitude was higher at low levels of target luminance, following a pattern similar to the reaction-time differences. We conclude that feature selection is equally effective for concurrent luminance and L-M inputs and S-cone decrements, with S-cone increments being selected less effectively. The neural basis of the observed colour-luminance interactions is likely to reside at least partly in early visual areas that receive low-level chromatic inputs.
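Frequency tagging of the kind used here lets two fully overlapping RDKs be analysed separately: each flickers at its own frequency, and the SSVEP amplitude driven by each stimulus is read out from the Fourier spectrum at that frequency. A minimal single-channel sketch, with a hypothetical sampling rate, tagging frequencies, and synthetic signal standing in for real EEG:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, tag_freq):
    # Amplitude at the tagging frequency, taken from the Fourier spectrum
    # of a single-channel epoch (Hann window to reduce spectral leakage).
    spectrum = np.fft.rfft(eeg * np.hanning(eeg.size))
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - tag_freq))
    return 2 * np.abs(spectrum[bin_idx]) / eeg.size

# Synthetic 2-s epoch containing two tagged responses of different strength.
fs = 500
t = np.arange(0, 2, 1 / fs)
signal = 2.0 * np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
a12 = ssvep_amplitude(signal, fs, 12)
a15 = ssvep_amplitude(signal, fs, 15)
```

Comparing the amplitude at each tagging frequency across attention conditions is what allows stimulus-specific processing of the attended and unattended RDK to be tracked concurrently.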
Previously fixated visual features improve scene recognition
Christian Valuch and Ulrich Ansorge
During examination of a scene, only a limited number of visual features is fixated. How do such fixations help in recognizing familiar scenes? In our eye-tracking study, participants first viewed a series of photographs of natural scenes. Contingent on the individual fixation pattern of each participant, two classes of smaller cutouts were extracted from each of the viewed photographs: the fixated/old cutout showed the region of longest fixation, and the control/old cutout showed a region that was not fixated but contained salient low-level feature contrasts. Subsequently, participants saw three types of trials (randomly intermixed): (1) trials with fixated/old cutouts, (2) trials with control/old cutouts, and (3) trials with new cutouts (the latter from new, hitherto unpresented photographs). All cutouts were shown at screen center. The task was to decide rapidly and accurately whether a cutout was from an old or a new photograph. Reaction times were significantly lower with fixated/old cutouts than with control/old cutouts and new cutouts. Moreover, recognition accuracy was at chance level for control/old cutouts and above chance in all other conditions. Our results point to the significance of reorienting attention and gaze to previously fixated visual features during the successful recognition of natural scenes.
Is inhibition of return reset by a subsequent search in the same display?
Margit Höfler, Iain D. Gilchrist and Christof Körner
Inhibition of return (IOR) discourages the re-inspection of recently inspected items. When a search is finished and immediately followed by a subsequent search in the same display, IOR is reset at the end of the previous search [Höfler et al, 2011, Attention, Perception, & Psychophysics, 73(5), 1385-1397]. However, other researchers have demonstrated that IOR is still functioning after a single search has finished [e.g. Klein, 1988, Nature, 334, 430-431]. Here we investigated whether it is the start of the subsequent search that resets IOR once the previous search has finished. To this end, participants had to search the same display twice consecutively while their eye movements were monitored. Immediately after the end of the first and after the end of the second search, we probed one of the items; the probed item either had or had not been previously inspected. Saccadic latencies to the probes were used to measure IOR. Again, IOR was reset at the end of the previous search. However, we also found IOR to be reset after the end of the subsequent search (i.e., when no further search followed). This suggests that the start of a subsequent search is not responsible for the resetting of IOR.
Electrophysiological correlates of multiple object processing in the absence of awareness
Silvia Pagano and Veronica Mazza
Representing multiple objects simultaneously is fundamental to interacting efficiently with the environment. This ability requires at least two mechanisms. A first, attentional mechanism individuates a limited number of elements and produces coarse object representations by attaching features to indexed locations. A second, working memory (WM)-related mechanism encodes objects in greater detail, leading to full object representations. This electrophysiological study investigated whether these two stages underlying multiple-object processing require awareness to operate. We asked participants to enumerate a variable number of targets (0-3) presented among distractors while we recorded two neural markers, the N2pc and CDA, likely associated with individuation and WM, respectively. On target-present trials, one target was surrounded by a four-dot configuration that offset together with the stimulus (common offset) or not (delayed offset). Participants' accuracy was lower on delayed-offset trials, indicating the occurrence of a reliable masking effect. ERP results showed that the amplitude of the N2pc increased as a function of target numerosity on both delayed- and common-offset trials, whereas such modulation was present for the CDA only on common-offset trials. The results indicate that while individuation can operate with reduced awareness, WM-related processes cannot, and suggest that awareness is progressively required to build a complete representation of multiple objects.
Pointing to the temporal modulation of attentional effects on face categorisation
Genevieve Quek and Matthew Finkbeiner
A range of experimental paradigms and clinical case studies have now demonstrated the unique resilience of faces to attentional modulation. The present study sought to examine the effect of tightly controlled spatial and temporal attention on the processing of masked face stimuli. Using a sensitive continuous measure, reaching trajectories, we have shown that masked faces produce priming irrespective of how well attention is focussed in space or time. Nevertheless, by examining reaching responses as a function of target viewing time, we have demonstrated for the first time that the timecourse of masked priming is subject to modulation by both spatial and temporal attention. When attention is optimally focussed, subjects need to view the target for a shorter length of time to produce reliable priming.
Distractor processing in serial visual search: Evidence from fixation-related potentials
Christof Körner, Verena Braunstein, Matthias Stangl, Alois Schlögl, Christa Neuper and Anja Ischebeck
The search for a target in a complex environment is an everyday visual behavior that stops on finding the target. When we search for two identical targets, however, we have to continue the search after finding the first target and memorize its location. We had participants perform a multiple-target search and measured eye movements and EEG simultaneously. We used fixation-related potentials (FRPs) to investigate the neural correlates of different stages of the search. Having found the first target influenced subsequent distractor processing. Compared to distractor fixations before the first target fixation, a negative shift was observed for three subsequent distractor fixations. This result suggests that processing a target in continued search modulates the brain's response, either transiently by reflecting temporary working memory processes or permanently by reflecting working memory retention.
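Fixation-related potentials are obtained by cutting the continuous EEG into epochs time-locked to fixation onsets and averaging them. A minimal single-channel sketch follows; the epoch window, baseline interval, and function name are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def fixation_related_potential(eeg, fs, fixation_onsets, tmin=-0.1, tmax=0.4):
    # Cut fixation-locked epochs out of continuous EEG (1-D array) and
    # average them, baseline-corrected to the pre-fixation interval.
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in fixation_onsets:          # onsets in seconds
        i = int(onset * fs)
        if i - pre < 0 or i + post > eeg.size:
            continue                       # skip fixations near recording edges
        epoch = eeg[i - pre:i + post].astype(float)
        epoch -= epoch[:pre].mean()        # baseline correction
        epochs.append(epoch)
    return np.mean(epochs, axis=0)
```

Averaging separately over distractor fixations before versus after the first target fixation, as in the study, would then reveal shifts such as the reported post-target negativity.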
Fruitful search: Top-down contingent capture extends to colour-variegated stimuli
Nils Heise and Ulrich Ansorge
In the past, highly-controlled visual search paradigms used monochromatic stimuli to confirm top-down contingent capture of attention by colour (e.g. Folk & Remington, 1998). These studies lack one critical aspect of everyday colour search: colour variegation. This could be crucial because colour-variegated targets cover a larger colour spectrum and thus exhibit potentially more overlap with irrelevant colour distractors. In addition, top-down search settings for colour-variegated stimuli could be more demanding. As a consequence, top-down contingent capture could be restricted to artificial monochromatic stimuli. To study colour capture under more natural conditions, we used photographs of real fruits/vegetables as colour-variegated stimuli. In Experiment 1, we nonetheless found evidence for top-down contingent capture of attention by colour-variegated stimuli. These results could have partly reflected response interference. This was ruled out in Experiment 2. Together, results demonstrated that top-down contingent capture extends to colour-variegated stimuli.
Strategic scanning in visual search: Implications for the measurement of attentional bias
Jillian Hobson and Stephen Butler
Several theories of addiction posit that drug-related stimuli, through processes of classical conditioning, are able to elicit physiological and subjective craving (for review see Franken, 2003, Progress in Neuro-Psychopharmacology & Biological Psychiatry, 27, 563-579). Attentional bias to drug-related stimuli is argued to be a key mechanism in this process; however, our understanding of its role and mechanisms remains unclear, possibly due to task limitations. The present study measured eye movements whilst participants completed a flicker change blindness task employing three conditions of grid-like displays, which differed in degree of structural regularity, with an alcohol-related and a neutral change competing for detection. The results demonstrated no significant difference in behavioural or eye-movement measures of attention to alcohol-related stimuli between non-drinkers, light drinkers and heavy social drinkers, suggesting that social drinkers do not display an alcohol-related attentional bias. However, there was evidence of strategic scanning in all conditions, demonstrated by the frequency of saccade directions, the extent of which was modulated by the level of regularity of the grid's structure. Therefore, the ability of the flicker change blindness task to accurately measure attentional bias may be limited, highlighting important implications for future attentional bias research.
Emotional Inconsistencies Capture Attention Irrespective of Valence
Ido Amihai, Fook-Kee Chua and Shih-Cheng Yen
Previous studies have shown that objects that are inconsistent with the semantic gist or physical structure of a scene lead to longer fixation durations (Vo and Henderson, 2009, Journal of Vision, 9, 1-15). Stimuli with negative emotions have also been shown to capture attention faster than positive or neutral stimuli (Yang, Zald, and Blake, 2007, Emotion, 7, 882-886). In the current study we investigated both semantic and structural scene inconsistencies in the context of emotion. We performed eye tracking experiments in which subjects were presented with natural images that contained a face that was inconsistent with the emotional gist of the scene or with its physical structure. We found that emotionally and structurally inconsistent faces were viewed for a longer period of time relative to consistent faces, with mean differences of 797 ± 109 ms (two-sample t-test, p<0.001) and 938 ± 161 ms (p < 0.001) for the emotional and structural conditions, respectively. Moreover, the emotional valence of the face did not influence this effect, and positive and negative inconsistent faces were equally fixated (2,522.5 ± 164 ms vs. 2,522.9 ± 179 ms, p > 0.99). Acknowledgement: A*Star Human Factors Engineering Thematic Strategic Research Program
Colour as a cue of sexual attractiveness and attentional preference in Japanese Macaques
Lena Sophie Pflüger, Christian Valuch, Ulrich Ansorge and Bernard Wallner
Japanese Macaques and humans possess highly comparable colour vision which could serve similar functions, e.g. the interpretation of socio-sexual signals in hairless skin colouration. We investigated the significance of facial reddishness as a cue to sexual attractiveness and attentional preferences in semi free-ranging Japanese macaques during their reproductive period. We presented two pictures of the same female face on two monitors to different male monkeys individually (n=22). The presented faces were carefully manipulated in a natural range of reddish facial colours. We analyzed male behaviour (gaze duration/frequency, and number of approaches towards one versus the other monitor) as a function of the presented stimuli. Male behaviour differed between individuals but in a subset of the sample increased attentional behaviour towards faces with higher shades of facial red was clearly observable. We further incorporated the social dominance ranks as well as endocrinological data into our analyses to clarify the degree to which these variables moderate attention towards the socio-sexual stimuli. Our results add to the knowledge about the adaptive functions of colour vision and are of relevance for research on colour perception in humans and non-human primates.
Cognitive Mechanisms Underlying Attentional Blink and Creative Reasoning
Saskia Jaarsveld, Priyanka Srivastava, Marisete Welter and Thomas Lachmann
Attentional blink (AB) magnitude depends on individual differences in working memory operation span (OSPAN) over and above the effects of fluid intelligence. Fluid intelligence, measured with the Standard or Advanced Progressive Matrices (SPM/APM), has been shown to correlate positively with creative reasoning, defined as the ability to shift between divergent and convergent production. In the Creative Reasoning Task (CRT), participants create a puzzle in a 3x3 matrix, similar to Raven's matrices. It is assumed that performance on both the AB and CRT tasks, although evolving on different time scales, might be based on a general cognitive flexibility. Two experiments were conducted (N1=50, with SPM; N2=50, with APM) employing the AB task, OSPAN, a test of creative thinking (TSD), and the CRT. Results showed no significant correlation (p>.05) between AB and creative reasoning, suggesting possibly distinct cognitive mechanisms underlying these two processes. Specifically, based on the current results, it can be speculated that the AB involves more narrowly focused attention, whereas the CRT involves relatively broader attention. Follow-up experiments are currently being conducted to identify the specific cognitive mechanisms underlying the AB and creative reasoning.
Visual search in barn owls: orientation pop-out?
Julius Orlowski, Torsten Stemmler and Hermann Wagner
Investigations of the mechanisms underlying visual attention have a long history in human and primate research. We detect a differently oriented object amongst a set of distracters independently of the number of distracters; this is called pop-out. We were interested in whether other vertebrates have similar capabilities and chose the barn owl as a model, because in this bird gaze can be tracked with a head-mounted camera, the OwlCam. We designed two experiments. In the first, barn owls were confronted with patterns on the floor of a room containing one odd target amongst several identical distracters, and the gaze path and fixations were examined. In this setting the barn owls looked faster, more often and longer at the odd object than at a randomly chosen distracter. This experiment demonstrated visual search capabilities, but it does not allow reaction time to be determined. In the second approach, we are currently training barn owls to observe two patterns displayed at different locations on a monitor; the time the owl needs to press a switch signifying the location of the target will be measured. If this time were independent of the number of distracters, this would indicate that pop-out exists in barn owls.
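The planned reaction-time test follows the standard logic of visual search: reaction time is regressed on the number of distracters, and a slope near zero (flat search function) indicates parallel, pop-out search, whereas a clearly positive slope indicates serial search. A minimal sketch of that slope computation, with hypothetical example data:

```python
import numpy as np

def search_slope(set_sizes, rts):
    # Least-squares slope of mean reaction time over display set size
    # (ms per item). A slope near zero indicates parallel 'pop-out' search.
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return slope

# Illustrative data: a flat (pop-out-like) and a steep (serial-like) pattern.
sizes = [4, 8, 16]
flat_slope = search_slope(sizes, [500.0, 502.0, 501.0])
serial_slope = search_slope(sizes, [500.0, 600.0, 800.0])
```

The same computation applies regardless of species; only the response (here, a switch press) differs from the keypresses of human search experiments.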
Attentional gain control and competitive interactions influence visual stimulus processing independently
Christian Keitel, Søren K. Andersen, Cliodhna Quigley and Matthias M. Müller
We tested two assumptions of a biased competition account of human visual attention: 1) An attended stimulus is released from a mutually suppressive competition with concurrently presented stimuli and 2) an attended stimulus experiences greater gain in the presence of competing stimuli than when it is presented alone. To this end, we recorded frequency-tagged potentials elicited in early visual cortex that index stimulus-specific processing. We contrasted the processing of a given stimulus when its location was attended or unattended and in presence or absence of a nearby competing stimulus. At variance with previous findings, competition similarly suppressed processing of attended and unattended stimuli. Moreover, the magnitude of attentional gain was comparable in the presence or the absence of competing stimuli. We conclude that visuospatial selective attention does not per se modulate mutual suppression between stimuli but instead acts as a signal gain, which biases processing toward attended stimuli independent of competition.
Optimizing Attention Deployment in Object-Based Attention: The Role of Cue Validity
Wei-Lun Chou and Su-Ling Yeh
Using the classic Egly, Driver and Rafal (1994) two-rectangle paradigm, we examined the effect of cue validity on object-based and location-based attention. We manipulated the likelihood of a target appearing on the same object versus a different object in the invalid-cue conditions: informative of location but not object (Experiment 1), informative of location and object (Experiment 2), and informative of object but not location (Experiment 3). The results indicated a spatial-cueing effect (i.e., shorter RT at the cued location than at the uncued location) and a same-object advantage (i.e., shorter RT for the cued object than for the uncued object) when the location-based and object-based cues, respectively, were informative (Experiments 1 and 3), and both a spatial-cueing effect and a same-object advantage when both kinds of cues were informative (Experiment 2). Unlike previous studies in which the two kinds of cues co-varied, this study differentiates the two, and the results obtained are inconsistent with both the spreading hypothesis and the prioritization hypothesis of object-based attention. As explained by our optimization hypothesis, we demonstrate here that the validity of the location cue is not the cause of the same-object advantage; object-based cue validity, the probability that the target will appear on the cued object as a whole, plays the decisive role in object-based attention.
N2pc attentional capture by threatening schematic faces
Nicolas Burra and Dirk Kerzel
It has been reported that attention detects highly relevant stimuli in an automatic manner. Recent studies measuring event-related potentials have demonstrated that fearful faces capture attention. However, other studies demonstrate that threatening, angry faces can also facilitate the allocation of attention. Moreover, behavioral interference effects have been observed when these facial expressions are task-irrelevant. We suggest that expression-related interference can be measured electrophysiologically using the N2pc, an index of the attentional selection of lateralized stimuli. We used a variant of the 'additional singleton' paradigm in which the main task was not related to facial expressions and the irrelevant distracters were angry or happy faces. Our data demonstrate that, independently of the task, emotional faces increased the face-related N170 component compared to neutral faces and, crucially, that only irrelevant angry faces elicited an N2pc while happy faces did not. Different perspectives and control experiments are discussed.
Predicting Visual Perception: An ERP Approach
Hannah Pincham and Denes Szucs
Neuroscientific explanations of successful visual perception typically focus on the neural events elicited by stimuli. However, there is evidence to suggest that the ongoing state of the brain can predict whether or not a visual stimulus will be perceived. Here, we used the attentional blink paradigm in combination with event-related brain potentials to examine whether neural activity before a stimulus can determine successful stimulus detection. Participants were required to detect 2 target letters among digit distractors while their brain activity was being recorded. Trials were classified based on whether the second critical target (T2) was detected. We found that T2 detection was predetermined by brain activity prior to the onset of the stimulation stream. Specifically, T2-detected trials were predicted by a frontocentral positive-going deflection that started more than 200 ms before the stream began. Accurate T2 detection was also accompanied by enhanced poststimulus neural activity, as reflected by a larger P3b component. Furthermore, prestimulus and poststimulus markers of T2 detection were highly correlated with one another. The results suggest that conscious visual perception is shaped by potentially random fluctuations in neural activity.
How old are you? The effect of face age on gaze-following behaviour
Francesca Ciardo, Angela Rossetti, Rossana Actis-Grosso and Paola Ricciardelli
Gaze following is considered a building block of social interaction. Several studies have shown that the interaction between two people is influenced by similarity (i.e., perceived overlap between two individuals). Age is known to be an important similarity factor [Preston and de Waal, 2002, Behavioral and Brain Sciences, 25, 1-20]. Using an oculomotor task, we tested whether the degree of perceived similarity, as indicated by the other person's age, can modulate gaze following. Distracting faces of four different age ranges (8-10; 18-25; 35-45; and over 70 years old) gazing left or right were presented to university students. Their task was to ignore the distractor while making a saccade towards one of two horizontal peripheral targets, depending on the color of an instruction cue. The distracting gaze could be congruent or incongruent with the instructed direction. The results show that participants made more errors in incongruent than in congruent trials (p = 0.014). Interestingly, gaze-following errors (errors in the direction of the distracting gaze in incongruent trials) increased when the distractor's age (18-25 years old) matched the age of the participants, indicating that similarity in terms of membership of a perceived age group can modulate gaze-following behaviour.
Task-dependent crossmodal processing of combined visuo-auditory conjunction oddballs in a mixed sequence of visual and auditory stimuli
Evelyn B. N. Friedel, Michael Bach and Sven P. Heinrich
Can attention be captured by oddball stimuli that are defined through the conjunction of visual and auditory features and embedded in a random sequence of non-conjunct stimuli? We used the P300 of the event-related potential as an index of the allocation of attention. Four different conditions were tested. (1) Rare conjunction stimuli embedded in a sequence of non-conjunct auditory and visual stimuli, with auditory stimuli (conjunct or not) attended. (2) The same stimulus sequence as before, but with conjunction stimuli attended. (3) Rare auditory stimuli (non-conjunct) in a sequence of visual stimuli (non-conjunct), with auditory stimuli attended. (4) Rare visual stimuli (non-conjunct) in a sequence of auditory stimuli (non-conjunct), with auditory stimuli attended. In both non-conjunction sequences, the rare stimuli elicited similar P300 responses, despite the visual stimuli not being attended. No P300 was found with conjunction stimuli when auditory stimuli were attended. With the conjunction stimuli attended, a P300 was present, but small and delayed compared to the non-conjunction conditions. The results suggest that visuo-auditory conjunctions are not processed preattentively, and that even when task-relevant they require additional processing that delays and disperses the allocation of attention. As the non-conjunction sequences show, this is not due to the visual modality being unattended.
Performance related visual attention and awareness of social evaluative cues in social anxiety.
Mel Mckendrick, Madeleine Grealy and Stephen Butler
Schultz and Heimberg [2008, Clinical Psychology Review, 28(7), 1206-1221] reviewed cognitive biases in social anxiety, concluding that without direct measures in ecologically valid paradigms, little can be ascertained about attentional focus. In response, we tracked eye movements during a social performance task. Participants were told that a 'live web-linked interview panel' was evaluating their performance. Despite no differences in visual attention early in the task, both trait anxiety and unexpected situational anxiety in more confident individuals appeared to increase awareness of social cues. As the task progressed, more visual attention was paid to emotional than to neutral social cues. The low-anxiety group were aware of more negative cues, whereas the high-anxiety group were equally aware of all behaviours. Afterwards, highly anxious participants rated private and public evaluation more negatively than low-anxiety individuals did. Thus, situational anxiety in the early performance stages may disrupt preferential allocation of attention to emotional faces but heighten awareness of social cues. However, as attention to emotional faces increases, less socially anxious individuals may be better able to discriminate a genuine emotional threat than those with high social anxiety.
Just Passing Through: Lack of IOR in Planned Saccade Sequences
W. Joseph MacInnes, Hannah Krueger and Amelia Hunt
Responses tend to be slower to previously attended spatial locations. This is known as Inhibition of Return (IOR). We compared IOR for intermediate locations along planned and unplanned saccade sequences. Sequences of two saccades were instructed using a colour-based verbal cue. In the planned condition, all the saccade target colours were visible before the saccade sequence began. In the unplanned condition, the second colour was not revealed until after the first saccade had been initiated. Following the sequence, a probe was presented at the first saccade target location or at a control location. With saccadic responses to probes, IOR was observed only when saccade sequences were unplanned; IOR was absent for planned saccade sequences. IOR was also predominantly observed when probes appeared soon after the saccade sequence, and was absent later. When we repeated the experiment with manual responses to probes, IOR was absent in both planned and unplanned sequences. The results show that intermediate locations along a pre-planned sequence are not inhibited. When the sequence cannot be planned in advance, intermediate locations are inhibited, but this inhibition appears to be transient and stronger for saccadic responses, suggesting a motor rather than an attentional locus.
Qualitative differences between attention capture by conscious and unconscious cues
Isabella Fuchs, Jan Theeuwes and Ulrich Ansorge
Classical attention theory assumes exogenous capture of attention by unconscious cues and endogenous capture by conscious cues. Here we tested the classical view by varying cue visibility. In Experiment 1, we demonstrate that unpredictive cues (equally likely at target position and away from the target) lead to capture. With conscious cues, attention capture was restricted to endogenously fitting cues: If the participants searched for black targets, black but not white cues captured attention, and this pattern reversed if white targets were searched for. By contrast, with unconscious cues, capture was equal for searched and unsearched contrast-cues. In Experiment 2, we used antipredictive cues: After 75% of the right cues, the target was on the left and after 75% of the left cues the target was on the right. Here, participants endogenously directed attention towards the opposite side of the conscious but not of the unconscious cue. Together, the findings support the classical view.
Visual word recognition in Latvian children with and without reading difficulties
Evita Kassaliete, Elina Megne, Ivars Lacis and Sergejs Fomins
In Latvia, approximately 15-20% of school-aged children have reading difficulties. Latvian is a complex language, and the causes of poor reading in it have not been studied. Many neural processes participate in text decoding during reading. The aim of the study was to determine differences in visual word recognition between children with and without reading difficulties. Fifty-two children from Grade 3 (n=22) and Grade 4 (n=30) took part in the study. Using a modified one-minute reading test, the children were divided into two groups: with and without reading difficulties. The stimulus set for word recognition contained 150 words, with word length varying from four to ten letters. Each word was shown on a computer screen for 500 ms, and each word length was shown 15 times; letter size corresponded to 6 cycles/degree. Responses were given verbally, and correct and incorrect answers were recorded. The numbers of correctly named words differed significantly (p<0.05) between children with and without reading difficulties, in both grades and for all word lengths. The study confirms that children with reduced reading speed use a letter-by-letter reading pattern, whereas normally reading children use parallel letter activation. Word recognition and processing speed improve with age and lexical experience.
Averaging of simultaneous instances of familiar and unfamiliar faces
Alexander Marchant, Xandra Van Montfort, Jan De Fockert and Rob Jenkins
There is growing evidence that faces are represented in terms of a summary description based on shared features of multiple, previously seen faces. This phenomenon has been reported for multiple faces that have been processed either sequentially [e.g. Burton, Jenkins, Hancock, & White, 2005, Cognitive Psychology, 51, 256-284] or simultaneously [e.g. De Fockert, & Wolfenstein, 2009, Quarterly Journal of Experimental Psychology, 62(9), 1716-1722]. Here we investigate the relationship between sequential and simultaneous averaging, by measuring averaging of simultaneously presented faces that either have (familiar faces) or have not (unfamiliar faces) also received sequential processing prior to the experiment. Participants were asked to match a single test face with one of a set of four previously seen photographs of the same person. A morphed average of all four set members and a previously seen photograph were equally likely to be endorsed as having been present in the previous set. Interestingly, a morphed average of four previously unseen photographs was endorsed significantly more often than a single previously unseen photograph. This effect was the same regardless of the familiarity of the face to the observer.
The time course of cueing: When is it U-shaped and when not?
Anna Wilschut, Jan Theeuwes and Christian N. L. Olivers
Performance in spatial cueing tasks is often characterized by a rapid attentional enhancement with increasing cue-target SOA. A recent study [Wilschut et al., 2011, PLoS ONE, 6, e27661] found that this enhancement function also applies when the cue and the target are presented invariably at a single central location, suggesting a universal cueing time course. However, using a very similar central task, others have found a rather different, U-shaped pattern, reminiscent of an attentional blink [Nieuwenstein et al., 2009, JoV, 9,1-14]. The present study varied the properties of the cue-target pair in order to investigate the mechanisms underlying the different time functions. In several experiments, accuracy was generally found to improve with the increasing cue-target SOA. The level of performance at the shortest cue-target intervals (33-83 ms), however, depended on the relative strength of the cue and the target, akin to what has been found in visual masking studies. We suggest that the early part of the attentional cueing function is modified by stimulus-based visual interactions, and that these together with later attentional effects determine whether the cueing time course is U-shaped or monotonic.
The effects of cross-sensory attentional demand on subitizing and on mapping number onto space
Giovanni Anobile, Marco Turi, Guido Marco Cicchini and David C. Burr
Various aspects of numerosity judgments, especially subitizing and the mapping of number onto space, depend strongly on attentional resources. We use a dual-task paradigm to investigate the effects of cross-sensory attentional demands on visual subitizing and spatial mapping. The results show that subitizing is strongly dependent on attentional resources, far more so than is estimation of higher numerosities. But unlike many other sensory tasks, visual subitizing is equally affected by concurrent attentionally demanding auditory and tactile tasks as it is by visual tasks, suggesting that subitizing may be amodal. Mapping number onto space was also strongly affected by attention, but only when the dual-task was in the visual modality. The non-linearities in numberline mapping under attentional load are well explained by a Bayesian model of central tendency.
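The central-tendency behaviour described in this abstract can be illustrated with a minimal reliability-weighted sketch. This is only an illustration, assuming log-space encoding and Gaussian noise; the function name and parameter values are hypothetical, not taken from the authors' model:

```python
import numpy as np

def map_number(n, prior_mean=20.0, sigma_sensory=0.15, sigma_prior=0.5):
    """Bayesian central-tendency estimate of a number's mapped position.

    Sensory evidence and a prior over recently seen numbers are combined
    in log space, each weighted by its reliability (inverse variance).
    Under attentional load, sigma_sensory grows, so estimates regress
    toward the prior mean, producing a non-linear numberline mapping.
    """
    w = (1 / sigma_sensory**2) / (1 / sigma_sensory**2 + 1 / sigma_prior**2)
    log_estimate = w * np.log(n) + (1 - w) * np.log(prior_mean)
    return np.exp(log_estimate)

# Low load: the estimate stays close to the true number.
print(map_number(5, sigma_sensory=0.15))
# High load (noisier sensory evidence): the estimate is pulled toward the mean.
print(map_number(5, sigma_sensory=0.6))
```

Increasing `sigma_sensory`, standing in here for attentional load, pulls small numbers upward and large numbers downward, which is the regression toward the mean that a central-tendency model predicts.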
On the perception of natural scenes: proto-objects versus spatial locations
Victoria Yanulevskaya, Jasper Uijlings and Nicu Sebe
Do people attend to spatial locations or to discrete objects while looking around? Most state-of-the-art models of visual attention are based on spatial-based attention theory, which states that every time we move our eyes, information from a circular region around the gaze is processed, and the shape of this region is assumed to be fixed. Alternatively, object-based attention theory argues that people perceive the world as a collection of proto-objects: coherent image regions that, owing to the visual coherence of most objects in the world, roughly correspond to a part of an object, a complete object, or a group of objects. Within this theory, the shape of the attended region is influenced by the visual structure of the image and coincides with proto-objects. We propose a visual attention model based on object-based attention theory, in which we automatically extract proto-objects using hierarchical image segmentation. We compare our object-based model with a state-of-the-art spatial-based attention model on the task of eye-fixation prediction. Our results demonstrate that by distributing saliency within proto-objects, we efficiently highlight the entire image regions that attract most of the attention. In contrast, the spatial-based method generally highlights only high-contrast details, which mostly coincide with object borders.
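The key step of such an object-based model, distributing saliency within proto-objects rather than leaving it concentrated at high-contrast borders, can be sketched as follows. This is a toy sketch assuming a per-pixel saliency map and a precomputed segmentation; the function name is ours, not the authors':

```python
import numpy as np

def proto_object_saliency(saliency, labels):
    """Spread saliency uniformly within proto-objects.

    saliency: 2-D array of per-pixel saliency values.
    labels:   2-D integer array of the same shape; each integer marks one
              proto-object (e.g. from a hierarchical image segmentation).
    Each pixel receives the mean saliency of its region, so whole image
    regions are highlighted instead of only high-contrast details.
    """
    out = np.empty_like(saliency, dtype=float)
    for region in np.unique(labels):
        mask = labels == region
        out[mask] = saliency[mask].mean()
    return out

# Toy example: a bright border pixel inside region 1 is averaged over it,
# so both pixels of region 1 receive the region mean (0.5).
sal = np.array([[0.0, 1.0], [0.0, 0.0]])
seg = np.array([[1, 1], [2, 2]])
print(proto_object_saliency(sal, seg))
```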
A study to evaluate peripheral visual perception
Ieva Timrote, Gunta Krumina, Tatjana Pladere and Mara Skribe
There are disorders linked to the M and P visual pathways that can be remediated up to a certain age [Parrish et al, 2005, Vision Research, 45, 827-837]. For this reason, we are developing a method to evaluate peripheral visual perception. An individual has to count occurrences of a specific letter in a set of letters presented in the central visual field, on a white background, in lines, or in squares, with no noise, while five-by-five or ten-by-ten arrays of black dots appear in the peripheral visual field. An additional peripheral stimulus of varying size appears during the task. The time needed to accomplish the counting differed significantly for two of the individuals (p<0.05): it took more time to count letters on the plain white central background than letters in squares or in lines. A black stimulus had to be 4 cm in size, and a green, red or blue stimulus 2 cm, for a figure to be distinguished in the peripheral visual field. Three individuals did not notice any of the peripheral stimuli. Therefore, our method can be used to evaluate peripheral perception and thus to look for problems in the visual pathways, although we have to make some improvements to adapt these tests for children.
The effects of temporal cueing on metacontrast masking
Simon Mota and Maximilian Bruchmann
Temporal information can induce expectations about when a given event will occur; expectations that facilitate perception by selectively directing attentional resources to discrete moments in time (the temporal orienting effect). A temporal analog of the standard spatial cueing paradigm was used to examine the effects of temporal attention on the perception of briefly presented visual stimuli in a metacontrast masking paradigm. Subjects rated the visibility of a target stimulus that was followed by a mask after various stimulus onset asynchronies. In two separate runs, subjects were made to expect the target either after 100 ms or after 1 s. In some instances, however, the subjects' expectations were violated by presenting the target earlier or later than expected. We observed cueing effects in the form of higher visibility when the target appeared at the expected point in time compared to when it appeared too early. Unlike spatial cueing effects on metacontrast masking reported in the literature, these effects were not restricted to the late branch of the masking function, but enhanced visibility over its complete range. These results suggest that the neural subsystems involved in temporal attention differ from those involved in spatial attention.
Neural activity in category-selective regions is modulated by both subjective and physical disappearance
Andrea Loing, Rob van Lier, Arno Koning and Floris de Lange
Although it is well-known that early visual areas can contain stimulus representations in the absence of subjective awareness, it is more controversial whether high-level visual areas such as the fusiform face area (FFA) and parahippocampal place area (PPA) are driven by physical stimulus properties or by subjective perception. We examined whether physical and/or subjective disappearance of face and house stimuli modulated neural activity in FFA and PPA, using a contrast decrement technique. Nine participants took part in an fMRI experiment. On each trial, participants were shown a face or house stimulus, after which the stimulus was either removed (physical disappearance) or its contrast was reduced, leading to subjective disappearance of the stimulus in a proportion of trials (on average, 53%). Category-selective neural activity in FFA and PPA was lower during physical disappearance, in the context of equal subjective experience. It was also lower during subjective disappearance, in the context of equal physical stimulation. Together, these results demonstrate that neural activity in category-selective higher-order visual regions is not solely determined by the subjective experience of perceiving the category, but rather reflects sensory evidence for the category, based on both physical stimulus characteristics and subjective experience.
Eye movement behaviour to natural scenes as a function of sex and personality
Felix Mercer Moss, Roland Baddeley and Nishan Canagarajah
Women and men are different. As humans are highly visual animals, these differences should be reflected in the pattern of eye movements they make when interacting with the world. We examined the fixation distributions of 52 women and men while they viewed 80 natural images and found systematic differences in their spatial and temporal characteristics. The most striking of these was that, compared to men, women looked away from, and usually below, many objects of interest, particularly when primed with a threat-related task. We also found reliable gaze differences correlated with the images' semantic content, the observers' personality, and how the images were semantically evaluated. Information-theoretic techniques showed that many of these differences increased with viewing time. The effects reported are not small: the fixations made while viewing a still from a single action or romance film allow, on average, the classification of the sex of an observer with 64% accuracy. Our results indicate that while men and women may live in the same environment, what they see in this environment is reliably different. Our findings have important implications for both past and future eye movement research while confirming the significant role individual differences play in visual attention.
Do all negative images similarly retain attention? Time course of attentional disengagement from disgust- and fear-evoking stimuli.
Christel Devue, Johanna C. van Hooff, Paula E. Vieweg and Jan Theeuwes
While disgust and fear are both negative emotions, they are characterized by different physiology and action tendencies, which might in turn lead to different attentional biases. However, the potentially disgusting aspect of threatening stimuli has somewhat been neglected, which might contribute to discrepancies in the literature. The goal of this study was to examine whether fear- and disgust-evoking images produce different attentional disengagement patterns. We pre-selected IAPS images according to their disgusting, frightening, or neutral character and presented them as central cues while participants had to identify a target letter briefly appearing around them. To investigate the time course of disengagement from the central images, we used 4 different cue-target intervals (200, 500, 800 and 1100 ms). Reaction times were significantly longer with the disgust-evoking images than with the neutral and fear-evoking images at the 200 ms interval only. This suggests that disgust-related, but not fear-related, images hold participants' attention for longer. This might be related to the need to perform a more comprehensive risk assessment of disgust-evoking pictures. These results have important implications for future emotion-attention research, as they indicate that a more careful selection of stimulus materials, going beyond the dimensions of valence and arousal, is needed.
Letter crowding and the benefit of parafoveal preview during reading
Adjacent letters in peripheral vision induce crowding, which prevents efficient letter recognition. The region of effective vision (i.e., where crowding does not compromise letter identification) can be described by means of visual-span profiles (VSPs). VSP parameters have been shown to predict reading rate, suggesting that crowding limits processing beyond central vision and slows down reading. We investigated crowding as an effect influencing parafoveal preview benefit during normal reading. Eye movements of 58 subjects were recorded while they read single-line sentences in which preview of a word was denied until the eyes crossed an invisible boundary to the left of that word. Fixation durations revealed a substantial benefit when preview was available, but the size of this preview benefit did not correlate with the subjects' VSPs. Based on estimates of individual VSP accuracy at the trial level, linear mixed-model analyses confirmed that letter-identification performance at the preview location modulated fixation durations but did not interact with preview condition. Thus, crowding seems to affect processes different from those involved in parafoveal preview benefit, potentially at the level of visual word encoding. The accumulation of preview information for subsequent word recognition seems to be less affected, supporting the importance of cognitive processing in eye guidance during normal reading.
Local object gist by symmetries of lines and edges in V1
Javier Pinilla-Dutoit, David Lobato, Kasim Terzic and J.M.H. Du Buf
Global gist vision addresses entire scenes. The purpose of local gist is to prepare a spatial layout map before precise object recognition is achieved: which types of objects are roughly where in a scene. In the case of man-made objects, a repertoire of geometric shapes like rectangles and ellipses can often be applied [Martins et al., 2009, Perception, 36 ECVP Supplement, 41-42]. In the case of less geometric shapes, other object properties must be employed. We are therefore developing a model for extracting symmetries. This model exploits the multi-scale line and edge representation in area V1 [Rodrigues and du Buf, 2009, BioSystems, 95, 206-226]. It is a V2 model because larger line and edge fragments and angles are represented there. Sets of grouping cells are used to detect symmetric pairs of lines and edges with different angles. When applied at different scales, the combined results yield a sort of tree structure or skeleton, although not in the sense of the medial axis transform. This structure describes an object's main shape, with links to corners on the object's contour. Once trained on individual objects viewed against a homogeneous background, the model can be applied to spot the objects in complex scenes. [Projects: PEst-OE/EEI/LA0009/2011, NeFP7-ICT-2009-6 PN: 270247]
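The grouping-cell idea, pairing line or edge fragments that have mirrored positions and orientations, can be sketched in a few lines. This is a toy illustration of mirror symmetry about a vertical axis; representing fragments as (x, y, angle) tuples is our simplifying assumption, not the V1/V2 model itself:

```python
import math

def symmetric_pairs(edges, axis_x, tol=0.1):
    """Group edge pairs that are mirror-symmetric about the axis x = axis_x.

    edges: list of (x, y, theta) line/edge fragments, theta in radians.
    A pair is grouped, as a 'grouping cell' would respond, when the
    fragments lie at mirrored x positions, at similar y positions, and
    have mirrored orientations (theta versus pi - theta).
    """
    pairs = []
    for i, (x1, y1, t1) in enumerate(edges):
        for j, (x2, y2, t2) in enumerate(edges):
            if j <= i:
                continue
            mirrored_x = abs((x1 - axis_x) + (x2 - axis_x)) < tol
            similar_y = abs(y1 - y2) < tol
            mirrored_t = (abs(math.sin(t1) - math.sin(t2)) < tol
                          and abs(math.cos(t1) + math.cos(t2)) < tol)
            if mirrored_x and similar_y and mirrored_t:
                pairs.append((i, j))
    return pairs

# Example: fragments 0 and 1 are mirrored about x = 0; fragment 2 is not.
edges = [(-1.0, 0.0, 0.5), (1.0, 0.0, math.pi - 0.5), (0.0, 1.0, 1.0)]
print(symmetric_pairs(edges, axis_x=0.0))
```

Applying such pairing at multiple scales, and chaining the midpoints of the grouped pairs, is what would yield the skeleton-like structure the abstract describes.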
Spatial attention does not influence Object Substitution Masking
Michael Pilling, Angus Gellatly and Ioanis Argyropoulos
The distribution of spatial attention is deemed to be a critical variable in Object Substitution Masking (OSM: a phenomenon in which a brief target is rendered imperceptible by surrounding mask elements that trail target offset). For instance, Di Lollo and colleagues [Di Lollo, Enns, & Rensink, 2000, Competition for consciousness among visual events, Journal of Experimental Psychology: General, 129, 481-507] report that OSM is dramatically reduced when attention is spatially pre-cued to the target location (by presenting the mask elements before onset of the stimulus array). We review this and other claims that argue for a role of attention in OSM and present alternative accounts of the results. We further present three experiments in which we manipulate spatial attention using a cueing paradigm. All experiments find that valid pre-cueing of the target location increases the perceptibility of the target stimulus; however, in none of the experiments is there an interaction with mask duration. We conclude, on the basis of these and other results from our laboratory, that spatial attention is not a relevant factor in OSM. We discuss the implications of this for the understanding of OSM itself, and for theories of re-entrant processing and object updating.
Implicit Social Learning in Relation to Autistic-like Traits
Tjeerd Jellema, Tanja Nijboer and Matthew Hudson
We investigated whether the extent to which typically developed individuals possess autistic-like traits (measured using the Autism-spectrum Quotient, AQ) influences their ability to implicitly learn the social dispositions of others. In the learning phase of the experiment, participants repeatedly observed two different identities who displayed specific combinations of gaze direction and facial expression, such that they conveyed either a pro- or antisocial disposition toward the observer. Debriefing indicated that participants were not aware that they had learned something about these identities. The second phase of the experiment consisted of a gaze-cueing paradigm, in which the gaze directions of the two identities (displaying neutral expressions) were used to cue the appearance of peripheral targets. Participants made speeded responses to target appearance; gaze directions were non-predictive of target location. The low-AQ group (n = 50) discriminated between the two identities: they showed a smaller gaze-cueing effect for the antisocial than for the prosocial identity. In contrast, the high-AQ group (n = 48) showed equivalent gaze-cueing effects for both identities. The results first of all suggest that others' intentions and dispositions can be learned implicitly. Secondly, they suggest that this ability becomes impaired with increasing levels of autistic-like traits.
Spatio-temporal dynamics of visual processing in autism revealed by Attentional Masking
Luca Ronconi, Simone Gori, Enrico Giora, Milena Ruffino and Andrea Facoetti
Autism spectrum disorder (ASD) has been associated with detail-oriented perception and overselective attention. However, both clinical observations and experimental studies have highlighted inefficient visual selection under certain conditions. In order to understand this dissociation, we investigated the spatio-temporal dynamics of visual processing in children with ASD and IQ-matched typically developing (TD) controls, employing an Attentional Masking paradigm. Attentional Masking refers to a reduction in target identification when the target is followed by a second, irrelevant masking object at different degrees of proximity in space and time. We found that the performance of the ASD and TD groups did not differ when the masking object was displayed in the same position as the target. In contrast, when the masking object appeared in a lateral position with respect to the target, children with ASD showed deeper and more prolonged interference with target identification compared to the TD group. These findings help to explain the dissociation between over- and under-selectivity of visual processing in ASD, and can be interpreted in the light of the altered neural connectivity hypothesis and the reentrant theory of perception.
Introspection during visual search
Gabriel Reyes and Jérôme Sackur
Recent advances in the field of metacognition have shown that participants are introspectively aware of many different cognitive states, such as their confidence in a decision, their feeling of knowing the answer to a memory question, or their own internal decision time. Here we set out to expand the range of introspective knowledge put under experimental scrutiny by asking whether participants could introspectively differentiate between two types of visual searches. We designed a series of experiments in which we contrasted easy feature searches, where the target is expected to pop out, with difficult conjunction searches, which require more serial processing. In addition to traditional first-order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of elements perceived before a perceptual decision was reached. Results show partial awareness of the processing difference: participants seemed unable to give a numerical estimate of pre-decision perceived elements, while they could qualitatively distinguish pop-out from effortful searches. Our results are consistent with two-stage models of visual search: we show that, depending on task context and instructions, participants gave more weight in their introspections either to the first, pre-attentional stage or to the second, guided-search stage. In general, this introspective distinction was overshadowed by the more salient introspective decision time. Implications for models of visual search are drawn.
The existence of purely serial search shows that visual search is actually parallel
In purely serial visual search, items are processed one at a time. On target-present trials, search terminates when the target is found. On target-absent trials, all items have to be inspected. Variability in reaction times should therefore be larger on present trials, since all items are equally likely to be the target. Participants searched for a T amongst Ls (medium) and for a square with a smaller square in the top left corner amongst squares with a smaller square in another corner (difficult). For medium search, reaction time variability was smaller for present trials. However, for difficult search, reaction time variability was larger for present trials. Moreover, there was also more variability in the number of fixations on present trials. The serial nature of difficult search implies that easier search is not serial. Recently, Young and Hulleman (JEP:HPP, in press) have proposed a theoretical framework that predicts these results. Visual search is modeled as a combination of serial fixations and parallel processing within fixations. The number of items processed within a fixation depends on task difficulty. When search becomes very difficult, the number of items processed is reduced to one and only the serial component due to eye movements is left.
Attention orienting shifts the numerical distance effect
Mariagrazia Ranzini, Matteo Lisi and Marco Zorzi
The tendency to represent small numbers on the left and large numbers on the right of 'mental number space' has been reported by many studies. Visuo-spatial attention is thought to mediate number-space interactions, but a detailed investigation of how attention orienting may modulate number processing is lacking. We investigated this issue by manipulating the voluntary deployment of visuo-spatial attention during a number comparison task. Each trial started with a visual cue displayed at one of four spatial locations (horizontally placed, two on each side of fixation). Then, a target digit (1-9, without 5) was centrally presented and participants orally indicated whether it was larger or smaller than the reference 5. Finally, they were prompted to make a saccade to the cued position. Comparison reaction times were influenced by the position of the spatial cue. In addition to a correspondence effect between cue position and number magnitude, we found that the numerical distance effect - slower responses for targets close to the reference than for far ones - was modulated by the spatial cue, as if the reference 5 were 'mentally shifted' in the direction of the cued position. This finding indicates that attentional mechanisms play an important role in number processing.
Does contextual cueing occur in a comparative search task?
M. Pilar Aivar
Ever since the classic experiments of Chun and Jiang (1998, Cognitive Psychology, 36, 28-71) it has been known that the repetition of spatial context reduces RTs in visual search tasks when the context is predictive of target location. However, one of the biggest problems of traditional contextual cueing paradigms is that they require a massive number of trials for the effects to appear. Also, it is not clear how far cueing effects can be generalized to other tasks. In this study we tried to reproduce contextual cueing in a different task (comparative search) and with a smaller number of trials (120). In each trial, the screen was divided into two halves and a random configuration of elements was presented in each. The two halves could either be identical or differ in one element's color. Five configurations were repeated 12 times throughout the experiment, intermixed with newly generated configurations. Participants had to determine whether the two halves of the screen were identical or not. Results showed that RTs were significantly shorter for the repeated configurations, but only on those trials in which both halves were identical. This suggests a partial contextual cueing effect. [Research supported by grant FFI2009-13416-C02-02].
The process of impression formation for faces in a gaze cueing task
Hirokazu Ogawa, Masato Nunoi and Sakiko Yoshikawa
In a gaze cueing task, faces whose gaze directions are always predictive of a target location are evaluated as more trustworthy than those whose directions are not [Bayliss & Tipper, 2006, Psychological Science, 17(6), 514-520]. In the present study, we examined how impressions of the predictive faces develop over time. In a gaze cueing task, participants were presented with some faces that always looked at the target (valid faces) and other faces that always looked at the opposite side of the target (invalid faces). Each face was presented from one to 12 times in the cueing trials. The results showed that the magnitude of the gaze cueing effect was consistent across the number of presentations. Furthermore, the participants evaluated the valid faces as more trustworthy than the invalid faces, which is consistent with previous studies. However, this bias decreased as the number of presentations increased. A recognition task confirmed that the participants were not aware of the relationship between the validity of gaze directions and face identities. The discrepancy between the cueing effect and the evaluation of the faces suggests that impressions of the faces were modulated independently of the attentional guidance by gaze directions.
Visuomotor adaptation to prismatic lenses influences numerical cognition
Mark Yates, Carmelo Vicario, Tobias Loetscher and Michael Nicholls
Representations of visual space and number appear to be linked. Neglect patients - who demonstrate a pathological bias to the right side of space - misjudge the midpoint between two numbers - e.g. '11' and '19' - to the 'right' of the true midpoint - e.g. '17'. Conversely, healthy individuals display a (smaller) bias to the left, matched by a tendency to mis-bisect numerical intervals leftwards. One interpretation of these phenomena is that shifts of attention in external space elicit corresponding shifts of attention within mental representations. However, the evidence that shifts in spatial attention can alter numerical cognition rests largely on number line bisection data. To determine whether these effects extend to other tasks, we investigated the effect of spatial attention on random number generation. Thirty-six participants generated 40 random numbers (from 1-30) before and after adaptation to either left-shifting optical prisms, right-shifting prisms or control spectacles. Against expectations, participants who adapted to left-shifting prisms (eliciting a rightward shift of attention) generated smaller numbers after adaptation than before adaptation. By contrast, there was no difference between the magnitude of numbers generated before versus after adaptation to right-shifting prisms or control spectacles. Possible explanations for these results are considered.
Asymmetries in saccadic latencies during interrupted ocular pursuit
Hans-Joachim Bieg, Jean-Pierre Bresciani, Heinrich H. Bülthoff and Lewis L Chuang
Smooth pursuit eye movements can be interrupted and resumed at a later stage, e.g., when a concurrent task requires visual sampling from elsewhere. Here we address whether and how interruptive saccades are affected by pursuit movements. Our participants pursued an object which moved horizontally in a sinusoidal pattern (frequency: 0.25 Hz, amplitude: 4 deg. visual angle). During this, discrimination targets appeared at 10 deg. eccentricity, to the left or right of the center. They were timed so that they appeared for 1 second while the pursuit object moved either toward or away from the discrimination target's position. Saccade reaction times were shorter when the discrimination targets appeared in a position that the pursuit object was moving towards. Interestingly, saccade RTs back to the pursuit object were shorter when the object moved away from the discrimination target. We conclude that interruptions of pursuit movements lead to asymmetries in saccade generation. These asymmetries could have been caused by biases of attention along the predicted pursuit path.
Motion-related allocation of attention does not depend on the attentional set towards a moving object
Natalia Tiurina and Igor Utochkin
We probed the allocation of attention over space caused by motion perception under two attentional sets towards a moving object. Observers had to respond rapidly to a briefly presented probe asterisk while watching a circle moving along a straight path. A probe could be presented beside, behind, or in front of the moving circle, as well as in the absence of a circle. The first group of observers was instructed to ignore the moving circle as potentially disrupting probe detection. The second group was instructed to track the circle. We expected, according to a hypothesis by Utochkin [2009, Attention, Perception & Psychophysics, 71, 1825-1830], that tracking a moving object aids the allocation of attention in front of the motion whereas ignoring it does not. Our results essentially replicated those reported by Utochkin. We found faster responses to all probes in the presence than in the absence of a moving circle. The 'behind' condition yielded an additional benefit in reaction time in comparison with the 'beside' and 'in front' conditions, suggesting involuntary allocation of attention to locations previously marked by motion. Contrary to our hypothesis, the pattern was the same for both the ignoring and tracking groups, but tracking RTs were systematically slower, reflecting the cost of dividing attention between tracking and probe detection.
Attentional modulation of configurational preference for visual hierarchical stimuli by an insect brain
Aurore Avargues-Weber, Adrian G. Dyer and Martin Giurfa
How do visual systems construct meaningful representations of complex natural scenes? Does the brain first analyse the details of a scene (featural information) to reconstruct its complexity, or does it first analyse the scene as a whole (a global configurational construct) and then process the details? We used hierarchical stimuli to investigate global/local preferences in honeybees. The bee is a classical model for assessing the mechanisms underlying learning and visual perception. Indeed, the bee's learning capacity, combined with its relatively simple brain, enables novel insights into biological mechanisms that may be applied to artificial visual systems. The bee also has to learn to use its vision to recognize complex spatial information in noisy natural environments for efficient foraging and navigation. We show that the honeybee demonstrates an initial preference for global processing independent of stimulus density. This result is surprising in the context of traditional models of insect processing, but is consistent with new evidence of configural-type processing in bees. Importantly, we reveal that the bee brain also possesses the neural flexibility to modulate preferences towards either local or global features depending upon specific contexts such as image priming. The bee's preference for processing configurations is thus not hardwired by physiological or evolutionary constraints but can be modulated by experience.
Attentional Bias and Spider Fear: A Prior-Entry Study
Anke Haberkamp, Melanie Schröder and Thomas Schmidt
It is widely accepted that spider phobics show an early attentional bias towards spiders. Moreover, attended stimuli are perceived as occurring earlier than unattended stimuli. This latter effect of prior entry 'is usually identified by a shift in the point of subjective simultaneity (PSS) in temporal order judgements (TOJs)' [Weiß and Scharlau, 2011, Quarterly Journal of Experimental Psychology, 64(2), 394-416, p.394]. We wondered whether the attentional bias of spider phobics is strong enough to trigger a prior-entry impression. In our study with spider-fearful and non-anxious participants, we presented natural images of animals (spiders, snakes, and butterflies) paired with neutral natural images (flowers and mushrooms) on either side of the fixation cross, with a varied time interval between the onsets of the two stimuli (SOA: 0 ms, 12 ms, 24 ms, 35 ms, 47 ms). Our participants had to judge which picture appeared first and indicate the position of that image (left or right). Spider pictures induced a significant difference between the two groups: spider-fearful participants perceived the spider pictures as occurring earlier relative to the neutral images. Neither snakes nor butterflies led to a prior-entry impression.
Role of landmark objects in the orienting of attention across saccades
Matteo Lisi, Patrick Cavanagh and Marco Zorzi
Covert attention can facilitate visual processing at selected relevant locations, but what happens if the eyes move elsewhere? Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of the previously attended position after an eye movement [Golomb, Chun, & Mazer, 2008, Journal of Neuroscience, 28, 10654-10662]. These results are puzzling, since the retinotopic location is behaviourally irrelevant in most ecological situations; they also raise the question of how we can accomplish tasks that require both frequent eye movements and dissociations between gaze and attentional focus (e.g., team sports). Critically, in these studies participants were asked to maintain attention on a blank location of the screen, not on a defined object. In this study we tested whether the continuing presence of a visual object at the cued location influences the postsaccadic attentional topography. We used a trans-saccadic cueing paradigm in which the relevant positions could be marked or not by square black frames. Results show that the presence of the squares selectively increases the attentional facilitation at the spatiotopic location compared to the retinotopic location after the saccade. Overall, spatiotopic cueing effects predominate over retinotopic ones when the cued object remains present following the saccade.
Does an object's contrast affect attention and detection equally?
Bernard Marius 'T Hart, Hannah Schmidt, Ingo Klein-Harmeyer, Christine Roth and Wolfgang Einhäuser
When images are presented in quick succession (rapid serial visual presentation, RSVP), low-level features, such as luminance contrast, affect object detection. Since objects attract attention, such features may affect detection and gaze (overt attention) similarly. To test this, we used identical stimuli in two tasks: prolonged viewing and RSVP. Stimuli consisted of natural images in which the luminance contrast of an object and its background were independently manipulated. In prolonged viewing, eye positions were recorded during 3 s of presentation. Subsequently, observers provided keywords describing the scene. In RSVP, observers' performance in detecting a target object in a 1-second stream of 20 images was tested. Although gaze control and detection performance are very different measures, changes in the low-level feature (luminance contrast) affect both alike. Further experiments reveal that the pattern of results does not depend on the presence of distractor objects, and that the changes in luminance contrast do not change how characteristic an object is perceived to be for the scene. These results imply that scene content interacts with low-level features to guide both detection and overt attention (gaze), while certain aspects of higher-level scene perception are not affected by the same low-level features.
The time course of attention allocation in textures of different homogeneity
Tobias Feldmann-Wüstefeld and Anna Schubö
Searching for a target in a texture depends not only on target properties but also on texture properties, presumably due to differential attention allocation (Duncan & Humphreys, 1989). The aim of the present study was to reveal the time course of attention deployment within textures of varying degrees of homogeneity. To that end, a search task was combined with a probe detection task. Probes could appear at the same location as a previously presented target or at a different one. The stimulus onset asynchrony (SOA) between texture and probe was varied to track attention allocation at different points in time. Behavioral measures and event-related potentials (ERPs) showed that homogeneous textures led to more efficient attention allocation than heterogeneous contexts, causing fewer erroneous answers, shorter N2pc latencies and larger N2pc amplitudes. Probes appearing at target locations yielded shorter RTs and larger P1 amplitudes for all SOAs, indicating a persistent sensory gain due to attention shifts. This on-target advantage was more pronounced in trials with homogeneous contexts for short SOAs, but disappeared at longer SOAs, suggesting that the impact of texture homogeneity outlasts the texture presentation for some time but vanishes thereafter.
Female capture of other females' faces
Stephen Butler and Tedis Tafili
The human face, due to its high social and biological significance, captures and retains our attention; hence it is also better remembered than other objects. Female faces are better remembered only by females, resulting in an own-gender bias (better memory for faces of one's own gender); males do not show such a bias. The mechanisms underlying the own-gender bias are not fully understood. The present study examines whether there are differences in attentional capture and retention between male and female faces, and whether these differences are more prominent in males or females. Reaction times and eye movement behaviour were measured during a visual search task in which the face was not task relevant. Results indicate that while there is no influence on latencies, an initial orienting effect is present for females: the presence of a female face slowed females' attention towards the search target, whereas a male face had no such effect. This initial orienting does not seem to last, indicating that females' attention tends to be captured only initially by a female face, and not held by it.
Visual search for a feature-singleton: Not all display densities are made equal
Dragan Rangelov, Hermann J. Müller and Michael Zehetleitner
In visual search tasks, a feature-singleton target, e.g., red among green distractors, can be found on the basis of two different properties: (i) it is a unique item, and (ii) it has a particular feature. The former is known as the singleton-search mode, and the latter as the feature-search mode. Previous research showed profound differences between singleton- and feature-search, suggesting different cognitive mechanisms underlying the two modes. Here, we investigated whether or not different display densities influence the search mode. The target was either a red or a green singleton, randomized across trials. Across two consecutive trials the target could either change or repeat: reaction times are typically faster for repetitions relative to changes, an effect termed Priming of Pop-out (PoP). As recent accounts relate PoP to the feature-search mode, its presence or absence in an experiment indicates whether the singleton- or feature-search mode was used. Comparing PoP magnitude for sparse (3 items) and dense (36 items) displays showed a dissociation between densities: PoP was significant only for sparse displays. These findings suggest that, although the singleton-search mode was possible in all conditions, sparse displays do not allow for valid target selection based solely on its uniqueness. Contrary to accounts of feature-singleton search, simply having singleton targets does not suffice for the singleton-search mode to operate efficiently.
Visual Symptoms in Children and Adults with Autism Spectrum Disorder
David Simmons and Ashley Robertson
Autism Spectrum Disorders (ASDs) are common developmental disorders thought to affect at least 1% of individuals. Official diagnostic criteria for ASD concentrate on signs and symptoms associated with social behaviour, but sensory difficulties are also a major component in its presentation (Simmons et al, 2009, Vision Research 49, 2705). We have investigated sensory aspects of ASD using questionnaires and focus groups. This work is necessary as a precursor to better-targeted behavioural experiments. Focus groups were conducted with children (n=10) and adults with ASD (n=6). The visual symptoms reported were consistent with severe visual stress: sensitivity to bright light, flicker from fluorescent lamps and repetitive patterns like shelving or grids, sometimes reported as being painful. A subset also reported idiosyncratic responses to certain colours. Whilst positive sensory responses were also reported, most of these were not (obviously) in the visual domain, but seemed to be mainly tactile. A major factor in whether or not sensory stimulation is problematic appears to be the amount of control the individual has over its amplitude, rather than the amplitude itself, potentially suggesting a major role for attentional factors. These results will be put into the context of current neural models and intervention strategies for ASD.
A differential phase-encoded method reveals the location of spatial attention-related activity in parietal, temporal and occipital cortex: an fNIRS study
Masamitsu Harasawa, Masanori Nambu, Michiteru Kitazaki and Hiroshi Ishikane
Variation in the location of visuospatial attention modulates neural activity. Here, we demonstrate that this effect can occur in a variety of cortical areas, including parietal, temporal and occipital cortex, using functional near-infrared spectroscopy and a technique based on continuously modulated visual stimuli and differential neural responses [Tajima et al, 2010, Journal of Neuroscience, 30(9), 3264-3270; Saygin and Sereno, 2008, Cerebral Cortex, 18(9), 2158-2168]. Subjects performed an RSVP task, detecting digits among letters arranged circularly in the periphery. The target position moved continuously clockwise along the circle at a speed of 360 deg/min. The initial position of the target was the top or the bottom of the circle and was cued before the task period, which lasted 75 s. Changes in oxy-Hb concentration were measured at 47 points covering parts of parietal, temporal and occipital cortex. Differential responses by initial target position were fitted by a sinusoid with a period of 60 s. The fitted phases were almost opposite for the left and right cerebral hemispheres. This result was reproduced even with different movement speeds, directions, and frame rates, and disappeared in a control condition with a central RSVP task. These results suggest that even cortical areas apparently without retinotopy are involved in location-specific spatial attention.
Direction information is acquired from attended objects, not from distractors, in MOT
In the current experiment we tested to what extent we process the motion information from tracked objects. Specifically, we were interested in whether a Multiple Object Tracking (MOT) display can affect sensitivity to motion in the direction of either the targets or the distractors. In the first experiment we presented two random dot motion (RDM) displays in quick succession (coherence 0% or 30%). The subjects were asked to detect whether coherent motion was present in the latter display. We manipulated the angular difference between the RDMs and found higher accuracy for differences of 0 deg or 180 deg. In the second experiment we presented a MOT task with 3 targets and 6 distractors, followed by an RDM display for 500 ms. The subjects were asked about the targets and about the direction in the RDM. We constrained the directions of the tracked objects: at each moment only 3 directions were allowed, keeping the sum of the motion vectors constant. We measured the coherence threshold in the RDM for three conditions: 1) motion parallel with the recent motion of the targets, 2) motion different from the motion of the targets, 3) different motion of the targets. We found decreased sensitivity in the second condition, suggesting that we preferentially process direction information from the attended objects.
How do we use the past to predict the future in oculomotor search?
A variety of findings indicate that visual search can become more efficient through the use of various featural, spatial, and temporal cues in the world that are probabilistically related to a search target. More specifically, the results of two experiments are reported that examine how we make predictions about the location of a target in an upcoming search based upon previous experience. Previously, it was known that search can adapt to situations in which simple statistics govern target location, such as a spatial bias, but it was unclear what mechanism was responsible for this and to what extent search adapts to different kinds of environmental statistics. Participants conducted a simple gaze-contingent oculomotor search task for a target in 1 of 4 locations under 5 conditions, each with different 1st- or 2nd-order statistics governing the target's location. In general, participants rapidly adapted their search to the prevailing statistics. A mechanistic model proposed by others to account for priming in pop-out search tasks could only account for search behaviour when the environmental statistics were simple, and was not able to provide a robust account of how observers searched in all conditions. However, a very simple probabilistic model that makes predictions based upon past observations provided a good account of how participants searched across all 5 conditions. The results constrain possible models of search, and suggest that while people bring assumptions to search tasks, they can quickly adapt to the statistics of the environment.
Does inhibition of return during joint action reflect individual attentional processes?
Mark A. Atkinson, Geoff G. Cole, Andrew Simpson and Paul A. Skarratt
When two individuals act alternately upon visual targets in three-dimensional environments, an individual is frequently slower to respond to a target following a response by another individual to the same target (Welsh et al., 2005, Neuroscience Letters, 385, 99-104). It has been suggested that this social inhibition of return (SIOR) effect is due to simulating the motor processes of another individual, such that individuals are relatively slower to execute an action after observing the same action in another. It is possible, however, that response slowing is due to an egocentric inhibition of return (IOR) mechanism whereby attention is slow to return to recently attended locations. The aim of the present work was to examine whether this IOR-like effect is a consequence of attentional orienting. Two experiments demonstrate that, like IOR, SIOR is modulated by the perceptual grouping of targets and by the requirement to discriminate, rather than detect, targets. In addition, the magnitude of an individual's SIOR effect is correlated with their performance on a two-dimensional IOR task, where no goal-directed actions are made. These findings support the hypothesis that SIOR is due to egocentric shifts of visual attention.
Spatial representation of time in music
Valter Prpic, Antonia Fumarola, Matteo De Tommaso, Irene Gratton and Tiziano Agostini
The Spatial Numerical Association of Response Codes (SNARC) effect suggests the existence of an association between number magnitude and response position, with faster left-hand responses to small numbers and faster right-hand responses to large numbers [Dehaene et al, 1993, Journal of Experimental Psychology: General, 122, 371-396]. Moreover, Rusconi et al. [2006, Cognition, 99, 113-129] showed that the internal representation of pitch height is spatially organised, especially in participants with formal musical education (the Spatial Musical Association of Response Codes, or SMARC, effect). We investigated whether a similar association exists between time in music (beats per minute) and the spatial position of response execution. The first task was to judge the timbre of beat sequences (metronome vs non-metronome) by pressing the right/left key with the right/left hand. The second task was to judge whether a beat sequence was faster or slower than a reference sequence. Results showed a global trend with faster left-hand responses to 'slow' beats and faster right-hand responses to 'fast' beats. We conclude that musical tempo is spatially represented, as are pitch height and number.
Can Bees See At A Glance?
Vivek Nityananda, Peter Skorupski and Lars Chittka
Both humans and monkeys are capable of extremely rapid scene categorization, making visual decisions about scenes in under 100 ms. Is this capacity to represent and process scenes in a sensory snapshot a consequence of the bigger brain size and computational power of primates? If so, one would predict that other animals with far smaller brains, such as insects, would be incapable of such rapid decisions and would have to scan the scene actively instead. No studies, however, have investigated whether non-primates have the capability to see at a glance. We used a learning paradigm to ask whether bumblebees could learn to detect and discriminate between stimuli presented for durations of 100 ms or less. We find that bumblebees can detect the presence of stimuli and discriminate between differently oriented stimuli even when they are presented as briefly as 20 ms. This is the first demonstration of a non-primate seeing at a glance, and our results raise questions of how bumblebees can process visual features this rapidly and whether they are capable of seeing more complicated features of a scene at a glance.
Come Together: Perceptual Averaging Contributes to Cueing Effects
Hannah M Krueger, W. Joseph MacInnes and Amelia R. Hunt
An uninformative exogenous cue speeds target detection if the cue and target appear in the same location separated by a brief temporal interval. This finding is traditionally ascribed to the orienting of spatial attention to the cued location. Here we examine the role of perceptual averaging of the two trial events in speeded target detection. That is, the cue and target may be perceived as a single event if they appear in the same location, and therefore the perceived target onset is temporally bound to the earlier cue onset. We measured manual reaction times to detect cued and uncued targets, and observed the traditional facilitation of cued over uncued targets. We then asked the same observers to judge target onset time by noting the time on a clock when the target appeared. Observers consistently judged the onset time of the target as earlier than it actually appeared, with cued targets judged as earlier than uncued targets. When the cue-target order was reversed so that the target preceded the cue, perceived onset was highly accurate in both cued and uncued locations. These findings suggest that perceptual averaging, in addition to attentional orienting, can contribute to cueing effects.
Suppression of color cues depends on stimulus size
Josef Gerard Schönhammer and Dirk Kerzel
In spatial precue paradigms with non-predictive cues, discrimination responses are usually faster to cued than to uncued locations at stimulus-onset asynchronies shorter than 200 ms. When cues and targets are singletons with different features, responses to cued and uncued locations are equally fast. However, recent studies have occasionally found slower responses to cued than to uncued locations, indicating suppression of the cued location [e.g., Anderson and Folk, 2010, Attention, Perception, & Psychophysics, 72(2), 342-352]. The experimental manipulations and underlying processes connected to these effects are still unclear. We varied the color and size of cue singletons in situations where target singletons had a different color than the cue. Suppression effects were observed when cues were small. When the size of the cues was increased, responses were equally fast to cued and uncued locations. This suggests that suppression effects depend on stimulus factors as well as on the features observers are set to respond to.
Localized oscillatory activity in the human attention network in response to predictive and unpredictive visual cues
Isabel Dombrowe and Claus C Hilgetag
People orient attention towards stimuli in their visual field with the help of a bilateral network of regions, mostly in the frontal and parietal lobes. The areas within this network communicate through synchronized neural activity. To study the dynamics of this network during orienting, neural activity can be non-invasively measured by electroencephalography (EEG) and perturbed by transcranial magnetic stimulation (TMS). In the present study, participants performed a visual cueing task while their EEG was recorded. The cue was either unpredictive (block A: 50% cue validity) or predictive (block B: 67% cue validity) with regard to target location. Corresponding to these conditions, we found distinct patterns of oscillatory activity in the alpha, beta and gamma bands, which were lateralized and topographically confined, helping to reveal targets for regionally and frequency-specific perturbation by TMS.
A neurodynamical model for contrast-invariant and scale-invariant contour grouping
Ilia Korjoukov and Pieter Roelfsema
We present a neurodynamical network model to study mechanisms of contour grouping in vision. The model resembles the organization of the visual cortex: a hierarchy of areas, feedforward and recurrent connectivity, and feature-specific tuning. The model has two processing stages: in the initial, feedforward stage the model carries out rapid recognition of familiar contour configurations, whereas in the late, recurrent stage the model incrementally builds a reliable grouping code for any arbitrary configuration selected by visual attention. The two stages are modeled with simple and realistic single-unit dynamics that are modulated by attention and not disrupted by variations in input contrast. The model offers a plausible computational account of contrast-invariant and scale-invariant perceptual grouping.
Task Congruency in Inattentional Blindness
Nika Adamian and Maria Kuvaldina
Some work on Inattentional Blindness (IB) has explored the inhibition hypothesis, which states that IB arises from feature-based [Andrews et al., 2011, Journal of Experimental Psychology, 37, 1007-1016] or space-based [Thakral and Slotnick, 2010, Consciousness and Cognition, 19, 636-643] inhibition. Our hypothesis is that this inhibition occurs due to the contextual incongruence of the critical object. To test this, we manipulated the degree of contextual expectancy/congruency in a series of experiments. In a modified IB paradigm, we used lexical stimuli and controlled the level of subjects' expectancy of the critical stimulus. Subjects were asked to solve an anagram presented for 200 ms. After several trials, two new letters were added so that the anagram could be solved in two ways: by using only the main letters or by adding the two new letters. The corpus frequency and word form of the anagrams, as well as the task-congruency of the critical stimuli, were varied. The results showed that in the task-incongruent, low-frequency and improper-form conditions the level of IB significantly increased. We explain this effect as a result of inhibition of new stimuli that are inconsistent with the general target representation, and discuss it in terms of the inhibition hypothesis of IB.
Visual search for letters within words and nonwords in the right and left visual hemifields
Elena Gorbunova and Maria Falikman
Although visual search for letters within words has been investigated in a number of studies, there is no agreement on whether there is a word superiority effect in such search as compared to search within meaningless letter strings (nonwords). In our experiments, observers searched for a prespecified letter in displays containing pairs of 6-letter words or nonwords placed to the left and right of fixation, with a variable target letter position within word and nonword strings but a constant absolute distance between fixation and the target letter. The RT data collected from 28 participants demonstrate a significant interaction of all three experimental factors (visual hemifield, letter string type and target letter position; repeated measures ANOVA, p<.002) and provide evidence for serial search for a letter within a word in the left visual hemifield and within a nonword in the right visual hemifield and, surprisingly, parallel search for a letter within a nonword in the left visual hemifield and within a word in the right visual hemifield. This pattern of results might reflect hemispheric differences in lexical information processing, together with a hemispheric asymmetry in selective visual attention.
Metacognitive regulation of the dead zone of attention
Yulia M. Stakina and Igor S. Utochkin
The dead zone of attention (DZA) was previously found in the change blindness paradigm and described as an exaggerated inability to see a change near the center of interest in a scene [Utochkin, 2011, Visual Cognition, 19, 1063-1088]. Here we tested the hypothesis that the DZA can, at least partially, be explained as a consequence of a spontaneous search strategy that leads attention to avoid regions near the center in favor of larger skips. We manipulated search strategy at the metacognitive level by informing observers what the DZA is and asking them to use this information to improve performance. As in the earlier experiments by Utochkin, observers looked for a marginal change near or far from a once-noticed central change. In Experiment 1, only one marginal change occurred per trial; in Experiment 2, both changes were placed in competition. We compared the results with Utochkin's earlier results obtained without informing participants about the DZA. Informing about the DZA had no effect on search time (Experiment 1) or probability of prior detection (Experiment 2), but it reduced the number of errors in the near condition. Our results suggest that metacognitive regulation of the DZA is limited: it appears to have no effect on global search strategy, yet it improves the deployment of attention within the DZA itself. The study was implemented within the Programme of Fundamental Studies of the Higher School of Economics in 2012.
Neural mechanisms of feature attention revealed by frequency tagging in MEG
Daniel Baldauf and Robert Desimone
We used a frequency tagging paradigm to study brain networks mediating feature attention. The stimuli were sequences of compound images of faces and houses flickering at different tagging frequencies. Our fMRI-guided analysis of the MEG/EEG signals revealed a network of areas in frontal and temporal cortex that closely followed the attended stimulus's frequency at differential phase lags. We further analyzed interactions between the involved brain areas by means of neuronal synchrony and coherence across the spectrum, and we cross-validated the observed functional connectivity with each participant's individual tractography. Our results imply that the inferior frontal gyrus provides attentional top-down signals to stimulus-tuned temporal areas by engaging in coherent oscillations.
Visuospatial awareness is modulated by dual-task demands: evidence from healthy participants and right hemisphere damaged patients
Mario Bonato, Matteo Lisi, Chiara Spironelli, Konstantinos Priftis, Carlo Umiltà and Marco Zorzi
We studied the modulation of visuospatial awareness, and the resulting asymmetries across visual hemifields, induced by a dual-task manipulation that consumed the attentional resources available for spatial monitoring. Reaction times and detection rates for lateralized, briefly presented, masked or unmasked visual targets were assessed in healthy participants and right-hemisphere damaged patients. In the single-task condition, participants had to report only the position of the target(s) ('right', 'left', or 'both' sides). In the dual-task conditions, while monitoring for target onset, they also performed a second task, visual or auditory, to increase the cognitive load [Bonato et al., 2010, Neuropsychologia, 48, 3934-3940]. Healthy participants showed increased hemifield asymmetries under dual-task conditions, suggesting a key role for aspecific attentional resources in spatial monitoring. Right-hemisphere damaged patients (tested without masking) showed severe contralesional awareness deficits, that is, neglect and extinction, under both dual-task conditions. Thus, important asymmetries in spatial awareness emerge when attentional resources are consumed by a concurrent task loading either on visuospatial or on working-memory resources. For patients, this turns into a contralesional awareness deficit that is pathognomonic for neglect, even when they appear intact on clinical tests. Implications for the mechanisms underlying normal and pathological attentional orienting and spatial awareness are discussed.
Feature based attention across eye movements
Donatas Jonikaitis, Heiner Deubel and Jan Theeuwes
The primate visual system has been shown to compensate for eye-movement-induced retinotopic shifts in the visual image. For instance, neurons with receptive fields coding for the post-saccadic retinotopic stimulus location are activated even before a saccade starts. Similarly, spatial attention also predictively shifts to the post-saccadic stimulus location. Such predictive updating of spatial location information is considered to be the mechanism mediating visual stability. However, the contributions of attended object features, such as shape or color, have been relatively neglected. We investigated feature-based attention across saccades and its potential contributions to visual stability. In this study, we asked participants to do two things at the same time: to make a saccade to a colored dot and to discriminate a probe (a Gabor patch tilted to the left or right) presented at a distractor location that either matched or did not match the color of the saccade target. Tilt discrimination performance hence served as a measure of feature-based attention: before a saccade started, participants were better at discriminating probes presented at distractor locations that matched the color of the saccade target than at distractor locations that did not match that color. This is a classic feature attention effect: allocating attention to one feature (the color of the saccade target) led to performance increases at other locations matching that feature. Importantly, we observed that immediately after the saccade finished, feature-based attention benefits persisted at the distractor location with the matching color, despite the fact that it now had a different retinotopic position. Thus, feature-based attention and predictive shifts of spatial attention could combine to quickly find the location of relevant objects across saccades.
Working memory load influences the attentional processing of emotional task-irrelevant information
In everyday activity, attention is easily disturbed by environmental stimuli and information that are irrelevant to the current goal of behavior. Previous studies reveal that task-irrelevant information processing depends critically on the level and type of load involved in the processing of goal-relevant information: whereas high perceptual load can eliminate distractor processing, high load on higher cognitive control processes such as working memory increases distractor processing [Lavie, 2005, Trends in Cognitive Science, 9(2), 75-82]. In the present study, we manipulated working memory load using an n-back task, and examined whether emotional faces affect executive cognitive processing even though they are completely task-irrelevant. Angry, neutral and happy faces were presented peripherally as task-irrelevant stimuli. Participants were informed that the faces had no relation to the task, and they were asked to ignore them. The results showed a differential effect of load level and emotion: when working memory load was high, the error rate increased, especially when an angry face appeared as the task-irrelevant stimulus. This result suggests that ignoring threat-related emotion requires more executive resources than rejecting other emotions.
Is there a spatial representation of non-symbolic numerical quantities?
Giorgia Tamburini, Antonia Fumarola, Riccardo Luccio and Tiziano Agostini
The Spatial Numerical Association of Response Codes (SNARC) effect showed the existence of an association between number magnitude and response position [Dehaene et al., 1993, Journal of Experimental Psychology: General, 122, 371-396]. Recently, the spatial representation of non-symbolic numerical quantities (dots) has been investigated using a line bisection task [de Hevia and Spelke, 2009, Cognition, 110, 198-207; Gebuis and Gevers, 2011, Cognition, 121, 248-252], but the results do not agree. We investigated whether there is a spatial association between non-symbolic numerical quantities (dots) and response position, using a simple detection experiment [Fischer et al., 2003, Nature Neuroscience, 6, 555-556]. The dots were used as a prime and appeared at the centre between two lateral boxes. The participants' task was to press the space bar as soon as they detected the target, which appeared after the prime. The target was a grey square that appeared in one of the two boxes. Results showed faster RTs for small quantities associated with the left target and for big quantities associated with the right target. Our data support the idea that non-symbolic quantities have a spatial representation in the form of a left-to-right oriented mental line.
Abnormal Attentional Masking in Children with Specific Language Impairment
Andrea Facoetti and Marco Dispaldro
In order to become a proficient user of language, infants must detect temporal cues embedded within the noisy acoustic spectra of ongoing speech through rapid attentional engagement. According to the neuro-constructivist approach, a multi-sensory dysfunction of attentional engagement, hampering the rapid temporal sampling of stimuli, might be responsible for the language deficits typically shown in children with Specific Language Impairment (SLI). In the present study, the efficiency of visual attentional engagement was investigated in 22 children with SLI and 22 typically developing (TD) children by measuring attentional masking (AM). AM refers to impaired identification of the first of two sequentially presented masked objects (O1 and O2); here, the O1-O2 interval was manipulated. Children with SLI showed deeper AM and a more sluggish AM recovery. Our results suggest that a multi-sensory engagement deficit, probably linked to a dysfunction of the right fronto-parietal attentional network, might impair language development.
The dynamic-updating hypothesis: Attentional selection of dynamically changed objects
San-Yuan Lin and Su-Ling Yeh
While there is a time delay between the appearance of an object and our awareness of it, maintaining the most up-to-date information about the constantly changing visual world is crucial for survival. Previous theories of object-based attention were mainly derived from studies using objects with fixed boundaries, which constrained these theories to such objects. We propose a dynamic-updating hypothesis for object-based attention: object representation, after being attended, still undergoes a constant and continuous updating process as long as the physical object is present. To test this, in a modified double-rectangle cueing paradigm, we changed the object display after one of the objects was cued. In a speeded detection task, the attended object was either changed globally (grouped with the other object via amodal completion), changed locally (its shape altered via a boundary change), disappeared and then reappeared, or remained blank at the target frame. Object-based attention remained (a shorter reaction time for targets appearing on the attended than on the unattended object) when the originally cued object was changed or reappeared, but not when it disappeared at the target frame. This suggests that an object representation that attention has selected undergoes constant and continuous dynamic reorganization, supporting the dynamic-updating hypothesis.
On color and emotion. An ERP study of visual attention
Joanna Pilarczyk and Michal Kuniecki
Research problem: Delineating emotional from neutral stimuli occurs at very early stages of visual information processing, which manifests in the facilitation of attentional capture by affective images. One of the basic features modulating this process might be the coloring of the stimuli. In particular, the red color, due to its evolutionary importance, may serve as a cue governing attention and facilitating the processing of red objects. Method: Using a dot-probe paradigm, we briefly presented pairs of IAPS (International Affective Picture System) images of equal valence (positive, negative or neutral). Each pair consisted of one picture featuring a prominent red dominant and the other in non-red coloring, with brightness and contrast adjusted. The following target dot was flashed either on the side of the red picture (congruent condition) or of the non-red picture (incongruent condition). The procedure included EEG signal recording and reaction time measurement. Results: The P1 component was modulated solely by the valence of the cue, with positive and negative cues yielding larger amplitudes than neutral cues. The N2pc component was sensitive to both emotional valence and color, being more negative contralateral to the red picture, but only in the case of emotional stimuli. At the behavioral level, reaction times were shorter in the congruent than in the incongruent condition, especially for emotional pictures. Conclusions: The results suggest that the affective valence of a stimulus is evaluated extremely fast regardless of coloration. However, the red dominant acts as an attractor at later stages of visual processing, capturing visual attention and eventually facilitating the motor response.
Variation of Attentional Capture by transient luminance changes over repeated exposure
M. Isabel García-Ogueta
It is well known that transient luminance changes, even when irrelevant to the task, capture attention. In previous research we obtained attentional capture effects at locations congruent or incongruent with the subsequent stimulus locations, causing benefits and costs respectively. Our aim in the present research was to determine whether attentional capture changes as a function of repeated exposure to capture trials, whether this repeated exposure affects congruent and incongruent capture differently, and whether it depends on the stimulus-onset asynchrony (SOA) between capture and stimulus onset. We also considered different perceptual-load conditions. Our results showed accuracy costs of incongruent capture not only after 64 or 128 attentional-capture trials but also after 192 and even 256 trials, in the 165 ms asynchrony condition. However, RT costs remained only for the first 64 capture trials. In the case of congruent capture, RT benefits remained throughout the whole series of capture trials with both a long (165 ms) and a short (110 ms) asynchrony. Effects of perceptual load should also be taken into account to explain the results.
Effects of mood induction on eye movement inhibition: An antisaccade task with emotional faces
Nicolas Noiret, Aline Claudon and Eric Laurent
We investigated the effects of mood on eye movement inhibition in an antisaccade task. After a mood induction procedure (MIP) in which participants watched a film (positive, negative or neutral), the antisaccade task required them to generate saccades to the hemifield opposite emotional faces (sad, happy or neutral) appearing to the left or right of a central fixation point. Measures of pupil size and subjective ratings were taken both before and after the MIP, and confirmed that participants' mood differed significantly between the three MIP conditions. Results showed that participants made more correct antisaccades following the presentation of sad faces than following the presentation of happy or neutral faces. These results are congruent with a hedonic perspective in which individuals automatically avoid negative stimuli and look for more positive stimuli in order to optimize their mood. Moreover, the longer antisaccade latencies observed after positive and negative mood inductions, compared to the neutral mood induction, suggest that individuals in positive or negative moods may have more difficulty inhibiting eye movements, and more generally indicate that mood moderates inhibitory attentional control.
Is the relevance of the critical object irrelevant for the Inattentional Blindness?
Maria Kuvaldina, Nika Adamian and Anastasia Shaposhnikova
One of the distinctive features of the Inattentional Blindness (IB) experimental paradigm is the irrelevance of the critical object, which makes it easy for subjects not to notice it [Most et al., 2001, Psychological Science, 12(1), 9-17; Koivisto and Revonsuo, 2008, Psychological Research, 72, 39-48]. But if some features of to-be-attended objects are primed in the procedure, the probability of noticing these more relevant stimuli in the IB task could be increased. To test this, we explored the influence of the relevance of the critical object on the level of IB in the following ways: 1) increasing expectations about the appearance of the critical object by enumerating its features right before it appeared in the task; 2) providing a prime by making the targets transform into the shape of the critical object right before it appeared in the task; 3) matching the trajectory of the critical object to the trajectory of the targets. All experiments used a sustained IB procedure. The observed IB rate did not differ from that in control conditions in which none of the above experimental manipulations took place. These results support the idea of feature-based inhibition in IB [Andrews et al., 2011, Journal of Experimental Psychology, 37, 1007-1016].
Singleton-detection or Feature-search: Working memory capacity may be the thing, wherein we'll catch attention with a ring
Hayley E. P. Lagroix, Matthew R. Yanko and Thomas M. Spalek
When searching for a uniquely-coloured target in an RSVP stream of homogeneously-coloured distractors, observers can use one of two search modes: singleton-detection or feature-search. Using an attentional-capture paradigm, we varied (a) the number of possible target colours from 1 to 4, in Experiments 1-4 respectively, and (b) the appearance of a coloured ring around one of the distractors in the RSVP stream. When present, the ring was either the same colour as one of the possible target colours (Colour-Match) or an irrelevant colour (Colour-Mismatch). Capture was measured as the impairment in target identification accuracy when the ring was present relative to when it was absent. Greater capture in the Colour-Match than in the Colour-Mismatch condition was regarded as evidence of feature-search mode; equal capture in the two conditions was regarded as evidence of singleton-detection mode. Contrary to the common belief that singleton-detection is the default mode [Bacon and Egeth, 1994, Attention, Perception & Psychophysics, 55(5), 485-496], we show that observers shift gradually from feature-search to singleton-detection mode as the number of target colours increases to four. This shift may be related to the capacity of visual working memory, estimated at three to four items [Luck and Vogel, 1997, Nature, 390, 279-281].
A rapid reaching task reveals the extraction of probability information from arbitrary colour cues
Daniel K Wood, Jennifer L Milne, Craig S Chapman, Jason P Gallivan, Jody C Culham and Melvyn A Goodale
Our group has developed a novel technique for probing the earliest stages of motor planning by requiring participants to rapidly react to an array of potential targets, all of which have an equal probability of being cued as the final target. Deviations in initial trajectories mirror the 'global effect' observed in low-latency saccades when multiple targets or distractors are present. That is, the initial trajectories of reaches toward a field of potential targets reflect a fine-grained sensitivity to the probabilities inherent in the spatial distribution of targets. To investigate whether this fast system for planning reach direction is sensitive to other ways of communicating probability information, we used colour as a cue. Subjects quickly (<325 ms) initiated reaches toward two potential targets, a green and a red circle. Subjects had to adjust their in-flight trajectories and hit the final target, which was cued only after the reach had been initiated. One of the target colours was three times as likely to become the final target. Initial trajectories were biased toward the high-probability target, regardless of the final target location. These results indicate that the motor system can rapidly extract probability information from arbitrary colour cues and incorporate that information into the planning of reaches.
Attentional modulation of target location significantly affects pointing performance
Anna Ma-Wyatt and Xinlu Huang
The planning and execution stages of goal-directed movements often rely on visual information about the target location. Cueing a target location can improve performance on a variety of tasks, indicating that the representation of this information is enhanced by attention. However, it is not clear whether attentional modulation of a cued location also affects pointing performance. We asked observers to point to targets presented in noise with a signal-to-noise ratio that varied across blocks. Target locations were cued at different times during the reach to assess the effect of attentional modulation of target location on the planning and execution of a rapid goal-directed movement. We measured movement latency, movement time and pointing precision, using a cue validity of 20% or 80%. Pointing precision improved with increasing signal-to-noise ratio. The cue was most effective in improving performance when it was presented early in the reach. The overall movement time was also significantly decreased when people pointed rapidly to a cued target, across all noise levels. The results suggest that the same visual representation that is modulated by visual attention is also used to plan and update rapid pointing movements.
The Effects of Stimulus Representational Consistency on Inattentional Blindness
Hong-Zhen Chen, Zeng Wang and Yu-Kun Liu
Three memory tasks were used to test the effects of the representational consistency of words and pictures on inattentional blindness for unattended words shown in a selective-attention task. A recognition memory test was used to assess whether subjects exhibited inattentional blindness for the unattended words. A category association test and a perceptual identification test were used to assess the priming level for the unattended words. The study adopted a 2x2x3 within-subject design; the three factors were word type, consistency of semantic representation and task type. The subjects were 31 college students, including 17 males and 14 females. The results showed that subjects exhibited inattentional blindness for the unattended words shown in the selective-attention task. In addition, compared with unattended words whose semantic representation was not consistent with the pictures, unattended words that shared the same semantic representation with the pictures more easily resulted in inattentional blindness. Moreover, unattended words that shared the same semantic representation with the pictures were processed not only perceptually but also conceptually, whereas the processing level of unattended words whose semantic representation was not consistent with the pictures still needs further study. Keywords: inattentional blindness, consistency of semantic representation, selective attention. Acknowledgment: This research was supported by the Technology R&D Program of Hebei Province (Grant No. 11457202D-57), the Natural Science Foundation of Hebei Province (Grant No. C2012205046) and the Social Science Foundation of Hebei Province (Grant No. HB10VJY033).
A Neurocomputational Account of Mental Curve Tracing
A neural model is proposed whose temporal dynamics simulate the properties of mental curve tracing. Behavioral studies have revealed that tracing is a time-consuming process that serially spreads an attentional label along the target curve. However, the speed of tracing is not fixed and can be flexibly adjusted depending on the density of image elements. Single-unit recordings in the monkey primary visual cortex showed that tracing is associated with an elevated firing rate for neurons whose receptive fields fall on the traced curve. In order to explain the behavioral and neurophysiological findings, the proposed model implements a novel form of neural filling-in that enables activity to spread along the target curve. Filling-in occurs at multiple spatial scales in order to account for different speeds of tracing. The model implements object-level competition and selection among distinct image elements. Computer simulations showed that the model exhibits appropriate scaling of tracing speed with the distance between curves, thus simulating the hot spots of attention: the speed of tracing slows down when the curves are close to each other and speeds up when they are far apart. Furthermore, the model is able to store the traced pattern in short-term memory.