Attention I

Effective processing of masked eye-gaze requires volitional control

Shahd Al-Janabi and Matthew Finkbeiner

Extant literature indicates that averted eye-gaze cues orient spatial attention. Despite the ease with which gaze-triggered shifts of attention occur, however, there remain important questions about the automaticity of one’s response to averted gaze. The aim of the present study is to investigate this issue by determining whether shifts of attention to eye-gaze cues can occur when the cues are masked. While we find that unmasked eye-gaze cues are effective in producing a validity effect in a central cueing paradigm, we also find that the efficacy of masked eye-gaze cues is sharply constrained by experimental context. Specifically, masked eye-gaze cues only produced a validity effect when they appeared in the context of predictive unmasked eye-gaze cues. Unmasked eye-gaze cues, in contrast, produced validity effects across a range of experimental contexts, including when 80% of the cues were invalid. These findings demonstrate that, unlike unmasked eye-gaze cues, the effective processing of masked eye-gaze cues is volitional.

Visual Attention: Is Posner’s beam the same as Treisman’s glue?

Robert Snowden

Many experiments show that attending to a particular stimulus enhances a person’s ability to process that stimulus. Two major proposals are that attention enables (1) faster and more accurate detection of the stimulus (Posner’s “beam”), or (2) the features of a stimulus to be combined (Treisman’s “glue”). Are these simply two manifestations of the same mechanism? If attention is required to combine features (e.g., colour and shape), then tasks that require feature combinations should show greater effects of attentional manipulation than those that do not. Using a simple cueing paradigm, we compared the effects of attention on tasks that required either a single feature to be discriminated (colour or shape) with one that required the combination of such features (colour and shape). In separate experiments, we manipulated spatial attention via endogenous or exogenous cues. In both experiments clear effects of cue validity and task were obtained, but there was no interaction between these variables. Similar results were also obtained using a comparison of single lines (feature) versus shape (conjunction). Our results suggest that the guidance of attention (Posner’s beam) is not the same as the binding of stimulus features (Treisman’s glue) and support the notion of two distinct processes.
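The logic of the null interaction can be made concrete with a toy calculation: the validity effect is the invalid-minus-valid RT difference per task, and the validity-by-task interaction is the difference between those effects. The RT values below are invented for illustration; the abstract reports no numbers.

```python
# Hypothetical mean RTs (ms) for a 2 x 2 design: cue validity (valid/invalid)
# x task (single-feature vs. feature-conjunction discrimination).
rt = {
    ("valid", "feature"): 420.0, ("invalid", "feature"): 455.0,
    ("valid", "conjunction"): 510.0, ("invalid", "conjunction"): 545.0,
}

# Validity effect = invalid RT - valid RT, computed separately for each task.
effect_feature = rt[("invalid", "feature")] - rt[("valid", "feature")]
effect_conjunction = rt[("invalid", "conjunction")] - rt[("valid", "conjunction")]

# The validity x task interaction is the difference between the two validity
# effects; "no interaction", as reported in the abstract, means this is near
# zero even though conjunction RTs are slower overall.
interaction = effect_conjunction - effect_feature
print(effect_feature, effect_conjunction, interaction)  # 35.0 35.0 0.0
```

Under the glue hypothesis, the conjunction task would be expected to show the larger validity effect, i.e. a positive interaction term.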

Emotion potentiates the effect of attention on appearance

Antoine Barbot and Marisa Carrasco

Attention enhances apparent contrast (Carrasco, Ling & Read, 2004). Emotional cues potentiate the effect of attention on contrast sensitivity (Phelps, Ling & Carrasco, 2006). Here, we investigated whether emotion modulates the effect of attention on both contrast sensitivity and appearance. In each trial, observers saw two simultaneous gratings (40 ms) at iso-eccentric locations. These stimuli appeared 120 ms (Exp.1) or 580 ms (Exp.2) after the onset of either a peripheral cue adjacent to one location (focal attention) or two cues adjacent to both locations (distributed attention). The cues were a set of Ekman faces of either neutral or fearful expression, upright or inverted (control). Observers were asked to report the orientation of the higher-contrast grating. In Exp.1, upright, but not inverted, fearful faces increased the effects of attention on orientation discrimination and enhanced apparent contrast relative to neutral faces. These effects were absent in Exp.2, indicating that the performance and appearance effects are due to the transient nature of exogenous attention. These findings provide strong evidence that emotion interacts with attention at early stages of visual processing. Thus, emotion potentiates the effect of attention not only on performance but also on appearance, altering the way we see. Support: NIH R01-EY016200

A bias against higher-level processing in the learning of individuals with autism: observations in an image-feature-level visual search task under interference from object-level information

Li Zhaoping, Mara Tribull and Sarah White

Four search stimulus types were interleaved: a letter ‘N’ target among its mirror reversals (N-search), the mirror-reversal target among letter ‘N’s (RN-search), a tilted X-shape target (X-search), or its thinner variant target (SX-search), among rotated versions of the X shape. Observers were told that each target was defined by having a uniquely oriented bar (feature) in the image. However, target and non-target object shapes were rotated or reflected versions of each other except in the SX-search. Viewpoint-invariant, task-irrelevant shape recognition interferes with the feature detection task when observers confuse the target and non-targets by their identical shape [Zhaoping and Guyader, 2007, Current Biology, 17:26-31]. Interference (which distinguishes X-search from SX-search) mainly causes a prolonged latency to report the target after the observer’s gaze reaches it during search. Compared with typically developed control subjects, high-functioning adults with autism spectrum disorder displayed marginally weaker interference during the initial search trials. With more trials, their gaze came to reach the target no more slowly than that of controls, but they improved significantly less in resisting the subsequent interference. We discuss how this learning bias against higher-level processing for the top-down control of task strategy may relate to a local processing bias in autism.

Treating others as intentional agents influences our own perception: an EEG study

Agnieszka Wykowska, Eva Wiese and Hermann Müller

Directing attention to where others look is the basis for efficient social interaction. Accordingly, other people’s gaze direction guides our attention towards potentially relevant locations. With the use of the so-called gaze-cueing paradigm, it has been shown that targets are typically detected, identified, or localized better at locations that were gazed-at by a centrally presented face, relative to other locations [Friesen & Kingstone, 1998, Psychon Bull Rev 5: 490-495]. Our previous findings [Wiese et al., submitted] showed that orienting attention to where others look is modulated by whether people believe that the gazer represents human or nonhuman behavior. In the present study, we used the EEG/ERP method and a gaze-cueing paradigm with human and robot faces to examine whether treating an interaction partner as an intentional agent influences the readiness to engage in social interactions, as measured by gaze cueing effects. Results showed that the gaze-cueing effects reflected in the P1 ERP component were modulated by the type of the gazer (human or robot). Based on this, we conclude that higher-level social/cognitive processes such as adopting an intentional stance towards an observed agent influence early mechanisms of perceptual selection.

Stimulus Context Modulates the Speed of Visual Attention in Illusory Figure Localization

Thomas Töllner, Markus Conci and Hermann Josef Müller

In classic visual-search paradigms, processing times for feature singleton targets are typically speeded with decreasing target-distracter similarity. Recently, this well-known and extensively studied effect has been demonstrated to originate from a pre-attentive processing stage: the coding of stimulus saliency [Töllner et al., 2011, PLoS ONE, 6(1), e16276]. In particular, the conspicuity of target objects was strongly tied to the timing of the posterior-contralateral-negativity (PCN) component, which is triggered based on the outcome of early sensory feature-contrast computations. In everyday life, however, most objects retain their identity while the context they are embedded in changes. This raises the question of whether changing the context that surrounds a fixed target identity influences focal-attentional selection times in the same way as changing the target identity against a fixed context. To approach this question, we employed an illusory-figure search task which required participants to localize (left versus right) a Kanizsa square, composed of four inward-facing pacman inducers, amongst seven non-target configurations, composed of either one, two, or three inward-facing (together with one outward-facing) pacman elements. Our results revealed that PCN latencies were increasingly delayed the less the target differed from its surround, demonstrating that stimulus context can bias target selection in human visual cortex.

EEG cross-frequency interaction during an RSVP task

Chie Nakatani and Cees van Leeuwen

Rapid serial visual presentation (RSVP) is a method in which a train of visual stimuli is presented consecutively at a rapid rate, typically about 10 Hz. Some stimuli in the train are targets and the others are non-targets; in a typical RSVP task, observers are asked to report the targets. RSVP stimuli evoke oscillations in the EEG at the same frequency as the RSVP. We hypothesized that, with practice, the brain becomes entrained to this exogenous activity by increasing its coupling with endogenous EEG oscillations. We recorded EEG while participants performed an “attentional blink” task, in which they had to report two targets amongst 17-20 stimuli presented in 10 Hz RSVP. The 8.0-12.0 Hz band of the EEG was considered RSVP-evoked activity. Based on findings from our previous studies, theta-band (4.0-8.0 Hz) EEG was considered task-relevant but non-RSVP-evoked activity. We computed the amplitude-phase coupling between the 10 Hz amplitude and the theta phase. Strong coupling was observed at parietal, occipital and posterior temporal loci. In accordance with our hypothesis, the coupling strength in the occipital and right temporal regions increased between sessions.
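An amplitude-phase coupling analysis of this kind can be sketched with a standard mean-vector-length estimator: filter out the phase-providing and amplitude-providing bands, take the Hilbert transform of each, and average the amplitude-weighted phase vectors. The estimator, filter settings, and synthetic signals below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=2):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mvl_coupling(x, fs, phase_band=(4.0, 8.0), amp_band=(8.0, 12.0)):
    """Mean-vector-length estimate of amplitude-phase coupling: how strongly
    the amp_band envelope is locked to the phase of the phase_band rhythm."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic 20 s signal: a 6 Hz "theta" rhythm plus a 10 Hz "RSVP-evoked"
# oscillation whose amplitude is modulated by the theta wave.
fs = 250.0
t = np.arange(0, 20, 1 / fs)
theta = np.cos(2 * np.pi * 6 * t)
carrier = np.cos(2 * np.pi * 10 * t)
coupled = theta + (1 + 0.8 * theta) * carrier  # 10 Hz amplitude follows theta
uncoupled = theta + carrier                    # both rhythms, no coupling

# Caveat: the amplitude band must be wide enough to pass the modulation
# sidebands (10 +/- 6 Hz here), or the extracted envelope is flat and the
# coupling is missed.
wide = (3.0, 17.0)
mvl_coupled = mvl_coupling(coupled, fs, amp_band=wide)
mvl_uncoupled = mvl_coupling(uncoupled, fs, amp_band=wide)
print(mvl_coupled > mvl_uncoupled)
```

The coupled signal yields a clearly larger mean vector length than the uncoupled one, which is the kind of contrast a between-session increase in coupling strength would rest on.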

Does visual search have a memory?

Svetlana Bialkova, Andrey Nikolaev and Cees van Leeuwen

We tested the memory characteristics of visual search in an event-related potential study. Participants searched for a target letter presented among nontarget letters. We varied target identity (switch to new vs. swap with the nontarget from the previous trial), nontarget identity (switch to new vs. swap with the target from the previous trial), target location (repeat, switch, swap with nontarget), and nontarget location (repeat, switch, swap with target). Repeated target locations were responded to faster and more accurately than switched ones, and performance slowed when the nontarget and target swapped their locations. The amplitude of the N2pc component was smaller when the target location switched than when it repeated. The amplitude of the N1 component was largest when target and nontarget swapped their roles and the target location switched, and smallest when both target and nontarget switched to new identities and the target location repeated. These data show that visual search has memory for target location and identity, but that nontarget identity and location also play a role in processing.