Face processing

The congruency effect in the composite face paradigm

Bozana Meinhardt-Injac, Malte Persike and Günter Meinhardt

The inability of observers to judge face parts independently suggests that faces are perceived holistically (Tanaka & Farah, 1993). Using the composite face paradigm (CFP; Young, Hellawell, & Hay, 1987) we show that the congruency effect (CE), indicating that non-attended face parts affect judgements of the attended face parts, is strong (about 20%) when (i) presentation is brief, (ii) no feedback about correctness is provided, and (iii) observers are informed about the target parts shortly before the test image and are uninformed at study (conditions A). CEs attenuate to about 5% at relaxed viewing times, when observers receive feedback about correctness, and when they are informed about the target features right from the beginning of the trial (conditions B). Moreover, there is a strong response bias toward the "different" category for conditions A but not for conditions B. Since no CEs are observed with non-facial stimuli (watches), the results indicate that the CE is face specific, but different in nature for conditions of high and low levels of control and feature certainty. For conditions A the CE reflects mostly perceptual (holistic) effects; for conditions B, however, holistic effects are overlaid by effects of selection and control at the decisional level.

The contributions of external and internal features to face discrimination

Gunter Loffler, Andrew J Logan, Sara Rafique and Gael E Gordon

Face discrimination requires internal (e.g. eyes) and external (e.g. head-shape) feature information to be combined. We quantified the contributions of different features for unfamiliar faces. Discrimination thresholds were determined for synthetic face stimuli under the following conditions: (i) ‘full-face’: all face features visible and modified by equal amounts; (ii) ‘individual feature’: all features visible, one feature modified; (iii) ‘isolated feature’: single feature presented in isolation. Features were eyes, nose, mouth, eyebrows, head-shape and hairline. Performance for isolated features was poorer than for the full-face condition for all features but head-shape. Average threshold elevations were 0.84, 1.08, 2.12, 3.24, 4.07 and 4.47 for head-shape, hairline, nose, mouth, eyes and eyebrows respectively. Hence, for eyes to be discriminable in isolation, a 4.07× greater change is required than when they are presented within the full face. Threshold elevations were higher for the individual-feature conditions than for the isolated conditions (0.94, 1.74, 2.67, 2.90, 5.94 and 9.94). Observers are therefore better at discriminating isolated features than the same features embedded in an otherwise fixed face. Similar thresholds for head-shapes and full faces suggest that observers rely heavily on head-shape when discriminating unfamiliar faces. The pattern of threshold elevations is consistent with lowest sensitivity for features affected by face dynamics, e.g. expression (eyes, eyebrows and mouth).

Orientation tuning for faces in the Fusiform Face Area and Primary Visual Cortex

Valerie Goffaux, Felix Duecker, Christine Schiltz and Rainer Goebel

Face identity processing is tuned to horizontally-oriented cues. Here we used fMRI to investigate the neural correlates of this horizontal tuning in the Fusiform Face Area (FFA) and V1. Eight subjects viewed blocks of upright, inverted, and phase-scrambled faces filtered to preserve a 20° orientation range centered either on horizontal, vertical, or oblique orientations. Univariate analysis revealed that the FFA responded most strongly to upright-horizontal faces, whereas V1 showed no orientation preference. Linear support vector machines were then used to decode stimulus category (upright, inverted, scrambled) or orientation content (horizontal, vertical, left-oblique, right-oblique) based on FFA and V1 activation patterns. In the FFA, classification of stimulus category was significantly better for upright-horizontal faces than for upright-vertical faces. No orientation preference was found for inverted and scrambled faces. In contrast, category decoding was comparable across vertical and horizontal conditions in V1. When decoding orientation, high accuracies were obtained in V1 for upright and inverted faces, whereas classification performance was close to chance for scrambled faces. In the FFA, orientation decoding was close to chance level in all stimulus categories. These results indicate that (1) the FFA is tuned to horizontally-oriented information selectively when processing upright faces, and that (2) this horizontal tuning is not passively inherited from V1.
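The multivariate analysis above can be illustrated with a minimal, hypothetical sketch: a linear SVM classifying stimulus category from voxel activation patterns, cross-validated for decoding accuracy. All data here are simulated and all variable names (voxel counts, block counts) are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch of linear-SVM decoding from activation patterns.
# Data are simulated; this is not the study's actual analysis code.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_blocks = 100, 24  # assumed: 100 ROI voxels, 24 blocks per category

# Simulate three categories (upright, inverted, scrambled) as Gaussian noise
# around small category-specific mean patterns.
means = rng.normal(0.0, 0.5, size=(3, n_voxels))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_blocks, n_voxels))
               for c in range(3)])
y = np.repeat([0, 1, 2], n_blocks)  # 0=upright, 1=inverted, 2=scrambled

# Cross-validated decoding accuracy; chance level is 1/3 for three classes.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=6)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Comparing such accuracies against the 1/3 chance level (and across conditions) is what licenses statements like "decoding was close to chance for scrambled faces."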

When faced with faces: individual differences in face perception and recognition

Roeland J. Verhallen, Gary Bargary, Jenny M. Bosten, Patrick T. Goodbourn, Adam J. Lawrance-Owen and J. D. Mollon

Despite the importance of being able to recognise faces, not everybody shows equal performance. To examine this variation between individuals, we tested 395 participants (251 females; mean age 24.2) on four well-established tests of face perception and recognition ability: the Mooney Face task, the Glasgow Face Matching Test (GFMT), the Cambridge Face Memory Test (CFMT), and the Composite Face task. Participants also gave a subjective rating of their ability on a scale of 1 to 10 before testing. Results show a broad distribution of performance on all four tests, with scores ranging from 35% to 100% correct. Spearman correlations show a significant positive relationship between subjectively rated ability and performance on each test. Furthermore, performance on each test is significantly correlated with that on every other test. The GFMT and CFMT have the strongest correlation (ρ = .49) and the Mooney Face task and the Composite Face task the weakest (ρ = .20). Thus, the shared variance never exceeds 25% and is often much smaller. This could imply that these four tests each primarily measure different aspects of face perception. Our results thereby indicate how multifaceted the perception and recognition of faces is.
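The shared-variance figure above follows from squaring the correlation coefficient (for Spearman's ρ this is strictly the shared variance of the ranks, so treat it as an approximation). A minimal check using the two correlations reported in the abstract:

```python
# Shared variance approximated as the square of the reported correlation.
rho_strongest = 0.49  # GFMT vs. CFMT
rho_weakest = 0.20    # Mooney Face task vs. Composite Face task

shared_strongest = rho_strongest ** 2  # 0.2401, i.e. just under 25%
shared_weakest = rho_weakest ** 2      # 0.04, i.e. 4%

print(f"{shared_strongest:.1%}, {shared_weakest:.1%}")  # → 24.0%, 4.0%
```

This is why even the strongest pairwise relationship leaves roughly three quarters of the variance unshared, supporting the claim that the tests tap partly distinct abilities.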

Extensive visual training in adulthood significantly reduces the face inversion effect

Giulia Dormal, Renaud Laguesse, Aurélie Biervoye, Dana Kuefner and Bruno Rossion

Human adults are poor at recognizing inverted faces, that is, at recognizing visual stimuli that are as complex as upright faces, yet are neither preferentially attended at birth nor visually experienced during development. This lower performance for recognizing inverted relative to upright faces constitutes one of the best-known and most robust behavioral effects documented in the field of face processing (Rossion, 2008, Acta Psychologica, 128, 274–289). Here we investigated whether extensive training at individualizing a large set of inverted faces in adulthood could nevertheless modulate the inversion effect for novel faces. Four observers were trained for 2 weeks (16 hours) at individualizing a set of 30 inverted face identities presented under different depth-rotated views. Following training, they all showed a significant reduction of their inversion effect for novel face identities, compared both with the magnitude of the effect before training and with the magnitude of the face inversion effect in a group of untrained participants. These observations indicate for the first time that extensive training in adulthood can lead to a significant reduction of the face inversion effect, suggesting a greater degree of flexibility of the adult face processing system than previously thought.