Linking face-spaces for emotion and trait perception
Atsunobu Suzuki, Nobuyuki Watanabe, Ryuta Suzuki, Hiroyuki Yoshida and Hiroshi Yamada
Physiognomic features that resemble an emotional expression give rise to trait impressions like those elicited by the emotion, which has been called an emotion overgeneralization effect (Zebrowitz, 1997, Reading Faces, Boulder, CO, Westview Press; Zebrowitz and Montepare, 2008, Soc Pers Psychol Compass, 2/3, 1497-1517). For example, the subtle resemblance of neutral faces to happy and angry expressions disposes us to perceive that a person should be approached or avoided, respectively, resulting in trustworthy and untrustworthy impressions (Oosterhof and Todorov, 2008, Proc Natl Acad Sci USA, 105, 11087-11092). In this study, we examined the emotion overgeneralization effect by analyzing how the major dimensions of emotion perception (i.e., pleasantness and arousal; Russell and Bullock, 1985, J Pers Soc Psychol, 48, 1290-1298) and trait perception (i.e., trustworthiness and dominance; Todorov et al., 2008, Trends Cogn Sci, 455-460) are related to one another. A total of 150 neutral faces were rated by 443 participants. Results showed that pleasantness and arousal were both correlated positively with trustworthiness, while they were correlated negatively (although weakly) and positively with dominance, respectively. Our findings suggest a close but possibly oblique relationship between the face-spaces for emotion and trait perception.
Age-related contextual modulation in face recognition
Fatima Maria Felisberti, Jana Wanli, Rebecca Cox and Cassandra Dover
Social exchanges rely on efficient recognition of potential cooperators and cheaters. A previous study showed that face recognition can be modulated by the social context during encoding [Felisberti & Pavey, 2010, PLoS ONE], but possible aging effects are not known. Here, behavioural descriptors were tagged to faces in a scenario involving money exchanges during memorization. The three descriptors contained no rules of the social contract and no moral values (cheating, cooperative or neutral behaviours were implicit). Participants (N = 170) had to complete an old/new recognition task. Results showed an increase in false alarms and reaction time with age. Hit rates and sensitivity to faces of 'cooperators' were higher than for 'cheaters' in both young (<30 years) and old (>55 years) adults. Differences between cooperators and cheaters were attenuated when the person lending money to hypothetical friends changed from an unknown person to the participants themselves, but reaction time was still longer for cheaters. Although seniors might have been exposed to more cheaters over their lifespan, their recognition of cheaters was not significantly better than that of young adults, which suggests an age-invariant contextual bias towards prosocial behaviours in face recognition.
Investigating the Orthogonality of Configural and Holistic Processing Mechanisms in Face Recognition
Elizabeth Nelson, Nicholas Watier, Isabelle Boutet and Charles A. Collin
There is a lack of consistency in conceptual terminology, as well as debate in the literature, about how the generally accepted processing types in face recognition (featural, configural, and holistic) are to be distinguished from each other. This study compared within-subject performance across four seminal face perception tasks thought to tap into either holistic or configural processing mechanisms: face inversion, part-whole, configural/featural difference detection, and composite face identification. This was done in order to determine whether, and to what extent, performance on these tasks is related, and therefore how many distinct mechanisms underlie face processing. Each task was implemented as a 2AFC sequential matching paradigm with a 2×2 design. Twenty participants were assessed on all four tasks. Correlational analysis indicates that performance on the part-whole task is correlated with performance on the composite face task, suggesting that these tasks do indeed tap into the same holistic processing mechanism. Similarly, performance on the face inversion and configural/featural tasks correlates, suggesting that they tap into a common configural processing mechanism. This analysis provides evidence as to what each of these widely used tasks is actually measuring, as well as providing evidence for a distinction between configural and holistic modalities in face processing.
The impact of within-person variability on face perception and identification
Psychological studies of face perception have typically ignored within-person variation in appearance, instead emphasising differences between individuals. Studies assume that a photograph adequately captures a person's appearance, and for that reason most use just one, or a small number of, photos per person. In a simple task, participants were given a set of 40 images, made up of 20 images of each of two different males, and were asked to sort the images into 'identities'. With no additional information, participants sorted the images into an average of ten different identities instead of two. In subsequent tasks using different images, the findings were replicated with additional images of visually similar people, visually dissimilar people, and just one person. These findings suggest that photographs are not consistent indicators of facial appearance because they are blind to within-person variability. This observation is critical for our understanding of face processing, and this scale of variability also has important practical implications: for example, our findings suggest that face photographs are unsuitable as proof of identity.
What facial information is important for rapid detection of the face? Comparative cognitive studies between humans and monkeys
Ryuzaburo Nakata, Ryoi Tamura and Satoshi Eifuku
PURPOSE: Based on previous work (Nakata et al, ECVP 2011) suggesting that participants efficiently detected the faces of their own species in visual search tasks, and that inner features (e.g., eyes) of the face were not important for efficiency, this study further explores what contributes to efficient face detection. METHOD: Subjects were two Japanese macaques and human participants. Stimuli consisted of several types of faces and non-face distracter objects. Subjects were asked to detect an odd element (the face) in arrays of distracters (non-face objects) of different set sizes (4-20). RESULTS AND DISCUSSION: Both humans and monkeys efficiently detected faces with low spatial frequency components, and faces with which they had less visual experience (other-race faces for humans, and rhesus monkey faces for Japanese macaques); however, they did not efficiently detect faces with high spatial frequency components, silhouettes of faces, or the backs of heads. These results suggest that low spatial frequency information contained within the outer features of own-species faces may act as antecedent information for detecting a face in the face-processing mechanism.
Reduction of the perceptual field for inverted faces: evidence from gaze contingency with full view stimuli
Goedele Van Belle, Philippe Lefèvre and Bruno Rossion
Displaying only the fixated part of a face by gaze-contingency decreases the face inversion effect (FIE), while masking the fixated part increases the FIE [Van Belle et al., JOV, 2010]. This observation indicates that the FIE is due to a difficulty in simultaneously perceiving multiple facial parts outside of the fixated part of inverted faces. Here we aimed at directly observing the differential use of central and peripheral information in upright and inverted faces, with faces fully visible. Fourteen participants had to match a reference face to one of two simultaneously presented faces in full view (Figure 1). The reference face was a combination of the two other faces in a gaze-contingent way: the part in the center of gaze equaled the corresponding part of one of the faces, while the peripheral part equaled the other face. The reference face was updated upon each gaze position shift. The proportion of choices of the answer alternative corresponding to the centrally presented part of the reference face was higher for inverted than for upright faces. These observations confirm the narrower perceptual field for inverted than for upright faces, supporting decreased holistic processing for inverted compared with upright faces.
Right perceptual bias and self-face advantage in congenital prosopagnosics
Manuela Malaspina, Andrea Albonico and Roberta Daini
The left perceptual bias refers to the tendency to base our judgments more on the right half of a face (Gilbert and Bakan, 1973, Neuropsychologia, 11(3), 355-362), while the self-face advantage consists in faster responses in recognizing our own face, suggesting different processing for our own and others' faces (Keenan et al., 1999, Neuropsychologia, 37(12), 1421-1425). The aim of this study was to verify the existence of an interaction between these two perceptual effects in normal and dysfunctional conditions, allowing us to better understand the mechanisms underlying the psychophysiological processing of faces. In particular, reaction times and accuracy were recorded from 13 subjects with congenital prosopagnosia and 13 subjects with typical development, matched by age and sex, during a matching task involving chimeric stimuli depicting their own and others' faces. Both groups showed better performance in recognizing their own face than others' faces, confirming the self-face advantage. Moreover, while the control group had higher accuracy when their own left half-face fell on the left side of the chimeric stimuli, the experimental group showed better performance when their own right half-face fell on the right (right perceptual bias), suggesting a different lateralization of face processing in these subjects.
The face of terrorism: Stereotypical Muslim facial attributes evoke implicit perception of threat
Geza Harsanyi, Marius Raab, Vera M. Hesslinger, Denise Düclos, Janina Zink and Claus Christian Carbon
Al-Qaida's founder Osama bin Laden wore highly iconic paraphernalia [Carbon, 2008, Perception, 37(5), 801-806], namely a turban and a characteristic beard. As the media consistently presented him in this distinctive style, his outward appearance formed a visual stereotype of Islamist terrorists that, in most cases, did not match the appearance of Islamist assassins. Using the multidimensional Implicit Association Test [md-IAT; Gattol et al., 2011, PLoS ONE, 6(1), e15849], we tested the effect of adding such stereotypical paraphernalia to male Caucasian faces ('Muslim-version'): compared to the original, non-manipulated versions, the 'Muslim-versions' were evaluated as being more irrational, unintelligent, unreliable and, particularly, as being more dangerous. Importantly, non-psychologists' and psychologists' data did not show any significant difference concerning these implicit measures, but did so for explicit measures assessed by a further test. This dissociated data pattern demonstrates that iconic presentations elicit stereotypical associations independently of explicit reports. We argue that visual attributes like a particular kind of beard and a turban are associated with conformity to Islam, which is in turn associated with terrorist threat. More generally, the results suggest that the mere presence of visual attributes can induce implicit black-and-white categorization and undifferentiated prejudice toward people of other cultures.
Contrast reversal of the eyes diminishes infants' face processing
Hiroko Ichikawa, Yumiko Otsuka, So Kanazawa, Masami K. Yamaguchi and Ryusuke Kakigi
The contrast polarity relationship between sclera and iris is important for face recognition. Gilad et al. [2009, PNAS, 106, 5353-5358] reported that the contrast polarity around the eyes (darker eye region) is critical for familiar face recognition and for FFA activation in adults. Otsuka et al. [under review] have reported that 7- to 8-month-olds discriminate between faces only when the contrast polarity of the eyes is preserved, irrespective of the contrast polarity of the rest of the face. In the present study, we investigated the effect of the contrast polarity of the eyes on face-related neural activity in infants by using near-infrared spectroscopy (NIRS). We measured hemodynamic responses in the bilateral temporal areas of 7- to 8-month-old infants. The hemodynamic responses to faces with positive eyes and those with negative eyes were compared against the baseline activation during the presentation of object images. We found that the presentation of faces with positive eyes increased the concentration of oxy-Hb and total-Hb in the right temporal area compared to baseline, while no such change occurred for the presentation of faces with negative eyes. Our results suggest the importance of the contrast polarity of the eyes in face-selective neural responses from early in development.
Contribution of cardinal orientations to the "Stare-in-the-crowd" effect
Sanae Okamoto-Barth and Valerie Goffaux
Evidence has shown that the processing of face identity relies on horizontally-oriented cues, with little contribution of vertically-oriented cues. Besides identity, faces convey a wealth of fundamental social cues such as gaze. We investigated whether the processing of gaze is tuned to horizontal orientation as observed for identity. Participants were presented with arrays of six faces and instructed to search for a target face with either direct gaze (DG) or averted gaze (AG). The 'stare-in-the-crowd' effect refers to the observation that DG is more easily detected than AG. Faces were filtered to preserve a 20°-orientation range centered either on horizontal or vertical orientation (H and V conditions, respectively). In a third condition, horizontal plus vertical information was preserved (HV) by summing the H and V filtered images. Our results replicate the 'stare-in-the-crowd' effect; namely, detecting DG was overall more accurate and faster than detecting AG. More importantly, the 'stare-in-the-crowd' effect was significant only for vertically-filtered faces, in trials where a DG target was present. The same pattern was observed for RTs. These findings suggest that although horizontal information is central to the processing of face identity, vertical information contributes to the processing of some core social signals conveyed by faces.
Psychological Distance and Face Recognition: Thinking about Own Local Place Impairs Face Recognition
Kyoko Hine and Yuji Itoh
This study investigated the effect of psychological distance on face recognition. According to Construal Level Theory (Liberman & Trope, 1998), thinking about a psychologically proximal event activates featural information. If so, this activation may evoke featural processing that is carried over to a subsequent face recognition task. Featural processing is also said to decrease the accuracy of face recognition. Because previous work found that proximal temporal distance impaired face recognition (Wyer, Perfect, & Pahl, 2010), we predicted that proximal spatial distance would also impair face recognition. Participants (N=64) were randomly assigned to one of three conditions (near distance, far distance, control). Participants in all conditions watched a video depicting a crime. After watching the video, participants in the near distance condition imagined what they did in their local place, whereas participants in the far distance condition imagined what they did in a foreign country. Participants in the control condition completed a filler task. Finally, all participants took a face recognition test. The accuracy of face recognition in the near distance condition was significantly lower than that in the far distance and control conditions. This result supports the view that psychological spatial distance influences facial memory.
Facial distinctiveness is affected by facial expressions
Nozomi Takahashi and Hiroshi Yamada
Bruce and Young's (1986) model posited that the processes underlying facial identity and facial expression recognition are independent. However, recent studies have shown possible interactions between those processes [e.g. Schweinberger & Soukup, 1998, Journal of Experimental Psychology, 24(6), 1748-1765; Fox et al, 2008, Journal of Vision, 8(3), 1-13]. Relating to this issue, we examined whether facial distinctiveness is affected by facial expressions. We used as stimuli 168 images of twenty-four persons' faces (twelve male) with a neutral expression and six emotional expressions (happiness, surprise, fear, sadness, anger and disgust). Seventy participants were randomly assigned to one of seven groups, in which ten participants rated each of the 24 different faces showing the same one of the six facial expressions, or neutral, in terms of how easy they thought the person would be to spot in a crowd, on an 8-point Likert scale. The results showed that mean distinctiveness ratings of neutral faces were highly and significantly correlated with those of the corresponding happy faces but not with those of the sad faces, indicating that a happy face maintains the distinctive properties of the neutral face whereas a sad face does not. We will discuss the plausible interactions between facial identity and facial expression recognition based on these results.
Attractiveness enhances the perceived familiarity of unfamiliar faces but not familiar faces
Isabel M. Santos, Felicia Toyn, Bobby Watson and Chris Longmore
Previous research has indicated that the perceived familiarity of an unfamiliar face is positively correlated with the attractiveness of the face and that brief exposure to previously unfamiliar faces increases the strength of this correlation [Peskin and Newell, 2004, Perception, 33, 147-157]. In addition, the familiarity ratings of even highly familiar faces are influenced by the expression of the face with positive expressions yielding higher familiarity ratings [Lander and Metcalfe, 2007, Memory, 15, 63-69]. In the study reported, we attempted to establish whether the perceived familiarity of both familiar and unfamiliar faces is affected by the attractiveness of the face. Participants were shown four groups of faces (familiar attractive, familiar less-attractive, unfamiliar attractive, unfamiliar less-attractive) and were asked to rate each face for familiarity. The results indicated that attractiveness had an effect on the perceived familiarity of unfamiliar faces with attractive unfamiliar faces being rated significantly more familiar than less-attractive faces. No difference was found for the familiarity ratings of familiar faces. The results indicate that unlike expressions, attractiveness does not influence the perceived familiarity of familiar faces. It is suggested that non-changeable aspects of a face (such as attractiveness) do not influence familiarity once a face is sufficiently familiar.
Can a test battery reveal subgroups in congenital prosopagnosia?
Janina Esins, Isabelle Bülthoff, Ingo Kennerknecht and Johannes Schultz
Congenital prosopagnosia, the innate impairment in recognizing faces, exhibits diverse deficits. Due to this heterogeneity, the existence of subgroups of the impairment has been suggested (e.g. Kress & Daum, 2003, Behavioural Neurology, 14, 109-21). We examined 23 congenital prosopagnosics (cPAs), identified via a screening questionnaire (as used in Stollhoff, Jost, Elze, & Kennerknecht, 2011, PLoS ONE, 6, e15702), and 23 age-, gender- and education-matched controls with a battery consisting of nine different tests. These included well-known tests such as the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-85) and a Famous Face Test (FFT), as well as newly developed tests of object and face recognition. As expected, cPAs had lower CFMT and FFT scores than the controls. Analyses of the performance patterns across the nine tests suggest the existence of subgroups within both cPAs and controls. These subgroups could not be revealed on the basis of the CFMT and FFT scores alone, indicating the necessity of tests addressing different, specific aspects of object and face perception for the identification of subgroups. Current work focuses on characterizing the subgroups and identifying the most useful tests.
Classifying faces as faces: effects of identity and expression strength
Andrew Skinner and Christopher Benton
A variety of different experimental approaches have provided evidence that visual representations of facial identity are coded in a multi-dimensional face space. Recently, adaptation studies have suggested that facial expression might be represented this way too. Here, we explored this apparent similarity between the representations of identity and expression by revisiting the effect that distinctiveness has on our ability to classify visual stimuli as faces, one of the phenomena that originally inspired the development of the multi-dimensional face space model. For the first time, we compared the effects that variations in the distinctiveness of both identity and expression (achieved here by varying the strength of caricaturing) have on performance in classifying faces as faces. We replicated previous findings showing that it takes longer to classify faces with distinctive (higher strength) identities as faces, in line with the predictions of face space. Crucially, we observed the same pattern of results for expressions. Our findings provide new evidence that identity and expression are represented in a similar manner. This aligns with an emerging view in which visual representations of identity and expression are not separate, and may exist within a single representational framework.
The McGurk effect is not affected by face orientation
Priscilla Heard and Harriet Osborne
The well-known McGurk effect (McGurk, H., & MacDonald, J., 1976, Nature, 264, 746-748) arises with incongruent audio-visual speech stimuli: the visual lip movements affect the perception of the auditory stimulus so that a sound different from the stimulus sound is perceived or 'heard'. The visual lip movements of Ga dubbed with the audio of Ba give the percept of Da. Various workers have reported that the effect is weaker when the face is presented at angles other than upright (Jordan, T. R., & Bevan, K., 1997, Journal of Experimental Psychology: Human Perception and Performance, 25, 388-403). We investigated this further at eight different angles (0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°). Thirty participants were tested with four different incongruent stimuli, such as Ga (visual) with Ba (audio). If the response was Da, it was classified as a 'McGurk' response; if it was Ba, it was classified as a correct audio response. Eye tracking was recorded to check for attention. No main effect of orientation was found, although there were highly significant differences between the incongruent stimuli. The results will be discussed with reference to brain processing for face recognition and speech reading.
Neural Correlates of Eye Contact and the Mona Lisa Effect
Evgenia Boyarskaya, Oliver Tuescher and Heiko Hecht
Perception of eye contact and averted gaze activates distinct behavioral and cognitive mechanisms and might be processed by dissociable brain networks. Gamer and Hecht (2007) proposed the metaphor of a cone of gaze to describe the range of gaze directions within which a person feels looked at; the width of the gaze cone is about nine degrees of visual angle. Thus, neural activation patterns should differ depending on whether a gaze is directed straight at the viewer, at the edge of the gaze cone, or is clearly averted. We conducted an fMRI study to locate the brain regions involved in mutual gaze detection, presenting portraits with varying gaze directions. An irrelevant task was added to ascertain that subjects looked at the portraits' eyes. Moreover, the portraits were presented at central and laterally displaced positions. Real heads cease to make eye contact when displaced laterally; portrait heads, however, continue to gaze at the observer. This is the so-called Mona Lisa effect, which is remarkably robust and breaks down only at extremely oblique vantage points. We hypothesized that the same brain areas process gaze in physical and pictorial situations. The cortical activation patterns were found to differ depending on vantage point.
Abnormal response-dynamics of face-related areas in congenital prosopagnosia
Kornél Németh, Márta Zimmer, Éva Bankó, Zoltán Vidnyánszky and Gyula Kovács
Congenital prosopagnosia (CP) is a life-long disorder of face perception. We investigated the neural bases of CP in three patients from one family (father, daughter and son), as well as in healthy, age-matched controls, using combined neuropsychological, electrophysiological and functional magnetic resonance imaging (fMRI) methods. Neuropsychological tests demonstrated significant impairments of face perception/recognition in each patient. To reveal impairments of the core face-processing network, we presented faces and nonsense objects in a block-design fMRI experiment to the patients and to a control group. We found that the activity of the fusiform and occipital face areas (FFA, OFA) was reduced compared to controls, but remained category-selective. Analysis of the hemodynamic response function, however, revealed significantly faster and stronger adaptation in all areas (FFA, OFA and the lateral occipital cortex) in the CP patients when compared to controls. Our results emphasize the dysfunction of the core system in CP. Further, they suggest that it is not the magnitude of activation but rather the response dynamics that lies behind the impairments of face perception in CP. This work is supported by The Hungarian Scientific Research Fund (OTKA) PD 101499 (M. Z.).
Evidence of a size underestimation of upright faces
Yukyu Araragi, Takehiro Aotani and Akiyoshi Kitaoka
We quantitatively examined the difference in perceived size between upright and inverted faces using the method of constant stimuli. The stimuli included seven face images modified from two cartoon faces produced by Kitaoka (2007; 2010) and five photographic faces. Using the two cartoon face stimuli, Experiment 1 showed that upright faces were perceived to be significantly smaller than upright and inverted outlines, whereas inverted faces were not perceived to be significantly larger than upright or inverted outlines. Using the five photographic face stimuli, Experiment 2 showed that upright faces were also perceived to be significantly smaller than inverted faces. These results provide quantitative evidence for a size underestimation of upright faces.
Head Size Illusion: Head Outlines Are Processed Holistically Too
Kazunori Morikawa, Kazue Okumura and Soyogu Matsushita
We investigated a novel facial illusion called the head size illusion. When the lower half outline of a face (i.e. the contour of the cheeks and jaw) is wider or narrower than average, the upper half outline (i.e. the head contour above the eyes) also appears wider or narrower than it really is, respectively. We used a staircase procedure to measure the illusion magnitude. We found that the illusion decreases by half when faces are inverted so as to selectively disrupt holistic processing. The illusion also decreases by half when the internal features (i.e. eyes, nose, and mouth) of the faces are erased. These experiments showed that at least 50% of the head size illusion depends on holistic perceptual processing of faces per se. It is well known that facial features are processed holistically. The head size illusion demonstrates that head outlines are also processed holistically, because the lower half outlines influence the perceived shape of the upper half outlines. This phenomenon may be an example of a new class of illusions called 'biological illusions'.
The retrieval of semantic and episodic information from faces and voices: A face advantage
Catherine Barsics and Serge Brédart
Recent findings indicate that semantic and episodic information is more likely to be retrieved from faces than from voices [Damjanovic & Hanley, 2007, Memory and Cognition, 35(6), 1205-1210]. Previous studies investigating this 'face advantage' over voices used famous faces and voices as stimuli, which introduced several methodological difficulties. We present four studies aimed at further examining the differential retrieval of semantic and episodic information from faces and voices. Studies 1 and 2 compare the retrieval of semantic and episodic information from pre-experimentally, personally familiar faces and voices [Brédart, Barsics, & Hanley, 2009, European Journal of Cognitive Psychology, 21(7), 1013-1021; Barsics & Brédart, 2011, Consciousness and Cognition, 20(2), 303-308]. In Study 3, an associative learning paradigm is used in order to strictly control the frequency of exposure to faces and voices; the recall of semantic information from faces and voices is subsequently assessed. In Study 4, the impact of distinctiveness on the retrieval of semantic information from faces and voices is assessed, as distinctiveness could constitute a key factor underlying the face advantage over voices. All results are in line with the face advantage. These findings are discussed in the light of current models of person recognition, and an account in terms of expertise is finally proposed.
Different facets of facial attractiveness: Specification of the relationship between attractiveness, beauty, prettiness and sexual attraction
Ramona A. Luedtke, Vera M. Hesslinger and Claus-Christian Carbon
Research on facial attractiveness repeatedly reveals high correlations between ratings of attractiveness, beauty, prettiness, and sexual attraction. However, attempts to define their specific relationship as facets of a superordinate concept of attractiveness, and to draw precise distinctions between the corresponding terms, are sparse. Employing 80 faces per sex presented in two different modes, the current experiment addressed both aspects. In the blockwise condition, the faces were presented successively and rated in four separate blocks (one block per variable). In the sequential condition, each face was presented four times in a row and rated on all variables before the next stimulus was displayed. Besides a high consistency in the assessment of the variables, we found an effect of presentation mode, with the sequential condition leading to significantly higher correlations. Apparently, participants proceed in a more economic fashion, reducing cognitive effort, when the situation allows it. Moreover, the results show that prettiness is a good predictor of beauty, whereas attractiveness is mainly predicted by sexual attraction, when stimuli are presented blockwise. Integrating these findings with common theoretical accounts of the variables involved, the present study qualifies their usage and calls for clearer definitions in research dealing with these variables and associated constructs.
On the plausibility of a generalized model of perceived similarity between faces
Ludovica Lorusso, Gavin Brelstaff, Luca Pulina and Enrico Grosso
Is there consensus among observers about which faces are seen as similar, or do large individual variations across observers occur? We test the plausibility of a generalized model of perceived visual similarity between faces, as opposed to an individualized, subject-dependent model [cf. Simmons and Estes, 2008, Cognition, 108, 781-795]. Unlike previous studies, we avoid the use of ratings to measure perceived face-pair similarity and instead adopt a direct comparison task between sets of face-pairs. We use a two-alternative forced choice (2AFC) protocol in which observers must choose between two candidate face images by indicating which looks more similar to a third. Each observer completes a randomized sequence of 2AFC trials whereby a small set of face-pairs is fairly compared in similarity against our data-set of 54 different face images. Degrees of similarity derived from a statistical analysis of the results indicate a high degree of consensus among observers, boosting the plausibility of a generalized model of perceived similarity. A preliminary extension of this analysis to known subclasses of face images related by sex, age or kinship may reveal insights about the structure of that model.
Forty years later: Are objects still mentally rotated as in 1971?
Claus-Christian Carbon and Fabián Diener Rico
In 1971, a groundbreaking paper published by Shepard and Metzler put forward the idea that 3D-objects are 'mentally rotated' when perceived in an orientation deviating from upright. Referring to the clear empirical evidence given by this seminal contribution, the theory of mental rotation states that reaction times required for matching two objects follow a linearly increasing function of the angular difference between the respective objects' orientations. Inferences drawn from those findings are limited by the fact that (1) only same-matching trials were considered and that (2) correct-response rates were neglected. We conducted a replication using the original material (Tetris-like 3D-objects) plus further stimulus classes of varying complexity: houses (low), greebles (medium) and faces (high), with different-matching trials presenting the reference stimulus together with a thatcherised analogue. For 3D-objects, the proposed linear RT trend emerged for same-matching, but not for different-matching trials. Furthermore, a strong linear increase of errors (from error-free to near-to-chance) for same-matching trials suggests that the linear RT trend could alternatively be described as a speed-accuracy tradeoff artifact. For faces, a linear RT trend was not found, which indicates, in line with Carbon et al. [2007, Perception, 36, 1635-1645], that matching of more complex stimuli is based on more sophisticated, multifaceted processing.
Do you sound or look as old as you are? A study of age estimation in young and older adults.
Evelyne Moyse, Aline Beaufort and Serge Brédart
Studies on age estimation have usually indicated that people are fairly accurate at estimating the age of a person from her/his face or from her/his voice (with an absolute difference of five and ten years, respectively) [e.g. Amilon et al., 2000, in: Speaker Classification II. Lecture Notes in Artificial Intelligence, C Müller, Berlin, Springer-Verlag]. However, studies have also shown that performance depends on the age of participants and the age of stimuli [Rhodes, 2009, Applied Cognitive Psychology, 23, 1-12; Braun, 1996, Forensic Linguistics, 3, 65-73]. The aim of the present study was to compare age estimation performance from faces and voices by using an experimental design in which the age of participants (young vs older), the age of stimuli (young vs older) and the stimulus domain (face vs voice) were crossed. Overall, the age of faces was more accurately estimated than the age of voices. Moreover, performance of age estimation was better for young stimuli than for older stimuli. Finally, young participants made smaller absolute errors than older participants. However, there was no difference between young and older participants when estimating the age of older stimuli.
Spatial configuration of faces and Japanese characters differently affects perceptual dominance in binocular rivalry
Eiji Kimura, Akiko Hidaka and Ken Goryo
Perceptual dominance in onset rivalry (binocular rivalry between brief stimuli) can be modulated in a stimulus-specific fashion by presenting a binocular preceding stimulus. Analyzing the properties of this dominance modulation can provide insights into how different types of stimulus are represented in the visual system. This study investigated onset rivalry between familiar stimuli, i.e., face images and Japanese Kana characters, focusing on the processing of spatial configuration. The rivalrous test stimulus was a pair of either upright or inverted (180° rotated) stimuli. The binocular preceding stimulus was the same as one half-image, but its orientation was manipulated (upright or inverted). Results showed distinctive patterns of dominance modulation for face and Kana stimuli. For face stimuli, the modulation was asymmetric. The upright preceding face phenomenally suppressed the same test face regardless of orientation, although the cross-orientation modulation was weaker. In contrast, the inverted preceding face produced little, if any, cross-orientation modulation. For Kana stimuli, however, the modulation was symmetric. The upright preceding character phenomenally suppressed the same upright test character more strongly than the inverted one, and vice versa. These results suggest that Kana characters and face stimuli are represented differently in terms of the spatial relationships among local image elements.
The speed of face recognition: A 50 ms gain between personally familiar faces and famous faces
Thomas Busigny, Clara Bled, Gabriel Besson and Emmanuel J Barbeau
Despite the generally accepted notion that humans are very good and fast at recognizing familiar individuals from their faces, the actual speed with which this fundamental brain function can be achieved remains largely unknown. Furthermore, whether this recognition speed is similar for famous faces and personally familiar faces is another unresolved question. A group of 11 participants was required to respond when presented with photographs of personally familiar faces or, in a separate run, famous faces. The personally familiar faces were extracted from the personal photosets of the participants (440 pictures amongst a total of 23,322 photographs). Matched unknown faces were used as distractors. This go/no-go recognition task, performed under speed constraints, revealed that personally familiar faces could be recognized as early as 330 ms after presentation, about 50 ms faster than famous faces. Such rapid behavioral recognition constrains how early the effects of familiarity could be observed and demonstrates that personally familiar faces are recognized significantly faster than famous faces. Given the time required to execute a manual response (about 100 ms), the earliest familiarity-dependent modulation at the electrophysiological level could be expected at about 230 ms after stimulus onset, a value consistent with a number of EEG studies.