Emotions

Inverting natural facial expressions puzzles you.

Kathrin Kaulard, Johannes Schultz, Christian Wallraven, Heinrich H. Bülthoff and Stephan De La Rosa

The face inversion effect has often been demonstrated in face identification tasks. Less is known about whether the processes underlying facial expression recognition are also sensitive to face inversion. Facial expression recognition is usually investigated using pictures of six emotional expressions. In everyday life, however, humans are exposed to a much larger set of facial expressions, which are dynamic. Here, we examine the effect of face inversion on expression recognition for a variety of facial expressions displayed statically and dynamically. We measured participants' recognition accuracy for 12 expressions using a 13-alternative forced-choice task. We varied the dynamics (videos versus pictures) and the orientation (upright versus inverted) of the presentation of the expressions in a completely crossed design. Accuracy was significantly higher when expressions were presented as videos (62%) than as pictures (47%). Similarly, recognition accuracy was significantly higher for upright (84%) than for inverted (64%) expressions. Moreover, the effect of orientation changed significantly with expression type. No other effects were significant. This is the first study to report that face inversion affects the recognition of natural facial expressions. Because face inversion effects are interpreted as a sign of configural processing, our results suggest configural processing for a majority of facial expressions.

Factors affecting orienting behaviour to emotional expressions: stimulus duration, spatial frequency and response mode

Arash Sahraie, Rachel L Bannerman and Paul B Hibbard

It is often assumed that threat-related stimuli are preferentially detected and oriented to, as they are consistently of significant value to an observer. The behavioural evidence, however, is mixed: anxious individuals show preferential processing of fear stimuli, whereas cueing effects in healthy observers do not show a threat advantage. In a series of investigations, we have measured saccadic and manual responses as indicators of orienting behaviour to emotional expressions in face stimuli. We have shown that in normal observers a fear advantage can be seen, but only for briefly presented stimuli (20 ms) when saccadic responses are measured, whereas manual response measures need longer stimulus durations (100 ms). We will also report on the role of spatial frequency in fast saccadic orienting behaviour. Single face stimuli (fearful, happy, neutral) filtered to contain mainly low, high or broad spatial frequencies were presented briefly (20 ms). Preferential processing of emotional stimuli was shown in all but the high spatial frequency (HSF) condition, with faster orienting to fearful than to happy stimuli only at low spatial frequencies. There were no differences in saccadic responses between any emotions at HSF. A range of control experiments shows that the findings cannot be attributed to low-level stimulus artefacts.
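For illustration, the kind of spatial frequency manipulation described above is commonly implemented with a Gaussian low-pass filter and its residual. The following Python sketch shows one such split; the cutoff (sigma) and the placeholder image are illustrative assumptions, not the values or stimuli used in the study.

```python
# Minimal sketch of spatial-frequency filtering of a grayscale face image,
# assuming a Gaussian low-pass / residual high-pass split. The cutoff (sigma)
# and the random image are placeholders, not taken from the study.
import numpy as np
from scipy import ndimage

def sf_versions(image, sigma=4.0):
    """Return low-, high- and broad-spatial-frequency versions of an image."""
    img = image.astype(float)
    lsf = ndimage.gaussian_filter(img, sigma)  # low spatial frequencies
    hsf = img - lsf + img.mean()               # high spatial frequencies, mean luminance restored
    bsf = img                                  # broad-band: the unfiltered original
    return lsf, hsf, bsf

face = np.random.rand(256, 256)                # placeholder for a face photograph
low, high, broad = sf_versions(face, sigma=4.0)
```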

Components of subjective experience of visual objects and scenes

Slobodan Markovic

The purpose of this study was to specify the underlying structure of the subjective experience of visual objects and scenes. Subjective experience includes features imposed upon the scene by the perceiver (e.g. pleasure, interestingness, etc). In Preliminary Study 1, a set of twenty photographs of various visual stimuli (humans, objects, natural scenes, etc) was selected. In Preliminary Study 2, a set of forty-nine representative descriptors of the subjective experience of the visual world was selected (the descriptors were then transformed into bipolar scales, e.g. pleasant-unpleasant). In the main study, sixteen participants judged the twenty stimuli on the forty-nine scales. Using the 'stringing-out' method, a single matrix of judgments was created. In the factor analysis (principal component method plus Promax rotation) three main factors were obtained: Attraction (most saturated scales: attractive, interesting, good, beautiful, etc), Regularity (regular, organized, harmonious, connected, etc) and Relaxation (relaxed, calming, non-offensive, tender, etc). Regularity referred to the perceptual aspect of subjective experience, including the impression of figural goodness, compositional harmony and the like. The other two factors referred to two different affective domains: Attraction represents appetitive tendencies in behavior (actions towards attractive and interesting stimuli), whereas Relaxation represents the hedonic effects of tension reduction.
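As a rough illustration of the analysis pipeline described above, the sketch below strings out the participants' judgment matrices into a single matrix and extracts three principal components from the correlations among the scales. The ratings are random placeholders, and the Promax rotation applied in the study is not reproduced here.

```python
# Sketch of the 'stringing-out' step and an (unrotated) principal-component
# extraction for the design described above: 16 participants x 20 stimuli x 49
# bipolar scales. Ratings are random placeholders; Promax rotation is omitted.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(16, 20, 49)).astype(float)  # placeholder 7-point judgments

# 'Stringing out': stack all participants' 20 x 49 matrices into one (16*20) x 49 matrix
strung_out = ratings.reshape(-1, 49)

# Principal components of the correlation matrix of the 49 scales
corr = np.corrcoef(strung_out, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
keep = np.argsort(eigvals)[::-1][:3]                   # three largest components
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])   # unrotated loadings, shape (49, 3)
```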

Subjective feeling of fluency and affective response

Michael Forster, Helmut Leder and Ulrich Ansorge

According to the fluency hypothesis, the objective fluency of a perceptual process is accompanied by a subjective experience. This experience, or subjective feeling, can be a strong source for later judgments, particularly liking judgments [Forster et al., submitted]. According to psychobiological approaches, the affective response (arousal or valence) towards an object should also influence object preference. Interestingly, it has not yet been thoroughly studied whether fluency may also play at least a mediating role in our affective responses. Therefore, we conducted a series of experiments addressing the measurement of the feeling of fluency and its impact on liking, as well as on arousal and valence. Our results indicate that the feeling of fluency can be explicitly reported and is related to objective fluency. Analyzing the influence of objective fluency, manipulated through differences in presentation duration, we found that liking and arousal ratings, but not valence ratings, were influenced by manipulations of fluency. However, a higher subjective feeling of fluency led to higher ratings on all three dimensions (liking, arousal, and valence). This indicates that the feeling of fluency may be an important source for explaining the interplay of affective responses to and evaluations of an object.

Registering eye movements in collaborative tasks: methodological problems and solutions

Alexander Kharitonov, Alexander Zhegallo, Kristina Ananyeva and Olga Kurakova

Our studies are rooted in a long-standing tradition of studying cognitive processes in communication that stems from the approach developed by Lomov in the 1970s and from the numerous studies of oculomotor activity launched by Yarbus [1967, Eye Movements and Vision, New York, Plenum Press] and his followers. Modern eye-tracking technologies make it possible to combine these approaches and to conduct new kinds of studies of joint activities and communication. However, organizing such experimental setups requires solving a number of methodological problems. Stationary eye-trackers can be used in dyadic experiments with minor adaptation, namely special software for synchronizing dual stimulus presentation, eye movement registration and speech registration. With this in place, we conducted two experiments on the joint perception of (1) faces of different racial types and (2) basic facial expressions. In both series, either identical or slightly different (modified with a morphing procedure) stimuli were presented to the participants, who were instructed to determine whether they were observing the same or different pictures. The data thus collected suggest specific coordination of cognitive processes (visual search, 'joint attention'). Several episodes of overlapping fixation patterns were observed. We propose that the degree of eye movement synchronization registered in collaborative tasks can predict the efficiency of collaboration and participants' joint performance. Supported by GK 16.740.11.0549.

Playing a violent videogame with a gun controller has an effect on facial expression recognition but no selective effect on prosocial behaviour

Marco Righi, Paola Ricciardelli and Rossana Actis Grosso

Playing a violent videogame has both (i) a perceptual and (ii) a social effect, reducing respectively the happy face advantage (i.e. happy faces are recognized faster than sad faces) [Leppanen and Hietanen, 2003, Emotion, 3, 315-326] and prosocial behaviour [Sheese and Graziano, 2005, Psychological Science, 16, 354-357]. Here we investigated whether playing with a more realistic and interactive device enhances these negative effects. We asked participants (n=45) to play either a neutral (sport, Group 1) or a violent videogame on a Nintendo Wii. The violent videogame could be played either with a standard controller (Group 2) or with a gun-shaped controller (Group 3). Before and after each playing session, participants completed self-rating and projective measures of prosocial behaviour. Following the videogame task, they were asked to recognize emotional facial expressions - unambiguous vs. ambiguous - from the Ekman and Friesen database. Results showed no effect on recognition times, but a significant interaction between accuracy and ambiguity in recognizing positive vs. negative emotions across groups. Prosocial behaviour ratings significantly decreased in all groups, suggesting that the negative effects of playing could be related more to frustration than to violence. The results are discussed in relation to the interplay between action, perception and social cognition.

Changes in the fractal dimensions of facial expression perception between normal and noise-added faces

Takuma Takehara, Fumio Ochiai, Hiroshi Watanabe and Naoto Suzuki

Many studies have reported that the structure of facial expression perception can be represented in terms of two dimensions: valence and arousal. Some studies have shown that this structure possesses a fractal property; the fractal dimensions of such structures differ significantly between short and long stimulus durations [Takehara et al, 2006, Perception, 35 ECVP Supplement, 208] and between photographic positives and negatives [Takehara et al, 2011, Perception, 40 ECVP Supplement, 74]. In this study, we examined changes in the fractal dimension of the structure of facial expression perception using normal and noise-added faces as stimuli. A statistical analysis revealed that the fractal dimension derived from noise-added faces (1.43) was higher than that derived from normal faces (1.32); t(23) = 3.53, p < .01. Consistent with previous studies, a higher fractal dimension was considered to be related to difficulty in facial expression perception. On the other hand, the correct response rate for noise-added faces was reduced, since the noise on the faces disrupted high spatial frequencies [McKone et al, 2001, Journal of Experimental Psychology: Human Perception and Performance, 27(3), 573-599]. Therefore, our results might suggest that the higher fractal dimension for noise-added faces is related to the disruption of high spatial frequencies.
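The abstract does not state how the fractal dimension of the expression-perception structure was estimated; box counting over a two-dimensional valence-arousal representation is one common approach, sketched below with placeholder coordinates purely for illustration.

```python
# Illustrative box-counting estimate of the fractal dimension of a set of
# expression ratings in a 2-D valence-arousal space. The abstract does not
# specify the estimator used; box counting is shown only as one common choice.
import numpy as np

def box_counting_dimension(points, box_sizes):
    """points: (N, 2) array scaled to the unit square; returns the slope of
    log(box count) against log(1/box size)."""
    counts = []
    for s in box_sizes:
        cells = np.floor(points / s).astype(int)       # grid cell of each point
        counts.append(len(np.unique(cells, axis=0)))   # number of occupied cells
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
ratings_2d = rng.random((48, 2))            # placeholder valence-arousal coordinates
sizes = [1/2, 1/4, 1/8, 1/16, 1/32]
print(box_counting_dimension(ratings_2d, sizes))
```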

Explicit and implicit contamination sensitivity in children with autistic spectrum disorders: an eye-tracking study

Roberta Fadda, Michael Siegal and Paul G. Overton

Contamination sensitivity, which typically emerges at around 4 years of age through a combination of cognitive abilities and social learning processes, seems to be particularly impaired in children with Autistic Spectrum Disorders (ASDs; Kalyva, Pellizzoni et al., 2009). However, since contamination sensitivity has so far been investigated only through behavioral studies, whether children with ASDs who lack explicit contamination sensitivity in behavioral tasks are nevertheless implicitly sensitive to disgust elicitors needs to be specifically investigated. In this study we evaluated implicit contamination sensitivity in 15 children with ASDs who lacked explicit contamination sensitivity (ASDnocs), compared with 15 children with ASDs who showed explicit contamination sensitivity (ASDcs) and 30 typically developing (TD) controls, using an eye-tracking intermodal preferential looking paradigm. The results showed that the TD and ASDcs groups had a looking preference for an uncontaminated drink, in sharp contrast to the children with ASDs who did not possess contamination sensitivity. Therefore, children with ASDs who lack explicit contamination sensitivity also lack an implicit sensitivity to disgust elicitors, highlighting the importance of pairing behavioral tasks with eye-tracking measures to fully assess clinical populations. Reference: Kalyva, E., Pellizzoni, S., Tavano, A., Iannello, P., & Siegal, M. (2009). Contamination sensitivity in autism, Down syndrome, and typical development. Research in Autism Spectrum Disorders, 4(1), 43-50.

Congenital Prosopagnosia: The role of changeable and invariant aspects in famous face identification

Andrea Albonico, Manuela Malaspina and Roberta Daini

The role of non-emotional changeable aspects in face recognition has received less attention than the dissociation between invariant facial features and emotional expressions. Indeed, the idiosyncratic movements of a specific face can help the recognition of the face itself (O'Toole et al., 2002, Trends Cogn Sci, 6(6), 261-266). We aimed to understand whether congenital prosopagnosics, i.e. individuals with an impairment in face recognition from birth, can use this information to improve their identification of familiar faces. We selected two groups of 14 subjects, one with poor performance in face episodic recognition tasks (experimental) and the other with good performance in the same tasks (control). Videos of 16 famous persons were presented in three different conditions: motionless, with non-emotional expressions, and with emotional expressions. The results showed an effect of group, indicating that the group selected as lower performers in face episodic recognition tasks also performed poorly on a famous-person identification task. Moreover, the experimental group improved its performance from the motionless condition to the conditions with facial movements, whereas the control group did not show any difference among conditions. The results suggest an important role of changeable aspects other than emotional expressions in face recognition by congenital prosopagnosics.

Discrimination of real, but not morphed, facial expressions correlates with emotional labeling

Olga A. Kurakova

The ability to discriminate between similar emotional facial expressions may depend on the interaction between verbal labeling of the expressions as belonging to different emotional categories and a noncategorical system for low-level discrimination [Roberson et al, 2010, Emotion Review, 2, 255-260]. We applied the classical paradigm for examining categorical perception to a novel emotional faces dataset to reveal differences in discriminating natural and artificial transitional facial expressions (FEs). In our study, two sets of stimuli were used: photographic images of 6 transitions between posed basic emotions, and 6 morphed transitions between the same emotional prototypes. Using the Differential Emotions Scale, the prototypes were evaluated by subjects as having different emotional profiles. Discrimination was measured with an AB-X task; the identification task was a 7-way multiple choice between basic emotion labels. Theoretical discrimination performance between consecutive images in the transitions was predicted as the sum of the absolute differences in labeling rates across all emotions. The results showed a positive correlation between theoretical and empirical (AB-X) discrimination performance for photographic transitions, but not for morphed ones. Instead, empirical discrimination of morphed images correlated positively with the physical distances between stimuli. We propose that discrimination of sequential photographs of posed transitions between basic FEs relies mostly on emotional labeling, whereas picking out differences between artificial morphs relies predominantly on the comparison of low-level percepts. Supported by RFH grant no. 12-36-01257a2.
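The predicted discrimination measure described above reduces to a one-line computation: for two consecutive images in a transition, sum the absolute differences of their labeling rates over the seven response categories. The sketch below uses hypothetical labeling-rate vectors, not data from the study.

```python
# Minimal sketch of the predicted discrimination measure: sum of absolute
# differences in labeling rates across the 7 response categories for two
# consecutive images. The rates below are hypothetical placeholders.
import numpy as np

def predicted_discrimination(rates_i, rates_j):
    """rates_*: labeling-rate vectors (proportions over the 7 emotion labels)."""
    return np.abs(np.asarray(rates_i) - np.asarray(rates_j)).sum()

image_k  = [0.70, 0.20, 0.05, 0.05, 0.00, 0.00, 0.00]   # e.g. mostly labeled 'happy'
image_k1 = [0.45, 0.40, 0.05, 0.05, 0.05, 0.00, 0.00]   # next image in the transition
print(predicted_discrimination(image_k, image_k1))       # approximately 0.5
```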

Your face looks funny: the role of emotion in the perceived attractiveness of face images

Brendan Cullen and Fiona Newell

In the domain of facial attractiveness, the role of factors such as averageness and symmetry has been investigated primarily using static face images with a neutral expression. However, facial attractiveness can be modulated by factors such as facial expression or social information about the person. For example, face images associated with 'humorous' descriptions are rated as more desirable than face images described as 'non-humorous' (Bresler, E. R., & Balshine, S., 2006, Evolution and Human Behaviour, 27, 29-39). In Experiment 1 we manipulated the proportion of a particular emotional facial expression across a number of exposures to unfamiliar face images. We found that attractiveness ratings increased as the proportional number of 'happy', but not fearful, expressions increased relative to the number of neutral expressions. In Experiment 2 we investigated whether attractiveness ratings were influenced by cross-sensory emotional information provided during exposure to the face images. Ratings were compared across face images associated with auditory emotional (e.g. a 'humorous' description or laughter) or neutral (e.g. a 'neutral' description or coughing) information. The emotional content, particularly positive emotion, affected preferences for some faces over others. Our findings suggest important cross-sensory influences on the perceived attractiveness of static face images.

Effects of direction, intensity range, and velocity on the perception of dynamic facial expressions

Haruka Inoue and Makoto Ichikawa

When observing a sequence of dynamic facial expressions, the perceived emotional intensity of the last face in the sequence tends to shift in the direction of the expression change. For example, observers would perceive a neutral face as a happy (angry) face if it is presented as the last face in a sequence starting from an angry (happy) face. In order to understand the processing of dynamic facial expressions, we examined how the direction, intensity range, and velocity of the sequence affect this shift. In each trial, observers rated the emotional intensity of the last face in a sequence of dynamic facial expressions in one of three emotions (happy, angry, and surprised). We found that the shift for sequences from a strong intensity of expression to a neutral face was larger than that for sequences in the opposite direction, and that the shift for sequences ending at a middle intensity was larger than that for sequences ending at the neutral face or at a strong intensity. These results suggest that the disappearing direction and a specific intensity range enhance the overshooting in the processing of dynamic facial expressions.

Response properties of visual areas that are responsive to fearful scenes

Zhengang Lu, Bingbing Guo, Xueting Li and Ming Meng

Previous studies have mainly investigated the neural correlates of affective perception using complex stimuli such as faces and scenes, whereas only a few recent studies have examined whether simple shapes, such as a downward-pointing triangle, can lead to threatening perception [Larson et al, 2008, Journal of Cognitive Neuroscience, 21(8), 1523-1535]. It is unknown whether and how low- and mid-level visual features may modulate the response functions of the visual areas that are responsive to complex emotional stimuli. By contrasting fMRI activation corresponding to an independent set of fearful scenes versus scrambled images, we first localized regions of interest (ROIs) in the bilateral fusiform gyrus (FG) and lateral occipital cortex (LOC). We then applied an event-related design to investigate the response properties of these ROIs as a function of the roundness, orientation, and contrast polarity of stimuli consisting of simple shapes. Further, we investigated whether brain activation induced by the simple shapes may be modulated by adding schematic facial contexts. Whereas activity in LOC was modulated by both shape and facial context, activity in FG was modulated only by facial context. We did not find any effect of contrast polarity. These mixed results suggest different functional roles of LOC and FG in affective perception.

Mud sticks: How biographical knowledge influences facial expression perception

Luc Charmet-Mougey, Anina Rich and Mark Williams

Although our ability to process facial expressions is a crucial factor in human relations and communication, we know little about how our knowledge of people influences the way we process their expressions. In this study, participants were trained for a week to memorise short biographical vignettes depicting benevolent or malevolent characters, paired with neutral faces. Participants were aware of the vignettes' fictitious nature from the start of the experiment. We used fMRI to acquire whole-brain images from participants viewing the character faces with happy, neutral or angry expressions. The amygdala responded differentially to the faces of individuals portrayed as benevolent and malevolent. Furthermore, behavioural testing demonstrated that participants were significantly faster to categorise the emotion of a face when the expression was congruent with the vignette (e.g., an angry expression and a malevolent character). These results indicate that prior knowledge of an individual's character traits has an effect on the way we perceive facial expressions. They also suggest that the amygdala could be involved in cognitive functions higher than sensory-based emotions, and is affected by our emotional memory.

Behind every strong man there is a strong background: The effect of dynamic background textures on facial evaluation

Alexander Toet, Susanne Tak, Marcel P. Lucassen and Theo Gevers

Human evaluation of facial expressions is significantly affected by the emotional context of the visual background [Koji and Fernandes, Can. J. Exp. Psychol., 2010, 64(2), 107-116]. We recently found that dynamic visual textures elicit a wide range of emotional responses, with dominance (strength or conspicuity) being one of the principal affective dimensions [Toet et al., i-Perception, 2012, 2(9), 969-991]. In the current study we investigated whether dynamic textured backgrounds also affect the judgement of human facial expressions. Participants rated the dominance of 12 neutral male faces. In the first experiment we validated the neutrality of these faces by placing them on a neutral (black) background. The results show that none of the faces received a non-zero dominance score. In the second experiment the faces were overlaid (opacity 80%) on 12 different natural dynamic background textures, six of which were very strong/conspicuous and six of which were very weak/inconspicuous. The results show that the (neutral) faces were rated as significantly more dominant on strong/conspicuous backgrounds than on neutral backgrounds. There was no significant difference between ratings obtained with weak/inconspicuous backgrounds and with neutral backgrounds. We conclude that natural dynamic backgrounds (typically not perceived as emotional) can significantly affect the evaluation of facial expressions.
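As a minimal illustration of the overlay manipulation, the sketch below blends a face frame onto a background texture frame at 80% opacity using simple linear alpha compositing; the exact compositing procedure used in the study is not specified in the abstract, so this is only an assumption.

```python
# Sketch of overlaying a face on a background texture frame at 80% opacity,
# assuming straightforward per-pixel alpha blending. Image contents and sizes
# are placeholders.
import numpy as np

def overlay(face, background, opacity=0.8):
    """Linear alpha blend of two images with matching shape and [0, 1] range."""
    return opacity * face + (1.0 - opacity) * background

rng = np.random.default_rng(2)
face_frame = rng.random((480, 640, 3))      # placeholder face image
texture_frame = rng.random((480, 640, 3))   # placeholder dynamic-texture frame
composite = overlay(face_frame, texture_frame, opacity=0.8)
```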

The angry face advantage in the visual search task is derived mainly from the efficient rejection of distractors.

Takahiro Kirita

Many studies have reported that angry faces are detected faster than other emotional faces. The current study, using schematic and photographic stimuli, re-examined the detection efficiencies of angry and happy faces in a visual search task. The results showed that, for schematic stimuli, angry faces were detected faster than happy faces irrespective of the emotion of the distractors, suggesting an angry face advantage. However, there was no difference in detection efficiency between angry and happy targets among neutral distractors when photographic faces were used. Furthermore, in target-absent conditions, there were substantial differences between the two stimulus classes. For schematic stimuli, neutral distractors were rejected faster than happy distractors, which in turn were rejected faster than angry distractors. Accordingly, whereas angry targets among neutral distractors were detected most rapidly, the detection of happy targets among angry distractors was the slowest. For photographic faces, happy distractors were rejected faster than neutral distractors, which in turn were rejected faster than angry distractors; whereas angry faces were detected most efficiently among happy distractors, happy faces were detected most slowly among angry distractors. These results suggest that the angry face advantage in the visual search task derives mainly from the efficient rejection of distractors.

Neural Correlates of Perceptual Learning of Objects in the Hippocampus and the Dorsolateral Prefrontal Cortex

Matthias Guggenmos, Marcus Rothkirch, Klaus Obermayer, John-Dylan Haynes and Philipp Sterzer

Perceptual learning is the improvement in a perceptual task through repeated training or exposure, typically over the course of several days or weeks. In this study we investigated reward-dependent perceptual learning of object recognition. Human subjects had to recognize briefly presented and backward-masked objects over the course of five days. On days 2 to 4, subjects received either high-reward or low-reward feedback on their choices (training phase). On days 1 and 5 they performed the task inside the fMRI scanner, without feedback, to compare pre- and post-training fMRI activity. Each object belonged to one of three category pairs, one of which was omitted from the training phase (control category pair). Behaviorally, we found that the subjects' performance improved significantly more for trained compared to control categories, with an additional advantage for high-rewarded stimuli. fMRI data analysis revealed a neural correlate of perceptual learning in the posterior hippocampus. This hippocampal activation was sensitive to the reward magnitude and was paralleled by increased frontostriatal activation for high versus low reward. Additionally, activity in the dorsolateral prefrontal cortex was strongly modulated by the inter-subject variability of the behavioral improvement, potentially reflecting enhanced clarity of perceptual decisions through training.

Relationship between visuotactile and affective/aesthetic qualities of natural materials

Naokazu Goda and Hidehiko Komatsu

We have recently shown that human ventral visual cortex represents natural material categories (e.g., metal, wood and fur) in a way that reflects their visual and tactile qualities, such as smoothness and hardness (Hiramatsu et al., 2011, NeuroImage, 57, 482-494). Seeing materials evokes not only visuotactile feelings but also affective and aesthetic feelings: we find furry objects comfortable, and seeing a log cabin may make us feel relaxed. What neural/psychological processes underlie the emergence of such affective/aesthetic feelings? To address this, we examined visuotactile and affective/aesthetic qualities for CG images of nine natural material categories using a semantic differential method with more than 30 adjective pairs. We found that the inter-individual correlation of the ratings can be a good measure for defining a set of scales that characterize affective/aesthetic qualities: ratings on adjective pairs such as hard-soft were highly consistent across participants, whereas those on pairs such as beautiful-ugly tended to differ among participants. With this objective measure, we separately evaluated the affective/aesthetic and visuotactile qualities of the materials, each of which was represented as a multidimensional space. The structures of these spaces differed but could be related by simple transformations. We present a model predicting the affective/aesthetic qualities of materials from their visuotactile qualities.
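As an illustration of the inter-individual consistency measure described above, the sketch below correlates every pair of participants' rating profiles for each adjective scale and averages the coefficients; the number of participants and the data are placeholders, and the interpretation at the end is an assumption about how such values would be used, not a result from the study.

```python
# Sketch of a per-scale inter-individual consistency measure: for each adjective
# scale, correlate every pair of participants' ratings over the nine material
# categories and average the coefficients. Data are random placeholders.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
ratings = rng.random((12, 9, 30))   # participants x material categories x adjective scales

def mean_interrater_correlation(ratings):
    n_subj, _, n_scales = ratings.shape
    consistency = np.zeros(n_scales)
    for s in range(n_scales):
        rs = [np.corrcoef(ratings[i, :, s], ratings[j, :, s])[0, 1]
              for i, j in combinations(range(n_subj), 2)]
        consistency[s] = np.mean(rs)
    return consistency

consistency = mean_interrater_correlation(ratings)
# High-consistency scales would be treated as visuotactile (e.g. hard-soft),
# low-consistency scales as affective/aesthetic (e.g. beautiful-ugly).
```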