3D Perception

Parallactic movement beats binocularity in the presence of external visual noise

Nicole Voges, Michael Bach and Guntram Kommerell

Binocular vision provides a considerable advantage over monocular vision in the presence of disturbances along the view line [Otto et al, 2010, Graefe's Arch Clin Ophthalmol, 248, 535-541]. A typical example is a driver who tries to identify objects through a windshield dotted with snowflakes. During driving, any bumpiness of the road will cause a vertical parallactic up-and-down movement of particles on the windshield with respect to the visual object. We simulated this dynamic situation and found: (1) The benefit of binocular over monocular vision largely vanishes. (2) This strong loss of binocular benefit is partly due to a 'ceiling effect'. An additional experiment that avoided 'ceiling effects', however, showed that the effect of moving vs. stationary noise was still markedly larger than the effect of binocular vs. monocular viewing.

Stereoacuity across the visual field: An equivalent noise analysis

Susan Wardle, Peter Bex, John Cass and David Alais

Stereopsis is a hyperacuity in the fovea; depth differences can be distinguished that correspond to disparities smaller than the width of individual photoreceptors. However, as stereoacuity declines faster than resolution acuity across the visual field, additional factors must contribute to the reduction in peripheral stereoacuity. We examine the increase in depth discrimination thresholds with distance from the fovea using an equivalent noise analysis to separate the contributions of internal noise and sampling efficiency. Observers discriminated the mean depth of patches of "dead leaves" composed of ellipses varying in size, orientation, and luminance as a function of disparity noise [0.05 - 13.56 arcmin] and visual field location [0 - 9 degrees]. At low levels of disparity noise, depth discrimination thresholds were lower in the fovea than in the periphery. At higher noise levels (above 3.39 arcmin), thresholds converged and there was little difference between foveal and peripheral depth discrimination. Parameter estimation from the equivalent noise model revealed that an increase in internal noise limits peripheral depth discrimination, with no change in sampling efficiency. Sampling efficiency was uniformly low across the visual field. The results demonstrate that a loss of precision of local disparity estimates early in visual processing limits fine depth discrimination in the periphery.
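
The equivalent noise fit can be sketched as follows; the exact model form used by the authors is not stated here, so the formulation, data values and parameter names below are illustrative assumptions only.
```python
# Minimal equivalent-noise sketch (one common formulation, not necessarily the
# authors' exact model): threshold(sigma_ext) = sqrt((sigma_int^2 + sigma_ext^2) / E),
# where sigma_int is internal noise and E is sampling efficiency.
import numpy as np
from scipy.optimize import curve_fit

def eq_noise(sigma_ext, sigma_int, efficiency):
    return np.sqrt((sigma_int**2 + sigma_ext**2) / efficiency)

# hypothetical depth-discrimination thresholds (arcmin) at increasing external noise
sigma_ext = np.array([0.05, 0.42, 1.70, 3.39, 6.78, 13.56])
thresholds = np.array([0.90, 1.00, 2.20, 4.10, 8.00, 15.90])

(sigma_int, efficiency), _ = curve_fit(eq_noise, sigma_ext, thresholds, p0=[1.0, 0.5])
print(sigma_int, efficiency)
# a larger fitted sigma_int with unchanged E would mimic the reported peripheral result
```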

The effect of variability in other objects’ sizes on the extent to which people rely on retinal image size as a cue for judging distance

Rita Sousa, Eli Brenner and Jeroen Smeets

Retinal image size can be used to judge objects' distances because for any object one can assume that some sizes are more likely than others. Increasing the inter-trial variability in the size of otherwise identical target objects reduces the weight given to retinal image size when judging distance. We examined whether increasing the variability in the size of objects of a different colour, orientation or shape also reduces the weight given to retinal image size. Subjects indicated target cubes' 3D positions. Retinal image size was given significantly less weight as a cue for judging the target cubes' distances if the target cubes were interleaved with cubes of different simulated sizes, even if such cubes were always coloured or oriented differently. This was not so if the objects with different sizes were spheres rather than cubes. We also examined whether increasing the variability in the size of cubes in the surroundings reduces the weight given to retinal image size when judging distance. It does not. We conclude that variability in surrounding or dissimilar objects' sizes has a negligible influence on the extent to which people rely on retinal image size as a cue for judging distance.

Viewing comfort and naturalness – key factors in understanding and evaluating the perception of stereoscopic images

Raluca Vlad, Anne Guérin and Patricia Ladret

Most of the research into evaluating the perception of stereoscopic 3D images tries to draw conclusions on the overall 3D quality only by analyzing the 2D quality and the depth. But experiments show that viewing comfort and the naturalness of the scene are two other significant factors that define the 3D experience. We implemented a psycho-visual experiment in which we explored the way these two subjective factors are perceived. We recorded the oral observations made by 26 users while watching 24 stereoscopic images displayed on a 3D screen under identical conditions. Key words were extracted from the recordings and subsequently analyzed. Our results show a very different perception of comfort among participants watching the same images. Two hypotheses were envisaged. The first and most probable is that the degree of subjectivity due to the vision characteristics and the background of the participants is greater than generally believed. The second hypothesis is that these large differences could be due to a slight unavoidable imprecision in a test of this type. The results also revealed the presence of the cardboard-effect artifact in our images, and a direct influence of this artifact on the perceived naturalness was observed.

Perceiving a swinging surface in depth from luminance modulation

Yuichi Sakano and Hiroshi Ando

It is well known that we perceive the 3D shape of an object from its shading; that is, perception of a spatial variation in the surface orientation of an object can be induced by a spatial variation in luminance across the surface. In the present study, we examined whether perception of a temporal, rather than spatial, variation of surface orientation can be induced by a temporal variation in the luminance of the surface. We expected this phenomenon because, under the general "light-from-above" assumption, a surface receives more light, and thus has a higher luminance, when it is oriented upward than when it is oriented downward. We used a stimulus composed of paired gray rectangles placed side-by-side on a dark background. The luminance was uniform within each rectangle but was temporally modulated in antiphase between the two. The rectangles appeared to change in slant: they appeared slanted upward when the luminance was high and slanted in the opposite direction when the luminance was low. This swinging-in-depth phenomenon occurred even when there was only one rectangle, although the impression was weaker. These results suggest that the human visual system utilizes not only spatial luminance variation but also temporal luminance changes in the computation of surface orientation.

Spatiotemporal characteristics of binocular disparity channels for very large depth

Masayuki Sato and Shoji Sunaga

It is well known that an excessive disparity causes diplopia and an unclear depth impression. However, we recently found that target motion facilitates stereopsis for very large depth [Sato et al., 2007, ITE Technical Report, 31(18), 25-28]. To examine the spatiotemporal characteristics of the responsible mechanism we compared contrast sensitivities between a static and a dynamic condition using one-dimensional DoG targets. The standard deviation of the positive Gaussian ranged from 0.11º to 2.3º, corresponding to peak spatial frequencies of 1.6-0.08 c/deg. In the static condition two targets (one above and the other below the fixation point) were presented for 2 s; a crossed disparity was given to one target and an uncrossed disparity to the other. In the dynamic condition the two targets oscillated horizontally in counterphase, as if the observer were moving from side to side. The results show that sensitivity drops rapidly above a 2º disparity in the static condition, whereas high sensitivity is maintained up to much larger disparities in the dynamic condition, and that the highest sensitivity was obtained for targets with 0.38º-1.1º standard deviation, corresponding to 0.48-0.16 c/deg. It appears that a dynamic mechanism tuned to that spatial frequency range or size mediates stereopsis for large depth.

Accommodation responses to floating images

Yuta Horikawa, Ryousuke Kujime, Hiroki Bando, Shiro Suyama and Hirotsugu Yamamoto

We have realized several types of floating displays, including a volumetric 3D display using a liquid-crystal varifocal lens [Suyama et al, 2000, Jpn. J. Appl. Phys., 39, 480-484], a volumetric 3D display using multiple bifocal lenses [Sonoda et al, 2011, Proc. SPIE, 7863, 786322], and floating LED signage using a crossed-mirror array (CMA) [Yamamoto et al, 2012, Proc. SPIE, 8288, 828820]. One of the advantages of a floating 3D display is that a real image is formed in 3D space. However, most viewers have no experience of viewing floating aerial images, and some therefore reported perceiving the image as pasted on the rear optical apparatus. In this research, accommodation responses to floating displays were investigated experimentally. An auto refract-keratometer (WAM5500: Rexxam Co. Ltd) was used for the measurements. Accommodation responses were measured under binocular and monocular viewing conditions for floating displays and for an aerial image formed by a lens. The results reveal that accommodation to a floating image is less stable than to a paper image. Furthermore, binocular viewing stabilized accommodation. These responses were most pronounced for the floating LED signage based on the CMA, because its point spread function was as large as the LED pitch.

Subjective depth position of an object displayed around a frame on a stereoscopic monitor

Hisaki Nate

I examined whether the subjective depth position of a target displayed on a stereoscopic monitor (21.5 inch 3D polarized monitor) changed when the distance between the target and the monitor's frame was varied. I conducted three experiments using the method of pairwise comparison. All targets were displayed in front of the monitor. In the first experiment, the target was always displayed at the center of the monitor, and I varied the distance between the target and the frame by varying the monitor's size (3 sizes). In the second experiment, the monitor's size was kept constant, and I varied the distance between the target and the frame by varying the target's size (3 sizes). In the third experiment, the target's position on the monitor was varied (7 positions). The results of the first and third experiments showed that the subjective depth distance from the background to the target was larger when the distance between the target and the frame was small. The results of the second experiment showed that subjective depth did not vary when the distance between the target and the frame was varied by changing the target's size. These results indicate that the target's subjective distance was not suppressed by a frame effect of the objects displayed in the 2D plane.

Adaptation aftereffects when seeing full-body actions: do findings from traditional 2D presentation apply to ‘real-world’ stereoscopic presentation?

Bruce Keefe, Joanna Wincenciak, James Ward, Tjeed Jellema and Nick Barraclough

Extended viewing of visual stimuli, including high-level stimuli such as faces and actions, can result in adaptation causing an aftereffect (bias) in subsequently viewed stimuli. Previously, high-level visual aftereffects have been tested under highly controlled but unnaturalistic conditions. In this study, we investigated whether adaptation to whole-body actions occurs under naturalistic viewing conditions. Participants rated the weight of boxes lifted by test actors, following adaptation to an actor of a different identity lifting a heavy box, lifting a light box, or standing still. Stimuli were presented under 3 different conditions: (1) life-sized stereoscopic presentation on a 5.3 x 2.4 m screen, (2) life-sized presentation on a 5.3 x 2.4 m screen without stereoscopic depth information, (3) smaller-than-life presentation on a 22 in monitor without stereoscopic depth information. After adapting to an actor lifting light or heavy boxes, subsequently viewed boxes lifted by different actors were perceived as significantly heavier or lighter, respectively. The aftereffects showed dynamics similar to those of other high-level face and action aftereffects, and were of similar size irrespective of the viewing condition. These results suggest that when viewing people in our daily lives, their actions generate visual aftereffects, and this influences our perception of the behaviour of other people.

The zograscope and monocular stereopsis

Jan Koenderink, Maarten Wijntjes and Andrea van Doorn

The zograscope is an optical device designed for viewing single pictures (as opposed to stereo pairs). It was widely used in the 18th century, and even in the 19th century, when binocular stereoscopes were already in widespread use. The optical principle of the zograscope is that it removes physiological depth cues such as accommodation, vergence, monocular parallax and binocular disparity. Thus, it effectively removes the cues that define the picture as a flat object in near space, leaving free room for monocular stereopsis to develop. We constructed a zograscope and used it to compare depth reliefs for visual objects seen in photographs of sculpture and reproductions of paintings, when viewed with and without this optical device. As expected from historical reports, the zograscope has a marked influence on pictorial relief. We show and discuss empirical results.

Assessment of glassless 3D viewing on a portable game machine

Masako A Takaoka and Hiroshi Ashida

We assessed the effects of playing a video game with glassless 3D viewing on a recently released portable game machine (Nintendo 3DS, Nintendo, Japan). We asked paid volunteers (university students) to play a racing game (Mario Kart 7, Nintendo, Japan; one of the best-selling 3D games of the time) for 10 min, once with 2D and once with 3D viewing. The order was counterbalanced across participants. After each play, they were asked to fill in a survey form. The questionnaires were compiled on the basis of the SSQ (Simulator Sickness Questionnaire: Kennedy et al, 1993, International Journal of Aviation Psychology, 3, 203-220), the VAS (Visual Analogue Scale: Japanese Society of Fatigue Science), Ohno and Ukai (2000, Journal of the Institute of Image Information and Television Engineers, 54, 887-891), and additional items for capturing positive aspects. The participants felt a stronger sense of presence with 3D, but it did not always lead to more fun. 3D viewing made game control harder and caused more fatigue. 3D was somewhat better appreciated after playing with 2D. The results highlight the fact that 3D viewing is not yet appreciated as a crucial entertainment factor by the majority of our participants. [Supported by JSPS Grant-in-Aid for Challenging Exploratory Research (24653187) for MAT]

Increasing the depth-of-field with lighting conditions in 3D stereoscopic displays

Cyril Vienne, Laurent Blondé, Didier Doyen and Pascal Mamassian

One major distinction between stereoscopic and real-world viewing conditions is the uncoupling of vergence and accommodation. This situation is generally acknowledged as a potential source of visual fatigue. The focus and vergence distances do not match because accommodation is driven by the spatial frequency content of the image while vergence is driven by binocular disparities. However, models of the cross-coupling between accommodation and vergence generally confine the focus distance to the screen plane, whereas the optics of the eye define a range of distances over which vision remains sharp (the depth of field, DOF). Thus, a mismatch between accommodation and vergence arises whenever the vergence distance falls outside the range of distances defined by the DOF. This study therefore investigates the general benefits of increasing the DOF by decreasing pupil aperture, using an illuminated surface directed toward the observer's eyes. In a first experiment, observers had to judge the 3D shape of an object located at different simulated distances. In a second experiment, we tested the pupil-size effect in a visual search task. We discuss the results with regard to the triad model (accommodation, vergence and pupil) and the potential application to the creation of 3D content as well as stereoscopic displays.
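
The rationale for manipulating pupil size can be illustrated with the standard geometric-optics approximation for the eye's depth of field; the blur tolerance and pupil diameters below are illustrative assumptions, not values taken from the study.
```python
# Sketch of the geometric-optics approximation (not the authors' model):
# the angular blur-circle diameter for an object at dioptric distance D_obj,
# with the eye focused at D_foc, is roughly beta ≈ p * |D_obj - D_foc|
# (radians, pupil diameter p in metres), so the dioptric depth of field for
# a blur tolerance beta_0 is about 2 * beta_0 / p.
import math

def dof_dioptres(pupil_mm, blur_tolerance_arcmin=1.0):
    beta0 = blur_tolerance_arcmin / 60.0 * math.pi / 180.0   # arcmin -> radians
    p = pupil_mm / 1000.0                                    # mm -> metres
    return 2.0 * beta0 / p

print(dof_dioptres(6.0))   # large pupil: narrow DOF (dioptres)
print(dof_dioptres(3.0))   # constricted pupil (bright surface): roughly double the DOF
```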

Activating the inhibitory effect of binocular rivalry to contend with ghosts in 3D imagery

Laurent Blondé, Cyril Vienne and Didier Doyen

3D display technologies suffer from leakage of one view into the other, creating an unintended and annoying ghosting effect. This ghosting decreases image quality, almost to the point of disturbing binocular fusion, or at least modifies the 3D experience. While there is no perfect 3D display technology, ghosting has to be considered and, where possible, concealed when presenting 3D stereoscopic content. In this paper we present a method to reduce the impact of perceived 3D ghosting by introducing a non-linear and asymmetric processing between the left and right content. This processing activates the inhibitory effect of binocular rivalry [Baker, Wallis, Georgeson & Meese, 2012, Vision Research, 56, 1-9] and allows the visual system to suppress the regions where ghosting was generated. A 3D display characterization step was carried out to identify the conditions under which ghosting is generated, and a dedicated processing step was applied to image pairs to reduce the visibility of ghosting by modifying saliency between views in selected local regions. The consequences of this processing were tested with a set of observers to verify that the visibility threshold of ghosting was raised without perceptibly affecting 3D observation.

The vision of Federico da Montefeltro

Gert van Tonder, Daniele Zavagno, Kenzo Sakurai and Hiroshi Ono

The erstwhile Duke of Urbino, Federico da Montefeltro (1422-1482), warlord and patron of the painter Piero della Francesca, suffered various injuries in his military career, among them some that affected his visual ability while he was still a young man. Here, we present these historical details – including portraits of the Duke at two different stages of his life – to obtain insight into his vision: what his visual capabilities were, which visual cues he most relied on, and the strategies he devised to improve his residual vision. Specifically, the evidence suggests that the Duke was willing to sacrifice his facial appearance for the sake of improved vision, i.e. that he had his nose surgically altered to enlarge his field of view. This also served him in his role as a horse-mounted military leader, and in some respects may have enhanced some of his depth cues beyond the capacity of a person endowed with normal vision. We finally show how Piero della Francesca, in his masterful 1465-66 profile portrait of Federico, gives us an intuitive but uncannily personal glimpse of the way in which the Duke of Urbino looked out onto the world.

Making visual sense of Piranesi’s labyrinthine spaces

Andrea Van Doorn, Jan Koenderink and Johan Wagemans

In 1745 Giambattista Piranesi (1720-1778) started work on a series of etchings known as the Prisons (Carceri d'Invenzione); 14 were published in 1750, and 16 (the original 14 reworked) in 1761. The prints show very complicated spaces, and prolonged viewing leads to many novel visual interpretations. Detailed analysis shows that Piranesi purposely introduced ambiguities and inconsistencies. The Carceri were evidently meant to entertain the viewer through the labyrinthine nature of the depicted spaces: the viewer can dwell in them forever without exhausting their power to entertain. Yet one is visually aware of something, even on a cursory view. Do different observers see the same space? Does a single observer see the same space at different viewings? Do observers have coherent spatial impressions at all? We approach such problems empirically. We determined the depth order for many (over a thousand) point pairs. This allows us to check consistency (e.g., whether a single linear order exists). The main analysis of these results focuses on the nature of the inconsistencies (there are many) and on the inter-observer differences.

Inferring visual attentional capture from search slopes and intercept differences

Paul Skarratt, Geoff Cole and Angus Gellatly

A visual stimulus is said to capture attention when associated targets remain comparatively immune to increases in display size. That is, when they give rise to shallower search functions than do targets associated with other stimulus features. On that basis, we [Skarratt, Cole & Gellatly, 2009, Attention, Perception, & Psychophysics, 71(4), 964-970] reported that targets that loom towards or recede away from the observer are equivalent in attracting attention. However, looming targets elicit overall faster responses, an additive effect we attributed to motor priming. This supposition was tested in two experiments that examined perceptual accuracy for looming, receding, and static targets. We reasoned that any motoric contribution to the looming advantage would be absent when measuring accuracy. However, results showed that accuracy was consistently higher for looming than for receding targets, suggesting they do receive attentional priority. These findings indicate that attentional primacy can manifest in terms of main effects, even in the absence of search slope differences.

The role of global 3D visual processing in motion-induced-blindness

Orna Rosenthal, Martin Davies, Anne Aimola Davies and Glyn Humphreys

Motion-induced blindness (MIB) refers to the alternating illusory disappearance and re-appearance of local targets against a moving background. Previously Graf et al. [2002; Vision Res 42(25): 2731-5] showed that MIB is modulated by binocular global depth cues. We studied the effect of monocular global 3D convexity/concavity cues on MIB frequency. The MIB stimuli comprised two static targets presented on a background of coherently moving dots forming a global 3D hourglass-like structure. Critically, the two halves of the hourglass had similar 2D local properties. However, using kinetic depth and occlusion cues, one half of the hourglass was perceived as hollow and the other as convex. MIB was increased for targets located on the convex relative to the concave half, consistent with prior effects of binocular depth. Interestingly, the convexity effect was limited to the left visual field -- consistent with previously reported anisotropies in global processing. Taken together, our findings suggest (1) an underlying role of global 3D processing in MIB which (2) interacts with attentional bias towards the global context of the scene, which is anisotropic in nature.

Matching dynamic views of biological motion

Ian M. Thornton, Pille Pedmanson and Zac J. Wootton

Although motion capture techniques have made 3D point-light data ubiquitous, many studies continue to explore perception in the context of fixed, 2D views. In the current work, we examine how dynamic changes of viewpoint, such as those that would be encountered by a moving observer, affect the ability to perceive human action. On each trial of our matching task, the action performed by a central target figure was also performed by one of two equidistant flankers. The observer’s task was simply to make a speeded response indicating whether the target matched the left or right flanker. Actions were randomly selected from a database consisting of familiar activities, such as walking, jumping and waving. All actions were performed “in place” and looped continuously. In a series of experiments, we explored how matching was affected by systematic rotation and translation of the target in depth. Our findings indicate that a) the addition of dynamic, multiple viewpoints, via smooth rotation around the Y-axis, improves matching performance relative to a single arbitrary view; b) increasing angular offsets between target and matching flanker systematically decreased performance; c) performance was remarkably invariant to translations of the central target in depth, almost to the point of convergence.

Change blindness to 2D and 3D objects

Alexei N. Gusev, Olga A. Mikhaylova, Igor S. Utochkin and Denis V. Zakharkin

In our experiment, we tested the effect of object depth on change blindness. We used a CAVE virtual reality system to present the stimuli, allowing efficient simulation of 3D cues. Observers were exposed to arrays of 5 or 20 objects under flicker conditions that typically induce change blindness. They had to detect which object was changing (disappearance, color change, or spatial shift) between interruptions. Objects could be presented either at random positions or arranged in a global regular configuration. In the 2D condition, sets of squares were presented in the frontal plane before the observers. In the 3D condition, sets of binocularly simulated cubes were presented in the frontal plane. Both reaction times and error rates were measured. We found that change detection performance benefits from 3D arrays as compared with 2D arrays. Although the magnitude of the effect varied slightly with set size, change type, and regularity, the superiority of 3D over 2D objects was essentially the same for all conditions. This principal finding is in line with previous observations that 3D cues play an important role in the deployment of attention over visual scenes [Enns & Rensink, 1990, Psychological Science, 1, 323-326; Nakayama & Silverman, 1986, Nature, 320, 264-265].

Enhanced perception by real-time tracking and interpreting driver actions using a driving simulator

Madalina-Ioana Toma

Driving is a complex dynamic task that has become more and more important in human life since the invention of the car. Driver perception is a cognitive process and an essential part of driving: it expresses the ability to see potential problems before they become hazardous or dangerous. There are clear indications that novice drivers have insufficiently developed skills and perceive hazardous situations too late, or not at all; they can therefore easily cause or be involved in car accidents. One method to improve the driver's perception is to develop an intelligent system that enhances it, reasons on behalf of the driver, and gives feedback only when he or she makes a wrong or inadequate maneuver. Our work addresses a research question that is relevant to reducing the risk of car accidents: is it possible to enhance the perception of novice drivers with an intelligent system which analyzes the driver's maneuvers and gives visual and auditory feedback according to different traffic situations? To track the driver's actions in real time we record hand motion by processing data from a Kinect depth sensor, and gaze and head movements with an EyeLink II tracker. To assess the driver's perception we interpret the actions performed in several hazardous traffic situations built in the TORCS open-source driving simulator. Driver mistakes were detected automatically using an artificial intelligence method, and the corresponding system response consisted of automatic alerts. An enhancement of the driver's perception was observed with the proposed method in experimental scenarios involving hazardous or dangerous situations. The results suggest that there is a significant opportunity to enhance driver perception through the use of visual and auditory feedback. In addition, we observed improved driving performance after a period of training for the novice drivers involved in the experiment.

Human adults can use two different geometrical cues when reorienting in immersive virtual environments

Hey Tou Chiu, Karin Petrini and Marko Nardini

A recent study [Lee, Sovrano and Spelke, 2012, Cognition, 123, 144-161] has shown that two-year-old children can reorient in a room using certain geometric cues (wall distances), but not others (wall lengths). Here we investigated adults' ability to use both kinds of information. In four virtual rectangular rooms, room geometry was specified by differing wall lengths, differing wall distances, both, or neither. After locating an object in one of the four room corners and being disoriented, 17 participants had to reorient and find the corner at which the object had disappeared. Adult participants could use both wall-distance and wall-length information when reorienting, reporting a higher proportion of correct responses when both cues were available. Our findings indicate that adults can use geometrical cues to reorient in a new environment that very young children cannot. This suggests that very early-developing mechanisms able to encode locations relative to distances from boundaries are supplemented by later-developing mechanisms processing a wider range of cues. Relating these changes to the development of neuronal encodings of space [Wills et al, 2010, Science, 328, 1573-1576] is an important question for further study.

A disparity energy model improved by line, edge and keypoint correspondences

Jaime A. Martins, Miguel Farrajota, Roberto Lam, J.M.F Rodrigues, Kasim Terzic and J.M.H. Du Buf

Disparity energy models (DEMs) estimate local depth information on the basis of V1 complex cells. Our recent DEM [Martins et al., 2011, ISSPIT, 261-266] employs a population code. Once the population's cells have been trained with random-dot stereograms, it is applied at all retinotopic positions in the visual field. Despite producing good results in textured regions, the model needs to be made more precise, especially at depth transitions. We therefore combine the DEM with two complementary disparity models: (1) Responses of V1 end-stopped cells are used to detect keypoints like edge junctions, line endings and points with large curvature. Responses of simple cells are used to detect orientations of the keypoints underlying line and edge structures. The annotated keypoints are then used in the left-right matching process, with a hierarchical, multi-scale tree structure. (2) Responses of simple and complex cells are used to detect line and edge events. In the left-right matching process, disparity evidence is accumulated by combining corresponding event types, polarities and their numbers. This is done by grouping cells in the multi-scale line-edge space. By combining the three models, disparity can be improved at depth transitions and in regions where the DEM is less accurate. [Projects: PEst-OE/EEI/LA0009/2011, NeFP7-ICT-2009-6 PN: 270247, RIPD/ADA/109690/2009; PhD grants SFRH-BD-44941-2008, SFRH/BD/79812/2011]
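
For readers unfamiliar with disparity energy models, a generic one-dimensional energy unit can be sketched as follows; this is a textbook position-shift formulation with illustrative parameters, not the authors' trained population code.
```python
# Generic disparity-energy unit (position-shift variant): a quadrature pair of
# binocular simple cells, each summing left- and right-eye Gabor responses,
# is squared and summed to give a complex-cell energy response.
import numpy as np

def gabor(x, sigma, freq, phase):
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def disparity_energy(left_img, right_img, x, sigma=0.2, freq=2.0, pref_disp=0.1):
    energy = 0.0
    for phase in (0.0, np.pi / 2):                       # quadrature pair
        rf_l = gabor(x, sigma, freq, phase)              # left receptive field
        rf_r = gabor(x - pref_disp, sigma, freq, phase)  # right RF shifted by preferred disparity
        simple = np.dot(rf_l, left_img) + np.dot(rf_r, right_img)
        energy += simple**2
    return energy

x = np.linspace(-1, 1, 256)                              # degrees
left = np.random.randn(256)
stim_disp = 0.1
right = np.roll(left, int(round(stim_disp / (x[1] - x[0]))))  # shifted copy (circular, for simplicity)
print(disparity_energy(left, right, x, pref_disp=0.1))   # largest when pref_disp ≈ stimulus disparity
```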

Subthreshold contrast smoothness as a depth cue

Yoshiaki Tsushima, Kazuteru Komine and Nobuyuki Hiruma

It is well known that a change in the luminance contrast of a visual stimulus is one of the cues to depth. However, it is unclear how the smoothness of the contrast change relates to depth perception. Here, we investigated the relationship between contrast smoothness and depth perception. Two same-sized bars were presented vertically on the display. Both bars contained a luminance contrast gradient from one side to the other (left-to-right or right-to-left). The contrast difference and the smoothness of the contrast change were varied from trial to trial. One group of participants was asked to report which bar was more tilted in depth (depth task); another group reported which bar was darker (luminance task). Both tasks were conducted with monocular viewing. In general, performance on the luminance task would be expected to equal or exceed that on the depth task because of the cognitive hierarchy. However, we found that performance on the depth task was better than on the luminance task when the contrast smoothness was subthreshold. The present results suggest two conclusions. First, contrast smoothness can serve as a depth cue. Second, contrast smoothness becomes a relatively effective depth cue especially when it is subthreshold.

Contour Shape and Perception of Holes on 3-dimensional Surfaces

Haider Eesa, Ik Soo Lim, David Hughes, Mark Jones and Ben Spencer

Convexity and concavity are powerful determinants of figure-ground segmentation; concave sides look more like a hole (background) while convex ones look more like a figure [Bertamini, 2006, Perception, 35, 883-894]. In most studies of figure-ground segmentation, however, 'flat' or 2-dimensional figures are used, with no variation of depth within each figure. The objective of this work is to study whether the roles of convexity and concavity are maintained with 'curved' or 3-dimensional surfaces as figures. 3-dimensional surfaces are rendered using computer graphics software. Two images of the same 3-dimensional surface (one with a convex hole projected onto it, the other with a concave hole) are presented side by side to an observer, who is required to choose the one looking more like a hole; 85 observers participate in this experiment. No statistically significant difference is found between convex and concave holes. In the second experiment, we augment the scene with cast shadows; the shadow due to the hole is rendered so as to fall on the ground, which is visible through the hole. Unlike in the first experiment, significantly more observers choose concave holes over convex ones as looking more like a hole.

StarTrek illusion demonstrates a phenomenon of depth constancy

Jiehui Qian and Yury Petrov

Size constancy is a well-known example of perceptual stabilization accounting for the effect of viewing distance on retinal image size. In a recent study (Qian & Petrov, 2012, Journal of Vision) we demonstrated a similar stabilization mechanism for contrast perception and suggested that the brain accounts for the effects of viewing distance on various other object features in a similar way, a hypothesis that we called "general object constancy". Here we report a new illusion of depth further supporting this hypothesis. Pairs of disks moved across the screen in a pattern of radial optic flow. Each pair appeared as a small black disk floating in front of a larger white disk, the percept of depth separation being created by binocular disparity. Observers judged whether the depth separation changed in the course of the optic flow. The illusory depth change was measured with a nulling paradigm, in which the disparity separation for each pair varied during the optic flow. The measured depth illusion was much stronger than the accompanying size and contrast illusions. Given that horizontal disparity decreases much faster than size with viewing distance (~1/d^2 vs ~1/d), this result supports the hypothesis of general object constancy.
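
The scaling argument can be made concrete with small-angle approximations; the interocular distance, depth separation and object size below are illustrative values, not the stimulus parameters of the study.
```python
# Relative disparity of two points separated in depth by delta at distance d is
# approximately I * delta / d**2 (I = interocular distance), whereas the angular
# size of an object of physical size s is approximately s / d.
I, delta, s = 0.063, 0.10, 0.20          # metres
for d in (1.0, 2.0, 4.0):                # viewing distance in metres
    disparity = I * delta / d**2         # radians
    angular_size = s / d                 # radians
    print(d, disparity, angular_size)
# doubling the distance quarters the disparity but only halves the angular size,
# so a constancy mechanism must compensate disparity more steeply than size
```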

Stereo-fusion efficiency and oculomotor stability: Effects of central and peripheral fusion locks

Michael Wagner, Nir Schwalb and Jonathan Shapiro

We studied the efficiency and oculomotor stability of stereo fusion performed with or without zero-disparity fusion locks. Twelve normally sighted participants were trained to free-fuse stereo images embedded in random-dot-stereogram (RDS) pairs. Stimuli were static Landolt-C rings (1.2 and 3.2 deg diameter) with gaps in one of four directions (crossed or uncrossed horizontal disparities: 8, 20, 40 arcmin), presented on a dark screen. Bright line-frame "fusion locks" were displayed on the screen surface either centrally (an inner rectangular frame 1.3 deg vertical by 2 deg horizontal, superimposed on the target area) or peripherally (inner frame 10 deg vertical by 20.5 deg horizontal). We measured choice reaction times for Landolt-C gap detection (pressing one of four keys corresponding to the gap directions). Inter-trial intervals (ITIs) contained eye-strain-relaxing tasks. Binocular eye movements were recorded (EyeLink II system) during trials and ITIs. The presence of the peripheral lock significantly shortened target-detection RTs in all participants (6.7 s), reflecting improved stereo-fusion efficiency. Without the peripheral fusion lock, participants needed longer stereo-fusion periods and showed vergence drifts and binocular instability. Unlike the peripheral lock, the central lock impaired binocular performance and prolonged target detection (11.8 s), especially with uncrossed disparity, apparently reflecting conflicting accommodation-vergence cues. Our results support the role of remote peripheral zero-disparity images as a trigger of a sustained vergence "fusion-lock" mechanism during stereopsis.

The lack of transfer learning in laparoscopic surgery

Stephanie Preuß, Heiko Hecht and Ines Gockel

Nowadays many routine surgeries are performed laparoscopically. Typically the surgeon operates with the help of a camera and two laparoscopic instruments, and at least two incisions are required for "dual-port key-hole surgery". The camera provides an image of the three-dimensional surgery site, which is viewed as a 2D image on a monitor. Newer technology threads all instruments through a "single port" that requires only one incision, with the advantage that the risk of inflammation is only half as high. In single-port surgery the two laparoscopic instruments have to be crossed, which complicates perceptual-motor action. We conducted two experiments to investigate transfer learning between the two methods. In a cross-over design, different groups of novices started to practice with one method and then switched to the other. The single-port method was more difficult than the dual-port method. The more complex spatial mapping in the case of crossed instruments might be responsible for this effect. Also, the 2D picture might have been harder to translate into a 3D representation in the single-port case. Subjects improved with practice of a given method, but transfer learning did not take place. We are currently investigating whether mental practice might facilitate a change of methods.

New Insights into the Ouchi Illusion

Teluhiko Hilano, Sohei Fukuda, Taku Oshima and Kazuhisa Yanaka

The Ouchi illusion (the upper-left figure on p. 75 of J. Ouchi, Japanese Optical and Geometrical Art, Dover, 1977, New York) consists of a ring and a disc, each filled with mutually perpendicular oblong checkered patterns. This figure is perceived as if only the middle disc were floating and moving autonomously. We show some variations of this illusion, one of which consists of two rings and a disc, and another with a checkered pattern in a different direction or with different inner figures. The two-ring variant is perceived as if only the inner ring were floating and moving autonomously. Furthermore, we point out that stereoscopic versions of the Ouchi illusion are perceived less strongly as illusory. The Ouchi illusion may arise only when no parallax is present in the texture of the central disc. We also created a 3D model of the Ouchi illusion in which the central disc is set slightly apart in depth from the surrounding ring. When we look at this model from a distance, it is perceived as if it were the original figure. However, the strength of the illusion decreases as we approach it. This fact supports the above hypothesis.

On the 3D Aperture Problem of Binocular Motion Perception

Martin Lages, Hongfang Wang and Suzanne Heron

The 3D aperture problem occurs when an object moves behind a circular aperture in binocular view so that the endpoints of an oriented line or edge remain occluded. As in the 2D aperture problem, the perceived local velocity remains ambiguous, but its solution may reveal processing characteristics of the human 3D motion system [Lages & Heron, 2010, PLoS Computational Biology, 6(11), e1000999]. Here we investigate how observers solve the 3D aperture problem. In two psychophysical experiments we used a two-screen Wheatstone configuration to display a moving line oriented at 45° (oblique) or 90° (vertical). The line moved on a trajectory in depth at a binocular viewing distance of 55 cm and was shown through a circular aperture so that the line endpoints remained occluded. The slant of the line was varied across trials, with orientation disparity ranging between -6 and +6°. In an open-loop matching task observers repeatedly adjusted the tilt and slant of a probe to indicate the perceived direction of line motion. Adjustments from four observers gave comparable results but did not match geometric model predictions. We therefore combined likelihoods of velocity constraints for the left and right eye with a conjugate prior that reflects observers' knowledge of 3D motion. We also expressed (orientation) disparity as a likelihood, which may be affected by a zero-disparity prior. The resulting disparity estimates were used to establish the velocity constraints in a binocular viewing geometry. Best-fitting ML estimates of this Bayesian model revealed a large bias for (orientation) disparity and small noise for motion processing. This suggests that observers approximate a 3D vector-normal solution but incorporate bias from disparity processing when resolving the 3D aperture problem.
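
The disparity-likelihood and zero-disparity-prior step can be illustrated with a one-dimensional Gaussian sketch; the Gaussian form and the numbers below are our own simplifications, not the fitted model.
```python
# Combining a Gaussian likelihood centred on the measured orientation disparity
# with a Gaussian prior centred on zero disparity shrinks the estimate towards
# zero; the biased estimate would then feed the binocular velocity constraints.
def posterior_disparity(measured_disp, sigma_like, sigma_prior):
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)   # weight given to the measurement
    return w * measured_disp                                # prior mean is zero

print(posterior_disparity(measured_disp=6.0, sigma_like=2.0, sigma_prior=3.0))  # ≈ 4.15 deg
```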

Detection of linear ego-acceleration from optic flow

Freya Festl, Fabian Recktenwald, Chunrong Yuan and Hanspeter Mallot

Human observers are able to estimate various ego-motion parameters from optic flow, including rotation, translational heading, time to collision (TTC), and time to passage (TTP). The perception of linear ego-acceleration or deceleration, i.e. changes of translational velocity, is less well understood. While time-to-passage experiments indicate that ego-acceleration is neglected, subjects are able to keep their (perceived) speed constant under changing conditions, indicating that some sense of ego-acceleration or velocity change must be present. In this paper, we analyze the perception of ego-acceleration and its relation to geometrical parameters of the environment using simulated flights through cylindrical and conic (narrowing or widening) corridors. Theoretical analysis shows that a logarithmic ego-acceleration parameter, called the acceleration rate rho, can be calculated from retinal acceleration measurements. This parameter is independent of the geometrical layout of the scene; if veridical ego-motion is known at some instant in time, the acceleration rate allows ego-motion to be updated without further depth-velocity calibration. The results indicate, however, that subjects systematically confuse ego-acceleration with corridor narrowing and ego-deceleration with corridor widening, while veridically judging ego-acceleration in straight corridors. We conclude that judgments of ego-acceleration are based on first-order retinal flow and do not make use of the acceleration rate or retinal acceleration.

Stereo visual cues help object motion perception during self-motion

Diederick C. Niehorster and Li Li

Recent studies have suggested that the visual system subtracts the optic flow pattern experienced during self-motion from the projected retinal motion of the environment to recover object motion, a phenomenon called "flow parsing" [Warren and Rushton, 2007, Journal of Vision, 7(11):2, 1-11]. In this experiment, we tested how adding stereo visual cues, which support accurate depth perception of a moving object relative to the flow field, affects the flow-parsing process. The displays (26°x26°, 500 ms) simulated an observer approaching a frontal plane composed of 300 randomly placed dots. A red probe dot moved vertically over this plane or over the image plane of the projection screen, through a midpoint at 3° or 5° eccentricity. A horizontal component (along the world X-axis) under the control of an adaptive staircase was added to the probe dot's vertical motion to determine when the probe motion was perceived as vertical. Participants viewed the display with and without stereo visual cues. We found that with stereo visual cues, flow-parsing gains were significantly higher when the probe moved over the frontal plane, but significantly lower when it moved over the screen surface. We conclude that stereo visual cues help veridical perception of object motion during self-motion. [Supported by the Hong Kong Research Grants Council, HKU 7480/10H]
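
The flow-parsing computation for this kind of display can be sketched as follows; the flow model, eccentricity, approach rate and gain value are illustrative assumptions, not the experimental parameters.
```python
# For an observer approaching a frontal plane, the optic flow at image position
# (x, y) is approximately radial expansion, flow = (v/Z) * (x, y). Flow parsing
# subtracts (a fraction g of) this flow from the probe's retinal motion; the
# nulling staircase estimates g from the horizontal component needed to make
# the probe look vertical.
import numpy as np

def radial_flow(x_deg, y_deg, v_over_z):
    return v_over_z * np.array([x_deg, y_deg])           # deg/s

flow = radial_flow(x_deg=3.0, y_deg=0.0, v_over_z=0.5)   # horizontal flow at 3 deg eccentricity
scene_motion = np.array([0.0, 2.0])                      # purely vertical motion in the scene
retinal_motion = scene_motion + flow                     # what actually reaches the retina

g = 0.8                                                  # hypothetical flow-parsing gain
perceived = retinal_motion - g * flow
print(perceived)                                         # residual horizontal component = incomplete parsing
```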

Motion from Structure

Baptiste Caziot and Benjamin T. Backus

When an observer moves while looking at a static random-dot stereogram (RDS), surfaces with different disparities appear not to be stationary in space. We call this phenomenon "Motion From Structure" (MFS). In a real static 3D scene, the distal stimulus appears stable even though the proximal stimulus actually contains a relative motion signal, so the illusory motion in an RDS may reflect the operation of a mechanism responsible for the apparent stability of the real world. The problem is geometrically simple, and the brain is known to process more information than is needed to solve it; what we do not know is which of these cues the visual system uses. We conducted a series of experiments in which subjects translated their heads (45 cm peak-to-peak amplitude, 0.5 Hz) while adjusting the gain (target speed/head speed) of a crossed-disparity target to make the target appear stationary. The target was a 46 cm wide square displayed on a 180 cm x 240 cm rear-projection screen at 200 cm. Targets had one of 4 disparities (8, 16, 24 and 32 arcmin). Subjects also used a stick to report the perceived distance between the background and the target. Perceived depth averaged 83% of the depth specified by disparity. However, gain settings were consistently close to 50% of the gain specified by geometry. Small head movements (5 cm amplitude, 0.5 Hz) are sufficient to perceive MFS, but subjects then perceived the target as stationary at smaller gains. Removing the background did not destroy MFS, but again subjects reported smaller gains. Fixation strategy (4 different fixation positions: 2 on the target, moving with it, and 2 on the stationary background) modified gain settings, but the pattern of gains was highly idiosyncratic across subjects. Finally, dynamic RDSs gave qualitatively similar gain settings, but the gains were more variable.
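
The geometric gain against which the settings were compared can be derived from the viewing geometry; the derivation and distances below are our own reading of the setup with hypothetical numbers, not the authors' calculation.
```python
# A world-stationary point nearer than the screen must be redrawn as the head
# translates. With the eye at lateral position h, the screen at distance d_s and
# the simulated point at distance d_t, the eye-to-point line meets the screen at
# x = h * (d_t - d_s) / d_t, so the required on-screen gain (target speed / head
# speed) is |d_t - d_s| / d_t, with the target moving opposite to the head for
# crossed disparity.
def required_gain(d_screen_cm, d_target_cm):
    return abs(d_target_cm - d_screen_cm) / d_target_cm

print(required_gain(200.0, 180.0))   # e.g. target simulated 20 cm in front of a 200 cm screen: ≈ 0.11
```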

Dependence of 3D motion integration on convex/concave surface structure

Masayuki Kikuchi and Satoshi Kodama

A previous study revealed that the perception of an object's 2D motion, integrated from the 1D motions of straight line segments observed through apertures, is strongly affected by the polarity of the border-ownership assigned to those moving lines (Kikuchi and Nagaoka, ACV2006), suggesting an intimate relation between motion detection and figure-ground separation mechanisms in the visual system. On the other hand, we usually view objects binocularly and perceive object surfaces slanted in depth. It is therefore reasonable to assume that the perception of the global motion of a 3D object is attained by integrating the motion of local 3D surface patches with different slants. This study carried out a psychophysical experiment investigating the nature of the 3D motion integration mechanism, focusing on whether the convex/concave 3D structure of the surface affects the performance of motion integration. We used pairs of small surface patches slanted in depth as stimuli. The patches were drawn as dynamic random-dot stereograms (DRDS) and aligned horizontally. In the "convex condition", the two patches depicted parts of two planes joined in a convex manner; in the "concave condition", they depicted parts of planes joined in a concave manner. Each patch moved back and forth. The motion of the pair of patches was consistent with a translating rotation about a vertical axis. The rotation was clockwise or counter-clockwise, defined with respect to a hypothetical downward gaze, and was displayed for 4 s. The subjects' task was to report the direction of rotation in a 2AFC procedure. The proportion of correct responses was higher in the convex condition than in the concave condition (p<0.05). This result suggests that a 3D version of a Gestalt factor affects 3D motion integration.

Perceived depth and stability from motion parallax natural scene movies

Kenzo Sakurai, Soyogu Matsushita, Hiroshi Ono, Sumio Yano and Kenji Susami

We developed a laterally oscillating movie-camera platform and produced motion-parallax natural-scene movies to test whether observers perceive depth from these movies while moving their heads. The camera platform is a motor-driven device that oscillates a camera laterally and slightly rocks it so as to converge its optical axis on a certain point in depth. The video signal obtained with the camera on this device provides the same information as an observer would obtain when viewing the scene while moving his or her head from side to side. Video movies of an actual stable object (a bonsai) were made with the camera on the platform moving laterally back and forth over 65 mm. The observers viewed the movies in a head-movement condition (synchronizing their head movement with the camera movement) and in a head-stationary condition. In Experiment 1, 16 observers reported approximately the same magnitude of apparent depth in both conditions. In Experiment 2, 15 observers reported greater stability (less apparent motion) in the head-movement condition than in the head-stationary condition when they moved their heads with larger amplitude. Although the results show no clear advantage of motion parallax for seeing depth, this device could form the basis of a new natural-scene 3D display system.

The effect of stereoscopic camera separation on the estimation of a right angle

Rob Black, Georg Meyer and Sophie Wuerger

To evaluate distortions introduced in stereoscopic 3D viewing conditions, we employed a hinge stimulus [Shibata et al, 2011, Journal of Vision, 11(8):11, 1-29] and manipulated the virtual camera separation (or interaxial distance) of a stereoscopic image with veridical shading and texture cues at a constant viewing distance and screen size. The task of the participants (n=20) was to judge whether the hinge angle (30°-110° in 10° steps) was greater than or less than 90°. The virtual camera separation was manipulated (20, 40, 60, 80, 100 mm), resulting in retinal disparities ranging from 0.358° to 1.432°. All 25 stimulus configurations (5 hinge angles x 5 camera separations) were randomly interleaved. Object size (15 cm²), screen size (58 cm screen diagonal) and viewing distance were kept constant. From the psychometric functions for each camera separation, the point of subjective equality (PSE, the angle which is perceived as 90°) and the slope were derived. We report two results: (1) the perceived hinge angle becomes more acute with an increase in camera separation; (2) the sensitivity in discriminating between different hinge angles is not affected by camera separation. Implications for viewer comfort will be discussed. Acknowledgement: RB is supported by an EPSRC CASE studentship.
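
The PSE and slope can be extracted with a standard psychometric fit; the cumulative-Gaussian link and the response proportions below are our assumptions, used only to illustrate the procedure.
```python
# Fit p("greater than 90 deg") as a cumulative Gaussian of hinge angle; the
# fitted location is the PSE (angle perceived as 90 deg) and the scale gives
# the slope (discrimination sensitivity).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(angle, pse, sigma):
    return norm.cdf(angle, loc=pse, scale=sigma)

angles = np.arange(30, 111, 10)                                                # hinge angles (deg)
p_greater = np.array([0.02, 0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])  # hypothetical data
(pse, sigma), _ = curve_fit(psychometric, angles, p_greater, p0=[90.0, 10.0])
print(pse, sigma)   # a shift of the PSE across camera separations indicates perceptual distortion
```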

Perceived length to width relations of city squares are task and position dependent

Harold T. Nefs, Arthur van Bilsen, Sylvia C. Pont, Huib de Ridder, Maarten W.A. Wijntjes and Andrea J. van Doorn

We investigated how people perceive the aspect ratio of city squares. Earlier research has focused on distance perception and 'open' spaces rather than on the perceived length-to-width relations of urban areas enclosed by buildings and filled with people, cars, etc. In two experiments we measured the perceived aspect ratio of five city squares in the historic city center of Delft, the Netherlands. We also evaluated the effect of the observer’s position on the square. In the first experiment participants were asked to set the aspect ratio of a small rectangle such that it matched the perceived aspect ratio of the city square. In the second experiment participants were asked to estimate the length and width of the city square separately. In the first experiment we found that the perceived aspect ratio was in general lower than the physical aspect ratio. However, in the second experiment, we found that the calculated ratios were close to veridical except for the most elongated city square. Thus, although indirect measurements are nearly veridical, the perceived aspect ratio is an underestimation of the physical aspect ratio when measured in a direct way. Moreover, the perceived aspect ratio also depends on the location of the observer.

Non-uniform Image Blur and Perceptual Transparency

Haider Eesa, Ik Soo Lim, David Hughes, Mark Jones and Ben Spencer

Perceptual transparency requires the visual construction of two distinct surfaces from a single pattern of light intensities: a partially transmissive surface and an underlying opaque surface. The visual system takes the presence of blur as an image cue when assigning transmittance to partially transmissive surfaces (e.g., translucent materials) [Singh et al., 2002, Perception, 31, 531-552]. Owing to depth-of-focus limitations in the eye, however, the visual system also uses image blur as a depth cue (e.g., background detail is blurred) [Mather et al., 2002, Perception, 31, 1211-1219]. The objective of this study is to identify the components of image blur that contribute to the transparency cue but not to the depth cue. Our hypothesis is that randomly non-uniform blur emulates the light-scattering effect of a translucent layer better than the uniform image blur used by Singh et al. (2002). Each of 90 observers is presented with a pair of images of a translucent layer enclosing an underlying opaque layer; one of the images is created with the randomly non-uniform blur, the other with the uniform blur. In a statistically significant manner, the majority of observers choose the randomly non-uniform blur as looking more like a translucent layer.
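
One way to construct such randomly non-uniform blur is sketched below; this is our own illustration, and the blur widths and weight-map smoothness are assumptions rather than the stimulus parameters.
```python
# Blend a lightly and a heavily blurred copy of the underlying image using a
# smooth random weight map, so that the local blur magnitude varies across the
# image, crudely emulating spatially variable light scattering in a translucent layer.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((256, 256))                           # stand-in for the opaque-layer image

light_blur = gaussian_filter(image, sigma=0.5)
heavy_blur = gaussian_filter(image, sigma=3.0)

weights = gaussian_filter(rng.random((256, 256)), sigma=20.0)          # smooth random map
weights = (weights - weights.min()) / (weights.max() - weights.min())  # rescale to [0, 1]
nonuniform = weights * heavy_blur + (1.0 - weights) * light_blur
```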

Three-dimensional effects of “completion by folding”

Daniela Bressanelli, Enrico Giora and Simone Gori

Elementary line drawings can be interpreted by the visual system as representing complex 3-D objects. A crucial role is played by amodal completion, which leads planar forms to be perceived as the borders of unified solids. "Completion by folding" (Massironi and Bressanelli, 2002, Acta Psychologica, 110(1), 35-61) is a representative case of such perceptual effects. The phenomenon consists of a pattern composed of two polygons, separated by a line or a third polygon, which are perceived as a lamina folded across a bar. In the present research the illusory depth effect occurring in this pattern was investigated. The distance between the two polygons, seen as the two arms of a single lamina, was manipulated to test its influence on perceived depth. The stimuli were presented stereoscopically and the binocular disparity value was taken as a direct measure of the perceived depth. Observers had to adjust the arm perceived behind the bar until it appeared to be coplanar with the arm perceived in front of the bar. Results showed that the perceived depth increased with the distance between the two arms, until they were no longer seen as portions of the same folded laminar figure.

Human discrimination of depth of field in stereo and non-stereo photographs

Tingting Zhang, Harold Nefs and Ingrid Heynderickx

Previous research has focused on blur discrimination in artificial stimuli and natural photographs. The discrimination of depth of field (DOF), however, has received less attention. DOF is defined as the distance range in depth that is perceived to be sharp in a photograph. In this case blur is related to distance and many levels of blur are simultaneously present; it is therefore unclear what the discrimination thresholds for DOF are. We measured the discrimination threshold for DOF in natural images using a 2AFC task. Ten participants were asked to observe two images and select the one with the larger DOF. Three factors were manipulated in the experiment: 1) stereo versus non-stereo stimuli, 2) scene content, and 3) scale of the scene. First, we found that the thresholds for deep DOF were higher than those for shallow DOF. Second, there was no significant difference in threshold between stereo and non-stereo images. Third, scene content did not significantly affect the threshold. Finally, the threshold decreased when the scale of the scene increased. We conclude that DOF discrimination depends not only on the distance range that is sharp but also on the distance range that is blurred in the image.

The influence of shape complexity in visual depth perception of CAD models

Florin Girbacia, Andreea Beraru and Doru Talaba

"Real perception of dimensions in Computer Aided Design (CAD) related activities plays an important role in the decision-making process of a design solution. While the geometrical database is 3D since long time, the user interaction within the software has not significantly changed. We have conducted two experiments to measure and record the depth value estimation of several CAD models with different shape complexity using traditional desktop workspace and an immersive 3D Holo-CAVE system (the first experiment) and to assert the variation of stereopsis depth perception (the second experiment). In the first experiment the complex shape objects depth was underestimated while simple objects depth was estimated more accurately. Another interesting result was that the estimated depth accuracy suffered a significant increase with the depth size that has to be perceived. The results of the second experiment show that the users presented more accurate stereopsis when the disparity value is small, while increasing the disparity value leads to more imprecise stereopsis. The conducted experimental study illustrates that the use of immersive stereoscopic visualization is considerably useful during Computer Aided Design related activities, enhancing the realism of virtual environments and objects.
[Supported by SOP HRD European Social Fund and Romanian Government under contract POSDRU/89/1.5/S/59323]

The effect of blur on interocular suppression of luminance-modulated and contrast-modulated stimuli

Akash S. Chima, Sarah J. Waugh and Monika A. Formankiewicz

Suppression is a binocular condition in which part of one eye's visual field is inhibited when the two eyes receive dissimilar inputs. Anisometropia produces this dissimilarity through unequal refractive errors. Depth and extent of suppression were measured using a 12° radius circular stimulus split into rings, with each ring's area doubling outwards from the central ring. Observers matched contrast interocularly to measure the depth of suppression (DoS) in eight sectors within eight rings. Different levels of monocular dioptric blur (up to 4D) induced anisometropia. The sector being adjusted was viewed dichoptically on two head-mounted displays, and the observer reported whether it had higher or lower luminance modulation (LM) or contrast modulation (CM) than the surrounding ring, using a one-up one-down staircase. DoS is the threshold relative to baseline in each of the LM and CM conditions. Increasing induced anisometropia revealed no local suppression scotomata within the central 12° of the binocular visual field, but a general increase in suppression depth occurred at significantly different rates: ~15% per dioptre with CM compared to ~6% with LM (p<0.05). Interocular suppression across the visual field thus deepens with increasing induced anisometropia, and this disruption to binocular function has a greater effect on CM than on LM stimuli.
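
A minimal sketch of the kind of one-up one-down staircase described above, which converges on the 50% point of the interocular matching function. The starting level, step size, stopping rule, and the simulated observer are illustrative assumptions, not the authors' settings.

# Hypothetical one-up one-down staircase for an interocular modulation match.
# respond_higher(level) would come from the observer; here it is simulated.
import random

def respond_higher(level, true_match=0.4, noise=0.05):
    """Simulated observer: reports 'higher' when the adjusted sector appears
    higher in modulation than the surround (noisy comparison to a true match)."""
    return level + random.gauss(0, noise) > true_match

level = 0.8          # starting modulation of the adjusted sector (assumed)
step = 0.05          # fixed step size (assumed)
reversals, last_direction = [], None

while len(reversals) < 8:                             # stop after 8 reversals (assumed rule)
    direction = -1 if respond_higher(level) else +1   # one-up one-down rule
    if last_direction is not None and direction != last_direction:
        reversals.append(level)
    level = max(0.0, level + direction * step)
    last_direction = direction

# Average the last reversals to estimate the interocular match.
match_estimate = sum(reversals[-6:]) / len(reversals[-6:])
print(f"Estimated interocular match: {match_estimate:.3f}")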

Effects of central and peripheral interaction of motion in depth on postural sway

Hiroaki Shigemasu

Although it is known that motion in a large visual field induces postural sway, it is unknown how motions in depth defined by binocular cues, presented at different temporal frequencies in the central and peripheral areas, interact. In this study, the effects of this interaction were examined by measuring head movements induced by motion in depth. The stimuli were random-dot patterns that appeared to oscillate in depth at 0.1, 0.25 or 0.5 Hz in the central and peripheral areas. The participants' (N = 7) head movements were measured with a 3D magnetic motion tracker while they observed the stimuli for 90 s in each condition. The power spectrum of the sway in the anteroposterior direction showed peaks at the frequencies of both the central and the peripheral motion in depth, and for motion at 0.1 and 0.5 Hz the power was significantly higher when the motion was displayed in the peripheral than in the central area. These results suggest that postural sway is not induced only by central motion in depth, which drives vergence eye movements. The different effects of central and peripheral displays further suggest that postural sway was not driven by the relative disparity between the central and peripheral regions itself, and that the peripheral region has the greater effect on postural sway.
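
A minimal sketch of the kind of power-spectrum analysis described above: a Welch periodogram of the anteroposterior head position, read out at the stimulus frequencies. The sampling rate, the simulated sway signal, and all variable names are assumptions for illustration only.

# Hypothetical power-spectrum analysis of anteroposterior head sway.
import numpy as np
from scipy.signal import welch

fs = 60.0                          # assumed tracker sampling rate (Hz)
t = np.arange(0, 90, 1 / fs)       # 90-s recording, as in the abstract

# Simulated sway: components at an assumed central (0.25 Hz) and peripheral (0.5 Hz)
# stimulus frequency plus broadband noise.
sway_ap = (0.5 * np.sin(2 * np.pi * 0.25 * t)
           + 1.0 * np.sin(2 * np.pi * 0.5 * t)
           + 0.3 * np.random.randn(t.size))

# Welch periodogram; nperseg chosen so 0.25 and 0.5 Hz are resolvable.
freqs, power = welch(sway_ap, fs=fs, nperseg=4096)

for f_stim in (0.25, 0.5):
    idx = np.argmin(np.abs(freqs - f_stim))
    print(f"power at {f_stim} Hz: {power[idx]:.3f}")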

Fast cyclic stimulus flashing modulates dominance duration in binocular rivalry

Henrikas Vaitkevicius, Rytis Stanikunas, Algimantas Svegzda, Vygandas Vanagas, Remigijus Bliumas and Januss Kulikowski

We present a new test addressing information processing during binocular rivalry. The two competing stimuli (two mutually orthogonal bars oriented at ±45° to the vertical) flickered periodically at rates in the range of 25-125 flashes/s. Using a PC-controlled tachistoscope, the flash duration could be changed in 1-ms steps between 4 and 20 ms. Within a session (lasting 3 min) the flash duration did not change; the duration for the next session was set randomly (17 durations were used). The subject pressed a key at the moment the perceived bar alternated. Factor (PCA) and MDS analyses of the recorded dominance times (DT) showed that the variance of the DTs depends on at least four common factors, which explain about 70% of the total variance in the data. The loadings on the third factor, when plotted against SOA, show DT modulations that are periodic with a cycle of 4-5 ms, demonstrating that flash duration is a relevant parameter modulating the DT. We conclude by discussing a tentative model.
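
A minimal sketch of a PCA of dominance times across flash-duration conditions, reporting variance explained per factor and the third-factor loadings, in the spirit of the analysis described. The placeholder data matrix and its "observations × 17 flash durations" layout are assumptions, not the authors' data.

# Hypothetical PCA of mean dominance times (rows: observations, columns: 17 flash durations).
import numpy as np

rng = np.random.default_rng(0)
dominance_times = rng.normal(2.0, 0.5, size=(40, 17))   # placeholder data (seconds)

# Centre the columns, then take principal components via SVD.
X = dominance_times - dominance_times.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

explained = (S ** 2) / np.sum(S ** 2)
print("variance explained by first four factors:", explained[:4].round(3))

# Loadings of the third factor on the 17 flash durations (to plot against duration).
loadings_factor3 = Vt[2] * S[2] / np.sqrt(X.shape[0] - 1)
print("third-factor loadings:", loadings_factor3.round(3))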

Depth perception of texture with high-resolution 3D display

Toshihiro Bando and Yasunari Sasaki

3D television sets are now readily available, but 3D television is still not very popular. One reason may be insufficient depth information from binocular disparity in current 3D television. A lack of fine depth perception, for example, makes the distant view appear as flat as a stage backdrop and lends unnaturalness to the 3D television scene. Texture is an important feature for depth perception and has been studied extensively from the perspective of “3D shape from texture” [Todd and Thaler, 2010, Journal of Vision, 10(5):17, 1-13]. Conversely, detailed depth information is important for perceiving texture, because many surface textures consist of fine concavities and convexities of the surface, and insufficient depth information should make texture appear poor. In this study, we investigated whether a higher-resolution 3D display reduces this unnaturalness and adds realism to the surface texture of objects in the image, using a super-high-resolution 3D display with about six times the pixel density of ordinary 3D television. The results of an evaluation experiment confirmed that a sufficient binocular-disparity depth cue provides better depth information for perceiving natural surface texture and helps the scene break away from the stage-backdrop appearance.

Visual search in depth: cue combination during natural behaviour

P. George Lovell, Marina Bloj and Julie M. Harris

Threshold studies allow us to measure the human visual system's sensitivity to particular cues; however, this does not tell us whether those cues play a significant role in our everyday interactions. By adopting a visual search approach with naive observers, we use reaction times to explore how cues are weighted during natural behaviour. Stimuli consisted of a rendered scene, lit from above and in front, featuring a shaded rectangular aperture. Within the aperture, circular discs floated in front of a background. Discs varied in their binocular disparity, size and grey-level (albedo). These three cues were scaled so that they were equally discriminable. Participants were asked to report the location of the disc with the most depth. A multiple-regression analysis of reaction times and choices demonstrates that, for the majority (4/5) of participants, the binocular disparity cue is the major driver of the observer's choice of disc and of the speed of their response. When cues are matched for visibility, participants ignore the size cue and assign larger weight to the disparity cue than to the albedo cue. With a naturalistic task, the visual system favours disparity over other depth cues, despite matched cue reliabilities. This behaviour is not accounted for by standard cue-combination models.
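
A minimal sketch of a multiple regression of reaction times on the three scaled cue values, of the kind described above. The trial data are placeholders, the coefficients are invented for illustration, and ordinary least squares via numpy is used rather than any particular analysis package.

# Hypothetical multiple regression: how much do disparity, size and albedo cues
# predict reaction time on each trial?
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Placeholder cue values (already scaled to equal discriminability) and simulated RTs.
disparity = rng.normal(size=n_trials)
size = rng.normal(size=n_trials)
albedo = rng.normal(size=n_trials)
rt = 0.8 - 0.15 * disparity - 0.02 * size - 0.05 * albedo + rng.normal(0, 0.1, n_trials)

# Design matrix with an intercept column; solve by ordinary least squares.
X = np.column_stack([np.ones(n_trials), disparity, size, albedo])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)

for name, b in zip(["intercept", "disparity", "size", "albedo"], beta):
    print(f"{name}: {b:.3f}")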

Implied gaze direction in Japanese Ukiyoe print: An event-related fMRI study

Naoyuki Osaka, Daisuke Matsuyoshi and Mariko Osaka

One of the issues in the neuroaesthetics of visual art is how our brain reads the mind of portrayed people using implied eye direction (IED). Artists have developed various cues for representing IED. Direct/averted gaze suggests attentional concentration, while divided visual direction (DVD: both eyes fantastically squinted) implies distraction of mind. Hokusai (the Ukiyoe painter) made great progress in representing both IED and implied motion (Osaka et al, 2009, Neuroreport, 21, 264-267). However, the effect of DVD on face recognition has remained unresolved. Artists have tried to represent IED by direct/averted gaze, whereas Hokusai, using DVD, succeeded in showing a player's mind dynamically by changing the positions of the pupils. To investigate this issue, we used a schematic illustration of a face in three conditions: 1) both eyes averted, 2) DVD (the eye directions divided: left eye to the left and right eye to the right), and 3) a control in which the pupils were not drawn (no implied gaze direction). Results of fMRI scans (11 subjects) obtained while subjects watched the illustrations showed that DVD and averted gaze significantly activated the IPL and SPL, respectively, whereas the illustration that did not imply eye direction did not activate these areas. We conclude that the parietal lobule, along with the STS, is a critical region for understanding IED (DVD).