Representation of Egomotion in the Brain and Its Relation to Vection

Maiko Uesaki and Hiroshi Ashida

Representation and processing of visual cues to egomotion have been associated with visual areas MT+ and V6; multimodal areas ventral intraparietal area (VIP), cingulate sulcus visual area (CSv) and precuneus (Pc); and vestibular areas parieto-insular vestibular cortex (PIVC) and putative area 2v (p2v) [Cardin & Smith, 2010, Cerebral Cortex, 20, 1964-1973]. This study assessed whether optic flow is encoded differently depending on the magnitude of vection, using 3T fMRI. Two types of optic-flow stimulus were presented in blocks: one consisted of dots moving through spiral space at a speed scaled to eccentricity, whilst the other consisted of dots moving through the same spiral space at a constant speed. The former optic flow was consistent with egomotion in terms of dot speed and therefore induced stronger vection. In contrast, the latter was inconsistent with egomotion and induced weaker or no vection. All seven areas responded well to optic flow. Bilateral areas CSv, VIP and p2v and right Pc responded more strongly to the scaled optic flow than to the unscaled. The results suggest that when the speed gradient of visual stimulation is consistent with egomotion, activation in the multimodal and vestibular areas is greater, which may reflect vection.

Making sense of noisy visual input: interactions between lateral and medial ventral brain regions associated with object recognition

Barbara Nordhjem, Anne Marthe Meppelink, Branislava Ćurčić-Blake, Remco Renken, Bauke M. de Jong, Klaus L. Leenders, Teus van Laar and Frans W. Cornelissen

The ventral visual pathway has been implicated in conscious recognition of objects. Earlier studies have pointed towards the lateral section of the ventral cortex as an essential region for object recognition, but recent work also implies a function for more medial sections. The interactions between these different sections within the ventral pathway are not well understood. Data were collected in an fMRI study in which thirteen subjects recognized images of objects and animals that were gradually revealed from noise [Meppelink et al., 2009, Brain, 132(11), 2980-2993]. Here we investigated effective connectivity within the ventral pathway with dynamic causal modeling (DCM) and Bayesian model selection. We defined bilateral intrinsic connections in a network comprising the primary visual cortex (V1), lingual gyrus (LG) and the lateral occipital cortex (LO), and studied the modulatory effect of object recognition on this network. We found that object recognition modulated both feed-forward connectivity from V1 to LG and LO and bilateral connectivity between LG and LO. Our results suggest that object recognition is the result of an interplay between areas in the medial and lateral sections of the ventral stream.

Processing of relative visual location in the Superior Parietal Lobule (SPL)

Laura A. Inman and David T. Field

Data are presented which suggest that part of the SPL is selectively involved in processing the relative locations of visual stimuli in the scene, and how these locations change over time. The processing of the relative visual locations of surrounding objects is important during self-motion for perceiving one's position in the scene and detecting a course of travel free of collision. The involvement of this SPL sub-region in self-motion processing has been demonstrated by previous fMRI studies [Field et al., 2007, Journal of Neuroscience, 27(30), 8002-8010; Peuskens et al., 2001, Journal of Neuroscience, 21(7), 2451-2461]. Here, using fMRI, we show that the same SPL region can be activated by visual stimuli and tasks that do not imply self-motion. The region activates selectively when action depends on processing the relative locations of objects. This activation occurs regardless of whether motion accompanies changes of object location, and regardless of whether changes of location are predictable, ruling out two other possible driving forces behind the activation. We thus present the best evidence to date concerning what information is processed in this region and show that its role is more general and not specific to self-motion processing, as was previously proposed.

Action Understanding within and outside the motor system

Sandra Petris and Angelika Lingnau

The human mirror neuron system (hMNS) has been suggested to play a key role in the process of action understanding by means of a matching mechanism between observed and executed actions [Rizzolatti & Sinigaglia, 2010, Nature Reviews Neuroscience, 11(4), 264-274]. Here we aimed to determine to what degree areas outside the hMNS are involved in action understanding. We presented participants with point-light displays depicting human actions and engaged them in tasks that required identifying the goal or the effector that constitutes an action. Our paradigm revealed a stronger blood-oxygen-level-dependent (BOLD) signal during the Goal Task in comparison to the Effector Task not only within the hMNS, but also in the middle temporal gyrus (MTG) and the anterior ventrolateral prefrontal cortex (aVLPFC). This effect was modulated by task difficulty, with the MTG being sensitive to the difference between the Goal and the Effector Task only when actions were easy to recognize, whereas aVLPFC and inferior frontal gyrus (IFG) were sensitive to this difference even when the task was difficult. Our results suggest an important role for MTG and aVLPFC in action understanding and thus provide important constraints on the assertion that action understanding is based on a direct matching mechanism.

Activation of parahippocampal place area when looking at geographic maps: an fMRI study

Renata Rozovskaya and Ekaterina Pechenkova

An extensive literature shows that there is an area within the parahippocampal gyrus, labeled the parahippocampal place area (PPA), which specifically responds to images of natural scenes such as landscapes or cityscapes. Several studies have also demonstrated PPA activation when subjects perform spatial map-related tasks, including mental map rotation and navigation [Lobben et al., 2005, Proceedings of the 22nd International Cartographic Conference]. However, it is still unclear whether this area responds to geographic maps or schematic images of real places as such, or is recruited by the specific spatial task. In our fMRI study, participants viewed series of maps and scrambled maps presented at a rate of about one image per second while performing a one-back task. Each subject also completed a PPA functional localizer (a one-back task with images of houses and faces). The ROI analysis revealed significant PPA activation for maps vs. scrambled maps at the group level and in 11 out of 16 individual participants. Thus, the study has shown PPA activation for geographic maps even in the absence of a specific spatial map-related task, supporting the idea that abstract depictions of land regions elicit activation within brain areas involved in natural scene perception. Research supported by RFBR grant # 10-07-00670-a.

Neural bases for individual differences in the experience of time

Jason Tipples, Patrick Johnston and Victoria Brattan

How do we perceive time when there is no sensory modality dedicated to time? A dominant idea is that humans possess a pacemaker-accumulator clock that produces a number of pulses that increases with the experienced duration. Here, we test the idea that individual differences in the rate of this device predict increases in neural activation in the areas that subserve time perception. Seventeen participants completed both a control task and a temporal bisection task in a blocked fMRI design. In keeping with previous research, we recorded increased activation in a network of regions typically active during time perception, including the right caudate, insula, supplementary motor area (SMA) and putamen. Furthermore, in keeping with the idea of a pacemaker-accumulator clock, increases in the rate of the pacemaker clock (as indexed by lower bisection points) were associated with increased activation in specific neural areas (SMA and right orbitofrontal cortex) that were active during time perception.
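The bisection point used here as an index of clock rate is the stimulus duration at which participants respond "long" on half of the trials. A minimal sketch of estimating it, using hypothetical response proportions and simple linear interpolation between the two durations straddling 50% (the study itself does not specify its estimation method):

```python
import numpy as np

def bisection_point(durations, p_long):
    """Duration at which the proportion of 'long' responses crosses 0.5,
    by linear interpolation between the two straddling data points."""
    durations = np.asarray(durations, dtype=float)
    p_long = np.asarray(p_long, dtype=float)
    i = int(np.searchsorted(p_long, 0.5))  # first point at or above 0.5
    if i == 0:
        return float(durations[0])
    if i == len(p_long):
        return float(durations[-1])
    d0, d1 = durations[i - 1], durations[i]
    p0, p1 = p_long[i - 1], p_long[i]
    return float(d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0))

# Hypothetical data: proportion of 'long' responses per duration (ms)
durs = [400, 500, 600, 700, 800]
props = [0.05, 0.20, 0.55, 0.85, 0.95]
print(bisection_point(durs, props))
```

A lower bisection point means shorter durations are already judged "long", consistent with a faster pacemaker accumulating more pulses per unit time.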

Classification of Material Properties

Elisabeth Baumgartner, Christiane B. Wiebel, Richard H. A. H. Jacobs and Karl R. Gegenfurtner

The representation of different materials has been investigated through brain activation [Hiramatsu et al., 2011, NeuroImage, 57(2), 482-494] and machine classification [Liu et al., 2010, IEEE CVPR, 239-246]. Here we explored whether individual properties of materials could be classified as well. Subjects rated images of materials on different material properties: glossiness, transparency, colourfulness, roughness, hardness, coldness, fragility, naturalness and prettiness. These images were then analyzed according to statistical parameters [Portilla & Simoncelli, 2000, International Journal of Computer Vision, 40(1), 49-71]. We applied a linear multivariate classifier to the image statistics and could successfully discriminate images with high or low ratings. Classification accuracy was between 75% and 96% correct for the different properties. Taking only pixel statistics into account, performance was still clearly above chance for several properties. To test whether property information contained in the image statistics would be reflected in brain activation patterns, three subjects were scanned with fMRI while viewing material images. A classifier was then applied to the voxel time series in ventral visual cortex. Again, we found classification accuracy to be significantly better than chance. These results demonstrate that individual material properties can be classified based both on image statistics and fMRI activation patterns.
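The core analysis step, a linear classifier separating high- from low-rated images on the basis of feature vectors, can be sketched as follows. The data are entirely synthetic stand-ins for the Portilla-Simoncelli statistics, and the least-squares classifier is a simple illustrative choice, not necessarily the one used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-image texture statistics: images with
# high vs. low ratings on some property differ in their mean feature values.
n, d = 40, 10
X_high = rng.normal(loc=1.0, size=(n, d))
X_low = rng.normal(loc=-1.0, size=(n, d))
X = np.vstack([X_high, X_low])
y = np.concatenate([np.ones(n), -np.ones(n)])  # +1 = high rating, -1 = low

# Linear classifier fit by least squares: learn weights w such that
# sign(X·w + b) predicts the rating class.
Xb = np.hstack([X, np.ones((2 * n, 1))])       # append bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = np.sign(Xb @ w)
accuracy = np.mean(pred == y)
print(accuracy)
```

With well-separated classes like these, training accuracy is near ceiling; the study's 75-96% figures reflect the harder case of real image statistics and held-out data.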

Material categories in the brain

Richard H. A. H. Jacobs, Elisabeth Baumgartner and Karl R. Gegenfurtner

Previous fMRI studies indicated that the parahippocampal gyrus or medial visual cortex is involved in processing texture and material properties. These studies employed textures on rendered objects. Here, we examined brain activation to close-ups of material surfaces. In the first study, observers classified images of wood, stone, metal, and fabric into these four categories. We classified the voxel patterns occurring in response to the pictures to predict the material category of the picture. Both region-of-interest and whole-brain searchlight analyses confirmed an earlier finding of material coding in the early visual regions, with accuracy declining as one moves anteriorly. In the second study, we used an adaptation paradigm. Participants viewed images of wood, stone, metal, and fabric, presented in blocks with either images of different material categories (no adaptation) or images of different samples from the same material category (material adaptation). As a baseline, blocks with the same material sample were presented (full adaptation). This time, material adaptation effects were found mainly in the parahippocampal gyrus. Our results generalize earlier findings to photographs of textured surfaces. Our findings suggest that the parahippocampal gyrus might not be directly involved in the categorization of materials.

Representation of changing heading in Cingulate Sulcus Visual Area (CSv)

Michele Furlan, John Wann and Andrew T. Smith

The processing of optic flow to extract information about heading direction is fundamental for many species. Several human brain regions (MST, VIP, V6, CSv) have been implicated in encoding information about instantaneous heading, but heading typically varies during egomotion, and whether change of heading is explicitly encoded is not known. We used multi-voxel pattern analysis (MVPA) to test for the existence of neurons that respond selectively to specific directions of change of heading. We used 3T fMRI to record the BOLD response in seven participants engaged in a demanding central task while optic flow was presented. Changing heading was simulated by smoothly moving the focus of expansion of a dot pattern either from left to right (category 1) or right to left (category 2). Local motion was balanced. Part of the data was used to train a linear support vector machine classifier; the remainder was used for testing. The results showed some evidence for sensitivity to direction of change in MST and VIP. However, the most striking result was in CSv, which was strongly sensitive to direction of heading change, showing up to 90% decoding accuracy. This suggests that area CSv may have a special role in monitoring heading change.
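The MVPA logic above, training a linear decoder on part of the voxel patterns and testing on the held-out remainder, can be sketched with simulated data. Everything here is hypothetical (voxel counts, signal strength, trial numbers), and a nearest-centroid linear decoder stands in for the linear support vector machine used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated voxel patterns for two heading-change directions: each trial
# is a 50-voxel pattern containing a weak direction-specific signal
# plus independent noise.
n_train, n_test, n_vox = 30, 20, 50
signal = rng.normal(size=n_vox)  # hypothetical direction-selective pattern

def make_trials(n, label):
    return label * 0.5 * signal + rng.normal(size=(n, n_vox))

X_train = np.vstack([make_trials(n_train, +1), make_trials(n_train, -1)])
y_train = np.concatenate([np.ones(n_train), -np.ones(n_train)])
X_test = np.vstack([make_trials(n_test, +1), make_trials(n_test, -1)])
y_test = np.concatenate([np.ones(n_test), -np.ones(n_test)])

# Nearest-centroid linear decoder: classify each held-out pattern by
# which training-class mean it is closer to (a linear decision rule).
mu_pos = X_train[y_train == 1].mean(axis=0)
mu_neg = X_train[y_train == -1].mean(axis=0)
w = mu_pos - mu_neg
b = -0.5 * (mu_pos + mu_neg) @ w
pred = np.sign(X_test @ w + b)
acc = np.mean(pred == y_test)
print(acc)
```

Decoding accuracy above chance on the held-out trials is the evidence that the region's voxel patterns carry information about direction of heading change.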