Event Abstract

Neural mechanisms of visual motion integration and ego-motion estimation - a modelling investigation

Problem
During locomotion, temporally varying light patterns of the ambient optic array impinge on the retina and generate the so-called optical flow. Different forms of self-motion produce characteristic global patterns of optical flow (Gibson, The perception of the visual world, 1950). How does the visual processing of optical flow control steering and spatial navigation through a complex environment? Neural analysis of flow is mainly realized along the dorsal pathway of visual cortex, in areas V1, MT, and MST, particularly its dorsal subdivision MSTd. Yet how cells in these different areas interact, and how they interact with form representations in the ventral pathway, remains an open research question. In particular, it is unclear how the visual processes adapt and how they generate incrementally more complex representations as a function of navigational steering behaviours.

Methods
We present a dynamic model of primate motion analysis that solves navigational tasks, and we discuss how more complex and distributed representations are recruited when more detailed information is required, e.g., to avoid moving obstacles in cluttered scenes. Unlike previous approaches, our model combines mechanisms of optical flow detection and integration with mechanisms of ego-motion estimation. The model consists of model areas V1, MT, and MST, which simultaneously integrate and segregate motion into different components, and is augmented by a heading map fed by MSTd responses. Initial processing detects raw motion from spatio-temporal correlations using a bank of Gabor filters. These responses are integrated in area MT to build a distributed velocity representation of speed and direction (Rodman and Albright, Vision Research, 27, 1987). Local center-surround interactions allow the representation of multiple motions at a single spatial location in MT. In model MSTd, activity is integrated by cells over large spatial regions of the visual field (Duffy and Wurtz, J. of Neurophysiology, 65, 1991; Graziano et al., J. of Neuroscience, 14, 1994). In contrast to previous approaches, the model selectively integrates different speed channels with similar direction selectivity to robustly estimate heading direction independent of scene depth (Perrone and Stone, Vision Research, 34, 1994). Conversely, different speeds are integrated to segregate different object motions, unlike the approach of Mingolla et al. (J. of Vision [Abstracts], 8, 2008).
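A minimal sketch may make the initial detection stage concrete: a quadrature pair of spatio-temporal Gabor filters yields direction- and speed-tuned motion energy, in the spirit of the V1 stage described above, which is then pooled over space as a stand-in for MT integration. All parameter values, filter sizes, and the drifting-grating stimulus below are illustrative assumptions, not the settings of the published model.

import numpy as np
from scipy.ndimage import convolve

def st_gabor(theta, speed, size=9, taps=5, f_s=0.15, sigma_s=2.5, sigma_t=1.5):
    # Quadrature pair of spatio-temporal Gabors tuned to direction theta
    # (radians) and speed (pixels/frame); returns (even, odd) kernels of
    # shape (taps, size, size), i.e. (t, y, x). All parameters are assumed.
    r = size // 2
    t = np.arange(taps) - taps // 2
    tt, yy, xx = np.meshgrid(t, np.arange(-r, r + 1), np.arange(-r, r + 1),
                             indexing="ij")
    u = xx * np.cos(theta) + yy * np.sin(theta)   # axis along preferred direction
    env = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2) - tt**2 / (2 * sigma_t**2))
    phase = 2 * np.pi * f_s * (u - speed * tt)    # carrier drifting at 'speed'
    return env * np.cos(phase), env * np.sin(phase)

def motion_energy(seq, theta, speed):
    # Phase-invariant motion energy: squared responses of the quadrature pair.
    even, odd = st_gabor(theta, speed)
    return (convolve(seq, even, mode="nearest")**2
            + convolve(seq, odd, mode="nearest")**2)

# Toy stimulus: a grating drifting rightward at 1 pixel/frame.
T, H, W = 9, 32, 32
t, y, x = np.meshgrid(np.arange(T), np.arange(H), np.arange(W), indexing="ij")
seq = np.sin(2 * np.pi * 0.15 * (x - 1.0 * t))

for th in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
    e = motion_energy(seq, th, speed=1.0)[T // 2].mean()  # MT-like spatial pooling
    print(f"direction {np.degrees(th):5.1f} deg: mean energy {e:10.2f}")
# The rightward (0 deg) channel dominates, as expected.

Pooling such energies over space, and across speed channels sharing a direction preference, corresponds to the MT and MSTd integration stages described above.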

Results and Conclusion
Simulations demonstrate how the model performs in navigational tasks in which a simulated agent approaches a target. They show how mechanisms of motion detection in model V1 and MT integrate and segregate distinct object motions and their directions. Systematic errors in heading estimation occur when translational observer motion is overlaid by rotations or when independently moving objects disturb the motion pattern (Royden et al., Vision Research, 34, 1994). The motion of independent objects is signalled by the responses of motion contrast cells. The model has been probed with synthetic as well as real sequences, such as those acquired by a camera mounted on a moving car in different traffic scenarios. Future work will focus on how the distributed representations influence decision-making to control behavioural strategies during locomotion.
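To illustrate the heading-map idea and the rotation-induced bias mentioned above, the following sketch estimates the focus of expansion (FOE) by matching a flow field against radial-expansion templates. The scene layout, template grid, and rotation magnitude are assumptions chosen for illustration, not the model's actual mechanisms.

import numpy as np

H, W = 33, 33
ys, xs = np.mgrid[-16:17, -16:17].astype(float)

def translational_flow(foe_x, foe_y, inv_depth):
    # Flow of a translating observer: expansion away from the FOE,
    # scaled by the inverse depth at each pixel.
    return (xs - foe_x) * inv_depth, (ys - foe_y) * inv_depth

def heading_scores(u, v, candidates):
    # Score each candidate FOE by the cosine between the observed flow and
    # a radial template expanding from that candidate. Matching directions
    # only makes the estimate largely independent of scene depth.
    norm = np.hypot(u, v) + 1e-9
    scores = []
    for cx, cy in candidates:
        tx, ty = xs - cx, ys - cy
        tn = np.hypot(tx, ty) + 1e-9
        scores.append(np.sum((u * tx + v * ty) / (norm * tn)))
    return np.array(scores)

rng = np.random.default_rng(0)
inv_depth = rng.uniform(0.02, 0.1, size=(H, W))  # cluttered scene, random depths
u, v = translational_flow(0.0, 0.0, inv_depth)   # true heading at (0, 0)

cands = [(cx, 0.0) for cx in np.linspace(-12, 12, 25)]
print("translation only:", cands[int(np.argmax(heading_scores(u, v, cands)))])

# A superimposed yaw rotation adds, to first order, a uniform horizontal
# flow component; the estimated FOE shifts, a systematic heading error.
print("with yaw rotation:", cands[int(np.argmax(heading_scores(u - 0.3, v, cands)))])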

Supported by EU-Project 027198 (DiM) and the Graduate School of the University of Ulm.

Conference: Computational and systems neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar, 2009.

Presentation Type: Poster Presentation

Topic: Poster Presentations

Citation: (2009). Neural mechanisms of visual motion integration and ego-motion estimation - a modelling investigation. Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.090

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 02 Feb 2009; Published Online: 02 Feb 2009.