Event Abstract

Area MT pattern motion selectivity by integrating 1D and 2D motion features from V1 - a neural model

Problem: The neural mechanisms of detecting, integrating, and disambiguating visual motion remain largely a puzzle. While many neurons in area V1 show coarse direction and speed selectivity, signaling component motion direction, evidence suggests that end-stop cells in V1 compute the true motion direction of 2D image features (Pack et al., Neuron 39, 2003). In area MT, neurons have been found that are selective to speed and respond to the pattern motion direction of plaids composed of differently oriented gratings (Movshon et al., Pattern Recognition Mechanisms, 1985). For elongated bars, MT neurons resolve the so-called aperture problem and signal the correct object motion after a temporal course of disambiguation (Pack & Born, Nature 409, 2001). How pattern motion selectivity is constructed from component-selective input remains a topic of ongoing research. Different experiments suggest that the visual system might use computational strategies of integrating different motion directions or of selecting localized features (Pack & Born, The Senses: A Comprehensive Reference, 2008).
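One classical account of how pattern motion could be computed from component gratings is the intersection of constraints (IOC): each grating's normal flow confines the pattern velocity to a line, and two non-parallel constraint lines intersect in a unique velocity. A minimal numerical illustration follows; the grating orientations and normal speeds are arbitrary example values, not stimuli from the experiments cited above:

```python
import numpy as np

# Each grating constrains the pattern velocity v via n_i . v = s_i,
# where n_i is the grating's unit normal and s_i its normal speed.
n1 = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])    # normal at +60 deg
n2 = np.array([np.cos(-np.pi / 3), np.sin(-np.pi / 3)])  # normal at -60 deg
s1, s2 = 1.0, 1.0                                        # normal speeds

# Two non-parallel constraints determine the pattern velocity uniquely.
v = np.linalg.solve(np.stack([n1, n2]), np.array([s1, s2]))
print(v)  # [2. 0.]: the plaid moves rightward, faster than either component
```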

Methods: We propose a neurodynamical model of motion integration in areas V1, MT, and MSTv that unifies the integrationist and selectionist concepts. Normal flow responses are initially detected by model V1 simple/complex cells, while inhibitory interactions between cells in different V1 laminae generate local end-stop responses after a temporal delay. These responses are integrated over a short temporal episode and enhanced to generate strong direction signals. MT neurons then receive input from V1 complex and end-stop neurons (size ratio 1:5, direction tuning ±40°). The integration of component-selective responses leads to weak pattern direction selectivity in MT, which is further sharpened by the input from localized end-stop neurons. Feedback from MT to V1 enhances feature-selective neurons and slightly reduces normal flow responses near localized features. Feedforward and feedback connections between MT and MSTv (size ratio 1:1.25) disambiguate MT responses along extended outline boundaries.
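To make the architecture concrete, the following is a minimal sketch of one relaxation step of the proposed V1-MT-MSTv loop. All array shapes, pooling widths, gains, and the specific rate dynamics are illustrative assumptions for exposition, not the published model equations:

```python
import numpy as np

def gauss_pool(x, sigma):
    """Spatial pooling with a normalized Gaussian kernel along axis 0 (space)."""
    r = np.arange(-3 * sigma, 3 * sigma + 1)
    k = np.exp(-0.5 * (r / sigma) ** 2)
    k /= k.sum()
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 0, x)

def step(v1_complex, v1_endstop, mt, mstv, dt=0.1, w_es=2.0, g_fb=0.5):
    """One illustrative iteration; arrays have shape (space, directions).

    v1_complex : normal-flow (1D, component) responses
    v1_endstop : delayed, localized 2D feature responses
    mt, mstv   : model MT and MSTv activities
    All kernels and gains are placeholder assumptions.
    """
    # MT pools component and end-stop V1 input over a larger receptive
    # field; the stronger end-stop weight sharpens the weak component-driven
    # pattern selectivity toward the true motion direction.
    ff = gauss_pool(v1_complex + w_es * v1_endstop, sigma=5)
    mt = mt + dt * (-mt + ff * (1.0 + gauss_pool(mstv, sigma=2)))
    # MSTv integrates MT and feeds back, disambiguating responses along
    # extended outline boundaries over successive iterations.
    mstv = mstv + dt * (-mstv + gauss_pool(mt, sigma=6))
    # Modulatory MT -> V1 feedback enhances feature-selective neurons.
    v1_endstop = v1_endstop * (1.0 + g_fb * mt)
    return v1_endstop, mt, mstv
```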

Results and Conclusion: To investigate the aperture problem, we tested the model with a vertically aligned bar moving in a diagonal direction. Consistent with neurophysiological findings (Pack & Born, Nature 409, 2001), after a few iterations MT neurons showed sharp speed and direction tuning for the correct velocity, while the tuning of V1 neurons changed only slightly. Originating from 2D signals at the corners, the correct flow propagated along the stimulus in MT with each iteration, and propagation time increased with bar length (Born et al., Prog. Brain Res. 140, 2002). For a plaid stimulus, model MT neurons were tuned to pattern motion after some iterations, while V1 neurons indicated normal flow responses along the plaid components, except for end-stop neurons located at the crossings. Hence, although V1 neurons already provided some of the correct 2D cues, their overall tuning remained coarse in speed and direction.
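The aperture problem driving this disambiguation can be stated compactly: a motion detector viewing a straight edge through a small aperture recovers only the velocity component normal to the edge. A short worked example for the diagonal-bar stimulus above; the specific speed is an arbitrary example value:

```python
import numpy as np

v_true = np.array([1.0, 1.0]) / np.sqrt(2)  # true diagonal velocity, unit speed
n = np.array([1.0, 0.0])                    # edge normal of a vertical bar

# Through a local aperture only the normal component is measurable, so
# early V1 responses signal horizontal motion at reduced speed.
v_normal = (v_true @ n) * n
print(v_normal, np.linalg.norm(v_normal))   # [0.7071 0.] 0.7071

# Only at the bar endpoints (2D features) is the full velocity available,
# which is why end-stop signals seed the correct flow that then propagates
# along the bar in MT.
```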
In conclusion, the integration of 1D and 2D V1 motion features in MT, as proposed by our model, suggests that MT pattern selectivity arises from both component integration and feature selection. The model explains findings of various neurophysiological experiments and unifies different modeling approaches into one framework.

Supported by EU-project 027198 (Decisions-in-Motion)

Conference: Computational and systems neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar, 2009.

Presentation Type: Poster Presentation

Topic: Poster Presentations

Citation: (2009). Area MT pattern motion selectivity by integrating 1D and 2D motion features from V1 - a neural model. Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.163

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 04 Feb 2009; Published Online: 04 Feb 2009.