Decision making and perceptual learning for speed discrimination
Problem Perceptual learning in visual tasks can improve the performance of decision making (Dosher et al., Psychological Review, 112, 2005). Instead of static inputs we investigate motion stimuli, in particular perceptual learning for motion speed discrimination and the transfer of improved performance between different motion patterns. We use configurations of moving random dot patterns displayed simultaneously in four quadrants around a central fixation point: one quadrant contains a coherent motion pattern (the target), while the others contain random motion. In a 2AFC task with two successive displays, a decision must be made about which presentation contained the faster coherent stimulus speed. Through training with one target pattern, subjects can significantly improve their discrimination performance. The goal is to evaluate whether performance remains improved when the coherent motion pattern is changed.
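As an illustration of this trial structure (not the authors' stimulus code), the following Python sketch parameterizes one such 2AFC trial; dot counts, speed values, and the noise ranges are illustrative assumptions.

```python
# Minimal sketch of one 2AFC trial with four-quadrant random dot displays.
# All parameter values are illustrative assumptions, not taken from the abstract.
import numpy as np

rng = np.random.default_rng(0)

def make_quadrant(n_dots=100, coherent=False, direction=0.0, speed=1.0):
    """Per-dot velocity vectors for one quadrant: coherent quadrants share one
    direction/speed (the target); the others get random directions and speeds."""
    if coherent:
        angles = np.full(n_dots, direction)
        speeds = np.full(n_dots, speed)
    else:
        angles = rng.uniform(0.0, 2.0 * np.pi, n_dots)
        speeds = rng.uniform(0.5, 2.0, n_dots)
    return np.stack([speeds * np.cos(angles), speeds * np.sin(angles)], axis=1)

def make_interval(target_quadrant, target_speed, target_direction=0.0):
    """One presentation interval: 4 quadrants, one containing the coherent target."""
    return [
        make_quadrant(coherent=(q == target_quadrant),
                      direction=target_direction, speed=target_speed)
        for q in range(4)
    ]

# A single 2AFC trial: two successive displays differing in target speed;
# the task is to report which interval contained the faster coherent motion.
speeds = (4.0, 4.5)                      # deg/s, illustrative values
target_quadrant = rng.integers(4)        # same target location in both intervals
trial = [make_interval(target_quadrant, s) for s in speeds]
correct_choice = int(np.argmax(speeds))  # 0 = first interval, 1 = second
```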
Methods We propose a neural model of motion perception consisting of a hierarchy of areas that represents the main cortical processing stages along the dorsal pathway, namely V1, MT, and MSTd (Bayerl & Neumann, Neural Computation, 16, 2004; Ringbauer et al., LNCS 4669, 2007). Optical flow is detected in model area V1 and integrated in area MT by speed- and direction-sensitive neurons. Global motion patterns are subsequently integrated spatially by model MSTd cells, which are sensitive to rotational, radial, and laminar motion patterns (Graziano et al., J. of Neuroscience, 14, 1994). Model MSTd cells project to dorso-lateral area LIP, where cells temporally accumulate responses to judge and discriminate different motion configurations (Hanks et al., Nature Neuroscience, 9, 2006). In our model, speed activity is integrated over all directions, generating direction-independent speed profiles for the different quadrants. These responses are spatially integrated over the two presentation phases with their different speed parameterizations. Activities of model MSTd pattern cells identify the target quadrant together with a confidence measure. Speed-selective responses are integrated separately with a bias toward high speeds and are fed forward to decision units in model LIP, where a recurrent competitive field of neuron pools tuned to different speeds competes for the decision (Grossberg & Pilly, Vision Research, 48, 2008). The decision depends on both the input stimulus and the strength of mutual inhibition between decision cells, which is controlled by a weighted difference of the two speed profiles. These weights are adapted trial by trial to the characteristics of the speed differences, supporting and enhancing the decision-making process.
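The sketch below illustrates the decision stage under strong simplifying assumptions: two decision pools (one per 2AFC interval) accumulate evidence from direction-independent speed profiles and compete via mutual inhibition whose strength is set by a weighted difference of the two profiles, with a simple trial-by-trial weight update. The Gaussian speed tuning, the delta-rule adaptation, and all numerical values are illustrative choices, not the published model equations.

```python
# Simplified decision stage: recurrent competitive field with two pools,
# inhibition scaled by a weighted difference of the two speed profiles,
# and a hypothetical trial-by-trial weight adaptation.
import numpy as np

rng = np.random.default_rng(1)
n_speed_channels = 8
w = np.ones(n_speed_channels) / n_speed_channels   # learned channel weights

def speed_profile(target_speed, noise=0.05):
    """Direction-independent speed profile: Gaussian tuning over speed channels."""
    prefs = np.linspace(1.0, 8.0, n_speed_channels)  # preferred speeds (deg/s)
    resp = np.exp(-0.5 * ((prefs - target_speed) / 1.0) ** 2)
    return resp + noise * rng.standard_normal(n_speed_channels)

def decide(p1, p2, w, dt=0.01, threshold=1.0, max_steps=2000):
    """Two competing pools; returns (choice, reaction time in steps)."""
    inhibition = abs(w @ (p1 - p2))                  # weighted profile difference
    bias = np.linspace(0.5, 1.5, n_speed_channels)   # bias toward high speeds
    drive = np.array([(bias * p1).sum(), (bias * p2).sum()])
    a = np.zeros(2)                                  # pool activities
    for t in range(max_steps):
        a += dt * (-a + drive - inhibition * a[::-1])  # mutual inhibition
        a = np.clip(a, 0.0, None)
        if a.max() >= threshold:
            return int(np.argmax(a)), t
    return int(np.argmax(a)), max_steps

def adapt(w, p1, p2, choice, correct, lr=0.05):
    """Trial-by-trial adaptation: strengthen channels whose signed speed
    difference supported the correct decision (a simple delta rule)."""
    signed_diff = (p1 - p2) * (1.0 if choice == correct else -1.0)
    w = np.clip(w + lr * signed_diff, 0.0, None)
    return w / w.sum()

# Example trial: the faster target appears in the first interval.
p1, p2 = speed_profile(5.0), speed_profile(4.5)
choice, rt = decide(p1, p2, w)
w = adapt(w, p1, p2, choice, correct=0)
print(choice, rt)
```

Under this reading, learning sharpens the weighting of informative speed channels rather than any particular motion pattern, which is one way the observed transfer across patterns could arise.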
Results and Conclusion After several training trials the model shows decreased reaction times. Small speed differences are discriminated more accurately even when the target pattern is exchanged for one that was not probed during training. The model thus predicts that improved speed discrimination performance after perceptual learning with a selected motion pattern transfers to other patterns. A psychophysical experiment with human participants using the same visual stimuli is currently being developed to compare the resulting data with the model predictions and to adapt the model parameters in accordance with the experimental findings.
Funding: BMBF 01GW0763 (BPPL); Graduate School, University of Ulm
Conference: Computational and systems neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar 2009.
Presentation Type: Poster Presentation
Topic: Poster Presentations
Citation: (2009). Decision making and perceptual learning for speed discrimination. Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.342
Received: 10 Feb 2009; Published Online: 10 Feb 2009.