Systems level model integration and embodiment: a case study with gaze control
1 University of Sheffield, Department of Psychology, United Kingdom
2 University of Manchester, School of Electrical and Electronic Engineering, United Kingdom
Many computational models in neuroscience deal with individual microcircuits or brain nuclei. These models are invaluable and are key building blocks in developing an understanding of the brain. However, a richer understanding will ensue if we can integrate these disparate models into unified systems in which we can investigate complete sensori-motor competences and make contact with behavioural data. Moreover, we advocate an approach in which we do not ‘reinvent the wheel’ by developing each component ourselves; rather, we reuse existing, published models where these are adequate for purpose.

There are, however, several issues which must be addressed in such a programme. First, the input signals for each component must be compatible with the outputs of its afferent structures. A key requirement here is that the semantics of the signal representations must be the same, which may have implications for unifying model components drawn from different theoretical viewpoints. Further, in trying to model a complete sensori-motor competence we may require many more processes than we can reasonably model in a biologically plausible way. In this case, we may choose to implement those computations which are less central to our current interest in a more ‘engineered’, rather than biologically plausible, form. The strongest test of a model is obtained by its embodiment in robotic form, and this is the approach we take here. Embodiment can, however, raise its own issues, not least of which is real-time operation.

Here we apply the methodology described above in a large-scale, embodied model of top-down and bottom-up gaze control in the primate visual system. The creation of the model centres on a core set of biomimetic components, which include both existing, published models and new models where the existing ones were deemed unsuitable for the purpose. Around this core are ‘engineered’, phenomenological components.
The entire model is embodied in robotic hardware comprising a pan-tilt head and two cameras. A key feature of our approach is a modular software infrastructure, BRAHMS (Mitchinson et al. 2010), which binds the disparate processes and hardware together.

Two versions of the model have been developed, differing in the objects they recognise. One uses two coloured flags, and can learn a flag preference based on a reward signal while habituating to phasic distractor onsets; this version performs the task in real time. A second version, recognising numeric digits, showed large performance increases in preliminary testing when utilising increasingly capable hardware support, including an nVidia graphics card (see Figure 1). For further details see the supplementary material.
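The requirement that each component's input signals be compatible, in both format and semantics, with its afferent structures' outputs can be made concrete with a small sketch. The following is purely illustrative, using hypothetical names rather than the real BRAHMS API: each process declares the signal representation it emits and expects, and the framework refuses to connect processes whose representations disagree.

```python
# Illustrative sketch only -- these classes and names are hypothetical,
# not the BRAHMS API. The point is that connections are validated
# against declared signal semantics before the model can run.

from dataclasses import dataclass

@dataclass(frozen=True)
class SignalSpec:
    """Shape and semantics of a signal passed between components."""
    shape: tuple
    semantics: str  # e.g. "salience_map", "gaze_shift"

class Process:
    """A model component with declared input and output signal specs."""
    def __init__(self, name, in_spec, out_spec):
        self.name, self.in_spec, self.out_spec = name, in_spec, out_spec

def connect(src: Process, dst: Process):
    """Link two processes, checking that signal representations match."""
    if src.out_spec != dst.in_spec:
        raise ValueError(
            f"cannot link {src.name} -> {dst.name}: "
            f"{src.out_spec} != {dst.in_spec}")
    return (src, dst)

# Two hypothetical components with agreeing signal specs:
retina = Process("retina", None, SignalSpec((128, 128), "salience_map"))
sc = Process("superior_colliculus",
             SignalSpec((128, 128), "salience_map"),
             SignalSpec((2,), "gaze_shift"))
link = connect(retina, sc)  # succeeds: output and input specs agree
```

A mismatch in either shape or semantics (say, a component expecting firing rates rather than a salience map) would raise an error at wiring time rather than producing silently meaningless behaviour at run time.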
Figure 1: Performance gains from a variety of parallel computing configurations. Mean time (over 5 runs) to simulate one second of the model. Single Machine: 2 x Intel Xeon (3.2 GHz); Distributed: 2 x Intel Xeon (3.2 GHz) and 4 x Intel Core2 (3.0 GHz); CUDA: 2 x Intel Xeon (3.2 GHz) with 1 x nVidia 295GTX using CUDA; CUDA + custom image chip: identical to CUDA, but with extra image processing performed on the SCAMP chip (Dudek and Carey 2006) that is not performed in the other configurations.
Keywords:
computational neuroscience
Conference:
Bernstein Conference on Computational Neuroscience, Berlin, Germany, 27 Sep - 1 Oct, 2010.
Presentation Type:
Poster Abstract
Topic:
Bernstein Conference on Computational Neuroscience
Citation:
Cope A, Chambers J, Barr D, Dudek P and Gurney K (2010). Systems level model integration and embodiment: a case study with gaze control. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience.
doi: 10.3389/conf.fncom.2010.51.00077
Copyright:
The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers.
They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.
The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.
For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received:
14 Sep 2010;
Published Online:
23 Sep 2010.
* Correspondence:
Dr. Alex Cope, University of Sheffield, Department of Psychology, Sheffield, United Kingdom, a.cope@shef.ac.uk