ORIGINAL RESEARCH article

Front. Robot. AI
Sec. Robot Vision and Artificial Perception
Volume 11 - 2024 | doi: 10.3389/frobt.2024.1435197
This article is part of the Research Topic "Computer Vision Mechanisms for Resource-Constrained Robotics Applications".

A Spiking Neural Network for Active Efficient Coding

Provisionally accepted
  • 1 UMR6602 Institut Pascal (IP), Aubière, France
  • 2 Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany

The final, formatted version of the article will be published soon.

    Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption. Here, we propose the first AEC system that is fully implemented as a Spiking Neural Network (SNN) driven by input from an event-based camera. This input is efficiently encoded by a two-layer SNN, which in turn feeds a spiking reinforcement learner that learns motor commands to maximize an intrinsic reward signal. This reward signal is computed directly from the activity levels of the first two layers. We test our approach on two behaviors: visual tracking of a translating target and stabilization of the orientation of a rotating target. To the best of our knowledge, our work represents the first fully spiking AEC model.
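
    To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' code) of an Active Efficient Coding loop: event input, a two-layer spiking encoder, an intrinsic reward derived from layer activity, and a reinforcement-learned motor command. All layer sizes, neuron dynamics, the reward definition, and the learning rule are assumptions; the article itself specifies the actual architecture and update rules.

# Minimal AEC sketch under assumed parameters; uses plain LIF neurons and a
# REINFORCE-style update as stand-ins for the spiking rules in the paper.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_L1, N_L2, N_ACT = 64, 32, 16, 3    # hypothetical sizes
W1 = rng.normal(0, 0.1, (N_L1, N_IN))      # input -> layer 1 weights
W2 = rng.normal(0, 0.1, (N_L2, N_L1))      # layer 1 -> layer 2 weights
policy = np.zeros((N_ACT, N_L2))           # linear "actor" read out from layer 2

def lif_step(v, inp, tau=0.9, thresh=1.0):
    """One leaky integrate-and-fire step: leak, integrate, spike, reset."""
    v = tau * v + inp
    spikes = (v >= thresh).astype(float)
    v = v * (1.0 - spikes)                 # reset neurons that fired
    return v, spikes

v1, v2 = np.zeros(N_L1), np.zeros(N_L2)

for t in range(100):
    events = (rng.random(N_IN) < 0.05).astype(float)    # stand-in event frame

    v1, s1 = lif_step(v1, W1 @ events)                  # layer-1 spikes
    v2, s2 = lif_step(v2, W2 @ s1)                      # layer-2 spikes

    # Intrinsic reward from the activity of the first two layers
    # (assumed form; the paper defines the actual reward signal).
    reward = s1.mean() + s2.mean()

    # Softmax policy over motor commands, driven by layer-2 spikes.
    logits = policy @ s2
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    action = rng.choice(N_ACT, p=probs)

    # Policy-gradient-style update toward actions that yield higher reward.
    grad = -probs; grad[action] += 1.0
    policy += 0.01 * reward * np.outer(grad, s2)

    The same loop structure would apply to both behaviors tested in the article (tracking a translating target and stabilizing a rotating one); only the motor command space and the event input would differ.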

    Keywords: active efficient coding, spiking neural network, event-based cameras, unsupervised learning, reinforcement learning

    Received: 19 May 2024; Accepted: 14 Oct 2024.

    Copyright: © 2024 Barbier, Teulière and Triesch. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Thomas Barbier, UMR6602 Institut Pascal (IP), Aubière, France

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.