AUTHOR=Liu Yuhang, Liu Tingyu, Hu Yalun, Liao Wei, Xing Yannan, Sheik Sadique, Qiao Ning TITLE=Chip-In-Loop SNN Proxy Learning: a new method for efficient training of spiking neural networks JOURNAL=Frontiers in Neuroscience VOLUME=17 YEAR=2024 URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2023.1323121 DOI=10.3389/fnins.2023.1323121 ISSN=1662-453X ABSTRACT=

The primary approaches to training spiking neural networks (SNNs) either train artificial neural networks (ANNs) first and then convert them into SNNs, or train SNNs directly using surrogate gradient techniques. Both methods, however, share a common limitation: they rely on frame-based processing, in which asynchronous events are gathered into synchronous frames for computation. This departs from the authentically asynchronous, event-driven nature of SNNs and leads to notable performance degradation when the trained models are deployed on SNN simulators or hardware chips for real-time asynchronous computation. To eliminate this degradation, we propose a hardware-based SNN proxy learning method called Chip-In-Loop SNN Proxy Learning (CIL-SPL), which effectively removes the performance loss caused by the mismatch between synchronous and asynchronous computation. To demonstrate the effectiveness of our method, we trained models on public datasets such as N-MNIST, tested them on an SNN simulator and a hardware chip, and compared the results against classical training methods.
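To make the proxy-learning idea concrete, the following is a minimal, hypothetical sketch (not the authors' actual CIL-SPL implementation): the error signal is computed from the output of a non-differentiable "in-the-loop" SNN forward pass (mocked here by a hard threshold standing in for the chip), while gradients are taken through a differentiable ANN proxy that shares the same weights, in the straight-through style. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def snn_forward(x, w):
    # Stand-in for the chip-in-loop SNN forward pass:
    # a hard, non-differentiable spiking threshold.
    return (x @ w > 0.0).astype(float)

def proxy_forward(x, w):
    # Differentiable ANN proxy sharing the same weights,
    # used only to compute gradients (sigmoid here, as an assumption).
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def train_step(x, y, w, lr=0.5):
    # The error is measured on the actual SNN (chip) output ...
    err = snn_forward(x, w) - y
    # ... but backpropagated through the smooth proxy instead.
    p = proxy_forward(x, w)
    grad = x.T @ (err * p * (1.0 - p))
    return w - lr * grad

# Toy linearly separable data, purely for illustration.
x = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
y = np.array([[1.], [0.], [0.], [1.]])

w = np.zeros((2, 1))
for _ in range(20):
    w = train_step(x, y, w)

print("accuracy:", (snn_forward(x, w) == y).mean())  # → accuracy: 1.0
```

Because the loss is driven by the hardware-style output rather than the proxy's, any mismatch between synchronous proxy computation and the deployed asynchronous SNN is fed back into training, which is the core motivation the abstract describes.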