REVIEW article

Front. Neurosci.
Sec. Neuromorphic Engineering
Volume 18 - 2024 | doi: 10.3389/fnins.2024.1383844
This article is part of the Research Topic: Deep Spiking Neural Networks: Models, Algorithms and Applications.

Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods

Provisionally accepted
  • 1 Peng Cheng Laboratory, Shenzhen, China
  • 2 Faculty of Computing, Harbin Institute of Technology, Harbin, Heilongjiang Province, China
  • 3 School of Information Engineering, Shenzhen Graduate School, Peking University, Shenzhen, Guangdong, China
  • 4 National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, China

The final, formatted version of the article will be published soon.

    Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs) by virtue of their high biological plausibility, rich spatial-temporal dynamics, and event-driven computation. Direct training algorithms based on the surrogate gradient method provide sufficient flexibility to design novel SNN architectures and to exploit the spatial-temporal dynamics of SNNs. As previous studies have shown, model performance is highly dependent on model size. Recently, directly trained deep SNNs have achieved great progress on both neuromorphic datasets and large-scale static datasets. Notably, transformer-based SNNs show performance comparable to their ANN counterparts. In this paper, we provide a new perspective for summarizing the theories and methods for training high-performance deep SNNs in a systematic and comprehensive way, covering theoretical fundamentals, spiking neuron models, advanced SNN models and residual architectures, software frameworks and neuromorphic hardware, applications, and future trends.
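    The core idea behind direct training mentioned above is to replace the non-differentiable Heaviside spike function with a smooth surrogate in the backward pass only. The following is a minimal sketch, assuming PyTorch; the names `SurrogateSpike` and `lif_step`, the rectangular surrogate window, and the specific leaky integrate-and-fire (LIF) update are illustrative choices, not the method of any particular paper reviewed here.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        # Non-differentiable spike generation: fire iff membrane potential reaches threshold.
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradient only where v is within 0.5 of the threshold.
        surrogate = ((v - ctx.threshold).abs() < 0.5).float()
        return grad_output * surrogate, None

def lif_step(v, x, tau=2.0, threshold=1.0):
    """One time step of a leaky integrate-and-fire neuron with hard reset.

    v: membrane potential from the previous step; x: input current at this step.
    Returns the binary spike tensor and the updated membrane potential.
    """
    v = v + (x - v) / tau              # leaky integration toward the input
    spike = SurrogateSpike.apply(v, threshold)
    v = v * (1.0 - spike)              # hard reset wherever a spike was emitted
    return spike, v
```

Because `SurrogateSpike` defines a custom backward pass, gradients flow through the spiking nonlinearity and the network can be trained end-to-end with standard optimizers, unrolled over time steps in BPTT fashion.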

    Keywords: Deep spiking neural network, Direct training, Transformer-based SNNs, Residual connection, energy efficiency, high performance

    Received: 08 Feb 2024; Accepted: 03 Jul 2024.

    Copyright: © 2024 Zhou, Zhang, Yu, Ye, Zhou, Huang, Ma, Fan, Zhou and Tian. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Zhengyu Ma, Peng Cheng Laboratory, Shenzhen, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.