ORIGINAL RESEARCH article

Front. Neurosci.
Sec. Neuromorphic Engineering
Volume 19 - 2025 | doi: 10.3389/fnins.2025.1522788
This article is part of the Research Topic Algorithm-Hardware Co-Optimization in Neuromorphic Computing for Efficient AI View all 4 articles

Optimising Event-Driven Spiking Neural Network with Regularisation and Cutoff

Provisionally accepted
Dengyu Wu 1,2*, Gaojie Jin 3,4, Han Yu 5,6, Xinping Yi 7, Xiaowei Huang 1
  • 1 University of Liverpool, Liverpool, United Kingdom
  • 2 King's College London, London, England, United Kingdom
  • 3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences (CAS), Beijing, Beijing Municipality, China
  • 4 University of Exeter, Exeter, England, United Kingdom
  • 5 Chalmers University of Technology, Göteborg, Vastra Gotaland County, Sweden
  • 6 Queen's University Belfast, Belfast, Northern Ireland, United Kingdom
  • 7 Southeast University, Nanjing, Jiangsu Province, China

The final, formatted version of the article will be published soon.

    Spiking neural networks (SNNs), as the next generation of artificial neural networks (ANNs), offer a closer mimicry of natural neural networks and hold promise for significant improvements in computational efficiency. However, current SNNs are trained to infer over a fixed duration, overlooking the potential of dynamic inference. In this paper, we strengthen the marriage between SNNs and event-driven processing by proposing a cutoff for SNNs, which can terminate inference at any time to improve efficiency. Two novel optimisation techniques are presented to achieve inference-efficient SNNs: a Top-K cutoff and a regularisation. The proposed regularisation influences the training process, optimising the SNN for the cutoff, while the Top-K cutoff technique optimises the inference phase. We conduct an extensive set of experiments on multiple benchmark frame-based datasets, including CIFAR-10/100 and Tiny-ImageNet, and event-based datasets, including CIFAR10-DVS, N-Caltech101 and DVS128 Gesture. The experimental results demonstrate the effectiveness of our techniques in both ANN-to-SNN conversion and direct training, enabling SNNs to require 1.76 to 2.76× fewer timesteps on CIFAR-10 and 1.64 to 1.95× fewer timesteps across all event-based datasets, with near-zero accuracy loss. These findings affirm the compatibility and potential benefits of our techniques in enhancing accuracy and reducing inference latency when integrated with existing methods. Code available: https://github.com/Dengyu-Wu/SNNCutoff
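    To illustrate the general idea of a confidence-based cutoff described in the abstract, the sketch below accumulates per-timestep SNN outputs (e.g. output-layer spike counts) and stops as soon as the gap between the top-1 and top-K accumulated values exceeds a margin. This is a minimal illustrative sketch, not the authors' implementation: the function name `topk_cutoff_predict` and the `margin` and `k` parameters are assumptions for exposition; the paper's actual criterion and released code are at the repository linked above.

```python
import numpy as np

def topk_cutoff_predict(logits_per_step, k=2, margin=1.0):
    """Event-driven inference with a Top-K confidence-gap cutoff (illustrative).

    logits_per_step: sequence of per-timestep output vectors (spike counts
    or logits). Outputs are accumulated timestep by timestep; inference is
    cut off early once the gap between the top-1 and top-k accumulated
    values reaches `margin`. Returns (predicted_class, timesteps_used).
    """
    acc = np.zeros_like(logits_per_step[0], dtype=float)
    T = len(logits_per_step)
    for t in range(T):
        acc += logits_per_step[t]              # integrate this timestep's output
        top = np.sort(acc)[::-1]               # accumulated values, descending
        if top[0] - top[k - 1] >= margin:      # confident enough: terminate early
            return int(np.argmax(acc)), t + 1
    return int(np.argmax(acc)), T              # fall back to full duration

# Example: class 2 dominates, so the cutoff fires before all timesteps are used.
steps = [np.array([0., 0., 1.]), np.array([0., 1., 2.]), np.array([0., 0., 3.])]
pred, used = topk_cutoff_predict(steps, k=2, margin=2.0)  # → (2, 2)
```

Easy inputs satisfy the gap early and terminate in few timesteps, while ambiguous inputs run longer; this per-input adaptivity is what yields the latency reductions reported above.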

    Keywords: Spiking Neural network, ANN-to-SNN conversion, SNN Regularisation, SNN Cutoff, Adaptive inference

    Received: 05 Nov 2024; Accepted: 27 Jan 2025.

    Copyright: © 2025 Wu, Jin, Yu, Yi and Huang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Dengyu Wu, University of Liverpool, Liverpool, United Kingdom

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.