ORIGINAL RESEARCH article

Front. Neurosci.

Sec. Neuromorphic Engineering

Volume 19 - 2025 | doi: 10.3389/fnins.2025.1545583

This article is part of the Research Topic: Brain-Inspired Computing under the Era of Large Model and Generative AI: Theory, Algorithms, and Applications.

Dynamic Spatio-Temporal Pruning for Efficient Spiking Neural Networks

Provisionally accepted
Shuiping Gou 1, Jiahui Fu 1, Yu Sha 1, Zhen Cao 1, Zhang Guo 1*, Jason K. Eshraghian 2, Ruimin Li 1, Licheng Jiao 1
  • 1 Xidian University, Xi'an, China
  • 2 University of California, Santa Cruz, Santa Cruz, California, United States

The final, formatted version of the article will be published soon.

    Spiking neural networks (SNNs), which draw from biological neuron models, have the potential to improve the computational efficiency of artificial neural networks (ANNs) due to their event-driven nature and sparse data flow. SNNs rely on dynamic sparsity, in that neurons are trained to activate sparsely to minimize data communication. This is critical for hardware, given the bandwidth limitations between memory and processor. Because neurons are sparsely activated, weights are accessed less frequently and can potentially be pruned with less performance degradation in an SNN than in an equivalent ANN counterpart. Reducing the number of synaptic connections between neurons also relaxes memory demands for neuromorphic processors. In this paper, we propose a spatio-temporal pruning algorithm that dynamically adapts to reduce the temporal redundancy that often exists in SNNs when processing Dynamic Vision Sensor (DVS) datasets. Spatial pruning is executed based on both global parameter statistics and inter-layer parameter counts, and is shown to reduce model degradation under extreme sparsity. We provide an ablation study that isolates the various components of spatio-temporal pruning and find that our approach achieves excellent performance across all datasets, with especially high performance on datasets with time-varying features. We achieve a 0.69% accuracy improvement on the DVS128 Gesture dataset, despite the common expectation that pruning degrades performance. Notably, this enhancement comes with a 98.18% reduction in parameter count and a 50% reduction in temporal redundancy.
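
    To make the spatial component described above concrete, the sketch below shows one plausible reading of "pruning based on global parameter statistics and inter-layer parameter counts": a global magnitude threshold pooled over all weights, combined with a per-layer adjustment proportional to each layer's share of the total parameters. This is a minimal illustration under assumed PyTorch layers; the function name prune_spatial, the target_sparsity value, and the 0.1 scaling factor are all hypothetical, and the paper's actual criterion, pruning schedule, and temporal pruning step are not reproduced here.

import torch
import torch.nn as nn


def magnitude_threshold(mags: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return the magnitude below which a fraction `sparsity` of weights falls."""
    k = max(1, min(mags.numel(), int(sparsity * mags.numel())))
    return torch.kthvalue(mags, k).values


def prune_spatial(model: nn.Module, target_sparsity: float = 0.98) -> None:
    """Zero out small-magnitude weights in every Conv2d/Linear layer, in place."""
    layers = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
    weights = [m.weight.data for m in layers]
    total = sum(w.numel() for w in weights)

    # Global statistic: one threshold computed over the pooled weight magnitudes.
    global_thr = magnitude_threshold(
        torch.cat([w.abs().flatten() for w in weights]), target_sparsity)

    for w in weights:
        # Inter-layer adjustment: layers holding a larger share of the parameters
        # are pruned slightly harder (a simple heuristic chosen for illustration).
        share = w.numel() / total
        layer_sparsity = min(0.999, target_sparsity * (1.0 + 0.1 * share))
        layer_thr = magnitude_threshold(w.abs().flatten(), layer_sparsity)
        thr = torch.maximum(global_thr, layer_thr)
        w.mul_((w.abs() >= thr).to(w.dtype))  # hard mask; fine-tuning would follow

    The temporal side of the method (reducing redundant simulation time steps on DVS inputs, e.g. the 50% reduction reported in the abstract) is a separate mechanism and is not shown in this sketch.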

    Keywords: spiking neural networks, Spatio-Temporal Pruning, dynamic vision sensor, sparse connectivity, Adaptive Temporal Dynamics

    Received: 15 Dec 2024; Accepted: 03 Mar 2025.

    Copyright: © 2025 Gou, Fu, Sha, Cao, Guo, Eshraghian, Li and Jiao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Zhang Guo, Xidian University, Xi'an, China

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
