About this Research Topic
To advance research on large-scale deep spiking neural networks (SNNs), we focus on the following key issues.
First, scaling SNNs up to large networks poses challenges in both algorithms and architectures, owing to the complexity of their temporal dynamics and spike-based computation. Constructing large-scale SNNs requires overcoming problems such as gradient explosion/vanishing and performance degradation, and designing scalable architectures, such as Transformers, that push the field to new frontiers.
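To make the training difficulty concrete, here is a minimal sketch (our own illustration, with hypothetical parameter values, not part of the call itself) of a discrete-time leaky integrate-and-fire (LIF) neuron. Its hard-threshold spike function has zero gradient almost everywhere, which is the root of the gradient explosion/vanishing issues noted above and the reason deep SNNs are typically trained with surrogate gradients.

```python
def lif_forward(inputs, beta=0.9, v_th=1.0):
    """Discrete-time LIF neuron: leaky membrane integration with a hard
    threshold. The Heaviside spike function is non-differentiable, so
    backpropagation through time needs a surrogate derivative here.
    beta and v_th are illustrative example values."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = beta * v + x               # leaky integration of input current
        s = 1.0 if v >= v_th else 0.0  # non-differentiable spike decision
        v = v - s * v_th               # soft reset after a spike
        spikes.append(s)
    return spikes
```

Surrogate-gradient methods keep this forward pass unchanged and only replace the derivative of the spike decision with a smooth approximation during the backward pass.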
Second, the study of SNNs is motivated by their resemblance to biological neural systems. However, existing biologically plausible algorithms, such as spike-timing-dependent plasticity (STDP), are unsupervised and struggle to converge in large-scale SNNs. Furthermore, mainstream rate-based training methods fail to exploit temporal dynamics. These issues call for bio-inspired algorithms that incorporate advanced neuron models, neural plasticity, and temporal dynamics.
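For readers unfamiliar with STDP, the classic pair-based rule can be sketched as follows; this is our own illustration, and the learning rates and time constants are hypothetical example values.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    delta_t = t_post - t_pre in ms. If the presynaptic spike precedes the
    postsynaptic one (delta_t >= 0) the synapse is potentiated; otherwise
    it is depressed. Both effects decay exponentially with |delta_t|.
    """
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    return -a_minus * math.exp(delta_t / tau_minus)
```

Because the update depends only on local spike timing and carries no task-level error signal, rules of this kind are unsupervised, which is precisely why they struggle to converge at large scale.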
Third, energy efficiency is another motivation for deep SNNs. Most deep SNNs are not purely event-driven, and frameworks and computing chips for efficiently training large-scale event-driven SNNs are lacking. Given the growing demand for energy-efficient computing, we encourage contributions on purely event-driven models, low-power hardware designs, model compression, and efficient computing frameworks that maximize energy efficiency without compromising performance.
Finally, the real-world applications of large-scale SNNs should be highlighted, as they offer immense potential for enhancing performance and efficiency. The integration of neuromorphic hardware, such as Dynamic Vision Sensors, further enhances this potential by providing event-based, asynchronous, and adaptive sensory information that is well suited to deep SNNs. These applications include, but are not limited to: robotics, autonomous vehicles, natural language processing, computer vision, speech recognition, medical diagnosis, and AI for brain systems.
In summary, exploring the potential of large-scale SNNs requires addressing challenges in scaling, brain-inspired algorithm design, energy-efficient computing, and real-world applications. Therefore, in this Research Topic we welcome articles that address these issues, specifically focusing on:
Summary topics:
- Algorithms and architectures for constructing large-scale SNNs.
- Bio-inspired algorithms for large-scale SNNs that effectively integrate biological plasticity and temporal dynamics.
- Large-scale SNN models, algorithms, and computing frameworks that achieve both energy efficiency and high performance.
- Chip designs supporting large-scale online SNN learning.
- Real-world applications of large-scale SNNs.
Keywords: Spiking Neural Networks, Bio-inspired algorithms, Energy-efficient computing, Real-world applications
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.