About this Research Topic
If neuromorphic computing systems succeed in boosting the efficiency of AI acceleration, they will pave the way for a far richer and more diverse edge-AI application ecosystem, one that entails real-time adaptation, autonomy, rich temporal dynamics, and less reliance on cloud computing resources.
Nevertheless, the efficiency that would make future AI sustainable has yet to be demonstrated. Despite advances in neuromorphic processor engineering, which have led to commercially available solutions, and although spiking neural-network models can now often be trained to competitive application performance, a key question remains: "Is neuromorphic processing more energy efficient?" So far, our solutions barely capitalize on the cornerstones of neuromorphic processing, including the exploitation of spatiotemporal sparsity, asynchronous event-driven execution, synaptically delayed communication, and on-device learning.
For example, it is questionable whether rate-based models trained offline, on non-neuromorphic platforms, with back-propagation, input-data framing, and per-layer synchronization, are a good match for sparse, asynchronous, event-based inference on neuromorphic hardware. Algorithm-hardware co-optimization, on-device learning/fine-tuning, and delay parameterization are all promising exploration paths for making models more economical, compact, or efficient when deployed on neuromorphic systems. Dynamic mapping and on-device adaptation may utilize neuromorphic processor resources more efficiently. Finally, benchmarks and diverse datasets are essential to validate and quantify the effectiveness of different solutions in this direction.
We invite contributions that help the community better understand these challenges and develop solutions for better exploiting neuromorphic processing, including modeling, training, on-device learning and adaptation, co-optimization, and benchmarking. Relevant themes include (but are not limited to):
- Application datasets and benchmarks for demonstrating learning and adaptation on neuromorphic platforms
- Online or on-device model learning/adaptation to improve the efficiency of neuromorphic platforms
- Algorithm-hardware co-optimizations and adaptation for neuromorphic processing
- Exploiting synaptic (axonal, dendritic) delays in models
- Hardware-aware or hardware-in-the-loop training for non-deterministic processing (on digital asynchronous event-driven and/or analog neuromorphic platforms)
- Multi-timescale and delay-based parameterization of neural network models for (hardware-)efficiency and performance
- Model mapping and scheduling for neuromorphic processors
Topic Editor Manolis Sifalakis is employed by Imec (Eindhoven, Netherlands). All other Topic Editors declare no competing interests with regard to the Research Topic subject.
Keywords: efficient AI acceleration, neuromorphic processing, algorithm-hardware co-optimization, model learning and adaptation, dynamic scheduling and model mapping, asynchronous computing
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.