Benchmarks, competitions, and grand challenges have strongly shaped the growth of many scientific and engineering disciplines. From Hilbert’s problems, which profoundly influenced mathematics in the 20th century, to competitions for robotic soccer players or autonomous vehicle drivers, to present-day large-scale machine learning competitions and benchmarks for natural image and sound recognition, each has driven advances in its field, with major benefits for both the maturation of the field and its public visibility.

Neuromorphic engineering, which is concerned with building hardware and software systems that mimic the architectures and computations of brains, is still a young and unconventional discipline. Although the field has delivered many success stories, it has so far lacked benchmarks that allow meaningful comparisons between different neuromorphic systems, as well as comparisons to conventional computing systems. A major reason is that while machine learning benchmarks for classification and regression typically pursue the single goal of maximizing prediction accuracy, neuromorphic systems are built with multiple target criteria in mind: low power consumption, small response latency, efficient use of hardware resources, robustness to real-world noise and variability, and the more abstract goal of closely modeling biology. Furthermore, many problems in neuromorphic engineering involve precisely timed event- or spike-based computation, which calls for different evaluation criteria than conventional clocked digital systems. As the races for tiny improvements in machine learning have shown, however, focusing on accuracy alone while neglecting other practically relevant criteria such as power or resource consumption creates risks for many disciplines besides neuromorphic engineering.
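As a purely illustrative sketch of what such multi-criteria reporting could look like (none of the names, fields, or numbers below come from this call; they are hypothetical), a benchmark result might record accuracy alongside energy, latency, and resource use rather than accuracy alone:

```python
# Hypothetical sketch: a multi-criteria benchmark report for one system on one task.
# Field names and units are assumptions for illustration, not a proposed standard.
from dataclasses import dataclass


@dataclass
class BenchmarkReport:
    system_name: str
    accuracy: float                  # fraction of correct predictions, 0..1
    energy_per_inference_uj: float   # microjoules consumed per classified sample
    latency_ms: float                # time from stimulus onset to decision
    neurons_used: int                # hardware resources occupied

    def summary(self) -> str:
        # Report all criteria together, so no single number hides the trade-offs.
        return (f"{self.system_name}: acc={self.accuracy:.3f}, "
                f"E={self.energy_per_inference_uj:.1f} uJ/inf, "
                f"latency={self.latency_ms:.1f} ms, "
                f"neurons={self.neurons_used}")


# Example usage with made-up numbers:
report = BenchmarkReport("spiking-net-A", accuracy=0.92,
                         energy_per_inference_uj=45.0,
                         latency_ms=8.5, neurons_used=4096)
print(report.summary())
```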
The goal of this Frontiers Research Topic is to engage in a critical discussion on the role of benchmarks and challenges as driving forces for progress in neuromorphic engineering. We aim to establish benchmark problems that facilitate the comparison of existing and novel algorithms and hardware systems, and that enable comparisons to other disciplines. We will create novel challenges relevant to all ICT fields by focusing not only on the quality of results, but also on upper limits for the power consumption and computational resources required. We encourage contributions from related disciplines such as computational neuroscience, machine learning, computer vision, and robotics.
Relevant topics include, but are not limited to:
1. Presentation of benchmark datasets for neuromorphic engineering, in particular datasets collected with neuromorphic sensors
2. Comparisons of existing and novel neuromorphic systems on new and established benchmark problems, and comparisons to state-of-the-art solutions from related disciplines
3. Introduction of performance measures and evaluation criteria for neuromorphic systems
4. Quantitative comparisons of the strengths and weaknesses of biologically inspired systems versus systems based on conventional computing and electronics
5. Specific benchmarks and evaluation criteria for event- and spike-based computing systems for spatio-temporal pattern recognition (an illustrative spike-train metric is sketched after this list)
6. Opinion articles on the benefits and risks of benchmarking for neuromorphic engineering or related disciplines
7. Proposals of grand challenges and competitions that showcase the strengths of neuromorphic engineering, or aim to overcome current deficits
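To make item 5 concrete, the sketch below shows one established evaluation criterion for spike-based outputs, the van Rossum (2001) spike-train distance, implemented with a simple discrete-time approximation. It is offered only as an example of the kind of metric such contributions might use or improve upon; the function name, parameters, and default values are our own choices, not part of this call.

```python
import numpy as np


def van_rossum_distance(spikes_a, spikes_b, tau=0.01, dt=0.001, t_max=1.0):
    """Van Rossum distance between two spike trains.

    spikes_a, spikes_b : iterables of spike times in seconds
    tau   : time constant of the exponential kernel (s)
    dt    : discretization step (s)
    t_max : duration of the observation window (s)
    """
    t = np.arange(0.0, t_max, dt)

    def filtered(spike_times):
        # Convolve the spike train with a causal exponential kernel exp(-t/tau).
        trace = np.zeros_like(t)
        for s in spike_times:
            mask = t >= s
            trace[mask] += np.exp(-(t[mask] - s) / tau)
        return trace

    diff = filtered(np.asarray(spikes_a)) - filtered(np.asarray(spikes_b))
    # Discrete approximation of sqrt((1/tau) * integral of the squared difference).
    return np.sqrt(np.sum(diff**2) * dt / tau)


# Example: compare a system's output spike times (s) to a reference train.
d = van_rossum_distance([0.10, 0.25, 0.40], [0.11, 0.26, 0.55], tau=0.02)
print(f"van Rossum distance: {d:.3f}")
```

Unlike a rate-based accuracy score, such a metric is sensitive to spike timing at the scale set by tau, which is exactly the kind of criterion that clocked digital benchmarks do not capture.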
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.