EDITORIAL article

Front. Comput. Neurosci., 17 March 2021
This article is part of the Research Topic Understanding and Bridging the Gap between Neuromorphic Computing and Machine Learning.

Editorial: Understanding and Bridging the Gap Between Neuromorphic Computing and Machine Learning
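Lei Deng1*, Huajin Tang2,3 and Kaushik Roy4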

  • 1Department of Precision Instrument, Center for Brain Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
  • 2College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  • 3Zhejiang Lab, Hangzhou, China
  • 4Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States

Introduction

On the road toward artificial general intelligence (AGI), two solution paths have been explored: neuroscience-driven neuromorphic computing, such as spiking neural networks (SNNs), and computer-science-driven machine learning, such as artificial neural networks (ANNs). Owing to the availability of data, high-performance processors, effective learning algorithms, and easy-to-use programming tools, ANNs have achieved tremendous breakthroughs in many intelligent applications. Recently, SNNs have also attracted much attention due to their biological plausibility and their potential for energy efficiency (Roy et al., 2019). However, they still face debate and skepticism because their accuracy lags behind that of “standard” ANNs. This performance gap stems from a variety of factors, including learning techniques, benchmarks, programming tools, and execution hardware, all of which are less mature for SNNs than in the ANN domain.

To this end, we proposed a Research Topic, named “Understanding and Bridging the Gap between Neuromorphic Computing and Machine Learning,” in Frontiers in Neuroscience and Frontiers in Computational Neuroscience to collect recent research on neuromorphic computing and machine learning that helps understand and bridge the aforementioned gap. We received 18 submissions in total and ultimately accepted 14 of them. The accepted papers cover learning algorithms, applications, and efficient hardware.

Learning Algorithms

How to train SNN models is key to improving their functionality and thus to bridging the gap with ANN models. Unlike the ANN domain, which has grown rapidly thanks to sophisticated backpropagation-based learning algorithms, the SNN domain still lacks effective learning algorithms because of the complicated spatiotemporal dynamics and non-differentiable spike activities. Currently, there are two broad categories of learning algorithms for SNNs: unsupervised synaptic plasticity with biological plausibility [e.g., spike-timing-dependent plasticity, STDP (Diehl and Cook, 2015)] and supervised backpropagation with gradient descent [e.g., indirect learning by acquiring gradients from the ANN counterpart (Diehl et al., 2015; Sengupta et al., 2019), direct learning by acquiring gradients from the SNN itself (Lee et al., 2016; Wu et al., 2018; Gu et al., 2019; Zheng et al., 2021), or a combination of both (Rathi et al., 2020)]. The latter category usually achieves higher accuracy and has advanced the model scale to handle ImageNet-level tasks. In the future, we look forward to more studies on SNN learning that close this gap.
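As a concrete illustration of the direct-learning route, the sketch below shows how the non-differentiable spike can be handled with a surrogate gradient inside a machine learning framework. It is a minimal PyTorch example under our own naming and hyperparameter choices (e.g., `SurrogateSpike`, a rectangular surrogate, a simple leaky integrate-and-fire layer), not the algorithm of any particular cited paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate gradient in backward.
    A minimal illustration of direct SNN training, not any specific cited method."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()          # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the firing threshold.
        return grad_output * (v.abs() < 0.5).float()


def lif_forward(inputs, w, v_thresh=1.0, decay=0.9):
    """Unroll a leaky integrate-and-fire layer over time.
    inputs: (T, batch, n_in) tensor of spikes or currents; w: (n_in, n_out) weights."""
    v = torch.zeros(inputs.shape[1], w.shape[1])
    spikes = []
    for t in range(inputs.shape[0]):
        v = decay * v + inputs[t] @ w                # membrane potential update
        s = SurrogateSpike.apply(v - v_thresh)       # spike generation
        v = v * (1.0 - s)                            # reset after firing
        spikes.append(s)
    return torch.stack(spikes)
```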

Next, we briefly summarize the recent progress of neural network (especially SNN) learning presented in our accepted papers. Inspired by the curiosity-based learning mechanism of the brain, Shi et al. propose curiosity-based SNN (CBSNN) models. In the first training epoch, the novelty estimations of all samples are obtained through bio-plausible synaptic plasticity; next, the samples whose novelty estimations exceed a threshold are repeatedly learned, and their novelty estimations are updated, over the next five epochs; then, all samples are learned for one more epoch. The last two steps are repeated until convergence. CBSNNs show better accuracy and higher efficiency than conventional voltage-driven plasticity-centric SNNs on several small-scale datasets. Daram et al. propose ModNet, an efficient dynamic learning system inspired by the neuromodulatory mechanism of the insect brain. An inbuilt modulatory unit regulates learning based on the context and internal state of the system. The network with a modulatory trace achieves 98.8 ± 1.16% average accuracy on the Omniglot dataset for the five-way one-shot image classification task while using 20× fewer trainable parameters than state-of-the-art models. Kaiser, Mostafa et al. introduce deep continuous local learning (DECOLLE), an SNN model equipped with local error functions for online learning. The synaptic plasticity rules are derived from user-defined cost functions and neural dynamics by leveraging the existing automatic differentiation capabilities of machine learning frameworks. The model demonstrates state-of-the-art performance on the N-MNIST and DvsGesture datasets. Fang et al. propose a bio-plausible noise structure to optimize the performance of SNNs trained by gradient descent. By deriving the strict saddle condition for synaptic plasticity, they show that the noise helps the optimization escape from saddle points in high-dimensional domains. The accuracy improvement can reach at least 10% on the MNIST and CIFAR10 datasets. Panda et al. modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations, improving previous learning methods to achieve comparable accuracy with large efficiency gains over ANN counterparts.
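For readers who prefer pseudocode, the curiosity-driven schedule described above for CBSNN can be sketched roughly as follows; `train_step`, `novelty_of`, the threshold, and the stopping rule are placeholders we introduce for illustration and do not reproduce the authors' implementation.

```python
import numpy as np

def train_cbsnn_style(samples, train_step, novelty_of, threshold=0.5,
                      max_rounds=20, tol=1e-3):
    """Sketch of the curiosity-driven training schedule; callables are placeholders
    standing in for the paper's synaptic plasticity and novelty estimation."""
    # Epoch 1: learn every sample once and estimate its novelty.
    for x in samples:
        train_step(x)
    novelty = np.array([novelty_of(x) for x in samples])

    prev_mean = np.inf
    for _ in range(max_rounds):
        # For five epochs, re-learn only the "curious" (high-novelty) samples
        # and refresh their novelty estimates.
        for _ in range(5):
            for i in np.where(novelty > threshold)[0]:
                train_step(samples[i])
                novelty[i] = novelty_of(samples[i])
        # Then learn all samples for one more epoch.
        for i, x in enumerate(samples):
            train_step(x)
            novelty[i] = novelty_of(x)
        # Repeat these two steps until the mean novelty stops changing.
        if abs(prev_mean - novelty.mean()) < tol:
            break
        prev_mean = novelty.mean()
    return novelty
```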

Applications

Unlike the artificial neurons in ANNs, each spiking neuron in an SNN has intrinsic temporal dynamics, which makes it well suited to processing sequential information. In this Research Topic, we accepted two papers that discuss SNN applications. Wu et al. present the first work that uses SNNs for large-vocabulary continuous automatic speech recognition (LVCSR) tasks. Their SNNs demonstrate accuracies on par with their ANN counterparts while requiring only 10 algorithmic timesteps and 0.68× the total synaptic operations. They integrate the models into the PyTorch-Kaldi Speech Recognition Toolkit for rapid development. Kugele et al. apply SNNs to processing spatiotemporal event streams (e.g., the N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture datasets). They improve the ANN-to-SNN conversion learning method by introducing connection delays during the pre-training of ANNs to match the propagation delays in the converted SNNs. In this way, the resulting SNNs can handle the above tasks accurately and efficiently.
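As background for the conversion approach that Kugele et al. build on, the following NumPy sketch shows the basic rate-based ANN-to-SNN conversion idea (constant-rate input coding, integrate-and-fire accumulation, soft reset). It deliberately omits weight normalization, biases, and the connection-delay matching that is the contribution of their paper, and all names and constants are our own.

```python
import numpy as np

def convert_and_run(ann_weights, x, timesteps=100, v_thresh=1.0):
    """Minimal rate-based ANN-to-SNN conversion sketch: each layer's ReLU-like
    activation is approximated by the firing rate of integrate-and-fire neurons."""
    layer_input = np.tile(x, (timesteps, 1))          # constant-rate input coding
    rates = []
    for w in ann_weights:                             # list of (n_in, n_out) arrays
        v = np.zeros(w.shape[1])
        spikes = np.zeros((timesteps, w.shape[1]))
        for t in range(timesteps):
            v += layer_input[t] @ w                   # integrate weighted input
            fired = v >= v_thresh
            spikes[t] = fired.astype(float)
            v[fired] -= v_thresh                      # soft reset
        layer_input = spikes                          # spike trains feed the next layer
        rates.append(spikes.mean(axis=0))             # firing rate ≈ ANN activation
    return rates[-1]
```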

In addition, besides energy efficiency (Merolla et al., 2014), recent studies further find that the event-driven computing paradigm of SNNs endows them with high robustness (He et al., 2020; Liang et al., 2020) and a superior capability for learning sparse features (He et al., 2020). We believe it is very important to mine these genuine advantages of SNNs in order to determine their value in practical applications.

Efficient Hardware

Running neural networks on general-purpose processors is inefficient, which has stimulated the development of various domain-specific hardware platforms, including those for ANNs [e.g., DaDianNao (Chen et al., 2014), TPU (Jouppi et al., 2017), Eyeriss (Chen et al., 2017), and Thinker (Yin et al., 2017)], for SNNs [e.g., SpiNNaker (Furber et al., 2014), TrueNorth (Merolla et al., 2014), Loihi (Davies et al., 2018), and DYNAPs (Moradi et al., 2017)], and for cross-paradigm modeling [e.g., Tianjic (Pei et al., 2019; Deng et al., 2020)]. In this Research Topic, we accepted seven papers on neural network hardware: three for ANNs, two for SNNs, and two for cross-paradigm modeling.

Sim and Lee propose SC-CNN, a bitstream-based convolutional neural network (CNN) inspired by stochastic computing (SC), which uses bitstreams to represent numbers, to improve machine learning hardware. Benefitting from the CNN substrate, SC-CNN achieves high accuracy; further benefitting from SC, it is highly efficient, scalable, and fault-tolerant. In contrast to common digital machine learning accelerators, Kaiser, Faria et al. present a clockless autonomous probabilistic circuit, in which synaptic weights are read out in the form of analog voltages, for fast and efficient learning without digital computing. They demonstrate a circuit built with existing technology that emulates the Boltzmann machine learning algorithm. Muller et al. introduce bias matching, a top-down neural network design approach that matches the inductive biases required by a machine learning system to the hardware constraints of its implementation.
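To give a flavor of the stochastic computing principle behind SC-CNN, the toy sketch below encodes values in [0, 1] as random bitstreams and multiplies them with a bitwise AND. It illustrates unipolar SC arithmetic in general and is not drawn from the SC-CNN design itself; the stream length is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p, length=1024):
    """Encode a value in [0, 1] as a random bitstream whose fraction of 1s approximates p."""
    return (rng.random(length) < p).astype(np.uint8)

def from_bitstream(bits):
    """Decode a bitstream back to a value by counting the fraction of 1s."""
    return bits.mean()

# In unipolar stochastic computing, multiplication of two independent streams
# reduces to a bitwise AND gate; longer streams give more accurate results.
a, b = 0.6, 0.5
product = from_bitstream(to_bitstream(a) & to_bitstream(b))
print(product)  # ≈ 0.30, within the noise of a 1024-bit stream
```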

To alleviate the high cost of training SNNs with backpropagation, Lee et al. propose a spike-train level direct feedback alignment (ST-DFA) algorithm. Compared to state-of-the-art backpropagation learning algorithms, it demonstrates excellent performance vs. overhead trade-offs on FPGAs for speech and image classification applications. Dutta et al. propose an all-ferroelectric field-effect transistor (FeFET)-based SNN hardware platform that allows low-power spike-based information processing with co-localized memory and computing. They implement a surrogate-gradient supervised learning algorithm on their efficient SNN platform, which further accounts for the impact of device variation and the limited bit precision of on-chip synaptic weights on classification accuracy.
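For orientation, generic direct feedback alignment (DFA) replaces the transposed weight matrices of backpropagation with fixed random feedback matrices. The minimal sketch below shows this on a real-valued two-layer network and is only a conceptual stand-in for ST-DFA, which operates at the spike-train level; all names and dimensions are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def dfa_step(x, y, W1, W2, B1, lr=0.1):
    """One update of generic direct feedback alignment on a two-layer network.
    The output error is projected to the hidden layer through a fixed random
    matrix B1 instead of W2.T, so no backward pass through W2 is needed."""
    h = np.tanh(x @ W1)                 # hidden activity
    y_hat = h @ W2                      # linear output
    e = y_hat - y                       # output error
    dW2 = np.outer(h, e)
    dh = (e @ B1.T) * (1 - h ** 2)      # random projection replaces backprop
    dW1 = np.outer(x, dh)
    return W1 - lr * dW1, W2 - lr * dW2

# Toy usage: the feedback matrix B1 is drawn once and never learned.
n_in, n_h, n_out = 4, 8, 2
W1, W2 = rng.normal(size=(n_in, n_h)), rng.normal(size=(n_h, n_out))
B1 = rng.normal(size=(n_h, n_out))
x, y = rng.normal(size=n_in), np.array([1.0, 0.0])
W1, W2 = dfa_step(x, y, W1, W2, B1)
```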

Parsa et al. build a hierarchical pseudo agent-based multi-objective Bayesian hyperparameter optimization framework for both software and hardware. It not only maximizes the performance of the network but also minimizes the energy and area overheads of the corresponding neuromorphic hardware. They validate the proposed framework with both ANN and SNN models, involving both deep learning accelerators [e.g., PUMA (Ankit et al., 2019)] and neuromorphic hardware [e.g., DANNA2 (Mitchell et al., 2018) and mrDANNA (Chakma et al., 2017)]. Instead of implementing ANNs and SNNs separately, integrating them has become a promising direction for further breakthroughs toward AGI through complementary advantages (Pei et al., 2019). Therefore, efficient hardware that can support the individual modeling of ANNs and SNNs as well as their hybrid modeling is very important. This has been achieved by the cross-paradigm Tianjic platform (Deng et al., 2020), based on which Wang et al. further present an end-to-end mapping framework for implementing various hybrid neural networks. By constructing hardware configuration schemes for four typical signal conversions and establishing a global timing adjustment mechanism among heterogeneous modules, they implement hybrid models with low execution latency and low power consumption.
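To convey the multi-objective aspect of such hardware-aware search, the toy sketch below keeps the Pareto front over (accuracy, energy, area) for randomly drawn configurations. The `evaluate` function and the configuration space are hypothetical, and the actual framework uses hierarchical Bayesian optimization rather than the random sampling used here.

```python
import numpy as np

rng = np.random.default_rng(2)

def dominates(a, b):
    """True if candidate a is at least as good as b on every objective and strictly
    better on at least one (maximize accuracy; minimize energy and area)."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and better

def pareto_front(evals):
    """Keep only candidates not dominated by any other candidate."""
    return [e for i, e in enumerate(evals)
            if not any(dominates(o, e) for j, o in enumerate(evals) if j != i)]

# Placeholder evaluation: returns (accuracy, energy, area) for a configuration.
def evaluate(cfg):
    return (rng.uniform(0.8, 0.99), cfg["neurons"] * 0.1, cfg["neurons"] * 0.05)

configs = [{"neurons": int(n)} for n in rng.integers(64, 1024, size=20)]
front = pareto_front([evaluate(c) for c in configs])
```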

Conclusion

Machine learning and neuromorphic computing are two modeling paradigms on the road toward AGI. ANNs have achieved tremendous breakthroughs in many intelligent applications, benefitting from big data, high-performance processors, effective learning algorithms, and easy-to-use programming tools; in contrast, SNNs are still in their infancy, and there is a dire need for more neuromorphic benchmarks. Through cross-disciplinary research, we expect to understand and bridge the gap between neuromorphic computing and machine learning. This Research Topic is just a small step in that direction, and we look forward to more innovations on the path toward brain-like intelligence.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Funding

This work was supported in part by the key scientific technological innovation research project of the Ministry of Education of China, Zhejiang Lab (Grant No. 2019KC0AD02), the Center for Brain Inspired Computing (C-BRIC), a Semiconductor Research Corporation (SRC) program sponsored by DARPA, the National Science Foundation, Sandia National Laboratory, the ONR-sponsored Multi-University Research Initiative (MURI), and the Vannevar Bush Faculty Fellowship program.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Ankit, A., Hajj, I. E., Chalamalasetti, S. R., Ndu, G., Foltin, M., Williams, R. S., et al. (2019). “PUMA: A programmable ultra-efficient memristor-based accelerator for machine learning inference,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (Providence, RI: ACM), 715–731. doi: 10.1145/3297858.3304049

Chakma, G., Adnan, M. M., Wyer, A. R., Weiss, R., Schuman, C. D., and Rose, G. S. (2017). Memristive mixed-signal neuromorphic systems: Energy-efficient learning at the circuit-level. IEEE J. Emerg. Selected Topics Circuits Syst. 8, 125–136. doi: 10.1109/JETCAS.2017.2777181

Chen, Y.-H., Krishna, T., Emer, J. S., and Sze, V. (2017). Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J. Solid-State Circuits 52, 127–138. doi: 10.1109/JSSC.2016.2616357

Chen, Y., Luo, T., Liu, S., Zhang, S., He, L., Wang, J., et al. (2014). “DaDianNao: a machine-learning supercomputer,” in 47th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) (Cambridge: IEEE/ACM), 609–622. doi: 10.1109/MICRO.2014.58

Davies, M., Srinivasa, N., Lin, T.-H., Chinya, G., Cao, Y., Choday, S. H., et al. (2018). Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99. doi: 10.1109/MM.2018.112130359

Deng, L., Wang, G., Li, G., Li, S., Liang, L., Zhu, M., et al. (2020). Tianjic: A unified and scalable chip bridging spike-based and continuous neural computation. IEEE J. Solid-State Circuits 55, 2228–2246. doi: 10.1109/JSSC.2020.2970709

Diehl, P. U., and Cook, M. (2015). Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front. Comput. Neurosci. 9:99. doi: 10.3389/fncom.2015.00099

Diehl, P. U., Neil, D., Binas, J., Cook, M., Liu, S.-C., and Pfeiffer, M. (2015). “Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing,” in International Joint Conference on Neural Networks (IJCNN) (Killarney: IEEE), 1–8. doi: 10.1109/IJCNN.2015.7280696

Furber, S. B., Galluppi, F., Temple, S., and Plana, L. A. (2014). The SpiNNaker project. Proc. IEEE 102, 652–665. doi: 10.1109/JPROC.2014.2304638

Gu, P., Xiao, R., Pan, G., and Tang, H. (2019). “STCA: Spatio-temporal credit assignment with delayed feedback in deep spiking neural networks,” in International Joint Conferences on Artificial Intelligence (IJCAI), 1366–1372. doi: 10.24963/ijcai.2019/189

He, W., Wu, Y., Deng, L., Li, G., Wang, H., Tian, Y., et al. (2020). Comparing SNNs and RNNs on neuromorphic vision datasets: similarities and differences. Neural Networks 132, 108–120. doi: 10.1016/j.neunet.2020.08.001

Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., et al. (2017). “In-datacenter performance analysis of a tensor processing unit,” in Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA) (Toronto, ON: ACM), 1–12. doi: 10.1145/3079856.3080246

Lee, J. H., Delbruck, T., and Pfeiffer, M. (2016). Training deep spiking neural networks using backpropagation. Front. Neurosci. 10:508. doi: 10.3389/fnins.2016.00508

Liang, L., Hu, X., Deng, L., Wu, Y., Li, G., Ding, Y., et al. (2020). Exploring adversarial attack in spiking neural networks with spike-compatible gradient. arXiv preprint arXiv:2001.01587.

Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673. doi: 10.1126/science.1254642

Mitchell, J. P., Dean, M. E., Bruer, G. R., Plank, J. S., and Rose, G. S. (2018). “DANNA 2: Dynamic adaptive neural network arrays,” in Proceedings of the International Conference on Neuromorphic Systems (ICONS) (Knoxville, TN: ACM), 1–6. doi: 10.1145/3229884.3229894

Moradi, S., Qiao, N., Stefanini, F., and Indiveri, G. (2017). A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs). IEEE Trans. Biomed. Circuits Syst. 12, 106–122. doi: 10.1109/TBCAS.2017.2759700

Pei, J., Deng, L., Song, S., Zhao, M., Zhang, Y., Wu, S., et al. (2019). Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature 572, 106–111. doi: 10.1038/s41586-019-1424-8

Rathi, N., Srinivasan, G., Panda, P., and Roy, K. (2020). “Enabling deep spiking neural networks with hybrid conversion and spike timing dependent backpropagation,” in International Conference on Learning Representations (ICLR).

Roy, K., Jaiswal, A., and Panda, P. (2019). Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617. doi: 10.1038/s41586-019-1677-2

Sengupta, A., Ye, Y., Wang, R., Liu, C., and Roy, K. (2019). Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 13:95. doi: 10.3389/fnins.2019.00095

Wu, Y., Deng, L., Li, G., Zhu, J., and Shi, L. (2018). Spatio-temporal backpropagation for training high-performance spiking neural networks. Front. Neurosci. 12:331. doi: 10.3389/fnins.2018.00331

Yin, S., Ouyang, P., Tang, S., Tu, F., Li, X., Zheng, S., et al. (2017). A high energy efficient reconfigurable hybrid neural network processor for deep learning applications. IEEE J. Solid-State Circuits 53, 968–982. doi: 10.1109/JSSC.2017.2778281

Zheng, H., Wu, Y., Deng, L., Hu, Y., and Li, G. (2021). “Going deeper with directly-trained larger spiking neural networks,” in AAAI Conference on Artificial Intelligence (AAAI) (AAAI Press).

Keywords: neuromorphic computing, spiking neural networks, machine learning, artificial neural networks, cross paradigm

Citation: Deng L, Tang H and Roy K (2021) Editorial: Understanding and Bridging the Gap Between Neuromorphic Computing and Machine Learning. Front. Comput. Neurosci. 15:665662. doi: 10.3389/fncom.2021.665662

Received: 08 February 2021; Accepted: 12 February 2021;
Published: 17 March 2021.

Edited and reviewed by: Si Wu, Peking University, China

Copyright © 2021 Deng, Tang and Roy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lei Deng, leideng@mail.tsinghua.edu.cn
