Deep learning was featured among MIT Technology Review's top 10 technology breakthroughs of 2013, and it is currently an active research area in academic, industrial, and government labs. Deep neural networks have enabled state-of-the-art machine learning systems to approach human-level performance in difficult machine perception tasks such as speech, language, and visual perception. Given the prominence of this topic and its overlap with many areas of neuromorphic engineering, it is ideally suited to a timely Research Topic, one that highlights the technical strengths of the neuromorphic community. We will focus on hardware and neuromorphic accelerators, a focus that plays to our strengths and makes this Research Topic highly relevant to Frontiers in Neuromorphic Engineering.
This Research Topic will focus on implementation aspects of deep learning architectures. Of particular interest are efficient online learning algorithms implemented in hardware. State-of-the-art deep convolutional neural networks achieve remarkable performance by using the back-propagation algorithm to fine-tune the weights of the connections between layers, but storing the billions of meticulously tuned weights found in such networks is the main bottleneck for hardware implementations. More importantly, training these networks is very time consuming. Efficient online learning algorithms are therefore vital for deploying deep learning on dedicated hardware, and papers on this aspect will be prioritised.
The most widely used deep learning models are convolutional neural networks, deep belief networks, and deep networks built from stacked auto-encoders. We will also invite papers that realise such algorithms in custom hardware using state-of-the-art IC design techniques, with power, speed, and complexity optimised for these applications. Lastly, since typical deep learning architectures are inspired by biology without being strictly neurophysiologically plausible, we also encourage papers that attempt to close the loop between biology, algorithms, and hardware.
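To make the notion of online learning concrete, the sketch below shows a per-sample back-propagation update for a tiny two-layer network. It is purely illustrative: the network size, learning rate, and toy target are our own assumptions, not part of any submission requirement. The key property is that each incoming sample updates the weights immediately, so no training batch needs to be stored, which is what makes such rules attractive for on-chip learning.

```python
import numpy as np

# Illustrative sketch (assumed example, not a reference implementation):
# online back-propagation for a tiny two-layer network. Each sample
# updates the weights immediately, so no batch of training data is stored.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 16, 2          # hypothetical network sizes
W1 = rng.normal(0, 0.1, (n_hidden, n_in))  # input-to-hidden weights
W2 = rng.normal(0, 0.1, (n_out, n_hidden)) # hidden-to-output weights
lr = 0.01                                  # learning rate

def online_step(x, target):
    """One forward/backward pass on a single sample (online SGD)."""
    global W1, W2
    h = np.tanh(W1 @ x)                    # hidden activation
    y = W2 @ h                             # linear output
    err = y - target                       # output error
    # Back-propagate the error through both layers.
    delta_h = (W2.T @ err) * (1.0 - h**2)  # tanh derivative
    W2 -= lr * np.outer(err, h)            # update immediately, per sample
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * float(err @ err)          # squared-error loss, for monitoring

# Stream of samples: the weights adapt as data arrives.
for t in range(1000):
    x = rng.normal(size=n_in)
    target = np.array([x.sum(), x[0] - x[1]])  # toy regression target
    loss = online_step(x, target)
```

In a hardware realisation, the same per-sample structure would allow weight memory to be updated in place as data streams in from a sensor, rather than being held fixed while a batch accumulates.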
Relevant topics include:
• Non-spiking hardware implementations of deep learning (analogue, digital, or mixed-signal);
• Spiking hardware implementations of deep learning (analogue, digital, or mixed-signal);
• Neuromorphic sensors combined with deep learning neural networks;
• Hardware models of biologically inspired learning, such as learning with cortical circuits.
The resulting collection of original research articles, reviews, and commentaries will serve as a reference for deep learning in neuromorphic systems, fostering research progress through discussion and new collaborations among researchers in our community.