Recent rapid progress in deep learning and large language models (LLMs) has driven remarkable growth across AI tasks, including speech, image, and natural language processing. However, training neural network models is computationally expensive, and deploying pre-trained LLMs for inference generates further significant carbon emissions. Energy consumption has therefore become a fundamental limit of existing AI compute hardware built on central processing units (CPUs) and graphics processing units (GPUs). It is thus of vital importance that future AI systems adopt innovative approaches to neuromorphic computing with high throughput, low latency, and improved energy efficiency.
The aim of this research topic is to address this energy efficiency limitation through fundamental breakthroughs in algorithms and device technologies that enable a paradigm shift in neuromorphic computing systems. On one hand, existing LLMs are built on large transformer-based deep neural network (DNN) models, which require intensive compute operations due to their large fully connected (FC) layers and self-attention blocks. Algorithm innovations that introduce effective encoding schemes beyond conventional transformer-based DNN models open a pathway to high accuracy on AI tasks with far less energy consumption. On the other hand, the hardware platforms currently used for training DNN models depend heavily on general-purpose processors (CPUs and GPUs) that are not optimized for AI workloads. Emerging hardware such as non-volatile memory-based crossbar arrays has proven effective in reducing data movement between processing and memory units, allowing high-throughput, energy-efficient implementation of AI model training and inference. Further innovations in processing-in-memory with emerging hardware promise to break the so-called von Neumann bottleneck and fully unleash the potential of highly energy-efficient neuromorphic computing for future sustainable AI.
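To make the crossbar idea concrete, the sketch below simulates an analog in-memory matrix-vector multiply in pure Python: each weight is mapped to a device conductance, Gaussian noise stands in for programming/device variability, and currents are summed along columns as in Ohm's and Kirchhoff's laws. All names, the noise model, and the parameter values are illustrative assumptions, not a specific device technology.

```python
import random

def crossbar_mvm(weights, x, g_max=1.0, noise_sigma=0.02, seed=0):
    """Toy simulation of a matrix-vector multiply on a resistive crossbar.

    Illustrative sketch only: weights are linearly mapped to conductances
    up to g_max, and Gaussian noise (noise_sigma) models conductance
    variation after programming. Real devices add many more effects
    (nonlinearity, drift, read noise, ADC quantization).
    """
    rng = random.Random(seed)
    w_abs_max = max(abs(w) for row in weights for w in row) or 1.0
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            g = (w / w_abs_max) * g_max               # weight -> conductance
            g_noisy = g + rng.gauss(0.0, noise_sigma)  # device variability
            acc += g_noisy * xi                        # current summed on the column
        out.append(acc * w_abs_max / g_max)            # rescale back to weight units
    return out

W = [[0.5, -0.2], [0.1, 0.8]]
x = [1.0, 2.0]
print(crossbar_mvm(W, x))  # close to the exact product [0.1, 1.7]
```

The key point the sketch illustrates is that the multiply-accumulate happens where the weights are stored, so no weight data moves to a separate processor, at the cost of analog noise in the result.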
This research topic focuses on innovations in algorithms and device technology for energy-efficient neuromorphic computing. Pure device physics and pure algorithm theory are outside its scope, but we encourage submissions that apply novel algorithms or device approaches to achieve substantial improvements over existing benchmarks. A few specific themes are outlined below; submissions are not limited to these examples as long as the article falls within the scope of the research topic.
- Efficient inference at the edge: in-memory computing using non-volatile memory
- Transfer learning on emerging hardware beyond digital approaches
- Novel algorithm innovations for processing-in-memory accelerators
- Device-algorithm co-optimization for energy-efficient neuromorphic computing
- Design-technology co-optimization for neuromorphic computing using non-volatile memory
Keywords:
Non-volatile memory, emerging hardware, energy efficiency, neuromorphic computing, analog AI, in-memory computing, processing-in-memory, matrix-vector multiplication, machine learning, phase change memory, RRAM, ferroelectric memory, flash memory
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.