In recent years, the rapid growth of publicly available data and computing power has led deep neural network (DNN) models to be used increasingly in visual signal processing tasks such as image classification, object detection, and target tracking. To achieve higher performance and stronger learning ability, ever-larger model architectures have been designed that set new performance records across a range of tasks. Although these large models achieve superior performance, they demand substantial computing power and are highly complex. In intelligent robot systems, such large models hinder high-performance deployment on edge platforms and cannot satisfy real-time operating requirements.
To address these challenges and promote the adoption of neural networks in intelligent robot systems, a growing number of researchers have begun studying efficient DNN models with faster inference and training. Technology that can improve key DNN indicators (such as energy efficiency, throughput, and latency) without sacrificing accuracy or increasing hardware costs is therefore essential for the widespread deployment of DNNs in intelligent robot systems.
For broader impact, we aim to disseminate the outcomes and products of this topic to a wide range of communities, helping both our peers and non-expert readers understand the design of highly efficient DNNs. To this end, both theoretical and applied results, spanning algorithms and applications, are welcome. This special issue offers a focused venue for researchers to rapidly exchange ideas and original research findings on lightweight DNNs, acceleration algorithms, and robotic system applications. We invite authors to submit previously unpublished manuscripts relevant to the topics of this special issue.
Topics of interest include, but are not limited to:
1. Structure design of lightweight DNNs
2. Parameter pruning and sharing in DNNs
3. Pruning and thinning in DNNs
4. Quantization of DNNs
5. Knowledge distillation of DNNs
6. Low-rank decomposition of DNNs
7. Efficient image processing based on DNNs in intelligent robot systems
8. Efficient pattern recognition based on DNNs in intelligent robot systems
9. Efficient visual navigation based on DNNs in intelligent robot systems
10. Efficient speech processing based on DNNs in intelligent robot systems