Advances in deep learning have led to artificial intelligence algorithms with human-level performance across a wide range of applications. However, these algorithms primarily solve specific, isolated tasks and seldom generalize robustly over extended time periods: distributional shifts in the input data make continued model training necessary to maintain generalization. Continual machine learning is a relatively new subfield whose goal is to develop autonomous agents that learn continuously and adaptively, accumulating the skills needed for increasingly complex tasks without forgetting what was learned before. This Research Topic collection focuses on advances in continual learning over the past few years, new models and algorithms, and the challenges that must be addressed for future developments.
Research interest in continual learning has grown significantly in the past few years; however, the vast majority of recent work has been published in general AI venues. This Research Topic aims to advance continual learning research by gathering the latest ideas and algorithms in a specialized venue. We accept original research articles, technology and code, data reports, as well as reviews, perspectives, and opinions.
The topics of interest include but are not limited to:
• autonomous learning
• self-driving cars
• distributed continual learning
• incremental class learning
• brain-inspired continual learning
• learning drifting concepts
• self-supervised learning
• continual meta-learning
• continual learning in robotics
• dataset distillation and model distillation
• new datasets and benchmarks for continual learning
• continual learning for large-scale models
• real-world applications of continual learning.