About this Research Topic
Artificial Intelligence (AI) is improving at an impressive pace. Every year brings sophisticated robots and powerful algorithms capable of increasingly complex tasks. These systems not only perform complex and lengthy tasks, but can also discover better, or entirely new, ways to support human activity and problem solving. New technologies, such as self-driving cars and sophisticated image and speech recognition systems, have achieved significant breakthroughs, and their impact on human activities is expected to grow rapidly in the near future.
However complex their behavior, current artificial agents remain limited in autonomy and versatility compared with what biological agents are capable of. This lack of autonomy prevents present robots and systems from fully succeeding in realistic environments, where they must face situations unknown at design time, where learning needs to be multi-task, incremental, and online, and where new goals and tasks have to be discovered and solved autonomously.
Over the last decade, intrinsically motivated learning (sometimes called "curiosity-driven learning") has been studied by many researchers as an approach to autonomous lifelong learning in machines. Intrinsic motivations are inspired by the human ability to discover how to produce "interesting" effects on the environment, driven by self-generated motivational signals not tied to specific tasks or instructions. Research in this field aims to develop agents that, guided by intrinsic motivations, acquire repertoires of diverse skills that are likely to become useful later, when specific tasks need to be performed.
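To make the idea concrete, one common way to operationalize such a self-generated signal is to reward the agent in proportion to the error of a learned forward model: transitions the agent cannot yet predict are "interesting" and attract further exploration. The following is a minimal sketch in Python/NumPy under that assumption; the names (ForwardModel, intrinsic_reward) and the linear model are purely illustrative, not a reference implementation of any particular system.

```python
import numpy as np

class ForwardModel:
    """Toy linear forward model: predicts the next state from (state, action)."""
    def __init__(self, state_dim, action_dim, lr=1e-2):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def update(self, state, action, next_state):
        # One gradient step on the squared prediction error; returns the error.
        x = np.concatenate([state, action])
        error = next_state - self.W @ x
        self.W += self.lr * np.outer(error, x)
        return error

def intrinsic_reward(model, state, action, next_state):
    # The reward is the prediction error itself: it is task-agnostic and
    # shrinks as the model improves, pushing the agent toward novelty.
    return float(np.linalg.norm(model.update(state, action, next_state)))
```

Crucially, a signal of this kind is defined without reference to any external task, which is what allows the skills acquired under its guidance to be reused later.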
Advancing our knowledge in this direction is currently at the forefront of AI research. Indeed, the most impressive AI outcomes, especially those based on deep neural networks, mainly involve perception: notwithstanding the initial successes of approaches such as deep reinforcement learning and end-to-end robot control, the capability and autonomy of intelligent artificial agents on the side of action and motivation are still in their infancy. Research on intrinsically motivated open-ended learning now has an unprecedented opportunity to substantially advance the state of the art.
Progress can be made in the autonomous formation of goals; in the development of parameterized skills able to solve multiple tasks and to transfer and generalize knowledge between them; and in the composition and organization of multiple goals and skills into hierarchies able to solve more complex problems.
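One common reading of "parameterized skill" is a single policy conditioned on a goal vector, so that one set of parameters covers a continuum of related tasks and knowledge transfers between them through the shared weights. The sketch below is a hypothetical minimal example in the same Python/NumPy style; GoalConditionedPolicy and its linear form are assumptions made for illustration only.

```python
import numpy as np

class GoalConditionedPolicy:
    """Toy parameterized skill: one linear policy reused across many goals."""
    def __init__(self, state_dim, goal_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim + goal_dim))

    def act(self, state, goal):
        # The same weights produce different behavior for different goals,
        # so improving them on one goal can generalize to nearby goals.
        return np.tanh(self.W @ np.concatenate([state, goal]))

# Usage: one skill instance steered toward two different goals.
policy = GoalConditionedPolicy(state_dim=4, goal_dim=2, action_dim=2)
state = np.zeros(4)
a1 = policy.act(state, np.array([1.0, 0.0]))
a2 = policy.act(state, np.array([0.0, 1.0]))
```

Hierarchies of the kind mentioned above can then be pictured as higher-level policies that select goals for such skills rather than emitting low-level actions directly.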
The goal of this Research Topic is to bring together researchers working on different issues related to intrinsic motivations and open-ended learning, including but not limited to:
- Lifelong learning in autonomous robots
- Multi-task reinforcement learning
- Deep reinforcement learning
- Intrinsic motivations
- Curriculum learning
- Goal self-generation
- Multi-task solutions and parameterized skills
- Neural/probabilistic representations and abstractions
- Architectures for open-ended learning
- Goal-based skill learning
- Knowledge transfer and avoidance of catastrophic forgetting
- Compositionality and chunking
- Hierarchies of goals and skills
- Mitigating risks of real-world deployment of open-ended learning systems
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.