AUTHOR=Laversanne-Finot Adrien, Péré Alexandre, Oudeyer Pierre-Yves TITLE=Intrinsically Motivated Exploration of Learned Goal Spaces JOURNAL=Frontiers in Neurorobotics VOLUME=14 YEAR=2021 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2020.555271 DOI=10.3389/fnbot.2020.555271 ISSN=1662-5218 ABSTRACT=

Finding algorithms that allow agents to discover a wide variety of skills efficiently and autonomously remains a challenge of Artificial Intelligence. Intrinsically Motivated Goal Exploration Processes (IMGEPs) have been shown to enable real-world robots to learn repertoires of policies producing a wide range of diverse effects. They work by enabling agents to autonomously sample goals that they then try to achieve. In practice, this strategy leads to efficient exploration of complex environments with high-dimensional continuous actions. Until recently, it was necessary to provide the agents with an engineered goal space containing relevant features of the environment. In this article we show that the goal space can be learned using deep representation learning algorithms, effectively reducing the burden of designing goal spaces. Our results pave the way toward learning agents that can autonomously build a representation of the world and use this representation to explore the world efficiently. We present experiments in two environments using population-based IMGEPs. The first set of experiments is performed in a simple yet challenging simulated environment. Then, another set of experiments tests the applicability of those principles on a real-world robotic setup, where a 6-joint robotic arm learns to manipulate a ball inside an arena by choosing goals in a space learned from its past experience.
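To make the goal-sampling loop described in the abstract concrete, the following minimal Python sketch illustrates a population-based IMGEP operating in a learned goal space. The encoder, toy environment, and mutation operator here are illustrative placeholders introduced for this sketch, not the paper's actual implementation: the idea is only to show the cycle of sampling a goal, reusing the stored policy whose past outcome is closest, perturbing it, and adding the new (parameters, outcome) pair to the repertoire.

```python
# Hedged sketch of a population-based IMGEP with a learned goal space.
# The encoder, environment, and policy parameterization are stand-ins,
# not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, GOAL_DIM, POLICY_DIM = 10, 3, 8
PROJECTION = rng.normal(size=(GOAL_DIM, OBS_DIM))   # stands in for a trained encoder
MIXING = rng.normal(size=(OBS_DIM, POLICY_DIM))     # stands in for environment dynamics

def encode(observation):
    # Placeholder for a learned representation (e.g., a VAE encoder trained
    # on past observations); here a fixed linear projection.
    return PROJECTION @ observation

def execute_policy(theta):
    # Placeholder environment: maps policy parameters to a final observation.
    return np.tanh(MIXING @ theta)

# Bootstrap: a few random policies seed the repertoire of (params, outcome) pairs.
repertoire = []
for _ in range(20):
    theta = rng.normal(size=POLICY_DIM)
    repertoire.append((theta, encode(execute_policy(theta))))

# Goal exploration loop: sample a goal in the learned space, reuse and perturb
# the policy whose past outcome is closest to that goal, then store the result.
for _ in range(200):
    goal = rng.uniform(-1.0, 1.0, size=GOAL_DIM)
    nearest = min(repertoire, key=lambda entry: np.linalg.norm(entry[1] - goal))
    theta = nearest[0] + 0.1 * rng.normal(size=POLICY_DIM)   # small mutation
    outcome = encode(execute_policy(theta))
    repertoire.append((theta, outcome))

print(f"repertoire size: {len(repertoire)}")
```

In the paper's setting, the placeholder encoder would be replaced by a representation learned from the agent's own observations, so that goal sampling and nearest-outcome retrieval both operate in that learned space.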