AUTHOR=Bütepage Judith, Ghadirzadeh Ali, Öztimur Karadağ Özge, Björkman Mårten, Kragic Danica TITLE=Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks JOURNAL=Frontiers in Robotics and AI VOLUME=7 YEAR=2020 URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2020.00047 DOI=10.3389/frobt.2020.00047 ISSN=2296-9144 ABSTRACT=

Coordinating actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood, mostly through imitation learning and active engagement with a skilled partner. These skills require the ability to predict and adapt to one's partner during an interaction. In this work, we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from regression to the mean, our technical contribution is a novel probabilistic latent variable model that predicts in latent space rather than in joint space. To test the proposed method, we collect human-human and human-robot interaction data for four interactive tasks: "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components, as well as low-level abstractions, for successfully learning to imitate human behavior in interactive social tasks.
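
The abstract describes motion embeddings for both agents, prediction of the human partner in latent rather than joint space, and generation of matching robot trajectories. The snippet below is a minimal sketch of that general idea, not the authors' implementation: it assumes VAE-style embeddings and a simple feed-forward latent predictor, and all class names, dimensions, and the decoding step are illustrative assumptions.

```python
# Hypothetical sketch of latent-space motion prediction (not the paper's code).
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Embeds a vector of joint angles into a low-dimensional latent space."""
    def __init__(self, joint_dim, latent_dim=8, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(joint_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, joint_dim)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

class LatentPredictor(nn.Module):
    """Predicts the human partner's next latent state from the current latent
    states of both agents; prediction happens in latent space, not joint space."""
    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )

    def forward(self, z_human, z_robot):
        return self.net(torch.cat([z_human, z_robot], dim=-1))

# Usage sketch: embed human and robot poses, predict the human's next latent
# state, then decode a robot joint command with the robot-side decoder.
human_vae, robot_vae = MotionVAE(joint_dim=15), MotionVAE(joint_dim=7)
predictor = LatentPredictor()
x_human, x_robot = torch.randn(1, 15), torch.randn(1, 7)
z_human, _ = human_vae.encode(x_human)   # use the posterior mean as the embedding
z_robot, _ = robot_vae.encode(x_robot)
z_human_next = predictor(z_human, z_robot)  # latent-space prediction
robot_command = robot_vae.decoder(z_robot)  # decode robot joint trajectory step
```

Predicting in the latent space is what mitigates regression to the mean here: the decoder maps any predicted latent code back onto the manifold of plausible motions, whereas averaging directly in joint space tends to produce blurred, implausible poses.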