AUTHOR=Hagiwara Yoshinobu, Inoue Masakazu, Kobayashi Hiroyoshi, Taniguchi Tadahiro TITLE=Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots JOURNAL=Frontiers in Neurorobotics VOLUME=12 YEAR=2018 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2018.00011 DOI=10.3389/fnbot.2018.00011 ISSN=1662-5218 ABSTRACT=
In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans can select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results from a convolutional neural network (CNN), the hierarchical k-means clustering of self-positions estimated by Monte Carlo localization (MCL), and a set of location names are used as the vision, position, and word features, respectively. Experiments on forming hierarchical spatial concepts and on predicting unobserved location names and position categories are performed using a robot in the real world. The results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to those given by humans. As an application example in a home environment, we demonstrate that a human support robot can move to a place instructed by human speech, based on the formed hierarchical spatial concepts.
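The position-feature step described above lends itself to a short illustration. The following is a minimal sketch, not taken from the paper, of two-level hierarchical k-means over 2-D self-position estimates such as those produced by Monte Carlo localization; the synthetic data, cluster counts, and use of scikit-learn are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): two-level
# hierarchical k-means over 2-D self-positions, yielding discrete coarse/fine
# position categories usable as features for a multimodal model such as hMLDA.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic (x, y) self-positions around a few "places" in a home-like map.
positions = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(100, 2))
    for c in [(0.0, 0.0), (5.0, 0.5), (5.5, 4.0), (0.5, 4.5)]
])

n_coarse, n_fine = 2, 2  # assumed cluster counts for illustration

# Level 1: coarse place categories (e.g., rooms).
coarse = KMeans(n_clusters=n_coarse, n_init=10, random_state=0).fit(positions)

# Level 2: sub-cluster each coarse category (e.g., spots within a room).
fine_labels = np.empty(len(positions), dtype=int)
for k in range(n_coarse):
    idx = np.where(coarse.labels_ == k)[0]
    sub = KMeans(n_clusters=n_fine, n_init=10, random_state=0).fit(positions[idx])
    fine_labels[idx] = k * n_fine + sub.labels_

# Each position now carries a (coarse, fine) index pair, i.e., a hierarchical
# discrete position feature.
print(coarse.labels_[:5], fine_labels[:5])
```

In such a scheme, the coarse labels would correspond to the upper level of the spatial hierarchy (e.g., “home” or a room) and the fine labels to the lower level (e.g., “in front of the table”), with cluster counts chosen per environment rather than fixed as here.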