AUTHOR=Taniguchi Akira, Taniguchi Tadahiro, Cangelosi Angelo TITLE=Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots JOURNAL=Frontiers in Neurorobotics VOLUME=11 YEAR=2017 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2017.00066 DOI=10.3389/fnbot.2017.00066 ISSN=1662-5218 ABSTRACT=
In this paper, we propose a Bayesian generative model that can form multiple categories from each sensory channel and can associate words with any of four sensory channels (action, position, object, and color). The paper focuses on cross-situational learning that exploits the co-occurrence between words and sensory-channel information in complex situations, rather than the simpler situations assumed in conventional cross-situational learning. We conducted a learning scenario using both a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided the robot with a sentence describing the object of visual attention and an accompanying action. The scenario was set up as follows: the number of words per sensory channel was three or four, and the number of learning trials was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method accurately estimated the multiple categorizations and learned the relationships between the sensory channels and words. In addition, we conducted an action generation task and an action description task based on the word meanings learned in the cross-situational learning scenario. The results showed that the robot could successfully use the word meanings acquired with the proposed method.
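The core mechanism the abstract describes is inferring, from word and situation co-occurrence across many trials, which sensory channel a word refers to. The following is a minimal sketch of that idea, not the paper's actual Bayesian model: a Dirichlet-smoothed co-occurrence counter in Python, with a hypothetical vocabulary, illustrative category assignments, and assumed hyperparameter values.

```python
import numpy as np

# Minimal, hypothetical sketch of cross-situational learning:
# each trial pairs a sentence (a bag of words) with one observed
# category index per sensory channel; word-channel associations
# emerge from Dirichlet-smoothed co-occurrence counts. Vocabulary,
# category indices, and hyperparameters here are assumptions for
# illustration, not values from the paper.

CHANNELS = ["action", "position", "object", "color"]
VOCAB = ["grasp", "push", "left", "right", "ball", "box", "red", "blue"]
K = 4            # assumed number of categories per channel
ALPHA = 0.1      # assumed Dirichlet smoothing hyperparameter

rng = np.random.default_rng(0)

# counts[c][w, k]: how often word w co-occurred with category k on channel c
counts = {c: np.full((len(VOCAB), K), ALPHA) for c in CHANNELS}

def observe(words, situation):
    """One learning trial: a tutor sentence plus the category
    active on each sensory channel in the current situation."""
    for w in words:
        wi = VOCAB.index(w)
        for c, k in situation.items():
            counts[c][wi, k] += 1

# Simulated trials: 'grasp'/'push' track the action category, while
# 'red'/'blue' track the color category; other channels vary freely.
action_word = {0: "grasp", 1: "push"}
color_word = {2: "red", 3: "blue"}
for _ in range(40):
    a, col = rng.integers(2), 2 + rng.integers(2)
    observe([action_word[a], color_word[col]],
            {"action": a, "position": int(rng.integers(K)),
             "object": int(rng.integers(K)), "color": col})

def word_channel(w):
    """Assign a word to the channel whose category distribution,
    given the word, is most peaked (lowest entropy)."""
    wi = VOCAB.index(w)
    entropies = {}
    for c in CHANNELS:
        p = counts[c][wi] / counts[c][wi].sum()
        entropies[c] = -(p * np.log(p)).sum()
    return min(entropies, key=entropies.get)

print(word_channel("grasp"))  # -> 'action'
print(word_channel("red"))    # -> 'color'
```

A full treatment of the kind the abstract describes would place priors over the per-channel category assignments themselves and infer categories and word associations jointly; the entropy heuristic above merely stands in for that inference to show why co-occurrence across varied situations is sufficient to disambiguate word meanings.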