
ORIGINAL RESEARCH article
Front. Robot. AI
Sec. Robot Design
Volume 12 - 2025 | doi: 10.3389/frobt.2025.1546765
This article is part of the Research Topic Innovative Methods in Social Robot Behavior Generation.
The final, formatted version of the article will be published soon.
Generating natural and expressive co-speech gestures for conversational virtual agents and social robots remains a persistent challenge, and is crucial for improving their acceptability and usage in real-world contexts. The strong cultural and linguistic influences on co-speech gestures further complicate the task, underscoring the limited availability of cross-cultural co-speech gesture datasets. This study introduces a novel dataset, the TED-Culture Dataset, derived from TED talks and focused on cross-cultural gesture generation from linguistic cues. The proposed generative model, based on the Stable Diffusion Model, surpasses state-of-the-art baselines on the TED-Expressive Dataset and converges rapidly on several languages within the TED-Culture Dataset, specifically Indonesian, Japanese, and Italian. The approach has been implemented on the NAO robot, demonstrating its ability to produce contextually appropriate gestures in real time. Results indicate improvements in the naturalness and communicative effectiveness of the generated gestures, validated through both objective and subjective evaluations. The results also show that individuals are more critical of co-speech gestures in their native language and expect higher performance from generative models in that context. By releasing the dataset, we enable further research on multilingual co-speech gesture generation for embodied conversational agents.
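As a rough illustration of the kind of pipeline the abstract describes, the sketch below shows a minimal diffusion-based co-speech gesture generator in PyTorch: a denoiser predicts the noise added to a joint-angle sequence, conditioned on an embedding of the speech transcript, and DDPM-style ancestral sampling recovers a gesture trajectory. Everything here (module names, tensor sizes, the GRU backbone, the number of diffusion steps) is an assumption for illustration only; the paper's actual architecture and training details are not specified in this abstract.

    # Illustrative sketch only, not the authors' implementation.
    import torch
    import torch.nn as nn

    POSE_DIM, SEQ_LEN, TEXT_DIM, STEPS = 27, 34, 768, 50  # hypothetical sizes

    class GestureDenoiser(nn.Module):
        def __init__(self):
            super().__init__()
            self.cond = nn.Linear(TEXT_DIM, 128)   # project transcript embedding
            self.step = nn.Embedding(STEPS, 128)   # diffusion-step embedding
            self.net = nn.GRU(POSE_DIM + 128, 256, batch_first=True)
            self.out = nn.Linear(256, POSE_DIM)    # predict the added noise

        def forward(self, noisy_poses, text_emb, t):
            # Broadcast the conditioning signal over time, denoise per frame.
            c = (self.cond(text_emb) + self.step(t)).unsqueeze(1)
            c = c.expand(-1, noisy_poses.size(1), -1)
            h, _ = self.net(torch.cat([noisy_poses, c], dim=-1))
            return self.out(h)

    @torch.no_grad()
    def sample(model, text_emb, betas):
        # Standard DDPM ancestral sampling over a gesture sequence.
        alphas = 1.0 - betas
        abar = torch.cumprod(alphas, dim=0)
        x = torch.randn(text_emb.size(0), SEQ_LEN, POSE_DIM)
        for t in reversed(range(STEPS)):
            tt = torch.full((x.size(0),), t, dtype=torch.long)
            eps = model(x, text_emb, tt)
            x = (x - betas[t] / torch.sqrt(1 - abar[t]) * eps) / torch.sqrt(alphas[t])
            if t > 0:
                x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
        return x  # (batch, SEQ_LEN, POSE_DIM) joint-angle trajectory

    model = GestureDenoiser()
    betas = torch.linspace(1e-4, 0.02, STEPS)
    poses = sample(model, torch.randn(1, TEXT_DIM), betas)  # untrained: random motion

On a physical robot such as the NAO, the sampled joint-angle trajectory would additionally need to be retargeted to the robot's joint limits and streamed to its motion API in real time; that deployment step is omitted from this sketch.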
Keywords: co-speech gesture generation, human-robot interaction, social agents, virtual avatar, humanoid robot
Received: 17 Dec 2024; Accepted: 14 Mar 2025.
Copyright: © 2025 Shen and Johal. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yixin Shen, The University of Melbourne, Parkville, Australia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.