
EDITORIAL article

Front. Robot. AI

Sec. Human-Robot Interaction

Volume 12 - 2025 | doi: 10.3389/frobt.2025.1583911

This article is part of the Research Topic Failures and Repairs in Human-Robot Communication View all 5 articles

Editorial: Failures and Repairs in Human-Robot Communication

Provisionally accepted
  • University of Hertfordshire, Hatfield, United Kingdom

The final, formatted version of the article will be published soon.

    This research topic arose on the back of the WTF workshop series (Förster et al., 2022, 2023a) that brought together an interdisciplinary group of researchers, ranging from roboticists and computational linguists to conversation analysts and cognitive scientists, to openly and frankly discuss failures of (robotic) speech interfaces they experienced when deploying these in their studies. Some of the issues discussed in the workshops are elaborated in the contributed articles below; more pointers can be found in the workshop summary article by Förster et al. (2023b).

    This research topic contributes towards two main objectives: Firstly, we provide a platform for reporting commonly occurring communicative failures in human-robot interaction (HRI). Secondly, this topic aims

    Galbraith (2024) investigates how virtual assistants deal with the interactionally highly relevant and frequent 'huh?', an other-initiated, and likely universal, repair marker (cf. Dingemanse et al., 2015). They further investigate what repair strategies these assistants utilise when encountering unintelligible speech, and how native speakers judge these different repair strategies. In their study, two different virtual assistants, Google Assistant and Apple's Siri, are compared across two different languages (English and Spanish). Galbraith finds that neither assistant actively produces 'huh?' but rather employs more specific repair strategies when confronted with unintelligible speech. The assistants frequently have trouble dealing with a 'huh?' produced by human users, and some of the repair strategies employed by the two assistants were rated negatively by human judges. While these insights were gained by interacting with virtual assistants, we expect some of these to apply to spoken dialogue systems (SDS) more generally (cf. Lopez et al., 2022).

    Keywords: human-robot interaction, Conversation analysis (CA), dialogue systems, Spoken dialogue systems, Speech Interfaces, Failures, conversational breakdown

    Received: 26 Feb 2025; Accepted: 28 Feb 2025.

    Copyright: © 2025 Förster and Holthaus. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Frank Förster, University of Hertfordshire, Hatfield, United Kingdom

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
