
ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Machine Learning and Artificial Intelligence
Volume 8 - 2025 | doi: 10.3389/frai.2025.1440582
This article is part of the Research Topic Hybrid Human Artificial Intelligence: Augmenting Human Intelligence with AI.
The final, formatted version of the article will be published soon.
The open nature of Open Multi-Agent Systems (OMAS) precludes hardwiring all communication protocols in advance. It is therefore essential that agents can incrementally learn to understand each other, ideally under a minimal number of a priori assumptions so as not to compromise the openness of the system. This challenge becomes even harder in hybrid populations, i.e., populations containing both human and artificial agents, where learning to communicate must additionally be achieved in a minimal number of interactions with the humans involved. The difficulty lies in the tension between minimizing assumptions and minimizing the number of interactions required. This study provides a fine-grained analysis of the process of establishing a shared task-oriented understanding in OMAS, with a particular focus on hybrid populations. We present a framework that describes this process of reaching a shared task-oriented understanding. The framework defines components that reflect decisions the agent designer needs to make, and we show how these components are affected when the agent population includes humans, i.e., when moving to a hybrid setting. The contribution of this paper is not yet another method by which agents learn to communicate; rather, our goal is to provide a framework that assists researchers in designing agents that must interact with humans in unforeseen scenarios. We validate the framework by showing that it provides a uniform way to analyze a diverse set of existing approaches from the literature for establishing shared understanding between agents. Our analysis reveals limitations of these approaches if they were applied in hybrid populations and suggests how these limitations can be resolved.
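As a concrete illustration of what "incrementally learning to understand each other" can mean under minimal a priori assumptions, consider a toy naming game in the style of Steels. This is a generic sketch, not the framework proposed in this article, and all identifiers below (object names, word format, parameters) are hypothetical: two agents repeatedly name task objects and adopt each other's words on failure, converging to a shared lexicon.

```python
import random

# Illustrative naming-game sketch (NOT the authors' framework).
# Two agents establish a shared object->word lexicon purely through
# repeated interaction, with no pre-agreed vocabulary.

OBJECTS = ["cup", "box", "lamp"]  # hypothetical task objects

class Agent:
    def __init__(self):
        self.lexicon = {}  # object -> preferred word

    def speak(self, obj):
        # Invent a random word for an object the agent has no word for yet.
        if obj not in self.lexicon:
            self.lexicon[obj] = "w" + str(random.randint(0, 9999))
        return self.lexicon[obj]

    def listen(self, obj, word):
        # On a mismatch, adopt the speaker's word; a richer model would
        # also score and prune competing words rather than overwrite.
        success = self.lexicon.get(obj) == word
        if not success:
            self.lexicon[obj] = word
        return success

def play(rounds=200, seed=0):
    random.seed(seed)
    a, b = Agent(), Agent()
    for _ in range(rounds):
        speaker, hearer = random.sample([a, b], 2)
        obj = random.choice(OBJECTS)
        hearer.listen(obj, speaker.speak(obj))
    # With enough interactions, the two lexicons converge.
    return a.lexicon == b.lexicon

if __name__ == "__main__":
    print("converged:", play())
```

The adopt-on-failure rule keeps assumptions minimal but needs many interactions to converge; this is exactly the tension the article identifies, since a hybrid setting cannot afford many interactions with the humans involved.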
Keywords: human-agent communication, task-oriented understanding establishment, hybrid open multi-agent systems, shared understanding, human-agent collaboration
Received: 29 May 2024; Accepted: 17 Mar 2025.
Copyright: © 2025 Kondylidis, Tiddi and ten Teije. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Nikolaos Kondylidis, VU Amsterdam, Amsterdam, Netherlands
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.