Competition against adversarial effects often gives rise to complex and interesting cooperative behaviors, as exemplified by many biological collectives observed in nature and by strategies in team sports. Furthermore, as the deployment of large numbers of robots becomes increasingly practical, control strategies for autonomous teams and swarms of robots operating against adversarial effects have become particularly important in various civilian and military applications. To account for the complexity of the problem, it is imperative to integrate diverse techniques at the intersection of several fields, including multi-agent systems, game theory, machine learning, and control theory.
The complexity associated with control and decision making in multi-agent systems is further exacerbated when the systems must operate in adversarial environments. This Research Topic aims to highlight work that addresses this complexity at different levels: problem formulation, methodology, or application. More specifically, we seek formulations that capture the adversarial nature of the problem, lead to tractable analysis, and scale to large groups. A contribution may also lie in a new solution approach that addresses these complexity issues. We are likewise interested in seeing how existing methods can be applied to new scenarios that highlight teaming behaviors in adversarial environments.
The areas/topics of interest include, but are not limited to, the following:
· Models and tools for risk assessment
· Risk-aware motion planning and control
· Reliability and safety in autonomous systems
· Fault-tolerant control design
· Robust and adversarial learning
· Multi-player games
· Dynamic games
· Deception
· Resiliency in multi-agent systems
· Attacks on sensing/communication/mobility
· Effect of uncertainties / value of information