ORIGINAL RESEARCH article
Front. Aerosp. Eng.
Sec. Intelligent Aerospace Systems
Volume 4 - 2025
doi: 10.3389/fpace.2025.1454832
This article is part of the Research Topic: Insights in Intelligent Aerospace Systems
Competency Self-Assessments for a Learning-based Autonomous Aircraft System
Provisionally accepted
1 University of Colorado Boulder, Boulder, United States
2 Draper Laboratory, Cambridge, Massachusetts, United States
Introduction: Future concepts for airborne autonomy point towards human operators moving out of the cockpit and into supervisory roles. Urban Air Mobility, airborne package delivery, and military Intelligence, Surveillance, and Reconnaissance (ISR) are all domains actively exploring this concept or already undergoing this transition. Supervisors of these systems will face many challenges, including platforms operating beyond visual range and the need to interpret complex sensor or telemetry data in order to make informed and safe decisions about the platforms and their mission. A central challenge in this new paradigm of non-co-located mission supervision is developing systems whose autonomy and internal decision-making processes are explainable and trustworthy.

Methods: Competency self-assessments are methods that use introspection to quantify and communicate important information about an autonomous system's capabilities and limitations to human supervisors. We first discuss a computational framework for competency self-assessment called Factorized Machine Self-Confidence (FaMSeC). Within this framework, we then define the Generalized Outcome Assessment (GOA) factor, which quantifies an autonomous system's ability to meet or exceed user-specified mission outcomes. As a relevant example, we develop a competency-aware, learning-based autonomous Uncrewed Aircraft System (UAS) and evaluate it within a multi-target ISR mission.

Results: We present an analysis of the computational cost and performance of GOA-based competency reporting. Our results show that our competency self-assessment method can capture changes in the UAS's ability to achieve mission-critical outcomes, and we discuss how this information can be easily communicated to human partners to inform decision-making.

Discussion: We believe that competency self-assessment can enable AI/ML transparency and provide assurances that calibrate human operators to their autonomous teammate's ability to meet mission goals. This in turn can lead to informed decision-making, appropriate trust in autonomy, and overall improvements in mission performance.
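To make concrete the kind of computation a GOA-style outcome assessment involves, the following is a minimal sketch under our own assumptions: it scores Monte Carlo rollout outcomes from the autonomous system's policy against a user-specified outcome threshold and maps the result into (0, 1). The function name goa_score, the softness parameter, and the logistic mapping are illustrative choices for this sketch, not the authors' implementation as published.

```python
import numpy as np

def goa_score(simulated_outcomes, outcome_threshold, softness=1.0):
    """Illustrative GOA-style outcome assessment (sketch, not the authors' exact method).

    Given simulated mission outcomes (e.g., cumulative reward from Monte Carlo
    rollouts), score how confidently the system expects to meet or exceed a
    user-specified outcome threshold. Returns a value in (0, 1): near 1 means
    outcomes comfortably exceed the threshold, near 0 means they fall well short.
    """
    outcomes = np.asarray(simulated_outcomes, dtype=float)
    # Average margin above vs. below the user-specified threshold.
    upside = np.mean(np.maximum(outcomes - outcome_threshold, 0.0))
    downside = np.mean(np.maximum(outcome_threshold - outcomes, 0.0))
    # Squash the upside/downside balance into (0, 1) with a logistic map.
    return 1.0 / (1.0 + np.exp(-(upside - downside) / softness))

# Hypothetical usage: 1000 simulated mission rewards, operator requires >= 50.
rollouts = np.random.normal(loc=60.0, scale=15.0, size=1000)
print(f"GOA-style competency score: {goa_score(rollouts, outcome_threshold=50.0):.2f}")
```

A score computed this way could be reported to a supervisor alongside the outcome threshold it refers to, so the operator can judge whether the platform's expected performance warrants continuing, re-tasking, or intervening.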
Keywords: Machine Self-Confidence, Human-Autonomy Teaming, Intelligent Aerospace Systems, Trustworthy AI, Uncrewed Aerial Vehicles
Received: 25 Jun 2024; Accepted: 02 Jan 2025.
Copyright: © 2025 Conlon, Acharya, Mcginley, Slack, Hirst, D'Alonzo, Hebert, Reale, Frew, Russell and Ahmed. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Nicholas Conlon, University of Colorado Boulder, Boulder, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.