REVIEW article
Front. Big Data
Sec. Machine Learning and Artificial Intelligence
Volume 7 - 2024
doi: 10.3389/fdata.2024.1467222
This article is part of the Research Topic "Towards Fair AI for Trustworthy Artificial Intelligence".
Establishing and Evaluating Trustworthy AI: Overview and Research Challenges
Provisionally accepted
- 1 Graz University of Technology, Graz, Styria, Austria
- 2 Know Center, Graz, Austria
- 3 University of Graz, Graz, Styria, Austria
- 4 SGS Digital Trust Services GmbH, Graz, Styria, Austria
Artificial intelligence (AI) technologies (re-)shape modern life, driving innovation in a wide range of sectors. However, some AI systems have yielded unexpected or undesirable outcomes or have been used in questionable ways. As a result, there has been a surge in public and academic discussion about the aspects that AI systems must fulfill to be considered trustworthy. In this paper, we synthesize existing conceptualizations of trustworthy AI along six requirements: 1) human agency and oversight, 2) fairness and non-discrimination, 3) transparency and explainability, 4) robustness and accuracy, 5) privacy and security, and 6) accountability. For each requirement, we provide a definition, describe how it can be established and evaluated, and discuss requirement-specific research challenges. We conclude this analysis by identifying overarching research challenges across the requirements with respect to 1) interdisciplinary research, 2) conceptual clarity, 3) context-dependency, 4) dynamics in evolving systems, and 5) investigations in real-world contexts. Thus, this paper synthesizes and consolidates a wide-ranging and active discussion currently taking place in various academic sub-communities and public forums. It aims to serve as a reference for a broad audience and as a basis for future research directions.
Keywords: trustworthy AI, artificial intelligence, fairness, human agency, robustness, privacy, accountability, transparency
Received: 19 Jul 2024; Accepted: 11 Nov 2024.
Copyright: © 2024 Kowald, Scher, Pammer-Schindler, Müllner, Waxnegger, Demelius, Fessl, Toller, Mendoza Estrada, Simic, Sabol, Truegler, Veas, Kern, Nad and Kopeinik. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Dominik Kowald, Graz University of Technology, Graz, 8010, Styria, Austria
Simone Kopeinik, Know Center, Graz, Austria
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.