
ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
Volume 8 - 2025 | doi: 10.3389/frai.2025.1582880
The final, formatted version of the article will be published soon.
An understanding of the nature and function of human trust in artificial intelligence (AI) is fundamental to the safe and effective integration of these technologies into organizational settings. The Trust in Automation Scale (TIAS) is a commonly used self-report measure of trust in automated systems; however, it has not yet been subjected to comprehensive psychometric validation. Across two studies we tested the capacity of the scale to effectively measure trust across a range of AI applications. Results indicate that the TIAS is a valid and reliable measure of human trust in AI; however, at 12 items, it is often impractical for contexts requiring frequent and minimally disruptive measurement. To address this limitation, we developed and validated a 3-item version of the TIAS, the Short Trust in Automation Scale (S-TIAS). In two further studies we tested the sensitivity of the S-TIAS to manipulations of the trustworthiness of an AI system, as well as the convergent validity of the scale and its capacity to predict intentions to rely on AI-generated recommendations. In both studies the S-TIAS demonstrated convergent validity and significantly predicted intentions to rely on the AI system in patterns similar to the TIAS. This suggests that the S-TIAS is a practical and valid alternative for measuring trust in automation and AI for the purposes of identifying antecedent factors of trust and predicting trust outcomes.
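For readers applying the scales, the reliability claims above are of the kind typically quantified with Cronbach's alpha (internal consistency). The following is a minimal sketch, not the authors' analysis code: it computes alpha from a respondents-by-items score matrix, and the 7-point Likert responses shown are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses: 5 respondents x 3 items,
# standing in for a 3-item scale such as the S-TIAS.
scores = np.array([
    [6, 5, 6],
    [4, 4, 5],
    [7, 6, 7],
    [3, 3, 4],
    [5, 5, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # alpha = 0.97 for this toy data
```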
Keywords: Trust, Artificial Intelligence, Automation, human-AI teaming, collaborative intelligence, Psychometrics, measurement
Received: 25 Feb 2025; Accepted: 11 Apr 2025.
Copyright: © 2025 McGrath, Lack, Tisch and Duenser. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Andreas Duenser, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Sandy Bay, Australia
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
Supplementary Material