AUTHOR=Schreibelmayr Simon, Moradbakhti Laura, Mara Martina TITLE=First impressions of a financial AI assistant: differences between high trust and low trust users JOURNAL=Frontiers in Artificial Intelligence VOLUME=6 YEAR=2023 URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2023.1241290 DOI=10.3389/frai.2023.1241290 ISSN=2624-8212 ABSTRACT=

Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, developer intentions, intention to use, and trust). Comparisons between a high-trust and a low-trust user group revealed significant differences in both open-ended and closed-ended answers. While high-trust users characterized the AI assistant as more useful, competent, understandable, and human-like, low-trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and for the trust calibration of users who differ in their initial levels of trust.