AUTHOR=Scharowski Nicolas, Perrig Sebastian A. C., Svab Melanie, Opwis Klaus, Brühlmann Florian TITLE=Exploring the effects of human-centered AI explanations on trust and reliance JOURNAL=Frontiers in Computer Science VOLUME=5 YEAR=2023 URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1151150 DOI=10.3389/fcomp.2023.1151150 ISSN=2624-9898 ABSTRACT=

Transparency is widely regarded as crucial for the responsible real-world deployment of artificial intelligence (AI) and is considered an essential prerequisite to establishing trust in AI. There are several approaches to enabling transparency, one promising approach being human-centered explanations. However, there is little research on the effects of human-centered explanations on end-users' trust. Comparing existing empirical work is complicated by the fact that trust is measured in different ways: some researchers measure subjective trust using questionnaires, while others measure objective trust-related behavior such as reliance. To bridge these gaps, we investigated the effects of two promising human-centered post-hoc explanations, feature importance and counterfactuals, on trust and reliance. We compared these two explanations with a control condition in a decision-making experiment (N = 380). Results showed that human-centered explanations can significantly increase reliance, but the type of decision-making (increasing a price vs. decreasing a price) had an even greater influence. This challenges the presumed importance of transparency over other factors in human decision-making involving AI, such as potential heuristics and biases. We conclude that trust does not necessarily equate to reliance and emphasize the importance of appropriate, validated, and agreed-upon metrics to design and evaluate human-centered AI.