
BRIEF RESEARCH REPORT article
Front. Psychol.
Sec. Cognitive Science
Volume 16 - 2025 | doi: 10.3389/fpsyg.2025.1498958
This article is part of the Research Topic "Exploring Human-Autonomous Interactions: Agency, Awareness, and Ethical Implications".
The final, formatted version of the article will be published soon.
Tech companies often use agentive language to describe their AIs (e.g., "Gemini can understand, explain and generate high-quality code", Pichai & Hassabis, 2023). Psycholinguistic research has shown that violating animacy hierarchies by placing a nonhuman in this agentive subject position (i.e., grammatical metaphor) leads readers to perceive it as a causal agent. However, it is not yet known how this framing affects readers' assignments of responsibility to AIs or to the companies that make them. Nor is it known whether this effect relies on psychological anthropomorphism or on a more limited set of linguistic causal schemas. We investigated these questions by having participants read a short vignette in which "Dr. AI" gave dangerous health advice under one of two framing conditions (AI as Agent vs. AI as Instrument). Participants rated how responsible the AI, the company, and the patients were for the outcome, and reported their own AI experience. We predicted that participants would assign more responsibility to the AI in the Agent condition, and that participants with less AI experience would assign greater responsibility to the AI because they would be more likely to anthropomorphize it. The results confirmed these predictions: we found an interaction between linguistic framing condition and AI experience such that lower-experience participants assigned more responsibility to the AI in the Agent condition than in the Instrument condition (z = 2.13, p = .032), while higher-experience participants did not. Our findings suggest that domain experience weakens the effects of agentive linguistic framing toward non-humans by reducing anthropomorphism.
Keywords: linguistic framing, grammatical metaphor, agency, anthropomorphism, AI
Received: 19 Sep 2024; Accepted: 13 Mar 2025.
Copyright: © 2025 Petersen and Almor. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Dawson Petersen, University of South Carolina, Columbia, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
Supplementary Material