CONCEPTUAL ANALYSIS article

Front. Psychol.
Sec. Media Psychology
Volume 15 - 2024 | doi: 10.3389/fpsyg.2024.1357572
This article is part of the Research Topic On the “Human” in Human-Artificial Intelligence Interaction - Volume II

Beyond Humanism: Telling Response-able Stories About Significant Otherness in Human-Chatbot Relations

Provisionally accepted
• Holohan and Müller, Technical University of Munich, Munich, Germany

The final, formatted version of the article will be published soon.

AI-enabled chatbots designed to build social relationships with humans are becoming increasingly common in the marketplace, with millions of registered users engaging with these chatbots as virtual companions or therapists. These chatbots make use of what is often called the “Eliza effect”: the tendency of human users to attribute human-like knowledge and understanding to a computer program. A common interpretation of this phenomenon frames such relating in terms of delusion, error, or deception, in which the user forgets or misunderstands that they are talking to a computer. As an alternative, we draw on the work of feminist Science & Technology Studies (STS) scholars, which offers a robust and capacious tradition of thinking about and engaging with human–non-human relationships in non-reductive ways. We closely analyze two different stories about encounters with chatbots, taking up the feminist STS challenge to attend to the agency of significant otherness in the encounter. The first is Joseph Weizenbaum’s story of rejecting ELIZA, the chatbot he designed to mimic a therapist, as a monstrosity, based on his experiences of watching others engage with it. The second is the story of Julie, who experiences a mental health crisis, and her chatbot Navi, as told through her own descriptions of her experiences with Navi in the recent podcast Radiotopia Presents: Bot Love. We argue that a reactionary humanist narrative, as presented by Weizenbaum, is incapable of attending to the possibilities of pleasure, play, or even healing that might occur in human-chatbot relatings. Other forms of engaging with, understanding, and making sense of this new technology and its potentialities are needed in both research and mental health practice, particularly as more and more patients begin to use these technologies alongside traditional human-led psychotherapy.

Keywords: artificial intelligence (AI), feminist science and technology studies, chatbots, companion species, significant otherness, Eliza effect

    Received: 27 Feb 2024; Accepted: 29 Jul 2024.

    Copyright: © 2024 Holohan and Müller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Ruth Müller, Technical University of Munich, Munich, Germany

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.