
HYPOTHESIS AND THEORY article

Front. Artif. Intell.
Sec. Language and Computation
Volume 7 - 2024 | doi: 10.3389/frai.2024.1490698

Language Writ Large: LLMs, ChatGPT, Meaning and Understanding

  • Stevan Harnad, Université du Québec à Montréal, Montreal, Canada


    Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how Large Language Models (LLMs) like ChatGPT work (their huge text databases, their statistics, their vector representations with their huge numbers of parameters, their next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign “biases”: convergent constraints that emerge at LLM scale and that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at LLM scale, and they are closely linked to what ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These convergent biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the “mirroring” of language production and comprehension, (4) iconicity in propositions at LLM scale, (5) computational counterparts of human “categorical perception” in category learning by neural nets, and perhaps also (6) a conjecture by Chomsky about the laws of thought. The exposition will be in the form of a dialogue with ChatGPT-4o.
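
    To make point (5) a little more concrete, here is a minimal toy sketch in Python/NumPy. It is not drawn from the article itself: every name, size, and parameter below is an illustrative assumption. A small feedforward net is trained to sort two-dimensional “stimuli” into two categories, and the average pairwise distances between its hidden-layer representations are compared before and after learning; the within-category distances typically compress relative to the between-category distances, which is the kind of computational analogue of categorical perception the abstract alludes to.

        # Minimal toy sketch (illustrative assumption, not from the article) of the
        # "categorical perception" effect in point (5): after a small neural net
        # learns to sort inputs into two categories, distances in its hidden-layer
        # space tend to shrink within each category relative to between categories.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200  # stimuli per category (assumed toy size)

        # Two categories of 2-D "stimuli": overlapping Gaussian clusters.
        X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
        y = np.concatenate([np.zeros(n), np.ones(n)])

        # One-hidden-layer net trained by plain gradient descent on cross-entropy.
        W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

        def hidden(X):
            return np.tanh(X @ W1 + b1)

        def dists(H):
            # Mean pairwise distance within vs. between the two categories.
            A, B = H[:n], H[n:]
            within = (np.linalg.norm(A[:, None] - A[None], axis=-1).mean() +
                      np.linalg.norm(B[:, None] - B[None], axis=-1).mean()) / 2
            between = np.linalg.norm(A[:, None] - B[None], axis=-1).mean()
            return within, between

        print("before training (within, between):", dists(hidden(X)))

        lr = 0.5
        for _ in range(2000):
            h = hidden(X)
            p = (1 / (1 + np.exp(-(h @ W2 + b2)))).ravel()  # predicted P(category 1)
            grad_out = (p - y)[:, None] / len(X)            # d(loss)/d(output logit)
            grad_h = (grad_out @ W2.T) * (1 - h ** 2)       # backprop through tanh
            W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(0)
            W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(0)

        # Within-category distances compress relative to between-category distances.
        print("after training  (within, between):", dists(hidden(X)))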

    Keywords: symbol grounding, categorical perception, category learning, feature abstraction, meaning and understanding, ChatGPT and LLMs, direct sensorimotor grounding, indirect verbal grounding

    Received: 03 Sep 2024; Accepted: 20 Dec 2024.

    Copyright: © 2024 Harnad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Stevan Harnad, Université du Québec à Montréal, Montreal, Canada

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.