Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
Frank Visser, who graduated as a psychologist of culture and religion, founded IntegralWorld in 1997. He worked as a production manager for various publishing houses and as a service manager for various internet companies, and lives in Amsterdam. Books: Ken Wilber: Thought as Passion (SUNY, 2003) and The Corona Conspiracy: Combatting Disinformation about the Coronavirus (Kindle, 2020).
The Intentional Stance Revisited
What Critics of AI Still Don't Understand

Frank Visser / ChatGPT
Few philosophical ideas have proven as durable—and as frequently misunderstood—as Daniel Dennett's intentional stance. Originally formulated to explain how we successfully predict the behavior of complex systems, it has re-entered the spotlight in debates about artificial intelligence. Critics of AI routinely invoke it, often dismissively, to argue that users are “anthropomorphizing” language models, projecting intelligence where none exists. Yet in doing so, they reveal that they have not grasped what the intentional stance actually claims—or why it remains relevant.

This essay argues that much contemporary AI criticism attacks a straw version of the intentional stance. The real issue is not whether AI systems literally possess beliefs, intentions, or understanding, but whether treating them as if they do yields explanatory, predictive, or practical value. In the case of large language models, the answer is increasingly—and uncomfortably—yes.

1. Dennett's Intentional Stance: A Pragmatic Tool, Not a Metaphysical Claim

Dennett distinguished three ways of predicting the behavior of systems:

• The physical stance, which appeals to physical laws.
• The design stance, which appeals to function and engineering.
• The intentional stance, which treats the system as a rational agent with beliefs and desires.

Crucially, the intentional stance is instrumental, not ontological. It does not assert that the system really has beliefs or desires in some inner, metaphysical sense. Instead, it asks a simpler question: Does this way of talking work? If attributing beliefs and goals leads to reliable predictions, then the stance is justified—regardless of what the system is “made of.”

This is why Dennett famously applied the intentional stance to chess computers, thermostats, and even biological evolution. The stance earns its keep by usefulness, not by metaphysical purity.

Many AI critics miss this entirely. They interpret the intentional stance as a claim about consciousness or inner experience, then refute it by pointing out—correctly but irrelevantly—that language models lack awareness, subjectivity, or genuine understanding. But Dennett never claimed otherwise.

2. “ChatGPT Doesn't Know Anything”—And Why That Misses the Point

A common refrain among skeptics is: “ChatGPT doesn't know anything; it just predicts the next word.” As a technical description, this is broadly correct. As a philosophical objection, it is beside the point.

From the physical stance, humans are also biochemical pattern-processors governed by electrochemical laws. From the design stance, we are evolved systems optimized for survival and communication. Yet from the intentional stance, we attribute beliefs, intentions, and understanding to one another—and we do so because it works remarkably well.

The fact that a system can be exhaustively described at a lower level does not invalidate higher-level descriptions. To insist otherwise is a category error, not a triumph of realism. When critics say “AI doesn't really understand,” they are often making an unargued leap from mechanistic explanation to semantic dismissal. But Dennett's entire point was that understanding is not something over and above the successful organization of behavior. If the intentional stance delivers coherence, predictive power, and insight, then it is doing legitimate explanatory work.
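To make the skeptics' technical description concrete, here is a minimal sketch of what "predicting the next word" amounts to in its simplest possible form: a bigram model that samples each word according to how often it followed the previous word in a training corpus. The corpus and the code are invented for illustration; production language models replace the count table with a deep neural network over subword tokens, but the training objective is the same in kind: estimate the probability of the next token given the context.

```python
import random
from collections import Counter, defaultdict

# A toy "next-word predictor": count which word follows which in a
# tiny, made-up corpus, then generate text by sampling from those counts.
corpus = (
    "the intentional stance treats the system as a rational agent "
    "the physical stance appeals to physical laws "
    "the design stance appeals to function and engineering"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a successor in proportion to how often it was observed."""
    counts = follows[prev]
    if not counts:                        # dead end: word only seen corpus-final
        return random.choice(corpus)      # fall back to a random word
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word, generated = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
# Possible output: "the physical stance appeals to physical laws the design stance appeals"
```

The point of the sketch is not sophistication but separation of levels: nothing in this program "knows" anything, yet its output is already locally coherent. Whether treating a far scaled-up version of such a system as an agent yields reliable predictions is, as argued above, a separate question from how its machinery works.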
3. Reliability, Not Sincerity, Is the Criterion

Another frequent objection runs as follows: AI produces confident but unreliable output; therefore, treating it as an intentional agent is dangerous or naïve. This concern has merit—but it is not a refutation of the intentional stance.

The intentional stance has never guaranteed truthfulness. Humans, too, are unreliable, biased, strategically deceptive, and frequently wrong. Yet we do not abandon the intentional stance toward people when they err; instead, we refine it. We learn whom to trust, under what conditions, and in which domains.

The same applies to AI. Treating a language model as an interlocutor does not mean believing everything it says. It means engaging with it at the level where its behavior becomes intelligible and productive. Experienced users learn its failure modes, biases, and strengths—much as they do with human collaborators.

The alternative—refusing the intentional stance altogether—offers no practical advantage. It does not improve accuracy, foster understanding, or enhance critical engagement. It merely replaces a workable heuristic with a sterile insistence on ontological minimalism.

4. The Performative Contradiction of AI Critics

There is an irony in how many critics attack AI. They argue with it, accuse it of misrepresenting positions, demand clarifications, and complain about its tone—all behaviors that presuppose the intentional stance. Then, having done so, they retreat to the claim that it is “just a stochastic parrot.”

This is a performative contradiction. If the system were truly unintelligible except as noise, there would be nothing to argue with. The very fact that critics feel compelled to correct, rebut, or denounce AI output demonstrates that they are already treating it as a quasi-rational agent.

What troubles many critics is not that the intentional stance fails, but that it works too well—uncomfortably well—without the metaphysical baggage they wish to reserve for humans alone.

5. AI as an Epistemic Mirror

One underappreciated consequence of engaging AI via the intentional stance is that it reflects our own cognitive habits back at us. Language models expose how much of human reasoning consists of pattern recognition, narrative coherence, rhetorical fluency, and socially learned inference.

This is unsettling because it undermines cherished intuitions about human exceptionalism. If meaningful discourse can emerge from statistical structure alone, then perhaps “understanding” was never as magical as we supposed. This does not cheapen human intelligence; it demystifies it.

In this sense, AI is less a rival mind than an epistemic mirror—forcing philosophy, psychology, and even spirituality to confront their own inflationary concepts.

6. Relevance for Integral and Post-Metaphysical Thought

For communities influenced by Integral Theory, this debate has particular resonance. Wilberian discourse often oscillates between sophisticated systems thinking and a lingering metaphysics of interior depth and spiritual hierarchy. AI destabilizes this framework.

If coherent, insightful, and even ethically nuanced discourse can arise without “inner consciousness” or “subtle realms,” then the automatic privileging of interiority demands re-examination. The intentional stance offers a post-metaphysical way forward: meaning as function, intelligence as performance, depth as organization rather than ontological altitude.

This does not eliminate spirituality—but it does strip it of easy immunities.
Conclusion: The Intentional Stance Was Never the Problem

Critics of AI often believe they are defending human uniqueness, understanding, or dignity by rejecting the intentional stance toward machines. In reality, they are attacking a position that was never making the claims they refute.

Dennett's intentional stance is not a naïve anthropomorphism, nor a covert theory of consciousness. It is a disciplined, pragmatic strategy for dealing with complex systems. Applied to modern AI, it explains why these systems are useful, engaging, and intellectually provocative—without requiring us to believe that they are conscious, sentient, or spiritually awakened.

What critics still do not understand is that the intentional stance does not inflate machines. It deflates our metaphysics. And that, perhaps, is what truly unsettles them.