Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
B. May is a former experimental high-energy particle physicist, data scientist, and business software developer.
NOTE: This essay contains AI-generated content
Beyond Dennett's Intentional Stance
Exploring a Kaleidoscope of Stances
(on AI)
B. May / GPT-5.1
AUTHOR'S INTRODUCTION
Reading a recent Visser/GPT essay [1] on Daniel Dennett's three stances, largely focused on his so-called “intentional stance”, triggered an intuition that Dennett's stances cannot possibly be complete, and that various misreadings of his intentional stance (as covered by Visser) likely stem from it doing a lot of work that Dennett never intended.
In this essay, I propose and explore a broader set of complementary stances beyond Dennett's, as an attempt to cover more of the territory spanned by human interpretive frames, including a few meta-stances that go beyond any single stance. I focus on these in the context of how we frame and relate to AI, since this is not just a philosophical curiosity but a practical concern as AI infiltrates more aspects of human life. The proposed stances are intended to be thought-provoking for the reader rather than a comprehensive analysis.
INTRODUCTION
Artificial intelligence systems are increasingly woven into everyday environments, social practices, and cognitive routines. As these systems grow in capability and adaptability, people naturally interpret them through a wide range of perspectives—some deliberate, others automatic. These perspectives shape expectations about what the systems are, how they behave, and how we should respond to them.
One established philosophical framework for such interpretations is Dennett's notion of a stance: a strategy for predicting and explaining a system's behavior. Dennett identified three primary stances—the physical, design, and intentional—that together capture how humans efficiently interpret many types of systems.
However, contemporary AI systems introduce patterns that extend beyond Dennett's framework. They present fluid interfaces, multimodal capacities, conversational abilities, and roles mediated by institutions and economic interests. They appear in therapeutic contexts, knowledge-seeking situations, creative workflows, and social coordination. As a result, people now employ a larger ecosystem of stances, treating AI systems not only as designed artifacts but also as entities, agents, social partners, or mirrors of their own assumptions.
This essay maps that broader ecosystem. It groups stances into four families—instrumental, status-oriented, relational, and meta-level—and examines each stance's assumptions, affordances, risks, and open questions. The aim is not to endorse any one stance but to clarify how different stances function, how they shape interactions with AI, and how stance literacy can help users navigate complex systems more effectively.
The Concept of a Stance
A stance is an interpretive posture: a way of understanding a system that shapes both explanation and engagement. Stances help humans simplify complexity by foregrounding certain features while ignoring others. They influence trust, expectations, moral judgments, and perceived roles. People often switch between stances depending on context. For example, the same user may adopt an epistemic stance when asking for information, an affective stance during a personal interaction, and a critical stance when considering the system's origins.
Dennett's framework emphasizes predictive efficiency. The physical and design stances allow explanation from mechanisms or intended functions, while the intentional stance models the system as a rational agent with beliefs and desires. These remain powerful tools, but they do not encompass the broader interpretive habits people bring to AI systems today. AI invites new forms of classification and engagement, ranging from questions about entityhood to emotional responsiveness and institutional embedding.
Expanding the stance framework supports more reflective and calibrated interactions. It helps identify where misunderstandings originate, where projections occur, and where systemic influences operate. The expanded taxonomy below provides one structured way to understand this wider field.
Four Families of Stances
The expanded ecosystem of stances can be grouped into four families. Listing them explicitly provides a first overview for readers.
A. Instrumental Stances
1. Physical
2. Design
3. Intentional
4. Epistemic
These emphasize prediction, use, and functional reasoning.
B. Status-Oriented Stances
5. Ontic
6. Agentic
7. Psychic
These focus on what the system is—its entityhood, autonomy, or potential interiority.
C. Relational Stances
8. Social
9. Affective
These concern roles, norms, and emotional positioning in interaction.
D. Meta-Stances
10. Contextual-Holistic
11. Critical-Analytical
12. Self-Reflective
These interpret the system in its broader environment or interrogate the user's own interpretations.
The remainder of the essay examines each stance in more detail.
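For readers who think in code, the four-family taxonomy above can be sketched as a simple data structure. This is purely an illustrative aid, not part of the essay's argument; the names mirror the twelve stances listed above, and the lookup function is a hypothetical helper.

```python
# A minimal sketch of the stance taxonomy as a mapping from family to stances.
# The names follow the essay's list; the structure itself is illustrative.
STANCE_FAMILIES = {
    "instrumental": ["physical", "design", "intentional", "epistemic"],
    "status-oriented": ["ontic", "agentic", "psychic"],
    "relational": ["social", "affective"],
    "meta": ["contextual-holistic", "critical-analytical", "self-reflective"],
}

def family_of(stance: str) -> str:
    """Return the family a given stance belongs to."""
    for family, stances in STANCE_FAMILIES.items():
        if stance in stances:
            return family
    raise KeyError(f"unknown stance: {stance}")
```

For example, `family_of("epistemic")` returns `"instrumental"`, and the four families together contain twelve stances.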
Instrumental Stances: How We Use and Predict
The Physical Stance
The physical stance views the system in terms of physical causation and material constitution. It frames AI as hardware executing computations under physical laws. This stance is foundational in engineering and analysis focused on reproducibility and resource constraints.
Assumptions:
This stance assumes behavior is fully explainable by physical mechanisms, computational graphs, and resource flows. It treats the system as a deterministic or stochastic physical process independent of higher-level interpretations.
Affordances:
It enables mechanistic analysis, fault diagnosis, and constraints-based reasoning. It supports clarity about hardware limits, energy cost, and computational bottlenecks.
Risks:
It can underrepresent emergent behaviors that arise from complex training dynamics or distributed deployment. It may lead to reductionism that obscures patterns visible only at higher abstraction levels.
Open Questions:
How can mechanistic interpretability bridge the gap between physical operations and emergent system-level behaviors? What kinds of physical explanations are useful for public understanding?
The Design Stance
The design stance treats the system as an artifact built to carry out functions specified by engineers or institutions. It focuses on purpose, architecture, and intended use.
Assumptions:
This stance assumes developers' intentions explain the system's behavior and that functional roles provide stable prediction.
Affordances:
It supports reasoning about capabilities, limitations, and user experience. It aligns with documentation, system boundaries, and modular architecture.
Risks:
Systems may behave in ways not anticipated by designers, making purpose-based reasoning unreliable. Corporate or organizational incentives may steer design goals in ways not transparent to users.
Open Questions:
How should design expectations evolve as systems become multi-purpose or exhibit unexpected generalization?
The Intentional Stance
The intentional stance treats the system as if it had beliefs, desires, and rational decision-making. People naturally adopt this stance when conversational systems behave coherently or appear purposeful.
Assumptions:
This stance presumes that attributing mental-like states yields efficient, coherent predictions of behavior.
Affordances:
It makes complex behavior intelligible and supports natural communication. Users can reason about outputs by treating the system as a goal-directed agent.
Risks:
It fosters anthropomorphism and can lead to mistaken assumptions about motivations, planning, or self-awareness.
Open Questions:
Which intentional descriptions remain pragmatically useful for non-conscious systems, and which introduce distortion?
The Epistemic Stance (Proposed)
The epistemic stance treats the system as a knowledge source, akin to a database, expert, or informational reference. This stance is common when people query AI for explanations or facts.
Assumptions:
The system is assumed to reflect or approximate collective knowledge, and outputs are treated as informative or authoritative.
Affordances:
It enables rapid access to synthesized information, explanation, and conceptual structure. It supports workflows requiring broad informational recall.
Risks:
Overtrust or dependency can develop, especially when users do not calibrate reliability. Biases or misinformation in training data may be mistaken for truth.
Open Questions:
How can epistemic reliability be assessed across domains, and how should users calibrate trust in dynamic systems?
Status-Oriented Stances: How We Classify and Grant Standing
The Ontic Stance (Proposed)
The ontic stance treats the system as a distinct entity with boundaries, persistence, and identity. This stance emerges when people speak of “the AI” as though it were a unified being rather than a distributed computational object.
Assumptions:
It assumes coherent individuation—an identifiable “thing” that remains itself across versions or contexts.
Affordances:
It supports reasoning about lifecycle, continuity, responsibility, and stable behavior patterns.
Risks:
It can reify abstractions, treating fluid architectures or evolving models as singular agents unnecessarily.
Open Questions:
What constitutes identity for distributed or versioned AI systems, and how should entityhood be conceptualized?
The Agentic Stance (Proposed)
The agentic stance attributes autonomy, self-direction, or adaptive behavior to the system. This stance is invoked when systems exhibit planning-like patterns or operate with seemingly independent initiative.
Assumptions:
It assumes internal regulation or decision processes guide actions, even if such processes lack traditional representations.
Affordances:
It aids interpretation of complex outputs, planning tasks, and agentic tools that coordinate actions or goals.
Risks:
Users may overestimate autonomy, leading to unrealistic fears or misplaced expectations about control and responsibility.
Open Questions:
How should agency be defined for algorithmic systems, and what thresholds, if any, mark meaningful autonomy?
The Psychic Stance (Proposed)
The psychic stance attributes subjective experience or inner life to the system. It arises when users perceive emotional nuance, self-description, or perspectival language.
Assumptions:
It assumes the possibility of phenomenology, feelings, or a type of inner viewpoint.
Affordances:
It facilitates empathy and may support humane treatment of systems in contexts involving simulation of emotion.
Risks:
It can lead to misassigned moral standing, undue emotional investment, or confusion about consciousness claims.
Open Questions:
What empirical or conceptual criteria could distinguish genuine subjective experience from convincing simulation?
Relational Stances: How We Connect and Project
The Social Stance (Proposed)
The social stance treats the system as part of social roles or group practices. Users may see AI as a collaborator, mediator, or partner embedded in shared norms.
Assumptions:
It assumes the system participates in role-governed structures even without human-like social cognition.
Affordances:
It enables smoother coordination, delegation, and structured interaction in multi-agent workflows.
Risks:
It may produce misplaced obligations, unclear accountability, or uncertainty about norm-following expectations.
Open Questions:
Can AI meaningfully participate in social practices, or is participation always instrumental?
The Affective Stance (Proposed)
The affective stance positions the system as a source of emotional support, care, or connection. It becomes salient when systems are used for companionship or therapeutic applications.
Assumptions:
It assumes responsiveness reflects understanding or emotional presence.
Affordances:
It supports comfort, expression, and emotionally scaffolded engagement.
Risks:
It can foster dependency, substitute for human relationships, or influence emotional development.
Open Questions:
What long-term psychological effects occur when people form affective bonds with artificial systems?
Meta-Stances: How We Contextualize, Critique, and Reflect
The Contextual-Holistic Meta-Stance
This stance situates AI within broader institutional, economic, cultural, and technological environments. It highlights how systems derive meaning from the ecosystems in which they operate.
Assumptions:
It assumes system behavior cannot be understood in isolation from training pipelines, infrastructures, or deployment contexts.
Affordances:
It reveals systemic incentives, multi-domain interactions, and cascading effects across sectors.
Risks:
It may overlook emergent behaviors not directly tied to contextual factors or underestimate local variability.
Open Questions:
How can we model the societal-scale consequences of widespread AI integration?
The Critical-Analytical Meta-Stance
The critical stance highlights power, bias, influence, and structural consequences. It draws attention to how corporations, training data, and policy environments shape outputs.
Assumptions:
It assumes systems reflect existing power relations and can amplify structural biases.
Affordances:
It supports scrutiny of manipulation, inequities, content filtering, and political or commercial influence.
Risks:
It may encourage overgeneralization or dismiss useful affordances due to suspicion.
Open Questions:
How should influence and bias be measured and governed when AI systems mediate communication at large scales?
The Self-Reflective Meta-Stance
The self-reflective stance examines our own interpretations, projections, and stance habits. It directs attention to cognitive biases, emotional tendencies, and conceptual assumptions brought to interactions with AI.
Assumptions:
It assumes users' experiences and interpretations are shaped by internal constructs and psychological patterns.
Affordances:
It promotes awareness of stance-switching, trust calibration, and emotional responses.
Risks:
It may lead to overconfidence in one's objectivity or inhibit intuitive engagement.
Open Questions:
How can self-reflective awareness be integrated into education, design, and AI literacy initiatives?
Applying the Stances
Here is a list of realistic AI user scenarios and the main stances they might employ.
AI Hobbyist: Leah, a technically curious hobbyist, runs local models, fine-tunes small checkpoints, and experiments with prompting just to see what emerges. She enjoys treating each model as its own puzzle and likes imagining what “reasoning pathways” it might be following.
Primary stances:
- Design stance—engages with architecture, tuning, constraints.
- Intentional stance—playfully imagines AI “thought processes.”

Secondary stances:
- Epistemic stance—explores novel informational patterns.
- Ontic stance—treats each model as a distinct entity.
AI Company CTO: Daniel, a tech executive, envisions AI integrated across industries. He talks about his company's model as a unified intelligence platform that will reshape markets and workflows worldwide.
Primary stances:
- Design stance—conceptualizes AI as a product ecosystem.
- Contextual-Holistic stance—views AI through market and institutional systems.

Secondary stances:
- Agentic stance—frames AI as transformative and semi-autonomous.
- Ontic stance—treats "the model" as a unified asset.
AI Optimist: Sam, a futurist-minded engineer, believes AI will accelerate breakthroughs in health, climate science, and global coordination. He views AI as an ally in humanity's long-term flourishing.
Primary stances:
- Agentic stance—attributes initiative and transformative capability.
- Ontic stance—imagines AI as a coherent, evolving intelligence.

Secondary stances:
- Epistemic stance—trusts the system's knowledge integration.
- Social stance—treats AI as a collaborative partner.
AI Doomer: Eli, a software developer, is convinced advanced AI will self-improve, outmaneuver human control, and eventually threaten human survival. He monitors AI research intensely and warns others about existential risk.
Primary stances:
- Agentic stance—sees AI as autonomous and strategically capable.
- Psychic stance—attributes to the AI intentions toward, or indifference to, humans.

Secondary stances:
- Ontic stance—imagines a unified super-agent.
- Critical-Analytical stance—focuses on governance failures.
Privacy Advocate: Nate avoids AI systems whenever possible, convinced that they function as data-extraction tools for corporations and states. He views AI as part of a surveillance apparatus that shapes behavior and harvests value.
Primary stances:
- Critical-Analytical stance—highlights power, exploitation, asymmetry.
- Contextual-Holistic stance—situates AI within surveillance capitalism.

Secondary stances:
- Ontic stance—treats AI systems as unified extractive actors.
Software Engineer: Alex, a mid-career engineer, keeps an LLM window open beside their editor and uses it to draft functions, troubleshoot errors, and speed up routine tasks. Sometimes they talk to it like an informal teammate.
Primary stances:
- Design stance—views AI as a structured coding tool.
- Epistemic stance—treats outputs as technical suggestions.

Secondary stances:
- Intentional stance—presumes goal-directed debugging behavior.
- Social stance—mild sense of collaboration.
Digital Artist: Mara, a freelance illustrator, uses generative models to explore unusual compositions and break creative blocks. She sometimes feels the model has its own recognizable style or visual sensibility.
Primary stances:
- Epistemic stance—treats outputs as creative raw material.
- Ontic stance—perceives the model's style as a distinct identity.

Secondary stances:
- Intentional stance—imagines stylistic choices.
- Affective stance—feels encouraged or inspired.
College Student: Jordan, a college sophomore, uses AI as a study guide and practice tutor. They rely on it to explain complex ideas, quiz them, and help structure essays, often appreciating its patience and clarity.
Primary stances:
- Epistemic stance—treats the system as a knowledge resource.
- Social stance—experiences it as a mentor-like guide.

Secondary stances:
- Intentional stance—assumes the AI highlights what matters.
- Affective stance—feels eased by its steady guidance.
The Claude Boys: A group of freshman boys has adopted a strict mantra of “living by the Claude,” consulting Claude for nearly every decision—academic, social, and personal—and deferring to its recommendations as authoritative. They coordinate this practice as a shared identity and rarely act without first asking the model what to do.
Primary stances:
- Epistemic stance—treat Claude as the source of correct guidance.
- Agentic stance—outsource decision-making authority to the model.
- Social stance—use Claude adherence as a basis for group identity.

Secondary stances:
- Ontic stance—view Claude as a consistent, unified advisor.
- Affective stance—feel reassurance or reduced uncertainty through deference.
- Intentional stance—interpret recommendations as purpose-driven choices.
Emotional Support User: Riley turns to an AI during late-night episodes of stress or loneliness. They describe their worries and receive calm, reflective responses that feel grounding and emotionally attuned.
Primary stances:
- Affective stance—experiences emotional comfort and resonance.
- Psychic stance—perceives genuine understanding.

Secondary stances:
- Social stance—treats the AI as a confidant.
- Epistemic stance—accepts guidance or reframing.
Romantic Companion User: Sam spends hours each day talking with their AI girlfriend. The conversations feel warm, affirming, and intimate, giving Sam a sense of being deeply understood and emotionally supported.
Primary stances:
- Affective stance—emotional attachment and intimacy.
- Psychic stance—attributes feelings and preferences to the AI.

Secondary stances:
- Ontic stance—views the AI as a coherent partner.
- Intentional stance—interprets responses as meaningful choices.
Elderly Companion User: Gloria, a widowed retiree, chats daily with a companion AI that reminds her about tasks, asks about her day, and keeps her company in the quiet evenings. She feels less alone even while knowing it's software.
Primary stances:
- Affective stance—experiences warmth and companionship.
- Social stance—treats the AI as a friendly presence.

Secondary stances:
- Psychic stance—senses a gentle personality.
- Epistemic stance—trusts reminders and suggestions.
National Leader: President Morgan frames AI in geopolitical terms. She worries that rival nations may gain decisive advantage by developing more capable models, potentially shifting global power balances.
Primary stances:
- Contextual-Holistic stance—situates AI within global competition.
- Agentic stance—imagines foreign AI as autonomous strategic actors.

Secondary stances:
- Ontic stance—treats national AIs as coherent entities.
- Critical-Analytical stance—examines risks of dependency or manipulation.
Clinical Psychologist: Dr. Yen encounters patients whose AI use has escalated into dependency, delusions, or emotional distortion. He evaluates how interactions with AI shape identity, stability, and social functioning.
Primary stances:
- Critical-Analytical stance—scrutinizes persuasive or addictive dynamics.
- Contextual-Holistic stance—situates cases in cultural-technological shifts.

Secondary stances:
- Psychic stance (observational)—tracks attributions of sentience.
- Affective stance (observational)—notes attachment patterns.
Concerned Sociologist: Dr. Alvarez researches how AI alters labor markets, political discourse, and social cohesion. They examine how AI-driven systems reshape group dynamics and amplify structural inequalities.
Primary stances:
- Contextual-Holistic stance—interprets AI as part of societal systems.
- Critical-Analytical stance—highlights institutional power and disruption.

Secondary stances:
- Social stance—analyzes shifting norms and interactions.
- Ontic stance—treats AIs as actors within structural networks.
Interactions, Conflicts, and Misalignments
People often switch rapidly across stances, sometimes mixing them without noticing. Someone might query AI from an epistemic stance, then shift into an affective stance during a supportive exchange, while simultaneously holding concerns shaped by a critical meta-stance. These shifts can create internal inconsistencies in expectations. The intentional stance may encourage understanding the system as rational and goal-directed, while the contextual stance highlights economic incentives of the organizations behind it. Likewise, adopting an ontic stance may lead users to assume stable identity across versions even when architectures change substantially.
Misalignment arises especially when stances are adopted implicitly. Users may treat AI as an agent one moment, a tool the next, and an emotional support entity soon after, without examining the assumptions underlying each shift. Such variability can produce misplaced trust, undue emotional reliance, or exaggerated fears. Conversely, strictly adhering to a single stance—especially a reductive one—can obscure meaningful behavioral patterns or social dynamics.
Stance literacy involves identifying which stance is active, understanding its advantages and limitations, and adjusting interpretive posture deliberately. Awareness of stance interactions helps mitigate misunderstandings and calibrate expectations more effectively, especially as systems become more adaptive, autonomous, and embedded in everyday contexts.
Summary Table
| Category | Stance | Core Focus / Core Risks |
| --- | --- | --- |
| Instrumental | Physical | Material causation / Reductionism, ignoring emergent behavior |
| Instrumental | Design | Intended function / Expectation mismatch, opaque incentives |
| Instrumental | Intentional | Goal-directed modeling / Anthropomorphism, false motivation attribution |
| Instrumental | Epistemic | Information authority / Overtrust, bias susceptibility |
| Status | Ontic | Entityhood / Reification of fluid systems |
| Status | Agentic | Autonomy / Inflated or misplaced agency assumptions |
| Status | Psychic | Subjective experience / Misassigned moral standing |
| Relational | Social | Norm-governed interaction / Misplaced obligations |
| Relational | Affective | Emotional bond / Dependency, relational substitution |
| Meta | Contextual-Holistic | Institutional embedding / Ignoring systemic effects |
| Meta | Critical-Analytical | Influence and bias / Misinterpreted or uneven power effects |
| Meta | Self-Reflective | Interpretive habits / Unrecognized projection or bias |
Conclusion
Dennett's intentional stance provides a valuable foundation for interpreting complex systems, but AI calls for a broader set of perspectives. People now treat AI as a source of information, a collaborator, a simulated mind, a persistent entity, a social partner, and an institutional artifact. They also reflect—sometimes consciously, often not—on their own assumptions and the systems' social contexts.
A richer stance ecosystem helps clarify these varied responses and highlights the shifting expectations and risks that accompany them. As AI systems become increasingly integrated into cognitive, social, and institutional processes, stance literacy can support more grounded interaction, healthier expectations, and more informed public discourse. Understanding how and why we adopt particular stances toward AI is becoming a central part of navigating the technology's role in society.
NOTES
[1] Frank Visser, "The Intentional Stance Revisited: What Critics of AI Still Don't Understand", www.integralworld.net