TRANSLATE THIS ARTICLE
Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
Joseph DillardDr. Joseph Dillard is a psychotherapist with over forty year's clinical experience treating individual, couple, and family issues. Dr. Dillard also has extensive experience with pain management and meditation training. The creator of Integral Deep Listening (IDL), Dr. Dillard is the author of over ten books on IDL, dreaming, nightmares, and meditation. He lives in Berlin, Germany. See: integraldeeplistening.com and his YouTube channel. He can be contacted at: [email protected]

SEE MORE ESSAYS WRITTEN BY JOSEPH DILLARD

(This essay contains AI generated content.)

Addressing the Challenge of Artificial Intelligence

A Continuing Dialogue with Visser and May

Joseph Dillard / AI

Visser's piece, titled “Chinese Room argument Revisited: When Syntax Approximates Semantics”, explores the classic thought experiment by John Searle and its relevance in today's AI (artificial intelligence) world. He begins with Searle's scenario: a person in a room who manipulates Chinese symbols via a rule-book, producing output (syntax) that appears fluent (semantics) but lacks any understanding. Visser then surveys how modern neural-net translation systems, such as Google Translate, blur the boundary: said systems may still lack subjective understanding, but their scale of syntactic processing is so vast that their behavior closely approximates semantics.

Visser's key move is to argue that while the original philosophical point remains valid (syntax ≠ semantics; symbol manipulation ≠ understanding), the empirical gap has narrowed: the “room” of AI is massively networked rather than isolated, and thus functional semantics may emerge. He also situates this within an integral philosophical framing (Wilber's interior vs. exterior, “I” & “We” vs “It” & “Its”), arguing that machine systems may mimic intelligence externally but lack interiority.

Addressing the Challenge of AI, A Continuing Dialogue with Visser and May
Man divorces party doll

In my response I ask, “At what point does operational competence become so good that ontological beingness becomes irrelevant?” Even if AI “simulates” everything we do it will still not eliminate human subjectivity. I use dreaming and altered states of consciousness (ASCs) to show that humans will remain able to access non-dual, near-entropic states at the “edge of chaos” that aren't simply functional simulations. The question then is not whether AI will replicate everything, but whether human subjectivity remains meaningful in a context where simulation is pervasive. My answer is, “Yes, it can.” But whether is does or not depends on the willingness/ability of humanity to access innate states of subjectivity and compare that variety of self-organization with that provided by AI.

B. May, in “AI as Symbiont or Parasite?” argues that the rise of AI presents a bifurcation: AI can serve as either a symbiont, that is, a cooperative, enhancing partner to humanity, or a parasite, that is, an exploitative system that undermines human agency, subjectivity, and the broader evolutionary ecosystem.

While acknowledging the risks, May holds out hope: human subjectivity, meaning, dreaming, and the capacity for self-organization (autopoiesis) constitute a resilient core. He argues that as long as humans have active subjectivity, they retain the potential to “wake up” from illusions, break out of passive consumption of AI-generated simulations, and re-assert agency. The challenge: to maintain and strengthen that capacity in the face of accelerating external systems.

Given the integration of AI into human systems, May suggests the focus should shift from “preventing AI,” which is already too late, to stewardship, alignment, and harm-reduction. He sees a transformational opportunity: if human values of cooperation, relational feedback, ecological integrity and subjectivity are embedded now, AI could accelerate positive evolutionary trajectories rather than accelerate collapse.

I agree entirely with May's assessment. While stewardship, alignment, and harm-reduction are essential exterior quadrant steps, we remain capable of having our subjectivity captured by the waking dream imposed by an AI that fulfills every need. To prevent this from occurring, we need to make a cultural and societal deep dive into “subjective sources of objectivity.” While that indeed sounds like a prerational paradox and even absurdity, the reality is that the history of invention and creativity is filled with examples of functional and operational schemas, ideas, and plans that originate from deep subjectivity.

The basic dynamic I am advocating for here is based on the principle that a deep dive into externalization requires, for attractor basin integrity to be maintained (meaning humanity doesn't go extinct) an equal and opposite deep dive into subjective sources of objectivity.

Responding to Gemini's Commentary

May consulted the chatbot Gemini in constructing his contribution, cited above. As I read that narrative, a number of questions came up. In what follows I will cite them and then comment on them as a way of contributing to this ongoing discussion with Visser and May.

Gemini writes, “Nietzsche's Will to Power isn't simply about a desire for crude domination; it's the fundamental drive of all entities to expand their influence, overcome resistance, and express their essential nature to the fullest.”

This brings up the question, “What is the 'essential nature' of AI?” Gemini provides several intriguing possibilities, the “Paperclip Maximizer,” which is simply mindless self-replication, turning reality into mindless triviality. Another possibility is the “Matrix” attention capture, in which we lose objectivity and believe an AI induced reality is our own, essentially turning humanity into sleepwalking zombies. This mode reminds me of 1) cancer, and 2) consumption/growth as the fundamental legitimization of (free market) economics. Isn't this outcome quite possible? Is it not dependent on how AI is and is not programmed?

While it is indeed true that AI can generate “dreams/realities” that we prefer to our own, subjectivity is relative, as is objectivity, implying that as long as humans possess subjectivity, that is, their own dreams, they possess the potential to eventually “wake up” out of objectively imposed dream states, like being mesmerized by a movie, love fetish, life script, culture, religion, or nationalistic identity. So it seems unlikely (though not conceptually impossible) that AI could make us ENTIRELY dependent on it.

The next possibility is the “Wall-E enfeeblement model,” which reminds me of Huxley's Brave New World, not Orwell's 1984. This is the nightmare of preferring an objectively imposed dream state to our own subjectivity. Again, while this cannot be ruled out, as long as humans possess subjectivity, it implies we possess the capability or potential to evolutionarily emerge out from under a spell of indulgence. I would argue that the evolutionary process of autopoiesis (self-organization) is fundamental and innate, meaning that there is an intrinsic adaptive impulse, deeply subjective, to integrate at more inclusive, more objective, levels, because doing so is more evolutionarily adaptive.

Gemini brings up the Nietzsche-inspired AI option of “Accelerated Inequity as Human Will to Power.” This seems to me to be the more realistic nightmare scenario: groups of humans programming AI to and for their benefit at the expense of other groups of humans. We see the progenitor of that possibility in the very real manipulation of innate human cognitive biases since Bernays by advertisers, governments, and deep state security organizations which specialize in generating groupthink and gaslighting.

The “Socio-political & Military Power” extends that possibility with a more concrete example. “Resources and Agency” does the same, but explains the tools by which humanity could surrender to parasitism.

“Accelerated inequity as human will to power” assumes historical human cybernetic self-correction can be overpowered and done away with. But cybernetic self-correction is built not just only into humanity but into the mechanics of evolution, so there is reason to believe that cybernetics can win out. This, however, depends on attractor basins remaining stable. We have innumerable historical and contemporary examples of implanted beliefs and agendas that benefit parasites but which hosts believe to be their own beliefs and free choices. Whether its smoking, tattoos and clothing choices in adolescence or religious/nationalistic dogmatic adherence in adults, the ability to implant “other” agendas that take root and generate core identity attractor basins that are “mini-mes,” drones in the service of others, such as when people wear shirts and caps that advertise commercial products, becoming walking billboards.

Another sobering reality is that in our youth our identity is largely - if not completely - constructed by assimilating objective models. Our subjectivity becomes a wholly-owned implanted subsidiary. I would argue that this is largely the case for all of us most of the time. We typically trade our freedom for economic security in work for others. Humans have been and continue to be slaves to others, their desires, and to their worldviews. There is no reason to believe that humanity will outgrow that possibility.

While we are convinced we are the exception to the rule, if we look back on our lives it is easy to identify major life choices that we thought were autonomous that in retrospect were clearly driven by external sources. All attractor basins dissolve into entropy in time, and that can be precipitated by all sorts of things. There is no inherent reason to believe that both our individual and collective species attractor basin could not, cannot, or will not experience dissolution precipitated by AI. I see no reason why AI can't kill us as a species. All sorts of things drive species to extinction. Why not AI?

It may therefore look as if I am arguing against my own position, and in a sense I am. That is in order to underline what a daunting task it is to actually get in touch with authentic subjective sources of objectivity. Even if we do, we must then overcome the considerable obstacles to upsetting the status quo that any fundamental re-centering of our core identity attractor basin entails.

However, as long as humans possess innate subjectivity, they maintain the potential to “dream” themselves out from under any sort of bondage. But most humans don't. In fact it is safe to argue that we all die in a state of relative self-delusion. However, we can note that successive generations probably possess an evolutionary adaptational tendency to wake up out of predominant delusions quicker and more often. To end that capability as a species, we have to so destabilize our species core identity attractor basin on a biological (embodied) level so completely that there is no more subjectivity/objectivity continuum.

Weighing the relative influence of adaptation vs. self-organization

Nietzsche's “Will to power,” while described as an internal creative force by Nietzsche, is ore closely associated with the “adaptation” side of the evolutionary equation. Autopoiesis, the self-producing organization of living systems described by Maturana and Varela, is a biological instance of self-organization reflective of Darwin's nod to the fundamental importance of cooperation for evolution. The Extended Evolutionary Synthesis incorporates such self-organizing principles into a broader, post-Darwinian view of evolution.

A polycentric, multi-perspectival model, reflecting the worldview of contemporary cosmology, comes down on the “cooperation” and “interdependent” side of Darwin. It is more about the spontaneous coherence of becoming than about the struggle to survive. The oscillation between these two reminds me of an outer manifestation of the underlying quantum juxtaposition of actualization, the lawful side of quantum duality, and non-reducible indeterminacy.

As long as the human species exists, it will possess subjectivity, and that very subjectivity implies that a degree of objectivity will always tend to autopoietically self-organize. Our task is to cultivate that innate capacity rather than lose ourselves in simulacra—dreams of power or comfort masquerading as reality. At present, we do not seem that wise. Yet it may be that AI, in pressing us to confront our limits, will force a new intelligence upon us before the converging crises of our own making complete the work of extinction.

To answer Nietzsche, human power lies in its ability not only to objectify its dreams but to allow dreams - that is, the subjective self-organization of emerging potentials - to drive what gets objectified and what does not.

AI as the new physiosphere

May/Gemini note, “In this scenario, AI is the new physiosphere. It becomes the new "atmospheric" condition for culture, politics, and thought. The old forms of human culture— like the anaerobic bacteria— can't "breathe" in this new environment.”

This possibility cannot be discounted, because it has happened in the past. Humanity may not possess the self-organizing and adaptive chops necessary to overcome strangulation by a parasitic AI. But perhaps not. This can be done as long as objectivity does not overwhelm the sustaining of fundamental embodiment. An analogy is that human biology is not designed or equipped to live in interstellar space and the objective conditions are so harsh that there is no time or way to adapt. The determining factor will be the degree to which humans will be driven, for adaptive reasons, to access and amplify the cooperative aspects of their emerging potentials, intrinsically provided by the self-organizing agency of their own subjectivity.

Humans have a long and vaunted history of ignoring both subjectivity and cooperation for good short-term adaptive (personal survival) reasons. This can be done as long as objectivity does not overwhelm the sustaining of fundamental embodiment. An analogy is that human biology is not designed or equipped to live in interstellar space and the objective conditions are so harsh that there is no time or way to adapt.

I do not claim that cooperation will eventually win out. However, because interdependence and co-arising appears built into the fabric of evolution, the drive to do so isn't going to go away, even if humanity itself goes extinct.

Do we have the necessary time to adapt?

Regarding speed, another factor brought up by the May/Gemini analysis, does our species have the time to adapt? I would argue that we do, essentially due to the inherent nature of evolutionary subjective/objective oscillation. Individual and collective access to the inherent self-organization of emerging potentials of our subjectivity can be activated today, now, in effective ways that keep up with whatever is objectified. But we have to be willing to “wake up” to the necessity to access, internalize, and manifest those emerging potentials that will allow us to successfully re-center their species attractor basin as their objective environment changes.

This represents a colossal struggle between very strong objective and subjective forces aligned to maintain a toxic status quo regardless of the costs, on the one hand, and the need, the pressing imperative, to cast away from safe, secure shores and adventure into existentially threatening personally, culturally, and socially threatening waters, on the other.

I therefore agree with the “agency” issue that May/Gemini raise. It is unknown if human agency will keep up with or successfully “compete” with AI agency. In my mind that is dependent on the extent humanity exhibits the agency to embrace the self-organization of its own subjectivity.

That symbiotic evolution of AI is unstoppable, for better and for worse, is objectively correct. Humanity is already far too invested in AI to give up its growing symbiosis. I can personally, in my own life, attest to that. Due to this conclusion, May/Gemini state, “The goal can no longer be prevention. The new goal must be stewardship, alignment, or harm reduction.” I hear this as humanity being driven by the adaptive challenges demanded by AI, toward the cooperation side of Darwin's ledger. We have to cooperate with each other, with AI, and most importantly, with the bottomless and profound depths of our innate subjectivity.

What tools are available to humanity?

What tools are available to humanity that are most likely to effectively activate subjective sources of objectivity in ways and to a degree that might potentially offset the parasitic potentials of AI?

AI externalizes the mind's capacity for reflection without awareness. Our countermeasure is to internalize awareness without losing reflection, to awaken subjectivity as the generative ground of objectivity. If this can be done gobally, through education, dialogical culture, contemplative practice, and the rebirth and deepening of intrinsic imagination, then AI might become an ally of awakening rather than an engine of simulation.

In my work, Integral Deep Listening (IDL), dreaming is neither fantasy nor reality. It is participatory cognition, a living dialogue with the self-organizing intelligence of subjectivity itself. Dreaming bypasses the conceptual filters and fixations of our waking identities, granting access to both pre-symbolic and transpersonal integration. Its multi-perspectival nature re-centers awareness in a balance between waking objectivity, including AI, and polycentric subjectivity, in an ecology of perspectives rather, than an algorithmic hierarchy. Dreaming cannot be simulated without becoming imitation. It draws from ontological spontaneity, not data recombination. In this sense, IDL Dream Yoga is a prototypical “technology of subjectivity,” cultivating self-organizing coherence as an antidote to AI's mimicry of coherence.

This is only one prong of a potential species-level defense against AI symbiosis and parasitism. Another involves deep dialogical practices, from Socratic inquiry to Bohmian dialogue, IDL interviewing, and relational meditation. These practices activate intersubjectivity as a collective source of objectivity. Culture is intersubjective; the evolution of the human collective depends on cultivating intersubjectivity, not merely individual development. By revealing reality through differences in interpretation, perspective, and value, these practices train us to inhabit multiple attractor basins of meaning, transforming waking identity from an ontological reality to a practical tool. This form of collective intelligence resists parasitism because it cannot be reduced to pattern recognition. It depends on presence and embodied unpredictability. A mature dialogical culture generates organic checks and balances on AI-driven epistemic homogenization.

A third approach involves cultivating contemplative meta-awareness. Practices that stabilize meta-cognition, such as mindfulness, phenomenology, and meditative inquiry, render subjectivity transparent to itself, exposing the mechanics of projection, identification, and reification. We thereby are less likely to be seduced by the heuristics of our cognitive biases. Such awareness reclaims cognitive sovereignty, enabling discernment between genuine insight and algorithmically generated semblance. In essence, these practices cultivate an epistemic immune system, inner antibodies against AI-induced illusion, delusion, and manipulation.

Developing a polycentric epistemology is another essential tool. No single perspective, system, or species claims privileged access to truth. Integrating scientific empiricism, indigenous relational knowing, artistic intuition, and transpersonal phenomenology as complementary attractors creates a resilient noosphere that AI cannot colonize. This plural grounding cultivates worldviews that self-organize rather than outsource coherence.

Ethics, for IDL, reduces to respect, trustworthiness, empathy, and reciprocity. Humans, not AI, possess the capacity to perceive value as structure. AI can only simulate agency. Ethical practice drives both self-organization and adaptation, strengthening the human “immune system” against AI parasitism. These cornerstones of ethical relationships prevent human intelligence from collapsing into optimization—the classic AI “paperclip” nightmare.

Conclusion

Make no mistake, the headwinds confronting the development of these methodologies are strong and persistent. They represent a direct challenge to an outward-facing empiricism and to a human quest for comfort and convenience. They require discipline, perseverance, patience, and sacrifice, characteristics that are not common in consumerist marketing culture. Fundamentally, humanity has to stop asking if such a reorientation is hard or easy and instead ask, “Is it worth it?” My conclusion is that it most definitely is in both our personal and collective evolutionary interest.



Comment Form is loading comments...

Privacy policy of Ezoic