Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
David Christopher Lane, Ph.D., Professor of Philosophy, Mt. San Antonio College; Lecturer in Religious Studies, California State University, Long Beach; author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).
Brandon Gillett's research focuses on harnessing quantum computing to enhance the capabilities of machine learning systems. He has been programming computers since he was sixteen and has taken courses in machine learning and quantum computing. He is currently in the Honors Program at Mt. San Antonio College.
The Qualia Question
What is the Evolutionary Advantage of Subjective Awareness?
A Two-Minded Discussion
Brandon Gillett and David Lane
I am arguing that human consciousness is substrate neutral and that any device that housed the same complexity as the brain would de facto have awareness.
— David Lane
Brandon Gillett: I am having a problem with your interpretation of the purely physical basis of consciousness. You claim that evolution provides the necessary explanation for the origin of consciousness. You say that creatures which developed the ability to simulate their surroundings effectively were more fit to survive in their environment, and that evolutionary forces therefore selected for such self-reflective awareness. The way I see it, there seem to be two distinct realms of existence. The first is a realm of pure behavior; the second is one in which qualia (what it feels like to be someone) are fundamental. One is an objective set of mutually verifiable facts; the other is a subjective awareness of sensory experiences.
In order to make the claim that evolutionary processes somehow gave rise to the feeling of qualia, there must be an explanation for how subjective experiences provide a benefit to the organism which pure behavior could not produce. I don't see any evidence to suggest this. For example, people still argue feverishly over whether or not animals are conscious. The reason that this has been such a challenging argument to settle is precisely because, from the perspective of an outside observer, the way in which an animal behaves is entirely independent of whether it has a conscious experience of the world or not.
David Christopher Lane: I understand the dualistic distinction between outward behavior and the subjective feeling of what it is like to undergo varying experiences. However, I do think that our own experience of consciousness (what philosophers have termed qualia) is helpful in our evolutionary struggles, since it not only allows us to virtually simulate possibilities that mere instinctual responses cannot, but in turn provides us with a rich emotional feedback system to embolden our future responses. Let's take the experience of surfing as an example. It is a physical behavior (riding a fiberglass board on an ocean-generated wave) that can be watched by anyone sitting on the beach (third-person observation). But the feeling of what it is like to actually ride the wave (the first-person narrative) is lost. What does that feeling do for the surfer that is different, or that could possibly alter his future actions? Ah, it may make him or her more desirous to surf more and therefore prompt a move from the valley to the beach. Our inner feelings change our outer actions, so in this sense consciousness is not merely an epiphenomenon (like exhaust from a lead-fueled car from the 1960s), but an instrumental process that can change the course of one's behavior.
Brandon Gillett: Would it be accurate, then, to say that you believe the instrumental purpose of qualia is purely motivational? If not, what other role might consciousness fulfill for an organism? If so, what makes motivation by subjective experience superior to other forms of motivation such as “fitness functions” based on mathematics, or other analytic methods of directing behavior? Machine intelligence has been using these strategies with quite favorable results. Why is it preferable to process qualia rather than pure data?
David Christopher Lane: I don't know about the phrase “purely motivational,” but I do know that having an experiential feedback loop (with all the attendant emotions that can go with it) is helpful to any virtually simulating organism. Having the “feeling” of what it is like to do this action versus another has a beneficial value, since it can either motivate one to engage in a certain action or prevent one from doing it. Take that qualia out of the equation and a rich and multilayered resource is lost. Simply put, subjective awareness provides more, not less, information on how to navigate in a Darwinian landscape of eat or be eaten. We too process data, but not merely digitally. We are also analog processors, and because we have more aspects to our behavior than purely mathematical machine intelligences, this provides us with yet another advantage.
Brandon Gillett: But consciousness does not provide any additional information to an organism. All information enters the organism through our standard sensory channels: sight, touch, smell, etc. But at the level of entry into our neurological systems, those senses capture nothing more than data. There is no consciousness at the level of a single neuron firing. That is, unless conscious systems are uniquely capable of capturing data which is fundamentally non-physical. But your position precludes non-physical phenomena.
Therefore, your theory implies that consciousness is actually just a unique case of data processing. Some special algorithm. But why do you claim that a conscious algorithm, if provided with all the same data as an unconscious algorithm, is more fit to influence behavior? If they are given all the same data to start, shouldn't they be capable of producing the same resulting behavior? And if the “conscious algorithm” is indeed the only algorithm capable of producing that level of effectiveness, isn't it the algorithm's effectiveness in terms of influencing behavior which is desirable, not the qualia that it produces? Doesn't this lead us to the conclusion that if consciousness is a purely physical phenomenon, it must be an epiphenomenon?
David Christopher Lane: Yes, much information does come from those various sensory channels, but once inputted, the central nervous system transforms that raw data into something that is more than its parts, just as individual water molecules when combined create an emergent property such as wetness that was not apparent before. Thus, consciousness is an emergent property that arises from sufficient complexity. While it is true that there is no consciousness (as we know it) at the level of the single neuron, the combinatorial effect of many neurons interlacing with each other produces awareness. We don't need to venture into non-physical realms to explain this.
You raise a key question, however, when you ask, “But why do you claim that a conscious algorithm, if provided with all the same data as an unconscious algorithm, is more fit to influence behavior?” First, I probably wouldn't use the phrase “conscious” algorithm, since the mechanism that gives rise to awareness doesn't have to have constituent parts that are in themselves aware. Second, ingredients separated from each other don't make a pizza; likewise, consciousness necessitates a minimum of informational complexity, which we can see by looking at the brains of differing species. Therefore, it may well be that once a certain functioning complexity arises in nature it has consciousness, so much so that if an artificial machine retained the exact same intricacy as a human brain, it would by definition also have internal awareness as a navigating system.
Ironically, I am arguing (though the jury is still out, so making conclusions at this stage is unwise) that human consciousness is substrate neutral and that any device that housed the same complexity as the brain would de facto have awareness. The external in this case is the internal, like an untangled Möbius strip. So nature doesn't have to evolve an outer shell which can house an ineffable inner; rather, the very architecture of the brain when properly functioning and connected (when turned on, so to say, with moving electrons) manifests as awareness. So, potentially, anything in nature can be conscious given sufficient complexity, which is Tononi and Koch's key argument. It also dovetails in some ways with Michio Kaku's hierarchical understanding of awareness.
I particularly like your concluding questions when you write, “isn't it the algorithm's effectiveness in terms of influencing behavior which is desirable, not the qualia that it produces? Doesn't this lead us to the conclusion that if consciousness is a purely physical phenomenon, it must be an epiphenomenon?” My own hunch is that consciousness isn't like the exhaust from a steam engine locomotive (the result of something prior but which in itself has no essential impact on its own causation). Rather, the brain and consciousness are two aspects of the same integrated whole. Perhaps Niels Bohr's idea of complementarity is analogously applicable here. Looking purely at the exterior, the interior gets hidden from view or becomes opaque. Looking from the inside outward, the exterior gets short shrift as a result.
Let me make it even simpler. Nobody gets terribly upset when we say that the combination of a female egg with a male sperm (given the right environmental habitat) creates a human being. However, given sufficient time that very zygote develops into a self-conscious person. Was anything supernatural added to the equation? Nope. But with ample complexity and time, that subjective being navigates in ways unpredictable at the level of mere cells. Consciousness is a synthetic simulator, so in theory any organism that can insource its actions before outsourcing them has an evolutionary advantage and thus may be naturally selected over time.
Brandon Gillett: First I want to clear up what I meant by the term, "conscious algorithm". What I was trying to describe is an algorithm which produces consciousness, not one that is comprised of conscious parts. Perhaps "consciousness producing algorithm" would have been a better term. In any case, that was a poor choice of words on my part.
Secondly, I don't agree with your position that consciousness is an emergent property stemming from sufficient complexity. The emergence of "wetness" can be broken down into more basic patterns governing how the atoms within the system interact with one another. The term wetness describes an aggregation of physical events which take place when a large number of individual particles evolve in a particular way within the boundaries of certain conditions. Wetness does not describe anything which is impossible at a more detailed level of analysis, it is just a term we have developed for identifying a commonly observed pattern. It is a summary. But the same cannot be said for consciousness. There is a qualitative break between a system which is capable of direct experience and one that is not. The quality of a subjective experience cannot be pulled apart into merely a series of physical events. The term consciousness identifies something which is fundamentally different from the underlying physicality. Consciousness is governed by rules that do not seem to apply to purely physical systems. The properties of consciousness are more than just a summary.
I think people often describe consciousness as an emergent property in order to obfuscate the fact that we have no idea how to get from a set of well-defined physical laws to a phenomenon as mysterious as consciousness. Calling something emergent just creates a black box where you put in purely physical phenomena and somehow out the other end pops consciousness. It doesn't tell us anything about how it could be possible to create subjectivity from objectivity.
David Christopher Lane: This is a brilliant correction and rejoinder. I particularly like how you have demonstrated the fundamental difference between wetness as a descriptive property which arises from a conglomeration of water molecules and consciousness which I and others claim arises from a sufficiently complex interaction of neurons. Your point is well taken since wetness is a summary that is not radically divorced from its constituent atomic bits, whereas consciousness emerging from a complex set of neurons and their discharges is an entirely different beast.
As you say, “consciousness identifies something which is fundamentally different from the underlying physicality.” This appears to be true, except if what we understand about physicality itself is incomplete and it is literally more “conscious” or “alive” than earlier empiricists thought possible. For this reason, some philosophers and neuroscientists have aligned themselves with a new form of panpsychism, defined as “the doctrine or belief that everything material, however small, has an element of individual consciousness.”
Olivia Goldhill, writing for Quartz, explains how panpsychism aligns with the work of Giulio Tononi's Integrated Information Theory (IIT) and also Christof Koch's work at the Allen Institute for Brain Science in Seattle, Washington:
“One of the most popular and credible contemporary neuroscience theories on consciousness, Giulio Tononi's Integrated Information Theory, further lends credence to panpsychism. Tononi argues that something will have a form of “consciousness” if the information contained within the structure is sufficiently “integrated,” or unified, and so the whole is more than the sum of its parts. Because it applies to all structures—not just the human brain—Integrated Information Theory shares the panpsychist view that physical matter has innate conscious experience.”
Michio Kaku in his book, The Future of the Mind, has also developed a theory of consciousness that dovetails in many ways with Tononi's and Koch's, particularly in his description of a hierarchy of conscious states. Moreover, it aligns with my notion that our awareness evolved because it allowed us to simulate varying options before acting upon them. As Kaku explained to Luba Ostashevsky and Kevin Berger at Nautilus magazine,
“So I think that consciousness is the set of feedback loops necessary to create a model of our place in space with relationship to others and relationship to time. So take a look at animals for example. I would say that reptiles are conscious, but they have a limited consciousness in the sense they understand their position in space with regard to their prey, with regard to where they live, and that is basically the back of our brain. The back of our brain is the oldest part of the brain; it is the reptilian brain, the space brain. Then in the middle part of the brain is the emotional brain, the brain that understands our relationship to other members of our species. Etiquette, politeness, social hierarchy—all these things are encoded in the emotional brain, the monkey brain at the center of the brain. Then we have the highest level of consciousness, which separates us from the animal kingdom. You see animals really understand space, in fact better than us. Hawks, for example, have eyesight much better than our eyesight. We also have an emotional brain just like the monkeys and social animals, but we understand time in a way that animals don't. We understand tomorrow. Now you can train your dog or a cat to perform many tricks, but try to explain the concept of tomorrow to your cat or a dog. Now what about hibernation? Animals hibernate, right? But that's because it's instinctual. It gets colder, instinct tells them to slow down and they eventually fall asleep and hibernate. We, on the other hand, we have to pack our bags, we have to winterize our home, we have to do all sorts of things to prepare for wintertime. And so we understand time in a way that animals don't.”
You have very clearly underlined our present conundrum when you posited the following, “Calling something emergent just creates a black box where you put in purely physical phenomenon and somehow out the other end pops consciousness. It doesn't tell us anything about how it could be possible to create subjectivity from objectivity.”
This is exactly the nexus point where I think the Möbius strip and Niels Bohr's concept of complementarity are the right metaphors for approaching the study of consciousness, since they attempt to steer clear of a purely dualistic understanding of a seemingly binary phenomenon. For me it comes down to that most fundamental of queries: what is matter, and what can it do when it is rearranged in intricately connected patterns?
My argument is actually quite simple: consciousness is ultimately physics at a higher order level. Nothing supernatural or extra physical needs to be added to the equation, just as nothing transcendental is inputted when we combine an egg and a sperm and a human baby is produced.
The sticking point is that we don't know at this stage how our brains produce self-awareness, only that they do. Therefore, I would suggest we don't have a philosophical conundrum, but rather a purely technical and scientific one.
The impasse we presently see, I suggest, is that our neural engineering is still in its infancy. If we can reverse engineer the brain and produce mathematically precise simulations of our cranial architecture, we should be able to know whether or not consciousness is indeed a physical/biological property. The great thing is that this thesis is falsifiable and given our increasing technological prowess an answer is not too far off.
Brandon Gillett: This has been a very challenging conversation for me; I have to say thank you for introducing me to so many linguistic tools for discussing the nature of consciousness.
It seems to me that an appeal to panpsychism would violate the initial constraint that consciousness is purely physical. If it happens to be true that physical particles carry elements of something conscious, I do not think that it would be fair to automatically redefine the concept of “physical” to include those conscious elements. This new phenomenon is still something other than what we understand the physical universe to be, at least using its current definition. I am hesitant to accept panpsychism, regardless of whether we call it purely physical or not, but I think my objections to it are best left for another conversation.
Your final paragraph leads me to a point I have been waiting to make. Namely that there is no such thing as a mathematical description of qualitative experiences. Even when technology advances to the point of replicating the mechanical structure of the brain, we will have no way of knowing whether or not the artificial subject is truly conscious. The only evidence we might have is to take the subject at its word. But it is not so unreasonable to think that an artificial brain might be capable of perfectly replicating human behavior (including telling everyone that it is conscious), while not truly experiencing a subjective point of view. In fact, each individual conscious being has no way of verifying that anything other than itself is truly conscious. Personally, I believe that there are many conscious beings in this universe, but the only way to accurately verify that another being has a conscious experience would be to somehow combine our own conscious experience directly with the experience of another. In any case, without a mathematical description of what consciousness is, I don't believe that we can ever call it "physical". It is my belief that we will never find a way to quantify subjective experiences.
David Christopher Lane: Brandon, you are in good company when you say that we only know our own subjectivity and cannot, via third-person analysis, know whether another person or thing also has it. All we can do is infer such on the basis of external behavioral signs; even if we were lobotomizing a person (not a fun thing to do if they are indeed “conscious” like us), we would still be on the outside looking in. As Thomas Nagel argued in his October 1974 paper, “What Is It Like to Be a Bat?” (The Philosophical Review):
“If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.”
John Searle, the distinguished, though now disgraced, former professor of philosophy at U.C. Berkeley, argues that any comprehensive study of consciousness must start first and foremost with the first-person experience and not merely rely on third person, objectivist accounts. The former is the qualitative, subjective aspect of consciousness whereas the latter is the quantitative, exterior aspect.
Interestingly, even atheists such as Sam Harris don't believe that we will be able to fully understand consciousness given its radical interior nature. He belongs, at least partially, to a philosophical school known pejoratively as “new mysterianism”—a term first coined by Owen Flanagan back in 1991—which argues “that the hard problem of consciousness cannot be resolved by humans.” Thinkers such as Noam Chomsky, Colin McGinn, Roger Penrose, Jerry Fodor, and Steven Pinker strongly suggest that humans may be cognitively unable, in principle, to know “what subjective experiences another person is having.” Therefore, the hard problem of consciousness may be unsolvable.
As I argued in the essay, “Inside/Outside,”
“In the study of consciousness, it appears we may have to confront an epistemological complementarity where any objective study (via third-person analysis) of qualia must by necessity lose in translation a fundamental feature of the very phenomenon under inspection. Conversely, any purely subjective endeavor to explore consciousness must by its very act forego any attempt to maximally objectify what is experienced, lest the experience itself be lost in attempting to exteriorize that which is de facto interior. A broken-down melody is, to quote one distinguished musician, no longer a melody. Similarly, a broken-down consciousness is no longer itself, and therein lies the ‘Sam Harris dilemma.'” (Mysterianism 101?)
I personally don't subscribe to mysterianism, at least not yet. I think it is premature to abandon ship at this point in the journey, even if the sails are a bit tattered and we haven't seen land for decades. Let's allow neuroscience to finish its course of study, and if it comes up short, at least then we can know that it was not for want of trying. I call my tentative position the Remainder Conjecture:
“My own hunch is that the most fruitful avenue for the scientific study of awareness is to fully exhaust a physical explanation of it first. This does not mean, of course, that such an endeavor will be successful or that consciousness is merely the result of a neural net, but only that if our efforts fail we will be left with a most interesting remainder which in itself will be highly instructive about the nature of awareness.”
I am now ready to explore your theory of consciousness.
Olivia Goldhill, “The idea that everything from spoons to stones is conscious is gaining academic credibility,” qz.com, January 27, 2018.
Luba Ostashevsky & Kevin Berger, “Michio Kaku Explains Consciousness for You: The gregarious physicist gets inside our brains,” nautil.us, June 5, 2014.