Andrea Diem-Lane is a tenured Professor of Philosophy at Mt. San Antonio College, where she has been teaching since 1991. She received her Ph.D. and M.A. in Religious Studies from the University of California, Santa Barbara. Dr. Diem earned her B.A. in Psychology from the University of California, San Diego, where she conducted original research in neuroscience on visual perception on behalf of V.S. Ramachandran, the world famous neurologist and cognitive scientist. Professor Diem has published several scholarly books and articles, including The Gnostic Mystery, When Scholars Study the Sacred and When Gods Decay. She is married to Dr. David Lane, with whom she has two children, Shaun-Michael and Kelly-Joseph. Also available as e-book. Republished with permission.
Part I: Explaining Evolution | Part II: Explaining Consciousness | Part III: Recommended Readings
Explaining Consciousness
Darwin's DNA: A Brief Introduction to Evolutionary Philosophy, Part 2
Andrea Diem-Lane
In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history.
--Charles Darwin
Take away the human brain and you take away all of philosophy.
Why do we think the way we do? Or, more in line with the theme of philosophy, why does the question "why" arise so much in human beings?
There have been, to be sure, many answers to these queries from time immemorial. Countless religions have been created to resolve these perennial questions, each with differing degrees of success. Even science has gotten into the fray with the blossoming of psychology as a distinct discipline in the latter part of the 19th century.
Charles Darwin made a very pregnant prediction near the end of his classic 1859 tome when he opined that evolutionary theory would radically transform other fields of research. He even went so far as to say that "Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation." He could have just as easily substituted Philosophy.
While there have been some remarkable developments in evolutionary psychology, a field previously known more controversially as sociobiology, there hasn't been the same attention given to philosophy. Historically, this may be due to the fact that Herbert Spencer, an early champion of fusing philosophy and evolution and a quite popular advocate of such during his lifetime, became something of an anathema during the latter part of the 19th and early 20th century because of some of his more controversial views, particularly Social Darwinism. As the entry on him in Wikipedia notes: "Posterity has not been kind to Spencer. Soon after his death his philosophical reputation went into a sharp and irrevocable decline. Half a century after his death his work was dismissed as a 'parody of philosophy' and the historian Richard Hofstadter called him 'the metaphysician of the homemade intellectual and the prophet of the cracker-barrel agnostic.'"
Combining philosophy with evolution can be fraught with peculiar dangers, not the least of which is a tendency towards what Dennett has called "cheap reductionism," explaining away complex phenomena instead of properly understanding them. Nevertheless, it is even more troublesome to ignore Darwinian evolution because it illuminates so many hitherto intractable problems ranging from epistemology to ethics.
The new field of evolutionary philosophy, unlike its aborted predecessors, is primarily concerned with understanding why Homo sapiens are philosophical in the first place. It is not focused on advocating some specific future reform, but rather on uncovering why humans are predisposed to ask so many questions which, at least at the present stage, cannot be answered. In other words, if evolution is about living long enough to transmit one's genetic code, how does philosophy help in our global struggle for existence?
To answer that question and others branched with it, one has to deal with the most complex physical structure in the universe: the human brain. It is upon this wonder tissue, what Patricia Churchland has aptly called "three pounds of glorious meat," that all of human thought, including our deep and ponderous musings, is built. Take away the human brain and you take away all of philosophy.
Therefore, if we are to understand why philosophy arose in the first place, we have to begin by delving into the mystery of why consciousness itself arose. And to answer that question we first have to come to grips with Darwin's major contribution to evolutionary theory: natural selection. Why would nature select for awareness, especially the kind of self-conscious awareness endemic to human beings, when survival for almost all species is predicated upon unconscious instincts? What kind of advantage does self-reflective consciousness confer that would allow it to emerge and develop over time?
I thought long and hard over this question and for many years I never found a satisfactory response. It was only when I began working as a Research Assistant for Professor V.S. Ramachandran at UCSD, where we focused on visual perception, that I realized that the answer to consciousness was intimately related to Darwinian natural selection. When I started to think in evolutionary terms, it became clearer to me that I had been asking the wrong sorts of questions, especially as I tended to relate my philosophical musings with my religious upbringing.
It may be simply a coincidence, but I noticed that after I had met Francis Crick at a dinner party hosted by Ramachandran in his La Jolla home my questions became more focused and thus I proceeded along a more fruitful line of inquiry. Ironically, I didn't know that the person I was introduced to at the party was a world famous scientist who along with James Watson had discovered the double-helix structure of DNA. Ramachandran simply introduced the Nobel laureate as Francis. Since we were the first guests at the party we ended up chatting on a nearby couch, and we both became so absorbed in our conversation that we talked for nearly two straight hours. At the time I thought Francis was playing second fiddle to his artist wife, since he never gave any indication of his remarkable credentials. For all I knew he was either retired or out of a job. When I asked Crick what he did for a living he smirked and said, "Ah, I dabble in a little bit of this and a little bit of that," never once revealing his vast background in molecular biology.
Francis learned of my interest in Eastern Philosophy, Gnosticism, and my advocacy of vegetarianism. He tried to provoke me a bit, but because I didn't think he was all that knowledgeable in these areas I passionately, but hopefully reasonably, gave him my counter-arguments. Crick seemed slightly bemused by my candor and complete lack of awe in his presence. I even had the audacity to suggest that he could help me out in the lab at UCSD with the experiments we were doing. It was only later, to my chagrin (and horror), that I realized I had been debating with such an eminent mind. Thankfully, and to his great credit, Francis Crick never let on during the entire conversation who he was.
What stood out to me both during and after our conversation was Crick's insistence that I should focus my graduate studies on neuroscience, since he argued that the brain was the key to unraveling many of life's greatest mysteries. Although I didn't at the time take Francis' advice, his suggestion stayed with me over the years and has clearly influenced my thinking on how to approach the study of consciousness.
My husband (who did his Ph.D. at UCSD) was also interested in why consciousness evolved and had since 1991 embarked on an intensive study of physics and neuroscience to better resolve the issue.
I still remember the day he came home from his teaching duties at CSULB and exclaimed that the answer to the riddle of consciousness was remarkably simple and obvious. So obvious, in fact, that he wondered why it hadn't dawned on him much earlier. In his typically Socratic way, he peppered me with a series of loaded questions designed to make clearer his epiphany. Watching our son growing up, said my husband, was the triggering event. As my husband recounted in his diary of that day,
I am learning more about the human brain and philosophy from Shaun than I ever did from books. It is now very obvious and clear to me that whatever questions we ask of the universe arise because of the architecture of our brain. More precisely, philosophy is the result of differing brain states and upon that contingent scaffolding we come up with varying questions to ask of the world and its participants, though we never seem to realize that such questioning has less to do with reality per se and more to do with our own evolutionary needs. Ah, I can put it better yet: philosophy is like heartburn. It is the natural result of something that didn't digest well. I will call it brain burn.
What does such a neologism mean? Every deep question we have, every deep thought we ponder, is the result of the confusion of a neural system when confronted with its own dissociation. Consciousness is dissociation. And therein lies its Darwinian advantage: since most of our awareness is in our head, it doesn't have to face the very real and empirical and deathly consequences of being without. Being within survives. Being without tends to end up dead. So consciousness arises as dissociation so it can play out (via its internal machinations... what we call imagination/daydreaming) without physical harm alternative scenarios to secure its Four F'S: F..k, Food, Flee, Fight. Consciousness is literally a virtual simulator and that is why it has been so helpful in allowing humans to survive globally, even when our bodies were not adapted to certain environmental niches.
If you can imagine without real consequence, then you have a better chance of living if you have already played out (internally, but not externally) competing strategies. Those without consciousness don't have this liberty and thus when they do play out a choice, they do so in a real world. And in such a real world, if it doesn't work you are eaten. In imagination, in consciousness, you can play as if it is real and project all sorts of end game earnings to see which one would be to your advantage. Consciousness is the brain's way of making chance/chaos (read nature) more plastic, more pliable, more beneficial to the host organism.
What is the best way to survive chance contingencies? By developing a statistically deep understanding of what varying options portend. Consciousness is a way around pure chance by developing an internalized map of probabilities which can be visualized internally without having to be outsourced prematurely. Being in your head is another way of avoiding being stuck with only a one-way avenue of recourse. Any reproducing DNA that can develop a virtual simulator within itself has a huge advantage over a genetic strand that cannot.
The gurus and mystics have it completely wrong. The world isn't an illusion; consciousness is. But even if we say consciousness is an illusion, which it is (in the sense that its stuff is literally composed of dreams and other sorts of puffery), that doesn't mean it isn't helpful to our survival. It is. Consciousness is a body's way of giving itself a better chance when confronted with the reality of Chance itself. Consciousness is probability functions envisioned.
Ah, but we humans are so naive. We were so mesmerized by the theatre of consciousness that we forgot that it was the body that was real, not its projected sideshow. So that I won't be misunderstood: what I am saying is the direct opposite of religion, of mysticism, of guruism. Consciousness isn't the strongest part of a human being. It is not going to survive. There is no soul. Nietzsche was right. Consciousness is the weakest element of a human being, though we believe otherwise. Film is fragile; so is our awareness. Nobody believes that a reel of film is going to reincarnate or survive in the afterlife. Awareness is Poker Self-Reflective. Chance giving itself better odds.
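The diary's "virtual simulator" argument can be made concrete with a small toy model, written here in Python purely as an illustration (the action names and survival numbers are invented for the example and come from no study or source). One agent commits to the first response that comes to mind, while a second agent "rehearses" each candidate response against a noisy internal model before acting, as the simulator idea suggests. Averaged over many trials, the rehearsing agent survives far more often.

import random

# Toy model (illustration only): each possible response to a threat has an
# assumed true chance of keeping the organism alive. The simulating agent
# "imagines" every response several times with a noisy internal model and
# picks the one whose imagined outcomes look best; the non-simulating agent
# simply acts on whatever option comes up first.

random.seed(1)

ACTIONS = {"flee": 0.9, "hide": 0.7, "freeze": 0.4, "approach": 0.1}  # true P(survive)

def act_without_simulation():
    return random.choice(list(ACTIONS))

def act_with_simulation(noise=0.2, rehearsals=5):
    def imagined_value(action):
        samples = [ACTIONS[action] + random.gauss(0, noise) for _ in range(rehearsals)]
        return sum(samples) / rehearsals
    return max(ACTIONS, key=imagined_value)

def survival_rate(policy, trials=10_000):
    return sum(random.random() < ACTIONS[policy()] for _ in range(trials)) / trials

print("acting on first impulse: ", survival_rate(act_without_simulation))  # roughly 0.5
print("simulating before acting:", survival_rate(act_with_simulation))     # roughly 0.85-0.9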
My husband had been reading various articles by Ramachandran and a number of books by the Nobel laureate in medicine, Gerald Edelman (most popularly known for his introduction of Neural Darwinism), in which they had implied that consciousness was more or less a virtual simulator. "That's it," Dave echoed. "Think about what consciousness does most of the time. It simulates its environment in order to make the best probable guesses about how to respond to it. But in order for such simulations to be wide-ranging, there has to be an allowance for lots of projections or end game results which are purely fantastical and have no concordance with empirical reality. Otherwise, the very basis of imagination would be too stilted and wouldn't be amenable to new situations, new environments, and new problems."
At this point I chimed in with Stephen Jay Gould's notion of spandrels, the unintended consequences or secondary effects of a more primary adaptation. If consciousness is an evolutionary adaptation which allows for any sophisticated strand of DNA to develop a virtual navigating device within itself, thereby increasing its odds by allowing for a prior contemplation of varying strategies before making a decisive and decidedly empirical decision, then it is quite reasonable to expect that there should be much in a virtual simulator which is merely imaginary and has no real world correlate. This would easily explain why many of our projections are delusional. If consciousness is a probability adjustor (played out in our minds replete with an emotional feedback loop), then sometimes we will opt for strategies that are indeed wrong, misguided, and illusory. They are just part of the odds. If this is true, then we should expect certain mental states which would deviate from the mean. Is this the basis for all mental diseases?
But as I was exploring these ideas with my husband, I realized that even calling something a mental disease implied an already fixed understanding of what constituted normal awareness. If consciousness is simulating its environment so as to better its odds when it does indeed make a real life choice, then much of its success is due to how well it actually matches up with and predicts the incoming stimuli. Consciousness could never have survived the brutal machinations of natural selection unless it was somewhat accurate in how it modeled its exterior environment. If it came up with simulations that were continually mistaken, it would have been eaten up by a predator long ago. No, consciousness must be on the whole a fairly accurate modeler of the outside world in order to confer the benefit necessary to evolve as a significant adaptation. That there will be glitches and mistakes and illusory detours is to be expected, but overall those have to be kept at a minimum since otherwise the very advantage that consciousness confers as a virtual simulator would be automatically lost. If consciousness were merely a projection of our temporal lobes, it wouldn't have any real world benefit, but would act as a punishing (and ultimately eliminating) detriment.
As we discussed this theory of consciousness further, we started to see the powerful utility of Gerald Edelman's binary idea of Second Nature. Edelman makes a distinction between first order and second order consciousness, or what he calls first nature and second nature levels of awareness. As the Wilson Quarterly review of his book, Second Nature, explains:
In Second Nature, Nobel Prize winning neuroscientist Gerald Edelman proposes what he calls "brain-based epistemology," which aims at solving the mystery of how we acquire knowledge by grounding it in an understanding of how the brain works.
Edelman's title is, in part, meant "to call attention to the fact that our thoughts often float free of our realistic descriptions of nature," even as he sets out to explore how the mind and the body interact.
Edelman suggests that thanks to the recent development of instruments capable of measuring brain structure within millimeters and brain activity within milliseconds, perceptions, thoughts, memories, willed acts, and other mind matters traditionally considered private and impenetrable to scientific scrutiny now can be correlated with brain activity. Our consciousness (a "first person affair" displaying intentionality, reflecting beliefs and desires, etc.), our creativity, even our value systems, have a basis in brain function.
The author describes three unifying insights that correlate mind matters with brain activity. First, even distant neurons will establish meaningful connections (circuits) if their firing patterns are synchronized: "Neurons that fire together wire together." Second, experience can either strengthen or weaken synapses (neuronal connections). Edelman uses the analogy of a police officer stationed at a synapse who either facilitates or reduces the traffic from one neuron to another. The result of these first two phenomena is that some neural circuits end up being favored over others.
Finally, there is reentry, the continued signaling from one brain region to another and back again along massively parallel nerve fibers. Since reentry isn't an easy concept to grasp, Edelman again resorts to analogy, with particular adeptness: "Consider a hypothetical string quartet made up of willful musicians. Each plays his or her own tune with different rhythm. Now connect the bodies of all the players with very fine threads (many of them to all body parts). As each player moves, he or she will unconsciously send waves of movement to the others. In a short time, the rhythm and to some extent the melodies will become more coherent. The dynamics will continue, leading to new coherent output. Something like this also occurs in jazz improvisation, of course without the threads!" Reentry allows for distant nerve cells to influence one another: "Memory, imaging, and thought itself all depend on the brain 'speaking to itself.'"
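The slogan "neurons that fire together wire together," along with the image of a police officer strengthening or weakening synaptic traffic, describes Hebbian-style plasticity. A minimal sketch of such an update rule, written in Python as an illustration only (this is not Edelman's actual model, and the learning-rate and decay values are arbitrary assumptions), shows how frequently co-active connections end up being favored over others:

# Hebbian-style synaptic update (illustrative sketch only): the weight grows
# when the pre- and post-synaptic neurons are active at the same time, and
# slowly decays otherwise, so well-used circuits end up favored over others.

def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Return the new synaptic weight given pre/post activity in [0, 1]."""
    return w + lr * pre * post - decay * w

w = 0.2
for pre, post in [(1, 1)] * 20:            # the two neurons keep firing together
    w = hebbian_update(w, pre, post)
print("correlated firing, weight grows:", round(w, 3))

w = 0.2
for pre, post in [(1, 0), (0, 1)] * 10:    # they fire, but never at the same time
    w = hebbian_update(w, pre, post)
print("uncorrelated firing, weight decays:", round(w, 3))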
In Edelman's view, sentient creatures, including human beings, have (to greater or lesser degrees) first order awareness or consciousness. But such awareness is merely rudimentary and is not yet self reflective; it is more akin to being aware of something exterior to one's self, but not yet being able to reflect upon what that means. It is like looking in a mirror but without any comprehension of what that reflection is or means. Second nature, or second order awareness, is when we look in a mirror and develop a feedback loop and have the ability to ponder over what it means. If first nature is a way for our senses to reach out to the world and receive stimuli and have our instincts respond accordingly, second nature is our ability to absorb such information and have the wherewithal to reconstruct models of varying probabilities about what this information means. Second nature allows us to simulate our environment in ways not possible with first order awareness. In this sense, it confers a dramatic Darwinian advantage because simulations allow for better odds in our ultimate reactions to whatever stimuli or information we encounter.
As the Edelman entry on Answers.com explains further:
Edelman was struck by a number of similarities between the immune system and the nervous system. Just as a lymphocyte can recognize and respond to a new antigen, the nervous system can respond similarly to novel stimuli. Neural mechanisms are selected, he argued, in the same manner as antibodies. Although the 10^9 cells of the nervous system do not replicate, there is considerable scope for development and variation in the connections that form between the cells. Frequently used connections will be selected, others will decay or be diverted to other uses. There are two kinds of selection: developmental, which takes place before birth, and experiential. There are also innate 'values': built-in preferences for such features as light and warmth over the dark and the cold.
In Edelman's model, higher consciousness, including self-awareness and the ability to create scenes in the mind, has required the emergence during evolution of a new neuronal circuit. To remember a chair or one's grandmother is not to recall a bit of coded data from a specific location; it is rather to create a unity out of scattered mappings, a process called by Edelman 'reentry'. Edelman's views have been dismissed by many as obscure; some neurologists, however, consider Edelman to have begun what will eventually turn out to be a major revolution in the neurosciences.
As the Guardian newspaper elaborates:
The degree to which an organism is conscious is therefore dependent on the complexity of its brain. Large-brained mammals such as dogs have a core self-consciousness. Humans, perhaps uniquely, have a reflexive and recursive consciousness - we are conscious of being conscious.
Human consciousness is thus an evolved property, the inevitable consequence of having brains of a particular complexity.
What David and I found most intriguing in Edelman's theory was his fundamental understanding of what consciousness does. As Steven Rose summarizes Edelman's definition: "What consciousness does," he says, is to "inform us of our brain states and is thus central to our understanding."
Awareness, therefore, is indeed a theatre for the mind in which varying trajectories are given stage time to play out their respective hunches. But because this is a theatre and not the outside world as such, many things are acted out as if they were indeed real. And herein lies the ultimate danger of such an awareness: one can too easily mistake one's neural state for the real state of the world at large. Freud, whom Edelman both praises and criticizes, often remarked that one of his greatest discoveries was psychic transference. As Alun Jones explains:
Freud described transference as "new editions and facsimiles of impulses and phantasies" (1923/1953, p. 82) originating in the past. Instead of remembering, the person transfers: attitudes and conflicts are enacted in current relationships, sometimes with unfortunate results. Manifestations are likely to occur in all human encounters; feelings toward the significant other often begin to emerge early on in relationships.
Freud's notion of transference captures what consciousness evolved to do. For instance, in our early childhood our awareness developed several models about what was happening, especially in a situational crisis. If that model was believed and sustained to a given extent, then it is quite likely to be invoked again if a similar occasion arises. The problem, of course, is that the new dilemma may have little resemblance to the prior one and thus our projected transference may be of little help. But even here the transference is of use psychologically if we can see it for what it was and is: "information about our brain state and its attendant emotions."
While Edelman rightly discounts Freud's psychoanalytic technique, he does nevertheless appreciate Freud's rich and nuanced understanding of how consciousness may operate. Freud may have been technically incorrect in his analysis of why such mental problems arise and how to resolve them, but he was an insightful cartographer of psychic ailments.
The problem of consciousness is, as Freud rightly suspected, revealed by dreaming, since it gives us a clue about how consciousness must have first emerged. As we all know there are several stages of awareness in our dreams, ranging from the hallucinatory and intangible impressions one might get from a severe lack of rest to the elevated heights of lucidity when one experiences an acute sense of clarity and self-luminosity. Our waking state awareness is built upon the feeling of certainty, and the more we can believe what our mind projects, the more likely we are to act upon it, rightly or wrongly. Thus the hallmark of a heightened sense of awareness is how certain and definite it feels. This is an important feature to differentiate since if we lacked this clarity we would be less willing to act and to make choices. And if we hesitate in the face of a real danger (a lion, a tiger, a bear, oh my!), then we run the very real risk of being eaten. An awareness that couldn't make such fine tuned adjustments would have been eliminated long ago in our ancestral past.
But herein lies the problem. To the degree that consciousness "works" because it conflates our brain state with the current "real" state of the world around us, it runs the risk of not being pliable enough to adapt to a new set of circumstances. In other words, consciousness must have enough "plasticity" to make judgments that turn out to be wrong, provided those mistaken assumptions err on the side of being too conservative or too safe.
A classic example of this, albeit wildly overstated here for our purposes, is the hypothetical case of an animator versus a non-animator. Imagine for argument's sake that hundreds of thousands of years ago there existed two kinds of human beings. One tended to believe in animism, whereby ordinary objects possessed an "animated" or "vital" or "soul" force replete with the same kind of intentionality we have, though perhaps more mysterious. The other human being lacked such animating tendencies and thus had a more limited way of deciphering the motivations or intentions of other objects. Now, let's further imagine that both of these individuals were brothers sitting around a camp fire when there was the sound of a rustling bush. The animator, given his predisposition to project (Freud's transference writ early?), imagines all sorts of possibilities, and because of the emotions he attaches to each (and keeping in mind the conservative nature such a state of awareness must have in order to survive) will on average assume the worst-case scenario and thus run and hide to protect himself. The non-animator, however, will do no such thing and in his "realist" approach will most likely stay put and not animate anything whatsoever about the rustling bushes.
Now it may well be that the non-animator is correct much more often than the animator (most of the time it is just the wind against the leaves), but if the animator is right just 1/10 of the time his conservative, though animistic, approach has saved his life and thereby allowed him to live another day in order to pass on his genetic code. The non-animator, even if he was correct more often than his counterpart, ends up eaten and dead, thus reducing his chances to pass on his DNA.
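The arithmetic behind this toy scenario can be spelled out. Suppose, purely for illustration (the numbers below are invented for the example, not drawn from any study), that any single rustle has a small chance of being a real predator, that the animator always flees and therefore survives every rustle at the cost of many false alarms, and that the non-animator never flees and therefore survives only if no rustle over his lifetime was real. Even a modest per-rustle risk compounds quickly:

# Toy survival arithmetic for the animator vs. non-animator story
# (illustration only; both numbers are assumptions made up for the example).
p_predator = 0.10   # assumed chance that any single rustle is a real threat
encounters = 50     # rustles heard over a lifetime

p_animator = 1.0                                  # always flees, so never gets caught
p_non_animator = (1 - p_predator) ** encounters   # survives only if every rustle was wind

print(f"animator survives:     {p_animator:.3f}")
print(f"non-animator survives: {p_non_animator:.3f}")   # about 0.005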
While this example is hyperbolic to the extreme, it does underline something about animism and about why such an innate tendency may have evolved in human beings. Projections are future oriented and any awareness that can better predict the future will be of some survival benefit. But the future is unknown, and therefore making predictions about it is inherently a probabilistic venture. Therefore if consciousness arose at least to some small measure as a predictor of how to react to future but unknown events, then it must err on the side of caution. Otherwise, if awareness was too adventurous whatever advantages it gained would be prematurely lost with one bad outcome.
But this conservative approach of awareness would most likely only be necessary when an individual confronted what he or she perceived to be a "real" or "certain" crisis. In other words, a virtual simulator would only be cautious if it acted upon that simulation as if it related to a real event. If, however, the virtual simulator knew or believed beforehand that the simulation wasn't going to be invoked immediately and thus could be contemplated upon without any empirical test, then it could allow itself the freedom of a wide range of imaginings. It could, to invoke a cliché, "daydream." It could fantasize. It could conjure up all sorts of nonsense, provided that it wasn't forced into an early test case.
Here we are starting to see what dreaming portended for waking state awareness. Dreams, by definition, are subjective conjurings that arise when we are asleep. This is another way of saying that when consciousness doesn't have to "work" or be called into the line of fire, it has the freedom to mix and match all sorts of images and sounds and feelings into a Picasso-like universe.
This would be the precursor of a consciousness which had to act upon one of its modeling simulations. Dream first, act later. And this is apparently what consciousness has evolved to do and why it has been of such a huge advantage to Homo sapiens. Unlike other animals, which apparently do not possess Edelman's second nature, human beings can play out innumerable scenarios in the privacy and safety of their own heads until such time that they can draw upon this rich rolodex of imagined trajectories and select what they believe is the best approach or model to apply to the present circumstance or problem.
To appreciate how effective and powerful this tool can be for any competitive organism, just ask yourself this question: Who would you rather have fly your airplane, a pilot who never underwent any flight simulations or one who has spent hundreds of hours in a flight simulator? The answer is obvious. Indeed, one can draw from many other professions to see the inherent advantages to virtual modeling or simulation. Most of the top athletes in the world today, from Tiger Woods to Kelly Slater to Michael Jordan, have repeatedly emphasized the importance of visualizing their performance before it happens. Whether it is going over and over in your head how to shoot a basketball into a hoop, or how to stand up quickly to ride a wave at Pipeline, or how to line up a long putt, the more time that is spent simulating the event the better off one feels when actually confronting the real occasion.
So if consciousness did indeed evolve over long epochs of time, we should expect to see varying gradations of awareness depending more or less on the complexity of the neuronal architecture that gives rise to it. Human awareness, though appearing unique and distinct, represents one end of the spectrum of consciousness. If awareness, as Crick and others point out, is intimately connected to how our brain functions, then we should expect to see forms of higher awareness or even cognitive function in mammals closely aligned with us in our evolutionary past. And as recent studies in animal behavior have shown, this is precisely what researchers have found. As Donald R. Griffin points out in Animal Minds:
Must we reject, or repress, any suggestion that the chimpanzees or the herons think consciously about the tasty food they manage to obtain by these coordinated actions? Many animals adapt their behavior to the challenges they face either under natural conditions or in laboratory experiments. This has persuaded many scientists that some sort of cognition must be required to orchestrate such versatile behavior. For example, in other parts of Africa chimpanzees select suitable branches from which they break off twigs to produce a slender probe, which they carry some distance to poke it into a termite nest and eat the termites clinging to it as it is withdrawn. Apes have also learned to use artificial communication systems to ask for objects and activities they want and to answer simple questions about pictures of familiar things. Vervet monkeys employ different alarm calls to inform their companions about particular types of predator.
Such ingenuity is not limited to primates. Lionesses sometimes cooperate in surrounding prey or drive prey toward a companion waiting in a concealed position. Captive beaver have modified their customary patterns of lodge- and dam-building behavior by piling material around a vertical pole at the top of which was located food that they could not otherwise reach. They are also very ingenious at plugging water leaks, sometimes cutting pieces of wood to fit a particular hole through which water is escaping. Under natural conditions, in late winter some beaver cut holes in the dams they have previously constructed, causing the water level to drop, which allows them to swim about under the ice without holding their breath.
Nor is appropriate adaptation of complex behavior to changing circumstances a mammalian monopoly. Bowerbirds construct and decorate bowers that help them attract females for mating. Plovers carry out injury-simulating distraction displays that lead predators away from their eggs or young, and they adjust these displays according to the intruder's behavior. A parrot uses imitations of spoken English words to ask for things he wants to play with and to answer simple questions such as whether two objects are the same or different, or whether they differ in shape or color. Even certain insects, specifically the honeybees, employ symbolic gestures to communicate the direction and distance their sisters must fly to reach food or other things that are important to the colony.
Although these are not routine everyday occurrences, the fact that animals are capable of such versatility has led to a subtle shift on the part of some scientists concerned with animal behavior. Rather than insisting that animals do not think at all, many scientists now believe that they sometimes experience at least simple thoughts, although these thoughts are probably different from any of ours. For example, Terrace (1987, 135) closed a discussion of "thoughts without words" as follows: "Now that there are strong grounds to dispute Descartes' contention that animals lack the ability to think, we have to ask just how animals do think." Because so many cognitive processes are now believed to occur in animal brains, it is more and more difficult to cling to the conviction that this cognition is never accompanied by conscious thoughts.
Conscious thinking may well be a core function of central nervous systems. For conscious animals enjoy the advantage of being able to think about alternative actions and select behavior they believe will get them what they want or help them avoid what they dislike or fear. Of course, human consciousness is astronomically more complex and versatile than any conceivable animal thinking, but the basic question addressed in this book is whether the difference is qualitative and absolute or whether animals are conscious even though the content of their consciousness is undoubtedly limited and very likely quite different from ours. There is of course no reason to suppose that any animal is always conscious of everything it is doing, for we are entirely unaware of many complex activities of our bodies. Consciousness may occur only rarely in some species and not at all in others, and even animals that are sometimes aware of events that are important in their lives may be incapable of understanding many other facts and relationships. But the capability of conscious awareness under some conditions may well be so essential that it is the sine qua non of animal life, even for the smallest and simplest animals that have any central nervous system at all. When the whole system is small, this core function may therefore be a larger fraction of the whole.
Now turning our attention directly to philosophy we are in a better position to understand why the question "why" arises so often in human beings. In light of consciousness as a virtual simulator, any organism that can develop a mental "pivot" tool will have a tremendous advantage in thinking of new and unexpected strategies.
A curious, but hopefully useful, analogy can be derived here from a well known sport. In basketball, for instance, a seasoned player knows well how to use his or her pivot "foot." Once one has finished dribbling the ball, he or she must keep one foot firmly set on the ground. The other foot, however, is free to "pivot" or "revolve" or "turn," giving one options that the other foot doesn't.
Asking "why" is consciousness' pivot foot. It allows for a virtual simulator to turn and think of varying options and what they portend. It allows the mind to revolve and go into different directions. As F. Scott Fitzgerald essayed in his book, The Crack-Up: "The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." Why is the mind's way of allowing a multiplicity of ideas to compete and hopefully function better because of it.
"Why" is similar to an all-purpose function key on your laptop computer which opens up programs that are otherwise hidden from display. But though asking why can be quite helpful in very specific situations (why does it rain in the winter and not the summer, for example?), it can also serve as unnecessary nuisance if its protestations cannot be adequately met. Perhaps this can help us better understand the wide gulf between religion and science. We have already admitted that for a virtual simulator to be highly effective it must be able to conjure up all sorts of imagined nonsense, provided it doesn't have to always act upon such in a real life situation. Science, though clearly built upon wild speculations and imaginings, is differentiated from religion because it measures its successes by actually "testing" its varying models with each other and placing them in real life contexts to determine which one holds up best under rigorous conditions. Science, in other words, attempts to falsify what consciousness conjures up so as to see which model best explains reality. And in so doing, it allows for a cataloging of both its successes and failures. In this way, science can indeed progress because it has a built-in tendency to eliminate less successful theoretic conjectures. Religion, on the other hand, tends to accept certain simulations above all others without resorting to any empirical verification and habitually substantiates such imaginary permutations as being beyond physical testing. In this way the virtual simulation protects its integrity and truth value by pointing to a transcendent arbiter and thereby foregoing any real world competition lest it be eliminated by such testing. Is it merely coincidence that there are tens of thousands of religions in the world each claiming exclusive truths, but nothing comparable in the world of science. There isn't a Japanese physics or a Tibetan physics or an American physics. There is just physics. What country you come from is secondary. Gravity is universal and doesn't have different acolytes claiming different revelations in different tongues. But which geographical region you come from in religion isn't secondary, but primary, since as every geographer knows the gods change when you go to different landscapes.
Virtual simulation can also be instrumental in helping us better understand why belief systems can be so powerful, even when such ideologies can be regarded as wrongheaded or backward. Any meaning system, provided it allows one enough purpose and drive to live another day, is better than none at all. As the script from the movie, Truth Lies, explains it:
Our brains didn't evolve to understand the universe but to literally "eat" it (as in survive the local ecosystem niche long enough to transmit one's genetic code). Thus, we have invariably conflated our appetite with truth. We don't know the truth; we simply know what it takes for us to live skillfully enough so that we don't get eaten too young. And if we live long enough, the redundancies (or spandrels or secondary effects) of our enlarged cerebral cortex allow us the freedom to ponder imponderables and impute upon those mysteries all sorts of silly nonsense with the added caveat that such idiocy will last provided it floats our boat to live another day. Let me rephrase that: nonsense evolved as an adaptive function of an enlarged brain because believing nonsense makes MORE sense (in terms of replicating strategies) than coming to grips with the random chaos from which the universe apparently arises. Tom Blake might put it this way, "nature is without sentiment, but those who FEEL sentiments have a better chance of surviving this horror show than those who do not." Why? Because any meaning is better than no meaning even if the universe ultimately is devoid of purpose. As Voltaire would say in my twisted way of paraphrasing him, "Man would have to invent God even if such a being didn't exist." We cannot live without purpose, even if that purpose is an adaptive fiction evolved over eons of time designed to blind those with such sentiments from the truth that nature has no such sentiment. Ah, too much truth and you cannot move. Too much reality and you become autistic. Which is another way of saying that Jack Nicholson was right all along: we cannot handle the truth. But the truth surely has a good way of handling us. It lies to us in order for us to live an extra day. Think about it. The truth is that truth lies.
The idea that our brains could literally deceive us is now well established in neuroscience. Indeed, the brain's capacity for filling in objects that are not present is a vitally important component of how we navigate in our day to day world. As the abstract to "Perceptual filling-in from the edge of the blind spot" on Science Direct explains:
Looking at the world with one eye, we do not notice a scotoma in the receptor-free area of the visual field where the optic nerve leaves the eye. Rather we perceive the brightness, color, and texture of the adjacent area as if they were actually there. The mechanisms underlying this kind of perceptual filling-in remain controversial. To better understand these processes, we determined the minimum region around the blind spot that needs to be stimulated for filling-in by carefully mapping the blind spot and presenting individually fitted stimulus frames of different width around it. Uniform filling-in was observed with frame widths as narrow as 0.05° visual angle for color and 0.2° for texture. Filling-in was incomplete, when the frame was no longer contiguous with the blind spot border due to an eye movement. These results are consistent with the idea that perceptual filling-in of the blind spot depends on local processes generated at the physiological edge of the cortical representation.
The brain is forced to make these "lying" choices for us as part of its mapping expertise. We are not seeing or hearing or smelling or feeling or touching the world "as it is," but rather as our brains "simulate" it for our evolutionary survival.
In this way philosophy is a multi-faceted procedure to encourage more simulations versus less. When Socrates axiomatically stated that an unexamined life wasn't worth living, he was arguing that we should bring to light more information about why and how we make the decisions we do. In his own inimitable way, Socrates was provoking the virtual simulator which is consciousness to start using its pivot foot (the "Why?") with more dexterity.
Evolutionary philosophy is in many ways similar to the Churchlands' concept of eliminative materialism. As the Neural Surfer website elaborates:
When we scientifically advanced in astronomy, medicine, and physics we replaced the old and outdated concepts of our mythic past with new and more accurate terminology which reflected our new found understanding of our body and the universe at large.
Thus, instead of talking about THOR, the Thunder God, we talked instead about electrical-magnetic currents. Thus, instead of talking about SPIRITS, as the causes of diseases, we talked about bacteria and viruses. Thus, instead of talking about tiny ghosts circulating throughout our anatomies pulling this or that muscle, we talked about a central nervous system.
In sum, we "eliminated" the gods or spirits in favor of more precise and accurate physiological explanations. Hence, the term: "eliminative materialism."
As a materialistic explanation evolves over time, it will either eliminate or reduce hitherto inexplicable phenomena down from the celestial region to the empirical arena. And in so doing, help us to better understand why certain events transpire in our body, in our mind, in our society, and in our world.
Eliminative materialism is reason writ large.
The glitch, though, is that we have allowed eliminative materialism to change our thinking about almost everything EXCEPT ourselves.
When it comes to understanding our own motivations, we have (as the Churchlands point out) resorted more or less to "Folk Psychology," utilizing terms such as "desire", "motivation", "love", "anger", and "free will", to describe what we believe is happening within our own beings.
The problem with that is that such terminology arises NOT from a robust neuroscientific understanding of our anatomies but rather from a centuries-old MYTHIC/RELIGIOUS comprehension of our very consciousness.
And that's the rub.
Where we have moved away from such religious goo speak in the fields of physics, astronomy, chemistry, and biology, in talking about ourselves we are still stuck in pre-rational modes of discourse. Where astronomy reflects the LATEST theories of the universe, where medicine reflects the LATEST theories of diseases, in talking about ourselves we tend to reflect ANCIENT theories of human psychology.
We resist knowing ourselves as MERELY this body, this brain, this material.
As Patricia so astutely put it, "We don't want to be just three glorious pounds of meat."
Well, according to eliminative materialism, that is PRECISELY what "we" are.
And in order to get a better understanding of human consciousness, neurophilosophy argues that we focus our attention on developing a more comprehensive analysis of the brain and how it "creates" self-reflective awareness.
In so doing, we can then come up with a more neurally accurate way of describing what is going on within our own psyches (pun intended). Thus, instead of using the term "soul" we might instead use phase-specific words to describe the current state of awareness which are more neurologically correlated.
We have already done this slightly when it comes to headaches. Due to our increased attention to various pains and to the various drugs that are effective in treating them, we have become MORE aware of how to differentiate and thereby treat varying types of head pain. From Excedrin (very good for migraines because of the caffeine and aspirin combination) to Advil (very good for body and tooth aches).
Hence, the neurophilosophical way to understand one's "soul" is to ground such ideas in the neural complex.
What may transpire, as Francis Crick suggests in his aptly titled book The Astonishing Hypothesis, is that the soul will disappear.
Why?
Because there really is NO soul.
We are rather a bundle of neurons and nerve endings tied together in a huge neural complex that gives rise to consciousness.
There is nothing META (beyond) physical about us.
We ARE physical.
And that very insight will lead to a reinterpretation of who we are and why we are and how we are.
If consciousness does indeed serve as a virtual simulator with an amplified probability feedback loop, then it should come as no surprise that one of the more promising theories arising from neuroscience concerning how the brain works is based upon Bayesian probability theory.
From the New Scientist:
Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain. From this single law, Friston's group claims to be able to explain almost everything about our grey matter.
Friston's ideas build on an existing theory known as the "Bayesian brain", which conceptualises the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses.
The idea was born in 1983, when Geoffrey Hinton of the University of Toronto in Canada and Terry Sejnowski, then at Johns Hopkins University in Baltimore, Maryland, suggested that the brain could be seen as a machine that makes decisions based on the uncertainties of the outside world. In the 1990s, other researchers proposed that the brain represents knowledge of the world in terms of probabilities. Instead of estimating the distance to an object as a number, for instance, the brain would treat it as a range of possible values, some more likely than others.
A crucial element of the approach is that the probabilities are based on experience, but they change when relevant new information, such as visual information about the object's location, becomes available. "The brain is an inferential agent, optimising its models of what's going on at this moment and in the future," says Friston. In other words, the brain runs on Bayesian probability. Named after the 18th-century mathematician Thomas Bayes, this is a systematic way of calculating how the likelihood of an event changes as new information comes to light.
Over the past decade, neuroscientists have found that real brains seem to work in this way. In perception and learning experiments, for example, people tend to make estimates - of the location or speed of a moving object, say - in a way that fits with Bayesian probability theory. There's also evidence that the brain makes internal predictions and updates them in a Bayesian manner. When you listen to someone talking, for example, your brain isn't simply receiving information, it also predicts what it expects to hear and constantly revises its predictions based on what information comes next. These predictions strongly influence what you actually hear, allowing you, for instance, to make sense of distorted or partially obscured speech.
In fact, making predictions and re-evaluating them seems to be a universal feature of the brain. At all times your brain is weighing its inputs and comparing them with internal predictions in order to make sense of the world. "It's a general computational principle that can explain how the brain handles problems ranging from low-level perception to high-level cognition," says Alex Pouget, a computational neuroscientist at the University of Rochester in New York (Trends in Neurosciences, vol 27, p 712).
Friston developed the free-energy principle to explain perception, but he now thinks it can be generalised to other kinds of brain processes as well. He claims that everything the brain does is designed to minimise free energy or prediction error (Synthese, vol 159, p 417). "In short, everything that can change in the brain will change to suppress prediction errors, from the firing of neurons to the wiring between them, and from the movements of our eyes to the choices we make in daily life," he says.
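To make the "Bayesian brain" idea described above concrete, here is a minimal sketch in Python, offered purely as my own illustration under invented numbers (it is not Friston's or Hinton's actual model). A brain-like estimator holds a probability distribution over candidate distances to an object and revises that distribution each time a noisy measurement arrives, with the posterior proportional to the prior times the likelihood.

import math

# Minimal Bayesian update over a discrete set of candidate distances (metres).
# Prior beliefs are revised by each noisy measurement: posterior ~ prior x likelihood.

distances = [1, 2, 3, 4, 5]
prior = [0.2] * len(distances)          # start with all distances equally likely

def likelihood(measured, true, sigma=1.0):
    """Chance of observing `measured` if the true distance is `true` (Gaussian noise)."""
    return math.exp(-((measured - true) ** 2) / (2 * sigma ** 2))

def update(belief, measured):
    unnormalised = [p * likelihood(measured, d) for p, d in zip(belief, distances)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

belief = update(prior, measured=3.4)     # first noisy glance at the object
belief = update(belief, measured=2.8)    # a second glance sharpens the estimate
for d, p in zip(distances, belief):
    print(f"{d} m: {p:.2f}")             # probability mass piles up around 3 m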
Applying Friston's grand unified theory of the brain to consciousness, it can be argued that while minimizing prediction errors is elemental and particularly relevant for any modeling system working in the real world of life and death, the most marked feature of second order awareness is its dissociation from such real world onslaughts. The fact that our awareness can be freed from the present struggle for existence is perhaps why it is so useful in orienting us to future occasions. To make a crude analogy here, a drowning man doesn't have enough "free" time or energy to do philosophy. His first nature instincts must take over and his reptilian brain survival tools must kick into high gear. However, if the drowning man is saved and is allowed enough leisure time to reflect (another word for simulate) upon what happened to him, he may be able to play out an array of options for escaping just such a dilemma the next time it might happen to him. But this presupposes a surplus of both physical and mental energy which is not already obliged or preoccupied with any over-arching present conundrums.
In light of this necessity for leisure, it can be argued that philosophy can only be practiced in earnest when there is enough free time and energy. A pilot who is flying under enemy fire doesn't have the option to go and ruminate in his mock-up flight simulator. Likewise, philosophy can only arise in a sustained fashion in a culture which has ample time on its hands.
Going back to our basketball analogy, the center can't use his or her pivot foot if he or she is too closely guarded. The same may also be true to some extent with the mental use of our pivotal "why." If conditions are too severe, we won't have the freedom to turn our thinking around or rotate our concepts in new directions. If time is of the essence, philosophy is not. Indeed, one can even propose a syllogism on the basis of this questionable claim. If you are doing philosophy for any measurable period of time, it can be taken as conclusive proof that you have too much of it. This, of course, is not to disparage philosophical inquiry but only to underline how an evolutionary approach to the subject uncovers the basis for why such an endeavor would emerge in the first place and why it can only be practiced under certain optimal conditions.
To be continued: Recommended Readings