Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber

Andrew P. Smith, who has a background in molecular biology, neuroscience and pharmacology, is author of the e-book Worlds within Worlds and the novel Noosphere II, both available online. He has recently self-published "The Dimensions of Experience: A Natural History of Consciousness" (Xlibris, 2008).



A Reply to David Lane

Andrew P. Smith

David Lane's recent post, “The Synthetic Self”, briefly discusses Christof Koch's view that consciousness should be understood as integrated information (the theory is called integrated information theory, or IIT). This theory derives originally from work by Giulio Tononi and Gerald Edelman (Tononi and Edelman 1998; Edelman and Tononi 2000), who developed a definition of complexity in terms of information flow among different regions of the brain. More specifically, they proposed that complexity in a system results from a balance between differentiation or autonomy of its individual parts, on the one hand, and integration or interaction of different parts, on the other. They developed sophisticated mathematical formulas that allowed them to quantify, fairly precisely, the degree of complexity and information in any system, such as the brain.
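The flavor of such a measure can be conveyed with a toy calculation. The sketch below is an illustrative simplification, not Tononi's actual phi formula: it scores a two-part system by the information its halves share (their mutual information), which is zero when the parts behave independently and maximal when they are fully coupled.

```python
from collections import Counter
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def integration(joint):
    """Toy 'integration' of a two-part system: the mutual information
    H(A) + H(B) - H(A,B) between its halves. Zero when the parts are
    independent; positive when their states are correlated."""
    pa, pb = Counter(), Counter()
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return entropy(pa.values()) + entropy(pb.values()) - entropy(joint.values())

# Two perfectly correlated binary units: maximal integration (1 bit).
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: no integration at all.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(integration(coupled))      # 1.0
print(integration(independent))  # 0.0
```

Tononi's real measure ranges over all possible partitions of a system and its causal dynamics; this sketch only conveys the underlying idea that integration is information the parts carry jointly but not separately.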

Schematic diagram of how to decompose systems into overlapping complexes according to Tononi's information integration theory

Tononi (2008) has subsequently made the bold claim that consciousness simply is integrated information, and Koch has apparently embraced this theory wholeheartedly. As someone who has written extensively about the relationship of consciousness to complexity (e.g., see Smith 2009), I fully endorse this approach in a very general sense. However, there are several problematic aspects of IIT that I will discuss here. Though David Lane implies that the theory might be some kind of breakthrough in our understanding of consciousness (and, in my view, knocks down a straw man in arguing that it's not a mystical or top-down approach), I believe it has some serious limitations that need to be appreciated.

The Integrated Unconscious

IIT theory does nothing, absolutely nothing, to solve the hard problem of consciousness—the question of how physical processes can give rise to subjective experience.

I begin by emphasizing that IIT theory does nothing, absolutely nothing, to solve the hard problem of consciousness—the question of how physical processes can give rise to subjective experience. To equate consciousness with complexity of interactions and quantity of information still leaves untouched the question of how the one results in the other. I think this is the point, or part of the point, that John Searle, in his criticism of Koch that Lane cites, is getting at when he says, “Why should such systems thereby have qualitative, unified subjectivity?”

Of course, no one, including Searle, has any better answer to the hard problem. But every time someone claims to have a new theory of consciousness, people not very familiar with the field may be led to believe that we are at least coming closer to solving the hard problem. We aren't. Koch, like others before him, is simply trying to explain, in a testable fashion, how consciousness may be correlated with certain patterns of brain activity. I don't mean to demean it by describing it in this fashion—the so-called soft problems of consciousness are formidable indeed—but there is nothing in the complexity of organization per se that allows us to see why it should result in conscious experience.

I think I understand why Koch's view is seductive, nevertheless. Consciousness seems to be very closely related to information. Awareness, at least as we humans normally experience it, is always of something; there is an experiencing subject and an experienced object. The experience of an object constitutes information; it is some particular aspect of the world, rather than some other object, and has these features rather than those features. As Tononi puts it, every conscious experience rules out literally millions of other possible experiences. This is the differentiation aspect of consciousness, and it can carry an enormous amount of information.
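Tononi's point about ruling out alternatives can be put in rough numbers. In the hypothetical sketch below (the figures are illustrative, not measurements), an experience pinned down by n independent yes/no distinctions excludes 2^n − 1 alternatives, so the information an experience carries grows exponentially with the number of distinctions it makes.

```python
from math import log2

def alternatives_ruled_out(n_binary_features):
    """Number of alternative experiences excluded when an experience is
    specified by n independent yes/no distinctions."""
    return 2 ** n_binary_features - 1

# Just twenty independent distinctions already rule out over a million
# alternatives; the information in bits equals the number of distinctions.
print(alternatives_ruled_out(20))            # 1048575
print(log2(alternatives_ruled_out(20) + 1))  # 20.0
```

This is why a single rich conscious moment, differentiated along many dimensions at once, can carry so much information despite each distinction contributing only one bit.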

At the same time, one of the signal features of conscious experience is its apparent unity. We not only can experience several different sensory modalities—vision, sound, smell, touch, taste—and endless numbers of variations within each, but all of these are experienced together. When we play a musical instrument, for example, the act of using the instrument, the sight of the instrument, and the resulting sound are all correlated, or bound together as neuroscientists are wont to say, into a single experience. This is the integrative aspect of consciousness.

But neither information nor its integration is the hard problem of consciousness. One could imagine a zombie—a being behaviorally identical to a human being but lacking in consciousness—playing a musical instrument in exactly the same fashion as an ordinary human. Its brain would still have to unify the different sensory modalities—if it didn't, it couldn't perform the musical composition correctly—and we could certainly imagine something like Koch's view of integrated information enabling it to do so. But the zombie would still lack conscious experience of playing the instrument. Comparing the zombie to an actual human being, there is something missing, and that missing something is not obviously provided for by integrated information.

In fact, much of our actual behavior is not that far removed from zombieland. When an accomplished professional musician performs, most of what the brain does is carried out unconsciously. As has been recognized for decades, this is one of the major distinctions between an expert and a novice, with regard to any form of behavior. The novice initially has to be conscious of nearly every detail of the performance, whereas for the expert, someone who has practiced the behavior extensively in the past, the performance is mostly automatic. This is very clear evidence that a vast amount of integration of information in the brain is carried out unconsciously.

Even our ordinary experience of the world is mostly constructed unconsciously. For example, when we look at a familiar scene (and for any adult, almost any conceivable experience is going to be somewhat, if not very deeply, familiar), what we see is not simply what we directly experience. Most of it results from past experience. Thus when we enter a room, we know from past experience that rooms have four walls, a floor and a ceiling, all at or nearly at right angles to each other, and this past experience is largely what we experience at this time. So deeply embedded is this past experience that the room may in fact not be constructed in exactly this fashion, but we will most likely see it in that way, anyway. Many well-known illusions make use of this phenomenon; we are presented with a scene that superficially appears to be one we are familiar with, and so we fill in the expected details, when in fact some of those details may be very different. To repeat, what we directly experience is only incomplete fragments of the scene; the rest is filled in by what we expect to see, and this filling in takes place unconsciously.

Another well-known example of unconscious processing is presented by language. The ability to read, write and speak language requires an enormous amount of integration, governed by certain rules which determine what constitute words and how these words may meaningfully be put together. Most of these rules, though, are unconscious. We use them without being aware of them; linguists only learned about them through comparative analysis of different languages.

Of course, regardless of how much unconscious processing precedes it, language usually is a conscious process when it is actually expressed. But many kinds of studies show that we can use and comprehend language completely unconsciously. For example, if words are flashed on a screen so briefly that a subject cannot consciously recognize them, they can nevertheless influence the subject's subsequent behavior in ways that clearly demonstrate that unconscious recognition occurred. Studies also demonstrate that subjects primed with certain words that they consciously recognize subsequently make decisions that are determined unconsciously by these words.

So while our conscious experience is indeed highly integrated, a great deal of that integration takes place unconsciously. Consciousness is often described as “where it all comes together”, but it has already come together to a considerable degree before we are conscious. Indeed, there has been a long-running debate over whether consciousness only occurs after all the relevant processing has occurred (Velmans 1991).

As long as we are talking about just normal adult humans, this is not necessarily an insurmountable problem for IIT. The theory can claim that while a great deal of integration occurs unconsciously, the greatest amount of integration—the maximum of Tononi's phi—results in consciousness. However, a problem emerges when we consider less developed organisms: higher vertebrates, small children, even babies. Most people would accept that these organisms are conscious, yet the processing going on in their brains is far less complex and organized than much of the unconscious processing that occurs in adult humans. We know this, in the first place, because any adult is routinely capable of performing, almost completely unconsciously, tasks that are far beyond the ability of animals or small children to accomplish under any conditions. We also have good reason to suspect this is the case because some brain regions associated with consciousness in humans don't even exist in other species that may be conscious.

It seems, therefore, that some unconscious processes reflect far greater integration than some conscious ones. Or to put it another way, processes that are integrated enough to result in consciousness in some organisms are not integrated enough to result in consciousness in other organisms. What's going on here?

I will discuss this issue further later, but the heart of the problem reflects a critical distinction we must make between unconscious states and unconscious processes. When we say that a small child is conscious, we mean it in basically the same sense that we mean it when we say that an adult is conscious: awake, responsive to the world, capable of feeling pain, and so on. Here we are talking about a conscious state. In this sense, there is little distinction between the child and the adult. They are more or less equally conscious.

But we can also say that a small child has a consciousness that is different from that of an adult. It doesn't have a strong sense of self, can't think abstractly, may not have fully developed language, if very young may not even be able to track moving objects very well, and so on. Now we are talking about conscious processes, and we can definitely say that in this sense, the adult is much more conscious than the child.

This distinction between conscious states and conscious processes roughly reflects the distinction between the hard problem and the soft problems. When one is in a conscious state, one has experience of qualia; in Thomas Nagel's (1974) immortal words, there is something it is like when we are conscious. Conscious processes refer to those functions or behaviors that one is capable of experiencing while in a conscious state. Now we are concerned not with whether we are conscious, but with what we are conscious of.

IIT, though Koch and Tononi would almost certainly protest against this characterization, is basically a theory about conscious processes, not conscious states. It may explain, in any particular species such as ourselves, why some processes become conscious, and others do not—and that is not a trivial accomplishment, by any means. But it can't explain why there should be any consciousness at all.

There are really two aspects of the hard problem here, both of which IIT fails to address. A theory must a) identify some process in the brain that is always correlated with the existence of consciousness; and b) explain why it is correlated, i.e., how that process becomes, creates, or simply is, consciousness. The hard problem is usually described in terms of b), the really difficult task, because in principle we should at least be able to find correlates of consciousness—some process or processes both necessary and sufficient for consciousness. This is something science has always been good at.

But IIT also fails to satisfy a). When we compare consciousness in different species, or even within a single species at different stages of development, it is likely that we will find that there is no correlation between degree of complexity or integration and the existence of consciousness. A level of integration sufficient in one organism or state of development is insufficient in another, and conversely, a level necessary in one organism is not necessary in another. This strongly suggests that there is some fundamental aspect of consciousness that IIT is missing completely.

Having said this, I will again emphasize that the theory may be very powerful in the areas where it is applicable—understanding how processes become conscious in any particular organism, particularly, of course, ourselves. These are basically the soft aspects of consciousness, and in the remainder of this article I will mostly be evaluating Koch's theory as an explanation of them. Here IIT can and should be held to a stricter standard.


Panpsychism does not provide an explanation of why consciousness exists, but simply accepts it as a given.

Before we do this, though, we need to examine more closely just what kind of theory IIT is. Most theories of consciousness that appeal to at least some contemporary scientists or philosophers fall into one of three general classes: materialism; dualism; and panpsychism. Where does IIT fit in this scheme? Unfortunately, the answer is not entirely clear, because Koch provides a confusing, I would say inconsistent, description of it. He describes himself as both a materialist and a panpsychist.

Materialism claims that consciousness emerges from material processes, but can't explain how this can possibly come about. This is in fact the hard problem. Dualism argues that consciousness and material processes are distinctly different kinds of things—thus avoiding the hard problem—but can't explain how they could interact, as they clearly do. Panpsychism holds that everything is conscious; that is, consciousness is a property of all material forms of existence, as fundamental a property as well-established physical properties such as mass and charge.

Panpsychism does not provide an explanation of why consciousness exists, but simply accepts it as a given. Thus one of its main attractions is that it seems to provide a way to avoid the dilemma of materialism vs. dualism. Consciousness does not emerge from matter, so no hard problem is involved. But neither is it distinct from matter, so no dualism is implied.[1]

Where does Koch stand on this issue? On the one hand, he claims to be a panpsychist. Thus in another post, titled “Information Field Theory”, Lane quotes Koch as saying (his actual words can be heard in a video embedded in the post):

Consciousness is a fundamental, an elementary, property of living matter. It cannot be derived from anything else; it is a simple substance in Leibniz's words.

Koch goes on to make the specific analogy with electrical charge, as contemporary panpsychists frequently do.

But according to Koch's IIT, consciousness results from the complexity of organization. How can something be, at one and the same time, “an elementary property of living matter” and also the result of organizational complexity (which, even more contradictorily, does not have to occur only in living matter)? Electrical charge does not result from the organization of living matter, nor, at least according to our current understanding, from the organization of anything at all.[2] It is, to repeat, a fundamental property of matter. That makes it very different from the way Koch subsequently describes consciousness.

To emphasize the disconnect even further, Koch, in an interview with Wired Magazine that Lane also cites, very specifically claims that not everything is conscious. “A black hole, a heap of sand, a bunch of isolated neurons in a dish, they're not integrated,” he says. “They have no consciousness.” I understand that what Koch means is that there is no consciousness associated with the unintegrated group of matter or cells. This view could be consistent with panpsychism if one further added that the individual units of the group do have some form of consciousness—just as one might maintain that while individual humans are conscious, a small group of humans has no consciousness other than that of the individual consciousnesses. But Koch clearly does not mean this, for in the same interview, he adds, “In the case of the brain, it's the whole system that's conscious, not the individual nerve cells.” This is most definitely not a panpsychist view, and in fact, there are at least a few neuroscientists who now suggest that individual neurons could be conscious (Edwards 2004; Sevush 2005).

So while Koch claims to be a panpsychist, his integrated information theory clearly is not consistent with this claim. I'm a little surprised that Lane would accept his obviously contradictory statements without any comment or criticism (the very title of his post, “The Synthetic Self”, is also inconsistent with the panpsychist view of consciousness as an elementary property). Koch's view of consciousness resulting from complexity is much better understood as an emergent theory of consciousness, in some respects very much like Searle's. Where Koch's view differs from Searle's is that the former is “substrate-independent”, meaning that the organization or connectivity of the system is all that matters, not the nature of the units that are actually being connected. Thus in principle, computers composed of silicon chips could be conscious, or the internet, composed of computers, or yes, even a very large group of highly connected individuals (a very important point I will return to later).

But this view is not at all novel. As Koch notes in the video, it has long gone under the name of functionalism, and it has many prominent supporters, for example, philosopher Daniel Dennett. All Koch is doing, from this perspective, is fleshing out the underlying processes of functionalism. Fundamentally, at the philosophical level, his theory is not new at all. He's simply filling in details.

I find this conflation of IIT with panpsychism particularly unfortunate, because as I noted briefly earlier, panpsychism, for all its weaknesses, does avoid the hard problem that plagues all materialist theories of consciousness. So by arguing that he is a panpsychist, Koch is implying that IIT, too, avoids the hard problem. In fact, if I understand him correctly, he is claiming that consciousness is a fundamental property of complex organization in the same way that traditional panpsychism claims that consciousness is a fundamental property of any form of matter. But I think that is an unacceptably loose interpretation of panpsychism, because if consciousness is a fundamental property of organization, it comes into and out of existence dependent on that organization. It's like saying that consciousness is a fundamental property of the brain—it may be, but that is not an expression of panpsychism. In traditional panpsychism, consciousness is a fundamental property of everything—no conditions are attached. Any theory that postulates that consciousness comes into and out of existence in this manner is much more appropriately called an emergent one.

I conclude this section by noting that in his discussion of IIT, David Lane mentions the provocative work of Seth Lloyd (2006), who argues that information was being created in the universe from the very first interactions of quantum particles. Lloyd defines information somewhat differently from the way that Koch and Tononi do, but his work might provide a way of understanding consciousness in terms of information that is consistent with a genuine panpsychist view. If one accepts that consciousness is a fundamental property of all matter, one could further hypothesize that this consciousness is closely associated with information in the sense that Lloyd defines it.

The Hypocritic Oath

Why does an intelligent, informed scientist like Koch conflate functionalism with panpsychism? Perhaps because, while they are definitely different views, functionalism, like panpsychism, does have the important implication that many things that the traditional scientific view would believe are not conscious could be so. Not just animals with fairly complex brains, but perhaps, as Koch suggests in the Wired interview, computers or the internet. While Koch does not believe that everything is conscious, he believes that everything with a certain degree of complex organization is conscious.

Or does he? In that same interview, he stops short of claiming that human societies could be conscious:

The philosopher John Searle, in his review of Consciousness, asked, “Why isn't America conscious?” After all, there are 300 million Americans, interacting in very complicated ways. Why doesn't consciousness extend to all of America? It's because integrated information theory postulates that consciousness is a local maximum. You and me, for example: We're interacting right now, but vastly less than the cells in my brain interact with each other. While you and I are conscious as individuals, there's no conscious Übermind that unites us in a single entity. You and I are not collectively conscious.

This statement—this confident denial that a large human society could be conscious—invites several responses. First, how does Searle or Koch know that America is not conscious? How could they know? Why would one expect that a conscious individual that is part of a much larger conscious organization would be aware of that latter form of consciousness? As I noted earlier, some scientists believe individual neurons may be conscious. There is no reason to believe, though, that neurons are conscious in anything like the way a human being is conscious—in other words, that they are aware of a higher, human consciousness.[3] Why would it be any different with respect to an individual human being and the larger society?

The second point is that IIT in fact very strongly predicts that large, complex human societies would be conscious. If consciousness arises from complexity of organization, it's difficult to see how human societies could not be conscious. I have made this argument previously (Smith 2009); to summarize it here: 1) the number of individual humans on earth, about seven billion, is comparable to the number of neurons in the brains of typical non-human mammals; 2) each individual human is far more complex than any individual neuron; 3) both the number and the complexity of interactions between individual humans—involving today the internet and other forms of mass media that allow an individual to communicate simultaneously with millions of others—are far greater than the complexity of interactions of neurons in the brain; and 4) the organization of modern human societies, specifically its network properties, has some key properties in common with the organization of the human brain.

Given all this, what, pray tell, is it about IIT that could possibly allow Koch to conclude that human societies are not conscious? This seems to me a textbook example of a scientist refusing to accept the logical consequences of his theory because they contradict a long and deeply held prejudice.

But there is still another response to Koch's view that societies are not conscious, one that is even more compelling, and that illustrates in the most dramatic way the limitations of this view. Many brain researchers now argue that consciousness is not a product of just the brain—interactions among neurons—but always involves interaction with the environment as well. According to this embodied view (Thompson and Varela 2001; Noe and Thompson 2004; Noe 2010), the complex systems that result in consciousness transcend the individual and in fact frequently include societies.

Perhaps the easiest way to appreciate this is by considering language. Language is clearly a property of societies, not individuals. As Wittgenstein (1958) and other philosophers have pointed out, there would be no point to language for an individual existing in complete social isolation. But it isn't just that language wouldn't be needed in the absence of communication between individuals. In that situation, it couldn't exist in anything like the form it actually does. Language involves symbols, and symbols only become meaningful through the act of communicating. As Deacon (1998) puts it:

it does not make sense to think of the symbols as located anywhere within the brain, because they are relationships between tokens, not the tokens themselves; and even though specific neural connections may underlie these relationships, the symbolic function is not even constituted by a specific association but the virtual set of associations that are partially sampled in any one instance. Widely distributed neural systems must contribute in a coordinated fashion to create and interpret symbolic relationships.[4]

The key phrase in this passage is “virtual set of associations”. It is virtual because it does not exist in toto in the brain of any single individual. Language, to emphasize again, is a social property, meaning that only society as a whole has a complete (or the most nearly complete) understanding of a language. As any postmodernist will insist, words in fact have multiple, and constantly changing, meanings. Any individual, of course, experiences a meaning when he uses language, but only society can be said to include all the meanings. Or to put it another way, to the extent that meaning can be shared and communicated with others, it only points or refers to the social meaning. When an individual expresses language, she refers to a meaning that the individual herself does not experience. It's this social meaning—virtual from the point of view of any single individual--that allows language to function at all, to prevent it from being Wittgenstein's impossible “private language”.

Now language, of course, permeates our consciousness through and through. Even when we have a fairly basic sensory experience, such as observing a natural scene, the way we see and hear and feel the environment around us is profoundly shaped by language, which is to say, by society. I pointed out earlier that when we experience the environment, most of the experience is not direct, but constructed from memories. These memories fill in most of the details. And this filling in process is critically dependent on language. The fact that we have a word like “wall” with its associated meanings very definitely influences the experience we have of a room.

So if IIT is to be taken seriously, it needs to emphasize that the complex organization associated with consciousness is not just about human brains. It's very much also about human societies. While we certainly need brains to experience ourselves and the world in the way that we do, we also need social organization.

One view of consciousness that has long appealed to a minority of philosophers and cognitive scientists likens the brain to a radio or television set. These appliances emit sound and/or visual imagery, but of course the information presented in the sounds and sights is not determined by components within the device itself. The information is carried within electromagnetic waves; a radio or TV is simply equipped to transduce this energy into sound or images. In the same way, according to this minority view, the brain does not produce consciousness through its activity, but acts as a kind of receiver of signals from outside the brain that are the real source of consciousness.

There is as yet no evidence for any kind of consciousness field, analogous to an electromagnetic field, that could act as the source of our brain-derived experience of consciousness. But my point here is that, properly understood, individual human consciousness results from a process not unlike that underlying radio or television. Rather than a field of consciousness that can exist independently of individual humans, there is a social consciousness that is created by interactions between individuals, in large part through language. The individual brain is able to transduce this social consciousness into an individual form of consciousness.

Does this mean that societies are conscious in the same way that individuals are? No. Language could be described as a distributed form of consciousness, present throughout society in the form of intersubjective interactions. It does not, by itself, imply that the entire society could have a unitary consciousness, and a sense of self.

But clearly, such a distributed consciousness could be a precursor for a unified social consciousness. After all, most evolutionists believe that the centralized nervous systems of modern organisms were preceded by more distributed forms in primitive species—the neural nets present in jellyfish and other members of Cnidaria, for example. And in fact, it's difficult to imagine how a unified consciousness could develop without passing through such a distributed stage.

A functionalist like Koch should eagerly embrace the idea that large, complex human societies could become conscious, indeed are likely to be in the process of becoming so right now. Not only do they fulfill the basic properties demanded by IIT even more clearly and fully than do individual brains, but we know that the most important features of consciousness are social properties, which can't be understood at an individual level. At the very least, Koch should realize that the kinds of organization of information that are associated with human consciousness are not at all limited to the human brain. They must include social interactions.

Why do Koch and so many other scientists and philosophers resist this conclusion? I said earlier I felt it was an expression of a deep prejudice. What exactly is the basis of this prejudice?

I think it's an unwillingness to accept that there may be multiple levels of consciousness all existing within a single system: conscious cells, conscious organisms, conscious societies. The notion that we could have consciousnesses or voices inside of us that we are completely unaware of is repugnant to most thinkers. It suggests a chaotic situation that would make it impossible for us to function with a more or less stable identity. And how could society be conscious without our being aware of this?

Yet we know that multiple forms of consciousness do exist within a single individual. Studies of split-brain patients have long demonstrated that the two halves of our brain have very different perspectives of the world, and can come into conflict under certain conditions. And the left brain-right brain division is actually very simplistic. Anyone who has observed himself very deeply through the process of meditation will learn that we are in fact composed of multiple selves, most of which are not very aware of each other. Even the non-meditator will confront this situation every time she has a serious internal conflict, such as that presented by a very difficult decision in life. Such conflicts are at root battles between different selves.

However, multiple consciousnesses in this sense all exist at about the same level. One could argue that they are not genuinely different consciousnesses, but different perspectives of a single consciousness. This is quite different from the possibility of individual cells being conscious, all at a level far below our ordinary awareness. Is that really believable?

But there is a very simple response to this objection. It's to note that it's being made by most of the same people who assure us that consciousness is fundamentally no different from growth, reproduction, or all the other fundamental properties of life that were once held to be mysterious, but which have since been shown to be explainable in terms of physiology and biochemistry. Consider this statement by Lane:

It was not so long ago that many people, including some very eminent scientists (such as Henri Bergson), believed that the secrets of genetics would never be revealed by biochemistry because there was something inherently non-reducible in life's coding system, something akin to supernatural vitalism. But this turned out to be spectacularly wrong when Francis Crick and James Watson discovered the double helix structure to DNA and how four basic building blocks, adenine, cytosine, thymine and guanine comprise the fundamental language in life's evolution…
Is it conceivable that the mystery of consciousness may also have an informational solution similar to the genomic revolution?

I'll pass over the description of Bergson as an “eminent scientist”, except to say he wasn't. The general point Lane is making—which has been made by many other scientists and philosophers—is that consciousness will ultimately be explainable in terms of material processes, just as other phenomena have been. But all these other phenomena have been found to exist at multiple levels within a single system. Thus while human beings grow, reproduce, and adapt, so do single cells within their bodies. Individual cells don't have language in our sense, but they do communicate. They don't have immune systems, but they defend themselves. There is no conflict whatsoever in the existence of these processes at multiple levels. On the contrary, the ability of individual cells to grow and reproduce not only does not interfere with the ability of whole organisms to do the same; it's an essential feature of the latter.

So why should consciousness be any different? If growth, reproduction, adaptation and communication among individual cells in our bodies are in fact the basis of our own growth, reproduction, adaptation and communication, why can't the consciousness of individual cells constitute the basis of the consciousness of the whole organism? If we accept the panpsychist view of consciousness as a fundamental property of matter, the complexity of the consciousness we observe in ourselves would be explained in very much the same way that we explain the complexity of our biological processes—by the interaction of a very large number of simpler processes. Just as macromolecules result from the interactions of small molecules, cells from the interactions of macromolecules, and human beings from the interactions of cells, so our ordinary consciousness, in this view, results from the interactions of the consciousnesses of cells, and the consciousness of cells perhaps results from the interactions of conscious macromolecules.

Far from denying the scientific worldview that has brought us such a deep understanding of our physical and biological nature, this view embraces it. Starting with the admittedly highly controversial claim that consciousness is a fundamental property of matter, it explains our own highly developed consciousness in exactly the same way that we have come to understand all other aspects of ourselves.

Competitive Consciousness

One of the most essential characteristics of any scientific theory is that it should be testable. Koch claims that IIT passes this test. He argues that it should be possible to design experiments in which neural organization in conscious and unconscious individuals is compared.

In fact, this kind of testing is already going on. For example, there are some very interesting recent studies of subjects sleeping or sedated by drugs that suggest that consciousness is correlated with certain kinds of connectivity in the brain (Horovitz et al. 2009; Mhuircheartaigh et al. 2010). Such studies demonstrate that in the unconscious state induced by drugs like propofol, or simply by deep sleep, many major connecting pathways between certain critical areas in the brain become inactive. While I don't believe the appropriate analysis of the data has yet been carried out, this research undoubtedly supports the general conclusion that more information processing is occurring in the conscious brain.
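The kind of integration these studies are probing can be made concrete with a toy calculation. The following sketch is my own illustration, not Tononi's actual Φ measure: it simply computes how much information two halves of a system share, which is the basic intuition behind "integrated information". All the states and sample data are hypothetical.

```python
# Toy sketch of the intuition behind integrated information (NOT Tononi's
# phi): the mutual information, in bits, between two halves of a system,
# estimated from joint samples of their states.
import math
from collections import Counter

def mutual_information(samples):
    """Mutual information (bits) between the two halves of each sample."""
    n = len(samples)
    joint = Counter(samples)                       # counts of (left, right) pairs
    left = Counter(a for a, _ in samples)          # marginal counts, left half
    right = Counter(b for _, b in samples)         # marginal counts, right half
    mi = 0.0
    for (a, b), count in joint.items():
        p_joint = count / n
        p_indep = (left[a] / n) * (right[b] / n)   # what independence would predict
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# Two hypothetical "brains", sampled as (left-half state, right-half state).
coupled = [(0, 0), (1, 1)] * 50                    # halves always agree
decoupled = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25  # halves vary independently

print(mutual_information(coupled))    # 1.0 bit: the halves are integrated
print(mutual_information(decoupled))  # 0.0 bits: no integration at all
```

Imaging studies of the kind just described are, in effect, looking for gross signatures of this shared information: when connecting pathways go quiet, the halves of the system carry on more or less independently, and the shared information collapses.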

This is very promising research, but I want to emphasize that such testing is subject to several major limitations. First, imaging is a relatively gross procedure, not capable of revealing the detailed patterns of activity that would probably be necessary to test IIT. The studies I just cited demonstrate changes in connectivity, but it would require a far more detailed analysis to determine whether these changes were consistent with a particular kind of organization. The technology to carry out such studies will probably be available at some point in the future, but at present this remains a very challenging problem.

A second problem is raised by a distinction I drew earlier, between unconscious states and unconscious processes. When an individual is in deep sleep, or sedated by a drug like propofol, he is in an unconscious state. So studies like the ones just cited are revealing differences between conscious and unconscious states, not between any particular conscious and unconscious processes. But as I also noted earlier, we know that conscious states in humans may reflect far greater complexity than those in simpler organisms. So while studies like these may provide insight into the differences between conscious and unconscious states in humans, they don't generalize to differences between such states in all species. In other words, any correlates defined by these studies are not relevant to the hard problem of consciousness. One can't look at these differences in connectivity and conclude that these specific connections between these specific brain regions are essential to the experience of consciousness in any universal sense.

Given this limitation, the potentially more informative kinds of studies that need to be done to support IIT involve comparing the connectivity occurring in conscious vs. unconscious processes. Many studies are now addressing this question, for example, using binocular rivalry, in which different stimuli presented to the two eyes result in a conscious perception of one but not the other (Haynes and Rees 2005; Wunderlich et al. 2005). However, there is really no way to be certain that unconscious processing that does not result in conscious experience is otherwise identical to processing that does. There are additional challenges to such studies. A study by Watanabe et al. (2011) found that it was actually attention, not consciousness, that accounted for the difference in neural activity in binocular rivalry. Other studies suggest that signals measured by imaging are different from those measured directly by electrophysiological recording (Maier et al. 2008).

Finally, if we do want to address the hard problem—that is, search for genuine correlates of consciousness in this sense—we clearly need to carry out studies in animals. But IIT can't be tested in these circumstances, unless we make a priori assumptions about which animals are conscious. We can certainly analyze brain activity in a sleeping or drugged animal, and compare it with activity in an animal that is awake; in such instances, it's presumed that awake means conscious, in the sense of experiencing the environment, not merely responding to it. But that only defines the correlates of consciousness in that species; it still does not identify the minimum correlates necessary for consciousness. To do that, we have to go down the evolutionary scale, to lower vertebrates and invertebrates, where this distinction is less easily drawn. Insects respond to their environment, but does this mean they are conscious? If we don't know, obviously it's impossible to devise an experiment comparing a conscious organism with an unconscious one.

This criticism, I think, goes to the heart of the problem. Koch wants to use IIT to demonstrate the differences between consciousness and unconsciousness. But to follow through on this program in a way that can identify at least the correlates of any form of consciousness, we need a way of distinguishing which organisms and their processes are conscious and which are not.

At this point, I want to introduce some provocative and, I think, under-appreciated work by a researcher I know David Lane is familiar with; in fact, I believe his wife, Andrea, worked for him: V.S. Ramachandran. In the paper “The Three Laws of Qualia”, Ramachandran and Hirstein (1997) argue that qualia—that is, conscious experiences—can be distinguished from unconscious processes by three major properties. That is, by observing certain features in an animal's response to its environment, we can determine whether that response is conscious or unconscious.

These three properties are what they call irrevocability; the ability to make multiple responses; and a certain minimum duration during which the experience must be held in the brain. Here I will discuss only the second of these properties: that qualia are associated with the ability to make multiple responses to the environment.

This property, and how it distinguishes between conscious and unconscious processes, can be illustrated by a very simple example: you touch a hot stove. You immediately withdraw your hand, an unconscious reflex that begins even before you feel pain. This response is the only one possible for this unconscious process. You would never unconsciously leave your hand on the stove, nor do anything else but withdraw it. Your nervous system demands that you withdraw your hand.

Now consider the situation after you have actually been burned, and feel pain. There are several responses possible. You may scream in agony and wave your hand around. You might rub your hand to try to lessen the pain, or put some kind of ointment on it. You could go somewhere to get further medical attention. Or you might just bear the pain stoically. You might even touch the stove again, perhaps as an attempt to demonstrate your tolerance of pain.

This is a very simple point, but in Ramachandran's hands it becomes a very powerful tool for understanding consciousness. He now has a cogent response to the question: are simple organisms, like insects, conscious? His answer is no, because essentially all of their behavior is stereotyped or reflexive. It does not provide for the multiple possible responses that, he argues, are a signal characteristic of consciousness.

While this is a good argument, it's not a compelling one. One could counter that the situation in insects is different from that in ourselves. Why would it be? Well, our waking consciousness is mediated by our complex brains. It might be that reflexive processes operate at a lower level of consciousness, but that we don't experience this lower consciousness because it is repressed or overwhelmed by our waking consciousness. In insects, waking consciousness is absent, so reflexive consciousness is all there is.

What kind of evidence would support this hypothesis? Let's return to a distinction I drew earlier, between unconscious states and unconscious processes. When we are in deep sleep or under the influence of certain drugs, we are in an unconscious state. When we are awake, we are in a conscious state, but nevertheless, unconscious processes are occurring. I also pointed out earlier that many of these unconscious processes are quite complex, and perform a great deal of the kind of integration of information that, according to Koch, is associated with consciousness. In fact, the very existence of such complex unconscious processes seems to be at odds with IIT.

Now, however, we have a possible explanation of why such complex organization can be associated with unconscious processes. Suppose that such processes are potentially conscious, but they exist in a system where further integration occurs, and consciousness only manifests in the latter. In other words, I am proposing that consciousness, in any organism, is only associated with the highest form of integration in that organism. It is not the absolute degree of integration that matters, but the relative degree.[5] In the human brain, where a very high degree of integration is possible, only that integration becomes conscious. Lower degrees of integration, though they may be quite complex in an absolute sense, remain unconscious.
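The relative-integration hypothesis can be put in schematic form. This is a minimal sketch of my own; the process names and integration scores are entirely hypothetical, chosen only to illustrate the logic that what becomes conscious is each organism's peak of integration, not any absolute threshold.

```python
# Sketch of the "relative integration" hypothesis: within each organism,
# only the process with the highest integration score becomes conscious,
# so a process that stays unconscious in one organism could be conscious
# in another. Scores are hypothetical, in arbitrary units.

def conscious_process(processes):
    """Return the most integrated process in one organism's repertoire."""
    return max(processes, key=processes.get)

human = {"withdrawal reflex": 2.0, "visual binding": 40.0, "global workspace": 95.0}
insect = {"withdrawal reflex": 2.0, "sensorimotor loop": 3.5}

# In the human, the reflex (2.0) is dwarfed by far more integrated processes
# and remains unconscious; in the insect, a process of comparable absolute
# complexity is the organism's peak, and so, on this hypothesis, conscious.
print(conscious_process(human))   # global workspace
print(conscious_process(insect))  # sensorimotor loop
```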

If this view is correct, it challenges Ramachandran's contention that simple organisms like insects must be unconscious. Though their behavior may have properties similar to behavior that is unconscious in humans, and be mediated by neural processes that are unconscious in us, if that behavior represents the highest form of integration in the organism, it could still be conscious. Not conscious in anything like the way waking consciousness is for us, but nevertheless conscious to some degree.

This is simply a hypothesis, of course. But as I pointed out earlier, there is strong evidence that consciousness is not simply related to complexity of organization. We can unconsciously carry out behavior that is far more complex than anything that higher vertebrates can do, presumably consciously. This strongly suggests that there could be more or less identical processes in humans and other animals that are unconscious to us but conscious to the animals.

This conclusion is also consistent with fairly widely-held views that there are many integrative processes in the brain that could become conscious, but fail to do so because the brain's limited resources allow only one process to prevail at any time (Baars 1997; Dennett 2005). In other words, it isn't simply the complexity of processes that determines whether they become conscious, but how this complexity compares to other processes occurring simultaneously. Consciousness seems to be the outcome of a competition; some cognitive scientists have likened it to a competition in which a myriad of voices are shouting for attention, and only the loudest or most insistent one is heard. If the competition occurs at a high level, the winner will be very complex. But if the competition is weak, the winner may be relatively simple.

So to conclude this section, a reasonable interpretation of IIT in its most favorable light, I think, is that it may explain why certain processes become conscious, or at least become candidates for consciousness, while others do not. This same logic argues that we should remain open-minded about whether very simple organisms could be conscious in some sense. In fact, it should be possible to test the notion that essentially identical processes may be conscious in one species and not conscious in another. That is, if we assume that higher vertebrates are probably conscious, we might identify conscious processes in such species that are less informationally complex than unconscious processes in humans. Such a finding would provide substantial evidence that less-evolved species, for which the possibility of consciousness is much more controversial, could also be conscious.

Fiat Lux

While all metaphors of consciousness are necessarily imperfect and misleading, I want to close by offering one anyway, as a summary of my critique of IIT and, more generally, of an alternative view of consciousness. Imagine a small room, empty except for a single small light bulb that illuminates it. Now imagine the room growing in both size and complexity, by which I simply mean that it comes to contain furniture and other objects of different sizes and shapes. In order for the room to remain fully illuminated, the number of light bulbs must increase, both because of the much larger space and in order to cast light into the spaces behind or under the various objects.

The light, of course, represents consciousness. The rooms represent organisms of various degrees of complexity. What IIT is claiming is that there is a certain minimum number of light bulbs necessary to illuminate the room fully. This is correct with regard to the one very large and cluttered room they are studying—the human brain—but what IIT is missing, in this view, is that a) the number of bulbs required actually depends on the size and contents of the room; and b) illumination is not a fundamental property of a particular number of bulbs, but of any and every bulb.

Now let me develop the metaphor a little further. Suppose there are creatures of some sort living in the room, each of which has a light bulb attached to it. As these creatures move about the room, manipulating the objects in various ways for various purposes, the light bulbs of course move with them. Any one bulb is quite dim, though, and insufficient to illuminate very much. So much of what the creatures do occurs in essential darkness. It's only when a large number of creatures work together in a coordinated fashion in some localized area that the light becomes powerful enough to illuminate their actions—and also powerful enough to obscure the relatively dim light of smaller groups of creatures working in other parts of the room.

The major point of this metaphor is to counter what I believe is an extremely strongly-held myth that permeates almost all consciousness research: the notion that consciousness is a highly complex phenomenon. If it is, it will undoubtedly be explained only by comparably complex processes in the brain (and, as I have further discussed, in human societies). Hence the attraction of theories like Tononi and Koch's integrated information, capable of distinguishing millions of potential states of activity in the brain, each composed of perhaps hundreds of thousands or millions of neurons interacting in unique ways.

Consciousness is assumed to be complex because what we humans are generally conscious of is quite complex. Even a relatively simple sensory experience, composed of sights, sounds, and perhaps other kinds of sensations, requires a fairly high degree of processing. When we throw thoughts and emotions into the mix, the complexity meter rises drastically further.

But as I have argued here, this understanding confuses conscious processes with conscious states—what we are conscious of with the simple fact that we are conscious. Just because we can be conscious of enormously complex experiences does not establish that being conscious is necessarily very complex, any more than the fact that a very large number of moving light bulbs might be needed to illuminate a very large room full of creatures and objects means that a single, stationary bulb has no power to illuminate at all.

The view I'm proposing here is the traditional panpsychist one, quite different from what I would call the misnamed version described by Koch. This view is very controversial, of course, and not taken seriously by very many philosophers or scientists (though the few who do entertain it include some very eminent and well-respected figures, such as David Chalmers (1996) and William Seager (1999)). But though it should be distinguished from the materialist view, it isn't anti-materialist. It doesn't deny that our brains function through physical and biological processes, and that what we are capable of becoming conscious of has a physical and biological basis. In fact, as the metaphor I presented above should make clear, panpsychism is for the most part consistent with the claim of IIT that human consciousness is associated with integrated information. It just has a very different view of the ultimate source of consciousness.

Moreover, panpsychism may be more consistent than the materialist view with our general understanding of evolution. According to materialist theories of consciousness, including IIT, consciousness emerged relatively suddenly, with organisms possessing brains complex and integrated enough to process information at some minimum level. But this isn't the way most biological processes evolved. Most of them developed very gradually over a period of time encompassing many different species. In fact, they began appearing even before there were species. As I pointed out earlier, processes like growth, reproduction and self-maintenance in organisms are in effect composed of analogous processes occurring in single cells. They aren't fundamentally new with organisms; they are just more complex variations of themes that began billions of years ago. Even some single molecules, which do not grow, reproduce or maintain themselves in the biological sense, exhibit these properties to some degree, and again, they contribute to these processes in genuine cells (see Smith 2009).

Panpsychism suggests that consciousness may have evolved in much the same manner. Even simple matter may have a property which, though vastly different from what we understand as consciousness, bears enough fundamental resemblance to it to serve as a precursor. As matter evolved into life, this simple proto-consciousness, if you will, by interacting with others like itself, became somewhat more complex. When cells associated to form organisms, this complexity reached a level we would recognize as consciousness similar to our own.

Panpsychism has other strengths as well. As I noted earlier, it navigates successfully between the Scylla and Charybdis of the hard problem of materialism and ghostly interactions of dualism. It does not need to explain how consciousness—which seems to provide no functional advantages to the organism that could not have been provided by an unconscious zombie—could have been subjected to natural selection. In fact, it can supplement most other current theories of consciousness, addressing the hard problem while deferring to research to solve everything else.


1. Actually, many forms of panpsychism, such as that discussed by philosopher David Chalmers (1996), imply a view called property dualism. Since consciousness and the physical attributes of matter are so apparently irreconcilable, they result in a dualistic view of the properties of matter. However, this kind of dualism is different from substance dualism—the traditional form, which Descartes made famous. To repeat, since consciousness and the physical attributes of matter are always associated, it is unnecessary to account for either how one creates the other, or how one interacts with the other.

2. Actually, it's understood to result from the combination of certain kinds of quarks, but this sort of organization is far simpler, and occurs at a far lower level, than that in the brain.

3. Edwards (2005) proposes that single neurons may have a consciousness somewhat like that of the whole organism; more specifically, that every conscious experience is associated with a different neuron. Thus an individual neuron basically does what in Tononi's theory is accomplished by a vast network of thousands or millions of interacting neurons. However, this single cell consciousness would still be considerably different from that of the whole individual.

4. Deacon (1998), p. 266.

5. Even this conclusion must be qualified. What becomes conscious is not necessarily the most integrated. Suppose you are reading this article, thinking over these ideas, when suddenly there is a very loud noise in your vicinity. This sound will immediately come into your consciousness—not because it represents more integrated information than what you were previously aware of, but because the stimulus is much greater. This is another problem with IIT. While integration is certainly a major factor in determining what becomes conscious, it's not necessarily the only one.


Baars, B. (1997) In the theatre of consciousness. Global workspace theory, a rigorous scientific theory of consciousness. J Consciousness Studies 4: 292-309.

Chalmers, D. (1996) The Conscious Mind. (Oxford: Oxford University Press)

Deacon, T.W. (1998) The Symbolic Species. (New York: W.W. Norton)

Dennett, D. (2005) Sweet Dreams. Philosophical Obstacles to a Science of Consciousness. (NY: Bradford)

Edelman, G. and Tononi, G. (2000) The Universe of Consciousness (New York: Basic)

Edwards, J.C.W. (2005) Is consciousness only a property of individual cells? Journal of Consciousness Studies 12: 60-76.

Horovitz, S.G., Braun, A.R., Carr, W.S., Picchioni, D., Balkin, T.J., Fukunaga, M. and Duyn, J.H. (2009) Decoupling of the brain's default mode network during deep sleep. Proc Natl Acad Sci USA 106: 11376-11381.

Lloyd, S. (2006) Programming the Universe. (New York: Knopf)

Mhuircheartaigh, R.N., Rosenorn-Lanng, D., Wise, R., Jbabdi, S., Rogers, R., and Tracey, I. (2010) Cortical and subcortical connectivity changes during decreasing levels of consciousness in humans: a functional magnetic resonance imaging study using propofol. J Neurosci 30: 9095-9102.

Nagel, T. (1974) What is it like to be a bat? Philos Rev 83: 435-450.

Noe, A. (2010) Out of our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness (NY: Hill and Wang)

Ramachandran, V.S. and Hirstein, W. (1997) The Three Laws of Qualia. J Consciousness Studies 4: 429-457.

Sevush, S. (2006) Single-neuron theory of consciousness. J Theor Biol 7: 704-725.

Smith, A.P. (2009) The Dimensions of Experience (X-libris)

Seager, W. (1999) Theories of Consciousness. (New York: Routledge)

Tononi, G. (2008) Consciousness as integrated information: a provisional manifesto. Biol Bull 215: 216-242.

Tononi G. and Edelman, G.M. (1998) Consciousness and complexity. Science 282: 1846-1851

Velmans, M. (1991) Is human information processing conscious? Behavioral and Brain Science 14, 651-726

Wittgenstein, L. (1958) Philosophical Investigations. (Upper Saddle River, NJ: Prentice-Hall)
