David Christopher Lane, Ph.D.
Professor of Philosophy, Mt. San Antonio College; Lecturer in Religious Studies, California State University, Long Beach; author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).
The Rendered Universe
Why Virtual Reality Unlocks
the Secret of Consciousness
David Lane
I would argue that consciousness evolved because it ushered in a way to have a depth of experiences within our own cranium without the life-and-death consequences.
Although Plato didn't know about virtual reality headsets or what such new-fangled technology could evoke, he understood better than most that humans do not see reality as it is, but rather as we filter it moment to moment. His famous Allegory of the Cave, arguably the single greatest thought experiment in the history of philosophy, explains (via the mouthpiece of Socrates) very simply that we are like prisoners who mistake shadows on the wall, cast by an unseen burning fire behind them, for real “men and other living things.” It is an illusion, and as such the imprisoned men are duped.
Today, with ever-increasing scientific advances, we are on the threshold of better understanding why consciousness evolved and how it works. Professor Donald Hoffman, the controversial cognitive scientist at the University of California, Irvine, has given Plato's Allegory a computational update using the metaphor of a desktop computer: all we see is the user interface, not the underlying software or the hardware circuitry of electron exchanges. Accordingly, what we see, hear, smell, and touch around us is a filtering mechanism developed by evolution to ensure that we focus on what will allow us to live long enough to pass on our genetic code.
In the past century we have come up with various models of consciousness (from Dennett's multiple drafts to Leary's eight circuits to the Dehaene–Changeux global neuronal workspace), yet each has been hindered by its limited metaphors. This became much more apparent when the computational model of the mind faltered, as neuroscientists and philosophers realized that awareness is not merely digital.
However, with the advent of virtual reality, augmented reality, and mixed reality, our models of how consciousness functions are becoming much easier to grasp, since we can now artificially simulate an all-encompassing, four-dimensional environment. Using VR, AR, and MR as touchstones allows us for the first time to better appreciate how our brains construct the world around us from incoming data streams. Consciousness is a forging mechanism: through its attentional posturing it immerses us fully in a world of its own making, even as it remains mostly unaware of how such a magical performance occurs.
Simply put, we live not in an objective cosmos distinct from our interactions with it, but rather in a rendered universe, where our participation and our observations are fundamental to our interpretations of it. This becomes transparently obvious when we realize that the most sophisticated virtual reality headset known to exist is our own brain. However, by exploring manufactured VR accouterments and the varied vistas they can create, we have (perhaps for the first time in our history) the necessary tools to synthetically reconstruct how the brain perceives and interacts with reality.
This essay examines how the advent of virtual reality changes our understanding of human consciousness and how future research will benefit from employing its many iterations.
Brian Greene, the physicist polymath at Columbia University, recently made a perceptive observation about why human beings cannot visualize the quantum world or extraordinarily large time scales. Our bodies evolved for a middle-range environment, where what was vital for our evolutionary survival depended on resources within certain physical parameters. Because of this Darwinian dictum of eat or be eaten, our brains didn't develop the capacity to truly envision what ten dimensions may be like, what occurs at the level of a photon or over durations of less than a nanosecond, or what came before the Big Bang. Yes, we can imagine and we can speculate, but even here we have limited models with which to work.
Yet, Greene went on to speculate that with the advent of virtual and augmented reality, intertwined with Artificial Intelligence, it may be possible in the future for humans to stretch their cognitive abilities in ways unimaginable before the emergence of such technologies. Elon Musk's championing of neural lace/link is but a stepping stone for a wholly different way of thinking and being.
It may well be that the very reason we have yet to see a major breakthrough in the study of human consciousness is precisely due to our present cranial limitations. Just as the field of astronomy achieved greater success with building better and more refined telescopes, and molecular biology blossomed with more powerful microscopes, the study of the brain and awareness necessitates radically new tools which go far beyond
electroencephalography (EEG), positron emission tomography (PET), magnetic resonance imaging (MRI), and functional magnetic resonance imaging (fMRI). The immense complexity of the brain, with its 86 billion or so neurons curled up in a compacted mound of gray matter weighing just three pounds, cannot be approached, much less comprehensively understood, with crude instrumentation. Just as our technologies must evolve to properly inspect such an intricate structure, so too must our own mental maps, lest we prove inadequate to grapple with nature's greatest secret.
This is why I believe that virtual and augmented reality represent a dramatic stride in the right direction. For example, this past year I attended the Oculus Connect conference held in San Jose, which highlights the latest developments in virtual reality hardware and software. One of the keynote speakers was the video gaming pioneer John Carmack, who pointed out two of the most important and fundamental “hard” problems in getting virtual reality to work seamlessly: spot-on rendering and time-corrected latency.
First, when you don a VR headset (such as the standalone Oculus Quest or the HTC Vive Cosmos Play), wherever you look the scene becomes clearer through a process technically known as foveated rendering, “which uses an eye tracker integrated” with whatever headset you may be using “to reduce the rendering workload,” systematically compressing the visual field “in the peripheral vision (outside of the zone gazed by the fovea).” Some headsets use a “less sophisticated variant called fixed foveated rendering [which] doesn't utilize eye tracking and instead assumes a fixed focal point.”
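To make the mechanics concrete, here is a minimal Python sketch of the idea behind foveated rendering: shading resolution falls off with distance from the tracked gaze point, and fixed foveated rendering simply assumes the gaze sits at screen center. The function name, zone radii, and rate values are my own illustrations, not the API or tuning of any actual headset SDK.

```python
import math

# Minimal sketch (not any real SDK call): choose a per-tile shading rate from
# the distance to the tracked gaze point, which is the core idea of foveated
# rendering. The zone radii and rates below are illustrative assumptions.

def shading_rate(tile_center, gaze_point, fovea_radius=0.15, mid_radius=0.35):
    """Return the fraction of full resolution at which to render a tile.

    tile_center, gaze_point: (x, y) in normalized screen coordinates [0, 1].
    fovea_radius / mid_radius: illustrative cutoffs for the foveal zone and
    the mid periphery.
    """
    dist = math.dist(tile_center, gaze_point)
    if dist <= fovea_radius:
        return 1.0   # foveal zone: full resolution
    if dist <= mid_radius:
        return 0.5   # mid periphery: half resolution
    return 0.25      # far periphery: quarter resolution

# Fixed foveated rendering drops the eye tracker and assumes a central gaze.
FIXED_GAZE = (0.5, 0.5)

if __name__ == "__main__":
    for tile in [(0.5, 0.5), (0.7, 0.6), (0.95, 0.1)]:
        print(tile, shading_rate(tile, FIXED_GAZE))
```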
John Carmack and Michael Abrash, chief scientist for Oculus at Facebook, who also spoke at the Oculus Connect 6 conference, argued that perfecting rendering is the key to making virtual reality almost indistinguishable from our regular, everyday sense of reality.
What is most intriguing about this in relation to our own particular form of attention is that our brain is literally a rendering machine, which provides us with only a small smattering of information via our senses whenever we pay attention to a particular occurrence or scene. We don't ever see objects or persons or events in their totality, but merely the tiniest data slivers, filtered by evolution for our survival and not necessarily for us to better understand the cosmos at large. Yet we remain oblivious to how such neural rendering works and how it delimits the world we inhabit. In a very real sense, we are born wearing the universe's most sophisticated VR headset and live unaware of this basic fact.
Understanding the mechanics behind VR rendering and its various permutations is undoubtedly helpful in better appreciating how our own brains conjure reality via our five senses. It may seem odd to realize that the universe we behold is a version, not a clear transparency of what is. Other species have different brain-induced versions, and we cannot accurately gauge what they experience if we have only limited knowledge about their respective rendering apparatuses.
Attention and rendering are correlative, and this can have startling implications. As James B. Glattfelder in his book Information-Consciousness-Reality explains:
“Most people who have journeyed to the DMT realm argue that what they have experienced is just as real as experiences made during the sober waking state—if not much more so. As mentioned, it is tempting to disregard such experiences as hallucinations—nothing more, nothing less. However, this raises philosophical questions. Recall that neuroscientists are quite clear: Our perception of reality is a hallucination tethered by a bit of sensory input. What I experience through sober waking consciousness is an elaborate virtual reality rendering in my brain. The nature of this hallucination can be modulated by the chemical composition of the brain. Crucially, the DMT produced by my own body has the potential to intrinsically and naturally modify the contents of my consciousness—by “teleporting” my mind to the DMT realm. Indeed, simply by shutting down the left hemisphere of the human brain appears to induce travels of the mind, allowing consciousness to access a “place” of peace and euphoria. How can I then be certain that the fidelity and truth content of the one hallucination is superior to the other? What epistemic guarantee do I have that would allow me to negate the reality of the DMT realm? How can I exclude the possibility that reality is indeed “queerer than we can suppose” and everything I have ever known about is metaphorically restricted to a tiny isolated island in a vast archipelago of transcendental existence?”
Whereas VR headsets are the product of sophisticated machinery and complex software, our central nervous systems are the long-evolved result of natural selection operating on organisms trying to survive within a competitive environmental landscape. What we get is what allows us to eat and mate, and thus there is no evolutionary pressure to reveal to us how the brain conjures the reality we witness. The VR industry is driven to provide us with a similar sort of magic, except that at this stage in its development there is still a long way to go before it becomes Matrix-like. Yet, because it is in its infancy, there is much that VR can tell us about how self-awareness may have first arisen.
One of the most outstanding revelations in VR is how it can convince us that projected images not only look real but confer upon us a sense of presence. Why this is so tells us much about how our neurology is jerry-rigged for interacting with key environmental cues.
Because of VR technology we have a better understanding of how to trick our brains into believing something is life-like despite its being merely a manufactured artifice. This is extremely helpful, as it gives us a more accurate model of how a complex neural net goes about convincing us that what we see, hear, touch, and feel is truly authentic and demands our focus. In other words, we have learned to discriminate between what is vital for our survival and what is merely imaginative. That the line between the two can too often be blurred is precisely why humans suffer from a wide range of mental ailments, not the least of which are schizophrenia, dementia, and Alzheimer's disease.
Whenever I don my Oculus Quest VR headset I am amazed at how easily I get absorbed into a completely different realm, even though intellectually I know that it is merely a sophisticated simulation. How does such a device actually work so well as to deceive us?
“VR headsets either use two LCD displays (one per eye) or two feeds sent to one display. Headsets also have lenses placed between your eyes and the screen, which are used to focus and reshape the picture for each eye. They create a stereoscopic 3D image by angling the two 2D images. This is because the lenses mimic how each of our two eyes sees the world very slightly differently.
VR headsets also need to have a minimum frame rate of at least 60 frames per second in order for the user not to feel sick. Current VR headsets are able to go well beyond this, with the Oculus and the HTC Vive at 90 frames per second and PlayStation VR at 120 frames per second.
For VR to work properly, when you move your head up and down or side to side or tilt your head, the picture has to move with your head. Headsets use a system called six degrees of freedom (6DoF), which tracks your head's position along the X, Y, and Z axes to measure head movements.
There are a couple of different components used in a head-tracking system, including a gyroscope, an accelerometer, and a magnetometer. The PlayStation VR also uses 9 LEDs around the headset, which provide 360-degree head tracking via an external camera that monitors these signals.”
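As a rough illustration of the head-tracking side of this, the toy Python snippet below fuses a gyroscope reading (fast but prone to drift) with an accelerometer reading (noisy but anchored to gravity) for a single rotation axis, a simplified complementary filter. Real headsets fuse all three sensors into a full 3D orientation; the blending factor and sample rate here are illustrative assumptions, not vendor values.

```python
import math

# Toy complementary filter for a single rotation axis (pitch). A gyroscope is
# fast but drifts; an accelerometer is noisy but anchored to gravity. Blending
# the two is one simple way a tracker can keep orientation stable. The 0.98
# blend and 90 Hz rate are illustrative, not values from any real headset.

def fuse_pitch(prev_pitch, gyro_rate, accel, dt, alpha=0.98):
    """Estimate pitch (radians) from one sensor sample.

    prev_pitch: previous pitch estimate (rad)
    gyro_rate:  angular velocity about the pitch axis (rad/s)
    accel:      (ax, ay, az) accelerometer reading (m/s^2)
    dt:         time since the last sample (s)
    """
    gyro_pitch = prev_pitch + gyro_rate * dt            # fast, but drifts
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))   # noisy, but drift-free
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

if __name__ == "__main__":
    pitch, dt = 0.0, 1 / 90   # 90 Hz sampling, matching a 90 fps headset
    # A steady gyro reading while the accelerometer still reports gravity
    # straight down: the filter integrates the gyro but is gently pulled
    # back toward the accelerometer's (level) estimate.
    for _ in range(90):
        pitch = fuse_pitch(pitch, gyro_rate=0.1, accel=(0.0, 0.0, 9.81), dt=dt)
    print(f"estimated pitch after 1 s: {math.degrees(pitch):.1f} degrees")
```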
Although the VR display sits just an inch or so from one's eyes, the depth of vision and the feeling of overwhelming presence when fighting Darth Vader with a lightsaber, or flying through our solar system and touching Jupiter, are truly mind-boggling. Yet, and this is a point worth underlining and repeating: the whole display is a sophisticated digital deception. Our brains are wired such that it doesn't take much to convince us that an illusion is real, even when we are aware of the mechanics behind such magic.
This is why I believe virtual reality and augmented reality technologies are pivotal instruments in exploring how consciousness may arise from a subset of neuronal components. In a talk I gave to the Philosophy Club at the University of California, Irvine this past year I touched upon the reasons why.
Donald Hoffman
Earlier religious traditions, as found in Buddhism, Jainism, Hinduism, and Christian Gnosticism, argued that we live in an illusion which betrays its real origin. Just as the user interface on our smartphones hides the inner workings of computer programming and electronic circuit design, the world of appearances masks the underlying physics and chemistry that give rise to it. Herein lies the secret of consciousness, according to Donald Hoffman, Professor of Cognitive Science at the University of California, Irvine, who believes that a radical new approach is needed to understand the emergence of self-awareness. Writes Hoffman,
“Evolution has shaped us with perceptions that allow us to survive. But part of that involves hiding from us the stuff we don't need to know. And that's pretty much all of reality, whatever reality might be.”[1]
Hoffman believes that with the desktop interface we have for the first time the appropriate metaphor to understand how something that looks real on the surface is anything but. Elucidates Hoffman,
“Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.”[1]
While I agree with much of what Hoffman suggests (except that I find his concept of “conscious agents” confusing at best), I think virtual reality is a far more powerful metaphor for explaining how and why consciousness works, since it is a totally encompassing environment, similar in so many ways to how we currently experience the world, both within and without, via our own subjective awareness. Yet the VR headset is a mind manipulator par excellence, just as our own consciousness is an informational playground. Of course, the fundamental difference is that with VR we can instantly inhabit what before we could only dimly imagine.
But in both VR (virtual reality) and SA (self-awareness) we are, as Donald Hoffman explains, “not seeing the innards of reality.”
As he elaborates,
“Suppose there's a blue rectangular icon on the lower right corner of your computer's desktop—does that mean that the file itself is blue and rectangular and lives in the lower right corner of your computer? Of course not. But those are the only things that can be asserted about anything on the desktop—it has color, position and shape. Those are the only categories available to you, and yet none of them are true about the file itself or anything in the computer. They couldn't possibly be true. That's an interesting thing. You could not form a true description of the innards of the computer if your entire view of reality was confined to the desktop. And yet the desktop is useful. That blue rectangular icon guides my behavior, and it hides a complex reality that I don't need to know. That's the key idea. Evolution has shaped us with perceptions that allow us to survive. They guide adaptive behaviors. But part of that involves hiding from us the stuff we don't need to know. And that's pretty much all of reality, whatever reality might be. If you had to spend all that time figuring it out, the tiger would eat you.”[1]
As I mentioned earlier, our brains are rendering machines: as they parse tiny bits of incoming data streams, they fashion the very worlds we inhabit. But this act of rendering this or that scene is predicated to a large degree on where we focus our attention. In a very real sense we have limited bandwidth and can take in only a tiny fraction of all that is available to us at any particular moment. But herein lies the key: the limitations of our brain (like the limitations of our present-day VR headsets) mean that evolution has modeled us to take in what is sufficient for our survival; it doesn't allow too much continual streaming of information, since that would in effect make us catatonic and unresponsive, entirely swamped by data overload.
Moreover, how we respond to differing situations is directly correlated with how we apportion time. Our brains are sophisticated motion detectors, and the way we navigate our day-to-day interactions depends on how well we measure time intervals. Each species has its own unique time-lapse camera, so to say, since nothing arrives on time but rather in time, and even then it has much to do with environmental conditions. In virtual reality this is known as the latency problem, which has been a major obstacle for software programmers to get right. Even the slightest delay or lag in VR can alter the user's experience and jolt them out of the game's immersive engagement. In a remarkably salient paper entitled “Toward Low-Latency and Ultra-Reliable Virtual Reality,” Mohammed S. Elbamby, Cristina Perfecto, Mehdi Bennis, and Klaus Doppler (Centre for Wireless Communications, University of Oulu, Finland) elaborate:
In VR environments, stringent latency requirements are of utmost importance for providing a pleasant immersive VR experience. The human eye needs to perceive accurate and smooth movements with low motion-to-photon (MTP) latency, which is the lapse between a movement (e.g., head rotation) and the moment a frame's pixels corresponding to the new FOV have been shown to the eyes. High MTP values send conflicting signals to the vestibulo-ocular reflex (VOR), a dissonance that might lead to motion sickness. There is broad consensus in setting the upper bound for MTP to less than 15-20 ms. Meanwhile, the loopback latency of 4G under ideal operation conditions is 25 ms.
The challenge for bringing end-to-end latency down to acceptable levels starts by first understanding the various types of delays involved in such systems to calculate the joint computing and communication latency budget. Delay contributions to the end-to-end wireless/mobile VR latency include sensor sampling delay, image processing or frame rendering computing delay, network delay (queuing delay and over-the-air delay), and display refresh delay. The sensor delay's contribution (<1 ms) is considered imperceptible by users, and the display delay (~10-15 ms) is expected to drop to 5 ms, which leaves 14 ms for computing and communication.
Both computing and communication delays serve as bottlenecks in VR systems. Heavy image processing requires high computational power that is often not available in the local HMD GPUs. Offloading computing tasks to remote cloud servers significantly relieves the computing burden on the users' HMDs at the expense of incurring additional communication delay in both directions. Unlike MR and AR, where uploading video streams to the cloud may be required, the uplink communication delay due to offloading the computing task to the server is typically very small in VR, owing to the small amount of data needed, e.g., user tracking data and the interactive control decisions. However, the downlink delivery of the processed video frames in full resolution can significantly contribute to the overall delay. Current online VR computing can take as much as 100 ms, and communication delay (edge of network to server) can reach 40 ms. Therefore, relying on remote cloud servers is a more suitable approach for low-resolution, non-interactive VR applications, where the whole 360° content can be streamed and the constraints on real-time computing are relaxed. Interactive VR applications require real-time computing to ensure responsiveness. Therefore, it is necessary to shrink the distance between the end users and the computing servers to guarantee minimal latency.[2]
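The arithmetic behind that 14 ms figure is worth making explicit. The small Python sketch below simply restates the budget from the quoted passage: subtract the sensor and display delays from the motion-to-photon bound and see what remains for computing plus communication. The numbers are Elbamby et al.'s; the helper function is only illustrative bookkeeping.

```python
# Illustrative bookkeeping only: the motion-to-photon (MTP) budget described
# in the quoted passage. The delay figures are from Elbamby et al.; the
# function itself is not from their paper.

def remaining_budget_ms(mtp_bound=20.0, sensor_delay=1.0, display_delay=5.0):
    """Milliseconds left for rendering and network transfer once sensor
    sampling and display refresh are accounted for."""
    return mtp_bound - sensor_delay - display_delay

if __name__ == "__main__":
    budget = remaining_budget_ms()   # 20 - 1 - 5 = 14 ms, as in the paper
    print(f"compute + communication budget: {budget:.0f} ms")
    # For scale: the passage cites ~100 ms of cloud VR computing, up to 40 ms
    # of network delay, and ~25 ms of ideal 4G loopback latency. Each alone
    # already exceeds the 14 ms budget, hence the case for edge computing.
```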
In analyzing the nitty-gritty of human consciousness and its relationship to the brain, understanding rendering and latency is a fundamental necessity, as any deficiency in one or the other or both can have catastrophic consequences. Nature, over the course of millions of years, came up with solutions that have served us well in our continuing struggle for existence. Because VR technology is attempting to seamlessly replicate what our brain has already achieved, it provides a tantalizing pathway for us to see how our own wetware accomplished such a remarkable feat. In many ways, VR and AR are perhaps the most practical means we presently have to understand the engineering nuts and bolts of consciousness. Even if such work proves insufficient to complete the entire task, continued research and testing in virtual and augmented reality (replete with all the devices that will be created and tested as a result) will undoubtedly help us better grasp the intricate difficulties that must be resolved before self-reflective awareness can be operative.
It makes absolutely no sense to argue that we can comprehend the human mind without first knowing how the brain works.
In this regard, I don't think ample progress can be made in consciousness studies if we somehow bypass an exhaustive study of the brain and its labyrinthine structures. This is why I believe that philosophers of mind should follow the lead of Patricia and Paul Churchland, who as professionally trained philosophers have spent their careers championing neuroscience as a necessary route for anyone interested in studying awareness. It makes absolutely no sense to argue that we can comprehend the human mind without first knowing how the brain works. That is akin to saying that whenever your car breaks down, it is unnecessary to open the hood and look at the engine, since the engine and the vehicle are wholly separate things.
Yes, there are certainly thinkers (such as the computer scientist Bernardo Kastrup and the New Age thinker Ken Wilber) who believe that consciousness is not a product of three pounds of grey matter but a distinct force in itself. In their disdain for mechanism, however, they tend to postulate myths that are untestable and that, even if believed and followed, don't produce any practical benefits along the way to their own falsification. A neural focus on consciousness, by contrast, even if it ultimately turns out to be an insufficient explanation, produces all sorts of beneficial byproducts, not the least of which is eliminating long-held intuitions as mistaken.
We should remember that the history of science is to a very large measure the overthrow of lesser gods, whether it be Ra, Thor, Zeus, or the notion of an Intelligent Designer. It is precisely because of the process of eliminative materialism that we got rid of an outdated astrological interpretation of the cosmos and established the progressive science of astronomy; likewise, when we got a deeper understanding of how molecules and elements behaved we discarded alchemy for chemistry.
Thus, the reason virtual and augmented reality are such powerful tools is that they allow us to create hardware that by its very nature must mimic how we perceive, hear, and interact with the world, even if it is only a simulacrum of what we experience around us. Annaka Harris, author of the panpsychism-influenced text Conscious: A Brief Guide to the Fundamental Mystery of the Mind, defines consciousness simply as “experience,” the qualia of what it is like to see a blue ocean or to be in love. It is a wholly subjective sense which cannot be objectified and studied without losing the very quality that defines it, to “be like something and not something else.”
Interestingly, VR is predicated on experience itself and creating a sense of presence. That is its chief draw and what attracts gamers and others to it. Of course, we must keep in mind that all maps and models are less than the territory to which they point and therefore we should not expect that VR technologies in themselves will unravel the secret to human consciousness and its emergence. But, like the simpler 2D computational models it replaces, it is a step forward and yields insights that were not possible before.
This point was driven home to me when I met Kenneth Williams one day outside my office at Mt. San Antonio College. Five years ago, Kenneth was T-boned in a horrific car accident which left him a quadriplegic. He was fascinated by what virtual reality had to offer, since an Oculus Quest headset would allow him to experience places and things that he couldn't, such as what it was like to ride a scary roller coaster, ride in a rocket ship, or climb a mountain. A number of VR applications and games provided him with these experiences and more. Kenneth was experiencing what it was like to do something otherwise denied to him.
I would argue that consciousness evolved because it ushered in a way to have a depth of experiences within our own cranium (and ponder their past, present, and future iterations) without the life-and-death consequences of externalizing those imaginings in the real world. In other words, our brains are nature's own version of a biological VR headset. It makes perfect sense that, in trying to create our own synthetic versions, we will gain a richer and more robust understanding of how evolution created one through natural selection.
[2] M. S. Elbamby, C. Perfecto, M. Bennis and K. Doppler, "Toward Low-Latency and Ultra-Reliable Virtual Reality," in IEEE Network, vol. 32, no. 2, pp. 78-84, March-April 2018.
David Chalmers: The Philosophy of Virtual Reality (Aeon Video)