Integral World: Exploring Theories of Everything
An independent forum for a critical discussion of the integral philosophy of Ken Wilber
Andrea Diem-Lane is a tenured Professor of Philosophy at Mt. San Antonio College, where she has been teaching since 1991. She received her Ph.D. and M.A. in Religious Studies from the University of California, Santa Barbara. Dr. Diem earned her B.A. in Psychology from the University of California, San Diego, where she conducted original research in neuroscience on visual perception on behalf of V.S. Ramachandran, the world-famous neurologist and cognitive scientist. Professor Diem has published several scholarly books and articles, including The Gnostic Mystery and When Gods Decay. She is married to Dr. David Lane, with whom she has two children, Shaun-Michael and Kelly-Joseph. Republished with permission. See also her Darwin's DNA: A Brief Introduction to Evolutionary Philosophy, published on Integral World.

Part I | Part II | Part III | Part IV | Recommended Readings

Quantum Weirdness

Spooky Physics, Einstein vs. Bohr, Part I

Andrea Diem-Lane

Introduction: An Age-Old Problem

To say that neurology is ontology is merely to state the obvious.

What is truly real? And how do we know? These twin questions, known in philosophical jargon as ontology and epistemology, are of central importance in understanding the dramatic implications of quantum theory. Indeed, one could argue that the reason quantum theory is so baffling is that it upends our deepest and most cherished ideas about what is real and what is not. Moreover, quantum theory calls into question the very process of how we know things. It is for this reason that Albert Einstein resisted the implications of quantum theory: he knew that what it portended was an end to determinism and an end to a strict causality governing the universe. Of course, for others like Niels Bohr, accepting such indeterminism, even if intellectually disagreeable, is precisely what the theory demands. In other words, it is indeterminism itself which informs all of quantum theory, and to neglect that marked characteristic is to ignore its most vital feature.

Einstein ultimately found the implications of quantum theory so unsettling that he made a number of terse remarks about it. In a letter to Max Born, dated September 1944, he wrote, “You believe in the God who plays dice, and I in complete law and order in a world which objectively exists, and which I, in a wildly speculative way, am trying to capture. I hope that someone will discover a more realistic way, or rather a more tangible basis than it has been my lot to find. Even the great initial success of the Quantum Theory does not make me believe in the fundamental dice-game….” In fact, Einstein spent a good part of his life trying to devise thought experiments which would demonstrate the incompleteness of quantum theory and show why it was at best an interregnum theory which would in time yield to a more reasonable and deterministic one. As he quipped to Born, “Although I am well aware that our younger colleagues interpret this as a consequence of senility. No doubt the day will come when we will see whose instinctive attitude was the correct one.”

This book is a brief introduction to the famous Einstein-Bohr debate over the implications of quantum theory with a special focus on the philosophical ramifications of Heisenberg's uncertainty principle. We are fortunate that there exists a fairly extensive record of the conversations between these two eminent thinkers. Indeed, it isn't hyperbolic to call the Einstein-Bohr conflict one of the greatest intellectual debates of modern times, nay of any time period.

This book begins with an overview of quantum theory and its early development. It also explores some of its weirder aspects, including the dual aspect of light quanta. In Chapter two, we explore why Einstein found many aspects of quantum theory so disagreeable, especially the uncertainty relations, whereby precisely knowing an electron's position increases our ignorance of its momentum, and vice versa. Chapter three centers on why Bohr accepted quantum indeterminism (what he called complementarity) and encouraged his colleagues and students (such as Pauli, Born, and Heisenberg) to play out its consequences to the fullest—what would later be famously called the Copenhagen school. The most heated section of the book comes in Chapter four, where we get to witness (both through transcriptions of talks given at the time and through extensive correspondence, particularly the letters to and from Max Born) the passion of Einstein's arguments against quantum theory and Bohr's equal passion for it. Although both Bohr and Einstein have been dead for decades, Chapter five illustrates that their debate lives on and why it remains a very hot topic even among a newer generation of physicists. And, finally, in the conclusion we ask what this debate means for us and our day to day lives.

Our evolution has bounded what we can and cannot know about the world around us. Because of this, our brains are not well adapted to understand either the very large or the very small. We are quite literally middling creatures that have been shaped over eons to survive in eco-niches where our food and prey are accessible to our five sensory apertures. What this means, of course, is that whenever we venture beyond our middle earth by extending our senses to the very large or very small, we have to acclimate ourselves anew.

The history of science is a record of how man achieved such acclimations and how, in turn, such new insights transformed his understanding of how the universe actually works. Whether it was Galileo's telescope (revealing a pockmarked moon versus a polished lunar surface) or Copernicus's mathematical equations (indicating a sun-centered orbital system versus an earth-centered one), in each case sensory or mental breakthroughs led to revolutions in scientific thought. It may be no exaggeration to say that whenever man has altered his bodily or cranial limits he has extended his world, a world which is forever linked to the limitations of what his senses can and cannot reveal.

To say that neurology is ontology is merely to state the obvious. But what sometimes gets lost in such clichés is that man's brain state is never static and thus his world is never the same either. Change the neural apparatus and one transforms the universe. Not necessarily because the brain creates such realities, but rather because the limitations of one's cranial capacities predetermine what is accessible or knowable about any given aspect of reality. Change those neural coordinates and one thereby changes one's intellectual map.

All of this is necessary preface to understand why the human mind has an almost innate difficulty in understanding quantum theory—a theory which takes into account things so infinitesimally tiny that even our best analogies freeze our minds in a state of wonder.

Ludwig Wittgenstein gives us a fruitful glimpse of just how contradictory quantum physics can be and why it demonstrates prima facie its almost inherent illogical nature. Writing several years before Werner Heisenberg's Nobel Prize-winning discovery of the uncertainty relations in the subatomic realm, Wittgenstein states in his famous Tractatus Logico-Philosophicus:

“6.3751: For two colours, e.g. to be at one place in the visual field, is impossible, logically impossible, for it is excluded by the logical structure of colour. Let us consider how this contradiction presents itself in physics. Somewhat as follows: That a particle cannot at the same time have two velocities, i.e. that at the same time it cannot be in two places, i.e. that particles in different places at the same time cannot be identical. It is clear that the logical product of two elementary propositions can neither be a tautology nor a contradiction. The assertion that a point in the visual field has two different colours at the same time, is a contradiction.”

Today, of course, quantum physicists state the opposite of Wittgenstein's logical necessity about the behavior of matter and point out that indeed a particle can be in two places at the same time, even if that space and time is limited in its regional and temporal import.

What Wittgenstein captured (quite unwittingly since his Tractatus dates from the latter part of the First World War) was how a rational, logical mind would be upended by the implications of quantum theory. Moreover, he provides us with a framework for why it may be so difficult for many of us to actually “get” quantum theory. As Richard Feynman, the well-known architect behind Quantum Electrodynamics (QED) once quipped, “I think I can safely say that nobody understands quantum mechanics.”

I can think of no better caveat than Feynman's when approaching this most profound of subjects. Listening in on the Einstein-Bohr debate may not resolve our own existential dilemmas, but it will undoubtedly put into sharp relief what is at stake when confronting the heart of matter itself.

Chapter One: Quantum Weirdness

Imagine taking a road trip to Las Vegas, Nevada, from Huntington Beach, California. Depending on the traffic, and how fast one is driving, it may take anywhere from five to eight hours. However, there are sections along the way (particularly near Barstow) where speeding too fast will most likely result in being stopped by the highway patrol and receiving a ticket. This can happen even when there appears to be no law enforcement officers in sight.

Why? Because of sophisticated radar tracking stations that monitor traffic flow. Radio waves emitted from the station spread out in varying directions, and when they hit a moving object, some of those waves bounce back and are received again at the tracking station, carrying two key pieces of information: the position of the vehicle and its momentum. This may seem a bit trivial, but these forms of information are absolutely vital to understanding almost anything in the physical universe. Indeed, one can exaggerate here a bit and say almost all of physics is based on these two points of data. Knowing only the position of the car, for instance (“hey, there's a Ford truck in Duarte”), isn't sufficient to warrant a speeding ticket. And knowing only the momentum of the vehicle but not its whereabouts is a bit frustrating.

Now this fairly trite example can be applied to almost any event in our day to day lives, from when to attend a lecture at the local college, to when we pick up our children from elementary school, to when we pick up pizza at our neighborhood restaurant. The civilized world is fundamentally based on knowing both the position and momentum of physical objects, including when and where to pick up our spouses from shopping at Target.

Newtonian physics is a picture of this mechanistic and predictable universe and, as such, serves us well in navigating our lives through most events. However, when we begin to look at bits of matter that are extraordinarily small, this same guiding map breaks down.

Imagine now that instead of taking a car to Las Vegas, you are riding on a single electron (to slightly butcher Einstein's more famous metaphor of riding on a beam of light), traveling much faster than the speed limit of 70 miles per hour. Indeed, you would be approaching the ultimate speed at which anything can travel, 186,000 miles per second. Clearly, such speeding warrants a ticket! But, in this instance, the electron police find themselves in a very strange conundrum. What they discover to their chagrin is that the more they comprehend how fast the electron is traveling, the less they know about where exactly it is located. And, then, when they do get a fix on its position, they lose sight of its momentum.

What they soon realize is that their very act of measuring is interfering with the electron's ultimate position and/or momentum. It is as if the radar itself (which is in truth nothing more than electromagnetic energy) is literally moving the electron at different speeds and/or in different directions.

This is akin to being in a car and having the radar either bump up your speed by a hundred miles an hour or transport you to another freeway, sending you on your way to San Diego. Something is clearly wrong with this picture. Something is clearly breaking down. And if this happened on our wayward drives to cities in the desert, the highway patrol's radar tracking station would be directly responsible for our speeding violations or for our confused and haphazard sojourns. One could literally say to the ticketing patrolman, “But you made me speed and/or switch lanes. Therefore, you should be giving yourself the ticket, not me.” And, given what we know about how the radar interfered with your car, the judge would be forced to admit the obvious, let you off, and reprimand the traffic station.

Enter the weird and twisted world of quantum mechanics. While the measurement interference in our car example only becomes real at the level of the very small, it is disconcerting nonetheless to realize that Newtonian physics breaks down precisely when one gets closer to the secrets of Nature. In order to understand why uncertainty increases when we explore the very small, we need first to understand what Max Planck discovered over a century ago when studying black body radiation.

Instead of radiant energy being emitted purely in waves whose frequencies could be halved ad infinitum, Planck theorized (though apparently he thought his view would be only a temporary bridge) that energy came in discrete packets, quantized bits of energy, known more popularly later as “quanta.” A quantum cannot be halved, and thus electromagnetic energy can only come in multiples of this basic unit of energy, whose scale is set by what later became famously known as Planck's constant.

This is analogous to going to the store to buy a bottle of Classic Coke. Let's say the 20 ounce bottle costs one dollar and twenty-nine cents (the current price at our local 7-Eleven) and you give the clerk 2 bucks. Now imagine that when you get your change of 71 cents you object to the penny and argue that you want something “smaller” than the penny, like 1/10 of a cent or even ½ of a cent. The clerk will no doubt look at you a bit strangely and he or she may reply, “But we have nothing less than a penny. That's the lowest amount of money available.”

You cannot “halve” a penny in our day to day world. Likewise, you cannot halve a quantum. Nature, it seems, has decided that the smallest unit of exchange is the single photon and apparently there is no way around this. You literally cannot “short” change nature, even if your physicist's intuition suggests that you should be able to. Of course Planck's constant is indeed very small: 6.626068 × 10⁻³⁴ m²·kg/s.
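The penny analogy can be made concrete with a little arithmetic. The sketch below is my own illustration, not from the text: it assumes the standard relation that a photon of frequency ν carries energy E = h × ν, so a beam of light of that frequency can only hold whole-number multiples of that energy. The frequency chosen for green light is an illustrative value.

```python
# Illustrative sketch: light of a given frequency comes only in whole
# photons, each carrying energy E = h * nu (h is Planck's constant).

H = 6.62607015e-34  # Planck's constant, in J*s (m^2 kg / s)

def photon_energy(frequency_hz):
    """Energy of a single quantum (photon) at the given frequency."""
    return H * frequency_hz

def beam_energy(n_photons, frequency_hz):
    """Total energy of n photons: only whole multiples are allowed,
    just as change comes only in whole pennies."""
    return n_photons * photon_energy(frequency_hz)

# Green light, roughly 5.45e14 Hz: one photon is about 3.6e-19 joules.
print(photon_energy(5.45e14))
# A beam can carry 1000 quanta of this light -- never 999.5.
print(beam_energy(1000, 5.45e14))
```

The point of the sketch is simply that `n_photons` must be an integer: nature makes change only in whole pennies.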

So small, in fact, that our minds cannot really grasp, except with faulty analogies and metaphors, the tininess of the subatomic realm. The implications of the quantization of energy were not lost on Albert Einstein, who used Planck's insight to develop his theory of the photoelectric effect, for which he eventually won his only Nobel Prize.

What was so disconcerting about Planck's discovery (or, should we say, unintended uncovering) and Einstein's photoelectric effect was that for decades physicists had experimentally demonstrated that light acted like a wave, but now there was evidence of its particle-like nature. This dualistic realization about the nature of matter forced the world of science into a theoretical crisis. How can it be both? Or, as the experiments at the time indicated, how is it that in one context light propagates as if it were a wave and in another context light behaves as if it were composed of tiny bullets? Is nature so capricious?

Moreover, if light is both a wave and a particle why is it that only one aspect (but not both) shows up in varying experimental designs? Do we really choose how light is going to behave?

The famous double slit experiment illustrates very clearly the inherent weirdness of the quantum world. Richard Feynman, who won the Nobel Prize in Physics in 1965, has stated that analyzing this experiment alone can reveal the deep mysteries of quantum mechanics.

There have been a large number of books (and even a few films) which explain how the double slit experiment works. It was first devised by Thomas Young in the early part of the 19th century: he set up a barrier with two openings and passed a beam of light through the apertures so that it would then hit an adjoining wall. What he found was that when light passed through these two slits it produced an interference pattern, showing that light had a wave-like aspect. However, later experiments showed that if only one slit was open, light acted as a discrete packet (a quantum), demonstrating that it had a particle or bullet-like aspect.

How the light behaves depends on how the experiment is set up. Open up just one slit and light acts like a particle. Open up two slits and light acts like a wave. But the real question (and the one at the heart of quantum weirdness) is: how does the light know whether the other slit is open or closed? Even if only one photon is allowed to pass through one slit, if the other slit is open it will act like a wave. If, however, the other slit is closed, the light will be particle-like.
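The alternating bright and dark bands can be sketched numerically. This is my own illustration, not from the text, and it assumes the standard far-field two-slit formula: with both slits open, the relative intensity at angle θ is modulated by cos²(π·d·sinθ/λ), where d is the slit separation and λ the wavelength (both values below are illustrative); with only one slit open there is no such modulation and hence no bands.

```python
import math

WAVELENGTH = 500e-9   # illustrative: green light, in meters
SLIT_SEP = 10e-6      # illustrative: slit separation d, in meters

def two_slit_intensity(theta):
    """Relative intensity at angle theta with BOTH slits open:
    the two paths interfere, modulating brightness as cos^2."""
    phase = math.pi * SLIT_SEP * math.sin(theta) / WAVELENGTH
    return math.cos(phase) ** 2

# Straight ahead the two path lengths are equal: a bright fringe.
print(two_slit_intensity(0.0))  # 1.0

# The first dark band sits where the path difference is half a wavelength.
theta_dark = math.asin(WAVELENGTH / (2 * SLIT_SEP))
print(round(two_slit_intensity(theta_dark), 9))  # 0.0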

Andrew Zimmerman Jones does a brilliant job of explaining the double slit experiment and why it is so mysterious:

It became possible to have a light source that was set up so that it emitted one photon at a time. This would be, literally, like hurling microscopic ball bearings through the slits. By setting up a screen that was sensitive enough to detect a single photon, you could determine whether there were or were not interference patterns in this case.

One way to do this is to have a sensitive film set up and run the experiment over a period of time, then look at the film to see what the pattern of light on the screen is. Just such an experiment was performed and, in fact, it matched Young's version identically - alternating light and dark bands, seemingly resulting from wave interference.

This result both confirms and bewilders the wave theory. In this case, photons are being emitted individually. There is literally no way for wave interference to take place, because each photon can only go through a single slit at a time. But the wave interference is observed. How is this possible? Well, the attempt to answer that question has spawned many intriguing interpretations of quantum physics, from the Copenhagen interpretation to the many-worlds interpretation.

Now assume that you conduct the same experiment, with one change. You place a detector that can tell whether or not the photon passes through a given slit. If we know the photon passes through one slit, then it cannot pass through the other slit to interfere with itself.

It turns out that when you add the detector, the bands disappear! You perform the exact same experiment, but only add a simple measurement at an earlier phase, and the result of the experiment changes drastically.

Something about the act of measuring which slit is used removed the wave element completely. At this point, the photons acted exactly as we'd expect a particle to behave. The very uncertainty in position is related, somehow, to the manifestation of wave effects.

As we will see later on, how one interprets this experiment will have deep philosophical repercussions.

Whether light manifests as a wave or a particle depends (literally) on our measuring device. And even then we cannot know both the momentum and position of that particle/wave with absolute precision. For instance, the more we know about the electron's position, the less we know about its momentum, and vice versa.
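This trade-off can be put into numbers. A minimal sketch, assuming the standard form of the relation, Δx·Δp ≥ ħ/2 (the position spreads below are illustrative values of my own, not from the text):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def min_momentum_spread(delta_x):
    """Smallest momentum uncertainty allowed once position is known
    to within delta_x, from delta_x * delta_p >= hbar / 2."""
    return HBAR / (2 * delta_x)

# Pin the electron down to roughly an atom's width (~1e-10 m)...
p_atom = min_momentum_spread(1e-10)
# ...then to roughly a nucleus's width (~1e-15 m): the unavoidable
# momentum spread grows by the same factor of 100,000.
p_nucleus = min_momentum_spread(1e-15)
print(p_nucleus / p_atom)  # ~1e5: sharper position, blurrier momentum
```

Note that nothing here is a limit of instrumentation; on this reading the trade-off is built into the quantities themselves.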

Werner Heisenberg, whose name is forever attached to this discovery (thus we have “Heisenberg's uncertainty principle”), realized, to the ultimate consternation of Einstein, that this indeterminism forced physicists to take a different approach when trying to resolve the quantum's secrets. Instead of an exact causal explanation of any singular electron dance, what was needed was a probabilistic model of how the electron or any subatomic material behaved. It is as if nature itself were a gambling device and what it paid out was determined by odds.

The glitch, of course, for any would-be gambler (or should we say any would-be quantum mechanic) is that he or she never actually knows in one throw of the dice or one pull of the slot lever whether they are going to win or lose.

How to get around this impasse? How to beat the odds? Well, you can't actually in one throw or one pull, but you can if you gamble enough.

For instance, take a coin toss. It is either going to be heads or tails, but never both, if you let it land on the ground (and not, on the very rare occasion, on its side). The odds are 50/50, which can also be translated as “I don't know.” You can guess that it will be one or the other, but you can never absolutely know the outcome of one isolated toss of the coin.

However, if you are allowed to toss the coin many times (a thousand or more times, for instance), then something else comes into play: probability functions. That is, the more you toss the coin the more you start to see patterns emerge which, given the science of statistics, will provide you with a fairly accurate gauge of what you can expect to see if you were allowed to toss a coin a million times.

A good example of this is drawn right from Las Vegas casinos (hopefully, your car has reached there by now without too much interference). The casino does not know whether or not you are going to win at the poker slot machine when you put in your dollar. Indeed, if the gambling establishment is genuine and not rigged, then it CANNOT know precisely. However, since the casino makes significant amounts of money, the question arises very simply: how can that be possible if it is due to chance only? The answer is both simple and profound. Yes, the casino does not know in advance what “dice” the gambler is going to throw down on the craps table in any one isolated event, but it does have a very good understanding of the probable odds of how many winners and losers it will get if the game is played enough. This, of course, is the science of statistics.

For instance, if you toss a quarter, there is a 50/50 chance you will get heads or tails. The odds are evenly split. However, if you toss that same coin, say, 1000 times, you will start to see a certain pattern emerge. You will quickly notice how difficult it is to get 200 heads in a row or 500 tails in a row. You will start to be able to calculate the odds of certain patterns emerging or not emerging. These odds, or mathematical probabilities, will start to give you some “certainty” even when dealing with something fundamentally uncertain.

So, if a friend of mine wants to bet me that he can get 100 heads in a row, the next question I must ask him is, “How many times are you going to throw it?” Because the number of throws will either increase or lessen my confidence in taking up his bet. If he says, ah, give me a thousand tries, I would easily take him up on his bet. If, however, he starts to talk about a trillion times a trillion tries, I wouldn't venture such a wager.
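The betting logic above is easy to check by simulation. The sketch below is my own illustration, not from the text: it tosses a fair coin with a fixed random seed, showing that while any single toss is pure chance, the longest run of heads in 1000 tosses falls in a narrow, statistically predictable band, and the odds of 100 straight heads make the friend's wager a safe one to take.

```python
import random

def longest_run_of_heads(n_tosses, rng):
    """Longest consecutive run of heads in n_tosses fair-coin flips."""
    longest = current = 0
    for _ in range(n_tosses):
        if rng.random() < 0.5:   # heads
            current += 1
            longest = max(longest, current)
        else:                    # tails resets the run
            current = 0
    return longest

rng = random.Random(42)          # fixed seed so the sketch is repeatable
# In 1000 tosses the longest run is typically around 9 or 10 --
# nowhere near the 100 straight heads the friend is wagering on.
print(longest_run_of_heads(1000, rng))

# The chance of 100 straight heads on any single attempt of 100 tosses:
print(0.5 ** 100)  # ~7.9e-31
```

This is the casino's position in miniature: ignorance about each toss, near-certainty about the aggregate.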

Quantum mechanics is essentially a probabilistic model of how an uncertain realm can yield quite predictable, even if occasionally quite odd, outcomes and trajectories.

This leads, however, to all sorts of strange and unusual paradoxes. A recent cover story on quantum theory in the New Scientist by Gregory T. Huang presents four famous illustrations of quantum weirdness.

Schrödinger's cat
Conventional quantum theory says that particles can be in a superposition of two states at once. This leads to the thought experiment of a cat being both alive and dead inside a box, depending on the state of a toxic subatomic particle. Only when you open the box or make a measurement is the animal's fate determined.

Spooky action at a distance
Einstein decried the idea of entanglement - that one particle could instantaneously affect another's spin, say, through a weird quantum link. This phenomenon, also known as non-locality, has since been demonstrated and is a key principle behind quantum computers and communications.

Objective reality
Does the moon exist if nobody is looking at it? Conventional quantum theory says there is no reality beyond what we observe, so in principle things don't exist unless they are being measured.

Uncertainty principle
If you measure the position of a quantum particle, you can't know its momentum precisely, and vice versa. The conventional explanation is that there is randomness inherent in the quantum universe.

Einstein realized that quantum theory gave astounding results and predictions, but he never felt comfortable with it as a final theory. He felt something was amiss: at best quantum theory was an interregnum theory, and in time realism, not indeterminism, would reign once again. Turning now to chapter two, let's discover why Albert Einstein felt that God didn't play dice with the universe and its ultimate governing laws.

To Be Continued...
