I have a lot to say about renormalization; if I wait until I’ve read everything I need to know about it, my essay will never be written; I’ll die first; there isn’t enough time.
Click this link and the one above to read what some experts argue is the why and how of renormalization. Do it after reading my essay, though.
There’s a problem inside the science of science; there always has been. Facts don’t match the mathematics of theories people invent to explain them. Math seems to remove important ambiguities that underlie all reality.
People noticed the problem as soon as they started doing science. The ratio between the circumference of a circle and its diameter was never certain; not when Pythagoras studied it 2,500 years ago, and not now. The number π is the problem; it's irrational, not a fraction; it's a number whose digits run on with no end and no repeating pattern — 3.14159… forever into infinity.
More confounding, π transcends all attempts by algebra to compute it. It is a transcendental number that lies at the crossroads of mathematics and physical reality — a mysterious number at the heart of creation, because without it the circumferences of circles and the surface areas and volumes of spheres could not be calculated to arbitrary precision.
The diameter of a circle must be multiplied by π to calculate its circumference, and the circumference divided by π to recover the diameter. No one can ever know everything about a circle, because the number π is uncertain, undecidable, and in truth unknowable.
Long ago people learned to use the fraction 22/7 or, for more accuracy, 355/113. These fractions give slightly wrong values for π, but they were easy to work with and close enough for engineering problems.
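A few lines of Python show just how close those old fractions come:

```python
import math

# Classic rational approximations to pi, compared against the true value
for name, value in [("22/7", 22 / 7), ("355/113", 355 / 113)]:
    error = abs(value - math.pi)
    print(f"{name} = {value:.10f}  (off by about {error:.1e})")
```

22/7 misses by about a thousandth; 355/113 by less than three parts in ten million — wrong, strictly speaking, but close enough to build bridges with.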
Fast forward to Isaac Newton, the English astronomer and mathematician, who studied the motion of the planets. Newton published Philosophiæ Naturalis Principia Mathematica in 1687. I have a modern copy in my library. It’s filled with formulas and derivations. Not one of them works to explain the real world — not one.
Newton’s equation for gravity describes the interaction between two objects — the strength of attraction between Sun and Earth, for example, and the resulting motion of Earth. The problem is the Moon and Mars and Venus, and many other bodies, warp the space-time waters in the pool where Earth and Sun swim. No way exists to write a formula to determine the future of such a system.
In 1887 Henri Poincaré and Heinrich Bruns proved that such formulas cannot be written. The three-body problem (or any N-body problem, for that matter) cannot be solved in closed form by a single equation. Fudge-factors must be introduced by hand, Richard Feynman once complained. Powerful computers combined with numerical methods seem to work well enough for some problems.
Perturbation theory was proposed and developed. It helped a lot. Space exploration depends on it. It’s not perfect, though. Sometimes another fudge factor called rectification is needed to update changes as a system evolves. When NASA lands probes on Mars, no one knows exactly where the crafts are located on its surface relative to any reference point on the Earth.
Science uses perturbation methods in quantum mechanics and astronomy to describe the motions of both the very small and the very large. A general method of perturbations can be described in mathematics.
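The flavor of those numerical methods is easy to demonstrate. The toy Python sketch below (simplified units, a lone Earth and Sun, no perturbing bodies, every number illustrative) marches an orbit forward with a leapfrog integrator, the kind of brute-force stepping that stands in for the single formula nobody can write:

```python
import math

# Toy two-body integrator in units where G * M_sun = 4*pi^2
# (distances in AU, time in years)
GM = 4 * math.pi ** 2

def accel(x, y):
    """Newtonian gravitational acceleration toward the Sun at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

# Earth-like circular orbit: r = 1 AU, speed = 2*pi AU per year
x, y, vx, vy = 1.0, 0.0, 0.0, 2 * math.pi
dt = 0.0005  # time step in years

# Leapfrog (velocity Verlet): step forward one full year
for _ in range(int(1.0 / dt)):
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay

print(f"after one year: x = {x:.4f} AU, y = {y:.4f} AU")  # close to the start, (1, 0)
```

No single equation predicted that endpoint; two thousand tiny steps did. Add a Moon, Mars, and Venus and the stepping still works, while the closed-form formula never existed in the first place.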
Even when using signals from a half-dozen or more satellite navigation constellations like GPS, deployed in high Earth orbit by various countries, it's not possible to know exactly where anything is. Beet farmers out west combine the navigation systems of at least two countries to hone the courses of their tractors and plows.
On a good day farmers can locate a row of beets to within an eighth of an inch. That’s plenty good, but the several GPS systems they depend on are fragile and cost billions per year. In beet farming, an eighth inch isn’t perfect, but it’s close enough.
Quantum physics is another frontier of knowledge that presents roadblocks to precision. Physicists have invented more excuses for why they can’t get anything exactly right than probably any other group of scientists. Quantum physics is about a hundred years old, but today the problems seem more insurmountable than ever.
Insurmountable?
Why?
Well, the interaction of sub-atomic particles with themselves combined with, I don’t know, their interactions with swarms of virtual particles might disrupt the expected correlations between theories and experimental results. The mismatches can be spectacular. They sometimes dwarf the N-body problems of astronomy.
Worse — there is the problem of scales. For one thing, electrical forces are a billion times a billion times a billion times a billion times stronger than gravitational forces at sub-atomic scales. Forces appear to manifest themselves according to the distances across which they interact. It’s odd.
Measuring the charge on electrons produces different results depending on their energy. High energy electrons interact strongly; low energy electrons, not so much. So again, how can experimental results lead to theories that are both accurate and predictive? Divergent amplitudes that lead to infinities aren’t helpful.
An infinity of scales pile up to produce troublesome infinities in the math, which tend to erode the predictive usefulness of formulas and diagrams. Once again, researchers are forced to fabricate fudge-factors. Renormalization is the buzzword for several popular methods.
Probably the best-known renormalization technique was described by Shinichiro Tomonaga in his 1965 Nobel Prize speech. According to the view of retired Harvard physicist Rodney Brooks, Tomonaga implied that "…replacing the calculated values of mass and charge, infinite though they may be, with the experimental values…" is the adjustment necessary to make things right, at least sometimes.
Isn’t such an approach akin to cheating? — at least to working theorists worth their salt? Well, maybe… but as far as I know results are all that matter. Truncation and faulty data mean that math can never match well with physical reality, anyway.
Folks who developed the theory of quantum electrodynamics (QED) used perturbation methods to bootstrap their ideas to useful explanations. Their work produced annoying infinities until they introduced creative renormalization techniques to chase them away.
At first physicists felt uncomfortable discarding the infinities that showed up in their equations; they hated introducing fudge-factors. Maybe they felt they were smearing theories with experimental results that weren’t necessarily accurate. Some may have thought that a poor match between math, theory, and experimental results meant something bad; they didn’t understand the hidden truth they struggled to lay bare.
Philosopher Robert Pirsig believed the number of possible explanations scientists could invent for phenomena were in fact unlimited. Despite all the math and convolutions of math, Pirsig believed something mysterious and intangible like quality or morality guided human understanding of the Cosmos. An infinity of notions he saw floating inside his mind drove him insane, at least in the years before he wrote his classic Zen and the Art of Motorcycle Maintenance.
The newest generation of scientists aren't embarrassed by anomalies. They "shut up and calculate." Digital somersaults executed to validate their work are impossible for average people to understand, much less perform. Researchers determine scales, introduce "cut-offs," and extract the appropriate physics to make suitable matches of their math with experimental results. They put the cart before the horse more times than not, some observers might say.
Apologists say, no. "Renormalization is simply a reshuffling of parameters in a theory to prevent its failure. Renormalization doesn't sweep infinities under the rug; it is a set of techniques scientists use to make useful predictions in the face of divergences, infinities, and blowup of scales which might otherwise wreck progress in quantum physics, condensed matter physics, and even statistics." (From the YouTube video above.)
It’s not always wise to question smart folks, but renormalization seems a bit desperate, at least to my way of thinking. Is there a better way?
The complexity of the language scientists use to understand and explain the world of the very small is a convincing clue that they could be missing pieces of puzzles, which might not be solvable by humans regardless of how much IQ any petri-dish of gametes might deliver to the brains of future scientists.
It’s possible that humans, who use language and mathematics to ponder and explain, are not properly hardwired to model complexities of the universe. Folks lack brainpower enough to create algorithms for ultimate understanding.
Perhaps Elon Musk’s Neuralink add-ons will help someday.
The smartest thinkers — people like Nick Bostrom and Pedro Domingos (who wrote The Master Algorithm) — suggest artificial super-intelligence might be developed and hardwired with hundreds or thousands of levels — each loaded with trillions of parallel links — to digest all meta-data, books, videos, and internet information (a complete library of human knowledge) to train armies of computers to discover paths to knowledge unreachable by puny humanoid intelligence.
Super-intelligent computer systems might achieve understanding in days or weeks that all humans working together over millennia might never acquire. The risk of course is that such intelligence, when unleashed, might enslave us all.
Another downside might involve communication between humans and machines. Think of a father — a math professor — teaching calculus to the family cat. It’s hopeless, right?
Imagine an expert in AI and quantum computation joining forces with billionaire Musk, who possesses the rocket-launching power of a country. Right now, the two aren't getting along, Elon said. They don't speak. It could be a good thing, right?
What are the consequences?
Entrepreneurs don’t like to be regulated. Temptations unleashed by unregulated military power and AI-attained science secrets falling into the hands of two men — nice men like Elon and Larry appear to be — might push humanity in time to unmitigated… what’s the word I’m looking for?
I heard Elon say he doesn’t like regulation, but he wants to be regulated. He believes super-intelligence will be civilization ending. He’s planning to put a colony on Mars to escape its power and ensure human survival.
Is Elon saying he doesn’t trust himself, that he doesn’t trust people he knows like Larry? Are these guys demanding governments save Earth from themselves?
I haven’t heard Larry ask for anything like that. He keeps a low profile. God bless him as he collects everything everyone says and does in cyber-space.
Think about it.
Think about what it means.
We have maybe ten years, tops; maybe less. Maybe it’s ten days. Maybe the worst has already happened, but no one said anything. Somebody, think of something — fast.
Who imagined that laissez-faire capitalism might someday spawn an airtight autocracy that enslaves the world?
Humans are wise to renormalize their aspirations — their civilizations — before infinities of misery wreck Earth and freeless futures emerge that no one wants.
People assume they see nothing, but in every case, when they look closely — when they investigate — they find something… air, quantum fluctuations, vacuum energy, etc.
Everyone finds no evidence that a state of nothing exists in nature or is even possible.
Physicists know this for sure: there can be no state of absolute zero in nature — not for temperature; not for energy; not for matter. All three are equivalent in important ways and are never zero — at all scales and at all time intervals. Quantum theory — the most successful theory in science some will argue — claims that absolute zero is impossible; it can’t exist in nature.
There can be no time interval exactly equal to zero.
Time exists; as does space (which is never empty); both depend for their existence on matter and energy (which are equivalent).
Einstein said that without energy and matter, time and space have no meaning. They are relative; they vary and change according to the General Theory of Relativity, according to the distribution and density of energy and matter. As long as matter and energy exist, time can never be zero; space can never be empty.
People can search until their faces turn blue for a physical and temporal place where there is nothing at all, but they will never find it, because a geometric null-space (a physical place with nothing in it) does not exist. It never has and never will. Everywhere scientists look, at every scale, they find something.
We ask the question, Why is there something rather than nothing?
Physicists say that nothing is but one state of the universe out of a googolplex of other possibilities. The odds against a state of nothingness are infinite.
Another glib answer is that the state of nothing is unstable. The uncertainty principle says it must be so. Time and space do not exist in a place where nothing exists. Once the instability of nothing forces something, time and space start rolling. A universe cascades out of the abyss, which has always existed and always will. Right?
Think about it. It’s not complicated.
People seem to ignore the plain fact that no one has ever observed even a little piece of nothing in nature. There is no evidence for nothing.
Could it be that the oft-asked question — Why is there something rather than nothing? — is based on a false impression, which is not supported by any evidence?
Cosmic microwave background radiation is a good example. It’s an electromagnetic hum that fills all space. Eons ago the CMB was visible light — photons packed like the molecules of a thick syrup — but space has expanded for billions of years; expansion stretched the ancient visible light into invisible wavelengths called microwaves. Engineers have built sensors to detect them. Everywhere and at every distance microwave light hums in their sensors like a cosmic tinnitus.
Until someone finds evidence for the existence of nothing in nature, shouldn’t people conclude that something exists everywhere they look and that the state of nothing does not exist? Could we not go further and say that, indeed, nothing cannot exist? If it could, it would, but it can’t, so it doesn’t.
Why do people find it difficult, even disturbing, to believe that no alternative to something is possible? Folks can, after all, imagine a place with nothing in it. Is that the reason?
Is it human imagination that explains why, in the complete absence of any evidence, people continue to believe in the possibility of null-spaces — and null-states — and empty voids?
A physical packet (quantum) of vibrating light (a photon) can be said to have zero mass (despite having momentum, which is usually described as a manifestation of mass), because it doesn’t interact with a field now known to fill the so-called vacuum of space — the Higgs Field.
Odder still: massive bodies distort the shape of space and the duration of time in their vicinities; packets of vibrating light (photons), which have no mass, actually change their direction of travel when passing through the distorted spacetime near massive bodies like planets and suns.
Maybe people cling to their belief in the concept of nothingness because of something related to their sense of vision — their sense of sight and the way their eyes and brains work to make sense of the world. Only a tiny interval of the electromagnetic spectrum, which is called visible light, is viewable. Most of the light-spectrum is invisible, so in the past no one thought it was there.
The photons people see have a peculiar way of interacting with each other and with sense organs, which has the effect of enabling folks to sort out from the vast mess of information streaming into their heads only just enough to allow them to make the decisions necessary for survival. They are able to see only those photons that enter their eyes. Were it otherwise humans and other life-forms might be overwhelmed by too much information and become confused.
Folks don’t see a lot of the extraneous stuff which, if they did observe it, would immediately disavow them of any fantasies they might have had about a state of nothingness in nature.
If we were not blind to 99.999% of what’s out there, we wouldn’t believe in the concept of nothing. Such a state, never observed, would seem inconceivable.
The reason there is something rather than nothing is because there is no such thing as nothing. Deluded by their own blindness, humans invented the concept of ZERO in mathematics. Its power as a place holder convinced them that it must possess other magical properties; that it could represent not just the absence of things that they could count, but also an absolute certainty in measurement that we now know is not possible.
ZERO, we have learned, can be an approximation when it’s used to describe quantum phenomena.
When the number ZERO is taken too seriously, when folks refuse to acknowledge the quantum nature of some of the stuff it purports to measure, they run into that most vexing problem in mathematics (and physics), which deconstructs the best ideas: dividing by zero, which is said to be undefined and leads to infinities that blow-up the most promising formulas. Stymied by infinities, physicists have invented work-arounds like renormalization to make progress with their computations.
Because humans are evolved biological creatures who are mostly blind to the things that exist in the universe, they have become hard-wired over the ages to accept the concept of nothingness as a natural state when, it turns out, there is no evidence for it.
The phenomenon of life and death has added to the confusion. We are born and we die, it seems. We were once nothing, and we return to nothing when we die. The concept of non-existence seems so right; the state of non-being; the state of nothingness, so real, so compelling.
But we are fools to think this way — both about ourselves and about nature itself. Anyone who has witnessed the birth of their own child understands that the child does not emerge from nothing but is a continuation of life that goes back eons. And we have no compelling evidence that we die; that we cease to exist; that we return to a state of nothingness.
No one remembers not existing. None of us have ever died. People we know and love seem to have died, physically, for sure. But we, ourselves, never have.
Those who make the claim that we die can’t know for sure if they are right, because they have never experienced a state of non-existence; in fact, they never will. No human being who has ever lived has ever experienced a state of non-existence. One has to exist to experience anything.
Why is there something, not nothing? Because there is no such thing as nothing. There never will be.
A foundation of modern physics is the Heisenberg Uncertainty Principle, right? If this principle is truly fundamental, then logic seems to demand that nothing can be exactly zero.
Nothing is more certain than zero, right? The Uncertainty Principle says that nothing fundamental about our universe can have the quale of certainty. The concept of nothing is an illusion.
An alternative to nothing is something. Something doesn’t require an explanation. It doesn’t require properties that are locked down by certainty. Doesn’t the burden of proof lie with the naysayers?
Find a patch of nothing somewhere in the universe.
It can’t be done.
The properties of things may need to be explained — scientists are always working to figure them out. People want to know how things get their properties and behave the way they do. It’s what science is.
Slowly, surely, science makes progress.
Billy Lee
Afterthought: The number ZERO is a valid place holder for computation but can never be a quantity of any measured thing that isn’t rounded-off. When thought about in this way, ZERO, like Pi (π), can take on the characteristics of an irrational number, which, when used for measurement, is always terminated at some arbitrary decimal place depending on the accuracy desired and the nature of the underlying geometry.
The universe might also be pixelated, according to theorists. Experiments are being done right now to help establish evidence for and against some specific proposals by a few of the current pixel-theory advocates. If a pixelated universe turns out to be fact, it will confound the foundations of mathematics and require changes in the way small things are measured.
For now, it seems that Pi and ZERO — indeed, all measurements involving irrational numbers — are probably best used when truncated to reflect the precision of Planck’s constant, which is the starting point for physicists who hope to define what some of the properties of pixels might be, assuming of course that they exist and make up the fabric of the cosmos.
In practice, pixelization would mean that no one needs numbers longer than forty-five or so decimal places to describe at least the one-dimensional properties of the subatomic world. According to theory, quantum stuff measured by a number like ZERO might oscillate around certain very small values at the fortieth decimal place or so in each of the three dimensions of physical space. A number ZERO which contained a digit in the 40th decimal place might even flip between negative and positive values in a random way.
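For readers who want to see what a forty-five-place truncation looks like in practice, here is a sketch using Python's standard decimal module and Machin's formula; the choice of formula and the 45-place cutoff are illustrative, not anything the pixel theorists prescribe:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

getcontext().prec = 60  # guard digits well beyond the 45 places we keep
EPS = Decimal(10) ** -58

def arctan_recip(x):
    """arctan(1/x) by its Taylor series; converges quickly for integer x > 1."""
    power = Decimal(1) / x
    total = power
    x2 = x * x
    n, sign = 1, 1
    while abs(power) > EPS:
        power /= x2
        n += 2
        sign = -sign
        total += sign * power / n
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi = 16 * arctan_recip(Decimal(5)) - 4 * arctan_recip(Decimal(239))

# Truncate (not round) to 45 decimal places, as the afterthought suggests
pi_45 = pi.quantize(Decimal(10) ** -45, rounding=ROUND_DOWN)
print(pi_45)
```

Everything past the cutoff is simply discarded, which is the point: the truncated number is "wrong" in exactly the harmless way the essay argues all measured quantities must be.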
The implications are profound, transcending even quantum physics. Anyone who doesn’t believe it should read the Billy Lee Conjecture in the essay Conscious Life.
One last point: quantum theory contains the concept of superposition, which suggests that an elementary particle is everywhere until after it is measured. This phenomenon — yes, it’s non-intuitive — adds weight to the point of view that space is not only not empty when we look; it’s also not empty when we don’t look.
Billy Lee
Comment by the Editorial Board:
Maybe a little story can help readers understand better what the heck Billy Lee is writing about. So here goes:
A child at night hears a noise in her toy-box and imagines a ghost. She cries out and her parents rush in. They assure her. There are no ghosts.
Later, alone in her room, the child hears another sound, this time in the closet. Her throbbing heart suggests that her parents must be lying.
Until she turns on the light and peeks into her closet, she can’t know for sure.
Then again, maybe ghosts fly away when the lights are on, she reasons.
In this essay, Billy Lee is trying to reassure his readers that there is no such thing as nothing. It’s not real.
Where is the evidence? Or does nothing disappear when we look at it?
Maybe ghosts really do fly away when we turn on the lights.
A mystery lies at the heart of quantum physics. At the tiniest scales, when a packet of energy (called a quantum) is released during an experiment, the wave packet seems to occupy all space at once. Only when a sensor interacts with it does it take on the behavior of a particle.
Its location can be anywhere, but the odds of finding it at any particular location are random within certain rules of quantum probabilities.
One way to think about this concept is to imagine a quantum “particle” released from an emitter in the same way a child might emit her bubble-gum by blowing a bubble. The quantum bubble expands to fill all space until it touches a sensor, where it pops to reveal its secrets. The “pop” registers a particle with identifiable states at the sensor.
Scientists don’t detect the particle until its bubble pops. The bubble is invisible, of course. In fact, it is imaginary. Experimenters guess where the phantom bubble will discharge by applying rules of probability.
This pattern of thinking, helpful in some ways, is probably profoundly wrong in others. The consensus among physicists I follow is that no model can be imagined that won’t break down.
Scientists say that evidence seems to suggest that subatomic particles don’t exist as particles with identifiable states or characteristics until they are brought into existence by measurements. One way to make a measurement is for a conscious experimenter to make one.
The mystery is this: if the smallest objects of the material world don’t exist as identifiable particles until after an observer interacts in some way to create them, how is it that all conscious humans see the same Universe? How is it that people agree on what some call an “objective” reality?
Quantum probabilities should construct for anyone who is interacting with the Universe a unique configuration — an individual reality — built-up by the probabilities of the particular way the person interfaces with whatever they are measuring. But this uniqueness is not what we observe. Everyone sees the same thing.
John von Neumann was the theoretical physicist and mathematician who developed the mathematics of quantum mechanics. He advanced the knowledge of humankind by leaps and bounds in many subjects until his death in 1957 from a cancer he may have acquired while monitoring atomic tests at Bikini Atoll.
“Johnny” von Neumann had much to say about the quantum mystery. A few of his ideas and those of his contemporary, Erwin Schrödinger, will follow after a few paragraphs.
As for von Neumann, he was a bona fide genius — a polymath with a photographic memory — who memorized entire books, like Goethe’s Faust, which he recited on his deathbed to his brother.

Von Neumann was fluent in Latin and ancient Greek as well as modern languages. By the age of eight, he had acquired a working knowledge of differential and integral calculus. A genius among geniuses, he grew up to become a member of the A-team that created the atomic bomb at Los Alamos.
He died under the watchful eyes of a military guard at Walter Reed Hospital, because the government feared he might spill vital secrets while sedated. He was that important. The article in Wikipedia about his life is well worth the read.
Von Neumann developed a theory about the quantum process which I won’t go into very deeply, because it’s too technical for a blog on the Pontificator, and I’m not an expert anyway. [Click on links in this article to learn more.] But other scientists have said his theory required something like the phenomenon of consciousness to work right.
The potential existence of the particles which make up our material reality was just that — a potential existence — until there occurred what Von Neumann called, Process I interventions. Process II events (the interplay of wave-like fields and forces within the chaotic fabric of a putative empty space) could not, by themselves, bring forth the material world. Von Neumann did hypothesize a third process, sometimes called the Dirac choice, to allow nature to perform like Process I interventions in the apparent absence of conscious observers.
Von Neumann developed, as we said, the mathematics of quantum mechanics. No experiment has ever found violations of his formulas. Erwin Schrödinger, a contemporary of von Neumann who worked out the quantum wave-equation, felt confounded by von Neumann’s work and his own. He proposed that, for quantum mechanics to make sense and be logically consistent, consciousness might be required to have an existence independent of human brains, or any other brains for that matter. He believed, as von Neumann may have, that consciousness could be a fundamental property of the Universe.
The Universe could not come into being without a von Neumann Process I or III operator which, in Schrödinger’s view, every conscious life-form plugs into, much like we today plug a television into a cable outlet to view video. This shared consciousness, he reasoned, is why everyone sees the same material Universe.
Billy Lee
Post Script: Billy Lee has written several articles on this subject. Conscious Life and Bell’s Inequality are good reads and contain links to videos and articles. Sensing the Universe is another. Billy Lee sometimes adds to his essays as more information becomes available. Check back from time to time to learn more. The Editorial Board
UPDATE: 18 December 2022: On 4 October 2022 the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics to:

Alain Aspect, Institut d’Optique Graduate School, Université Paris-Saclay and École Polytechnique, Palaiseau, France

John F. Clauser, J.F. Clauser & Assoc., Walnut Creek, CA, USA

Anton Zeilinger, University of Vienna, Austria

“for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”
UPDATE: September 5, 2019: I stumbled across this research published in NATURE during December 2011, where scientists reported entanglement of vibrational patterns in separated diamond crystals large enough to be viewed without magnification. Nature doi:10.1038/nature.2011.9532
UPDATE: May 8, 2018: This video from PBS Digital Studios is the best yet. Click the PBS link to view the latest experimental results involving quantum mechanics, entanglement, and their non-intuitive mysteries. The video is a little advanced and fast paced; beginners might want to start with this link.
UPDATE: February 4, 2016: Here is a link to the August 2015 article in Nature, which makes the claim that the last testable loophole in Bell’s Theorem has been closed by experiments conducted by Dutch scientists. Conclusion: quantum entanglement is real.
UPDATE: Nov. 14, 2014: David Kaiser proposed an experiment to determine Is Quantum Entanglement Real? Click the link to redirect to the Sunday Review, New York Times article. It’s a non-technical explanation of some of the science related to Bell’s Theorem.
John Stewart Bell‘s Theorem of 1964 followed naturally from the proof of an inequality he fashioned (now named after him), which showed that quantum particle behavior violates limits that any local, common-sense theory of particles must obey.
It is the most profound discovery in all science, ever, according to Henry Stapp—retired from Lawrence Berkeley National Laboratory and former associate of Wolfgang Pauli and Werner Heisenberg. Other physicists like Richard Feynman said Bell simply stated the obvious.
Here is an analogy I hope gives some idea of what is observed in quantum experiments that violate Bell’s Inequality: Imagine two black tennis balls—let them represent atomic particles like electrons or photons or molecules as big as buckyballs.
The tennis balls are created in such a way that they become entangled—they share properties and destinies. They share identical color and shape. [Entangled particles called fermions display opposite properties, as required by the Pauli exclusion principle.]
Imagine that whatever one tennis ball does, so does the other; whatever happens to one tennis ball happens to the other, instantly it turns out. The two tennis balls (the quantum particles) are entangled.
[For now, don’t worry about how particles get entangled in nature or how scientists produce them. Entanglement is pervasive in nature and easily performed in labs.]
According to optical and quantum experimentalist Mark John Fernee of Queensland, Australia, “Entanglement is ubiquitous. In fact, it’s the primary problem with quantum computers. The natural tendency of a qubit in a quantum computer is to entangle with the environment. Unwanted entanglement represents information loss, or decoherence. Everything naturally becomes entangled. The goal of various quantum technologies is to isolate entangled states and control their evolution, rather than let them do their own thing.”
In nature, all atoms that have electron shells with more than one electron have entangled electrons. Entangled atomic particles are now thought to play important roles in many previously not understood biological processes like photosynthesis, cell enzyme metabolism, animal migration, metamorphosis, and olfactory sensing. There are several ways to entangle more than a half-dozen atomic particles in experiments.
Imagine particles shot like tennis balls from cannons in opposite directions. Any measurement (or disturbance) made on a ball going left will have the same effect on an entangled ball traveling to the right.
So, if a test on a left-side ball allows it to pass through a color-detector, then its entangled twin can be thought to have passed through a color-detector on the right with the same result. If a ball on the left goes through the color-detector, then so will the entangled ball on the right, whether or not the color test is performed on it. If the ball on the left doesn’t go through, then neither did the ball on the right. It’s what it means to be entangled.
Now imagine that cannons shoot thousands of pairs of entangled tennis balls in opposite directions, to the left and right. The black detector on the left is calibrated to pass half of the black balls. When looking for tennis balls coming through, observers always see black balls but only the half that get through.
Spin describes a particle property of quantum objects like electrons, in the same way color or roundness describe tennis balls. The property is confusing, because no one believes electrons (or any other quantum objects) actually spin. The math of spin is underpinned by the complex mathematics of spinors, which transform spin arrows into multi-dimensional objects not easy to visualize or illustrate. Look for an explanation of how spin is observed in the laboratory later in the essay. Click links for more insight.
Now, imagine performing a test for roundness on the balls shot to the right. The test is performed after the black test on the left, but before any signal or light has time to travel to the balls on the right. The balls going right don’t (and can’t) learn what the detector on the left observed. The roundness-detector is set to allow three-fourths of all round tennis balls through.
When round balls on the right are counted, only three-eighths of them pass through the roundness-detector, not three-fourths. Folks might speculate that the roundness-detector is acting on only the half of the balls that passed through the color-detector on the left. And they would be right.
These balls share the same destinies, right? Apparently, the balls on the right learned instantly which of their entangled twins the color-detector on the left allowed to pass through, despite all efforts to prevent it.
So now do the math. One-half (the fraction of the black balls that passed through the left-side color-detector) multiplied by three-fourths (the fraction calibrated to pass through the right-side roundness-detector) equals three-eighths. That’s what is seen on the right — three-eighths of the round, black tennis balls pass through the right-side roundness-detector during this fictionalized and simplified experiment.
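The counting in the analogy can be checked with a toy simulation. This is only a sketch of the arithmetic above (independent random draws standing in for the detectors), not a model of real quantum statistics:

```python
import random

random.seed(42)
N = 100_000  # number of entangled pairs fired from the cannons

passed_right = 0
for _ in range(N):
    # Left-side color-detector is calibrated to pass half of the black balls.
    passed_color = random.random() < 0.5
    # Right-side roundness-detector is set to pass three-fourths of round
    # balls, but in this toy picture it only ends up passing balls whose
    # entangled twin made it through the color-detector on the left.
    passed_round = passed_color and (random.random() < 0.75)
    if passed_round:
        passed_right += 1

print(passed_right / N)  # close to 0.375, i.e. three-eighths
```

One-half times three-fourths is three-eighths, and the simulated fraction lands there, matching the count described in the fictionalized experiment.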
According to Bell’s Inequality, twice as many balls should pass through the right-side detector (three-fourths instead of three-eighths). Under the rules of classical physics (which includes relativity), communication between particles cannot exceed the speed of light.
There is no way the balls on the right can know if their entangled twins made it through the color detector on the left. The experiment is set up so that the right-side balls do not have time to receive a signal from the left-side. The same limitation applies to the detectors.
The question scientists have asked is: how can these balls (quantum particles) — separated by large distances — know and react instantaneously to what is happening to their entangled twins? What about the speed limit of light? Instantaneous exchange of information is not possible, according to Einstein.
The French quantum physicist, Alain Aspect, suggested his way of thinking about it in the science journal, Nature (March 19, 1999).
He wrote: The experimental violation of Bell’s inequalities confirms that a pair of entangled photons separated by hundreds of meters must be considered a single non-separable object — it is impossible to assign local physical reality to each photon.
Of course, the single non-separable object can’t have a length of hundreds of meters, either. It must have zero length for instantaneous communication between its endpoints. But it is well established by the distant separation of detectors in experiments done in labs around the world that the length of this non-separable quantum object can be arbitrarily long; it can span the universe.
When calculating experimental results, it’s as if a dimension (in this case, distance or length) has gone missing. It’s eerily similar to the holographic effect of a black hole where the three-dimensional information that lives inside the event-horizon is carried on its two-dimensional surface. (See the technical comment included at the end of the essay.)
Another way physicists have wrestled with the violations of Bell’s Inequality is by postulating the concept of superposition. Superposition is a concept that flows naturally from the linear algebra used to do the calculations, which suggests that quantum particles exist in all their possible states and locations at the same time until they are measured.
Measurement forces wave-particles to “collapse” into one particular state, like a definite position. But some physicists, like Roger Penrose, have asked: how do all the super-positioned particles and states that weren’t measured know instantaneously to disappear?
Superposition, a fundamental principle of quantum mechanics, has become yet another topic physicists puzzle over. They agree on the math of superposition and the wave-particle collapse during measurement but don’t agree on what a measurement is or the nature of the underlying reality. Many, like Richard Feynman, believe the underlying reality is probably unknowable.
Quantum behavior is non-intuitive and mysterious. It violates the traditional ideas of what makes sense. As soon as certainty is established for one measurement, other measurements, made earlier, become uncertain.
It’s like a game of whack-a-mole. The location of the mole whacked with a mallet becomes certain as soon as it is struck, but the other moles scurry away only to pop up and down in random holes so fast that no one is sure where or when they really are.
Physicists have yet to explain the many quantum phenomena encountered in their labs except to throw up their hands and say, paraphrasing Feynman, it is the way it is, and the way it is, well, the experiments make it obvious.
But it’s not obvious, at least not to me and, apparently, many others more knowledgeable than myself. Violations of Bell’s Inequality confound people’s understanding of quantum mechanics and the world in which it lives. A consequence has been that at least a few scientists seem ready to believe that one, perhaps two, or maybe all four, of the following statements are false:
1) logic is reliable and enables clear thinking about all physical phenomena;
2) no signal or influence can travel faster than light;
3) particles possess definite properties before anyone measures them;
4) a model can be imagined to explain quantum phenomena.
I feel wonder whenever the idea sinks into my mind that at least one of these four seemingly self-evident and presumably true statements could be false — possibly all four — because repeated quantum experiments suggest they must be. Why isn’t more said about it on TV and radio?
The reason could be that the terrain of quantum physics is unfamiliar territory for a lot of folks. Unless one is a graduate student in physics — well, many scientists don’t think non-physicists can even grasp the concepts. They might be right.
So, a lot is being said, all right, but it’s being said behind the closed doors of physics labs around the world. It is being written about in opaque professional journals with expensive subscription fees.
The subtleties of quantum theory don’t seem to suit the aesthetics of contemporary public media, so little information gets shared with ordinary people. Despite the efforts of enthusiastic scientists — like Brian Cox, Sean M. Carroll, Neil deGrasse Tyson and Brian Greene — to serve up tasty, digestible, bite-size chunks of quantum mechanics to the public, viewer ratings sometimes fall flat.
When physicists say something strange is happening in quantum experiments that can’t be explained by traditional methods, doesn’t it deserve people’s attention? Doesn’t everyone want to try to understand what is going on and strive for insights? I’m not a physicist and never will be, but I want to know.
Even I, a mere science hobbyist who designed machinery back in the day, want to know. I want to understand. What is it that will make sense of the universe and the quantum realm in which it rests? It seems, sometimes, that a satisfying answer is always just outside my grasp.
Here is a concise statement of Bell’s Theorem from the article in Wikipedia — modified to make it easier to understand: No physical theory about the nature of quantum particles which ignores instantaneous action-at-a-distance can ever reproduce all the predictions about quantum behavior discovered in experiments.
To understand the experiments that led to the unsettling knowledge that quantum mechanics — as useful and predictive as it is — does indeed violate Bell’s proven Inequality, it is helpful not only to have a solid background in mathematics but also to understand ideas involving the polarization of light and — when applied to quantum objects like electrons and other sub-atomic particles — the idea of spin. Taken together, these concepts are somewhat analogous to the properties of color and roundness in the imaginary experiment described above.
This essay is probably not the best place to explain wave polarization and particle spin, because the explanation takes up space, and I don’t understand the concepts all that well, anyway. (No one does.)
But, basically, it’s like this: if a beam of electrons, for example, is split into two and then recombined on a display screen, an interference pattern presents itself. If one of the beams was first passed through a polarizer, and if experimenters then rotate the polarizer a full turn (that is, 360°), the interference pattern on the screen will reverse itself. If the polarizer-filter is rotated another full turn, the interference pattern will reverse again to what it was at the start of the experiment.
So, it takes two full turns of the polarizer-filter to get back the original interference pattern on the display screen, which means the electrons themselves must have an intrinsic "one-half" spin. All so-called matter particles like electrons, protons, and neutrons (called fermions) have one-half spin.
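The sign-flip bookkeeping behind one-half spin can be sketched numerically. A standard result of quantum mechanics is that rotating a spin-1/2 state by an angle θ multiplies it by the phase e^(−iθ/2); the short check below (an illustration, not a derivation) shows that one full turn flips the sign of the state, reversing the interference pattern, and a second full turn restores it:

```python
import cmath

def spinor_phase(theta):
    """Phase acquired by a spin-1/2 state under a rotation by theta radians."""
    return cmath.exp(-1j * theta / 2)

one_turn = spinor_phase(2 * cmath.pi)   # 360 degrees: phase is -1, pattern reverses
two_turns = spinor_phase(4 * cmath.pi)  # 720 degrees: phase is +1, pattern restored
print(one_turn, two_turns)
```

An ordinary vector returns to itself after one 360° turn; a spinor needs two, which is exactly the double-turn behavior seen on the display screen.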
Yes, it’s weird. Anyway, people can read up on the latest ideas by clicking this link. It’s fun. For people familiar with QM (quantum mechanics), a technical note is included in the comments section below.
Otherwise, my analogy is useful enough, probably. In actual experiments, physicists measure more than two properties, I’m told. Most common are angular momentum vectors, which are called spin orientations. Think of these properties as color, shape, and hardness to make them seem more familiar — as long as no one forgets that each quality is binary; color is white or black; shape is round or square; hardness is soft or hard.
Spin orientations are binary too — the vectors point in one of two possible directions. It should be remembered that each entangled particle in a pair of fermions always has at least one property that measures opposite to that of its entangled partner.
The earlier analogy might be improved by imagining pairs of entangled tennis balls where one ball is black, the other white; one is round, the other square; add a third quality where one ball is hard, the other soft. Most important, the shape and color and hardness of the balls are imparted by the detectors themselves during measurement, not before.
Before measurement, concepts like color or shape (or spin or polarity) can have no meaning; the balls carry every possible color and shape (and hardness) but don’t take on and display any of these qualities until a measurement is made. Experimental verification of these realities keeps some quantum physicists awake at night wondering, they say.
Anyway, my earlier, simpler analogy gets the main ideas across, hopefully. And a couple of the nuances of entanglement can be found within it. I’ve added an easy to understand description of Bell’s Inequality and what it means to the end of the essay.
In the meantime, scientists at the Austrian Academy of Sciences in Vienna recently demonstrated that entanglement can be used as a tool to photograph delicate objects that would otherwise be disturbed or damaged by high energy photons (light). They entangled photons of different energies (different colors).
They took photographs of objects using low energy photons but sent their higher energy entangled twins to the camera where their higher energies enabled them to be recorded. New technologies involving the strange behavior of quantum particles are in development and promise to transform the world in coming decades.
Perhaps entanglement will provide a path to faster-than-light communication, which is necessary to signal distant space-craft in real time. Most scientists say, no, it can’t be done, but ways to engineer around the difficulties are likely to be developed; technology may soon become available to create an illusion of instantaneous communication that is actually useful. Click on the link in this paragraph to learn more.
Non-scientists don’t have to know everything about the individual trees to know they are walking in a quantum forest. One reason for writing this essay is to encourage people to think and wonder about the forest and what it means to live in and experience it.
The truth is, the trees (particles at atomic scales) in the quantum forest seem to violate some of the rules of the forest (classical physics). They have a spooky quality, as Einstein famously put it.
Trees that aren’t there when no one is looking suddenly appear when someone is looking. Trees growing in one place seem to be growing in other places no one expected. A tree blows one way in the wind, and someone notices a tree at the other end of the forest — where there is no wind — blowing in the opposite direction. As of right now, no one has offered an explanation that doesn’t seem to lead to paradoxes and contradictions when examined by specialists.
John Stewart Bell proved that the trees in the quantum forest violate the laws of nature and logic. It makes me wonder whether anyone will ever know anything at all that they can fully trust about the fundamental, underlying essence of reality.
Some scientists, like Henry Stapp (now retired), have proposed that brains enable processes like choice and experiences like consciousness through the mechanism of quantum interactions. Stuart Hameroff and Roger Penrose have proposed a quantum mechanism for consciousness they call Orch OR.
Others, like Wolfgang Pauli and C. G. Jung, have gone further — asking, when they were alive, if the non-causal coordination of some process resembling what is today called entanglement might provide an explanation for the seeming synchronicity of some psychic processes — an arena of inquiry a few governments are rumored to have already incorporated (to great effect) into their intelligence gathering tool kits.
In a future essay I hope to speculate about how quantum processes like entanglement might or might not influence human thought, intuition, and consciousness.
Billy Lee
P.S. A simplified version of Bell’s Inequality might say that for things described by traits A, B, and C, it is always true that the number that are A but not B, plus the number that are B but not C, is greater than or equal to the number that are A but not C.
When applied to a room full of people, the inequality might read as follows: tall, not male; plus male, not blonde; is greater than or equal to: tall, not blonde.
Said more simply: tall females plus dark-haired men will always number more than or equal to the number of tall people with dark hair.
People have tried every collection of traits and quantities imaginable. The inequality is always true, never false; except for quantum objects.
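For ordinary classical objects, the inequality can be verified by brute force. Counts add up over a population, so it is enough to check each of the eight possible trait combinations for a single person; the sketch below uses the tall/male/blonde traits from the example above:

```python
from itertools import product

def bell_counts(population):
    """population: list of (tall, male, blonde) boolean triples.

    Returns (lhs, rhs) of the inequality
    N(tall, not male) + N(male, not blonde) >= N(tall, not blonde).
    """
    tall_not_male   = sum(1 for t, m, b in population if t and not m)
    male_not_blonde = sum(1 for t, m, b in population if m and not b)
    tall_not_blonde = sum(1 for t, m, b in population if t and not b)
    return tall_not_male + male_not_blonde, tall_not_blonde

# Check every one-person population: the inequality holds for each of the
# eight trait combinations, and since counts are additive, it therefore
# holds for any classical population whatsoever.
for person in product([True, False], repeat=3):
    lhs, rhs = bell_counts([person])
    assert lhs >= rhs

print("inequality holds for all eight trait combinations")
```

The check works because a tall, dark-haired person is either female (counted as tall-not-male) or male (counted as male-not-blonde), so every person counted on the right side is also counted on the left. Quantum particles evade this accounting because the three traits never have simultaneous definite values.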
One way to think about it: all the "not" quantities are, in some sense, uncertain in quantum experiments, which wrecks the inequality. That is to say, as soon as "A" is measured (for example), "not B" becomes uncertain. When "not B" is measured, "A" becomes uncertain.
The introduction of uncertainties into quantities that were, before measurement, seemingly fixed and certain doesn’t occur in non-quantum collections, where individual objects are big enough that the uncertainties go unnoticed. The inability to measure both the position and velocity of small things with high precision is called the uncertainty principle and is fundamental to physics. No advancement in the technology of measurement will ever overcome it.
Uncertainty is believed to be an underlying reality of nature. It runs counter to the desire humans have for complete and certain knowledge; it is a thirst that can never be quenched.
But what’s really strange: when working with entangled particles, certainty about one particle implies certainty about its entangled twin; predicted experimental results are precise and never fail.
Stranger still, once entangled quantum particles are measured, the results, though certain, change from those expected by classical theory to those predicted by quantum mechanics. They violate Bell’s Inequality and the common sense of humans about how things should work.
Worse: Bell’s Theorem seems to imply that no one will ever be able to construct a physical model of quantum mechanics to explain the results of quantum experiments. No "hidden variables" exist which, if anyone knew them, would explain everything.
Another way to say it is this: the underlying reality of quantum mechanics is unknowable. [A technical comment about the mystery of QM is included in the comments section.]