I have a lot to say about renormalization; if I wait until I’ve read everything I need to know about it, my essay will never be written; I’ll die first; there isn’t enough time.
Click this link and the one above to read what some experts argue is the why and how of renormalization. Do it after reading my essay, though.
There’s a problem inside the science of science; there always has been. Facts don’t match the mathematics of theories people invent to explain them. Math seems to remove important ambiguities that underlie all reality.
People noticed the problem as soon as they started doing science. The ratio of a circle’s circumference to its diameter was never certain; not when Pythagoras studied it 2,500 years ago, and not now; the number π is the problem; it’s irrational, not a fraction; it’s a number with no end and no pattern — 3.14159… forever into infinity.
More confounding, π transcends all attempts by algebra to compute it. It is a transcendental number that lies at the crossroads of mathematics and physical reality — a mysterious number at the heart of creation, because without it the circumferences, surface areas, and volumes of circles and spheres could not be calculated to arbitrary precision.
The diameter of a circle must be multiplied by π to calculate its circumference; divide the circumference by π to recover the diameter. No one can ever know everything about a circle, because the number π is uncertain, undecidable, and in truth unknowable.
Long ago people learned to use the fraction 22/7 or, for more accuracy, 355/113. These fractions give slightly wrong values for π, but they were easy to work with and close enough for engineering problems.
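Anyone with Python handy can see just how close (and how wrong) the old fractions are; a minimal check:

```python
import math

# Compare the two ancient fractions with the true value of pi
for num, den in [(22, 7), (355, 113)]:
    approx = num / den
    print(f"{num}/{den} = {approx:.10f}   error: {abs(approx - math.pi):.10f}")

# 22/7    = 3.1428571429   error: 0.0012644893
# 355/113 = 3.1415929204   error: 0.0000002668
```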
Fast forward to Isaac Newton, the English astronomer and mathematician, who studied the motion of the planets. Newton published Philosophiæ Naturalis Principia Mathematica in 1687. I have a modern copy in my library. It’s filled with formulas and derivations. Not one of them works to explain the real world — not one.
Newton’s equation for gravity describes the interaction between two objects — the strength of attraction between Sun and Earth, for example, and the resulting motion of Earth. The problem is the Moon and Mars and Venus, and many other bodies, warp the space-time waters in the pool where Earth and Sun swim. No way exists to write a formula to determine the future of such a system.
In 1887 Heinrich Bruns — and soon after, Henri Poincaré — proved that such formulas cannot be written. The three-body problem (or any N-body problem, for that matter) cannot be solved by a single equation. Fudge-factors must be introduced by hand, Richard Feynman once complained. Powerful computers combined with numerical methods seem to work well enough for some problems.
Perturbation theory was proposed and developed. It helped a lot. Space exploration depends on it. It’s not perfect, though. Sometimes another fudge factor called rectification is needed to update changes as a system evolves. When NASA lands probes on Mars, no one knows exactly where the crafts are located on its surface relative to any reference point on the Earth.
Science uses perturbation methods in quantum mechanics and astronomy to describe the motions of both the very small and the very large. A general method of perturbations can be described in mathematics.
Even when using the signals from constellations of six or more satellite-navigation systems (GPS and its counterparts) deployed in high Earth orbit by various countries, it’s not possible to know exactly where anything is. Beet farmers out west combine the systems of at least two countries to hone the courses of their tractors and plows.
On a good day farmers can locate a row of beets to within an eighth of an inch. That’s plenty good, but the several GPS systems they depend on are fragile and cost billions per year. In beet farming, an eighth inch isn’t perfect, but it’s close enough.
Quantum physics is another frontier of knowledge that presents roadblocks to precision. Physicists have invented more excuses for why they can’t get anything exactly right than probably any other group of scientists. Quantum physics is about a hundred years old, but today the problems seem more insurmountable than ever.
Insurmountable?
Why?
Well, the interaction of sub-atomic particles with themselves combined with, I don’t know, their interactions with swarms of virtual particles might disrupt the expected correlations between theories and experimental results. The mismatches can be spectacular. They sometimes dwarf the N-body problems of astronomy.
Worse — there is the problem of scales. For one thing, electrical forces are a billion times a billion times a billion times a billion times stronger than gravitational forces at sub-atomic scales. Forces appear to manifest themselves according to the distances across which they interact. It’s odd.
Measuring the charge on electrons produces different results depending on their energy. High energy electrons interact strongly; low energy electrons, not so much. So again, how can experimental results lead to theories that are both accurate and predictive? Divergent amplitudes that lead to infinities aren’t helpful.
An infinity of scales pile up to produce troublesome infinities in the math, which tend to erode the predictive usefulness of formulas and diagrams. Once again, researchers are forced to fabricate fudge-factors. Renormalization is the buzzword for several popular methods.
Probably the best-known renormalization technique was described by Sin-Itiro Tomonaga in his 1965 Nobel Prize speech. According to the view of retired Harvard physicist Rodney Brooks, Tomonaga implied that “…replacing the calculated values of mass and charge, infinite though they may be, with the experimental values…” is the adjustment necessary to make things right, at least sometimes.
Isn’t such an approach akin to cheating? — at least to working theorists worth their salt? Well, maybe… but as far as I know results are all that matter. Truncation and faulty data mean that math can never match well with physical reality, anyway.
Folks who developed the theory of quantum electrodynamics (QED) used perturbation methods to bootstrap their ideas to useful explanations. Their work produced annoying infinities until they introduced creative renormalization techniques to chase them away.
At first physicists felt uncomfortable discarding the infinities that showed up in their equations; they hated introducing fudge-factors. Maybe they felt they were smearing theories with experimental results that weren’t necessarily accurate. Some may have thought that a poor match between math, theory, and experimental results meant something bad; they didn’t understand the hidden truth they struggled to lay bare.
Philosopher Robert Pirsig believed the number of possible explanations scientists could invent for phenomena were in fact unlimited. Despite all the math and convolutions of math, Pirsig believed something mysterious and intangible like quality or morality guided human understanding of the Cosmos. An infinity of notions he saw floating inside his mind drove him insane, at least in the years before he wrote his classic Zen and the Art of Motorcycle Maintenance.
The newest generation of scientists aren’t embarrassed by anomalies. They “shut up and calculate.” Digital somersaults executed to validate their work are impossible for average people to understand, much less perform. Researchers determine scales, introduce “cut-offs”, and extract the appropriate physics to make suitable matches of their math with experimental results. They put the cart before the horse more times than not, some observers might say.
Apologists say, no. Renormalization is simply a reshuffling of parameters in a theory to prevent its failure. As one popular YouTube explainer puts it, renormalization doesn’t sweep infinities under the rug; it is a set of techniques scientists use to make useful predictions in the face of divergences, infinities, and blowups of scale which might otherwise wreck progress in quantum physics, condensed matter physics, and even statistics.
It’s not always wise to question smart folks, but renormalization seems a bit desperate, at least to my way of thinking. Is there a better way?
The complexity of the language scientists use to understand and explain the world of the very small is a convincing clue that they could be missing pieces of puzzles — puzzles that might not be solvable by humans, regardless of how much IQ any petri-dish of gametes might deliver to the brains of future scientists.
It’s possible that humans, who use language and mathematics to ponder and explain, are not properly hardwired to model complexities of the universe. Folks lack brainpower enough to create algorithms for ultimate understanding.
Perhaps Elon Musk’s Neuralink add-ons will help someday.
The smartest thinkers — people like Nick Bostrom and Pedro Domingos (who wrote The Master Algorithm) — suggest artificial super-intelligence might be developed and hardwired with hundreds or thousands of levels — each loaded with trillions of parallel links — to digest all meta-data, books, videos, and internet information (a complete library of human knowledge) to train armies of computers to discover paths to knowledge unreachable by puny humanoid intelligence.
Super-intelligent computer systems might achieve understanding in days or weeks that all humans working together over millennia might never acquire. The risk of course is that such intelligence, when unleashed, might enslave us all.
Another downside might involve communication between humans and machines. Think of a father — a math professor — teaching calculus to the family cat. It’s hopeless, right?
Imagine an expert in AI and quantum computation joining forces with billionaire Musk, who possesses the rocket-launching power of a country. Right now the two aren’t getting along, Elon said. They don’t speak. It could be a good thing, right?
What are the consequences?
Entrepreneurs don’t like to be regulated. Temptations unleashed by unregulated military power and AI-attained science secrets falling into the hands of two men — nice men, like Elon and Larry appear to be — might push humanity in time toward unmitigated… what’s the word I’m looking for?
I heard Elon say he doesn’t like regulation, but that he wants to be regulated. He believes super-intelligence will be civilization-ending. He’s planning to put a colony on Mars to escape its power and ensure human survival.
Is Elon saying he doesn’t trust himself, that he doesn’t trust people he knows like Larry? Are these guys demanding governments save Earth from themselves?
I haven’t heard Larry ask for anything like that. He keeps a low profile. God bless him as he collects everything everyone says and does in cyber-space.
Think about it.
Think about what it means.
We have maybe ten years, tops; maybe less. Maybe it’s ten days. Maybe the worst has already happened, but no one said anything. Somebody, think of something — fast.
Who imagined that laissez-faire capitalism might someday spawn an airtight autocracy that enslaves the world?
Humans are wise to renormalize their aspirations — their civilizations — before infinities of misery wreck Earth and unfree futures emerge that no one wants.
Many smart physicists wonder about it; some obsess over it; a few have gone mad. Physicists like the late Richard Feynman said that it’s not something any human can or will ever understand; it’s a rabbit-hole that quantum physicists must stand beside and peer into to do their work; but for heaven’s sake don’t rappel into its depths. No one who does has ever returned and talked sense about it.
I’m a Pontificator, not a scientist. I hope I don’t start to regret writing this essay. I hope I don’t make an ass of myself as I dare to go where angels fear to tread.
My plan is to explain a mystery of existence that can’t be explained — even to people who have math skills, which I am certain most of my readers don’t. Lack of skills should not trouble anyone, because if anyone has them, they won’t understand my explanation anyway.
My destiny is failure. I don’t care. My promise, as always, is accuracy. If people point out errors, I fix them. I write to understand; to discover and learn.
My recommendation to readers is to take a dose of whatever medicine calms their nerves; to swallow whatever stimulant might ignite electrical fires in their brains; to inhale, if necessary, doctor-prescribed drugs to amplify conscious experience and broaden their view of the cosmos. Take a trip with me; let me guide you. When we’re done, you will know nothing about the fine-structure constant except its value and a few ways curious people think about it.
Oh yes, we’re going to rappel into the depths of the rabbit-hole, I most certainly assure you, but we’ll descend into the abyss together. When we get lost (and we most certainly will) — should we fall into despair and abandon our will to fight our way back — we’ll have a good laugh; we’ll cry; we’ll fall to our knees; we’ll become hysterics; we’ll roll on the soft grass we can feel but not see; we will weep the loud belly-laugh sobs of the hopelessly confused and completely insane — always together, whenever necessary.
Isn’t getting lost with a friend what makes life worth living? Everyone gets lost eventually; it’s better when we get lost together. Getting lost with someone who doesn’t give a care; who won’t even pretend to understand the simplest things about the deep, dark places that lie miles beyond our grasp; that lie beneath our feet; that lie, in some cases, just behind our eyeballs; it’s what living large is all about.
Isn’t it?
Well, for those who fear getting lost, what follows is a map to important rooms in the rather elaborate labyrinth of this essay. Click on subheadings to wander about in the caverns of knowledge wherever you will. Don’t blame me if you miss amazing stuff. Amazing is what hides within and between the rooms for anyone to discover who has the serenity to take their time, follow the spelunking Sherpa (me), and trust that he (me) will extricate them eventually — sane and unharmed.
Anyway, relax. Don’t be nervous. The fine-structure constant is simply a number — a pure number. It has no meaning. It stands for nothing — not inches or feet or speed or weight; not anything. What can be more harmless than a number that has no meaning?
Well, most physicists think it reveals, somehow, something fundamental and complicated going on in the inner workings of atoms — dynamics that will never be observed or confirmed, because they can’t be. The world inside an atom is impossibly small; no advance in technology will ever open that world to direct observation by humans.
What physicists can observe are the frequencies of light that enormous collections of atoms emit. They use prisms and spectrographs. What they see is structure in the light where none should be. They see gaps — very small gaps inside a single band of color, for example. They call it fine structure.
The Greek letter alpha (α) is the shortcut folks use for the fine-structure constant, so they don’t have to say a lot of words. The number is the square of another number that can have (and almost always does have) two or more parts — a complex number. Complex numbers have real and imaginary parts; math people say that complex numbers are usually two-dimensional; they must be drawn on a sheet of two-dimensional graph paper — not on a number line, like counting numbers always are.
Don’t let me turn this essay into a math lesson; please, …no. We can’t have readers projectile vomiting or rocking to the catatonic rhythms of a panic attack. We took our medicines, didn’t we? We’re going to be fine.
I beg readers to trust; to bear with me for a few sentences more. It will do no harm. It might do good. Besides, we can get through this, together.
Like me, you, dear reader, are going to experience power and euphoria, because when people summon courage; when they trust; when they lean on one another; when — like countless others — you put your full weight on me; I will carry you. You are about to experience truth, maybe for the first time in your life. Truth, the Ancient-of-Days once said, is that golden key that unlocks our prison of fears and sets us free.
Reality is going to change; minds will change; up is going to become down; first will become last and last first. Fear will turn into exhilaration; exhilaration into joy; joy into serenity; and serenity into power. But first, we must inner-tube our way down the foamy rapids of the next ten paragraphs. Thankfully, they are short paragraphs; yes, the journey is doable, peeps. I will guide you.
The number (3 + 4i) is a complex number. It’s two dimensional. Pick a point in the middle of a piece of graph paper and call it zero (0 + 0i). Find a pencil — hopefully one with a sharp point. Move the point 3 spaces to the right of zero; then move it up 4 spaces. Make a mark. That mark is the number (3 + 4i). Mathematicians say that the “i” next to the “4” means “imaginary.” Don’t believe it.
They didn’t know what they were talking about, when first they worked out the protocols of two-dimensional numbers. The little “i” means “up and down.” That’s all. When the little “i” isn’t there, it means side to side. What could be more simple?
Draw a line from zero (0 + 0i) to the point (3 + 4i). The point is three squares to the right and 4 squares up. Put an arrow head on the point. The line is now an arrow, which is called a vector. This particular vector measures 5 squares long (get out a ruler and measure, anyone who doesn’t believe).
The vector (arrow) makes an angle of 53° from the horizontal. Find a protractor in your child’s pencil-box and measure it, anyone who doubts. So the number can be written as (5∠53), which simply means it is a vector that is five squares long and 53° counter-clockwise from horizontal. It is the same number as (3 + 4i), which is 3 squares over and 4 squares up.
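For readers without a ruler or protractor, Python can do the measuring. A minimal sketch that converts (3 + 4i) into the length-and-angle form described above:

```python
import cmath, math

z = 3 + 4j                      # the number (3 + 4i); Python writes i as j
length, angle = cmath.polar(z)  # convert to (length, angle in radians)

print(length)                   # 5.0 -- five squares long
print(math.degrees(angle))      # 53.13... degrees counter-clockwise
```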
The vectors used in quantum mechanics are smaller; they are less than one unit long, because physicists draw them to compute probabilities. A probability of one is 100%; it is certainty. Nothing is certain in quantum physics; the chances of anything at all are always less than certainty; always less than one; always less than 100%.
Using simple rules, a vector that is less than one unit long can be used in the mathematics of quantum probabilities to shrink and rotate a second vector, which can shrink and rotate a third, and a fourth, and so on until the process of steps that make up a quantum event are completed. Lengths are multiplied; angles are added. The rules are that simple. The overall length of the resulting vector is called its amplitude.
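Here is the multiply-the-lengths, add-the-angles rule in action. The two amplitudes below are invented for illustration; they aren’t measured values:

```python
import cmath, math

a = cmath.rect(0.8, math.radians(30))  # length 0.8, angle 30 degrees
b = cmath.rect(0.5, math.radians(20))  # length 0.5, angle 20 degrees

length, angle = cmath.polar(a * b)
print(length)                # 0.4  -- the lengths multiplied (0.8 x 0.5)
print(math.degrees(angle))   # 50.0 -- the angles added (30 + 20)
```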
Yes, other operations can be performed with complex numbers; with vectors. They have interesting properties. Multiplying and dividing by the “imaginary” i rotates vectors by 90°, for example. Click on links to learn more. Or visit the Khan Academy web-site to watch short videos. It’s not necessary to know how everything works to stumble through this article.
The likelihood that an electron will emit or absorb a photon cannot be derived from the mathematics of quantum mechanics. Neither can the force of the interaction. Both must be determined by experiment, which has revealed that the magnitude of these amplitudes is a little under ten percent (.085424543…, to be more exact — about eight-and-a-half percent).
What is surprising about this result is that when physicists multiply the amplitudes with themselves (that is, when they “square the amplitudes“) they get a one-dimensional number (called a probability density), which, in the case of photons and electrons, is equal to alpha (α), the fine-structure constant, which is .007297352… or 1 divided by 137.036… .
Get out the calculator and multiply .085424543 by itself, anyone who doesn’t believe. Divide the number “1” by 137.036 to confirm.
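For the calculator-averse, the same check in Python:

```python
amplitude = 0.085424543      # the measured magnitude quoted above

print(amplitude ** 2)        # 0.0072973... -- the fine-structure constant
print(1 / 137.036)           # 0.0072973... -- the same number
```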
From the knowledge of the value of alpha (α) and other constants, the probabilities of the quantum world can be calculated; when combined with the knowledge of the vector angles, the position and momentum of electrons and photons, for example, can be described with magical accuracy — consistent with the well-known principle of uncertainty, of course, which readers can look up on Wikipedia, should they choose to get sidetracked, distracted, and hopelessly lost.
“Magical” is a good word, because these vectors aren’t real. They are made up — invented, really — designed to mimic mathematically the behavior of elementary particles studied by physicists in quantum experiments. No one knows why complex vector-math matches the experimental results so well, or even what the physical relationship of the vector-math might be (if any), which enables scientists to track and measure tiny bits of energy.
To be brutally honest, no one knows what the “tiny bits of energy” are, either. Tiny things like photons and electrons interact with measuring devices in the same ways the vector-math says they should. No one knows much more than that.
What is known is that the strong force of QCD is roughly 137 times stronger than the electromagnetic force of QED — inside the center of atoms. Multiply the strong force by (α) to get the EM force. No one knows why.
There used to be hundreds of tiny little things that behaved inexplicably during experiments. It wasn’t only tiny pieces of electricity and light. Physicists started running out of names to call them all. They decided that the mess was too complicated; they discovered that they could simplify the chaos by inventing some new rules; by imagining new particles that, according to the new rules, might never be observed; they named them quarks.
By assigning crazy attributes (like color-coded strong forces) to these quarks, they found a way to reduce the number of elementary particles to seventeen; these are the stuff of the so-called Standard Model. The model contains a collection of neutrinos and muons; and quarks and gluons; and thirteen other things — researchers made the list of subatomic particles shorter and a lot easier to organize and think about.
Some particles are heavy, some are not; some are force carriers; one — the Higgs — imparts mass to the rest. The irony is this: none are particles; they only seem to be because of the way we look at and measure whatever they really are. And the math is simpler when we treat the ethereal mist like a collection of particles instead of tiny bundles of vibrating momentum within an infinite continuum of no one knows what.
Physicists have developed protocols to describe them all; to predict their behavior. One thing they want to know is how forcefully and in which direction these fundamental particles move when they interact, because collisions between subatomic particles can reveal clues about their nature; about their personalities, if anyone wants to think about them that way.
The force and direction of these collisions can be quantified with complex (often three-dimensional) numbers, which give experimenters a measure of the particles’ interaction probabilities and forces and help theorists derive numbers to balance their equations. These balancing numbers are called coupling constants.
The fine-structure constant is one of a few such coupling constants. It is used to make predictions about what will happen when electrons and photons interact, among other things. Other coupling constants are associated with other unique particles, which have their own array of energies and interaction peculiarities; their own amplitudes and probability densities; their own values. One other example I will mention is the gravitational coupling constant.
To remove anthropological bias, physicists often set certain constants — the speed of light (c), the reduced Planck constant (ℏ), the elementary charge (e), and the Coulomb force constant (4πε₀) — equal to “one”. Sometimes the removal of human bias in the values of the constants can help to reveal relationships that might otherwise go unnoticed.
The coupling constants for gravity and fine-structure are two examples.
α_G = G·m_e²/(ℏ·c) for gravity;

α = e²/(4πε₀·ℏ·c) for fine-structure.
These relationships pop-out of the math when extraneous constants are simplified to unity.
Despite their differences, one thing turns out to be true for all coupling constants — and it’s kind of surprising. None can be derived or worked out using either the theory or the mathematics of quantum mechanics. All of them, including the fine-structure constant, must be discovered by painstaking experiments. Experiments are the only way to discover their values.
Here’s the mind-blowing part: once a coupling constant — like the fine-structure alpha (α) — is determined, everything else starts falling into place like the pieces of a puzzle.
The fine-structure constant, like most other coupling constants, is a number that makes no sense. It can’t be derived — not from theory, at least. It appears to be the magnitude of the square of an amplitude (which is a complex, multi-dimensional number), but the fine-structure constant is itself one-dimensional; it’s a unit-less number that seems to be irrational, like the number π.
For readers who don’t quite understand, let’s just say that irrational numbers are untidy; they are unwieldy; they don’t round-off; they seem to lack the precision we’ve come to expect from numbers like the gravity constant — which astronomers round off to four or five decimal places and apply to massive objects like planets with no discernible loss in accuracy. It’s amazing to grasp that no constant in nature, not even the gravity constant, seems to be a whole number or a fraction.
Based on what scientists think they know right now, every constant in nature is irrational. It has to be this way.
Musicians know that it is impossible to accurately tune a piano using whole numbers and fractions to set the frequencies of their strings. Setting minor thirds, major thirds, fourths, fifths, and octaves based on idealized, whole-number ratios like 3:2 (musicians call this interval a fifth) makes scales sound terrible the farther one goes from middle C up or down the keyboard.
No, in a properly tuned instrument the frequencies of adjacent notes differ by a factor of the twelfth root of 2, which is 1.059463094…. It’s an irrational number like “π” — it never ends; it can’t be written as a fraction; it isn’t a ratio of two whole numbers.
In the interval of a perfect fifth, for example, the G note vibrates very nearly 1.5 times faster than the C note that lies 7 half-steps (called semitones) below it. To calculate the true ratio, take the 12th root of two and raise it to the seventh power. It’s not exactly 1.5. It just isn’t.
Get out the calculator and try it, anyone who doesn’t believe.
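Or let Python be the calculator:

```python
ratio = 2 ** (7 / 12)   # seven semitones up: an equal-tempered fifth

print(ratio)            # 1.4983070769... -- close to 1.5, but not 1.5
print(3 / 2)            # 1.5 -- the idealized, whole-number fifth
```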
[Note from the Editorial Board: a musical fifth is often written as 3:2, which implies the fraction 3/2, which equals 1.5. Twelve half-notes make an octave; the starting note plus 7 half-steps make 8. Dividing these numbers by four makes 12:8 the same proportion as 3:2, right? The fraction 3/2 is a comparison of the vibrational frequencies (also of the nodes) of the strings themselves, not the number of half-tones in the interval.
However, when the first note is counted as one and flats and sharps are ignored, the five notes that remain starting with C and ending with G, for example, become the interval known as a perfect fifth. It kind of makes sense, until musicians go deeper; it gets a lot more complicated. It’s best to never let musicians do math or mathematicians do music. Anyone who does will create a mess of confusion, eight times out of twelve, if not more.]
An octave of 12 notes exactly doubles the vibrational frequency of a note like middle C, but every note in between middle C and the next higher octave is either a little flat or a little sharp. It doesn’t seem to bother anyone, and it makes playing in large groups with different instruments possible; it makes changing keys without everybody having to re-tune their instruments seem natural — it wasn’t as easy centuries ago when Mozart got his start.
The point is this:
Music sounds better when everyone plays every note a little out of tune. It’s how the universe seems to work too.
As for gravity, it works in part because space-time seems to curve and weave in the presence of super-heavy objects. No particle has ever been found that doesn’t follow the curved space-time paths that surround massive objects like our Sun.
Even particles like photons of light, which in the vacuum of space have no mass (or electric charge, for that matter), follow these curves; they bend their trajectories as they pass by heavy objects, even though they lack the mass and charge that some folks might assume they need to conduct an interaction.
Massless, charge-less photons do two things: first, they stay in their lanes — that is they follow the curved currents of space-time that exist near massive objects like a star; they fall across the gravity gradient toward these massive objects at exactly the same rate as every other particle or object in the universe would if they found themselves in the same gravitational field.
Second, light refracts in the dielectric of a field of gravity in the same way it refracts in any dielectric — like glass, for example. The deeper light falls into a gravity field, the stronger the field’s refractive index, and the more the light bends.
Measurements of star-position shifts near the edge of our own sun helped prove that space and time are curved like Einstein said and that Isaac Newton‘s gravity equation gives accurate results only for slow moving, massive objects.
Massless photons traveling from distant stars at the speed of light deflect near our sun at twice the angle of slow-moving massive objects. The deflection of light can be accounted for by calculating the curvature of space-time near our sun and adding to it the deflection forced by the refractive index of the gravity field where the passing starlight is observed.
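A sanity check for the doubters, using the standard general-relativity formula 4GM/(c²R) for light grazing the Sun’s edge and textbook values for the constants:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30     # mass of the Sun, kg
c = 2.998e8      # speed of light, m/s
R = 6.963e8      # radius of the Sun, m

einstein = 4 * G * M / (c**2 * R)    # radians; the full, doubled deflection
to_arcsec = 180 / math.pi * 3600

print(einstein * to_arcsec)          # ~1.75 arc-seconds -- what Eddington measured
print(einstein / 2 * to_arcsec)      # ~0.87 arc-seconds -- Newton's half-sized prediction
```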
In the exhilaration following Eddington’s observations during the eclipse of 1919, which confirmed his general theory, Einstein told a science reporter that space and time cannot exist in a universe devoid of matter and its flip-side equivalent, energy. People were stunned, some of them, into disbelief. Today, all physicists agree.
The coupling constants of subatomic particles don’t work the same way as gravity. No one knows why they work or where the constants come from. One thing scientists like Freeman Dyson have said: these constants don’t seem to be changing over time.
Evidence shows that these unusual constants are solid and foundational bedrocks that undergird our reality. The numbers don’t evolve. They don’t change.
Confidence comes not only from data carefully collected from ancient rocks and meteorites and analyzed by folks like Denys Wilkinson, but also from evidence uncovered by French scientists who examined the fossil-fission-reactors located at the Oklo uranium mine in Gabon in equatorial Africa. The by-products of these natural nuclear reactors of yesteryear have provided incontrovertible evidence that the value of the fine-structure constant has not changed in the last two-billion years. Click on the links to learn more.
Since this essay is supposed to describe the fine-structure constant named alpha (α), now might be a good time to ask: What is it, exactly? Does it have other unusual properties beside the coupling forces it helps define during interactions between electrons and photons? Why do smart people obsess over it?
I am going to answer these questions, and after I’ve answered them we will wrap our arms around each other and tip forward, until we lose our balance and fall into the rabbit hole. Is it possible that someone might not make it back? I suppose it is. Who is ready?
Alpha (α) (the fine-structure constant) is simply a number that is derived from a rotating vector (arrow) called an amplitude, which can be thought of as having begun its rotation pointing in a negative (minus, or leftward) direction from zero, with a length of .085424543…. When the length of this vector is squared, the fine-structure constant emerges.
It’s a simple number — .007297352… or 1 / 137.036…. It has no physical significance. The number has no units (like mass, velocity, or charge) associated with it. It’s a unit-less number of one dimension derived from an experimentally discovered, multi-dimensional (complex) number called an amplitude.
We could imagine the amplitude having a third dimension that drops through the surface of the graph paper. No matter how the amplitude is oriented in space; regardless of how space itself is constructed mathematically, only the absolute length of the amplitude squared determines the value of alpha (α).
Amplitudes — and probability densities calculated from them, like alpha (α) — are abstract. The fine-structure constant alpha (α) has no physical or spatial reality whatsoever. It’s a number that makes interaction equations balance no matter what systems of units are used.
Imagine that the amplitude of an electron or photon rotates like the hand of a clock at the frequency of the photon or electron associated with it. Amplitude is a rotating, multi-dimensional number. It can’t be derived. To derive the fine structure constant alpha (α), amplitudes are measured during experiments that involve interactions between subatomic particles; always between light and electricity; that is, between photons and electrons.
I said earlier that alpha (α) can be written as the fraction “1 / 137.036…”. Once upon a time, when measurements were less precise, some thought the number was exactly 1 / 137.
The number 137 is the 33rd prime number; the ancients believed that both numbers, 33 and 137, played important roles in magic and in deciphering secret messages in the Bible. The number 33 was Christ’s age at his crucifixion. It was proof, to ancient numerologists, of his divinity.
The number 137 is the value of the Hebrew word, קַבָּלָה (Kabbala), which means to receive wisdom.
In the centuries before quantum physics — during the Middle Ages — non-scientists published a lot of speculative nonsense about these numbers. When the numbers showed up in quantum mechanics during the twentieth century, mystics raised their eyebrows. Some convinced themselves that they saw a scientific signature, a kind of proof of authenticity, written by the hand of God.
That 137 is the 33rd prime number may seem mysterious by itself. But it doesn’t begin to explain the mysterious properties of the number 33 to the mathematicians who study the theory of numbers.
Numerology is a rabbit-hole in and of itself, at least for me. It’s a good thing that no one seems to be looking at the numbers on the right side of the decimal point of alpha (α) — .036 might unglue the too curious by half.
Read right to left (as Hebrew is), the number becomes 63 — the number of the abyss.
I’m going to leave it there. Far be it from me to reveal more, which might drive innocents and the uninitiated into forests filled with feral lunatics.
Folks are always trying to find relationships between α and other constants like π and e. One that I find interesting is the following:

1/α ≈ 4π³ + π² + π = 137.03630…
Do the math. It’s mysterious, no?
Well, it might be until someone subtracts a small correction term, which brings the result even closer to the experimentally determined value of α. Somehow, mystery diminishes with added complexity, correct? Numerology can lead to peculiar thinking e times out of π. Right?
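For anyone who would rather let a computer do the arithmetic on the π-flavored approximation above (a numerological curiosity, not physics):

```python
import math

approx = 4 * math.pi**3 + math.pi**2 + math.pi

print(approx)        # 137.03630... -- eerily close to 1/alpha
print(1 / approx)    # 0.0072973...
print(1 / 137.036)   # 0.0072973... -- the measured value
```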
The view today is that, yes, alpha (α) is annoyingly irrational; yet many other quantum numbers and equations depend upon it. The best known is:

α = e²/(4πε₀·ℏ·c) = 0.0072973525…
These constants (and others) show up everywhere in quantum physics. They can’t be derived from first principles or pure thought. They must be measured.
As technology improves, scientists make better measurements; the values of the constants become more precise. These constants appear in equations that are so beautiful and mysterious that they sometimes raise the hair on the back of a physicist’s head.
The equations of quantum physics tell the story about how small things that can’t be seen relate to one another; how they interact to make the world we live in possible. The values of these constants are not arbitrary. Change their values even a little, and the universe itself will pop like a bubble; it will vanish in a cosmic blip.
How can a chaotic, quantum house-of-cards depend on numbers that can’t be derived; numbers that appear to be arbitrary and divorced from any clever mathematical precision or derivation?
The inability to solve the riddles of these constants while thinking deeply about them has driven some of the most clever people on Earth to near madness — the fine-structure constant (α) is the most famous nut-cracker, because its reciprocal (137.036…) is so very close to the numerology of ancient alchemy and the kabbalistic mysteries of the Bible.
What is the number alpha (α) for? Why is it necessary? What is the big deal that has garnered the attention of the world’s smartest thinkers? Why is the number 1 / 137 so dang important during the modern age, when the mysticism of the ancient bards has been largely put aside?
Well, two reasons come immediately to mind. Physicists are adamant; if α was less than 1 / 143 or more than 1 / 131, the production of carbon inside stars would be impossible. All life we know is carbon-based. The life we know could not arise.
The second reason? If alpha (α) was less than 1 / 151 or more than 1 / 124, stars could not form. With no stars, the universe becomes a dark empty place.
Without mathematics, humans have no hope of understanding the universe.
Yet, here we are wrestling against all the evidence; against all the odds that the mysteries of existence will forever elude us. We cling to hope like a drowning sailor at sea, praying that the hour of rescue will soon come; we will blow our last breath in triumph; humans can understand. Everything is going to fall into place just as we always knew it would.
It might surprise some readers to learn that the number alpha (α) has a dozen explanations; a dozen interpretations; a dozen main-stream applications in quantum mechanics.
The simplest hand-wave of an explanation I’ve seen in print is that, depending on one’s point of view, “α” quantifies either the coupling strength of electromagnetism or the magnitude of the electron charge. I can say that it’s more than these, much more.
One explanation that seems reasonable on its face is that the magnetic-dipole spin of an electron must be interacting with the magnetic field that it generates as it rushes about its atom’s nucleus. This interaction produces energies which — when added to the photon energies emitted by the electrons as they hop between energy states — disrupt the electron-emitted photon frequencies slightly.
This jiggling (or hopping) of frequencies causes the fine structure in the colors seen on the screens and readouts of spectrographs — and in the bands of light which flow through the prisms that make some species of spectrographs work.
OK… it might be true. It’s possible. Nearly all physicists accept some version of this explanation.
Beyond this idea and others, there are many unexplained oddities — peculiar equations that can be written, which seem to have no relation to physics, but are mathematically beautiful.
For example: Euler’s number, “e” (not the electron charge we referred to earlier), when multiplied by the cosine of (1/α), equals 1 — or very nearly. (Make sure your calculator is set to radians, not degrees.) Why? What does it mean? No one knows.
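Get out the calculator, or run this, anyone who doesn’t believe (radians, remember):

```python
import math

inverse_alpha = 137.036                  # 1/alpha, treated as an angle in radians

print(math.cos(inverse_alpha))           # 0.36787... -- suspiciously close to 1/e
print(math.e * math.cos(inverse_alpha))  # 0.99998... -- very nearly 1
```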
What we do know is that Euler’s number shows up everywhere in statistics, physics, finance, and pure mathematics. For those who know math, no explanation is necessary; for those who don’t, consider clicking this link to Khan Academy, which will take you to videos that explain Euler’s number.
What about other strange appearances of alpha (α) in physics? Take a look at the following list of truths that physicists have noticed and written about; they don’t explain why, of course; indeed, they can’t; many folks wonder and yearn for deeper understanding:
1 — One amazing property about alpha (α) is this: every electron generates a magnetic field that seems to suggest that it is rotating about its own axis like a little star. If its rotational speed is limited to the speed of light (which Einstein said was the cosmic speed limit), then the electron, if it is to generate the charge we know it has, must spin with a diameter that is 137 times larger than what we know is the diameter of a stationary electron — an electron that is at rest and not spinning like a top. Digest that. It should give pause to anyone who has ever wondered about the uncertainty principle. Physicists don’t believe that electrons spin. They don’t know where their electric charge comes from.
2 — The energy of an electron that moves through one radian of its wave process is equivalent to its mass. Multiplying this number (called the reduced Compton wavelength of the electron) by alpha (α) gives the classical (non-quantum) electron radius, which, by the way, is about 3.2 times that of a proton. The current consensus among quantum physicists is that electrons are point particles — they have no spatial dimensions that can be measured. Click on the links to learn more.
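The first relationship is easy to verify with standard constants; the proton-radius value below is the older 0.877-femtometer figure, which is where the “about 3.2” comes from:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837e-31      # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
alpha = 1 / 137.035999   # fine-structure constant

reduced_compton = hbar / (m_e * c)        # ~3.8616e-13 m
classical_radius = alpha * reduced_compton

print(classical_radius)                   # ~2.8179e-15 m -- the classical electron radius
print(classical_radius / 0.877e-15)       # ~3.2 proton radii
```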
3 — The physics that lies behind the value of alpha (α) requires that the maximum number of protons that can coexist inside an atom’s nucleus must be less than 137.
Think about why.
Protons have the same (but opposite) charge as electrons. Protons attract electrons, but repel each other. The quarks, from which protons are made, hold themselves together in protons by means of the strong force, which seems to leak out of the protons over tiny distances to pull the protons together to make the atom’s nucleus.
The strong force is more powerful than the electromagnetic force of protons; the strong force enables protons to stick together to make an atom’s nucleus despite their electromagnetic repulsive force, which tries to push them apart.
The EM force from 137 protons inside a nucleus is enough to overwhelm the strong force that binds them and blow the nucleus apart.
Another reason for the instability of large nuclei in atoms might be — in the Bohr model of the atom, anyway — that the speed at which an electron hops about is approximately equal to the atomic number of the element times the fine-structure constant (alpha) times the speed of light.
When an electron approaches velocities near the speed of light, the Lorentz transformations of Special Relativity kick in. The atom becomes less stable while the electrons take on more mass; more momentum. It makes the largest numbered elements in the periodic table unstable; they are all radioactive.
The velocity equation is V = n · α · c. Element 118 — oganesson — presumably has some electrons that move along at 86% of the speed of light [118 × (1/137) × c ≈ 0.86 c]. At 86% of light-speed, relativistic properties of electrons transform to roughly twice their rest states.
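For anyone who wants to check the oganesson number, and the factor of roughly 2.3 that appears two paragraphs down, here is the Bohr-model arithmetic:

```python
alpha = 1 / 137.036

def lorentz_factor(beta):
    """Relativistic gamma for a speed given as a fraction of c."""
    return 1 / (1 - beta**2) ** 0.5

for z in (118, 124):              # oganesson, then the author's guess at the limit
    beta = z * alpha              # Bohr-model speed of the innermost electron, v/c
    print(z, round(beta, 3), round(lorentz_factor(beta), 2))

# 118 0.861 1.97  -- about 86% of light-speed; properties roughly double
# 124 0.905 2.35  -- the ~2.3 factor mentioned below
```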
Uranium is the largest naturally occurring element; it has 92 protons. Physicists have created another 26 elements in the lab, which takes them to 118, which is oganesson.
When 137 is reached (most likely before), it will be impossible to create larger atoms. My gut says that physicists will never get to element 124 — let alone to 137 — because the Lorentz transform of the faster moving electrons grows by then to a factor of 2.3. Intuition says, it is too large. Intuition, of course, is not always the best guide to knowledge in quantum mechanics.
Plutonium, by the way — the most poisonous element known — has 94 protons; it is man-made; one isotope (the one used in bombs) has a half-life of 24,000 years. Percolating plutonium from rotting nuclear missiles will destroy all life on Earth someday; it is only a matter of time. It is impossible to stop the process, which has already started with bombs lost at sea and damage to power plants like the ones at Chernobyl and at Fukushima, Japan. (Just thought I’d mention it since we’re on the subject of electron emissions, i.e., beta-radiation.)
4 — When sodium light (from certain kinds of streetlamps, for example) passes through a prism, its pure yellow-light seems to split. The dark band is difficult to see with the unaided eye; it is best observed under magnification.
The split can be measured to confirm the value of the fine-structure constant. The measurement is exact. It is this “fine-structure” that Arnold Sommerfeld noticed in 1916, which led to his nomination for the Nobel Prize; in fact Sommerfeld received eighty-four nominations for various discoveries. For some reason, he never won.
5 — The optical properties of graphene — a form of carbon used in solid-state electrical engineering — can be explained in terms of the fine-structure constant alone. No other variables or constants are needed.
6 — The gravitational force (the force of attraction) that exists between two electrons that are imagined to have masses equal to the Planck-mass is 137.036 times greater than the electrical force that tries to push the electrons apart at every distance. I thought the relationship should be the opposite until I did the math.
It turns out that the Planck-mass is huge — 2.176646 E-8 kilograms (the mass of the egg of a flea, according to a source on Wikipedia). Compared to neutrons, atoms, and molecules, flea eggs are heavy. The ratio of 137 to 1 (G force vs. e force) is hard to explain, but it seems to suggest a way to form micro-sized black holes at subatomic scales. Once black holes get started their appetites can become voracious.
The good thing is that no machine so far has the muscle to make Planck-mass morsels. Alpha (α) has slipped into the mathematics in a non-intuitive way, perhaps to warn folks that, should anyone develop and build an accelerator with the power to produce Planck-mass particles, they will have — perhaps inadvertently — designed a doomsday seed that could very well grow-up to devour Earth, if not the solar system and beyond.
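Doubters can check the 137-to-1 claim with standard values; the r² in both force laws cancels, which is why the ratio holds at every distance:

```python
G = 6.674e-11            # gravitational constant
k = 8.988e9              # Coulomb constant, 1/(4 pi epsilon_0)
e = 1.602e-19            # electron charge, coulombs
m_planck = 2.176646e-8   # Planck mass, kg (the flea egg)

gravity = G * m_planck**2    # numerator of F = G m^2 / r^2
electric = k * e**2          # numerator of F = k e^2 / r^2

print(gravity / electric)    # ~137 -- the reciprocal of alpha
```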
7 — The Standard Model of particle physics contains 20 or so parameters that cannot be derived; they must be experimentally discovered. One is the fine-structure constant (α), which is one of four constants that help to quantify interactions between electrons and photons.
8 — The speed of light is 137 times greater than the speed of “orbiting” electrons in hydrogen atoms. The electrons don’t actually “orbit.” They do move around in the sense of a probability distribution, though, and alpha (α) describes the ratio of their velocities to the cosmic speed limit of light. (See number 3 in this list for a description of element 118 — oganesson — and the velocity of some of its electrons.)
9 — The energy of a single photon is precisely related to the energy of repulsion between two electrons by the fine-structure constant alpha (α). Yes, it’s weird. How weird? Set the distance between two electrons equal to the reduced wavelength of any photon. The energy of the photon will measure 137.036 times more than the repulsive energy between the electrons. Both energies fall off inversely: photon energy with wavelength, electron repulsion with distance. The two stay locked in this ratio at every scale. The relationship seems to have everything to do with the geometry of the two energy fields and how they are measured. Regardless, why “α”?
10 — The charge of an electron divided by the Planck charge — the electron charge defined by natural units, where constants like the speed of light and the gravitational constant are set equal to one — is equal to √α, the square root of the fine-structure constant. This strange relationship is another indicator that something fundamental is going on at a very deep level, which no one has yet grasped.
11 — Some readers who haven’t toked too hard on their hash-pipes might remember from earlier paragraphs that the “strong force” is what holds quarks together to make protons and neutrons. It is also the force that drives protons to compactify into a solid atomic nucleus.
The strong force acts over short distances not much greater than the diameter of the atom’s nucleus itself, which is measured in femtometers. At this scale the strong force is 137 times stronger than the electromagnetic force, which is why protons are unable to push themselves apart; it is one reason why quarks are almost impossible to isolate. Why 137? No one has a clue.
Now, dear reader, I’m thinking that right now might be a good time to share some special knowledge — a reward for your courage and curiosity. We’ve spelunked together for quite a while, it seems. Some might think we are lost, but no one has yet complained.
Here is a warning and a promise. We are about to descend into the deepest, darkest part of the quantum cave. Will you stay with me for the final leg of the journey? I know the way. Do you believe it? Do you trust me to bring you back alive and sane?
In the Wikipedia article about α, the author writes, In natural units, commonly used in high energy physics, where ε₀ = c = h/2π = 1, the value of the fine-structure constant is:

α = e²/4π
Every quantum physicist knows the formula. In natural units e = .302822….
Remember that the units collapse to make “α” a dimensionless number. Dimensions don’t go away just because the constants used to calculate the final result are set equal to “1”, right? Note that the value above is calculated a little differently than in the Planck system — where 4πε₀ is set equal to “1”.
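Anyone can verify the natural-units formula with the value of e quoted above:

```python
import math

e_natural = 0.302822     # the electron charge in natural units, from the text

alpha = e_natural**2 / (4 * math.pi)
print(alpha)             # 0.0072973...
print(1 / alpha)         # 137.036...
```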
As I mentioned, the value for “α” doesn’t change. It remains equal to .0073…, which is 1 / 137.036…. What puzzles physicists is, why?
What is the number 4π about? Why, when 4π is stripped away, does there remain only “α” — the mysterious number that seems to quantify a relationship of some kind between two electrons?
Well… electrons are fermions. Like protons and neutrons they have increments of 1/2 spin. What does 1/2 spin even mean?
It means that under certain experimental conditions when electrons are fired through a polarized disc they project a visible interference pattern on a viewing screen. When the polarizing disc is rotated, the interference pattern on the screen changes. The pattern doesn’t return to its original configuration until the disc is rotated twice — that is, through an angle of 720°, which is 4π radians.
Since the polarizer must be spun twice, physicists reason that the electron must have 1/2 spin (intrinsically) to spin once for every two spins of the polarizer. Yes, it makes no sense. It’s crazy — until it isn’t.
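A toy illustration of the arithmetic, not a simulation of the polarizer experiment: a spin-1/2 state picks up a phase of e^(iθ/2) when rotated through an angle θ, so one full turn flips its sign and only a second turn restores it:

```python
import cmath, math

for turns in (1, 2):
    angle = turns * 2 * math.pi          # 360 degrees, then 720 degrees
    phase = cmath.exp(1j * angle / 2)    # the spin-1/2 rotation phase
    print(turns, complex(round(phase.real, 10), round(phase.imag, 10)))

# 1 (-1+0j) -- one full turn flips the state's sign
# 2 (1+0j)  -- only after 720 degrees (4 pi radians) is it restored
```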
What is more insane is that an irrational, dimensionless number that cannot be derived by logic or math is all that is left. We enter the abyss when we realize that this number describes the interaction of one electron and one photon of light, which is an oscillating bundle of no one knows what (electricity and magnetism, ostensibly) that has no mass and no charge.
All photons have a spin of one, which reassures folks (because it seems to make sense) until they realize that all of a photon’s energy comes from its so-called frequency, not its mass, because light has no mass in the vacuum of space. Of course, photons on Earth don’t live in the vacuum of space. When photons pass through materials like glass or the atmosphere, they disturb electrons in their wake. The electrons emit polaritons, which physicists believe add mass to photons and slow them down.
The number of electrons in materials and their oscillatory behavior in the presence of photons of many different frequencies determine the production intensity of polaritons. It seems to me that the relationship cannot be linear, which simply means that intuition cannot guide predictions about photon behavior and their accumulation of mass in materials like glass and the earth’s atmosphere. Everything must be determined by experiment.
Theories that enable verifiable predictions about photon mass and behavior might exist or be on the horizon, but I am not connected enough to know. So check it out.
Anyway… frequency is the part of Einstein’s energy equation that is always left out because, presumably, teachers feel that if they unveil the whole equation they won’t be believed — if they are believed, their students’ heads might explode. Click the link and read down a few paragraphs to explore the equation.
In the meantime, here’s the equation:

E = √((mc²)² + (hf)²)
When mass is zero, energy equals the Planck constant times the frequency. It’s the energy of photons. It’s the energy of light.
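As a concrete example, take a green photon of 500 nanometers (a wavelength chosen arbitrarily):

```python
h = 6.626e-34       # Planck constant, joule-seconds
c = 2.998e8         # speed of light, m/s

wavelength = 500e-9             # 500 nanometers of green light
frequency = c / wavelength      # cycles per second
energy = h * frequency          # E = h * f, since the photon's mass is zero

print(frequency)    # ~6.0e14 Hz
print(energy)       # ~4.0e-19 joules -- about 2.5 electron-volts
```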
Photons can and do have any frequency at all. A narrow band of their frequencies is capable of lighting up our brains, which have a strange ability to make sense of the hallucinations that flow through them.
Click on the links to get a more detailed description of these mysteries.
What do physicists think they know for sure?
When an electron hops between its quantum energy states it can emit and absorb photons of light. When a photon is detected, the measured probability amplitude associated with its emission, its direction of travel, its energy, and its position are related to the magnitude of the square of a multi-dimensional number. The scalar (α) is the probability density of a measured vector quantity called an amplitude.
When multi-dimensional amplitudes are manipulated by mathematics, terms emerge from these complex numbers, which can’t be ignored. They can be used to calculate the interference patterns in double-slit experiments, for one thing, performed by every student in freshman physics.
The square root of the fine-structure constant matches the experimentally measured magnitude of the amplitude of electron/photon interactions — a number close to .085. It means that the vector that represents the dynamic of the interaction between an electron and a photon gets “shrunk” during an interaction by almost ten percent, as Feynman liked to describe it.
Because amplitude is a complex (multi-dimensional) number with an associated phase angle or direction, it can be used to help describe the bounce of particles in directions that can be predicted within the limitations of the theory of quantum probabilities.
Square the amplitude, and a number (α) emerges — the one-dimensional, unit-less number that appears in so many important quantum equations: the fine-structure constant.
Why? It’s a mystery. It seems that few physical models that go beyond a seemingly nonsensical vision of rotating hands on a traveling clock can be conjured forth by the brightest imaginations in science to explain the why or how.
The fine-structure constant, alpha (α) — like so many other phenomena on quantum scales — describes interactions between subatomic particles — interactions that seem to make no intuitive sense. It’s a number that is required to make the equations balance. It just does what it does. The way it is — for now, at least — is the way it is. All else is imagination and guesswork backed by some very odd math and unusual constants.
By the way (I almost forgot to mention it): α is very close to 30 times the square of the charge of an at-rest electron divided by Planck’s reduced constant.
Anyone is welcome to confirm the calculation of what seems to be a fairly precise ratio of electron charge to Planck’s constant if they want. But what does it mean?
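Here is the check with standard SI values. One caveat, loudly: e²/ℏ is not a dimensionless combination, so this near-miss only happens in SI units; it is a units coincidence, not a law:

```python
e = 1.602176634e-19      # electron charge, coulombs
hbar = 1.054571817e-34   # reduced Planck constant, joule-seconds

print(30 * e**2 / hbar)  # 0.0073024... -- within about 0.07% of alpha (0.0072973...)
```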
What does it mean?
Looking for an answer will bury the unwary forever in the rabbit hole.
I’m thinking that right now might be a good time to leave the abyss and get on with our lives. Anyone bring a flashlight?
Communicating with distant spacecraft in the solar system is cumbersome and time consuming because the distances are huge and no one can send signals faster than the speed-of-light. A signal from Earth can take from three to twenty-two minutes to reach Mars depending on the position of the two planets in their orbits. Worse, the Sun blocks signals when it lies in their path.
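The delays are easy to reproduce from the commonly quoted closest and farthest Earth-Mars distances:

```python
c = 2.998e8          # speed of light, m/s

closest = 54.6e9     # Earth-Mars distance at closest approach, m
farthest = 401e9     # Earth-Mars distance near conjunction, m

print(closest / c / 60)    # ~3.0 minutes, one way
print(farthest / c / 60)   # ~22.3 minutes, one way
```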
As countries explore farther from Earth to Mars and beyond, these delays and blockages will become annoying. The need to develop a technology for instantaneous communication that can penetrate or bypass the Sun will become compelling.
Quantum particles are known for their ability to “tunnel” through or ignore barriers — as they clearly do in double-slit experiments where electrons are fired one at a time to strike impossible locations. So, looking to quantum processes for signaling might be a good place to start to find solutions to long-range communication problems.
NOTE FROM THE EDITORIAL BOARD, May 8, 2019: Sixteen months after Billy Lee published this post, the Chinese launched the Mozi satellite. It successfully carried out the first in a series of experiments with entangled quantum particles over space-scale distances. This technology promises a quantum encrypted network by the end of 2020 and a global web built on quantum encryption by 2030. The Chinese seem to be on the cusp of both FTL communication (through teleportation of information) and quantum encryption.
If scientists and engineers are able to develop quantum signaling over solar-system-scale distances, they might discover later that adding certain tweaks and modifications will render the Sun transparent to our evolving planet-to-planet communications network.
Indeed, the Sun is transparent to neutrinos — the lightest (least massive) particles known. In 2012, scientists showed they could use neutrinos to send a meaningful signal through materials that block or attenuate most other kinds of subatomic particles.
But this article is about faster than light (FTL) communication. Making the Sun transparent to inter-planetary signaling is best left for another article.
Quantum entanglement is the only phenomenon known in which information seems to pass instantly between widely separated objects. But because the information is generated randomly, and because it is carried by objects traveling at or below the speed of light, it seems clear to most physicists that faster-than-light (FTL) messaging can't come from entanglement, or from any other process — especially in light of Einstein's assertion of a cosmic speed limit.
Proposals for FTL communications based on technologies rooted in the quantum process of entanglement are usually dismissed as crackpot engineering because they seem to be built on fundamental misunderstandings of the phenomenon.
Difficulties with the technology, such as the spontaneous breaking and emergence of entanglement, are often overlooked; to skeptics, progress seems impossible. Nevertheless, there may be ways to make FTL happen. The country that develops the technology first will accrue advantages for its space exploration program.
In this essay I hope to explain how FTL messaging might work, put my ideas into a blog-bottle and throw it into the vast cyber-ocean. Yes, the chances are almost zero that the right people will find the bottle, but I don’t care. For me, it’s about the fun of sharing something interesting and trying to explain it to whoever will listen.
Maybe a wandering NSA bot will detect my post and shuffle it up the chain-of-command for a human to review. What are the odds? Not good, probably.
Anyway, two serious obstacles must be overcome to communicate instantly over astronomical distances using quantum entanglement. The first is the problem of creating a purposeful signal. (To learn more about entanglement click the link in this sentence to go to Billy Lee’s essay, Bell’s Inequality. The Editors)
The second problem is how to create the architectural space to send signals instantly to a distant observer. Knowledgeable people who have written about the subject seem to agree that both obstacles are insurmountable.
Why? It’s because the states of an entangled pair of subatomic particles are not determined until one of the particles is measured. The states can’t be forced; they can only be discovered — and only after they are created by a measurement.
Once one particle's state is created (randomly) through the mechanism of a measurement, the information is transferred to the entangled partner-particle instantly, yes, but the particles themselves are traveling at the speed of light or less. The randomly generated states they carry aren't going anywhere faster than light's speed limit.
How can these difficulties be overcome?
Although the architectural problem is the most interesting, I want to address the purposeful-signal problem first. A good analogy to aid understanding might be that of an old-fashioned typewriter. Each key on a typewriter when pressed delivers a unique piece of information (a letter of the alphabet) onto a piece of paper. A person standing nearby can read the message instantly. Fair enough.
Imagine setting up a device which emits entangled pairs of photons; rig the emissions so that half the photons when measured later will be polarized one way, half the other. No one can know which photons will display which state, but they can predict the overall ratio of the two polarities from a “weighted” emitter.
Call the 50/50 ratio letter "A". Now imagine configuring another emitter-system to project 3 of 4 photons polarized one way and 1 of 4 another — after measurement. Call the 3-to-1 ratio "B". If engineers are able to construct and rig weighted emitters like these, they will have solved half of the FTL communication problem.
Although no one can know the state of any single particle until after a measurement, engineers could identify the ratio of polarization states in a large number sent from any of the unique emitter-configurations they design.
This capability would permit them to build a kind of typewriter keyboard by setting up photon emitters with enough statistical variation in their emission patterns to differentiate them into as many identifiable signatures as needed — an entire alphabet perhaps, or, better yet, some other symbolic coding scheme like a binary on-off signaling system. In that case, one configuration of emitter would suffice, but designers would need to solve other technical problems involving rapid signal-sequencing.
To send a purposeful-signal, engineers might select an array of emitters and rapid-fire photons from them. If they selected an “A” (or perhaps an “on”) emitter, 50% of the photons would register as being in a particular polarization state after they were measured. If they chose “B”, 75% would register, and so on. After measurements on Earth, the entangled bursts of particles on their way to Mars would take on these ratios instantly.
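Whether nature permits rigged emitters like these is exactly the open question (standard quantum mechanics says a distant partner's measurement ratios can't be steered this way). Setting that aside, the statistics of telling an "A" burst from a "B" burst are straightforward. A toy sketch, with hypothetical emitters and made-up parameters:

```python
import random

random.seed(42)

LETTERS = {"A": 0.50, "B": 0.75}  # the 50/50 and 3-to-1 emitters from the text

def burst(ratio, n=1000):
    """Simulate measuring n photons from a hypothetical 'weighted' emitter;
    ratio is the designed fraction that registers in polarization state 1."""
    return sum(1 for _ in range(n) if random.random() < ratio)

def decode(count, n=1000):
    """Pick whichever designed ratio sits closest to the observed ratio."""
    observed = count / n
    return min(LETTERS, key=lambda letter: abs(LETTERS[letter] - observed))

for letter, ratio in LETTERS.items():
    hits = burst(ratio)
    print(f"sent {letter} -> decoded {decode(hits)} ({hits}/1000 in state 1)")
```

With a thousand photons per burst, the statistical spread in each observed ratio is about 1.6 percentage points, far smaller than the 25-point gap between "A" and "B", so decoding is reliable.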
I believe it might be possible to build emitter-systems someday — emitter systems with non-random polarization ratios. If not, then as is sometimes said at NASA, Houston, we have a problem. FTL communication may not be designable.
On the other hand, if engineers build these emitters, then we can know for sure that the entangled photon-twins in the Mars-bound emitter-bursts will display the same statistical patterns, the same polarization ratios, as their partners measured on Earth. Anyone receiving bundles of entangled photons from these encoded emitters will be able to determine what they encode for by the statistical distribution of their polarities.
Ok. Assume engineers build these emitter-systems and set up a keyboard. How might they ensure that when someone presses a key the letter sent is seen immediately by a distant observer?
How might the architectural geometry of the communication space be configured?
This part is the most interesting, at least to me, because its success doesn’t depend on whether anyone sends a single binary-signal or a zoo of symbols — and it’s the most critical.
It does no one any good to instantly communicate polarization states to bunches of photons traveling at the speed of light to Mars. The signals still take three to twenty-two minutes to get there, no matter how instantly they are told what state to be in. We want the machines on Mars to receive messages at the same time we send them.
How can we do that?
Maybe the method is becoming obvious to some readers. The answer is: photons in Earth-bound labs aren’t measured until their entangled twins have had time enough to travel to Mars (or wherever else they might be going). Engineers will entrap on Earth the photons from each “lettered” emitter and send their entangled twins to Mars. The photons from each “lettered” emitter on Earth will circulate in a holding bin (a kind of information-capacitor), until needed to construct a message.
As entangled twins reach the Mars Rover (for example), anyone can “type-out” a message by measuring the Earth-bound photons in the particular holding bins that encode the “letters” — that is, they can start the process that takes measurements that will induce the polarization-ratios of the “lettered” emissions used to “type” messages. Instantly, the entangled particle-bursts reaching Mars will take on these same polarization-ratios.
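In outline, the sequence I'm proposing looks like this. It is pure bookkeeping of the idea, with a made-up 13-minute transit time, and it makes no claim that the underlying physics cooperates.

```python
LIGHT_DELAY_MIN = 13  # one-way Earth-to-Mars transit, minutes (really varies 3-22)

steps = [
    (0, "Earth emits entangled pairs; one twin of each pair circulates"
        " in a 'lettered' holding bin, the other streams toward Mars"),
    (LIGHT_DELAY_MIN, "the traveling twins arrive and are buffered at Mars"),
    (LIGHT_DELAY_MIN, "to 'type' a letter, Earth measures the matching bin;"
                      " on this proposal the Mars burst shows that ratio instantly"),
    (LIGHT_DELAY_MIN, "Mars decodes the ratio into the letter, with no added delay"),
]
for minute, action in steps:
    print(f"t = {minute:2d} min | {action}")
```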
I hear folks saying, Wait a minute! Stop right there, Billy Lee! No one can hold onto photons. You can’t store them. You can’t trap or retain them, because they are impervious to magnets and electrical fields. No one can delay measurements for five milliseconds, let alone five minutes or five days.
Well, to my mind that's just a technical hurdle that clever people can jump over if they set their minds to it. After all, it is possible to confine light for short periods with simple barriers, like walls.
Then again, electrons or muons might make better candidates for communication. Unlike photons, they are easily retained and manipulated by electromagnetic fields.
Muons are short-lived and would have to be accelerated to nearly light-speed to gain enough lifespan to be useful. They are 207 times heavier than electrons, but they travel well and penetrate obstacles easily. (Protons, by comparison, are nine times heavier than muons.)
The National Security Agency (NSA) is said to photograph ships at sea with penetrating muon-imaging technology to make sure none harbor nuclear weapons. Muons are particles some engineers are already comfortable manipulating in designs meant to give the USA an edge over other countries.
We also have a lot of experience with electrons. Electrons are long-lived — they don’t have to be accelerated to near light-speeds to be useful. Speed doesn’t matter, anyway.
Entangled particles don’t have to travel at light-speed to communicate well, nor do they have to live forever. Particles only need enough time to get to Mars (or wherever they’re going) before designers piggyback onto their Earth-bound entangled partners to transmit instant-messages.
Even if it takes days or weeks for bursts of entangled-particles to travel to Mars (or wherever else), it makes no difference. Engineers can run and accumulate a sufficiently robust loop of streaming emissions on Earth to enable folks, soon enough, to “type” out FTL messages in real time whenever necessary.
As long as control of and access to the emitted particle-twins on Earth is maintained, people can "type out" messages by measuring the captive Earth-bound twins at the appropriate time, imposing and transferring the statistical configuration of their rigged polarization ratios (or spins, in the case of electrons or muons) to the Mars-arriving particle-bursts, creating messages that a detector at that far-away location can decode and deliver instantly.
The challenge of instant-return messaging could be met by employing the same technologies on Mars (or wherever else) as on Earth. The trick at both ends of the communication pipe-line is to store (and if necessary replenish) a sufficient quantity of the elements of any possible communication in streaming particle-emission capacitors.
Tracking and timing issues don’t require the development of new technologies; the engineering challenges are trivial by comparison and can be managed by dedicated computers.
Discharging streaming information capacitors to send ordered instant messages in real-time is new — perhaps a path forward exists that engineers can follow to achieve instant, long-range messaging through the magic of quantum entanglement.
The technical challenges of designing stable entanglement protocols that will enable an illusion of instant messaging that is both useful and practical are formidable, but everything worth doing is hard — until it isn’t.
A mystery lies at the heart of quantum physics. At the tiniest scales, when a packet of energy (called a quantum) is released during an experiment, the wave packet seems to occupy all space at once. Only when a sensor interacts with it does it take on the behavior of a particle.
Its location can be anywhere, but the odds of finding it at any particular location are random within certain rules of quantum probabilities.
One way to think about this concept is to imagine a quantum “particle” released from an emitter in the same way a child might emit her bubble-gum by blowing a bubble. The quantum bubble expands to fill all space until it touches a sensor, where it pops to reveal its secrets. The “pop” registers a particle with identifiable states at the sensor.
Scientists don’t detect the particle until its bubble pops. The bubble is invisible, of course. In fact, it is imaginary. Experimenters guess where the phantom bubble will discharge by applying rules of probability.
This pattern of thinking, helpful in some ways, is probably profoundly wrong in others. The consensus among physicists I follow is that no model can be imagined that won’t break down.
Scientists say that evidence seems to suggest that subatomic particles don’t exist as particles with identifiable states or characteristics until they are brought into existence by measurements. One way to make a measurement is for a conscious experimenter to make one.
The mystery is this: if the smallest objects of the material world don’t exist as identifiable particles until after an observer interacts in some way to create them, how is it that all conscious humans see the same Universe? How is it that people agree on what some call an “objective” reality?
Quantum probabilities should construct for anyone who is interacting with the Universe a unique configuration — an individual reality — built-up by the probabilities of the particular way the person interfaces with whatever they are measuring. But this uniqueness is not what we observe. Everyone sees the same thing.
John von Neumann was the theoretical physicist and mathematician who developed the mathematics of quantum mechanics. He advanced the knowledge of humankind by leaps and bounds in many subjects until his death in 1957 from a cancer he may have acquired while monitoring atomic tests at Bikini Atoll.
“Johnny” von Neumann had much to say about the quantum mystery. A few of his ideas and those of his contemporary, Erwin Schrödinger, will follow after a few paragraphs.
As for von Neumann, he was a bona fide genius — a polymath with a photographic memory who memorized entire books, like Goethe's Faust, which he recited on his deathbed to his brother.
Von Neumann was fluent in Latin and ancient Greek as well as modern languages. By the age of eight, he had acquired a working knowledge of differential and integral calculus. A genius among geniuses, he grew up to become a member of the A-team that created the atomic bomb at Los Alamos.
He died under the watchful eyes of a military guard at Walter Reed Hospital, because the government feared he might spill vital secrets while sedated. He was that important. The article in Wikipedia about his life is well worth the read.
Von Neumann developed a theory about the quantum process which I won’t go into very deeply, because it’s too technical for a blog on the Pontificator, and I’m not an expert anyway. [Click on links in this article to learn more.] But other scientists have said his theory required something like the phenomenon of consciousness to work right.
The potential existence of the particles which make up our material reality was just that — a potential existence — until there occurred what Von Neumann called, Process I interventions. Process II events (the interplay of wave-like fields and forces within the chaotic fabric of a putative empty space) could not, by themselves, bring forth the material world. Von Neumann did hypothesize a third process, sometimes called the Dirac choice, to allow nature to perform like Process I interventions in the apparent absence of conscious observers.
Von Neumann developed, as we said, the mathematics of quantum mechanics. No experiment has ever found violations of his formulas. Erwin Schrödinger, a contemporary of von Neumann who worked out the quantum wave-equation, felt confounded by von Neumann's work and his own. He proposed that for quantum mechanics to make sense, for it to be logically consistent, consciousness might be required to have an existence independent of human brains, or any other brains for that matter. He believed, as von Neumann may have, that consciousness could be a fundamental property of the Universe.
The Universe could not come into being without a von Neumann Process I or III operator which, in Schrödinger's view, every conscious life-form plugs into, much as we today plug a television into a cable-outlet to view video. This shared consciousness, he reasoned, is why everyone sees the same material Universe.
Billy Lee
Post Script: Billy Lee has written several articles on this subject. Conscious Life and Bell’s Inequality are good reads and contain links to videos and articles. Sensing the Universe is another. Billy Lee sometimes adds to his essays as more information becomes available. Check back from time to time to learn more. The Editorial Board
UPDATE: 18 December 2022: On 4 October 2022, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics to:
Alain Aspect, Institut d'Optique Graduate School, Université Paris-Saclay and École Polytechnique, Palaiseau, France
John F. Clauser, J.F. Clauser & Assoc., Walnut Creek, CA, USA
Anton Zeilinger, University of Vienna, Austria
“for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science”
UPDATE: September 5, 2019: I stumbled across this research, published in Nature in December 2011, where scientists reported entanglement of vibrational patterns in separated diamond crystals large enough to be viewed without magnification. Nature doi:10.1038/nature.2011.9532
UPDATE: May 8, 2018: This video from PBS Digital Studios is the best yet. Click the PBS link to view the latest experimental results involving quantum mechanics, entanglement, and their non-intuitive mysteries. The video is a little advanced and fast paced; beginners might want to start with this link.
UPDATE: February 4, 2016: Here is a link to the August 2015 article in Nature, which makes the claim that the last testable loophole in Bell’s Theorem has been closed by experiments conducted by Dutch scientists. Conclusion: quantum entanglement is real.
UPDATE: Nov. 14, 2014: David Kaiser proposed an experiment to determine Is Quantum Entanglement Real? Click the link to redirect to the Sunday Review, New York Times article. It’s a non-technical explanation of some of the science related to Bell’s Theorem.
John Stewart Bell's theorem of 1964 followed naturally from his proof of an inequality (now named after him), which showed that the behavior of quantum particles violates limits that logic seems to impose on any common-sense theory of nature.
It is the most profound discovery in all science, ever, according to Henry Stapp—retired from Lawrence Berkeley National Laboratory and former associate of Wolfgang Pauli and Werner Heisenberg. Other physicists like Richard Feynman said Bell simply stated the obvious.
Here is an analogy I hope gives some idea of what is observed in quantum experiments that violate Bell’s Inequality: Imagine two black tennis balls—let them represent atomic particles like electrons or photons or molecules as big as buckyballs.
The tennis balls are created in such a way that they become entangled—they share properties and destinies. They share identical color and shape. [Entangled particles called fermions display opposite properties, as required by the Pauli exclusion principle.]
Imagine that whatever one tennis ball does, so does the other; whatever happens to one tennis ball happens to the other, instantly it turns out. The two tennis balls (the quantum particles) are entangled.
[For now, don’t worry about how particles get entangled in nature or how scientists produce them. Entanglement is pervasive in nature and easily performed in labs.]
According to optical and quantum experimentalist Mark John Fernee of Queensland, Australia, “Entanglement is ubiquitous. In fact, it's the primary problem with quantum computers. The natural tendency of a qubit in a quantum computer is to entangle with the environment. Unwanted entanglement represents information loss, or decoherence. Everything naturally becomes entangled. The goal of various quantum technologies is to isolate entangled states and control their evolution, rather than let them do their own thing.”
In nature, all atoms with more than one electron in their shells have entangled electrons. Entangled atomic particles are now thought to play important roles in many previously unexplained biological processes like photosynthesis, cell enzyme metabolism, animal migration, metamorphosis, and olfactory sensing. There are several ways to entangle more than a half-dozen atomic particles in experiments.
Imagine particles shot like tennis balls from cannons in opposite directions. Any measurement (or disturbance) made on a ball going left will have the same effect on an entangled ball traveling to the right.
So, if a test on a left-side ball allows it to pass through a color-detector, then its entangled twin can be thought to have passed through a color-detector on the right with the same result. If a ball on the left goes through the color-detector, then so will the entangled ball on the right, whether or not the color test is performed on it. If the ball on the left doesn’t go through, then neither did the ball on the right. It’s what it means to be entangled.
Now imagine that cannons shoot thousands of pairs of entangled tennis balls in opposite directions, to the left and right. The color-detector on the left is calibrated to pass half of the black balls. Observers watching for tennis balls coming through always see black balls, but only the half that get through.
Spin describes a particle property of quantum objects like electrons — in the same way color or roundness describes tennis balls. The property is confusing, because no one believes electrons (or any other quantum objects) actually spin. The math of spin is underpinned by the complex mathematics of spinors, which transform spin arrows into multi-dimensional objects that are not easy to visualize or illustrate. Look for an explanation of how spin is observed in the laboratory later in the essay. Click links for more insight.
Now, imagine performing a test for roundness on the balls shot to the right. The test is performed after the black test on the left, but before any signal or light has time to travel to the balls on the right. The balls going right don’t (and can’t) learn what the detector on the left observed. The roundness-detector is set to allow three-fourths of all round tennis balls through.
When balls on the right are counted, only three-eighths of them pass through the roundness-detector, not three-fourths. Folks might speculate that the roundness-detector is acting only on the half of the balls whose twins passed through the color-detector on the left. And they would be right.
These balls share the same destinies, right? Apparently, the balls on the right learned instantly which of their entangled twins the color-detector on the left allowed to pass through, despite all efforts to prevent it.
So now do the math. One-half (the fraction of the black balls that passed through the left-side color-detector) multiplied by three-fourths (the fraction calibrated to pass through the right-side roundness-detector) equals three-eighths. That’s what is seen on the right — three-eighths of the round, black tennis balls pass through the right-side roundness-detector during this fictionalized and simplified experiment.
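A quick Monte-Carlo run of the fictionalized setup confirms the arithmetic. It's a purely classical toy model with a shared "destiny" built in, nothing quantum about it:

```python
import random

random.seed(1)
trials = 100_000
passed_right = 0

for _ in range(trials):
    # Entangled pair: whatever happens to one ball happens to its twin.
    passes_color = random.random() < 0.50   # left detector passes half
    passes_round = random.random() < 0.75   # right detector passes 3 of 4
    # A right-side ball registers only if its twin cleared the color test
    # on the left AND it clears the roundness test on the right.
    if passes_color and passes_round:
        passed_right += 1

print(passed_right / trials)   # -> about 0.375, i.e., three-eighths
```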
According to Bell’s Inequality, twice as many balls should pass through the right-side detector (three-fourths instead of three-eighths). Under the rules of classical physics (which includes relativity), communication between particles cannot exceed the speed of light.
There is no way the balls on the right can know if their entangled twins made it through the color detector on the left. The experiment is set up so that the right-side balls do not have time to receive a signal from the left-side. The same limitation applies to the detectors.
The question scientists have asked is: how can these balls (quantum particles) — separated by large distances — know and react instantaneously to what is happening to their entangled twins? What about the speed limit of light? Instantaneous exchange of information is not possible, according to Einstein.
The French quantum physicist Alain Aspect suggested his way of thinking about it in the science journal Nature (March 19, 1999).
He wrote: The experimental violation of Bell’s inequalities confirms that a pair of entangled photons separated by hundreds of meters must be considered a single non-separable object — it is impossible to assign local physical reality to each photon.
Of course, the single non-separable object can’t have a length of hundreds of meters, either. It must have zero length for instantaneous communication between its endpoints. But it is well established by the distant separation of detectors in experiments done in labs around the world that the length of this non-separable quantum object can be arbitrarily long; it can span the universe.
When calculating experimental results, it’s as if a dimension (in this case, distance or length) has gone missing. It’s eerily similar to the holographic effect of a black hole where the three-dimensional information that lives inside the event-horizon is carried on its two-dimensional surface. (See the technical comment included at the end of the essay.)
Another way physicists have wrestled with the violations of Bell’s Inequality is by postulating the concept of superposition. Superposition is a concept that flows naturally from the linear algebra used to do the calculations, which suggests that quantum particles exist in all their possible states and locations at the same time until they are measured.
Measurement forces wave-particles to “collapse” into one particular state, like a definite position. But some physicists, like Roger Penrose, have asked: how do all the super-positioned particles and states that weren’t measured know instantaneously to disappear?
Superposition, a fundamental principle of quantum mechanics, has become yet another topic physicists puzzle over. They agree on the math of superposition and the wave-particle collapse during measurement but don’t agree on what a measurement is or the nature of the underlying reality. Many, like Richard Feynman, believe the underlying reality is probably unknowable.
Quantum behavior is non-intuitive and mysterious. It violates the traditional ideas of what makes sense. As soon as certainty is established for one measurement, other measurements, made earlier, become uncertain.
It’s like a game of whack-a-mole. The location of the mole whacked with a mallet becomes certain as soon as it is struck, but the other moles scurry away only to pop up and down in random holes so fast that no one is sure where or when they really are.
Physicists have yet to explain the many quantum phenomena encountered in their labs except to throw up their hands and say — paraphrasing Feynman — it is the way it is, and the way it is, well, the experiments make it obvious.
But it's not obvious, at least not to me and, apparently, to many others more knowledgeable than I am. Violations of Bell's Inequality confound people's understanding of quantum mechanics and the world in which it lives. A consequence has been that at least a few scientists seem ready to believe that one, perhaps two, or maybe all four, of the following statements are false:
1) logic is reliable and enables clear thinking about all physical phenomena;
2) no signal or influence can travel faster than light;
3) particles carry definite properties of their own before anyone measures them;
4) a model can be imagined to explain quantum phenomena.
I feel wonder whenever the idea sinks into my mind that at least one of these four seemingly self-evident and presumably true statements could be false — possibly all four — because repeated quantum experiments suggest they must be. Why isn’t more said about it on TV and radio?
The reason could be that the terrain of quantum physics is unfamiliar territory for a lot of folks. Unless one is a graduate student in physics — well, many scientists don’t think non-physicists can even grasp the concepts. They might be right.
So, a lot is being said, all right, but it’s being said behind the closed doors of physics labs around the world. It is being written about in opaque professional journals with expensive subscription fees.
The subtleties of quantum theory don’t seem to suit the aesthetics of contemporary public media, so little information gets shared with ordinary people. Despite the efforts of enthusiastic scientists — like Brian Cox, Sean M. Carroll, Neil deGrasse Tyson and Brian Greene — to serve up tasty, digestible, bite-size chunks of quantum mechanics to the public, viewer ratings sometimes fall flat.
When physicists say something strange is happening in quantum experiments that can’t be explained by traditional methods, doesn’t it deserve people’s attention? Doesn’t everyone want to try to understand what is going on and strive for insights? I’m not a physicist and never will be, but I want to know.
Even I — a mere science-hobbyist who designed machinery back in the day — want to know. I want to understand. What is it that will make sense of the universe and the quantum realm in which it rests? It seems, sometimes, that a satisfying answer lies always just outside my grasp.
Here is a concise statement of Bell’s Theorem from the article in Wikipedia — modified to make it easier to understand: No physical theory about the nature of quantum particles which ignores instantaneous action-at-a-distance can ever reproduce all the predictions about quantum behavior discovered in experiments.
To understand the experiments that led to the unsettling knowledge that quantum mechanics — as useful and predictive as it is — does indeed violate Bell’s proven Inequality, it is helpful not only to have a solid background in mathematics but also to understand ideas involving the polarization of light and — when applied to quantum objects like electrons and other sub-atomic particles — the idea of spin. Taken together, these concepts are somewhat analogous to the properties of color and roundness in the imaginary experiment described above.
This essay is probably not the best place to explain wave polarization and particle spin, because the explanation takes up space, and I don’t understand the concepts all that well, anyway. (No one does.)
But, basically, it's like this: if a beam of electrons, for example, is split into two and then recombined on a display screen, an interference pattern presents itself. If one of the beams is first passed through a polarizer, and experimenters then rotate the polarizer a full turn (that is, 360°), the interference pattern on the screen will reverse itself. If the polarizer-filter is rotated another full turn, the interference pattern will reverse again, back to what it was at the start of the experiment.
So, it takes two full turns of the polarizer-filter to get back the original interference pattern on the display screen — which means the electrons themselves must have an intrinsic one-half spin. All so-called matter particles like electrons, protons, and neutrons (called fermions) have one-half spin.
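The two-turns-to-get-home behavior falls straight out of the math of spinors. A small sketch using the standard spin-1/2 rotation operator about the z-axis: one full turn multiplies the state by minus one (which flips the interference pattern), and a second full turn restores it.

```python
import numpy as np

def rotate_spinor(theta):
    """Spin-1/2 rotation about z: U = exp(-i * theta * sigma_z / 2).
    For sigma_z the exponential reduces to a diagonal phase matrix."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

spin_up = np.array([1, 0], dtype=complex)

for turns in (1, 2):
    rotated = rotate_spinor(2 * np.pi * turns) @ spin_up
    print(f"{turns} full turn(s): amplitude = {complex(rotated[0]):.1f}")
# 1 full turn(s): amplitude = -1.0+0.0j   (sign flip: the pattern inverts)
# 2 full turn(s): amplitude = 1.0-0.0j    (back to the original pattern)
```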
Yes, it’s weird. Anyway, people can read-up on the latest ideas by clicking this link. It’s fun. For people familiar with QM (quantum mechanics), a technical note is included in the comments section below.
Otherwise, my analogy is useful enough, probably. In actual experiments, physicists measure more than two properties, I'm told. Most common are angular momentum vectors, which are called spin orientations. Think of these properties as color, shape, and hardness to make them seem more familiar — as long as no one forgets that each quality is binary: color is white or black; shape is round or square; hardness is soft or hard.
Spin orientations are binary too — the vectors point in one of two possible directions. It should be remembered that each entangled particle in a pair of fermions always has at least one property that measures opposite to that of its entangled partner.
The earlier analogy might be improved by imagining pairs of entangled tennis balls where one ball is black, the other white; one is round, the other square; add a third quality where one ball is hard, the other soft. Most important, the shape and color and hardness of the balls are imparted by the detectors themselves during measurement, not before.
Before measurement, concepts like color or shape (or spin or polarity) can have no meaning; the balls carry every possible color and shape (and hardness) but don't take on and display any of these qualities until a measurement is made. Experimental verification of these realities keeps some quantum physicists awake at night wondering, they say.
Anyway, my earlier, simpler analogy gets the main ideas across, I hope, and a couple of the nuances of entanglement can be found within it. I've added an easy-to-understand description of Bell's Inequality and what it means to the end of the essay.
In the meantime, scientists at the Austrian Academy of Sciences in Vienna recently demonstrated that entanglement can be used as a tool to photograph delicate objects that would otherwise be disturbed or damaged by high energy photons (light). They entangled photons of different energies (different colors).
They took photographs of objects using low energy photons but sent their higher energy entangled twins to the camera where their higher energies enabled them to be recorded. New technologies involving the strange behavior of quantum particles are in development and promise to transform the world in coming decades.
Perhaps entanglement will provide a path to faster-than-light communication, which is necessary to signal distant space-craft in real time. Most scientists say, no, it can’t be done, but ways to engineer around the difficulties are likely to be developed; technology may soon become available to create an illusion of instantaneous communication that is actually useful. Click on the link in this paragraph to learn more.
Non-scientists don’t have to know everything about the individual trees to know they are walking in a quantum forest. One reason for writing this essay is to encourage people to think and wonder about the forest and what it means to live in and experience it.
The truth is, the trees (particles at atomic scales) in the quantum forest seem to violate some of the rules of the forest (classical physics). They have a spooky quality, as Einstein famously put it.
Trees that aren’t there when no one is looking suddenly appear when someone is looking. Trees growing in one place seem to be growing in other places no one expected. A tree blows one way in the wind, and someone notices a tree at the other end of the forest — where there is no wind — blowing in the opposite direction. As of right now, no one has offered an explanation that doesn’t seem to lead to paradoxes and contradictions when examined by specialists.
John Stewart Bell proved that the trees in the quantum forest violate rules that nature and logic seemed to guarantee. It makes me wonder whether anyone will ever know anything at all that they can fully trust about the fundamental, underlying essence of reality.
Some scientists, like Henry Stapp (now retired), have proposed that brains enable processes like choice and experiences like consciousness through the mechanism of quantum interactions. Stuart Hameroff and Roger Penrose have proposed a quantum mechanism for consciousness they call Orch OR.
Others, like Wolfgang Pauli and C. G. Jung, have gone further — asking, when they were alive, if the non-causal coordination of some process resembling what is today called entanglement might provide an explanation for the seeming synchronicity of some psychic processes — an arena of inquiry a few governments are rumored to have already incorporated (to great effect) into their intelligence gathering tool kits.
In a future essay I hope to speculate about how quantum processes like entanglement might or might not influence human thought, intuition, and consciousness.
Billy Lee
P.S. A simplified version of Bell's Inequality might say that for things described by traits A, B, and C, it is always true that the number of things that are A but not B, plus the number that are B but not C, is greater than or equal to the number that are A but not C.
When applied to a room full of people, the inequality might read as follows: the number who are tall but not male, plus the number who are male but not blonde, is greater than or equal to the number who are tall but not blonde.
Said more simply: tall women plus dark-haired men will always number at least as many as tall people with dark hair.
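The claim that the inequality can never fail for ordinary objects is easy to test by brute force. A sketch that hands out three yes/no traits at random and checks the counts (tall, male, and blonde stand in for A, B, and C):

```python
import random

def inequality_holds(people):
    """Check N(A, not B) + N(B, not C) >= N(A, not C) for one room."""
    a_not_b = sum(1 for tall, male, blonde in people if tall and not male)
    b_not_c = sum(1 for tall, male, blonde in people if male and not blonde)
    a_not_c = sum(1 for tall, male, blonde in people if tall and not blonde)
    return a_not_b + b_not_c >= a_not_c

random.seed(0)
for _ in range(10_000):
    room = [tuple(random.random() < 0.5 for _ in range(3)) for _ in range(50)]
    assert inequality_holds(room)

print("Bell's inequality held in every classical room tested.")
# It can't fail: every tall, non-blonde person is either not male
# (counted in the first term) or male (counted in the second term).
```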
People have tried every collection of traits and quantities imaginable. The inequality is always true, never false; except for quantum objects.
One way to think about it: all the "not" quantities are, in some sense, uncertain in quantum experiments, which wrecks the inequality. That is to say, as soon as "A" is measured (for example), "not B" becomes uncertain. When "not B" is measured, "A" becomes uncertain.
The introduction of uncertainties into quantities that were — before measurement — seemingly fixed and certain doesn't occur in non-quantum collections, where individual objects are big enough that the uncertainties aren't noticeable. The inability to measure both the position and velocity of small things with high precision is called the uncertainty principle, and it is fundamental to physics. No advancement in the technology of measurement will ever overcome it.
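For the record, the principle has a precise quantitative form, Heisenberg's relation:

Δx · Δp ≥ ħ/2

Here Δx is the spread in position, Δp is the spread in momentum, and ħ is Planck's reduced constant. The product of the two spreads can never fall below that floor, no matter how good the instruments get.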
Uncertainty is believed to be an underlying reality of nature. It runs counter to the desire humans have for complete and certain knowledge; it is a thirst that can never be quenched.
But what’s really strange: when working with entangled particles, certainty about one particle implies certainty about its entangled twin; predicted experimental results are precise and never fail.
Stranger still, once entangled quantum particles are measured, the results, though certain, change from those expected by classical theory to those predicted by quantum mechanics. They violate Bell’s Inequality and the common sense of humans about how things should work.
Worse: Bell's Theorem seems to imply that no one will ever be able to construct a physical model of quantum mechanics to explain the results of quantum experiments. No "hidden variables" exist which, if anyone knew them, would explain everything.
Another way to say it is this: the underlying reality of quantum mechanics is unknowable. [A technical comment about the mystery of QM is included in the comments section.]