Why can't a physicist make quantum physics understandable to most people?

I think most people do not understand the consequences of reaching the reductionist limit. The reductionist limit is where we have reduced things to definitions, which are simply ideas. These ideas then become building blocks with which to explain observed phenomena. The term “atom” was introduced by the ancient Greeks to represent this limit. This is where you can no longer ask what an atom is made of. The atom is what everything is made of. The atom remained simply an idea until the existence of atoms was deduced by Einstein in his analysis of Brownian motion. Thus the atom was elevated to a concept with explanatory power.

Quantum theory provides a description of phenomena at the reductionist limit. In doing so, it found some of our long-held reductionist ideas wanting. Mind you, an idea is not reality. Quantum theory requires that we change some of our ideas regarding the fundamental building blocks of reality.

Let's follow this via the train of logic that led to the development of quantum theory.

Max Planck provided a model of the blackbody spectrum (i.e. the colour of glowing hot things) by quantising the harmonic oscillator. This required the introduction of Planck's constant.
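
As a reminder of what that quantisation says (the standard textbook form, added here for illustration rather than quoted from the answer): each oscillator of frequency \nu can only exchange energy in discrete lumps of h\nu, which leads directly to Planck's blackbody spectrum.

  E_n = n h \nu, \qquad n = 0, 1, 2, \dots

  B_\nu(T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h \nu / k_B T} - 1}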

Louis de Broglie proposed that matter had an associated wavelength, which he called a matter wave. This was in response to the observation of interference effects from so-called particles. He related the wavelength to the particle momentum using Planck's constant.
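
The relation itself is compact (again a standard formula added for illustration, not quoted from the answer): the wavelength is Planck's constant divided by the momentum.

  \lambda = \frac{h}{p}, \qquad \text{equivalently} \quad p = \hbar k, \quad \hbar = \frac{h}{2\pi}, \quad k = \frac{2\pi}{\lambda}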

If you are familiar with the Fourier decomposition of some arbitrary function into an integral of sinusoids, then you should be aware of conjugate variables. These are pairs of variables that are mutually incompatible. An example is time and frequency. These conjugate variables have some interesting properties. If something is perfectly localised in time, then it is completely delocalised in frequency. This forms the basis of signal analysis, giving us concepts such as channel capacity and bandwidth.
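
For concreteness, here is the standard statement of that trade-off for a signal f(t) with spectrum F(\omega) (a textbook form I have added for illustration): a narrow pulse in time necessarily has a broad spectrum, and a delta-function pulse has a completely flat one.

  f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i \omega t}\, d\omega, \qquad \sigma_t\, \sigma_\omega \ge \frac{1}{2}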

Planck's quantisation of the harmonic oscillator could be directly incorporated into Fourier theory, as could de Broglie's matter-wave hypothesis. This yields the essential elements of quantum theory: the Heisenberg uncertainty principle, Schrödinger’s equation, the need to use operators instead of numbers, and the concept of a wavefunction to describe the system.
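
A condensed sketch of that route (my own illustration of the standard textbook steps, not a quotation from the answer): the purely Fourier relation between position and wavenumber spreads, combined with de Broglie's p = \hbar k, becomes the uncertainty principle; reading momentum and energy off a plane wave turns them into operators, which gives Schrödinger's equation.

  \sigma_x\, \sigma_k \ge \frac{1}{2} \;\xrightarrow{\; p = \hbar k \;}\; \sigma_x\, \sigma_p \ge \frac{\hbar}{2}

  \psi = e^{i(kx - \omega t)} \;\Rightarrow\; \hat{p} = -i\hbar \frac{\partial}{\partial x}, \quad \hat{E} = i\hbar \frac{\partial}{\partial t}

  E = \frac{p^2}{2m} + V \;\Rightarrow\; i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V\psi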

There's a lot to unpack there.

Classically we have two distinct concepts: particles and waves.

  • Particles are local objects and all measurable properties of particles are local to the particle.

  • Waves are non-local phenomena. Certain measurable properties only become apparent when observed over an extended region of space.

Some people might think that particles are fundamental and that waves are ultimately composed of particles. However, this thinking will get you into trouble. You see, de Broglie’s matter-wave hypothesis suggests that particles have wave properties. The introduction of Planck's constant actually provides a measure of how wave-like a particle is.

That's step one. If you are comfortable with all of the above-mentioned concepts, then we can move on to step two, which is to actually describe an experiment.

Let's take two examples.

The double slit experiment.

Young's double slit experiment was originally intended to show that light was a wave phenomenon through the observation of interference effects. Particles do not interfere, but waves do.

Therefore, the observation of interference is a smoking-gun indicator of wave character, and indeed of wavelength.
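
To make "indicator of wavelength" concrete: in the usual small-angle geometry (slit separation d, screen at a distance L \gg d; a standard result added for illustration, not quoted from the answer), the bright fringes are evenly spaced and the spacing measures the wavelength directly.

  y_m = \frac{m \lambda L}{d}, \qquad \Delta y = \frac{\lambda L}{d}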

On the other hand, at the quantum level only single localised detections are observed. This seems to indicate that quantum objects are particles.

But wait! If we accumulate enough single-particle detections, we find what looks like the expected interference pattern. It's as if the particles are guided by an invisible hand to land on certain areas and not others.
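
A toy numerical sketch of that accumulation (my own addition; the ideal cos² two-slit pattern and the numbers below are illustrative assumptions, not taken from the answer):

# Sample single-particle detection positions from an ideal two-slit probability
# density and watch the interference fringes build up as detections accumulate.
import numpy as np

rng = np.random.default_rng(seed=1)

wavelength = 500e-9          # 500 nm light (assumed)
d = 250e-6                   # slit separation (assumed)
L = 1.0                      # slit-to-screen distance in metres (assumed)

y = np.linspace(-5e-3, 5e-3, 2001)                    # positions on the screen
p = np.cos(np.pi * d * y / (wavelength * L)) ** 2     # ideal two-slit pattern
p /= p.sum()                                          # normalise to probabilities

for n in (10, 100, 10_000):                           # number of single detections
    hits = rng.choice(y, size=n, p=p)                 # each "click" lands at one point
    counts, _ = np.histogram(hits, bins=50, range=(y[0], y[-1]))
    print(f"{n} detections:", counts)

# With only 10 clicks the positions look random; by 10,000 the histogram clearly
# shows the cos^2 fringes, even though each photon or electron arrived one at a time.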

This we observe with light, which we classically considered a wave. Yet now we observe particle behaviour.

What about trying actual particles, like electrons?

We observe exactly the same behaviour: individual detections that eventually form an interference pattern.

Remember the de Broglie matter-wave hypothesis? That associated a wavelength with matter. But what is matter? Here we have both light and electrons exhibiting qualitatively similar behaviours.

Quantum contextuality

Now we'll consider a simple 50/50 beam splitter.

If we place detectors at the two possible outputs and input single photons, there will be an equal probability for either detector to click. That suggests the photon exited one of the two possible output ports, just like you'd expect from a particle.

Now suppose that, instead of directly detecting the two output ports, we recombine the two paths using another beam splitter and place our detectors at the outputs of this second beam splitter. What would you expect then?

What we find is that only one specific detector ever registers a photon, no matter how many we send. How can this be, if we have established that the photon exits a random output path with equal probability? Surely adding a second beam splitter would change nothing. But it does. It suggests that the photon has taken both paths, resulting in interference at the second beam splitter: destructive interference in one output and constructive interference in the other. This interference effect suggests that photons are waves.
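
The arithmetic behind this is short enough to show (a minimal sketch I have added, assuming an ideal lossless 50/50 beam splitter with the conventional factor of i on reflection and equal path lengths; it is not quoted from the answer):

# One 50/50 beam splitter acting on the two path amplitudes of a single photon.
import numpy as np

BS = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                  [1j, 1]])    # lossless 50/50 beam splitter (unitary)

photon_in = np.array([1, 0])                   # single photon entering one input port

after_one_bs = BS @ photon_in
print(np.abs(after_one_bs) ** 2)               # [0.5 0.5]: each detector clicks half the time

after_two_bs = BS @ BS @ photon_in             # recombine the paths at a second beam splitter
print(np.abs(after_two_bs) ** 2)               # [0. 1.]: only one detector ever clicks

The two path amplitudes add at the second beam splitter: they cancel at one output and reinforce at the other, which is exactly the interference described above.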

This is an example of detecting particle behaviour if your experiment is set up to detect particles, and wave behaviour if it is set up to detect waves. This property is referred to as contextuality, and it is a uniquely quantum property.

There are numerous other counter-intuitive examples from quantum theory, including entanglement and its role in various experiments.

Did you know that entanglement destroys interference effects?

This is a key property needed to understand the delayed choice quantum eraser, which has caused no end of confusion.

The bottom line is that quantum objects are neither particles nor waves. They have behaviours that can be consistent with either. That makes them extremely difficult to explain conceptually, because even physicists cannot adequately explain them in more familiar terms.

Quantum theory provides both a description of observed phenomena and the ability to predict phenomena. That's what we really require of a theory, and that's what makes theories so useful. However, quantum theory does not explain phenomena in terms other than those defined within the theory. That's what makes quantum phenomena difficult to understand. Most people have no point of reference.

Explain de Broglie's matter-wave hypothesis. Why do particles have associated wave properties? Why is Planck's constant necessary?

The only answer is that this provides a good description of what we observe.


This article was written by Mark John Fernee (PhD in physics from the University of Queensland)
