My first visit to the Royal Institution
Friday's lecture from the 2012 Nobel Laureate was a highly interesting overview of quantum effects and their brief histories. I have had previous learning experiences with quantum mechanics at its most introductory level, through thoroughly enjoyable reads such as How to Teach Quantum Physics to Your Dog by Chad Orzel and riveting science-to-the-masses documentaries from the likes of Dr Jim Al-Khalili, but watching a lecture on this baffling topic, in the hallowed theatre of the birthplace of modern science, was a truly unrivalled opportunity. It was heartening to find that my prerequisite knowledge allowed me to come away with a good understanding of what was presented, wholly justifying the 7-hour round trip from Suffolk in my finest (and only) smart suit. Even more thrilling, though, is how much I now have to look deeper into; the purpose of this article is to convey some of the understanding I think I have gleaned from the realm of the interweb, linking back to what Haroche discussed in his discourse.[1]
Einstein's Slit argument
Haroche mentioned that there was perpetual disagreement between Einstein and Bohr over the fundamentals of quantum effects throughout their careers - the former was, ironically, to a large extent responsible for the birth of QM through his explanation of the photoelectric effect, yet he was one of its biggest critics in the 20th century. One such criticism arose in 1927, when Einstein laid down a thought experiment to challenge Bohr; he sought an alternative to the superposition explanation for why single photons in a Young's slits experiment continue to form an interference pattern over time.
To do this he supposed that an alternative version could be established; instead of two fixed slits, the upper slit would be suspended on springs such that the slightest input of force would cause it to move. Einstein postulated therefore that one could tell whether the photon passes through the upper slit, for a collision would cause the slit to move by conservation of momentum as the particle is deviated vertically. Hence he would accurately be able to measure the position of the photon (by tracing the path back from the screen to the collision point) and the momentum too (by the magnitude of slit displacement), thus violating the principle of indeterminacy.
However, Bohr had several arguments in return.
Firstly, Einstein's model required an extremely precise knowledge of the slit's original position - far more precise than any measurement of how far it is displaced by the recoil from a single photon.
Moreover, Heisenberg's Uncertainty Principle means that having such a precise knowledge of the slit's momentum would reduce the accuracy with which its position is known. Even a displacement of half a wavelength would shift the bright fringes of the interference pattern towards darkness by inducing partially destructive interference.
An ideal experiment, Bohr argued, would average over every possible position of the slit, and the positions of the constructive and destructive fringes on the screen would be different for each one. This averaging fills the screen with a uniform grey, destroying the interference pattern. A modern explanation for this loss of interference is decoherence - the two paths of the photon become entangled with two macroscopic observational states:
|Ψ> = 1/√2 ( |goes through top slit>⊗|top slit moves> + |goes through bottom slit>⊗|top slit doesn't move> )
In reality this means that such fundamental environmental entanglement causes the wavefunction to collapse (or the universe to diverge!) extremely quickly, in a matter of femtoseconds or even less. Hence, in the usual style of QM, the measurement changes the outcome and ruins the quantum effect.
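The loss of the fringes can be sketched numerically. Below is a toy model (the geometry, units and grid are my own illustrative choices, not from the lecture) in which the overlap between the two environment states - "top slit moves" and "top slit doesn't move" - controls the interference term: overlap 1 means no which-path record, overlap 0 means a perfect record and full decoherence.

```python
import numpy as np

# Toy single-photon double slit with a which-path "environment".
# The screen amplitude is a superposition of the two slit paths; if the
# moving slit records which path was taken, the two environment states are
# orthogonal and the cross (interference) term vanishes.

wavelength = 1.0          # arbitrary units
slit_separation = 5.0
screen_distance = 100.0
x = np.linspace(-40, 40, 2001)   # positions on the screen

# Path-length phase accumulated from each slit to each screen position
phase_top = 2 * np.pi / wavelength * np.hypot(screen_distance, x - slit_separation / 2)
phase_bot = 2 * np.pi / wavelength * np.hypot(screen_distance, x + slit_separation / 2)
a_top = np.exp(1j * phase_top)
a_bot = np.exp(1j * phase_bot)

def intensity(env_overlap):
    """env_overlap = <top slit moves | top slit doesn't move>.
    1.0 -> no which-path record (full fringes); 0.0 -> perfect record."""
    return (np.abs(a_top) ** 2 + np.abs(a_bot) ** 2
            + 2 * np.real(env_overlap * a_top * np.conj(a_bot))) / 2

def visibility(pattern):
    """Standard fringe visibility: (Imax - Imin) / (Imax + Imin)."""
    return (pattern.max() - pattern.min()) / (pattern.max() + pattern.min())

print(visibility(intensity(1.0)))  # close to 1: clear fringes
print(visibility(intensity(0.0)))  # 0: uniform grey, pattern destroyed
```

Intermediate overlaps give partially washed-out fringes, which is why decoherence looks gradual in principle even though it completes in femtoseconds for macroscopic apparatus.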
With reference to whether the wavefunction collapses or the universe branches at the point of measurement, Haroche was very careful, when asked for his opinion, to state that the distinction is immaterial to the effects being observed here - I must say that I agree, since I see no compelling evidence to favour either Copenhagen or Many Worlds, so I feel I can for now banish the question to the realm of irrelevance.
[2]
Bell's inequalities and the experimental disproof of local hidden variable theory
Einstein and other QM sceptics (including Podolsky and Rosen, who collaborated with him on the EPR paradox) were extremely concerned by the implications of quantum entanglement between particles. In the paradox Einstein referred to "spooky action at a distance" between two particles produced by the decay of a single particle, such that a measurement of one's spin would immediately allow the observer to know the spin of the other (it would be opposite, since the two must cancel to match the 0 spin of the original particle). If Bohr's Copenhagen Interpretation were to be believed, neither particle's spin is definite until measured, so the fact that the other's is determined immediately would imply that information travels between the two faster than the speed of light, violating Einstein's relativity. The only explanation consistent with relativity would be that the spin is already defined, but remains a "local hidden variable" until one is measured. The EPR paradox was clearly a serious challenge to QM, intended to expose the theory as incomplete.
John Bell devised a method of testing this paradox. Instead of spin, the focus of his experiment is the polarisation of photons, entangled pairs of which are generated by an atomic decay process. Each photon of a pair is tested against one of three polarising filters, and the two binary (pass/block) readings are compared.
Local hidden variable theory predicts that the probability of the two filter readings being the same is always at least 1/3 (since measuring each photon cannot affect the value of the other). Experimentally, however, the probability of getting the same reading is much lower, at around 1/4. Therefore, since the experiment violates the Bell inequality
P(same) ≥ 1/3
local hidden variables cannot explain away the "spooky action at a distance". In reality this is the simplest version of the richer field of Bell inequalities, which I will endeavour to explore in greater depth in the future, but I must thank DrPhysicsA for the excellent explanation in his YouTube tutorial.
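The bound can be checked directly by brute force. The sketch below follows the simplified three-filter argument above (the 120-degree filter spacing is an assumption of that popular version, not stated in the lecture): a local hidden variable means each photon pair carries predetermined pass/block answers for all three settings, so we can enumerate all eight possible assignments and find the worst case.

```python
from itertools import product
from math import cos, radians

# Three polariser settings, assumed 120 degrees apart.
settings = [0, 120, 240]

# All ordered pairs of DIFFERENT settings (the interesting case;
# identical settings trivially agree for identical photons).
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

def p_same(assignment):
    """Probability the two readings agree, given predetermined answers."""
    return sum(assignment[i] == assignment[j] for i, j in pairs) / len(pairs)

# Enumerate all 8 hidden-variable assignments of pass(1)/block(0).
lhv_min = min(p_same(a) for a in product([0, 1], repeat=3))
print(lhv_min)   # 1/3 - no local hidden variable strategy can do worse

# Quantum mechanics: P(same) = cos^2(angle between filters) = 1/4 at 120 deg
qm = cos(radians(120)) ** 2
print(qm)        # ~0.25, below the 1/3 bound
```

The minimum over every deterministic strategy is exactly 1/3, while the quantum prediction of 1/4 sits strictly below it - which is the sense in which experiment rules local hidden variables out.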
[3]
The concept of the universal wave function
One of the questions put forward to Haroche was this: "Is all matter in the universe (or indeed multiverse) entangled to form a single universal wavefunction?". The answer the Nobel Laureate provided was very simple and quite sensible - yes, but it is of no mathematical use to consider it in its totality because the resolution of knowledge of microscopic physical systems would be lost, so it could not be applied to any laboratory experiments to improve our understanding of QM (like the experiment detailed in the above section).
It is curious to think that the space-time fabric of the entire universe plays out as the consequence of one giant probabilistic function, and it does a certain justice to the idea of "God playing dice" - throughout the history and development of QM, Einstein's objections continue to be thrown up, and this recalls his famous quip that "God doesn't play dice": current theories would suggest otherwise.
Rydberg atoms
A Rydberg state of an atom is one where one or more electrons are excited enough to have a very large principal quantum number, many energy levels above the core electrons (which remain in their normal configurations). Since the excited electrons are in higher energy states, they experience a weaker attraction to the positive nucleus and so occupy vastly wider orbits - this means, paraphrasing phys.org, that exciting the outer electron of a rubidium atom from n=5 to n=18 would extend the atomic radius from 1nm to 700nm.
Rydberg atoms are a viable method for storing quantum information as qubits because they can be "sustained for a long time in a quantum superposition system", and interact strongly such that they would form stable and effective logic-gate-type systems.
[4]
Cavity quantum electrodynamics to count photons
Haroche's Nobel Prize winning paper was the source material for this section, which seems only fitting. The quantum cavity is based on the Bohr-Einstein photon box, yet another hypothetical piece of apparatus to constitute the battleground for thought experiments over quantum mechanics. As I understand it, the cavity consists of two mirrors which continually reflect photons inside the cavity until absorbed - the mirrors were constantly plagued by slight imperfections reducing the lifetime of the experiment, but a collaboration at the French Atomic Energy Commission led to precisely machined copper mirrors, covered in superconducting niobium, which form a quasi-spherical surface; as a result, a photon lifetime of 130ms was achieved in 2006.
To actually count the photons inside the cavity, lasers were used to produce rubidium Rydberg atoms with an outermost orbit diameter approximately 1000 times larger than that of the ground state - a condition of a stable orbit is that the de Broglie wavelength divides an integer number of times into the circumference of the orbit, leading to principal quantum numbers of 50 or 51 for this experiment. These circular Rydberg states have a long lifetime of 30ms, on the same order of magnitude as the photon lifetime - this means that photon production from the decay of the excited electron orbits can initially be discounted from the uncontrollable variables of the experiment.
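The standing-wave condition above can be made concrete with a Bohr-model sketch (a hydrogen-like approximation that ignores rubidium's quantum defect, so treat the numbers as order-of-magnitude only): the orbit radius grows as n², and by construction exactly n de Broglie wavelengths fit around the circumference.

```python
import math

A0 = 5.29e-11      # Bohr radius, metres
HBAR = 1.055e-34   # reduced Planck constant, J s
M_E = 9.11e-31     # electron mass, kg

def orbit_radius(n):
    """Hydrogen-like circular orbit radius: r = a0 * n^2."""
    return A0 * n ** 2

def de_broglie_wavelengths(n):
    """Number of de Broglie wavelengths fitting around the orbit."""
    r = orbit_radius(n)
    v = n * HBAR / (M_E * r)                # Bohr quantisation: m v r = n hbar
    lam = 2 * math.pi * HBAR / (M_E * v)    # lambda = h / (m v)
    return 2 * math.pi * r / lam

for n in (50, 51):
    print(n, orbit_radius(n), de_broglie_wavelengths(n))
    # n = 51 gives a radius of roughly 1.4e-7 m, i.e. an orbit diameter
    # of a few hundred nanometres - hence the "~1000x ground state" figure
```

Running this confirms that the wavelength count comes out as exactly n, which is the sense in which these giant circular orbits are stable.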
In the two states, e and g, the electron wave has uniform amplitude around the orbit, leading to an electron charge density centred on the atomic nucleus. However, a pulse of resonant microwaves brings the electron into a superposition of the e and g states, causing constructive interference on one side of the orbit and destructive interference on the other. The result is a net electric dipole, extremely sensitive to microwave radiation (i.e. the photons being counted).
Non-resonant microwave photons are not absorbed by the Rydberg atoms, so the counting process leaves the cavity field intact. However, tuning the cavity photons very close to (but not exactly at) the atoms' transition frequency means each atom exits the cavity with its dipole phase shifted, by up to 180 degrees. A shift of that size corresponds to a single photon in the cavity, which allows the photons to be counted discretely without destroying them.
The acceleration of light between media of different densities
My final talking point stems from yet another question raised by the audience: does light literally accelerate when it passes between two media of differing densities? We learn at school that light has different speeds in different materials, standardised in the form of refractive indices, yet it goes against classical mechanics to assume that there is an instantaneous change in the velocity of photons across this boundary - a change in velocity over Δt ≈ 0 would require a → ∞ (a = (v-u)/t), and since F = ma, the resultant force on any mass-possessing body would also approach infinity, tearing the object apart. However, in my opinion this is exactly what happens when light passes between two media. By Einstein's special relativity, a massive object with a velocity approaching the speed of light would have a (relativistic) mass approaching infinity - since light cannot be weighed the way one might weigh an apple or a car, the photon must be massless (a point that is, in any case, universally agreed across the physics community). Being massless, a photon is not constrained by the limit placed on instantaneous acceleration for massive objects, so it seems completely reasonable that it can change speed instantly at a boundary between media.
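The "different speeds in different materials" claim is easy to quantify from the refractive index: v = c/n. A minimal sketch, using typical textbook index values (an assumption on my part - real indices vary with wavelength):

```python
C = 299_792_458  # speed of light in vacuum, m/s

# Speed of light in a medium: v = c / n, where n is the refractive index.
refractive_index = {"vacuum": 1.0, "water": 1.33, "crown glass": 1.52}

for medium, n in refractive_index.items():
    print(f"{medium}: {C / n:.3e} m/s")
```

So a photon crossing from air into crown glass drops from roughly 3.0e8 m/s to about 2.0e8 m/s - and the question above is whether that change really happens in zero time at the boundary.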
A related topic is the strangeness of the universal speed limit imposed by Einstein. The confusion arises when two spaceships are travelling very quickly towards each other. Ship A has a beam of light coming out of the front, naturally travelling at approximately 3.0e8 m/s. Classical mechanics would tell us that the speed of the light beam, according to an observer sitting in the pilot seat of ship B, is found by simply adding the ships' closing speed to c. This value has a magnitude greater than c, which is impossible according to Einstein's axiom.
However, special relativity has an explanation for this. Time dilation and its consequence, length contraction at speeds close to the speed of light, mean that velocities do not simply add; combined relativistically, the observed speed of the light beam remains exactly the familiar constant c. What the observer does see is a Doppler blueshift - the light is shifted towards the higher-frequency, lower-wavelength end of the spectrum. The relative motion shows up as a change in the light's frequency, not as an excess over the speed of light.
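The "velocities do not simply add" point is captured by the relativistic velocity addition formula, w = (u + v) / (1 + uv/c²). A quick sketch showing that combining anything with c still gives c, and that two sub-light speeds never combine past c:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(u, v):
    """Relativistic velocity addition: w = (u + v) / (1 + u*v / c^2)."""
    return (u + v) / (1 + u * v / C ** 2)

# A light beam (speed c) observed from a ship approaching at 0.9c
# still travels at exactly c (up to floating point), never 1.9c.
print(add_velocities(C, 0.9 * C))

# Two ships approaching each other at 0.9c each close at ~0.994c,
# still below c.
print(add_velocities(0.9 * C, 0.9 * C))
```

Note that for everyday speeds the denominator is essentially 1, which is why the classical "just add them" rule works so well until the speeds become relativistic.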
Perhaps this is why the surroundings of the Millennium Falcon shift towards the violet end of the visible light spectrum when Han and Chewie jump to lightspeed?
Sources
September 2015 Friday Night Royal Institution Discourse - Serge Haroche - "Light and the Quantum" (the recording can be found on the RI official YouTube channel)