Welcome to my world

Here is my domain for splurging my ruminations on the STEM fields. Most of what I discuss and research on this site goes well beyond what I am conventionally taught at school, so there may well be errors in my information or maths - please do not viciously troll the page with corrections, although constructive and useful criticism is of course welcome :)

Sunday, 13 December 2015

GCHQ's Christmas puzzle and cryptography

WARNING
This post will contain some spoilers for the puzzle, although I will remain fairly vague - if you are trying to solve all 5 levels without any help from other solvers (as I am trying to do), close this page now. To be honest though, this website is so obscure that nobody will read this until long after the contest has passed! I was very disappointed to find that there is a public Reddit community solving this puzzle by crowdsourcing - not because it is strictly cheating (I think it is an efficient and smart way to tackle the problem), but because I cannot easily research the cryptic puzzles without stumbling upon other people's solutions, ruining the fun of the challenge. However, trying my best to refer only generally to my current solutions, I must continue writing, since this intelligent series of tasks has got me so excited and gripped!

My progress
So far, I have spent most of my Friday at school solving the first puzzle (the shading grid); I then solved the "not odd one out", "weird semaphore message" and "D, D, P, V, C, C, D," problems from stage 2 without help, and with the combined knowledge of my family completed the stage today. There is no prize for noticing the strong emphasis on cryptography within these puzzles, which use obscure systems such as hand signals, Morse code, ASCII, the phonetic alphabet and even snooker ball colours - it is this that has prompted me to write this article this evening.

The first problem
I was particularly excited by this puzzle, which was shown to me by a friend at school: using the enigmatic numbers on each row and column, one has to fill in the black squares and hence form a QR code, which directs the player to the next challenge.

The QR code is a very efficient method for encoding information within an image, as a binary matrix of black and white. Although it appears simple enough - effectively a two-dimensional barcode - a huge amount of development and engineering went into its perfection, and this background was very useful in starting off the puzzle. For example, there is always a position-fixing square in three of the four corners of the grid (excluding the bottom right), which pins down a single rotational orientation of the code, plus a timing column and a timing row which help the reading device get its bearings on the matrix. The puzzle presented to us by GCHQ takes the form of a version 2 QR code, containing 25x25 bits: in this way we are reminded that this is only a beginner task, since the version 40 QR code is 177x177 bits and can hold over a thousand ASCII characters!
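
As an aside, the grid size follows directly from the version number - each version adds four modules per side on top of version 1's 21x21. A quick sketch (the function name is my own):

```python
# Modules per side of a QR code: version 1 is 21x21, and each subsequent
# version (up to 40) adds 4 modules per side.
def qr_side_length(version: int) -> int:
    return 17 + 4 * version

print(qr_side_length(2))   # the GCHQ puzzle's version 2 grid -> 25
print(qr_side_length(40))  # the largest standard version -> 177
```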

Information is not simply stored in rows from left to right, top to bottom, with black as 1 and white as 0, though. To avoid confusing the scanner - which in modern-day usage could be the low-resolution, slightly dirty or moist lens of a cheap smartphone - "masking" has to be employed, so that the code does not produce large areas of black or white (which would start to hide the discreteness of the bits) or structures which mimic the tracking squares. Such masking algorithms could include changing which colour represents 1 and which 0 depending on position within the matrix, or entering the data in a non-linear fashion. The chosen mask is represented by a number, which sits in a fixed place in the QR image.
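
To illustrate, here is a sketch of the simplest of the standard mask patterns, which XORs the data area with a checkerboard so that a solid block of one colour cannot survive masking (applied to a toy 4x4 grid rather than a real QR layout):

```python
# Mask pattern 0 of the QR spec inverts a module wherever (row + column)
# is even, breaking up large single-colour regions.
def apply_mask_0(matrix):
    return [[bit ^ ((r + c) % 2 == 0) for c, bit in enumerate(row)]
            for r, row in enumerate(matrix)]

solid_black = [[1] * 4 for _ in range(4)]
for row in apply_mask_0(solid_black):
    print(row)  # alternating 0s and 1s - a checkerboard, not a black block
```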

I was especially intrigued, when sifting through the highly informative but also extremely technical Wikipedia page which forms the source of the past few paragraphs [1], to see how scientists and mathematicians have endeavoured to extend the functionality and capacity of QR codes. For example, coloured QR codes with 4 or even 8 hues (which, at greater bit densities, we would start to regard as "pictures") would allow much more data to be stored in the same space, since each colour could represent a longer string of "offs" and "ons" such as 101 or 110. However, the difficulties in incorporating such functionality are quite clear: error correction would be much more complex, since the reduced contrast between adjacent colours would confuse scanners more easily, and version 40 matrices seem out of the question for now because, at high bit density, a dithering effect (as exploited by 16777216-colour RGB screens) would be observed.

My suggestion for an improvement to QR codes takes inspiration from recent developments in quantum computing, a field which relies on the existence of superposition to create "qubits". The fact that a single unit of information can take the form of "on", "off" or a mixture of the two strikes me as very interesting for the cryptography world - what if the squares of the QR grid could be fully black, fully white or diagonally half-and-half? This would allow the "separators" (white regions between the tracking squares and the formattable area) and "timing strips" (two standard alternating lines) to stand out more, as well as increasing the potential data-storage power without deviating from binary colours (which cannot be confused in any lighting).

The issue I can see with this concept, though, is that the half-and-half squares, when viewed from a distance or when very small, would blur into grey due to diffraction at the lens or the limit of the camera's resolution. However, zooming in too much and making the bits too large would limit the amount of information which could be captured in one snapshot! Therefore there must be an optimum bit size between these two extremes which maximises the storage of the code. Without a thorough investigation, it would be a sensible supposition that this optimum, taking into account the technicalities of error compensation, is reached with a version 40 177x177 grid!
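
The capacity gain from a third cell state is easy to quantify: each cell carries log2(3) ≈ 1.58 bits instead of 1. A rough sketch, ignoring the fixed patterns, error correction and the practicalities of packing three-state symbols into bytes:

```python
import math

cells = 177 * 177                          # a version 40 grid, all cells counted
binary_capacity = cells * math.log2(2)     # 1 bit per black/white cell
ternary_capacity = cells * math.log2(3)    # ~1.585 bits per three-state cell

print(round(binary_capacity))
print(round(ternary_capacity))
print(f"{ternary_capacity / binary_capacity - 1:.0%} more raw capacity")
```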

Thank you GCHQ
I feel it is only appropriate to give my sincerest thanks to GCHQ for their brilliant puzzle series this year - I am feeling extremely frustrated on the third level right now, but that sure is better than it being too easy! I really don't want to ruin the puzzle for other players, but I am also extremely proud of the progress my family and I have made through the challenge so far - therefore I will be posting the reasoning behind my answers in a separate article, which cannot be reached through Google and will not show up on my homepage (only through this link).

Sources
[1]   Wikipedia - "QR Code" - here 


Monday, 12 October 2015

Clocks, piezoelectric crystals and logic gate solutions - part 2

To understand the full premise of this project, it may be useful to read the first part of this blog post, here.

Experimenting with Logicly

Logicly, the demo for which can be found here, is a very useful logic gate simulator which lets you create complex logic systems without needing electrical-engineering-level knowledge of how to wire up hundreds of transistors and resistors correctly. My first task with this simulator was to emulate a stepping motor, using the gates to control which solenoids are energised at any given moment. This was a fairly simple affair, involving only one flip flop and nine gates. The "circuit" is pictured below.


The first thing one might notice is that there are in fact no solenoids in this model - within the constraints of the program, data can be output as a lightbulb or a binary-configured digit, so I chose the former of the two. Another consideration is that, since a lightbulb has only two states ("on" and "off"), there cannot be room for the three solenoid states of "positive", "neutral" and "negative"; however, assuming that we are looking at the negative end of the rotor, the negative solenoid state is merely used to achieve a higher torque, so it can be dismissed here - therefore in this model, "on" represents "positive", and "off" represents "neutral". The full working of the circuit is shown below, through its 4 states.

Clearly the solenoid configuration changes every second, controlled by the four AND gates. However, whether this translates into a tick every second depends on the number of teeth on the rotor and stator - in the previous post I found that a 30-28 motor that turns by rotating the field 45° will move 30 rotor teeth in 120 rotations, so the clock could instead take an input of 1Hz in order to turn 360° in 60 seconds. This would involve using only 15 flip flops in the frequency division of the 32768Hz quartz oscillator, as opposed to the 16 implicit here.
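
The flip-flop arithmetic here is just repeated halving - a quick sketch of the two drive options:

```python
# Each T flip-flop halves its input frequency, so n stages divide by 2**n.
crystal_hz = 32768  # = 2**15, the standard quartz watch-crystal frequency

print(crystal_hz / 2**16)  # 16 stages -> the 0.5Hz drive used above
print(crystal_hz / 2**15)  # 15 stages -> a 1Hz drive instead
```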

The next section is optional, as you may well understand exactly how I have reached this result with the gates. If so, click here to jump to the next part of the post.

FULL EXPLANATION OF THE CIRCUIT [OPTIONAL]

The "clock" here gives an output of 0.5Hz, meaning that a logic 1 is pulsed for a 1 second duration, starting every 2 seconds. This will be fed into a buffer, passing on the value of the clock at any given time, and a NOT gate which will pass on the inverse of this. In addition the "clock" will act as the clock for the T flip flop - since the PRE' and CLR' functions are negative-edge triggered, I have disabled the former by attaching it to a logic 1 input, and made the clear function operable by connecting the output of a push button to a NOT gate (i.e. the negative edge will occur when the button is pressed, acting as a reset function); the flip flop is in toggle mode, since the T input is connected to logic 1. The Q output of the flip flop, just like the "clock", is fed into a buffer and a NOT gate to form "T" and "NOT-T".

Next I produced a truth table for the circuit so far, plotting the discrete time intervals of the "clock" against the values of "Clock", "NOT-Clock", "T" and "NOT-T":

This doesn't seem to throw any light upon the situation, but placing 4 AND gates in certain combinations does:

From here we have a situation where one, and only one, AND gate produces a logic 1 output in each second. These can be hooked up to sequential opposite pairs of lightbulbs to produce the desired pattern shown in the video.
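
The behaviour of those four AND gates can be sketched in a few lines (this mirrors the truth-table logic, not the exact Logicly wiring):

```python
# Clock runs at 0.5Hz (high one second, low the next); T toggles every
# two seconds.  The four AND gates combine Clock/NOT-Clock with T/NOT-T,
# so exactly one gate is high in any given second.
def solenoid_states(seconds):
    states = []
    for t in range(seconds):
        clk = 1 - (t % 2)        # 0.5Hz clock: 1, 0, 1, 0, ...
        T = 1 - (t // 2) % 2     # T flip flop: 1, 1, 0, 0, 1, 1, ...
        gates = (clk & T, (1 - clk) & T, clk & (1 - T), (1 - clk) & (1 - T))
        states.append(gates)
    return states

for s in solenoid_states(4):
    print(s)  # each line is one-hot: a single solenoid energised at a time
```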

"Stepping" it up a notch 

Without alternative output options on the simulator, I have reached the end of what I can do with analog clocks. However, since a digit is the other output option, there is a wealth of potential within the realm of the digital clock. A huge amount of experimentation and tweaking, plus some attempts to make the whole circuit neater and more logically structured, has led to the creation of my basic 24-hour clock system, shown below.


The explanation is very complex for this, so the next part is again optional.

FULL EXPLANATION OF THE CIRCUIT [OPTIONAL]

This circuit is clearly more complex, and will take a fair amount of explaining: we will start at the far right-hand side of the circuit and work our way back. I have tried to make the diagram clearer by isolating the circuitry for each digit into blocks, but to be honest it still makes a fairly unavoidable tangle...

Digit 1
The input comes from the 0.5Hz clock, the NOT value of which forms the first binary input of the digit. This clock is also used to trigger the (2^-2)Hz flip flop, the inverse output of which forms the second binary input. The Q output of the flip flop feeds into the next flip flop, and the process repeats for (2^-3)Hz and (2^-4)Hz. The most annoying thing about these digits is the same thing that makes them so convenient - they are binary. Since the number of possible numerical outputs follows the function n = 2^x (where x = number of digit inputs), there must be 4 pins to allow all the numbers from 0-9 to be displayed (2^3 is only 8). Unfortunately this also generates a surplus of 6 characters (A-F), so using the flip flops as they are causes the digit to count from 0 to F before resetting to 0. We only want 0-9, so some jiggery-pokery is required. I have therefore created a test, comprised of an OR gate, an AND gate and a NOT gate, which detects when the character A would be reached and instead resets all the flip flops to 0. The OR gate returns a logic 1 if output 2 or 3 is logic 1, and the AND gate returns a logic 1 if the OR is 1 and output 4 is 1. In this case, the NOT gate returns a 0. The additional AND gate below it allows the reset to occur either in this eventuality, or if the master reset button is pressed. The result of either of these cases is that the negative-edge triggered PRE' functions on each flip flop are fed with a short pulse of logic 0, resetting every flip flop back to 0. The test on outputs 2, 3 and 4 works because every number higher than 9 (i.e. A-F) has output 4 on, and either output 2 or 3 on too.
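
A behavioural sketch of that reset test (treating outputs 1-4 as the bits of the count, with output 1 the least significant; this models the logic, not the gate-level timing):

```python
# Reset fires when output 4 AND (output 2 OR output 3) - true for every
# value from 10 (binary 1010) upwards, so the digit only ever shows 0-9.
def reset_needed(n):
    o2, o3, o4 = (n >> 1) & 1, (n >> 2) & 1, (n >> 3) & 1
    return o4 and (o2 or o3)

digit, seen = 0, []
for _ in range(12):
    seen.append(digit)
    # the carry into the next value triggers the reset test
    digit = 0 if reset_needed(digit + 1) else digit + 1
print(seen)  # counts 0-9, then wraps back to 0
```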

Digit 2
Digit 2 is fed the Q output from (2^-4)Hz to form the clock input for (2^-5)Hz. The same chain of events, which acts as an extended frequency-divider circuit, occurs for digit 2. However, there are 60 seconds in a minute, so we only want digit 2 to count from 0 to 5 - this involves a different version of the reset test explained earlier, comprised of 2 AND gates, an XOR gate and a NOT gate. The first AND gate returns 1 only if outputs 6 and 7 are both 1. The XOR gate returns 1 only if exactly one of outputs 5 and 6 is 1. The second AND gate returns 1 only if both the first AND gate and the XOR gate output 1. In this case, which tests for when the binary 0110 (decimal 6) is reached, the NOT gate returns a value of 0, negative-edge triggering the PRE' functions of the four flip flops associated with digit 2 and resetting them to 0 as before.
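
That test can be checked behaviourally too - with outputs 5-8 as the bits of the count (5 the least significant), the combination first fires at binary 0110:

```python
# AND(o6, o7) combined with XOR(o5, o6): within the range the counter can
# actually reach before resetting, this detects exactly the value 6.
def fires(n):
    o5, o6, o7 = n & 1, (n >> 1) & 1, (n >> 2) & 1
    return (o6 & o7) & (o5 ^ o6)

print([n for n in range(8) if fires(n)])  # -> [6]
```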

Digit 3
This digit has exactly the same wiring as digit 1, but has four flip flops because we do not have the benefit of the 0.5Hz clock input providing a 'free' output. The clock input to (2^-9)Hz is the Q output of (2^-8)Hz.
Digit 4
Since there are 60 minutes in an hour, this digit has the same wiring as digit 2, but takes the Q output from (2^-12)Hz as the clock input of (2^-13)Hz.

Digit 5
Things get even more funky here, because this digit will be required to tick from 0-9 twice (in 00-09 hours and 10-19 hours), then only 0-3 the third time (in 20-23 hours), before resetting to repeat the pattern. For this I have started by employing the reset-after-9 test, then I will return to this digit once I have built digit 6. Remember that the clock input for (2^-17)Hz is the Q output of (2^-16)Hz!
Digit 6
This digit is much simpler, because it only needs to tick 0-2 before resetting to zero. Therefore only two flip flops are required, and outputs 23 and 24 can be fixed at logic 0. As usual, the clock input for (2^-21)Hz is the Q output of (2^-20)Hz. The 0000 reset test for digit 6 involves testing for the binary 11 over outputs 21 and 22, which can be done with a single AND gate. In this case, the NOT gate returns a logic 0 which negative-edge triggers the PRE' functions of the two flip flops associated with digit 6. A final test needs to be constructed as well, though, to deal with the digit 5 irregularity - I have made a test for the situation where output 21 is logic 0 and output 22 is logic 1, using 2 XOR gates and an AND gate, which is fairly self-explanatory when you look at the diagram. The creation of a logic 1 when both conditions are met feeds back into another AND gate at digit 5 - this returns 1 only if the digit 6 test is logic 1 and an OR gate comparing outputs 23 and 24 returns logic 1. In this case, the NOT gate returns 0, and this result feeds into a 3-input AND gate later on. This 3-input AND gate allows the PRE' functions to be triggered when the above conditions are met, when the master reset is pressed, or if digit 5 reaches A in a [(digit 6) ≠ 2] situation.
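
Abstracting away the gates, the behaviour digits 5 and 6 must implement together is this (a sketch of the intended counting pattern, not of the wiring):

```python
# The units-of-hours digit normally counts 0-9, but when the tens digit
# reads 2 the pair must instead reset at 24, so the clock shows 00-23.
def next_hour(tens, units):
    units += 1
    if tens == 2 and units == 4:   # the digit-6 "equals 2" special case
        return 0, 0
    if units == 10:                # the ordinary reset-after-9 test
        return tens + 1, 0
    return tens, units

t, u, shown = 0, 0, []
for _ in range(25):
    shown.append(10 * t + u)
    t, u = next_hour(t, u)
print(shown)  # 0, 1, ..., 23 and back to 0
```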

I hope this makes my circuit easier to understand, but I imagine my explanation leaves something to be desired!
 

Sunday, 11 October 2015

Clocks, piezoelectric crystals and logic gate solutions - part 1

Lying in bed at night, I find myself considering exactly how the cheap analog clock in front of me keeps such perceivably accurate time, day after day and night after night. The only thing which can halt the unstoppable passage of the ticking hands is the slow discharging of the battery - but how does the input of voltage lead to a constant and reliable motion, which doesn't degrade as the voltage of the battery slowly decays?

Piezoelectric crystals
The piezoelectric effect describes how the application of mechanical stress to certain crystalline materials can generate a small electric charge - the pressure causes the positive and negative charge centres to move, with the result that a weak external electric field is created. Early experiments showed that the effect is exhibited by a variety of natural crystals including cane sugar, topaz and quartz, with more recent developments augmenting this range with man-made structures including barium titanate (BaTiO3) and lead zirconate titanate (Pb[ZrₓTi₁₋ₓ]O₃). [1]

As an aside, the latter of these two compounds is classified as an "intermetallic compound", a curious label which means that metallic bonding occurs with defined stoichiometric ratios (i.e. for x lead atoms there are y zirconium and z titanium atoms, and for 2x lead atoms there are 2y zirconium and 2z titanium atoms) - within mainstream education I have never heard of such materials, which form quasi-ionic lattices, so I find this very interesting. In the case of lead zirconate titanate, the chemical formula implies that for every lead atom there are x zirconium atoms, (1 - x) titanium atoms and 3 oxygen atoms (where x is between 0 and 1 inclusive). Having thought through the implications of this formula, it would appear that the crystal is not as uniform as giant ionic structures are: since x and (1 - x) can only both be whole numbers in the cases [Zr = 1, Ti = 0] and [Zr = 0, Ti = 1], any intermediate x must describe a mixture of the two types of unit cell. The most common form is PbZr0.52Ti0.48O3, telling us that the ratio of the number of PbZrO3 unit cells to PbTiO3 ones is 0.52 : 0.48, or 13 : 12 [2].
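
That 13 : 12 reduction can be checked in one line:

```python
from fractions import Fraction

# For PbZr(0.52)Ti(0.48)O3, the ratio of PbZrO3-type to PbTiO3-type unit
# cells reduces from 0.52 : 0.48 to a whole-number ratio.
ratio = Fraction(52, 48)
print(ratio)  # 13/12
```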

This isn't the first time my attention has been drawn to the idea of these materials - while I was sailing with my family, lighting the gas stove made me consider exactly what causes the spark. It turns out that piezoelectric crystals are indeed involved: the movement of the trigger causes a spring-loaded hammer to strike the crystal, inducing a voltage by the piezoelectric effect which produces a spark against a metal plate. This high pd is enough to ignite the fuel, lighting the burner. 

The converse piezoelectric effect works by using a voltage to cause a variation in the width of a piezo crystal. Since large voltages induce only tiny changes in the width of the crystal, the effect can be exploited to make motors which move objects with great precision - the crystal element is held against the target and a potential difference is applied across it, causing it to minutely change in width and push the target along by microns at a time. Such motors are patented and manufactured by NanoMotion.

So how does this relate back to clocks keeping time? Well, these magical materials (usually quartz in this case) have a voltage applied between opposite faces by the battery, in turn causing the width of the crystal to oscillate thousands of times a second. The crystal will not resonate simply by being connected to the battery, though - the electrical output needs to be fed back into it to sustain the oscillation [6]. Since a watch crystal is cut so that it oscillates at a fixed frequency of exactly 32768Hz [3], a control circuit can convert these oscillations into even ticks of the clock by using a frequency divider: such circuits take a high-frequency signal in and reduce it to a much lower frequency - you want the clock to tick once per second, not 32768 times per second! Frequency divider circuits come in many different forms, but the simplest involves a chain of T flip-flops. These start with a default voltage of logic 1 or 0, and switch between the two with each defined input: a flip-flop which toggles once per input pulse outputs a digital signal of half the input frequency. Therefore, the frequency-reduction factor of the circuit is 2^(number of flip-flops) [5]. Conveniently, 32768 = 2^15, so a chain of fifteen sequential T flip-flops will reduce the frequency down to 1Hz.
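
A behavioural model of such a chain - treating it as a ripple counter whose final stage's falling edge is the output tick - shows the division in action (a sketch, not a circuit-level simulation):

```python
# Each stage is a T flip-flop clocked by the falling edge of the stage
# before it, so n stages divide the input frequency by 2**n.
def ripple_divider(input_edges, stages):
    bits = [0] * stages
    output_ticks = 0
    for _ in range(input_edges):
        i = 0
        while i < stages:
            bits[i] ^= 1       # this stage toggles
            if bits[i] == 1:   # rising edge: nothing propagates further
                break
            i += 1             # falling edge: clock the next stage
        else:
            output_ticks += 1  # the final stage fell: one output tick
    return output_ticks

print(ripple_divider(32768, 15))  # one second of 32768Hz edges -> 1 tick
```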

Next, this signal is fed into a stepping motor, which uses each electrical 'tick' to cause a mechanical tick of the clock: this type of motor is very useful for precise control of rotations. It is composed of a rotor with n metal teeth, and a stator with (n - 2) teeth connected to 8 solenoids arranged in a circle. The image below shows this well:

This is a view of the system from only one end. If we took the rotor out and looked at its cylindrical length, it would look like this:

In each step, two opposite solenoids are magnetised with one polarity and the two solenoids perpendicular to them with the opposite polarity. This causes the rotor to jump by a quarter of a tooth each time the field rotates [4]. It is better explained with my animations below; the first shows the process slowly, and the second shows the net effect by playing it through at a faster rate. Since the rotor has 30 teeth and each 45° field rotation causes a quarter-tooth jump, a full turn of the motor will occur every 120 field rotations - therefore my clock would run at half the normal speed, so real ones would have only half the tooth values (and be fed with a logic 1 frequency of 1Hz).
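
The gearing arithmetic, as a quick check:

```python
rotor_teeth = 30
jumps_per_tooth = 4   # each 45-degree field rotation moves a quarter tooth

steps_per_revolution = rotor_teeth * jumps_per_tooth
print(steps_per_revolution)        # 120 field rotations per full turn
print(360 / steps_per_revolution)  # 3 degrees of rotor movement per step
```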



Sources

[1] NanoMotion - "The Piezoelectric Effect" - here
[2] Wikipedia - "Intermetallic" - here
[3] Explain That Stuff - "Piezoelectricity" - here 
[4] Youtube - digitalPimple - "Stepper Motor Basics and Control - How it works" - here
[5] Stack Exchange - Forum - here
[6] Hackman's Realm - "Information on Electronic Quartz Crystals" - here

Wednesday, 30 September 2015

The supermoon-bloodmoon-harvestmoon phenomenon, and how scientists knew when to be looking

I only heard about the lunar event on the BBC evening news the night before the stunning lunar eclipse occurred. I had been deeply disappointed by the solar eclipse earlier this year - standing out on the school playing field during my maths lesson on a chilly Friday morning in March, with my ridiculously geeky solar glasses, gazing up at the hopelessly cloudy sky for a good half hour before resigning myself to gawping at the Faroe Islands' live stream - so this seemed a hardly decent replacement service by the heavens. Lunar eclipses are also less highly regarded, since on average a total one can be seen from any given location on Earth every 2.5 years [1]. However, it really was an amazing sight, seeing the Earth's largest satellite in beautiful Mars-like hues, close enough to be appreciated by the naked eye of an unequipped enthusiast.

Supermoon 

The reason this particular lunar eclipse, the type which produces the colloquially-dubbed "blood moon", was so special is that it coincided with yet another astronomical phenomenon, a perigee of the moon (or "supermoon"). Our moon goes through regular perigees and apogees due to its slightly non-circular orbit, which causes its distance from the Earth to vary slightly over its 28-day cycle (27.322 days more precisely [2]). Mathematically, the eccentricity of an ellipse is given by e = c / As-maj, where c is the distance from the centre to a focus and As-maj, the length of the semi-major axis, is the distance from the centre to a vertex on the major axis; c can be calculated as c = √(As-maj² - As-min²), where As-min is the length of the semi-minor axis. [3][10]

This looks quite complex, and it certainly took a while for me to research my way around it, but I stumbled upon Kepler's laws, one of which states that the centre of mass of a two-body orbital system lies at the focus of the elliptical orbit. Firstly, assuming that the centre of mass of the Earth-Moon system is the centre of the Earth (mind that this is a simplification for now), the eccentricity of the ellipse can be calculated very easily. This is because the apogee distance (where the moon is farthest from the Earth) is the distance from the focus to the distal major-axis vertex, and the perigee distance (where the moon is closest) is the distance from the focus to the proximal major-axis vertex. Since the Earth is by far the more massive body, in this model it sits at the focus of the ellipse.

The apogee distance is 251,968mi = (251968 * 1609.344)m = 405503189m
The perigee distance is 225,804mi = (225804 * 1609.344)m = 363396312.6m


Therefore the distance from the focus to the centre of the ellipse = (405503189 - 363396312.6)/2 = 21053438.2m
The distance from the centre to a vertex is 363396312.6 + 21053438.2 = 384449750.8m
The eccentricity = 21053438.2 / 384449750.8 = 0.054762522 ≈ 0.05
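
The same calculation as a few lines of Python, as a sanity check of the arithmetic:

```python
MILE = 1609.344  # metres per mile

apogee = 251968 * MILE   # farthest Earth-Moon distance, Earth centre as focus
perigee = 225804 * MILE  # closest Earth-Moon distance

c = (apogee - perigee) / 2  # focus-to-centre distance
a = perigee + c             # semi-major axis (centre to vertex)
print(c / a)                # eccentricity, ~0.0548
```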

However, it's not that simple, since the centre of the Earth is not the centre of mass of the orbital system. In fact, this "barycentre" lies at an average distance of 4671km from the centre of the Earth - although this is still within the Earth's radius of 6378km [4], it will change the value of the eccentricity, since the perigee and apogee are no longer so intuitive to use within the elliptical geometry. Since both objects rotate around the orbital barycentre of the system, the Earth wobbles on its orbit and hence changes position between perigee and apogee - the only reason this didn't occur in the previous model was that we took the barycentre to be the centre of the Earth, so it was simply rotating around its own axis. The new model looks like this:



This diagram is not in any way to scale - the size of the Earth has been increased, and the offset of the barycentre exaggerated, to emphasise the displacement of the planet in space caused by its orbit around the Earth-Moon barycentre; in actual fact it is a very small effect, since the Earth has a much larger mass than the moon. The Earth at apogee and perigee is represented by the large grey circles, bordered by the corresponding colours.

The barycentre is the new focus of the elliptical lunar orbit.
The distance from the moon to the barycentre at perigee = (363396312.6 + 4671000) = 368067312.6m
The distance from the moon to the barycentre at apogee = (405503189 + 4671000) = 410174189m
The distance from the elliptical centre to a vertex is (405503189 + 363396312.6 + (2*4671000))/2 = 389120750.8m
The distance from the elliptical centre to the geometric centre at perigee = 389120750.8 - 363396312.6 = 25724438.2m
The distance from the elliptical centre to the barycentre = 25724438.2 - 4671000 = 21053438.2m
The eccentricity = 21053438.2 / 389120750.8 = 0.054105 ≈ 0.05
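
And the barycentre-shifted version in code, following the same steps:

```python
offset = 4_671_000         # average Earth-centre-to-barycentre distance, metres
apogee = 405_503_189.0     # Earth-Moon apogee distance, metres
perigee = 363_396_312.6    # Earth-Moon perigee distance, metres

a = (apogee + perigee + 2 * offset) / 2  # semi-major axis of the new ellipse
c = (a - perigee) - offset               # centre-to-barycentre (focus) distance
print(c / a)                             # eccentricity, ~0.0541
```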

The internet, including [5], states that the most accurate calculation of average eccentricity places the moon at around 0.0549. However this takes into account other factors which have been ignored in my calculations, such as the 5° angle to the equatorial plane at which the moon orbits, so I'm pretty satisfied with my model getting to the same approximate answer of 0.05.

The importance of the perigee to increasing the drama of Monday morning's eclipse was that it made the moon appear larger in the sky, since it was at its closest to the Earth. However this effect will slowly wane over the coming thousands of years because the average radius of the moon's orbit is slowly increasing... but more about that later.


Bloodmoon

A bloodmoon, to reiterate, is the colloquial name for the effect on the moon's apparent colour caused by a lunar eclipse - this is when the Earth, Moon and Sun line up in such a way that the Earth casts a shadow over the surface of the satellite rock. [6] has a very good explanation of why eclipses do not happen more often (i.e. at every new and full moon) - the angle of the moon's orbital plane means that the Earth's shadow from the Sun often misses it, leaving us without an event to observe.

The red colour was only possible because it was a total lunar eclipse - although the moon sits entirely within the Earth's shadow, sunlight refracted around the planet by its atmosphere still reaches the lunar surface, and since the blue wavelengths are scattered out along the way (just as at sunset) the light which arrives stains the moon a coppery red. There are other, less dramatic types of lunar eclipse: penumbral eclipses occur when the moon passes into the outer fringes of the Earth's shadow, producing a largely indiscernible effect, while partial eclipses occur when the moon only partially enters the darker region of the Earth's shadow [7]. As one might expect, the more spectacular an eclipse the rarer it is - that's why this combination of bloodmoon and supermoon is so special (the last one happened in 1982, and the next will happen in 2033 [8]).

Even more interesting is the fact that Monday's spectacle was the last of a series of four total lunar eclipses occurring at 6-month intervals in 2014-15 - this is called a tetrad [9]. Such a sequence occurs roughly every decade, but it used to carry biblical connotations (and in some circles still does) before its workings were more formally understood; a quick internet search of "bloodmoon" yields a wealth of religious rapture predictions!

Savour it while we can

The moon's and sun's gravitational fields work together to cause the tides. When the moon is in line with the sun, on the opposite side or the same side of the Earth (at full moon or new moon), the oblateness of the planet is increased as the oceans are pulled by metres towards the celestial bodies - these are spring tides; conversely, when the moon and sun are at right angles relative to the Earth (at first and third quarter), neap tides occur, where the difference between high and low tide is least. As the Earth rotates, an observer on its surface experiences high and low tide twice each day as their position passes through the areas of higher and lower sea level.

This constant pulling of the oceans naturally creates friction on the sea bed, between the water, salt and silt in the liquid and the rock and sand on the floor. Since energy must be conserved within this closed Earth-Moon system, a loss of rotational energy on Earth results in an increase in the moon's orbital energy. An orbiting object with more orbital energy settles further from the object it orbits - now that the average orbital radius is greater, the moon experiences a slightly smaller gravitational acceleration towards the Earth (gravitational attraction follows the inverse square law, as F = Gm₁m₂/r²), so a smaller velocity perpendicular to the centripetal force is required to stop the two bodies colliding. It is amazing that this friction is making the moon move further away, as well as slowing down the rotation of the Earth very slightly (the frictional force resists the rotational motion of the planet) - [11] quantifies this best: in 100 years' time the day will be around 2ms longer as a result of this effect.

Therefore the moon will slowly move away from the Earth over the coming millennia at an approximate rate of 3.8cm/year [11], or in SI units 0.038/(365.242*24*60*60) = 1.204×10⁻⁹m/s ≈ 1.20nm/s. This is not a hugely significant amount but, assuming that the elliptical shape of the orbit remains constant as the average radius increases, the current apogee distance will become the perigee distance in (405503189 - 363396312.6)/0.038 = 1108075695 years, or approximately 1.1 billion years!
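
A quick check of that timescale (assuming, crudely, a constant recession rate and a frozen orbital shape):

```python
recession = 0.038          # metres per year, from [11]
apogee = 405_503_189.0     # metres
perigee = 363_396_312.6    # metres

years = (apogee - perigee) / recession
print(f"{years:.3g} years")  # ~1.1 billion years
```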


Sources

[1]   Time and Date - "What are Solar Eclipses?" - here

[2]   Space - "Does the Moon Rotate?" - here

[3]   Maths Open Reference - "Ellipse Eccentricity" - here

[4]   Wikipedia - "Barycenter" - here

[5]   Wikipedia - "Orbit of the Moon" - here

[6]   Space - "'Blood Moons' Explained: What Causes a Lunar Eclipse Tetrad?" - here

[7]   NASA - "A Tetrad of Lunar Eclipses" - here

[8]   Telegraph - "Supermoon lunar eclipse 2015 live: Amazing pictures from the UK and around the world of the 'blood moon'" - here

[9]   Wikipedia - "Tetrad" - here

[10] 1728 - "Ellipse Calculator" - here

[11] Ask an Astronomer - "Is the Moon moving away from the Earth? When was this discovered?" - here


Monday, 28 September 2015

Personal musings and research arising from Serge Haroche's RI Discourse, 25/09/2015

My first visit to the Royal Institution

Friday's lecture from the 2012 Nobel Laureate was a highly interesting overview of quantum effects and their brief histories. I have had previous learning experiences with quantum mechanics at its most introductory level, through thoroughly enjoyable reads such as How to Teach Quantum Physics to Your Dog by Chad Orzel and riveting science-to-the-masses documentaries from the likes of Dr Jim Al-Khalili, but watching a lecture on this baffling topic, in the hallowed theatre of the birthplace of modern science, was an unrivalled opportunity. It was heartening to find that my prerequisite knowledge allowed me to come away with a good understanding of what was presented, wholly justifying the 7-hour round trip from Suffolk in my finest (and only) smart suit. Even more thrilling is how much I now have to look deeper into; the purpose of this article is to convey some of the understanding I think I have gleaned from the realm of the interweb, linking back to what Haroche discussed in his discourse.

[1]


Einstein's Slit argument

Haroche mentioned that there was perpetual disagreement between Einstein and Bohr over the fundamentals of quantum effects throughout their careers - ironically, the former was to a large extent responsible for the birth of QM, stemming from his explanation of the photoelectric effect, yet he became one of its biggest critics in the 20th century.


One such criticism arose in 1927, when Einstein laid down a thought experiment to challenge Bohr: he sought an alternative to the superposition explanation for why single photons in a Young's slits experiment continue to form an interference pattern over time.

To do this he supposed that an alternative apparatus could be established: instead of two fixed slits, the upper slit would be suspended on springs such that the slightest input of force would cause it to move. Einstein postulated that one could then tell whether the photon passed through the upper slit, for by conservation of momentum a collision would cause the slit to recoil as the particle was deflected vertically. Hence he would be able to measure accurately both the photon's position (by tracing the path back from the screen through the collision point) and its momentum (from the magnitude of the slit's displacement), thus violating the principle of indeterminacy.

However, Bohr had several arguments in return.

Firstly, Einstein's model required extremely precise knowledge of the slit's original position, to a much deeper precision than the measurement of how far it is displaced by the photon, which carries only a minuscule momentum.

Also, by Heisenberg's Uncertainty Principle, such precise knowledge of the slit's velocity would reduce the accuracy with which its position could be known. Even a displacement of half a wavelength would shift the bright patches of the interference pattern towards darkness by inducing partially destructive interference.

An ideal experiment would average over every possible position of the slit, Bohr argued, and so on the screen the perfect constructive and destructive fringes would be different for each position. This would fill the screen with a uniform grey colour, destroying the interference pattern. A modern explanation for the loss of the interference pattern is decoherence - the two paths of the photon are individually entangled to two macroscopic observational states:

|X> = (1/√2)(|goes through top slit>⋅|top slit moves> + |goes through bottom slit>⋅|top slit doesn't move>)

In reality this means that such fundamental environmental entanglement causes the wavefunction to collapse (or the universe to diverge!) extremely quickly, in a matter of femtoseconds or even less. Hence, in the usual style of QM, the measurement changes the outcome and ruins the quantum effect.
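Bohr's averaging argument can be illustrated with a toy model of my own construction: a random slit displacement simply adds a random phase to an idealised fringe pattern, and averaging over many displacements washes the contrast out to the uniform grey described above.

```python
import math
import random

def fringe(x, phase):
    """Idealised two-slit intensity at screen position x, with a fringe phase offset."""
    return 1 + math.cos(x + phase)

xs = [i * 0.1 for i in range(100)]

# Fixed slit: full-contrast fringes.
fixed = [fringe(x, 0.0) for x in xs]

# Moving slit: average over many random displacements (random phase offsets).
random.seed(0)
phases = [random.uniform(0, 2 * math.pi) for _ in range(10_000)]
averaged = [sum(fringe(x, p) for p in phases) / len(phases) for x in xs]

print(max(fixed) - min(fixed))        # ~2: strong fringe contrast
print(max(averaged) - min(averaged))  # ~0: near-uniform grey
```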

With reference to whether the wavefunction collapses or the universe diverges at the point of measurement, Haroche was very careful, when asked for his opinion, to state that the distinction is immaterial to the effects being observed here. I must say that I agree: since I see no compelling evidence to support either Copenhagen or Many Worlds, I feel that I can, for now, banish the question to the realm of irrelevance.

[2]


Bell's inequalities and the experimental disproof of Local Hidden Variable Theory
Einstein and other QM sceptics (including Podolsky and Rosen, his collaborators on the EPR Paradox) were extremely concerned by the implications of quantum entanglement between particles. In the paradox Einstein referred to "spooky action at a distance" between two particles produced by the decay of a single particle: a measurement of one's spin would immediately allow the observer to know the spin of the other (it must be opposite, since the two must cancel to the 0 spin of the original particle). If Bohr's Copenhagen Interpretation were to be believed, neither particle's spin is definite until measured, so the fact that the other's is determined immediately would imply that information is sent between the two faster than the speed of light, violating Einstein's relativity. The only explanation consistent with relativity would be that the spin is already defined, but remains a "local hidden variable" until one particle is measured. It is clear that the EPR Paradox was a serious challenge to QM, intended to expose its apparent inconsistencies with classical mechanics.

John Bell came up with a method of testing this paradox. Instead of spin, his experiment used the polarization of photons, identical pairs of which would be generated by a decay process in an atom. Each entangled photon would be tested against one of three polarizing filters, and the binary readings (pass or block) compared.
The key is to tabulate, for each possible combination of hidden pass/block answers, whether the readings from any two filters would be the same or different. For every such combination, the probability of two filter readings being the same is at least 1/3, and this is what Local Hidden Variable Theory would predict (since measuring each photon cannot affect the value of the other). Experimentally, however, the probability of getting the same reading is much lower, at around 1/4. Therefore, since the experiment violates the Bell inequality

P(same) ≥ 1/3

local hidden variables cannot explain away the "spooky action at a distance". In reality this is the simplest version of the richer field of Bell inequalities, which I will endeavour to explore in greater depth in the future, but I must thank DrPhysicsA for this excellent explanation in his YouTube tutorial.
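The counting argument can be sketched as a small Monte Carlo simulation (an illustrative toy, not a model of the real optics): each photon pair carries a fixed pass/block answer for every filter, its hidden variable, and we compare readings through two randomly chosen filters.

```python
import math
import random

random.seed(42)
TRIALS = 100_000
# Filter angles in degrees; any two different filters are 120 degrees apart.
FILTERS = [0, 120, 240]

# Local hidden variable model: each pair carries a fixed answer for all three
# filters, so measuring one photon cannot influence the other.
same = 0
for _ in range(TRIALS):
    answers = tuple(random.choice([0, 1]) for _ in FILTERS)
    i, j = random.sample(range(3), 2)   # pick two different filters
    same += answers[i] == answers[j]
p_lhv = same / TRIALS
print(f"hidden variables: P(same) = {p_lhv:.3f} (always >= 1/3)")

# Quantum prediction: P(same) = cos^2 of the angle between the filters.
p_qm = math.cos(math.radians(120)) ** 2
print(f"quantum mechanics: P(same) = {p_qm:.3f}")
```

Under any distribution of hidden answers the agreement probability stays at or above 1/3, while the quantum prediction of cos²(120°) = 1/4 sits below that bound, matching what experiments observe.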


[3]

The concept of the universal wave function 

One of the questions put forward to Haroche was this: "Is all matter in the universe (or indeed multiverse) entangled to form a single universal wavefunction?". The Nobel Laureate's answer was very simple and quite sensible: yes, but it is of no mathematical use to consider the wavefunction in its totality, because the resolution of knowledge of microscopic physical systems would be lost, so it could not be applied to any laboratory experiments (like the one detailed in the section above) to improve our understanding of QM.

It is curious to think that the space-time fabric of the entire universe plays out as the consequence of one giant probabilistic function, and it does a certain justice to the idea of "God playing dice". Throughout the history and development of QM, Einstein's objections continue to be thrown up, and this recalls his famous quote "God doesn't play dice": current theories would suggest otherwise.

Rydberg atoms
A Rydberg state of an atom is one where one or more electrons are excited enough to have a very large principal quantum number, many energy levels above the core electrons (which remain in their normal configurations). Since the excited electrons are in higher energy states, they experience a weaker attraction to the positive nucleus and so occupy vastly wider orbits - paraphrasing phys.org, exciting the outer electron of a rubidium atom from n = 5 to n = 18 would extend the atomic radius from 1 nm to 700 nm.
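The growth of the orbit with n can be sketched using the hydrogen-like scaling r ≈ n²a₀, a rough model of my own that assumes the outer electron sees a net +1 core; it gives order-of-magnitude estimates only, so the exact figures quoted from phys.org will differ.

```python
A_0 = 5.29e-11   # Bohr radius, m

def rydberg_radius(n):
    """Approximate orbital radius of a hydrogen-like Rydberg state, r ~ n^2 * a0."""
    return n ** 2 * A_0

# The radius grows with the square of the principal quantum number.
for n in (5, 18, 50):
    print(f"n = {n:2d}: r ≈ {rydberg_radius(n) * 1e9:.2f} nm")
```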

Rydberg atoms are a viable method for storing quantum information as qubits because they can be "sustained for a long time in a quantum superposition system", and interact strongly such that they would form stable and effective logic-gate-type systems.

[4]

Cavity quantum electrodynamics to count photons

Haroche's Nobel Prize winning paper was the source material for this section, which seems only fitting. The quantum cavity is based on the Bohr-Einstein photon box, yet another hypothetical piece of apparatus constituting the battleground for thought experiments over quantum mechanics. As I understand it, the cavity consists of two mirrors which continually reflect photons inside the cavity until they are absorbed. The mirrors were long plagued by slight imperfections that reduced the lifetime of the experiment, but a collaboration at the French Atomic Energy Commission led to precisely machined copper mirrors, covered in superconducting niobium, which form a quasi-spherical surface; as a result, a photon lifetime of 130 ms was achieved in 2006.

To actually count the photons inside the cavity, lasers were used to produce rubidium Rydberg atoms with an outermost orbit diameter approximately 1000 times larger than in the ground state. A condition of a stable orbit is that the de Broglie wavelength fits an integer number of times into the circumference of the orbit, leading to principal quantum numbers of 50 or 51 for this experiment. These circular Rydberg states allow a long lifetime of 30 ms, on the same order of magnitude as the photon lifetime; this means that the production of photons from the decay of the excited electron orbits can initially be discounted from the uncontrollable variables of the experiment.

In the two states, e and g, the wave has uniform amplitude around the orbit, leading to an electron charge density centred on the atomic nucleus. However, a pulse of resonant microwaves brings the electron into a superposition of the e and g states, causing constructive interference on one side of the orbit and destructive interference on the other. The result is a net electric dipole, extremely sensitive to microwave radiation (i.e. the photons being counted).

Non-resonant microwave photons are not absorbed by the Rydberg atoms, so the measurement does not destroy the photons it counts. However, tuning the cavity photons very close to resonance with the Rydberg transition shifts the phase of the atomic dipole by up to 180 degrees as the atom crosses the cavity. Each such shift corresponds to a single photon, which allows the photons to be counted discretely.

The acceleration of light between media of different densities

My final talking point stems from yet another point raised by the audience during the question time: does light literally accelerate when it passes between two media of differing densities? We learn at school that light has different speeds in different materials, standardised in the form of refractive indices, yet it goes against classical mechanics to assume that there is an instantaneous change in the velocity of photons across this boundary - a change in velocity over zero time would involve a → ∞, since a = (v − u)/t. Since F = ma, the resultant force on any mass-possessing body would also approach infinity, tearing the object apart.

However, in my opinion this is exactly what happens when light passes between two media. By Einstein's special relativity, a massive object with a velocity approaching the speed of light would have a mass approaching infinity; since a photon travels at the speed of light yet plainly does not have infinite mass, it must be massless (a conclusion that is, in retrospect, universally agreed across the physics community). Since a photon is massless, it is not constrained by the limit placed on instantaneous acceleration for massive objects, so it seems completely reasonable that a photon will instantly change velocity at a boundary between media.

A related topic is the strangeness of the universal speed limit imposed by Einstein. The confusion arises when two spaceships are travelling very quickly towards each other. Ship A has a beam of light coming out of the front, naturally travelling at the approximate speed of 3.0 × 10⁸ m/s. Classical mechanics would tell us that the relative velocity of the light beam, according to an observer sitting in the pilot seat of the other spaceship, would be c + vB (or even c + vA + vB if the beam inherited ship A's speed), where vA and vB are the ships' speeds towards each other. Either way the magnitude is greater than c, which is not possible according to Einstein's axiom.

However, special relativity has an explanation for this. Velocities do not simply add near the speed of light: time dilation and length contraction conspire so that combining velocities relativistically always yields exactly c for a beam of light, whatever the observer's motion. What the approaching observer actually sees is a Doppler blueshift, the beam shifting towards the higher-frequency, shorter-wavelength end of the spectrum, while its measured speed remains the familiar constant c.
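The relativistic velocity-addition formula, w = (u + v)/(1 + uv/c²), makes this concrete (a toy calculation of my own, not taken from the discourse):

```python
C = 3.0e8   # speed of light, m/s

def relativistic_sum(u, v):
    """Combine two collinear velocities using the relativistic addition formula."""
    return (u + v) / (1 + u * v / C ** 2)

# Ship B approaches ship A at 0.9c while A fires a light beam forward:
print(relativistic_sum(C, 0.9 * C) / C)        # still exactly c
# Even two massive ships at 0.9c each close at less than c:
print(relativistic_sum(0.9 * C, 0.9 * C) / C)  # never exceeds 1
```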

Perhaps this is why the surroundings of the Millennium Falcon shift towards the violet end of the visible light spectrum when Han and Chewie jump to lightspeed?

Sources

[1]   September 2015 Friday Night Royal Institution Discourse - Serge Haroche - "Light and the Quantum" (the recording can be found on the RI official YouTube channel, here)

[2]   Wikipedia - "The Bohr-Einstein Debates" - here

[3]   DrPhysicsA - "Bell's Inequality" - here

[4]   Phys.org - "Tuning up Rydberg atoms for quantum information applications" - here

[5]   Nobelprize.org - "Controlling Photons in a Box and Exploring the Quantum to Classical Boundary" - here