I have been doing some work (well, a LOT of work) recently on brushing up a variety of precalculus and calculus topics on Khan Academy (i.e. every one I can find on the website). Much of it was material I have covered before, but usually with additional insights (such as logarithmic implicit differentiation) and new techniques (e.g. L'Hôpital's Rule for the two-sided limit of a function, and using the natural logarithm to turn exponential problems into rational functions so that said rule becomes applicable). But in my learning on divergence/convergence tests for infinite series, something prickly came up.
As Sal Khan informs the viewer, quite correctly of course, there is a famous proof by the medieval French mathematician Nicole Oresme showing that the infinite harmonic series is divergent. It works by way of the direct comparison test for divergence/convergence. So first off, what is this test?
The direct comparison test involves the infinite series under test, S_A, and a second infinite series S_B, where every term of S_A is less than or equal to the corresponding term of S_B, and all terms are non-negative. The statements are thus: if S_B converges, then S_A must also converge; if S_A diverges, then S_B must also diverge.
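In symbols (a standard statement of the test, as I understand it):

```latex
\text{If } 0 \le a_n \le b_n \text{ for all } n:\qquad
\sum_{n=1}^{\infty} b_n \text{ converges} \implies \sum_{n=1}^{\infty} a_n \text{ converges};
\qquad
\sum_{n=1}^{\infty} a_n \text{ diverges} \implies \sum_{n=1}^{\infty} b_n \text{ diverges}.
```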
Both of these statements are intuitively easy to grasp. Upon reading this, I immediately questioned the use of 'must converge' versus 'is bounded', because the counterexample that sprang to mind was a series containing the sine or cosine functions, or some other oscillatory term such as (-1)^n - here it would be fine to say the series is bounded if the first condition were satisfied, yet the very nature of the oscillation could prevent a true limit from ever being reached. (Of course, with all terms non-negative the partial sums are monotonic, so boundedness and convergence coincide - the oscillating example violates that condition.) In reality, though, this would be a trivial case to use, since it defeats the point of the direct comparison test if you already know the series is divergent. And this forgets the possible exception where the oscillator is attached to a decaying function, in which case the problem sorts itself out by zeroing out as n approaches infinity. Nevertheless, in fairness, the test would still be useful for determining whether such a series is bounded or not.
So, moving on to the proof. Oresme took the series S_A = 1 + 1/2 + 1/3 + 1/4 + ... (the infinite harmonic series) and for each term found the largest power of 1/2 not exceeding it. This led to this comparison:
S_A = 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + ...
S_B = 1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + 1/16 + ...
Since each term in S_B is no larger than the corresponding term in S_A, the partial sum of S_B up to the nth term can never exceed the partial sum of S_A at the same point. So we need to determine the behaviour of S_B in the limit as n stretches off to infinity, in order to perform the direct comparison test.
It is easy to notice that the terms in S_B can be grouped in a very special way - there are two quarters, four eighths, eight sixteenths, etc. This leads to the clearly divergent series:
S_B = 1 + 1/2 + 1/2 + 1/2 + ...
Since S_B is divergent, the harmonic series must be divergent too. This is nicely illustrated on Desmos, with the caveat that plotting a discrete sum of integer-indexed terms produces short line segments centred about each integer. So it looks like a bunch of lines when in reality it is a set of discrete points. Anyway, the visual representation is sound:
It's funny that out of all of that, the thing I'm most proud of is that I have finally found a vaguely practical use for the floor function! What it is doing in the red line (representing S_B in the diagram) is finding the exact power that 0.5 needs to be raised to in order to equal the corresponding term in the blue line, then rounding it up to the next integer value.
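The same construction is easy to sketch numerically (my own reconstruction, not the Desmos formula itself; I use a ceiling here, which is equivalent to the floor-of-negative-log version):

```python
import math

def term_A(n):
    # nth term of the harmonic series
    return 1 / n

def term_B(n):
    # largest power of 1/2 not exceeding 1/n:
    # raise 0.5 to the ceiling of log2(n)
    return 0.5 ** math.ceil(math.log2(n))

S_A = S_B = 0.0
for n in range(1, 2**20 + 1):
    S_A += term_A(n)
    S_B += term_B(n)

print(S_A, S_B)  # S_A ≈ 14.44, S_B = 11.0 (= 1 + 20/2)
```

Both partial sums keep climbing, with S_A always ahead of its comparison series.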
Ok so that looks pretty convincing - the red line diverges, so it forces the blue line to diverge too. It just seems very counterintuitive though. Although passing the nth-term divergence test (the terms do go to zero) tells you nothing for sure about convergence, it would seem highly logical that the harmonic series should converge, since its terms get infinitely small in the limit! So I want even more argument to convince myself that this is true.
The next thing we can do is to analyse how quickly the comparison series, S_B, is growing. Obviously the rate of change decreases as n increases, because the increments drop by powers of 2 at a time. But the number of terms between each drop also increases by a factor of 2 each time (consider that there are 2 terms of 1/4, 4 terms of 1/8, 8 terms of 1/16...). So, for example, the gradient of the line is 1/128 (where 128 = 2^7) on the stretch 2^6 < n ≤ 2^7, or more clearly 64 < n ≤ 128. More generally, we find that for a natural number k, the gradient is 1/2^(k+1) in the region 2^k < n ≤ 2^(k+1). All this still doesn't help me reason this out though, because it would still suggest that as n → ∞, the gradient of the line → 1/∞ = 0.
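There is one way to square this (my own addition to the argument): the gradient does vanish, but so slowly that the sum still grows without bound, because each doubling block contributes exactly a half:

```latex
\sum_{n=2^k+1}^{2^{k+1}} \frac{1}{2^{k+1}} = 2^k \cdot \frac{1}{2^{k+1}} = \frac{1}{2}
\qquad\Longrightarrow\qquad
S_B(2^m) = 1 + \frac{m}{2} \;\sim\; \frac{\log_2 n}{2} \;\to\; \infty .
```

So the partial sums grow like a logarithm: the gradient tends to zero, yet the total climbs past any bound you care to name.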
In conclusion, it is very difficult to condense the reasoning to anything more satisfying than a series comparison at my current level of knowledge. Keep studying then...
Welcome to my world
Here is my domain for splurging my ruminations on the STEM fields. Most of the stuff I discuss and research on this site is way beyond what we learn at school and what I am conventionally taught, so there may well be errors in my information or maths - please do not viciously troll the page with corrections, although constructive and useful criticism is of course welcome :)
Wednesday, 31 August 2016
Some thoughts on the Laplacian method for solving differential equations
In the summer months, since the conclusion of my exams and work experience, I have been at a loss for normal summery things to do. Instead, I have taken this opportunity to work on some extra-curricular techniques and subject areas within maths and physics, just for a bit of fun. Since transforms, matrices and quantum probability are very much new topics for me, it has taken a large amount of read-learn-practice before I can come up with useful projects to demonstrate my new skills. In the meantime, I will have a short discussion about something I have recently learned, called the Laplace Transform.
It is a name I have heard every now and again from my maths teacher and across my wider reading on the internet, yet only with nothing else to study for have I been brave enough to delve into the actual processes involved. In physical terms, the transform is often described as taking a problem in the "time domain" and translating it into the "frequency domain"; on a more abstract mathematical level, it is a way to turn differential problems into (admittedly contrived) algebraic ones. This may suit the number-cruncher better, as it is a generally more consistent method for solving a range of differential equations than the more guesswork-orientated approach of characteristic equation followed by particular integral.
So how does it work? The Laplace transform of a function f(t) is found by multiplying it by e^(-st), then finding the definite integral of this product with respect to t, between the limits of zero and positive infinity. It seems strange when put as bluntly as that but, as soon as some examples are worked through, it becomes clear how convenient the multiplication by the seemingly arbitrary exponential term is. This is for two reasons: integration by parts is straightforward when dv/dt is taken as e^(-st), and the exponential of a negative variable reduces to 0 or 1 in the limits. Using these key understandings, and a few examples from the internet to guide the method, I set out to prove a table of such transforms - with the exception of the convolution integrals at the very bottom, I was able to do so, and took great pleasure and satisfaction in doing so! Here are a couple of good examples, which also introduced new functions to me that will come in useful in the future:
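For reference, the definition being used, with the simplest example worked through (standard notation):

```latex
\mathcal{L}\{f(t)\} = F(s) = \int_{0}^{\infty} e^{-st} f(t)\, dt,
\qquad\text{e.g.}\quad
\mathcal{L}\{1\} = \int_{0}^{\infty} e^{-st}\, dt
  = \left[-\frac{e^{-st}}{s}\right]_{0}^{\infty} = \frac{1}{s} \quad (s > 0).
```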
The second proof I did was perhaps a blasé choice of example - in my black book, where I have recorded all the written work from my work experience and personal study, I worked through the simpler proofs, such as sin(at) and cos(at), before moderately-challenging ones such as t·cos(at). Yet I felt it would be unsatisfying to reference results such as "given that the Laplace transform of sin(at) is ...", so I have effectively completed three proofs in one during the second example (it is not difficult to infer from the working that L{sin(at)} = a/(s^2+a^2) and that, with a fulfilling sense of symmetry, L{cos(at)} = s/(s^2+a^2)). This symmetry does indeed somewhat extend into the mathematics of the order-one-polynomial-trig-function product, where L{t·sin(at)} = 2as/(s^2+a^2)^2.
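These results are easy to sanity-check with a computer algebra system - a quick sketch using SymPy's laplace_transform (with noconds=True it returns just the transform, dropping the region of convergence):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

for f in (sp.sin(a*t), sp.cos(a*t), t*sp.cos(a*t), t*sp.sin(a*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', sp.simplify(F))

# Expected: a/(a**2+s**2), s/(a**2+s**2),
#           (s**2-a**2)/(a**2+s**2)**2, 2*a*s/(a**2+s**2)**2
```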
Moving swiftly on... One of the more important conceptual implications of the Laplace Transform, which helps to link mathematical abstraction with the physical world, is what happens to units: applying the operator to f(t) = 1 gives 1/s, and to f(t) = t gives 1/s^2, so s behaves like an inverse time, nicely satisfying the relation frequency ∝ time^-1. Hence it is clear to see how useful such a relation might be, for example, in the field of electrical engineering, where one tracks electromagnetic oscillations over time from a differential perspective.
Okay, so now I know how Laplace Transforms work, a little context on the kinds of subject areas in which they may be useful, and how to compute them. Now it's time to put them to their main use: solving differential equations. Here is an example of such a differential equation - notice how I first solve it using the method more familiar to me, the characteristic-equation-plus-particular-integral amalgamation, then contrast it with the Laplacian approach. The new method certainly seems more contrived in this simplistic case:
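The scanned working isn't reproduced in text here, so as a stand-in, this is the flavour of the Laplacian route on a minimal initial-value problem (my own example, not the one in the scan):

```latex
x' + x = 1,\quad x(0) = 0
\;\xrightarrow{\;\mathcal{L}\;}\;
sX(s) - x(0) + X(s) = \frac{1}{s}
\;\Longrightarrow\;
X(s) = \frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1}
\;\xrightarrow{\;\mathcal{L}^{-1}\;}\;
x(t) = 1 - e^{-t}.
```

The scanned example follows the same pattern on a second-order equation: transform, rearrange for X(s), partial fractions, invert.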
In this case the two solutions are very similar in length, but the algebra is undoubtedly more involved in the Laplacian version - the partial fraction decomposition in particular adds complexity to the working. On the other hand, observe how the initial conditions are worked into the solution from the start, making it more coherent, whereas in the familiar method they are used to solve for unknown constants which travel through the solution unresolved until the very end. So overall there is no real improvement from using the transforms. But let's attempt a more complicated example. First, the auxiliary method...
(note that I say 'displaced from equilibrium' but I mean 'displaced from the natural length of the spring' - this also applies to the first example I did)
Then, the Laplacian method:
It is very difficult to see whether these two solutions are equivalent, since they use different combinations of overarching constants. However, I plotted them on Desmos, as can be found here. From there it is clear that they do not agree, yet one is considerably more likely to be correct - the blue line, which passes through the point (0, x0) whereas the other does not; interestingly, this is the solution from the Laplace transforms! I could go back through the first method and find the mistake, but I am satisfied that the relative ease of the second method has been demonstrated, so it would be pretty unnecessary.
The point of that was to show that as problems get more complicated, the Laplace Transform comes into its own. But complicated in a very specific way - the auxiliary method does not become much more difficult as the degree of the differential equation increases, so long as the characteristic equation can be factorised without too much trouble, whereas the Laplace method does, since each expansion of a transformed derivative increases the number of terms in the algebraic result (L{x''} produces 3 terms, L{x'''} produces 4 terms, etc.). No, what matters is the non-homogeneous f(t) element of the equation becoming more exotic - this is what causes so much trouble in finding the particular integral, but it hardly makes the Laplace method more difficult, because only some partial fraction manipulation is required to separate it off into various sines, cosines, exponentials and combinations thereof. Therefore Laplace Transforms are most effective, at the level I understand them, for solving low-degree non-homogeneous ODEs with polyfunctional f(t).
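The term-count pattern comes straight from the derivative rule (standard results):

```latex
\mathcal{L}\{x'\} = sX(s) - x(0), \qquad
\mathcal{L}\{x''\} = s^2 X(s) - s\,x(0) - x'(0), \qquad
\mathcal{L}\{x'''\} = s^3 X(s) - s^2 x(0) - s\,x'(0) - x''(0).
```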
But there is more to the Laplace Transform than this, which is but an entry-level understanding of how to apply them in linear situations. In the first example there was no non-conservative air resistance to waste mechanical energy, so the oscillation extended indefinitely towards t = +∞, yet in the second there was an exponential decay in the amplitude of the oscillation. This appeared in the s-domain as the given trigonometric function transform, but s-shifted by a constant - and hence was a use of the s-shift rule, one of the most useful ones in the table. There is a very similar t-shift, where an exponential appears as a multiplier in the s-domain - in the t-domain this takes the form u_c·f(t-c), where u_c is the unit step function at c.
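Both shift rules in symbols (standard statements):

```latex
\text{s-shift:}\quad \mathcal{L}\{e^{ct} f(t)\} = F(s - c);
\qquad
\text{t-shift:}\quad \mathcal{L}\{u_c(t)\, f(t-c)\} = e^{-cs} F(s).
```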
But the versatility doesn't stop there. The issue I have at the moment is that I cannot progress beyond linear ODEs, since I have not yet learned how to perform convolution integrals (which correspond to a product of functions in the s-domain) or transforms of a product of two functions. All things to look forward to learning in the future, I suppose!
Monday, 13 June 2016
Analysing water in a wine glass
The realm of 3-D graphing is fairly new to me, and I have only really learnt what I know through extended research on the web - it is hardly touched upon in conventional A-level maths at all. One evening I decided to test exactly what I know, and what I could possibly learn, to model a seemingly very simple physical situation: liquid in a wine glass under gravity.
First, I assumed that the wine glass is a paraboloid in shape - this is a parabola of the form y = ax^2 that has been rotated around the y-axis (I don't really see the point in the more general format of rotating y^2 = 2ax about the z-axis myself, and the horizontal orientation would make the situation far less convincing in any case) to form a, well, 'wine glass shape'! Then, I took a 3-D plane of the form y = x·tan(θ) and analysed the intersections it makes with the paraboloid - this plane effectively models the surface of the liquid.
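Setting the two surfaces equal shows why the liquid surface looks the way it does - a quick sketch of the algebra, assuming the rotated paraboloid is y = a(x^2 + z^2) and the plane is y = x·tan(θ) as above:

```latex
a(x^2 + z^2) = x\tan\theta
\;\Longrightarrow\;
\left(x - \frac{\tan\theta}{2a}\right)^2 + z^2 = \left(\frac{\tan\theta}{2a}\right)^2,
```

i.e. the intersection curve projects down onto a circle through the origin, of radius tan(θ)/2a; seen in three dimensions on the tilted plane, it is an ellipse.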
The development of this short project is seen below. The part I am most proud of is that I have learned how to integrate under a surface within a circular region using polar coordinates, which is much simpler than the traditional rectangular coordinate method, requiring multiple trigonometric substitutions to achieve the same result. Anyway, enjoy reading. For the first time I know someone is reading this, so I hope it is all correct and logical to follow...
The gameplan for my method might be useful:
Now, the bulk of the maths:
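The polar-coordinates trick mentioned above, in its simplest form (my illustration for the untilted glass, not the full working in the scans): the volume of liquid between a level y = h and the paraboloid y = ar^2, over the circular region r ≤ R = √(h/a), is

```latex
V = \int_{0}^{2\pi}\!\!\int_{0}^{R} \left(h - ar^2\right) r \, dr\, d\varphi
  = 2\pi \left[\frac{h r^2}{2} - \frac{a r^4}{4}\right]_{0}^{R}
  = \frac{\pi h^2}{2a}.
```

The r·dr·dφ area element does the job that a string of trigonometric substitutions would otherwise do in rectangular coordinates.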
The implications of these formulae are quite powerful really, but it is quite difficult to draw using Desmos - I have not yet worked out how to parametrically add the z-axis into the 2-D plotter. However, I have done a couple of graphs on the online graphing calculator: http://tinyurl.com/zgsvzc8. The angle is changed with the slider 'T' between -π/2 and π/2, and the volume of liquid and shape of the glass are also altered with their respective sliders.
Seeing the changing shape of the surface of the liquid (red) and the cross-sectional profile of the glass at the same time as the angle of tilt changes is very revealing - the red ellipse disappears at the point where the surface no longer straddles the minimum point of the glass (the bottom, at the origin): although real wine glasses actually have a slight curve back in towards the top, a quadratic curve has no such 'lip'. Therefore as soon as the liquid is all on one wall of the glass, it is poured out!
Please bear in mind that I have caused the surface of the liquid to rotate instead of the glass itself because it is much simpler to do. Therefore the graph is from the perspective of the glass instead of the Earth's gravitational field!
Wednesday, 11 May 2016
Finding the length of a section of a curve
This morning I was set a new problem, which I feel I will be working on for a fair while - what is the volume of a gas-filled pillowcase shape (two rectangles stuck together around the edges, and inflated to the maximum possible volume)? I already have my suspicions as to how the 3-D surface of a half-pillowcase could be modelled as a function - a 2-D function needs to be found which, for a given length of the curve (the length of one of the rectangles making up the pillowcase), maximises the area under it. Using a bit of intuition, this function should rise and fall very quickly at its ends, for example y = |√(x+5)|, and have a relatively high height in between. This leads me to common naturally-occurring functions such as hyperbolae, catenaries or even quadratics!
The first key bit of information needed is a method for calculating the length of a section of a curve. I have done this today, generalising the method up to an integral about halfway down; from there I took the quadratic example forward, since it turns out pretty neatly:
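The scan carries the derivation; the key result is the standard arc-length integral, which for the quadratic case works out as follows (a sketch of the route I believe the working takes, for y = ax^2):

```latex
L = \int_{x_1}^{x_2} \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx
  = \int_{x_1}^{x_2} \sqrt{1 + 4a^2x^2}\, dx
  = \left[\frac{x\sqrt{1+4a^2x^2}}{2} + \frac{\sinh^{-1}(2ax)}{4a}\right]_{x_1}^{x_2}.
```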
Tuesday, 10 May 2016
Calculus in a simple resistance problem
The problem goes as follows: a resistor, resistance R, is connected in series with a battery, emf ε and internal resistance r; find the maximum power dissipation in the resistor R, and the resistance that results in this output.
Although it is quite clear how this can be solved with algebra, some thought about the theory behind Ohm's Law and Kirchhoff's Second Law is required to set the answer in some context: since the power dissipation in the resistor is equal to I^2·R, one might think that raising the current is more influential on the output than raising the resistance. However, since the current drawn from the battery is intrinsically linked to the total resistance of the circuit, there is a delicate balance to be struck.
The nature of this balance is difficult to ascertain without calculation though, so without further ado:
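In outline, the scanned working amounts to the standard maximum-power-transfer derivation:

```latex
I = \frac{\varepsilon}{R + r}
\;\Longrightarrow\;
P = I^2 R = \frac{\varepsilon^2 R}{(R+r)^2},
\qquad
\frac{dP}{dR} = \frac{\varepsilon^2 (r - R)}{(R+r)^3} = 0
\;\Longrightarrow\; R = r,
\qquad
P_{\max} = \frac{\varepsilon^2}{4r}.
```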
The result is that the power dissipated in the resistor is a maximum (assumed from the context of the function whose stationary point is found, since only one positive-R stationary point exists) when R = r. This nicely helps to explain why a short-circuited battery is likely to heat up, catch fire or explode - the short circuit has a very low resistance, so a massive current is drawn, which causes the wire to heat up. The wire will heat up the most when its resistance matches the internal resistance of the battery. The overall implication is that, if one were designing an electric resistance heater to emit the maximum heat output for a given emf, the length of wire coiled up should, via the resistivity equation, reflect the internal resistance of the power supply.
However, a separate issue is maximising the efficiency of the circuit in terms of energy - for environmental and financial reasons it is always a high priority to do this in power-hungry domestic appliances, and the heater is no exception. Since the efficiency of the circuit is the ratio of the power dissipated in the resistor to the total power dissipated in the circuit, maximum efficiency would be achieved when the resistance R is infinitely high. The issue with this is that, just like trying to connect a 12V power pack across a piece of plastic, you cannot expect any power to be delivered at all - the high resistance results in a tiny current being drawn from the battery and, in any case, the power output would be highly suboptimal according to the result I derived above. The moral of the story for the heater is that the most important factor is to cut the internal resistance of the power supply as much as possible.
If the heater is connected to mains power, this opens an entirely new can of worms, namely the definition of the National Grid's internal resistance. This is a combination of the cumulative resistance of electrical cabling across the country, inefficiencies in the combustion of fuels in power stations, friction in turbine halls, and delayed responses in adjusting power distribution across the country following surges and drops in energy demand, amongst other more subtle factors. Nevertheless, simpler domestic factors to consider are reducing the use of extension leads where possible, and keeping the heater cable well insulated so that its resistance doesn't itself rise over time!
Monday, 9 May 2016
A curiosity - the reciprocal triangle
I was absent-mindedly solving simple calculus problems on Brilliant.org this afternoon when I stumbled across an interesting one about the reciprocal function. The question asked the user to prove that the area enclosed by the coordinate axes and the tangent to the curve at any point on the curve is constant, and to find this area. This is a basic proof of this unusual property:
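In outline (for y = 1/x, the case in the original question):

```latex
y = \frac{1}{x},\quad y'(p) = -\frac{1}{p^2}
\;\Longrightarrow\;
\text{tangent at } \left(p, \tfrac{1}{p}\right):\;\;
y = -\frac{x}{p^2} + \frac{2}{p},
```

which has intercepts (2p, 0) and (0, 2/p), so the enclosed area is ½ · |2p| · |2/p| = 2, independent of the point of tangency.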
The logical extension of this observation is to see what happens when the function is manipulated. First, imagine a reciprocal function of order n:
This sets nicely into context how unique the basic reciprocal function is - the k in the numerator will only be cancelled when n = 1.
Next, consider a function with a linear stretch factor a in the y-direction (which, due to the symmetry of the function, is the same as a stretch factor 1/a in the x-direction):
This is of mild interest too - the original problem took a = 1, such that the area of the triangle was 2.
Finally, consider a function translated by a units in the positive x-direction, and b units in the positive y-direction:
Clearly, as the sketch demonstrates, the translation of the curve disrupts its symmetry about the x and y axes, such that there is no longer a triangle of constant area. It could easily be proven, though, that there is still a region of constant area - the area under the tangent within the limits x > a and y > b - since these asymptotes effectively become the new x and y axes within the translated reference frame of the new curve. Furthermore, it can be seen that the constant area is reinstated when a = b = 0, such that k cancels out in the numerator and denominator to leave the previous area of 2.
Sunday, 8 May 2016
The hanging cable - the intrepid part 2
I mentioned at the end of my post yesterday (see "The Hanging Cable") that I might be able to optimise the maximum height of the cable by adding a taper. The aim of today, aside from the fairly mundane past papers I had scheduled, was to model this as best I could. The result of the monster integral, which I have verified using Wolfram Alpha, is a complicated function of H_max which I don't believe can be solved by conventional algebraic means. Nevertheless, the working is fairly satisfying, if I have made it clear at all, and there are several interesting qualitative conclusions to be drawn from the model.
It is important to note that I have not repeated the first-principles derivation of this method below. For this, see here.
The function at the bottom is equated to zero to indicate how the required root is to be found. I have entered it into the Desmos graphing calculator here, where it is easy to read off the x-intercept as the value of H_max for the material. The sample data is for 2800 maraging steel; when a taper angle of 1° (≈ 0.0175 rad) is selected - clearly too large, but a good arbitrary starting figure - the graph of y = f(H_max) looks like this:
This shows that the theoretical maximum length of cable, fitting this specification, is 7288 km, a pretty awesome distance. However, for a little perspective, let's consider how fat the cable will be at the very top, where it is hanging from its superlatively steadfast loop:
0.5d = H·tanθ = 7.288·10^6 · tan(0.0175) = 127553.0213 m ≈ 128 km (3sf)
If this doesn't seem unwieldy and inappropriate enough, next consider the volume and mass of this so-called 'cable':
V = (1/3)π·H_max^3·tan^2θ = (1/3)π · (7.288·10^6)^3 · tan^2(0.0175) = 1.241705149·10^17 ≈ 1.24·10^17 m^3 (3sf)
m = ρV = 1.24·10^17 · 8100 = 1.005781171·10^21 ≈ 1.01·10^21 kg (3sf)
This cable therefore makes up 0.0114% of the Earth's volume - enough to excavate the Grand Canyon nearly 30,000 times (using the common estimate of 5.45 trillion cubic yards, where 1 yard = 0.9144 metres) - and 0.0169% of the Earth's mass! Now, for the worst part, consider the estimated cost of this object, with the assumption that all the forges in the world could produce enough maraging steel between them. It is difficult to find a precise figure for the price of any particular variety, but in general the world steel price is about $60 per tonne, or 6 cents per kilogram:
Cost = 0.06·m ≈ $6.06·10^19
This cost - just over 60 quintillion US dollars, or about 42 quintillion GBP - would be enough to pay off the £1.56 trillion UK national debt nearly 27 million times!
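All of those figures can be reproduced in a few lines (a quick check script; the taper geometry is a cone of height H_max and half-angle θ, as above):

```python
import math

H = 7.288e6          # Hmax for the 1-degree taper, metres (read off the graph)
theta = math.radians(1)
rho = 8100           # density of 2800 maraging steel, kg/m^3

r_top = H * math.tan(theta)                      # top radius ~ 1.28e5 m
V = (math.pi / 3) * H**3 * math.tan(theta)**2    # cone volume ~ 1.24e17 m^3
m = rho * V                                      # mass ~ 1.01e21 kg
cost_usd = 0.06 * m                              # at 6 cents per kg

print(f"top radius {r_top:.4g} m, volume {V:.4g} m^3, "
      f"mass {m:.4g} kg, cost ${cost_usd:.3g}")
```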
In essence what I am trying to illustrate is that a 1 degree taper is a truly ridiculous idea for extending the snapping length of the cable, even though it does a very good job of doing so. Let's see what happens as the angle is changed...
Since I am using graphical means to solve f(H_max) = 0, I can see no easy way to find the equation of a graph of H_max against taper angle. However, some empirical experimentation with Desmos shows that the smaller the taper angle, the greater the value of H_max. This is rather counterintuitive - the smaller the taper angle, the closer this model should come to the one established in the previous article.
Nevertheless, I reckon I have found a possible flaw: since the cable starts with a radius of zero, having such a small taper means it is pretty much non-existent for the first few kilometres. I see this as analogous to the critical assembly of a system - just as ants can support weights disproportionate to their own mass due to their tiny size, having such a microscopic cable allows very disproportionate behaviours in comparison with the macroscopic world. This idea doesn't entirely explain away the fact that the previous model completely decoupled the value of H_max from the diameter of the cable, but I'm working on that bit! Perhaps the cable could instead be modelled as a frustum, with a significant radius at the bottom?
Saturday, 7 May 2016
The hanging cable
An interesting problem was put forward by my physics teacher on Friday, relating to material properties - what is the longest possible constant-diameter cable, made from a material with a given yield stress and density, that can be hung vertically in the Earth's gravitational field (with its end just touching the ground) without snapping?
In essence the problem is very simple. The point of maximum stress is right at the top of the cable, where it hangs from some kind of steadfast loop capable of supporting the material in its entirety; at this point the stress is equal to the weight of the cable divided by the cross-sectional area. However, when the materials used are strong enough to reach kilometres into the air, the changing value of g starts to become a factor in the weight force experienced by the maximum-stress point. I did some scribbling today, and came up with a little proof of a nice formula that takes the changes in g into account, using a definite integral as the limit of an infinite sum:
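The scan isn't reproduced in text here; the gist of the derivation, as I reconstruct it (consistent with the numbers quoted below, including the negative result for very strong materials), is:

```latex
\sigma_{\max} = \int_{0}^{H} \rho\, g(h)\, dh
            = \rho G M_E \int_{0}^{H} \frac{dh}{(R_E + h)^2}
            = \rho G M_E \left(\frac{1}{R_E} - \frac{1}{R_E + H}\right)
\;\Longrightarrow\;
H_{\max} = \left(\frac{1}{R_E} - \frac{\sigma_{\text{frac}}}{\rho G M_E}\right)^{-1} - R_E .
```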
The most important insight to be gleaned from the final formula for the maximum height of cable is that the cross-sectional area of the cable is irrelevant to the height achieved, as long as it remains constant. It depends only on the radius of the Earth (constant), the universal gravitational constant (constant), the mass of the Earth (constant), the density of the material (constant for a given material) and the breaking stress of the material (constant for a given material) - note that it is assumed yield stress = breaking stress, since it is unnecessarily complicated to consider the plastic properties of the cable between yield and fracture.
An example of the use of the formula is to consider a material - steel is a popular choice for high-stress cabling. Wikipedia tells me the breaking stress of a certain type, 2800 maraging steel, is 2617 MPa and the density is 8100 kg per cubic metre. The formula returns an H_max value of 33072.8304159 m ≈ 33.0 km. If the Earth's gravitational field were assumed uniform, H_max would be significantly different:
Cross-sectional area of cable = 0.25πd^2
Volume of cable = 0.25Hπd^2
Mass of cable = 0.25Hρπd^2
Tension at top of cable = 0.25Hρgπd^2
Stress at top of cable = Hρg
H_max = σ_frac/ρg = 32934.3954896 m ≈ 32.9 km
There is an absolute difference of 138.43493 ≈ 138 m here, which amounts to a surprisingly sizeable percentage error of 0.419%!
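Both calculations fit in a short script (using standard values G = 6.674·10^-11, M_E = 5.972·10^24 kg, R_E = 6.371·10^6 m, g = 9.81; small differences in these constants will nudge the final digits):

```python
G, M_E, R_E, g = 6.674e-11, 5.972e24, 6.371e6, 9.81

def h_max_calculus(sigma, rho):
    # stress balance with inverse-square gravity:
    # sigma = rho*G*M_E*(1/R_E - 1/(R_E + H))
    return 1 / (1/R_E - sigma / (rho * G * M_E)) - R_E

def h_max_uniform(sigma, rho):
    # uniform-field version: sigma = H*rho*g
    return sigma / (rho * g)

sigma, rho = 2.617e9, 8100   # 2800 maraging steel
print(h_max_calculus(sigma, rho))  # ~3.31e4 m (33.1 km)
print(h_max_uniform(sigma, rho))   # ~3.29e4 m (32.9 km)
```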
Now it has been established how important the non-uniformity of the Earth's gravitational field is in such a matter, one must now wonder how the shape of the cable could maximise the value of Hmax - my hypothesis is that having a slight linear taper on the cable such that it is fatter at the top than at the bottom, with the cross-sectional area scale factor per metre climbed matching the scale-factor for decrease in gravitational field strength per metre climbed, would be optimal. This is because having less mass at the bottom is essential for marginal gains, where the gravitational pull is strongest, and a taper to this specification should account logically for the inverse-square nature of Newtonian gravity.
Steel was a fairly old-school example for testing the value of Hmax: a huge focus of the material science field is the applications of carbon nanotubes, tailor-made materials based on graphene and its chemical derivatives. Wikipedia informs me that one particular type, Armchair Single-Walled NanoTubes, has a breaking stress of 126.2GPa, nearly 50 times the strength of 2800 maraging steel. Ignoring the considerable strain produced by such a breaking stress (0.231), an oversight it seems, the Hmax values can be calculated with the calculus and simplistic methods, given an approximate density value for Armchair SWNT of 1660kg per cubic metre (from here):
Calculus H_max = -35976055.5613 m
Simplistic H_max = 7749653.04644 m
This is pretty stunning as a result. The nanotubes are so strong that the calculus formula breaks down to form a negative result - this implies the cable can stretch infinitely out into space (assuming it is not affected by the gravitational fields of other bodies, which isn't technically true of course), effectively escaping the effects of the Earth's pull, without fracture. I suppose the combination of the cable being nearly 5 times less dense and 50 times stronger means there is an effective relative increase in potential height of 250 times.
Thursday, 5 May 2016
Hyperbolae 2 - introducing the third dimension
This is a summary of today's musings. In essence I have done two fairly basic things that marginally extend the 2-dimensional hyperbola: I have taken the infinitesimal-lamina principle behind solid revolution and used it to model how the function will rotate around the x-axis. Then I have returned to the basic calculus of revolution to derive an expression for the volume of a lobe, given values of the stretch-coefficients a and b, and the limit of integration c. Enjoy!
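For the record, the sort of expression the calculus of revolution gives (my reconstruction - rotating the branch of x^2/a^2 - y^2/b^2 = 1 about the x-axis and integrating discs from x = a to x = c):

```latex
V = \pi \int_{a}^{c} y^2 \, dx
  = \pi b^2 \int_{a}^{c} \left(\frac{x^2}{a^2} - 1\right) dx
  = \pi b^2 \left(\frac{c^3}{3a^2} - c + \frac{2a}{3}\right).
```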
Tuesday, 3 May 2016
Investigating hyperbola-type implicit functions 1 - asymptotes
"Which way round was it again?"
During one of my FP1 past papers - I have been trawling through the entire lot - it struck me that there was one part of the course that was very much a matter of "here's the result, learn it". Hyperbolas are only touched upon in the AS further maths specification, and for me that makes the given theory more difficult to engage with. Hence, I found in one question that I could not remember the generalisation for the asymptotic gradient of a hyperbola. Well, with two options in mind, I thought asking for help somewhat non-proactive, so I decided to see if I could work it out with a little bit of limit notation.
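The result in question, recovered with the kind of limit argument I mean (standard form x^2/a^2 - y^2/b^2 = 1):

```latex
y = \pm\, b\sqrt{\frac{x^2}{a^2} - 1}
  = \pm\, \frac{b}{a}\, x \sqrt{1 - \frac{a^2}{x^2}}
\;\xrightarrow[x \to \infty]{}\; \pm\,\frac{b}{a}\, x,
```

so the asymptotic gradient is ±b/a - the 'which way round' I could never remember.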
Having achieved this, feeling extremely pleased with my elevated understanding of the function despite having not blasted through the 3 or 4 AS level questions I would have otherwise spent the time with, I began to consider what would happen if the same general form of hyperbola was used with higher-order indices. My findings are on the A4 scans below.
The second page deals with more trivial cases, where x and y are raised to different powers - these produce fairly uninteresting polynomial-type curves, which could effortlessly be rearranged to express the first-quadrant branches explicitly. Nevertheless, an implicit-differentiation method for proving the limits of the gradients is still fairly stimulating, and the result is pretty nice, if quite obvious.
Sunday, 20 March 2016
A beginner's attempt at modelling air resistance and restitution - a concerted attack on the dogma of modelling assumptions
As the title of this article may suggest, I thought it might be interesting to see what mathematical knots I could tie myself in once I begin to remove modelling assumptions from a seemingly-simple physical system. The answer, somewhat predictably, is very very many indeed.
This all stems from some work on differential equations I did over the Christmas period, where I first came into contact with the idea of modelling resistive forces as a function of a differentiated variable (usually velocity). After doing a physics PAG (assessed practical) in October on determining the terminal velocity of a cupcake case falling under gravity, I realised how interesting an investigation into exactly how the object accelerates between t = 0 and terminal velocity could be and, with a new mathematical skillset, I began to do some scribbling...
Several months later, with some on-and-off periods of work, I have come up with a 48-page pdf file documenting how I have somewhat tangentially approached ideas such as drag and restitution, with a certain level of naivety. It has not been extensively proof-read, so it is most definitely an unpolished version which serves to demonstrate how I have leapt from one problem to the next, and no doubt errors will be inherent in some of my working as a result. Nevertheless, I am immensely proud of some of the derivations, as well as my real-life experiments to test them.
The following list contains all the external links to Desmos graphs, which feature throughout the document:
p13 - http://www.tinyurl.com/hmdd5jc - CTDM
p20 - http://www.tinyurl.com/jezmcxo - light-gate data
p30 - http://www.tinyurl.com/za2cq2f - multiple cycles of the CTDM in a bouncing-ball situation
p31 - http://www.tinyurl.com/hncm9su - fragility of the CTDM
p32 - http://www.tinyurl.com/grd873b - stability of the SQTM
p36 - http://www.tinyurl.com/hl32bt2 - collision impulse and force
p47 - http://www.tinyurl.com/jrmk6ye - strobe data and restitution
Introduction
I begin the investigation with some theoretical modelling - first I take drag to be proportional to velocity, then velocity squared and finally velocity cubed. After this, I investigate how a model could be developed containing both a linear and a quadratic term in v, called the combined term drag model (CTDM). Next, I conduct some experiments with falling objects: one such experiment involves a custom-made tube of 10 phototransistor-LED light gates to observe in detail how a falling muffin-case's displacement varies with time; another determines the spring constant of the average tennis ball; another uses a strobe-light and a long-exposure camera setting to collect displacement-time data for a bouncing tennis ball. I use all this data to test the efficacy of the various models, with a good deal of running commentary and analysis.
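To give a flavour of the sort of model the paper works with, here is a minimal numerical sketch of the CTDM equation of motion, m dv/dt = mg - av - bv^2. The mass and the coefficients a and b below are made-up placeholder values for illustration, not fitted ones from the paper:

// A minimal sketch (not the paper's actual code) of integrating the
// combined term drag model (CTDM), m dv/dt = mg - a*v - b*v^2, with
// simple Euler steps.
#include <cstdio>

int main() {
    const double m = 0.003;   // mass of a muffin case in kg (illustrative guess)
    const double g = 9.81;    // gravitational field strength, N/kg
    const double a = 0.002;   // linear drag coefficient (assumed value)
    const double b = 0.01;    // quadratic drag coefficient (assumed value)
    const double dt = 0.001;  // time step, s

    double v = 0.0;           // released from rest
    for (double t = 0.0; t < 2.0; t += dt) {
        double accel = g - (a * v + b * v * v) / m;  // CTDM equation of motion
        v += accel * dt;                             // Euler update
    }
    // v should now be close to terminal velocity, where mg = a*v + b*v^2
    std::printf("v after 2 s: %.4f m/s\n", v);
    return 0;
}

Euler stepping is crude, but for a quick check it is enough: the printed value should sit close to the positive root of mg = av + bv^2.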
The paper can be found here: https://www.dropbox.com/s/bgdq0vwqa0wumxp/PAPER.pdf?dl=0
Sunday, 28 February 2016
An intermediate attempt at modelling interference graphically
This modelling work, which I have spent most of my weekend developing, stems from a very simple question which came up in my AS physics textbook, on the topic of wave interference. It went something along the lines of "Draw 6 diagrams, 1 second apart, to show how these two approaching waveforms interfere". I was surprised at how difficult it was to do this faithfully, even with two simple waveforms in wide discrete time jumps, and it got me thinking: how could I use a computer to do this better? The result is an 8-page investigation, in which I have endeavoured to develop some theory about how to model passing and reflecting waves and how they interfere. It begins easily, with some simple linear interference by summation of instantaneous displacements, but I soon found that the mathematics became more complicated (particularly in stage 3, where I began to model how waves slowly escape from a trap, like a solid-state laser but without the quantum stimulated emissions).
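For the simplest stage - linear superposition of two passing pulses - the whole idea fits in a few lines. This is only an illustrative sketch with made-up Gaussian pulse shapes and speeds, not the model developed in the paper:

// A minimal sketch of the stage-1 idea: two pulses travelling in opposite
// directions are superposed by summing instantaneous displacements.
#include <cmath>
#include <cstdio>

// A unit-amplitude Gaussian pulse centred at x0 + c*t (c signed, so a
// negative c travels leftwards).
double pulse(double x, double t, double x0, double c) {
    double u = x - (x0 + c * t);
    return std::exp(-u * u);
}

int main() {
    // Print the superposed waveform at 1 s intervals, like the textbook diagrams.
    for (int t = 0; t <= 5; ++t) {
        for (double x = -10.0; x <= 10.0; x += 2.5) {
            double y = pulse(x, t, -5.0, 1.0) + pulse(x, t, 5.0, -1.0);
            std::printf("%6.3f ", y);
        }
        std::printf("  (t = %d s)\n", t);
    }
    return 0;
}

Summing instantaneous displacements like this reproduces the textbook six-diagram exercise; the harder stages in the paper come from handling reflection and trapping.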
I have tried to explain myself fully throughout, although I have not proof-read it enough to keep it from being a little haphazard, and the last page of the document requires some following up when I have a little more time. However, I am pretty proud of some of the derivations.
In the paper, I have referenced some equations with numbers in square brackets. These match up with equations on the corresponding Desmos file (here: http://tinyurl.com/zlmzg9d), to illustrate exactly what each equation looks like in practice.
The paper can be found here: http://tinyurl.com/zwegna4
Sunday, 7 February 2016
Refraction in the third spatial dimension
In my AS Physics classes, we have reached the wave mechanics part of the course - one of the most fundamental observable properties of waves travelling through media of differing densities is refraction; however, we only ever seem to consider refraction in a coplanar sense (where there is only an incident angle against the plane of the boundary in two dimensions). My investigation of the morning is to develop a little theory which I hope will extend my understanding of refraction to the third dimension, which up until this point in my education has been ignored.
Firstly, let's review Snell's law. This states that the product of the refractive index of a medium and the sine of the angle to the normal in that medium is constant throughout the refraction process. In algebraic terms: n1 sin(θ1) = n2 sin(θ2).
This provides the basic relationship between the two angles on the diagram below, which demonstrates the coplanar refraction I am talking about:
Similarly, I could represent the effect of refraction in the third dimension in the same way, by reducing the problem to a coplanar one and ignoring the second dimension in the above example. However, considering all three dimensions at once will involve some basic vector geometry. The direction of the light ray, where the perspex-air boundary is the plane with equation x = C, can be represented by a 3-D vector U:
This vector can then be resolved into two components lying in different planes - the x-y plane and the x-z plane. The angle in the y-z plane is irrelevant, because that plane is parallel to the boundary. I have worked through this problem, and here is my result:
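For comparison, the same geometry can be captured by the standard vector form of Snell's law. The sketch below is not a transcription of my scanned result - the incident direction and refractive indices are made-up example values:

// The standard vector form of Snell's law for a boundary plane x = C.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 scale(Vec3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 normalise(Vec3 a) { double m = std::sqrt(dot(a, a)); return scale(a, 1.0/m); }

int main() {
    Vec3 n = {-1.0, 0.0, 0.0};           // unit normal of the boundary plane x = C
    Vec3 d = normalise({1.0, 0.4, 0.3}); // incident ray direction (example values)
    double eta = 1.5 / 1.0;              // n1/n2: perspex into air (approximate)

    double cosI = -dot(d, n);                    // cosine of the incident angle
    double sin2T = eta*eta * (1.0 - cosI*cosI);  // sin^2 of the refracted angle, by Snell
    if (sin2T > 1.0) {
        std::puts("total internal reflection: no refracted ray");
    } else {
        Vec3 t = add(scale(d, eta), scale(n, eta*cosI - std::sqrt(1.0 - sin2T)));
        std::printf("refracted direction: (%.3f, %.3f, %.3f)\n", t.x, t.y, t.z);
    }
    return 0;
}

A pleasing bonus of this form is that the sin2T term flags total internal reflection for free whenever it exceeds 1.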
From here, I used the Desmos graphing calculator to demonstrate the result. I spent much time working on the z-axis, trying to produce the most convincing one possible by adjusting the positions of the x- and y-axes, all parametrically under the control of the rotation parameters A, B and C. I had some success with this, using the sinusoidal nature of rotational perspective to produce a set of coordinate axes which can be rotated about the origin (i.e. the real z-axis) in the x-y plane, and about the image of the x-axis, allowing enough movement to achieve most angles on any 3-D subject. A current demonstration of this somewhat rudimentary and flawed model can be found here: http://tinyurl.com/hk4d3l7. In the future, I hope to understand not only how full rotational freedom can be achieved through the parameters A, B and C, but also how I can faithfully project real points (x, y, z) onto these simulated axes so that they stand up to rotation.
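For future reference, here is one standard construction for the projection I am after - a sketch under my own parameter naming (A as a rotation about the vertical axis, B as a camera tilt), not code lifted from the Desmos file:

// Orthographic projection of a real point (x, y, z) onto 2-D screen
// coordinates under two rotation parameters.
#include <cmath>
#include <cstdio>

struct Screen { double X, Y; };

Screen project(double x, double y, double z, double A, double B) {
    // Rotate the point about the z-axis by A...
    double xr = x * std::cos(A) - y * std::sin(A);
    double yr = x * std::sin(A) + y * std::cos(A);
    // ...then tilt the viewing direction by B and drop the depth coordinate.
    return { xr, z * std::cos(B) - yr * std::sin(B) };
}

int main() {
    Screen p = project(1.0, 1.0, 1.0, 0.6, 0.4);  // arbitrary test point and angles
    std::printf("(%.3f, %.3f)\n", p.X, p.Y);
    return 0;
}

With B = 0 this reduces to a side-on view showing (x, z), which is reassuring as a sanity check.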
Since I have yet to truly crack the matter, I have for now settled for a simple pseudo-z-axis, which still demonstrates my mathematics fairly well. Furthermore, I have paid attention to diagrammatic detail by programming Desmos to draw proportional arrows for the magnitudes of each component ray, as well as the resultant incident and refracted rays. The online version can be found here: http://tinyurl.com/jthre74.
Overall, I feel I have opened a metaphorical can of worms for myself when it comes to the true complexities of this physical phenomenon. Further research has yielded information regarding Fresnel's equations for transparent materials, which determine the relative amplitudes (and hence intensities) of light reflected and refracted at such a boundary between two media, and which could be programmed into the graphing calculator. I could also free the mathematics from the assumption that the boundary is parallel to the y-z plane, so that refraction and reflection on a general 3-D surface, with its local gradient found by differentiation, could be modelled. Perhaps a follow-up to this will be required in the future.
Sunday, 31 January 2016
The Simplex Algorithm
This rather complex algorithm was the latest subject in my Decision Mathematics 2 course at school. It involves taking a linear programming problem and solving it without the need for graphical methods - the beauty of this is that it allows the logic to be extended to higher-dimensional problems which cannot easily be represented on Cartesian axes. In fact, there is theoretically no limit to the number of variables that can be dealt with!
Anyway, the most important reason for the existence of the Simplex algorithm, as I see it, is that it can be more easily programmed into a computer. So this is what I did!
My C++ skills would seem rudimentary to a seasoned programmer, since I picked up the basics several years ago and have barely touched an IDE since, but I'm quite pleased with the result of these 692 lines of code. It may harbour bugs, but I have tested my version of the algorithm on the majority of the problems in the D1 and D2 textbooks, and the results have been successful. This has allowed me to build on the pretty shocking SIMPLEX 1.0, developing a more robust SIMPLEX 2.0.
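To illustrate the heart of the method, here is a minimal sketch of the tableau pivot step on a tiny two-variable problem. This is an illustrative fragment written for this post, not an excerpt from SIMPLEX 2.0:

// The core row-reduction ("pivot") step of the Simplex algorithm,
// shown on a plain tableau of doubles.
#include <cstdio>
#include <vector>

// Pivot the tableau about row r, column c: scale row r so the pivot
// element becomes 1, then clear column c from every other row.
void pivot(std::vector<std::vector<double>>& T, int r, int c) {
    double p = T[r][c];
    for (double& x : T[r]) x /= p;
    for (int i = 0; i < (int)T.size(); ++i) {
        if (i == r) continue;
        double f = T[i][c];
        for (int j = 0; j < (int)T[i].size(); ++j)
            T[i][j] -= f * T[r][j];
    }
}

int main() {
    // Maximise P = 3x + 2y subject to x + y <= 4 and x + 3y <= 6,
    // with slack variables s and t. Columns: x, y, s, t, value;
    // the last row encodes the objective as P - 3x - 2y = 0.
    std::vector<std::vector<double>> T = {
        {1, 1, 1, 0, 4},
        {1, 3, 0, 1, 6},
        {-3, -2, 0, 0, 0}
    };
    // Enter x (most negative objective entry); the ratio test (4/1 < 6/1)
    // picks row 0 as the pivot row.
    pivot(T, 0, 0);
    // The objective row is now non-negative, so the optimum is reached.
    std::printf("max P = %g at x = %g, y = %g\n", T[2][4], T[0][4], 0.0);
    return 0;
}

The full algorithm simply repeats this: choose the most negative objective entry, pick the pivot row by the minimum-ratio test, pivot, and stop once the objective row has no negative entries.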
The various files can be found on my Dropbox:
- DevCPP source/project file (http://tinyurl.com/h7ck5ow)
- Readme in a text-file (http://tinyurl.com/gr4r56x)
- Executable SIMPLEX 2.0 file (http://tinyurl.com/zo8hbat)
Since the program is so freshly written, it is likely that there are still faults in the code. Any error reports would be welcomed.