Sunday Function

Last week we did the sinc function. Let's do it again!

The function, to refresh our collective memory, is this:

\[
f(x) = \frac{\sin(x)}{x}
\]

Now I was thinking about jumping right into some contour integration, but on actually doing it again I see that it's a little hardcore for one post, so when we eventually do it we'll have to stretch it out over probably three posts. Probably it'll be a Friday-Saturday thing culminating in an official Sunday Function. But it ain't gonna be this week. This week we're going to do three ways of computing a limit. There are more, a bunch more, but we're going to do just three.

As we noted last week, we can't just plug in x = 0 to our function. If we do we're dividing by zero, which is not possible. For some reason this is a frequent point of dissent among, uh, certain varieties of amateur mathematicians, but if you're working with the plain ol' real numbers, satisfying the usual axioms, there's no consistent way to define it. On the other hand there's no reason we can't just say f(x) equals the rule above except at the point x = 0, where we can define it to be 0 or 1 or a billion or pi or whatever the heck we want. We would like our definition to make sense and be continuous, fitting in with the rest of the function. From the graph we expect we should probably define f(0) = 1, but we'd like to prove it. Is f(x) really getting closer and closer to 1 as x gets closer and closer to 0?

Method 1: Wildly unwarranted surmise.
This is where we plug in numbers close to 0 and see what happens. Shall we?

f(1) = 0.8414709848
f(0.1) = 0.9983341665
f(0.01) = 0.9999833334
f(0.001) = 0.9999998333
f(0.0001) = 0.9999999983

Well, that's suggestive if nothing else. Though we won't do it here, you'll get the same numbers if you do the calculations with the negative numbers -0.01 and so on, so the limit appears to be the same on both sides. Rigorous it's not. So let's move on.
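If you want to check those numbers yourself, a few lines of Python will reproduce the table:

```python
import math

# Evaluate f(x) = sin(x)/x at values of x marching toward zero.
for x in [1, 0.1, 0.01, 0.001, 0.0001]:
    print(f"f({x}) = {math.sin(x) / x:.10f}")
```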

Method 2: Series expansion.

Like all functions*, the sine function can be expressed as an infinite sum of powers of x. For now we won't pause to derive this, but this expression is true:

\[
\sin(x) = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \cdots
\]

We're dividing this by x, and fortunately that's just the same as multiplying by 1/x, which is easy. This gives us this nice expression:

\[
\frac{\sin(x)}{x} = \frac{1}{x}\left(x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \cdots\right) = 1 - \frac{x^{2}}{3!} + \frac{x^{4}}{5!} - \frac{x^{6}}{7!} + \cdots
\]

No, no, don't let your eyes glaze over! Instead, just look at that rightmost sum after the equals. There's the number 1, and then a bunch of terms in x. But we're looking to see what happens as x goes to 0. And clearly that means all those x terms themselves go to zero, leaving behind that lonely number 1. Ladies and gentlemen, that is an honest to goodness proof.
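If you'd rather let a computer keep the books, here's a quick sketch using the sympy library that spits out the same expansion and takes the limit (just an illustration, not how the series above was derived):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) / x

# Expand about x = 0: the constant term is 1, everything else carries a power of x.
print(sp.series(f, x, 0, 8))   # 1 - x**2/6 + x**4/120 - x**6/5040 + O(x**8)

# Letting x go to 0 kills all the x terms and leaves the 1.
print(sp.limit(f, x, 0))       # 1
```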

Method 3: l'Hopital's Mathematical Hospital.

Early in the first calculus class, you'll learn that it's possible to figure out something about these kinds of problems using derivatives. Roughly speaking, given two functions g and h that are both 0 at x = 0 (as ours are), then, provided the limit on the right exists:

\[
\lim_{x \to 0} \frac{g(x)}{h(x)} = \lim_{x \to 0} \frac{g'(x)}{h'(x)}
\]

In our case g(x) = sin(x) and h(x) = x. Doing the differentiation,

\[
\lim_{x \to 0} \frac{\sin(x)}{x} = \lim_{x \to 0} \frac{\cos(x)}{1} = \cos(0) = 1
\]

Exactly the same as with methods 1 and 2.
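The same bookkeeping by machine, differentiating top and bottom separately the way l'Hopital prescribes (again just a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
g, h = sp.sin(x), x

# Replace g/h by g'/h' and take the limit of that instead.
ratio = sp.diff(g, x) / sp.diff(h, x)   # cos(x)/1
print(sp.limit(ratio, x, 0))            # 1
```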

Whew! We're done for the moment. We could come up with a few more methods if we wanted, but there comes a point when the lily has been guilded to death. With that, we'll call it a day.

*This statement has not been evaluated by the AMS and is not intended to prove, demonstrate, or verify any theorem.


There's just one small problem with the proofs: they are built on the fact that you know what the derivative of the sine is, and you can't show that in a rigorous fashion without knowing the limit... They're good ways of remembering what the limit is, but proofs they're not.

By Anders Jonsson (not verified) on 12 Jul 2009 #permalink

Good point, but there's a way around it without doing the standard geometric construction. In higher math, it's not unusual to define the sine and cosine functions in terms of their power series. Since by definition sin(x) = that series, it goes without proof. The price you pay for that is that you have to work backwards and show that those power series satisfy the "usual" properties of the trig functions. But since we're not interested in those properties we don't have to worry about that part.

So what does it mean to say that the left hand side of an equation is equal to the right hand side? It seems a bit strange that two functions can be equal when they are defined on different sets (as when the LHS is not defined at x=0 and the RHS is).

By Luke Shingles (not verified) on 12 Jul 2009 #permalink

How obvious is it really that, as x tends to zero, all those x terms of the series together tend to zero? There are, after all, infinitely many of them. I can give examples of series where the terms all go to zero but the sum does not, so there must be something else working for you here.

#3: You're right, formally the RHS isn't defined at x = 0. Fortunately we don't need it to be. We're just using it to show that the limit as x approaches zero exists and is equal to 1. Think of it as a single-point form of analytic continuation.

#4: The limit of a sum is the sum of the limits, provided those limits exist. This is true even with infinite series (unless I'm badly mistaken, which has happened before). I'd love to see your example though, as my being proved wrong in this case is likely to be extremely interesting.

You should have said "analytic functions". Your * does not get you off the hook for "all functions". Not even close.

@1 misses the point that the properties of sin(x) are quite distinct from those of sinc(x). You can find the derivative of sin(x) without knowing the limit of sinc(x).

BTW, that question and those of the other comments can be addressed by looking at something like g(x) = sin(x)/x^2. All of the arguments above lead to the conclusion that it diverges at the origin.

Luke@3: The left hand side (the sinc function) is defined at x=0. It is defined to be 1. The proof shows this definition makes sense and that sinc(x) is continuous (and infinitely differentiable) at x=0.

kiwi@4: The series for sinc has limit 1. What is more remarkable about the series for sin(x) is that it converges when x is *anything*. It converges when x is 100000. It converges to zero when x is any multiple of pi.

What a score that AMS is here in Providence. I know right where it is on Charles St. in Providence near the USPS Turnkey office (029xx)

And a friend of mine used to do the typesetting for AMS.

By examination you have six possible "good" answers at x = 0: -infinity, -1, 0, 1, infinity; undefined. Anything else is a privileged answer. "Continuous and differentiable" suggests 1 is the boojum at the singularity. However... does any other function with a 0/0 singularity not asymptote to 1 at either side? If that, then "undefined." You cannot have a valued expression whose value depends on its neighborhood. 1 + 1 = 3 does not obtain for sufficiently large values of 1.

(Excluding economists set to shout "heteroskedasticity!" when it empirically fails. Nobel Prizes in Economics are awarded for that - Milton Friedman, "Whip Inflation Now!" and his Boys from Chicago delighting Augusto Pinochet; Merton, Scholes, and "Long-Term Capital Management").

Matt,

Here is an example of a series whose terms all tend to 0 as x tends to zero but the sum does not ...

1/(1+x) = x/[(1+x)(1+2x)] + x/[(1+2x)(1+3x)] + x/[(1+3x)(1+4x)] + x/[(1+4x)(1+5x)] + ...

To verify the sum use partial fractions.
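Explicitly, each term splits as

\[
\frac{x}{(1+nx)(1+(n+1)x)} = \frac{1}{1+nx} - \frac{1}{1+(n+1)x},
\]

so the sum telescopes:

\[
\sum_{n=1}^{\infty} \frac{x}{(1+nx)(1+(n+1)x)} = \frac{1}{1+x} - \lim_{N\to\infty}\frac{1}{1+(N+1)x} = \frac{1}{1+x} \qquad (x>0),
\]

even though every individual term tends to 0 as x tends to 0.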

CCPhysicist,

You'll get no argument from me about the complete implausibility of the power series for sin, its periodicity, vanishing at multiples of pi etc. It completely blows my mind every time I contemplate it, as does the convergence of the exponential series to ever smaller values as the argument becomes bigger and bigger (negatively speaking).

I will argue with you about the independence of the derivative of sin and the limit of sinc. To the contrary, I think a good case could be made that they are equivalent.

The derivative involves a difference in the numerator, not a single function. They are closely related, but you can obtain the derivative of sin in general without ever dealing with its value at a specific point.

By CCPhysicist (not verified) on 13 Jul 2009 #permalink

Kiwi, I'm afraid we might be rapidly approaching areas of real/complex analysis that are beyond my limited skill. However, I believe I see a problem with the series you propose. While both sides are defined at x = 0, it's not possible to take a limit as x -> 0 on the RHS. In particular the function blows up at every x = -1/n. Any epsilon-neighborhood of zero that's picked is going to have one of those -1/n explosions inside it.

Ok, change x to x^2 ...

1/(1+x^2) = x^2/[(1+x^2)(1+2x^2)] + x^2/[(1+2x^2)(1+3x^2)] + x^2/[(1+3x^2)(1+4x^2)] + x^2/[(1+4x^2)(1+5x^2)] + ...

In general switching the order of two limits (which is what we are doing here) works only if the convergence of one of them is uniform with respect to the other. Method 2 works in your original post because the power series for sin converges uniformly in an interval about the origin. In fact power series always converge uniformly (and absolutely) so long as you stay away from the endpoints of the interval of convergence, which is why you can get away with almost anything.

CCPhysicist, I don't really understand your statement. The derivative of sin at any point x is related to the value of the limit in question by the formula

D sin(x) = (lim h->0 sin(h)/h) * cos(x).

If this limit is not 1 (which is the case if you use degrees instead of radians, and in fact this is the reason we use radians!) then the derivative of sin is not cos.
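To spell out where that formula comes from, write the difference quotient and use the angle-addition formula:

\[
\frac{d}{dx}\sin(x) = \lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h} = \sin(x)\lim_{h\to 0}\frac{\cos(h)-1}{h} + \cos(x)\lim_{h\to 0}\frac{\sin(h)}{h}.
\]

The first limit is 0, so the derivative at every x hinges on the second limit, which is exactly the sinc limit.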

Ah. That makes sense from a complex analysis standpoint. 0 is still an accumulation point, with singularities popping up along the imaginary axis. That explains the issue - the behavior at 0 is essentially pathological.

Treated purely as a real function that series is even more interesting, but my real analysis sucks so for the moment I'll have to file it away in the "cool stuff to learn later" bin.

But that was my point: You only need the limit of sinc(h), not the value of sinc(0), and you need this to get the derivative of sin(x) at *any* point, not just at the point where you might apply the hospital rule.

By CCPhysicist (not verified) on 15 Jul 2009 #permalink

You paint the lily not gild it, you gild gold.

Also, it's gild not guild

Shakespeare's King John, 1595:

SALISBURY:
Therefore, to be possess'd with double pomp,
To guard a title that was rich before,
To gild refined gold, to paint the lily,
To throw a perfume on the violet,
To smooth the ice, or add another hue
Unto the rainbow, or with taper-light
To seek the beauteous eye of heaven to garnish,
Is wasteful and ridiculous excess.

By Chris' Wills (not verified) on 18 Jul 2009 #permalink

"The limit of a sum is the sum of the limits, provided those limits exist. This is true even with infinite series (unless I'm badly mistaken, which has happened before). "

Depends.

It is only true within the radius of convergence of the series, and even then it depends on what kind of convergence. There is a theorem due to Riemann that if you rearrange the terms in a conditionally convergent series, you can make it converge to any value you want, or even to not converge at all (called, oddly enough, the Riemann Series theorem). The standard example of this is the alternating harmonic series.
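In its natural order it sums to ln 2, but rearranging it to take two positive terms for every negative one changes the sum:

\[
1 - \tfrac{1}{2} + \tfrac{1}{3} - \tfrac{1}{4} + \cdots = \ln 2,
\qquad
1 + \tfrac{1}{3} - \tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} - \tfrac{1}{4} + \cdots = \tfrac{3}{2}\ln 2.
\]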

Most series we deal with in physics converge uniformly with an infinite radius, which is convenient, and it makes us get used to thinking that infinite series can be treated more or less like numbers or real functions. However, it is worth remembering that the power series for the logarithm converges only within a finite radius, and only conditionally at the one endpoint where it converges at all.

For your statement to apply, you need the function to be analytic, and it isn't hard to cook up one that isn't. The one usually discussed is exp(-1/x^2). All its derivatives are 0 at x=0, so the Taylor series expanded around 0 is 0 for all x. The function, however, is nonzero for x not equal to 0, so it disagrees with its series representation. This issue doesn't arise in the complex plane, since if the Taylor series converges it always converges to the function value.
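Concretely, writing g(x) = exp(-1/x^2) for x not equal to 0 and g(0) = 0, the first derivative at the origin is

\[
g'(0) = \lim_{h\to 0}\frac{e^{-1/h^{2}} - 0}{h} = 0,
\]

and the same faster-than-any-power decay forces every higher derivative at 0 to vanish as well, so the Taylor series about 0 is identically zero even though the function is positive everywhere else.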

So there are two counterexamples. The limit of each term in the series representation is zero but it does not converge to the function value. And in the Riemann example, rearranging the terms does not change their individual limits, but it does change their sum.

By Paul Camp (not verified) on 19 Jul 2009 #permalink

I am a bit late to respond to this contribution but I just discovered this web site. I liked your contributions and I thought the majority of your readers did too. For the sceptics and those who are not altogether happy with the methods given above, I should like to propose another, geometric method to evaluate f(X) = sin(X)/X as X tends to 0. Perhaps this might appeal to some of your readers.

A simple drawing would have simplified things, but that would have been difficult to reproduce here, so I shall explain the sketch verbally. Here it is:

Draw a couple of radii inside a unit circle centred at a point O, such that the radii form an angle X at the centre O. Call these radii R1 and R2, and call the points where R1 and R2 meet the unit circle A and B respectively. Drop a perpendicular from A onto the radius R2, and call the point where this perpendicular meets R2, C. Now we have a right-angled triangle OAC, and by the definition of the sine function: sin(X) = AC/OA, or sin(X) = AC since OA = 1. Again, by the definition of the angle X in radians: X = arc(AB)/OA, or X = arc(AB). Therefore: f(X) = AC/arc(AB).

Now it can easily be observed that AC ≈ arc(AB) as X → 0. Therefore, f(X) → 1. QED.
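The usual way to make that last observation rigorous is the squeeze from the same picture: for 0 < X < pi/2 the height AC, the arc AB, and the tangent segment at B satisfy sin(X) <= X <= tan(X), so

\[
\cos(X) \le \frac{\sin(X)}{X} \le 1,
\]

and since cos(X) tends to 1 as X tends to 0, f(X) is squeezed to 1 from both sides.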