It is time to continue our quest to prove that the sum of the reciprocals of the primes diverges. We have one more ingredient to put into place. I am referring to the notion of a Taylor series. The idea is this: Some functions, like those from trigonometry, are difficult to evaluate precisely. It would be nice to be able to approximate them via some other, more manageable, function. And since polynomials are the most manageable functions there are, why not try one of them?
So, let f(x) be a smooth function we wish to approximate. For simplicity, let us assume that we seek a polynomial that approximates the function in a neighborhood around the point x = 0. For further simplicity, let us see how far we can get with a linear polynomial.
Recall that any straight line has an equation of the form
\[
y=mx+b,
\]
where m is the slope and b is the y-intercept (that is, the value of the function at x = 0). It follows that a straight line can encode two pieces of information: a point and a slope.
Since we want this line to approximate f(x) at x = 0, we shall choose
\[
b=f(0) \phantom{xxx} \textrm{and} \phantom{xxx} m=f'(0).
\]
This is a fancy way of saying we want the straight line and the function to pass through the same point, and to have the same slope, at x = 0. Of course, this is the definition of the tangent line to the function at that point.
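For instance, if we take f(x) = e^x, then f(0) = 1 and f'(0) = 1, so the tangent-line approximation near zero is
\[
e^x \approx 1+x.
\]
Already this is not bad for small x: at x = 0.1 the true value is roughly 1.105.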
Using a quadratic polynomial would permit a better approximation, since we would now be able to encode three pieces of information. Specifically, we could make our polynomial have the same y-coordinate as the function at x = 0, and also have the same values for its first two derivatives. If we write
\[
f(x) \approx c+bx+ax^2
\]
then we have f(0) = c. Evaluating the first derivative gives us
\[
f'(x)=2ax+b.
\]
Evaluating both sides at zero now gives us f'(0) = b. Finally, taking the second derivative and evaluating at zero gives us
\[
f''(x)=2a \phantom{xxx} \textrm{and} \phantom{xxx} \frac{f''(0)}{2}=a.
\]
Putting everything together gives us
\[
f(x) \approx f(0)+f'(0)x+\frac{f''(0)}{2}x^2.
\]
This is a formula for the best quadratic approximation to our function. Of course, the more terms we add to our polynomial, the better an approximation we will get. The basic pattern we have seen thus far continues, leading to the general formula
\[
f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n,
\]
where $f^{(n)}(0)$ refers to the n-th derivative of the function at zero.
The wavy equal sign has been turned into the real thing because, when all goes well, the infinite sum on the right converges to the value of the function on the left for any value of x. Sadly, things often do not go well, and determining precisely which infinitely differentiable functions can be expressed as a Taylor series is a difficult problem. (For real-valued functions, at any rate. It turns out that complex differentiable functions are far better behaved in this regard, but that is a different post.) Happily, for many of the most common functions this procedure works very well.
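If you want to watch the convergence happen numerically, here is a quick sketch (my own little illustration, not part of the argument) comparing partial sums of this series, for f(x) = cos x, with the true value:

```python
import math

def cos_series(x, terms):
    # Partial sum of the series for cos(x):
    # sum over k of (-1)^k * x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x = 1.0
for terms in (1, 2, 3, 5, 8):
    approx = cos_series(x, terms)
    error = abs(approx - math.cos(x))
    print(f"{terms} terms: {approx:.10f}  (error {error:.2e})")
```

Eight terms already agree with cos 1 to better than ten decimal places.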
I should also mention that by restricting our attention to the case x = 0 we are actually considering a special case of a Taylor series known as a Maclaurin series. That will be sufficient for our purposes.
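For completeness, the general Taylor series centered at a point x = a looks just the same, except that the derivatives are evaluated at a and the powers are powers of (x - a):
\[
f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n.
\]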
Working out Taylor series directly from the formula is one of those amusing little exercises with which we torment students in second-semester calculus classes. (Not to mention the subsequent torment for the people tasked with grading such things.) It is tedious in the extreme to work out all of those derivatives. In some cases, however, a little cleverness can get you the series by other means.
For example, if you remember the basic facts about geometric series then you know that we can write
\[
\frac{1}{1+x}=1-x+x^2-x^3+x^4-x^5+\dots
\]
If we now integrate both sides we obtain
\[
\ln (1+x) = x -\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\frac{x^6}{6}+\dots
\]
Of course, as always I am ignoring some technical details here. It is not immediately obvious that you really can integrate infinite series in this way, and we ought to give some thought to the fact that an indefinite integral is only defined up to an arbitrary constant. Suffice it to say that it is, indeed, acceptable to integrate in this way, and it is easy to show that the constant is zero in this case.
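One tidy way to see both points at once is to integrate from 0 to x, so that no arbitrary constant ever appears:
\[
\ln (1+x) = \int_0^x \frac{dt}{1+t} = \int_0^x \left( 1-t+t^2-t^3+\dots \right)\,dt = x -\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\dots
\]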
Now evaluate both sides at x = 1 (this takes a small extra argument, since x = 1 sits right on the edge of where the original geometric series converges, but it does work out) to obtain
\[
\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} -\frac{1}{4}+\frac{1}{5}-\dots.
\]
On the right we have the alternating harmonic series, and this proves a formula I originally mentioned in an earlier post.
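If you would like to see this happen numerically, here is a small sketch (again, just an illustration) of the partial sums creeping toward ln 2. The convergence is famously slow; the error after n terms is on the order of 1/(2n):

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
total = 0.0
for n in range(1, 100001):
    total += (-1) ** (n + 1) / n
    if n in (10, 100, 1000, 100000):
        print(f"{n:>6} terms: {total:.8f}   (ln 2 = {math.log(2):.8f})")
```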
As it happens, the Taylor series for the natural logarithm function is precisely the ingredient we need to complete our proof that the sum of the reciprocals of the primes diverges. Stay tuned for the big finale of this series, coming next week!
In the quadratic approximation, isn't the argument supposed to be 0 in the third term and not x?
Tyler -
Sharp eye! Thanks for pointing out the error. It has been corrected.
Looking forward to the last step, Jason!
One little question: is a post about when/where Taylor series don't work forthcoming, or no? My impression was that they always work, so long as you use enough terms. All I can think of is that maybe they fail for discontinuous functions, which makes me think of the Gibbs phenomenon. But, then again, I don't think Taylor series ought to be expected to work for discontinuous functions, since those might not really have derivatives at the discontinuity. Or am I totally off base?
cheglabratjoe --
I think you might be confusing Taylor series with Fourier series. Any periodic function can be expressed as a Fourier series, so you are right that if you add up enough terms of the series you can approximate your function to any degree of accuracy you like. As you note, the Gibbs phenomenon then has to do with how the series behaves in the neighborhood of a “jump” discontinuity.
Taylor series only apply to infinitely differentiable functions, but not all such functions can be expressed as a Taylor series. The ever useful Wikipedia provides an example.
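(If memory serves, the example there is the classic
\[
f(x) = \begin{cases} e^{-1/x^2} & \textrm{if } x \neq 0, \\ 0 & \textrm{if } x = 0, \end{cases}
\]
which is infinitely differentiable everywhere, but has every derivative equal to zero at the origin. Its Maclaurin series is therefore identically zero, and so it agrees with the function only at the single point x = 0.)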
It is easy to lose sight of this fact since the Calc II propaganda gives the impression that the method always works. It is sort of like the way generations of algebra students have come away thinking that factoring is a real crackerjack way of solving quadratic equations, even though that method rarely works in practice!
You made the comment that calculating Taylor series is a real hassle in Calculus II. Yes, definitely. But it is often possible to find Taylor series without taking derivatives, and in fact if you do it properly you don't have to deal with anything more complicated than Cauchy products (a product of two Taylor series).
For example, consider the classic sin(x). Let
\[
y(x)=\sin(x) \phantom{xx} \textrm{and} \phantom{xx} z(x)=\cos(x).
\]
Then
\[
y'=\cos(x) \phantom{xx} \textrm{and} \phantom{xx} z'=-\sin(x),
\]
or
\[
y'=z \phantom{xx} \textrm{and} \phantom{xx}
z'=-y \phantom{xx} \textrm{with} \phantom{xx} y(0)=0 \phantom{xx}
\textrm{and} \phantom{xx} z(0)=1.
\]
Substitute power series, equate coefficients, and out comes the answer with no effort whatsoever. The same approach can be used for ANY function that can be represented as the solution to a differential equation, though not every analytic function arises that way. For example,
\[
f(x)= \sum_{i=1}^\infty x^i \sqrt{i}
\]
is analytic, but doesn't satisfy an ODE that can be written down in finite space.
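To make the coefficient-matching concrete, here is a quick sketch of the sin/cos case (the variable names are my own). Writing $y=\sum a_n x^n$ and $z=\sum b_n x^n$, the relations $y'=z$ and $z'=-y$ force
\[
(n+1)a_{n+1}=b_n \phantom{xx} \textrm{and} \phantom{xx} (n+1)b_{n+1}=-a_n,
\]
with $a_0=0$ and $b_0=1$. A few lines of code then recover the familiar coefficients of sin(x):

```python
from math import factorial

N = 10
a = [0.0] * (N + 1)  # Maclaurin coefficients of y = sin(x)
b = [0.0] * (N + 1)  # Maclaurin coefficients of z = cos(x)
a[0], b[0] = 0.0, 1.0  # initial conditions y(0) = 0, z(0) = 1

for n in range(N):
    a[n + 1] = b[n] / (n + 1)    # from y' = z
    b[n + 1] = -a[n] / (n + 1)   # from z' = -y

# Compare with the known series sin(x) = x - x^3/3! + x^5/5! - ...
for n in range(N + 1):
    expected = 0.0 if n % 2 == 0 else (-1) ** ((n - 1) // 2) / factorial(n)
    print(n, a[n], expected)
```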
Yeah, it looks like I fell for the propaganda. I thought maybe the Taylor series were *like* Fourier series in that adding enough terms would always eventually work perfectly, outside of special situations like discontinuous functions.
I saw what you linked to in the Wikipedia entry, but assumed the example about the Taylor series of log(1+x) was merely demonstrating that the approximation *around zero* got bad as you moved away from zero. I now see that the approximation *around a given point beyond +/-1* gets worse as you add terms. Thanks for clearing that up ... silly mistake in hindsight. (Aren't they all!)
Is there a way to know ahead of time if the Taylor expansion of a function won't behave well, or do you just have to give it a whirl? I saw something about looking at the form of the remainder term (R_n) on the Wolfram site, but it got beyond me fast, as that site often does.
Well, there are conditions on the derivatives that guarantee that an infinitely differentiable function is analytic (i.e., has a convergent Taylor series). There are necessary and sufficient growth conditions, I believe.
But my favorite sufficient condition is that if the function and its derivatives of every order are non-negative on an interval, then the function is analytic.
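For what it is worth, the growth condition I have in mind (stated from memory, so check the details) says that f is analytic on an open interval precisely when, on each compact subinterval, there are constants C and R with
\[
\left| f^{(n)}(x) \right| \le C\,R^n\,n! \phantom{xxx} \textrm{for all } n.
\]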