Euler's Solution to the Basel Problem

I'm in the mood for some math today, so here's an amusing little proof I recently showed to my History of Mathematics class. We shall derive the formula


\[
\frac{\pi^2}{6}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\frac{1}{25}+\dots
\]



Note that the denominators of the fractions on the right are all perfect squares.
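
Before we get to the proof, it's worth seeing the convergence for yourself. Here's a minimal Python sketch (a numerical sanity check of my own, not part of Euler's argument) that sums the first hundred thousand terms:

```python
# Partial sums of 1/n^2 versus the claimed limit pi^2/6.
import math

target = math.pi ** 2 / 6
partial = sum(1 / n ** 2 for n in range(1, 100_001))
print(f"pi^2/6         = {target:.10f}")
print(f"sum to n = 1e5 = {partial:.10f}")
```

The tail of the series beyond n = N is roughly 1/N, so a hundred thousand terms buy you only about five correct digits; the series converges, but slowly.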

The problem of evaluating the sum on the right has a pedigree going back to the 1600s, when various mathematicians, including the famed Bernoullis, tried unsuccessfully to solve it. It was Leonhard Euler who polished it off at the age of 28 in 1735, thereby announcing himself as a force to be reckoned with in mathematics.

Euler's solution is one of those exceedingly clever arguments which, if you have any taste for mathematics at all, just has to bring a smile to your face. We need two main pieces of machinery. The first is the Taylor series for the sine function. If you can think back to whenever you took calculus, you might recall that it looks like this:


\[
\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\frac{x^9}{9!}-\dots
\]


If we divide through by x we obtain:


\[
\frac{\sin{x}}{x}=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\frac{x^8}{9!}-\dots
\tag{*}
\]


(The asterisk is there just to make it easier to refer to this equation later on.)
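
If you'd like to check that the series really does track sin(x)/x, here's a quick Python sketch (the helper name sinc_series is my own, just for illustration):

```python
# Compare the truncated series for sin(x)/x against the true value.
import math

def sinc_series(x, terms=6):
    """Partial sum of 1 - x^2/3! + x^4/5! - ... (first `terms` terms)."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 2.0
print(sinc_series(x))      # series approximation
print(math.sin(x) / x)     # exact value, for comparison
```

Even six terms agree with the true value to about six decimal places at x = 2, since the factorials in the denominators grow so quickly.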

The next step is to factor the infinite polynomial on the right. If you have any qualms about treating infinite polynomials and infinite products in the same way we treat their finite counterparts, then get over them. Euler didn't sweat those details, and if this argument was good enough for him then it's good enough for me! (For the record, though, everything Euler did can be made rigorous with a bit of skill and patience.)

So let us think back to our high school algebra days and remind ourselves about the basics of factoring. Given any polynomial, if plugging in a number r returns the value zero then we say that r is a root of the polynomial. You might recall that finding roots of a polynomial is effectively the same as finding the factors of that polynomial. This is why factoring is the first method you learn for finding roots of polynomials. A simple example would be something like this:


\[
x^2-5x+6=(x-2)(x-3)
\]


We now look at the individual factors on the right, and notice that the first is equal to zero when x=2 and the second is equal to zero when x=3. Thus, the roots of this polynomial are 2 and 3.
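
In code, the claim takes one line to check (a trivial sketch, but reassuring):

```python
# Check that 2 and 3 really are the roots of x^2 - 5x + 6.
p = lambda x: x ** 2 - 5 * x + 6
print(p(2), p(3))   # 0 0
print(p(1), p(4))   # 2 2 (nonzero away from the roots)
```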

We can also do this in reverse. Suppose I tell you that I am thinking of a polynomial, which I shall call p(x). I also tell you that the roots of this polynomial are the four numbers a, b, c and d. Then you could immediately write down:


\[
p(x)=(x-a)(x-b)(x-c)(x-d)
\]


Now let's take this one step further. Suppose in addition to telling you what the roots are, I also tell you that I want this function to have the property that when I plug in the number zero for x the polynomial takes on the value one. You could achieve this by dividing the right hand side by abcd to obtain:


\[
p(x)=\left( \frac{x-a}{a} \right) \left( \frac{x-b}{b} \right) \left( \frac{x-c}{c} \right) \left( \frac{x-d}{d} \right)
\]


If we simplify each of the four factors, each takes the form $\frac{x}{a}-1$. Factoring a $-1$ out of each of them (the four minus signs multiply together to give $1$), we obtain:


\[
\left(1-\frac{x}{a} \right) \left( 1-\frac{x}{b} \right)
\left(1- \frac{x}{c} \right) \left(1-\frac{x}{d} \right)
\]


Notice that this last polynomial still has its roots at a, b, c and d, but now it takes on the value one when we plug in zero for x. This formula is the second piece of machinery we need.
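
Here's a small Python sketch of the normalization trick, using four made-up roots:

```python
# The product (1 - x/a)(1 - x/b)(1 - x/c)(1 - x/d) has the prescribed
# roots and takes the value 1 at x = 0.
roots = [2.0, 3.0, 5.0, 7.0]  # arbitrary illustrative values

def p(x):
    prod = 1.0
    for r in roots:
        prod *= 1 - x / r
    return prod

print(p(0))                    # 1.0
print([p(r) for r in roots])   # [0.0, 0.0, 0.0, 0.0]
```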

Let us apply this new-found wisdom to equation *. To factor the infinite polynomial we need to determine its roots. The roots will simply be the values of x that make the sine function equal to zero, and if you remember your high school trig, you might recall that those are precisely the integer multiples of pi. That is, the sine function is equal to zero when you plug in:


\[
0, \phantom{x} \pm \pi, \phantom{x} \pm 2 \pi, \phantom{x} \pm 3 \pi, \phantom{x} \dots
\]


We can throw out zero as a possible root, since that returns the value one when plugged into our polynomial. Applying our factorization formula to the right hand side of equation * now gives us:


\[
1-\frac{x^2}{3!}+\frac{x^4}{5!}-\dots=
\left(1-\frac{x}{\pi} \right) \left( 1+\frac{x}{\pi} \right)
\left(1- \frac{x}{2 \pi} \right) \left(1+\frac{x}{2 \pi} \right) \dots
\]


Now let me remind you of another factorization formula you once knew but may have forgotten:


\[
a^2-b^2=(a+b)(a-b).
\]


Applying that above now gives:


\[
1-\frac{x^2}{3!}+\frac{x^4}{5!}-\dots=
\left( 1-\frac{x^2}{\pi^2} \right) \left(1-\frac{x^2}{4 \pi^2} \right) \left( 1-\frac{x^2}{9\pi^2}\right) \dots
\]
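
This product formula is the heart of the proof, and it too can be checked numerically. A minimal Python sketch, truncating the product after 1000 factors (the product converges slowly, so expect agreement only to three or four digits):

```python
# Compare a truncated version of Euler's product with sin(x)/x.
import math

def sinc_product(x, factors=1000):
    prod = 1.0
    for n in range(1, factors + 1):
        prod *= 1 - x ** 2 / (n ** 2 * math.pi ** 2)
    return prod

x = 1.3
print(sinc_product(x))    # truncated product
print(math.sin(x) / x)    # exact value
```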


We're almost home! Let us compare the coefficients of the $x^2$ term on the two sides of the equation. We obtain this:


\[
-\frac{1}{3!}=-\left(\frac{1}{\pi^2}+\frac{1}{4\pi^2}+\frac{1}{9\pi^2}+\frac{1}{16\pi^2}+\dots \right)
\]



The right-hand side was obtained as follows: multiplying out the binomials in our infinite product involves choosing one term out of each factor in all possible ways (with the proviso that we choose the $1$ out of all but finitely many of the factors). Notice, now, that every factor already has an $x^2$ in it. Therefore, the only way to obtain an $x^2$ term after multiplying everything out is to choose the $x^2$ term out of one of the factors and the $1$ out of all the others. That is why the sum on the right above ends up being just the sum of the coefficients of $x^2$ in the individual factors.
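
You can watch this coefficient bookkeeping happen symbolically. Here's a sketch using sympy, with the product truncated at five factors (the variable names are mine):

```python
# Expand a truncated product and read off the x^2 coefficient; it should
# be -(1/pi^2 + 1/(4 pi^2) + ... + 1/(25 pi^2)).
import sympy as sp

x = sp.symbols('x')
N = 5  # number of factors kept

product = sp.Integer(1)
for n in range(1, N + 1):
    product *= 1 - x ** 2 / (n ** 2 * sp.pi ** 2)

coeff = sp.expand(product).coeff(x, 2)
expected = -sum(sp.Rational(1, n ** 2) for n in range(1, N + 1)) / sp.pi ** 2
print(sp.simplify(coeff - expected))   # 0
```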

Keeping in mind that $3!=6$, we now cancel the minus signs, factor $\frac{1}{\pi^2}$ out of the right-hand side, and multiply both sides by $\pi^2$ to conclude that


\[
\frac{\pi^2}{6}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\dots
\]


as desired. Pretty clever!

Of course, readers familiar with this sort of thing will realize that the Basel problem is related to the famous Riemann zeta function, but that's a different post...


I think I'm missing something, but then it's been nearly 30 years since I got my math degree, so I definitely could be wrong (wouldn't be the first time)... but is p(x)^2 not also a function that evaluates to 1 at 0, and 0 at a, b, c and d, yet is certainly not the same function as p(x)? In fact, aren't there an infinite number of functions that evaluate to 1 at 0, and 0 at the non-zero integer multiples of pi, only one of which is sin(x)/x? Is there a missing step that shows that sin(x)/x is equal to (1 - x^2/pi^2)(1 - x^2/4pi^2)(1 - x^2/9pi^2)... and not, for instance, (1 - x^2/pi^2)^2(1 - x^2/4pi^2)^2(1 - x^2/9pi^2)^2...?

Good stuff. I'm hoping you'll bring back "Monday Math" at some point. (And I also hope you'll write about the Riemann Hypothesis.)

is p(x)^2 not also a function that evaluates to 1 at 0, and 0 at a, b, c and d, that is certainly not the same function as p(x)?

This is true. But in this case the zeroes of p(x)^2 are all double roots, thus all of the factors are repeated. The trick (AFAICT) relies on the fact that all of the roots of the infinite polynomial are distinct, so that there are no repeated factors. There are infinitely many functions with the same zeroes as sin(x)/x, but any other such function will have at least one repeated factor.

By Eric Lund (not verified) on 20 Apr 2012 #permalink

The difference in convergence between the two series is amazing.

Using Excel, the pi^2/6 series is within 1% of its final value after 100 terms. The sin(x)/x expansion gets within 1% after just 2 terms (for x = 0.5).
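
Here's a minimal Python version of that Excel experiment, for anyone who wants to reproduce the numbers:

```python
# Relative error of each series after a given number of terms.
import math

# Basel series: 100 terms of 1/n^2 versus pi^2/6.
basel_target = math.pi ** 2 / 6
basel_100 = sum(1 / n ** 2 for n in range(1, 101))
print(abs(basel_100 - basel_target) / basel_target)   # about 0.6%

# Taylor series for sin(x)/x at x = 0.5: just 2 terms.
x = 0.5
taylor_target = math.sin(x) / x
taylor_2 = sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k + 1)
               for k in range(2))
print(abs(taylor_2 - taylor_target) / taylor_target)  # about 0.05%
```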

@KeithB
You can see why this is so pretty easily.
In the pi^2/6 expansion, the coefficients are all positive, and they trend toward being the same size - the ratio of the nth term to the (n-1)th is (n-1)^2/n^2, which tends to 1 as n gets large. Both facts make the convergence slow.
In the sin(x)/x expansion, the coefficients alternate in sign, and the ratio of the nth term to the (n-1)th is x^2(2n-1)!/(2n+1)! = x^2/(2n(2n+1)), which gets steadily smaller as n gets large, at least for moderate values of x. Both facts make the convergence fast.


One last comment, and I'll stop being such a wiseass. Try Wolfram Alpha (Wolfram Mathematica) and you can get good graphical representations of the series - also useful if your math is as rusty as mine.

@4

The trick (AFAICT) relies on the fact that all of the roots of the infinite polynomial are distinct, so that there are no repeated factors.

The point of using p(x)^2 as an example was to demonstrate that there are an infinite number of functions with the same 0s. The fact that my example has repeated 0s is irrelevant - that sin(x)/x has no repeated 0s is not shown, and I don't think that it would make any difference if it had been. My choice for an example may not have been the best, but my point was that merely showing that two functions intersect in a few (admittedly, countably infinite) places is insufficient to say that they are in fact the same function. The functions f and g are equal iff f and g have the same domain, and f(x) = g(x) for all x in that domain. We're constructing the product based on a small subset of the (uncountably infinite) domain of sin(x)/x. We know that at all integer multiples of pi, sin(x)/x = [the product], but we've said nothing about where x is not an integer multiple of pi.

I'm not disputing Euler (Euler's the dude - I even named my last laptop after him). I'm just missing the step that proves that sin(x)/x is in fact equal to the constructed infinite product.

Monkboon, I might state your objection in a slightly different way. I'm not a mathematician but I play one in my computer music classes, so my apologies if this isn't formal or if it's plain wrong.

The left side is the polynomial expansion of the sinc function, and since it's a polynomial we know that finding all of the roots will produce the function exactly (more or less - there are a few exceptions which won't come into play, e.g. if it has a non-real root). So all that needs to be shown is that the roots are +/- the multiples of pi (other than 0, which has been shown), and that none of those roots is duplicated.

To show that, we know that if a root has a greater multiplicity than 1, the graph will be tangent to the x axis at that 0 (reflecting for an even multiplicity and passing through and changing inflection for an odd multiplicity). But this is not the case for the sinc function, and that's easily provable.
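
Here's a quick numerical sketch of that check: a simple zero means the derivative is nonzero there, and for sin(x)/x the derivative at n*pi works out to cos(n*pi)/(n*pi):

```python
# The zeros of sin(x)/x at n*pi are simple: the derivative there is
# cos(n*pi)/(n*pi) = (-1)^n/(n*pi), which is never zero.
import math

def sinc_derivative(x):
    # d/dx [sin(x)/x] = cos(x)/x - sin(x)/x^2
    return math.cos(x) / x - math.sin(x) / x ** 2

for n in range(1, 6):
    print(n, sinc_derivative(n * math.pi))   # alternates in sign, never 0
```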

The last thing to show is that the whole infinite product does not need to be multiplied through by some constant other than 1 to get the sinc polynomial. It's clear that the 1 term in all the factors on the right will give the 1 term on the left, and I think this would not be so if there were a constant multiplier on the right side.

A side note - the polynomial expression of the sinc function is very useful in its own right, because it gives the value 1 at x = 0 directly; in the trigonometric formulation that value has to be determined by a limit because of the 0 in the denominator.

By Another Matt (not verified) on 20 Apr 2012 #permalink

By the way, going through some of these Euler proofs reminds me of going through one of those Bobby Fischer games where he sees a line of attack 5 moves in advance.

OK... OK.... OK?...... What the hell.

By Another Matt (not verified) on 20 Apr 2012 #permalink

By the way, is there a way to use math notation in the comments, or will it only be interpreted in the main post?

Test, which uses the markup from the original post:

\[ \frac{\pi^2}{6}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\frac{1}{25}+\dots \]

By Another Matt (not verified) on 21 Apr 2012 #permalink

I can't see a way to make this a valid proof; the problem is in the step $\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n}}{(2n+1)!}=\prod_{n=1}^{\infty}\left(1-\frac{x}{n\pi}\right)\left(1+\frac{x}{n\pi}\right)$. Power series are not infinite polynomials.


Nyaya, the only assumption seems to be that one can factor an infinite polynomial. If you can, the result would be an infinite product. So all that is left is to find the roots and any constant multipliers, which happen to be inferable from the behavior of the sinc function the infinite polynomial represents.

But perhaps your objection is a little deeper - I don't have the chops to prove that an infinite polynomial like this one can be factored!

By Another Matt (not verified) on 25 Apr 2012 #permalink

@Monkboon

Since we are taking the Taylor series of the function, we are limiting ourselves to analytic functions; this is important. There may be many functions (such as piecewise functions) that have the same zeros and the same scaling, but they are not analytic, and thus do not have a Taylor series representation. I can't seem to find a rigorous proof in my books, but I posit that any two analytic functions with the same poles (in this case there are none) and the same zeros are indeed identical, up to scaling. I can't say for sure that this is the case, but I would love to hear it if someone has a counter-example.

@Andy: a good posit, but f(z)=e^z vs. g(z)=1 is the simplest counter-example, with no poles and no zeroes for either of them and f(0)=g(0). If you look up "Weierstrass factorization theorem" on Wikipedia, it says that the three parts that need to match are the poles, the zeroes, and a non-zero analytic function (like e^z in this case).

The same issue comes when you try to extend the factorial function to the gamma function. There's always this non-zero analytic factor that can get in the way. But I think it's remarkable that most of the famous root-chasing identities (like the Basel problem here, and the gamma(z)*gamma(1-z)= pi/sin(pi*z), also due to Euler) have an analytic factor of one.
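
That reflection formula is easy to check numerically with Python's standard library (a quick sketch):

```python
# Check gamma(z) * gamma(1 - z) = pi / sin(pi * z) at a few points.
import math

for z in (0.3, 0.5, 0.7, 1.25):
    lhs = math.gamma(z) * math.gamma(1 - z)
    rhs = math.pi / math.sin(math.pi * z)
    print(z, lhs, rhs)   # the two columns agree
```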

By Steven Morrell (not verified) on 01 May 2012 #permalink