I’m in the mood for some math today, so here’s an amusing little proof I recently showed to my History of Mathematics class. We shall derive the formula

\[

\frac{\pi^2}{6}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\frac{1}{25}+\dots

\]

Note that the denominators of the fractions on the right are all perfect squares.

The problem of evaluating the sum on the right has a pedigree going back to the 1600s, when various mathematicians, including the famed Bernoullis, tried unsuccessfully to solve it. It was Leonhard Euler who polished it off at the age of 28 in 1735, thereby announcing himself as a force to be reckoned with in mathematics.

Euler’s solution is one of those exceedingly clever arguments which, if you have any taste for mathematics at all, just has to bring a smile to your face. We need two main pieces of machinery. The first is the Taylor series for the sine function. If you can think back to whenever you took calculus, you might recall that it looks like this:

\[

\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\frac{x^9}{9!}-\dots

\]

If we divide through by *x* we obtain:

\[

\frac{\sin{x}}{x}=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\frac{x^8}{9!}-\dots

\tag{*}

\]

(The asterisk is there just to make it easier to refer to this equation later on.)
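None of this is part of Euler's argument, of course, but if you would like to see the series earn its keep numerically, here is a quick sanity check in Python (`sinc_series` is just my name for the partial sum):

```python
import math

# Partial sum of the Taylor series for sin(x)/x:
# 1 - x^2/3! + x^4/5! - x^6/7! + ...
def sinc_series(x, terms=10):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n + 1)
               for n in range(terms))

x = 1.3
# Ten terms already agree with sin(x)/x to machine precision.
assert abs(sinc_series(x) - math.sin(x) / x) < 1e-12
```

The factorials in the denominators grow so fast that only a handful of terms are needed for any modest value of $x$.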

The next step is to factor the infinite polynomial on the right. If you have any qualms about treating infinite polynomials and infinite products in the same way we treat their finite counterparts, then get over them. Euler didn’t sweat those details, and if this argument was good enough for him then it’s good enough for me! (For the record, though, everything Euler did can be made rigorous with a bit of skill and patience.)

So let us think back to our high school algebra days and remind ourselves about the basics of factoring. Given any polynomial, if plugging in a number *r* returns the value zero then we say that *r* is a *root* of the polynomial. You might recall that finding roots of a polynomial is effectively the same as finding the factors of that polynomial. This is why factoring is the first method you learn for finding roots of polynomials. A simple example would be something like this:

\[

x^2-5x+6=(x-2)(x-3)

\]

We now look at the individual factors on the right, and notice that the first is equal to zero when *x=2* and the second is equal to zero when *x=3*. Thus, the roots of this polynomial are *2* and *3*.
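If you want to see this concretely (not that anyone should need convincing), here is a two-line check that the expanded and factored forms agree and that the roots really do kill the polynomial:

```python
# The expanded and factored forms agree everywhere,
# and each root makes the polynomial vanish.
p = lambda x: x**2 - 5*x + 6
factored = lambda x: (x - 2) * (x - 3)

assert all(p(x) == factored(x) for x in range(-10, 11))
assert p(2) == 0 and p(3) == 0
```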

We can also do this in reverse. Suppose I tell you that I am thinking of a polynomial, which I shall call *p(x)*. I also tell you that the roots of this polynomial are the four numbers *a*, *b*, *c* and *d*. Then you could immediately write down:

\[

p(x)=(x-a)(x-b)(x-c)(x-d)

\]

Now let’s take this one step further. Suppose that in addition to telling you what the roots are, I also tell you that I want this polynomial to have the property that when I plug in the number zero for *x* it takes on the value one. Since plugging zero into the product above gives $(-a)(-b)(-c)(-d)=abcd$, you could achieve this by dividing the right-hand side by *abcd* to obtain:

\[

p(x)=\left( \frac{x-a}{a} \right) \left( \frac{x-b}{b} \right) \left( \frac{x-c}{c} \right) \left( \frac{x-d}{d} \right)

\]

If we simplify each of the four factors, writing $\frac{x-a}{a}$ as $-\left(1-\frac{x}{a}\right)$, the four minus signs cancel and we obtain:

\[

p(x)=\left(1-\frac{x}{a} \right) \left( 1-\frac{x}{b} \right) \left(1- \frac{x}{c} \right) \left(1-\frac{x}{d} \right)

\]

Notice that this last polynomial still has its roots at *a, b, c* and *d*, but now it takes on the value one when we plug in zero for *x*. This formula is the second piece of machinery we need.
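To make the two defining properties vivid, here is a small Python sketch with some arbitrarily chosen sample roots (the roots themselves are just my invention for the check):

```python
from functools import reduce

# Sample roots, chosen arbitrarily for the check.
roots = [2, -3, 5, 7]

# The normalized product: one factor (1 - x/r) per root.
def p(x):
    return reduce(lambda acc, r: acc * (1 - x / r), roots, 1.0)

assert p(0) == 1.0                    # takes the value one at zero
assert all(p(r) == 0 for r in roots)  # vanishes at each root
```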

Let us apply this new-found wisdom to equation *. To factor the infinite polynomial we need to determine its roots. Since the polynomial is equal to $\frac{\sin x}{x}$, its roots will simply be the values of *x* that make the sine function equal to zero. And if you remember your high school trig, you might recall that the values we seek are zero and the integer multiples of pi. That is, the sine function is equal to zero when you plug in:

\[

0, \phantom{x} \pm \pi, \phantom{x} \pm 2 \pi, \phantom{x} \pm 3 \pi, \phantom{x} \dots

\]

We can throw out zero as a possible root, since plugging zero into our polynomial returns the value one, not zero. Applying our factorization formula to the right-hand side of equation * now gives us:

\[

1-\frac{x^2}{3!}+\frac{x^4}{5!}-\dots=

\left(1-\frac{x}{\pi} \right) \left( 1+\frac{x}{\pi} \right)

\left(1- \frac{x}{2 \pi} \right) \left(1+\frac{x}{2 \pi} \right) \dots

\]

Now let me remind you of another factorization formula you once knew but may have forgotten:

\[

a^2-b^2=(a+b)(a-b).

\]

Applying this to each successive pair of factors above now gives:

\[

1-\frac{x^2}{3!}+\frac{x^4}{5!}-\dots=

\left( 1-\frac{x^2}{\pi^2} \right) \left(1-\frac{x^2}{4 \pi^2} \right) \left( 1-\frac{x^2}{9\pi^2}\right) \dots

\]
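Euler certainly never ran this on a computer, but partial products of his factorization really do close in on $\frac{\sin x}{x}$, as this quick check shows (the convergence is slow, so the tolerance is modest):

```python
import math

# Partial products of Euler's factorization of sin(x)/x:
# (1 - x^2/pi^2)(1 - x^2/(4 pi^2))(1 - x^2/(9 pi^2))...
def sinc_product(x, factors=100000):
    prod = 1.0
    for k in range(1, factors + 1):
        prod *= 1 - x**2 / (k**2 * math.pi**2)
    return prod

x = 1.3
# The tail of the product dies off like 1/k^2, so convergence is slow.
assert abs(sinc_product(x) - math.sin(x) / x) < 1e-4
```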

We’re almost home! Let us compare the coefficients of the $x^2$ term on the two sides of the equation. We obtain this:

\[

-\frac{1}{3!}=-\left(\frac{1}{\pi^2}+\frac{1}{4\pi^2}+\frac{1}{9\pi^2}+\frac{1}{16\pi^2}+\dots \right)

\]

The right-hand side was obtained as follows: The process of multiplying out the binomials in our infinite product involves choosing one term out of each factor in all possible ways (with the proviso that we choose the $1$ out of all but finitely many of the factors). Notice, now, that every factor already has an $x^2$ in it. Therefore, the only way I will obtain an $x^2$ term in the sum (after multiplying everything out) is to choose the $x^2$ term out of one of the factors, and the $1$ out of all the others. That is why the sum on the right above ends up just being the sum of the coefficients of $x^2$ in the individual factors.
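This coefficient-matching step is easy to verify for finitely many factors. The sketch below (my own bookkeeping, not Euler's) works in the variable $y=\frac{x^2}{\pi^2}$, so each factor becomes the coefficient list $[1, -\frac{1}{k^2}]$, and checks that the $y$-coefficient of the product is exactly minus the sum of the $\frac{1}{k^2}$:

```python
from fractions import Fraction

# Polynomials in y = x^2/pi^2 as coefficient lists; the factor
# (1 - x^2/(k^2 pi^2)) becomes [1, -1/k^2].
def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

N = 6
prod = [Fraction(1)]
for k in range(1, N + 1):
    prod = poly_mul(prod, [Fraction(1), Fraction(-1, k * k)])

# The y-coefficient of the product is minus the sum of the 1/k^2.
assert prod[1] == -sum(Fraction(1, k * k) for k in range(1, N + 1))
```

Exact rational arithmetic via `Fraction` keeps the comparison honest, with no floating-point fudge.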

Keeping in mind that $3!=6$, factoring the $\frac{1}{\pi^2}$ out of the right-hand side and then multiplying both sides by $-\pi^2$ leads to the conclusion that

\[

\frac{\pi^2}{6}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\dots

\]

as desired. Pretty clever!
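And if you want one last bit of reassurance, the partial sums really do creep up toward $\frac{\pi^2}{6}\approx 1.6449$:

```python
import math

# Partial sums of 1 + 1/4 + 1/9 + ... approach pi^2/6 from below.
partial = sum(1 / n**2 for n in range(1, 10**6 + 1))

assert partial < math.pi**2 / 6             # always from below
assert abs(partial - math.pi**2 / 6) < 1e-5  # tail ~ 1/N
```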

Of course, readers familiar with this sort of thing will realize that the Basel problem is related to the famous Riemann zeta function, but that’s a different post…