Sunday Function

Consider this not-so-difficult sum:

$$f(N) = \sum_{n=1}^{N}\frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots \pm \frac{1}{N}$$
It consists of just a string of fractions up to whichever N you happen to choose. Add them up, and you certainly and unambiguously have a number. If you chose to stop at N = 10, you'd find that f(10) = 1627/2520, which is about 0.645635. If you chose to stop at N = 1000, you'd find f(1000) ≈ 0.692647. If you've taken calculus in college, you can show without too much trouble that as you make N larger and larger, f(N) closes in on the value 0.6931471806..., which happens to be the natural logarithm of the number 2. As such you could say the sum of the infinite series, with N taken to infinity, is ln(2).
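None of this needs to be taken on faith; a few lines of Python (my addition, not part of the original post) reproduce the partial sums quoted above, using exact fractions for the small case:

```python
import math
from fractions import Fraction

def f(N):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# Exact arithmetic reproduces the fraction quoted above for N = 10.
exact = sum(Fraction((-1) ** (n + 1), n) for n in range(1, 11))
print(exact)        # 1627/2520
print(f(10))        # ≈ 0.645635
print(math.log(2))  # 0.6931471805599453
```

Pushing N higher shows f(N) creeping toward ln(2) from alternating sides, as the alternating series test promises.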

You could say that. But you could also say this: rearrange the terms of the series. Don't remove any of them, don't add any, just shuffle them around a bit. Like this:

$$1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \cdots$$
What you now have is the terms arranged in blocks of three. You have one positive term followed by two negative terms, and that arrangement repeats. Each block of three terms is of this form, where m = 1, 2, 3...

$$\frac{1}{2m-1} - \frac{1}{2(2m-1)} - \frac{1}{4m}$$
If that looks a little confusing, just plug in m = 1. You get 1 - (1/2) - (1/4), which is just the first block of three terms. Each subsequent m will generate the next block of three. But it is also true by the basic rules of fractions that:

$$\frac{1}{2m-1} - \frac{1}{2(2m-1)} = \frac{1}{2(2m-1)}$$
Which means each of those blocks of three is the same as this block of two:

$$\frac{1}{2(2m-1)} - \frac{1}{4m} = \frac{1}{2}\left(\frac{1}{2m-1} - \frac{1}{2m}\right)$$
Which means our infinite series can be rewritten as:

$$\sum_{m=1}^{\infty}\frac{1}{2}\left(\frac{1}{2m-1} - \frac{1}{2m}\right) = \frac{1}{2}\left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\right)$$
Which is half of the original series we wrote down. So the sum of the infinite series, by inexorable logic, is both ln(2) and ln(2)/2. How is this possible?

Of course it isn't. The flaw in our logic is the assumption that the series has a definite sum - in the mathematical parlance, that it's absolutely convergent. This series is not; it's only conditionally convergent. In fact you can show (the great G.F.B. Riemann was the first) that with judicious rearrangement, you can get this series to converge to any value you like, or even to diverge. As such it's only meaningful to talk about the sum of this series if you specify the particular ordering you happen to be working with. For finite N the ordering doesn't matter so long as you include the same terms, but you can't do the calculus to find the infinite-N limit without a specific ordering.
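Riemann's proof is constructive, and you can watch it work: take positive terms until you overshoot your target, then negative terms until you undershoot, and repeat forever. Here's a minimal Python sketch of that greedy procedure (my addition; the function name and term count are arbitrary choices):

```python
import math

def rearrange_to(target, n_terms=100000):
    """Greedy Riemann rearrangement of the alternating harmonic series.

    The positive terms 1, 1/3, 1/5, ... and negative terms -1/2, -1/4, ...
    are each used exactly once: while the running sum is below the target,
    take the next unused positive term; otherwise take the next unused
    negative term. The partial sums converge to the target.
    """
    pos, neg = 1, 2  # next odd / even denominator to use
    total = 0.0
    for _ in range(n_terms):
        if total < target:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
    return total

print(rearrange_to(math.pi))  # ≈ 3.14159...
```

It works because the positive terms alone diverge (so you can always climb back up to any target), the negative terms alone diverge (so you can always come back down), and the individual terms shrink to zero (so the overshoots get ever smaller).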

It's unusual to encounter this sort of series in physics. Most of our series are either absolutely convergent or simply divergent by any standard. But math is weird, and you can't always assume that things work the way you intuitively expect. You have to rigorously check your assumptions.
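For the skeptical, the rearranged ordering really does creep toward ln(2)/2 rather than ln(2); a quick numerical check in Python (my addition, not part of the original post), summing block by block:

```python
import math

def rearranged(blocks):
    """Sum the rearranged series in blocks of three:
    block m contributes 1/(2m-1) - 1/(2(2m-1)) - 1/(4m)."""
    return sum(1 / (2 * m - 1) - 1 / (2 * (2 * m - 1)) - 1 / (4 * m)
               for m in range(1, blocks + 1))

print(rearranged(100000))  # ≈ 0.34657...
print(math.log(2) / 2)     # 0.34657359...
```

Same terms, different order, different limit - exactly as the algebra above predicts.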


Michael, not really; conditional and non-absolute convergence issues occur with the complex numbers also. In some respects things get trickier in C. For example, consider a Taylor series with some radius of convergence r. If the series is over the reals, then you only have two endpoints where you might need to worry about conditional as opposed to absolute convergence. If you are on the whole complex plane then you have uncountably many such points. This becomes relevant in certain contexts, such as when deciding whether a given series has an analytic continuation.

I know it's a divergent series, rather than conditionally convergent, but the result

1 + 2 + 3 + 4 + ··· = -1/12

always manages to freak me out whenever I try to understand it.

stop diddling your math penises for a minute and tell me what the practical application of this is.

For a physically relevant analog, look up the series expansion of the Madelung constant (arises in the course of calculating the electrostatic energy of an ionic crystal.) It's basically a three-dimensional variant of your example.
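(For the record, the one-dimensional analog of that Madelung sum is exactly the series from the post: an infinite chain of alternating unit charges has Madelung constant 2(1 - 1/2 + 1/3 - ···) = 2 ln 2. A minimal Python sketch of that 1-D case - my addition, not the commenter's 3-D NaCl sum, which needs real care about summation order:

```python
import math

def madelung_1d(N):
    """Madelung constant of an infinite 1-D chain of alternating unit
    charges, truncated at N neighbors on each side. Each ion sees a
    charge of sign (-1)**(n+1) at distance n, on both sides, giving
    M = 2 * (1 - 1/2 + 1/3 - ...) = 2 ln 2."""
    return 2 * sum((-1) ** (n + 1) / n for n in range(1, N + 1))

print(madelung_1d(100000))  # ≈ 1.38629...
print(2 * math.log(2))      # 1.3862943611...
```

In three dimensions the conditional convergence bites harder - summing the NaCl lattice over expanding spheres and over expanding cubes behaves very differently - which is the same ordering ambiguity the post is about.)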

By Robert P. (not verified) on 17 May 2010 #permalink

pedxing: Go read G.H. Hardy's A Mathematician's Apology, then go fornicate yourself.

I went into some detail about absolute and conditionally convergent series about a year ago; if you want a slightly different perspective see here.

@pedxing These infinite series can be used to evaluate function values on computers. The point here is that sometimes we take these things for granted, but we must be diligent in the way we think of these things.

But the freaky thing is that you DO run into this kind of series in physics! Riemann Zeta function regularization is often used in quantum field theory!


I didn't explain what I meant very well. My understanding is that in general one needs to perform a contour integral over the complex plane to solve for the sum of the infinite series given above. That planar structure isn't obvious from the way the series is defined.

Clamtrox is right. The Casimir Effect is modeled with a Zeta.

Zeta(-1) = sum of positive integers to infinity = -1/12

The Casimir Effect has been experimentally verified.