Why I'd Never Make It as a Mathematician

Matt's Sunday Function this week is a weird one, a series that is only conditionally convergent:

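$$ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln(2) $$

Matt then shows that rearranging the very same terms gives ln(2)/2 instead:
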
So the sum of the infinite series, by inexorable logic, is both ln(2) and ln(2)/2. How is this possible?

Of course it isn't. The flaw in our logic is the assumption that the series has a definite sum - in the mathematical parlance, that it's absolutely convergent. This series is not; it's only conditionally convergent. In fact you can show (the great G.F.B. Riemann was the first) that with judicious rearrangement, you can get this series to converge to anything at all. As such it's only meaningful to talk about the sum of this series if you specify the particular ordering you happen to be working with. For finite N the ordering doesn't matter so long as you include the same terms, but you can't do the calculus to find the infinite-N limit without a specific ordering.

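To see the rearrangement trick in action, here is a quick numerical sketch (Python, purely illustrative and not from either post): summing the terms in their natural order heads for ln(2), while the classic "one positive term, then two negative terms" rearrangement of exactly the same terms heads for ln(2)/2.

```python
import math

# Alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..., summed in its natural order.
def natural_order(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

# The same terms rearranged: one positive (odd-denominator) term followed by two
# negative (even-denominator) terms, i.e. 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
# Every term of the original series still appears exactly once.
def rearranged(n_groups):
    total = 0.0
    for g in range(n_groups):
        total += 1 / (2 * g + 1) - 1 / (4 * g + 2) - 1 / (4 * g + 4)
    return total

print(natural_order(10**6), "vs", math.log(2))       # ~0.693147
print(rearranged(10**6), "vs", math.log(2) / 2)      # ~0.346574
```
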
It's unusual to encounter this sort of series in physics. Most of our series are either absolutely convergent or simply divergent by any standard. But math is weird, and you can't always assume that things work the way you intuitively expect. You have to rigorously check your assumptions.

This is one of the big differences between physics and math, and why it's a little tricky for me to teach very formal mathematical classes-- I'm very much a swashbuckling experimentalist, used to plunging ahead secure in the knowledge that an actual measurement will give a definite result of some sort, and not worrying about fussy details of the formalism.

Matt's last paragraph reminds me of a post-doc I knew in grad school, who once helped his sister with math homework on series expansions. He asked her later how she had done on the assignment, and she said "Terrible. The prof said I did it like a physicist."

In math, you have to worry about series that don't converge, and actually do the series convergence tests and all that fun stuff. In physics, particularly low-energy experimental physics, you can usually just dive into working out the terms. If the series doesn't converge, it generally becomes obvious pretty quickly when your results bear no resemblance to reality, and then you know you have to do something else. Like go to Wall Street, and construct complicated financial instruments that bear no resemblance to reality...

Pure math doesn't have reality as an ultimate test, so you have to be a lot more careful.

I disagree with your statement that in physics a divergent series should be obvious because it just gives the wrong answer. Sometimes things work that way, but the fact that many perturbation series in quantum theory diverge is a famous counterexample, since these series can be (and are) used to model the real world. The famed eleven-decimal-place accuracy of QED is based upon the use of a series that is provably divergent. Yet using only the first five (or probably one hundred) terms gives an excellent approximation to reality.

I have always loved that example with the alternating harmonic series though, and the proof that a conditionally convergent series can be rearranged to give any sum is really impressive. Once you've seen it, it's so simple that it can be hard to believe that a truly counter-intuitive result like that could be demonstrated with such simple tools.

For some reason I am reminded of The Saint, a truly terrible movie in which (if memory serves) the cold fusion apparatus was all built and ready to go, except they couldn't turn it on without the equation that would tell them what to do with it. Apparently just flipping the switch and trying it out wasn't an option.

Actually, it is rare that a series in physics converges; almost always you get a divergent series. This has nothing to do with fancy high energy physics: pretty much any perturbation series you are familiar with (including all the results in atomic physics) is formally a divergent series. Both the reasons for that fact, and why it is not really a problem, are well known. I could elaborate, if anyone cares.

Plasma physics theory often makes use of asymptotic expansions for large arguments, and like the QED examples Brett gives, the series are often formally divergent. The trick that works for plasma physics, and presumably also for QED, is to truncate the series at the term with the smallest absolute value. My inner mathematician would cringe at each example of a formally divergent asymptotic expansion, but as a description of physics it works as well as anything you can do analytically (it isn't hard to produce a plasma physics problem that is not amenable to analytic treatment).

By Eric Lund (not verified) on 17 May 2010 #permalink
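
As a concrete (if generic) illustration of the "truncate at the smallest term" rule Eric describes, here is a sketch using the textbook Stieltjes integral rather than anything from QED or plasma physics: its asymptotic expansion is divergent, the partial sums improve until you reach the smallest term and then blow up, and stopping near that smallest term is the best the series can do.

```python
import math
import numpy as np
from scipy.integrate import quad

x = 0.1

# "Exact" value of the Stieltjes integral F(x) = int_0^inf exp(-t) / (1 + x*t) dt,
# obtained by ordinary numerical quadrature.
exact, _ = quad(lambda t: math.exp(-t) / (1.0 + x * t), 0.0, np.inf)

# Its asymptotic expansion F(x) ~ sum_n (-1)^n * n! * x^n diverges for every x > 0:
# the terms shrink until n is roughly 1/x and then grow without bound.
def partial_sum(n_terms):
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

for n in (5, 10, 20, 30):
    s = partial_sum(n)
    print(f"{n:2d} terms: {s:+.8f}   |error| = {abs(s - exact):.2e}")

# Truncating near the smallest term (around n ~ 1/x = 10 here) gives the best
# accuracy the series can offer; adding more terms only makes things worse.
```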

Moshe,

Please elaborate if you could. I am interested in this, as I have recently completed my first class in QFT, and don't remember anyone saying that the series actually diverge.

Brett is correct.

The infinite series that arise from doing perturbation theory are almost never convergent. Indeed, they're usually not even Borel-summable.

And the notion that

"Well, if I sum the series, and get a result that agrees with experiment, then I'm good ..."

is pretty much belied by the very example Matt cites. That's an example of a series that can be summed to give any value you want (including the experimentally-measured one).

Do you really want to claim that a theory, whose output was that series, is confirmed by (or successfully explains, or however you want to phrase it) your experiment?

I'd recommend reading Rudolf Peierls's "Surprises in Theoretical Physics" (or the followup, "More Surprises ...") for lots of examples where these sorts of "picky details" actually matter.

It's not just mathematicians who need to be aware of such things ...

Suppose you have a series you get through some approximate calculation of physics. Typically it will depend on some parameters, coupling constants and characteristic energy and distance scales etc. Now, if the series were convergent in the mathematical sense, that would mean the series approximates the physical result to arbitrary accuracy: you pick any accuracy whatsoever, and by taking enough terms in the series, you get it. This doesn't leave any room for effects that are not captured by your perturbative series, no matter how small they might be.

Now, we know from the analysis of any non-linear equation that there are actually quite a few interesting effects arising in physics that cannot be captured in any power series (because such a series always defines an analytic function of the parameters, and analytic functions are very orderly and don't allow anything you might call chaos). When perturbation theory is valid, those effects are really really small, but they are not strictly speaking zero. Asking for a series to converge is asking for such effects not to exist at all, for any value of the parameters. Only exceedingly boring systems (formally known as integrable systems) can satisfy such a strict requirement.

Instead what happens is that the series you get has bounded accuracy: it can approximate the result only up to some finite accuracy and no more. That accuracy is sufficient for all practical purposes, if your couplings are small enough, but it leaves room for interesting, non-analytic behavior, for more general values of your coupling.

This is true for any system described by differential equations; it doesn't even have to be physics. In QFT you can search under Borel summability, and look at Zinn-Justin's review articles on the large-order behavior of perturbation theory, if you want to read more.
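
The standard toy example of the kind of non-analytic effect being described here (the usual illustration, not something specific to any particular theory) is

$$ f(g) = e^{-1/g^2}, $$

which is nonzero for every g other than zero, yet all of its derivatives vanish at g = 0, so its Taylor series about g = 0 is identically zero. Any effect that scales this way is invisible to perturbation theory at every order, which is exactly why demanding a convergent power series would amount to demanding that such effects not exist.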

To amplify Moshe's comment, it is easy to see why none of the perturbation series that arise in atomic physics are convergent.

The perturbation series that arise in atomic physics are series in e². If they converged, they would define an analytic function of e², for e² sufficiently close to zero.

But that's clearly nonsense, since the physics would be horribly ill-behaved for e² negative. So the series that arise in atomic physics are never convergent. Instead (as, again, is typical for perturbation series, generally), they are asymptotic.

Glad to have spurred discussion! Yep, divergent series are pretty common in physics in the context of asymptotic expansions. Moshe's comment is a little hyperbolic I think, because formally integrable systems aren't necessarily so boring, but his point is a good one. I do agree with Chad too, though; you don't really encounter those types of series by accident. As such, a math mistake that gives you a divergent series is usually pretty obvious numerically.

Conditionally convergent series, as opposed to absolutely convergent or divergent series, are much more unusual in physics. I seem to recall having seen them, but I can't think of an explicit example offhand. In such cases the ordering of the terms would have to be built into the physics of the problem for the result to be worth anything.

For the record, as much as I love pure math I'm not very good at it. I do math like a physicist and do all that horrible stuff like treating dy/dx as a fraction.

Yeah, my comment was a bit tongue in cheek, just to emphasize that bread and butter physics is all about divergent series (I hesitate to call them asymptotic, since I am not sure they generically satisfy the formal requirements for a series to be asymptotic), whereas you have to enlist pretty fancy (and beautiful) mathematics if you ever want to see a convergent series.

Matt Springer: one physical example of a conditionally convergent series is the Madelung series for the electrostatic energy of an ionic crystal. You can find it in most solid state textbooks.

By Robert P. (not verified) on 17 May 2010 #permalink
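
For the curious, here is a rough sketch of the simplest version of Robert's example, a one-dimensional chain of alternating charges rather than the full three-dimensional NaCl sum (Python, just illustrative): summing the pair interactions in order of increasing distance gives the standard 2 ln 2 per ion, while a different ordering of the very same terms converges to something else entirely, which is why the physically motivated ordering has to be part of the prescription.

```python
import math

# Madelung-style sum for an infinite 1D chain of alternating +/- unit charges
# with unit spacing.  Summing neighbours in order of increasing distance gives
# alpha = 2 * (1 - 1/2 + 1/3 - 1/4 + ...) = 2 ln 2.
def by_distance(n_shells):
    return 2 * sum((-1) ** (k + 1) / k for k in range(1, n_shells + 1))

# The same pairwise terms taken in a different order (two odd-distance terms
# for each even-distance one) converge to 3 ln 2 instead: a physically
# meaningless ordering quietly gives a different answer.
def rearranged(n_groups):
    total = 0.0
    for g in range(n_groups):
        total += 1 / (4 * g + 1) + 1 / (4 * g + 3) - 1 / (2 * g + 2)
    return 2 * total

print(by_distance(10**6), "vs", 2 * math.log(2))   # ~1.386294
print(rearranged(10**6), "vs", 3 * math.log(2))    # ~2.079442
```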

In the words of George F. Carrier: "Divergent series converge faster than convergent series because they don't have to converge."

As a mathematician, I think you're overstating things a bit. ;-)

What actually happens seems to be the following: physicists have a body of formulas and results that are known to be correct because they have proven their worth. However, when teaching how to derive them from first principles, it is not uncommon to give a derivation that is actually wrong, or whose subtleties the lecturer himself does not understand well enough to spot. As long as the result is correct, physicists often assume that the derivation is correct, too.

The consequence for novel results seems to be that physicists try a lot of new formulas until they find one that fits, and it becomes part of the body of knowledge over time. It's unimportant whether a series converges, or whether this can even be fixed with mathematical means; that's simply not how results are judged. In other words: your mathematics is completely wrong, but you don't care. ;-)

What mathematicians don't realize is that this is actually an equally valid approach to science.

By Anonymous (not verified) on 17 May 2010 #permalink

@Moshe: You are using divergent in a strange way. A series like 1/n diverges, but 1/2^n converges. If you are summing 1/somerandomshityourmodeldoesntcover the problem is in your model, not in the series itself.

@13: Physics has always been close to the bleeding edge of math. For example, tensor analysis and general theory of relativity developed hand in hand.

By Lassi Hippeläinen (not verified) on 17 May 2010 #permalink

Infinite series can cause weird problems, and I think they do in harmonic analysis as well. First there is the problem of the little spikes at the edges of square waves etc. if you actually add up what they're supposed to be composed of (the Gibbs phenomenon). But that's a step function with infinite/undefined slope at the jumps, so maybe we can expect weird stuff. But I've long been bothered by another issue that causes trouble even without infinite slope (though maybe with higher-order discontinuities): harmonic analysis says that something like, say, a broken-up sine wave (missing every dip, etc.) can be analyzed into a series of harmonics - in such cases, an infinite series. OK, so we can reconstruct the periodic wave by adding all those up.

But imagine I have a perfect frequency detector, and I'm monitoring the broken wave inside the sine rise. Since it's "composed" of all those harmonics in principle, I should - ideally, this is not about physics or technology, so imagine Platonic machinery - always be able to detect those other frequencies, and have them show up on a spectrometer (like the ones used for sound). But wait: if I monitor during the sine-wave portion, the process (all its derivatives) is exactly the same as it would be during a real, pure sine wave. How could the detector "know" what went on before or after? Sure, it may need a period to detect a given frequency, but what about the higher ones that are supposed to "be there" all the time?

Maybe having the waves all together somehow keeps them from being individually accessible. And maybe not having continuous derivatives of all orders everywhere is the key - but it's still funny that we should have trouble at an ideal level.

"Sure, it may need a period to detect a given frequency"

This is the answer to your question. In order to detect the fundamental frequency your spectrum analyzer must integrate over (at minimum) a full period of that fundamental, which in your case means the "missing" part of the sine wave as well as the part that is present. If you try to integrate over the half-period of the fundamental, you will only see signals related to the harmonics, whether it's the half with the signal or the half that's "missing".

By Eric Lund (not verified) on 18 May 2010 #permalink

@Eric: Yet the key is not the fundamental but the higher harmonics. I am looking for the higher frequencies that are supposed to be implicitly present as part of an infinite series. Let's say I have a broken sine at one Hz. I have an ideal detector looking for 64 Hz, but it can't find 64 Hz during an interval inside the portion that's like a sine wave. I should need only about 1/64 of a second to find that wave, and so on. I shouldn't need to "integrate" if we imagine that a literal series of harmonics at 1, 2, 3, 4, 5, ... is "really" present. That's IMHO the major problem.

Also, you say we'd "find" higher harmonics by integrating (decomposing?) over a half-period, but again: they are effectively not present during any portion of the wave, which is either flat or like a pure sine wave. Our only clue is the discontinuities. The composing series waves are "hiding" all over each other; they don't have effective existence.
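
A small numerical sketch of the point Eric is making (Python, purely illustrative): analyzed over one full period, the half-wave-rectified sine really does contain a DC term, the fundamental, and even harmonics; but on the half-period where it rises and falls, its samples are literally identical to those of a pure sine, so no detector confined to that window can tell the two signals apart.

```python
import numpy as np

# Half-wave-rectified sine: sin(2*pi*t) with every dip clipped to zero, period 1.
N = 200000
t = np.arange(N) / N
f = np.clip(np.sin(2 * np.pi * t), 0.0, None)

# Fourier coefficients c_n = integral over one FULL period of f(t)*exp(-2*pi*i*n*t) dt,
# approximated here by a plain Riemann sum (the period is 1).
for n in range(7):
    c_n = np.mean(f * np.exp(-2j * np.pi * n * t))
    print(f"|c_{n}| = {abs(c_n):.4f}")
# Nonzero at n = 0, 1, 2, 4, 6, ...: the higher harmonics really are there once
# you analyze a full period.

# On the half-period where the wave looks like a sine, the samples ARE a pure
# sine, so nothing that sees only that window can distinguish the two signals.
half = t < 0.5
print("first half identical to a pure sine:",
      np.allclose(f[half], np.sin(2 * np.pi * t[half])))
```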