Sunday Function

Sorry for the two-day delay. The personal business to which I alluded kept me out until yesterday morning, delaying this post until today. I hope it's decent for all that!

This week, the same function as last week. There's nothing so special about the function itself, but it does serve as a nice Rorschach blot to project our own wandering mathematical thoughts upon. As with last time, the specific "f(x) = stuff" form of the function is not particularly enlightening. It's just a few lines tacked together. The graph is all we need.

[Graph: this week's function]

Last time we set ourselves the task of writing this function as a trigonometric series, offhandedly mentioning the power series as a more usual way of expanding a function. A power series, as you may know, is what you get by finding the constants "an" that make the following relationship true, if indeed such a thing can be done.

f(x) = a0 + a1 x + a2 x^2 + a3 x^3 + ... = Σ an x^n

This can be done for most functions of physical interest. Sometimes it will work only for x within a particular interval, and more rarely it may work for no x (or at least almost no x). Fortunately, telling which is the case is not just a shot in the dark: if a function is complex-differentiable in a neighborhood of a point, its power series converges to the function near that point. What about our function?
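To see the "only within a particular interval" case concretely, here's a quick sketch (not tied to this week's function) using the geometric series for 1/(1 - x), whose coefficients are all 1:

```python
# The power series for 1/(1 - x) has every a_n = 1.  It converges
# only on the interval |x| < 1; outside that interval the partial
# sums just blow up instead of settling down.

def partial_sum(x, terms):
    """a_0 + a_1 x + ... + a_(terms-1) x^(terms-1), with every a_n = 1."""
    return sum(x**n for n in range(terms))

print(partial_sum(0.5, 60))                        # approaches 2 = 1/(1 - 0.5)
print(partial_sum(2.0, 10), partial_sum(2.0, 20))  # 1023.0 vs 1048575.0: diverging
```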

Well, we can forget about complex differentiability right off the bat. Our function isn't even ordinarily differentiable everywhere. For those who are not aficionados of the calculus, remember that a derivative represents the slope of the graph at a particular point. At the sharp points our function has no slope at all, and so it's not differentiable there. If you try to find a power series that works everywhere, you won't succeed. There just isn't one.
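You can see the missing slope numerically. Since the post's function is given only by its graph, this sketch uses a hypothetical stand-in with the same kind of sharp point, f(x) = 1 - |x - 1|:

```python
# At a sharp point, the one-sided difference quotients disagree, so
# there is no single slope.  f here is a hypothetical stand-in for
# this week's spiky function (which is only given by its graph).

def f(x):
    return 1 - abs(x - 1)

h = 1e-6
left_slope  = (f(1) - f(1 - h)) / h   # slope coming in from the left
right_slope = (f(1 + h) - f(1)) / h   # slope going out to the right

print(left_slope, right_slope)        # about +1 and -1: no derivative at x = 1
```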

The Fourier series worked because its conditions for convergence are less onerous. To ruin the nice technical conditions with simplification, I can say that if the function is integrable and not deliberately stupid the Fourier series will converge. But come on, there's got to be some polynomial of degree n that approximates the function better than the rest of the degree n polynomials. Can we find it? Of course!

It turns out that Fourier series aren't the only game in town if you're looking for complete and orthogonal sets of functions. There's tons of 'em. An infinite number, in fact. One of the best known and most useful - we have in fact used them for a very similar purpose before - is the set of Legendre polynomials.

They're just ordinary-looking polynomials. To list the first few,

P0 = 1
P1 = x
P2 = 0.5*(3x^2 - 1)

A series in the Legendre polynomials would look like this:

f(x) = a0 P0(x) + a1 P1(x) + a2 P2(x) + ... = Σ an Pn(x)

Over the interval [-1,1], they're orthogonal in the sense we talked about last time (feel free to take a gander at the last Sunday Function to refresh your memory). They work in the same way, and we find the coefficients in the same way. We do need to shift to the interval [0,2], but this isn't hard. In the style of TV chefs everywhere I'll pull our necessary formula for the coefficients pre-baked out of the oven. We have to pre-bake the individual polynomials themselves too. This isn't hard either, just replace x with x - 1. For the coefficients, we get:

an = ((2n + 1)/2) ∫ f(x) Pn(x - 1) dx, integrated from 0 to 2

Don't forget that the Pn are themselves functions of x. We can then write a series in the Legendre polynomials. But this will just be a sum of powers of x - how is this not a power series in disguise? One important reason: the coefficients in front of each individual power of x don't stay constant as you add in higher-order Legendre polynomials. This has its price. Unlike the standard power series, we don't have beautiful results like Taylor's theorem, nor can we say nearly as much in general about what the Legendre series is doing near any given point. But frequently the trade-off is worth it.
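Here's how the coefficient formula plays out numerically. Again the real function lives only in its graph, so this sketch uses the hypothetical spiky stand-in f(x) = 1 - |x - 1| on [0, 2], evaluating each shifted polynomial by Bonnet's recurrence and doing the integral with a simple midpoint rule:

```python
# Coefficients a_n = (2n+1)/2 * integral from 0 to 2 of f(x) P_n(x-1) dx,
# computed with a plain midpoint rule.  f is a hypothetical stand-in for
# the post's function, which is defined only by its graph.

def P(n, t):
    """Evaluate P_n(t) by Bonnet's recurrence."""
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * t * p1 - k * p0) / (k + 1)
    return p1

def f(x):
    return 1 - abs(x - 1)          # stand-in spiky function on [0, 2]

def a(n, steps=4000):
    dx = 2.0 / steps
    total = sum(f(dx*(i + 0.5)) * P(n, dx*(i + 0.5) - 1) for i in range(steps))
    return (2*n + 1) / 2 * total * dx

# For this f, a_0 = 1/2 and a_2 = -5/8; the odd coefficients vanish by symmetry.
print([round(a(n), 4) for n in range(4)])
```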

Shall we see how this series looks? The convergence turns out not to be so pretty or fast as the Fourier series. But functional and ugly works too.

Only the first non-zero term:

[Graph: the function and the first non-zero term]

The first two non-zero terms:

[Graph: the function and the first two non-zero terms]

The first six non-zero terms:

[Graph: the function and the first six non-zero terms]
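If you'd rather watch the convergence as numbers than as pictures, this self-contained sketch (again with the hypothetical stand-in f(x) = 1 - |x - 1|) measures the worst-case error of the partial sums as the degree grows:

```python
# Worst-case error of Legendre partial sums over [0, 2], for the
# hypothetical stand-in function f(x) = 1 - |x - 1|.  The convergence
# isn't pretty, but the error does shrink as terms are added.

def P(n, t):
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * t * p1 - k * p0) / (k + 1)
    return p1

def f(x):
    return 1 - abs(x - 1)

def coeff(n, steps=4000):
    dx = 2.0 / steps
    return (2*n + 1) / 2 * dx * sum(
        f(dx*(i + 0.5)) * P(n, dx*(i + 0.5) - 1) for i in range(steps))

def max_error(degree, samples=400):
    a = [coeff(n) for n in range(degree + 1)]
    worst = 0.0
    for i in range(samples + 1):
        x = 2.0 * i / samples
        s = sum(a[n] * P(n, x - 1) for n in range(degree + 1))
        worst = max(worst, abs(f(x) - s))
    return worst

for d in (0, 4, 12):
    print(d, max_error(d))   # worst-case error shrinks as the degree grows
```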

I'm including a Mathematica notebook in case you want to play around with this a little bit without having to write it yourself. Not that it matters immensely, but consider the code licensed so that you can do whatever you want with it so long as you attribute and share alike.

Creative Commons License
legendre.nb by Matthew Springer is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.
Based on a work at scienceblogs.com.


Interesting function.

Yes, I remember when I learned how to model a crazy function with an infinite series of complete functions like Fourier series or Bessel functions, etc... Amazing stuff.

A question from a non-mathy guy: If the Fourier series converges to our spiky function, and the Fourier series is just a sum of trig functions that are differentiable (maybe this is where I'm going wrong?), why is there not a series of polynomials that converges to the Fourier polynomials and thus to the subject function?

There are, and in fact this is one of them. The issue is that a series composed of polynomials is not the same as a power series. I alluded to this a bit, but the main thrust is that a power series is an expansion about a point - with all that implies about remainder terms and how it behaves as x -> whatever point you're expanding about. Expansion in terms of orthogonal functions is expansion over an interval, without any such clean behavior about a given point.

You demonstrate the fatal flaw of social advocacy pseudosciences. A given dataset no matter how large can be fit to arbitrary accuracy in an unlimited number of ways. When Enviro-whinerism, psychology, economics... extrapolate their models, the death bird hatches.

Nothing constrains a forced fit to behave outside its interval. That is why we cherish global theory that resists falsification rather than local curve fittings with excuses (timespan for Enviro-whinerism, the Harvard law of animal interaction* for psychology, heteroskedasticity for economics; test of faith for religion).

*After all experimental variables are controlled, the test animal will still do what it damned well pleases.

For those who want to play with a computer algebra system but don't have access to Mathematica, take a look at Maxima (http://maxima.sourceforge.net/); it seems to deal with this very nicely too. Just remember to load the appropriate package ("load (orthopoly)") so you get the Legendre polynomial stuff.

I played with your Mathematica file. It works great, thanks! But I'm curious why the Legendre expansion goes to pot for x larger than about 1.5 if you try say m=13 or m=15 in the last command of the notebook (try it and see). I assume it's some kind of round-off error, but 13 terms doesn't seem very big to me....

It's not the Legendre series which is having a problem, it's Mathematica's numerical evaluation of the shifted Legendre polynomials. When you translate these polynomials to the interval [0,2] and write them in powers of x (which I assume Mathematica is doing as part of the plotting routine?) the coefficients become very large. Around m=13 (which means the 25th polynomial) these (times x^n with x near 2) are large enough to overpower the machine precision of the calculation (roughly 17 digits). The expected large cancellation of the terms is now swamped by the round off error.
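You can check the size of the cancellation exactly. This Python sketch (exact fractions, so no round-off) expands P_25(x - 1) in powers of x via Bonnet's recurrence and a binomial shift, and looks at the largest coefficient:

```python
# Expand P_n(x - 1) in powers of x with exact arithmetic and look at how
# big the coefficients get.  For n = 25 the largest is a few times 10^12,
# so terms of size ~10^16 near x = 2 must cancel down to a value of order
# one -- hopeless at roughly 17 digits of machine precision.
from fractions import Fraction
from math import comb

def legendre_coeffs(n):
    """Coefficients of P_n, lowest power first, via Bonnet's recurrence."""
    p0, p1 = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p0
    for k in range(1, n):
        xpk = [Fraction(0)] + p1
        nxt = [Fraction(2*k + 1) * c / (k + 1) for c in xpk]
        for i, c in enumerate(p0):
            nxt[i] -= Fraction(k, k + 1) * c
        p0, p1 = p1, nxt
    return p1

def shift(coeffs):
    """Rewrite p(x - 1) in powers of x."""
    out = [Fraction(0)] * len(coeffs)
    for k, c in enumerate(coeffs):
        for j in range(k + 1):
            out[j] += c * comb(k, j) * (-1) ** (k - j)
    return out

big = max(abs(c) for c in shift(legendre_coeffs(25)))
print(float(big))   # on the order of 10^12
```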

Also, NIntegrate fails to compute the coefficients accurately once n gets large ... for example it computes a_28 as 907.5667825 when it should be 0. I assume this is because of the same round off error as above.