Sunday Function

There's some math here, I'd rate it at Calc 2 difficulty. If you don't know calculus, that's fine! The details will be obscure but I think you'll still appreciate the abstract beauty of the method.

Ok, pick a rapidly oscillating function. It doesn't really matter which, so as an example I'll make one up. It has no particular physical significance, but the method we're going to test out on it ends up being very useful in numerous physical problems. A lot of things oscillate, and many times we're after the overall average effect of those vibrations, not the details of the vibration itself. So, our test subject:

[Equation: the test function, a regular factor (an exponential times a polynomial) multiplied by a rapidly oscillating sinusoid.]

I've put the halves in parentheses to emphasize that there are two parts of interest: a regular old function (in this case an exponential and a polynomial), and a fast-changing oscillating sinusoid. It looks like this:

[Graph: the test function, oscillating rapidly above and below the x-axis.]

Now, say this is the graph of velocity. If we integrate that velocity, we can find the total displacement. Integration is just the process of calculating the area between the curve of the graph and the x-axis. Take it to be positive if the graph is above the axis, and negative if below.

For this oscillating graph, notice that each upswing is very nearly matched by a downswing. These will to a very good approximation cancel each other out, and we expect the total integral to be close to zero.
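The post's actual function is only shown as an image, so as a stand-in I'll assume something in the same spirit: an exponential times a polynomial, multiplied by a fast sinusoid, namely e^(-x)·x²·cos(30x) on [0, 1]. This particular choice is my assumption, not necessarily the post's function, but a quick numerical check in plain Python shows the near-cancellation at work:

```python
import math

# Stand-in for the post's test function (the original is only shown as
# an image, so this exact form is an assumption): a "regular"
# exponential-times-polynomial factor times a fast sinusoid.
def f(x):
    return math.exp(-x) * x**2 * math.cos(30.0 * x)

def simpson(g, a, b, n=10_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Compare the integral of the full oscillating function against the
# integral of its non-oscillating envelope: the up- and downswings
# nearly cancel, so the first number is much smaller than the second.
oscillating = simpson(f, 0.0, 1.0)
envelope = simpson(lambda x: math.exp(-x) * x**2, 0.0, 1.0)
print(oscillating, envelope)
```

The oscillating integral comes out an order of magnitude smaller than the envelope's, even though the integrand reaches the same peak heights.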

But it's a bit of a pain to actually compute the antiderivative to do the integral exactly, and most numerical methods break down if the oscillation is too fast. We need another way. That way is integration by parts - with some physics-style finagling. Integration by parts takes advantage of the product rule for derivatives, and works like this:

∫ u dv = uv − ∫ v du

Call the oscillating part "dv" and the other regular part "u", and evaluate:

[Equation: the result of integrating by parts, a surface term evaluated at the endpoints plus a remaining integral carrying a factor of 1/30.]

It's easier than it looks, I promise. Now here's the trick. The integration by parts has two terms. The first can be evaluated straight away; the second requires another integration. However, notice that in evaluating the second integral we'd integrate by parts again, bringing in another factor of 1/30. Thus we might expect that we can leave off that integral entirely and just use the first "surface term" as an approximation. Now as it happens we could have done this integral exactly, so let's see what happens.
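Here's a sketch of the trick itself, again using my assumed stand-in integrand e^(-x)·x²·cos(30x) rather than the post's exact function. Taking dv = cos(30x) dx gives v = sin(30x)/30, so the surface term is just u(x)·sin(30x)/30 evaluated at the endpoints; we can compare it against a brute-force numerical value of the full integral:

```python
import math

OMEGA = 30.0  # assumed oscillation frequency, matching the 1/30 in the text

def u(x):
    # Assumed slowly varying "regular" part, the u in the integration by parts.
    return math.exp(-x) * x**2

# Surface term from one integration by parts:
#   integral of u(x)*cos(OMEGA*x) ~ [u(x)*sin(OMEGA*x)/OMEGA] at the endpoints
def surface_term(a, b):
    return (u(b) * math.sin(OMEGA * b) - u(a) * math.sin(OMEGA * a)) / OMEGA

def simpson(g, a, b, n=20_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

approx = surface_term(0.0, 1.0)                                  # surface term only
exact = simpson(lambda x: u(x) * math.cos(OMEGA * x), 0.0, 1.0)  # brute force
print(approx, exact)
```

The two values agree to within a percent or so, mirroring the post's comparison: the dropped integral carries another factor of 1/30 and barely matters.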

Our approximate result: -0.00245746
The actual answer: -0.00252991

Not bad. If we wanted we could iterate again with that second integral and get a much closer approximation.

There are two important things to gain here. First, as the oscillations grow more and more rapid the approximation actually works better and better, due to the increasing smallness of the fraction in front of the oscillating term. This is despite the fact that raw computational methods tend to break down for high oscillations. Second, we don't need to antidifferentiate the non-oscillating part, we only have to evaluate it at the endpoints. That's a major advantage when the non-oscillating part has no clean expression for its antiderivative.
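Both points are easy to check numerically. Sticking with the assumed stand-in envelope u(x) = e^(-x)·x², the surface-term approximation needs u only at the endpoints, and its error shrinks as the frequency ω grows:

```python
import math

def u(x):
    # Assumed stand-in for the non-oscillating envelope.
    return math.exp(-x) * x**2

def simpson(g, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

errors = []
for omega in (30.0, 100.0, 300.0):
    # Surface term: only needs u at the endpoints, no antiderivative of u.
    surface = (u(1.0) * math.sin(omega * 1.0) - u(0.0) * math.sin(0.0)) / omega
    # Brute-force reference value, with many sample points per oscillation.
    reference = simpson(lambda x: u(x) * math.cos(omega * x), 0.0, 1.0,
                        n=200_000)
    errors.append(abs(surface - reference))
print(errors)  # the error shrinks as omega grows
```

Note the brute-force reference needs more and more sample points as ω climbs, while the surface term stays two function evaluations no matter what.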

Aside from the practical utility in physics, this line of thought pretty quickly runs into some important areas of pure mathematics. We're splashing around near the shore, but not so much farther out the continental shelf drops off and we're in the deep waters of the theory of integral equations.

Kind of cool, I think.


...the increasing smallness...

You physicists have a word for everything, don't you?

By Flathead Phillips (not verified) on 25 Jan 2009 #permalink

A neat trick! I don't think I've seen that before. Of course, in all the math classes I do, the emphasis is never never never on calculation. Which makes me a little sad. Some calculation techniques seem fascinating.

Of course, the above seems to work optimally when the oscillating part is anti-differentiable. But what would you suggest for something like Sin(1/x)?

Have you thought about using something related to the Fourier Transform (or FFT) for a Sunday function?

I've recently started to get my feet wet in DSP, of which I haven't seen much since college, and I was surprised at how much I had forgotten. You've got a definite knack for explanation that might save others the headache that I went through. I know that I'll be looking through past functions to see if there's anything simple and interesting enough to play around with, as your examples are clearer than the ones in my old reference books so far.