There’s some math here; I’d rate it at Calc 2 difficulty. If you don’t know calculus, that’s fine! The details will be obscure, but I think you’ll still appreciate the abstract beauty of the method.
OK, pick a rapidly oscillating function. It doesn’t really matter which, so as an example I’ll make one up. It has no particular physical significance, but the method we’re going to test out on it ends up being very useful in numerous physical problems. A lot of things oscillate, and often we’re after the overall average effect of those vibrations, not the details of the vibration itself. So, our test subject:
I’ve put the halves in parentheses to emphasize that there are two parts of interest – a regular old function (in this case an exponential and a polynomial), and a fast-changing oscillating sinusoid. It looks like this:
Now, say this is the graph of velocity. If we integrate that velocity, we can find the total displacement. Integration is just the process of calculating the area between the curve of the graph and the x-axis. Take it to be positive if the graph is above the axis, and negative if below.
For this oscillating graph, notice that each upswing is very nearly matched by a downswing. These will, to a very good approximation, cancel each other out, and we expect the total integral to be close to zero.
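This cancellation is easy to check numerically. The sketch below uses x²·e^(−x)·sin(30x) as a stand-in integrand (an assumption on my part – any slowly varying envelope times a fast sinusoid behaves the same way) and compares its integral to that of the envelope alone:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

envelope = lambda x: x * x * math.exp(-x)          # slowly varying part
wiggly = lambda x: envelope(x) * math.sin(30 * x)  # times a fast sinusoid

area_envelope = simpson(envelope, 0, 2)  # ≈ 0.65: no cancellation
area_wiggly = simpson(wiggly, 0, 2)      # far smaller: up/down swings cancel
print(area_envelope, area_wiggly)
```

The oscillating version integrates to a small fraction of the envelope’s area – exactly the cancellation described above.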
But it’s a bit of a pain to actually compute the antiderivative to do the integral exactly, and most numerical methods break down if the oscillation is too fast. We need another way. That way is integration by parts – with some physics-style finagling. Integration by parts takes advantage of the product rule for derivatives, and works like this:

∫ u dv = uv − ∫ v du
Call the oscillating part “dv” and the other regular part “u”, and evaluate:
It’s easier than it looks, I promise. Now here’s the trick. The integration by parts gives two terms. The first can be evaluated straight away; the second requires another integration. But notice that integrating that second term by parts would bring in another factor of 1/30, so it’s roughly thirty times smaller than the first term. Thus we might expect that we can leave off that integral entirely and just use the first “surface term” as an approximation. Now as it happens we could have done this integral exactly, so let’s see what happens.
Our approximate result: -0.00245746
The actual answer: -0.00252991
Not bad. If we wanted, we could iterate again with that second integral and get a much closer approximation.
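The whole trick fits in a few lines of code. Here’s a minimal sketch using u(x) = x²·e^(−x) and sin(30x) as an assumed stand-in integrand (matching the factor of 1/30 above, though not the post’s exact function): keep only the surface term uv at the endpoints, then compare against a brute-force numerical integral.

```python
import math

u = lambda x: x * x * math.exp(-x)  # assumed slowly varying part
w = 30.0                            # oscillation frequency; gives the 1/30 factor

def surface_term(u, w, a, b):
    """First term of integration by parts with dv = sin(w x) dx,
    hence v = -cos(w x) / w: evaluate u(x) * v(x) at the endpoints."""
    v = lambda x: -math.cos(w * x) / w
    return u(b) * v(b) - u(a) * v(a)

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule; n is large enough to resolve the wiggles."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

approx = surface_term(u, w, 0.0, 2.0)
exact = simpson(lambda x: u(x) * math.sin(w * x), 0.0, 2.0)
print(approx, exact)  # the two agree to within about 1%
```

Note that `surface_term` never integrates anything – it only evaluates u and v at the two endpoints.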
There are two important things to gain here. First, as the oscillations grow more and more rapid the approximation actually works better and better, because the small fraction in front of the oscillating term gets smaller still. This is despite the fact that raw computational methods tend to break down for rapid oscillations. Second, we don’t need to antidifferentiate the non-oscillating part; we only have to evaluate it at the endpoints. That’s a major advantage when the non-oscillating part has no clean expression for its antiderivative.
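That first point can be seen directly. Sticking with the assumed stand-in u(x) = x²·e^(−x) from before, the sketch below cranks up the frequency w and watches the error of the surface-term approximation shrink, even as the integrand gets harder to handle by brute force:

```python
import math

u = lambda x: x * x * math.exp(-x)  # assumed slowly varying part, as before

def simpson(f, a, b, n=200000):
    """Composite Simpson's rule; n is kept large to resolve fast wiggles."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

errors = {}
for w in (30, 300, 3000):
    # surface term: [u(x) * (-cos(w x) / w)] evaluated from 0 to 2
    approx = u(2.0) * (-math.cos(2.0 * w) / w) - u(0.0) * (-1.0 / w)
    exact = simpson(lambda x: u(x) * math.sin(w * x), 0, 2)
    errors[w] = abs(approx - exact)
    print(w, errors[w])  # the error drops as w grows
```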
Aside from the practical utility in physics, this line of thought pretty quickly runs into some important areas of pure mathematics. We’re splashing around near the shore, but not much farther out the continental shelf drops off and we’re in the deep waters of the theory of integral equations.
Kind of cool, I think.