Sunday Function

Math-averse readers! Do not be scared off! You can enjoy this entry even if, as far as you're concerned, the equations are pretty pictures of Cypriot syllabary.

Not long ago we looked at adding up lots of consecutive integers. Multiplying consecutive integers is also interesting, and not only that, it has a tremendous number of uses throughout physics and pure mathematics. The function that multiplies together the integers from 1 to n is called the factorial function, and, rather unusually, it's denoted with an exclamation point, like this:

$$n! = 1 \cdot 2 \cdot 3 \cdots (n-1) \cdot n$$

Here I'm using dots to denote multiplication, as is pretty common in physics. The factorial function gets tremendously huge tremendously fast. The factorial of one hundred is almost 10^158, which is just mind-bogglingly enormous. If you want to find the factorial of one million, you're absolutely hosed. Your calculator will choke, and I'm pretty sure even sophisticated systems like Mathematica will start to smoke and sputter before finally giving you an "out of memory" error. There's just no good way to multiply a million integers together, let alone find the factorial of the much larger numbers that crop up all the time in physics. Maybe we can find a clean approximation that's good for large n? We can.
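If you want to see that for yourself, a couple of lines of Python (or any language with big integers) will confirm the claim about one hundred factorial:

import math

# 100! written out has 158 digits; it's a bit under 10^158, as claimed above
print(len(str(math.factorial(100))))    # 158
print(math.log10(math.factorial(100)))  # about 157.97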

Since n! is so unwieldy by virtue of its size, let's chop it down a notch by finding its natural logarithm. We'll use this old fact from high school algebra:

$$\ln(a \cdot b) = \ln(a) + \ln(b)$$

Knowing that, we can write the log of n! as a sum:

$$\ln(n!) = \ln(1) + \ln(2) + \ln(3) + \cdots + \ln(n) = \sum_{k=1}^{n} \ln(k)$$

That's not an approximation; so far it's exact. But exact though it is, it's not very useful. There's a trick in calculus for approximating a sum with an integral, so we'll go ahead and use it:

$$\ln(n!) = \sum_{k=1}^{n} \ln(k) \approx \int_{1}^{n} \ln(x)\,dx$$

Now evaluate the integral:

$$\ln(n!) \approx \int_{1}^{n} \ln(x)\,dx = \Big[\,x\ln(x) - x\,\Big]_{1}^{n} = n\ln(n) - n + 1$$

Now the number 1 is pretty much by definition negligible in comparison to large n, so we can drop it from the above equation. From there, take the exponential of both sides to recover an expression for the factorial instead of the logarithm of the factorial.

$$n! \approx e^{\,n\ln(n) - n} = \frac{n^n}{e^n}$$

It's just an approximation, but it's a good one. Now it turns out that some careful finagling with the limits of integration (or, alternatively, an entirely different argument involving the gamma function) can improve our estimate. I'll skip the derivation and present the final result. It's called Stirling's approximation, and it's our Sunday Function:

$$n! \approx \sqrt{2\pi n}\;\frac{n^n}{e^n}$$

You might think that since the root-n term grows without limit, our original rough approximation can't be right. But that's not quite the case. The n^n term grows so much more quickly that for many practical purposes the rough version still works fine; after all, what we really derived was an approximation to the logarithm of n factorial. Neither the rough approximation nor the better one actually converges on n! itself; the absolute error grows without bound in both cases. What the square root term buys you is the relative error: keep it and the relative error goes to zero for large n, leave it out and it doesn't.
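If you'd like to see those error claims in actual numbers, here's a quick check in Python, comparing the rough approximation and the full formula against the exact factorial (keeping n small enough that ordinary floating point can cope):

import math

def rel_error(approx, exact):
    return abs(approx - exact) / exact

for n in (10, 50, 100, 170):  # 170! is close to the largest factorial a double can hold
    exact = math.factorial(n)
    rough = (n / math.e) ** n                    # n^n / e^n, no square root term
    better = math.sqrt(2 * math.pi * n) * rough  # full Stirling formula
    print(n, rel_error(rough, exact), rel_error(better, exact))

The rough version is always off by a factor of about the square root of 2*pi*n, so its relative error creeps toward 1, while the full formula's relative error shrinks roughly like 1/(12n).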

Those slightly technical matters aside, both expressions are very useful. How about finding that factorial of one million with Stirling's formula? Let's do it:

$$10^6! \approx \sqrt{2\pi \cdot 10^6}\;\frac{\left(10^6\right)^{10^6}}{e^{10^6}}$$

Evaluating the first term is easy; the second will make your calculator choke. Some simple tricks involving logarithms will make it very easy though. I'll save those as an exercise for the alert reader and present the result:

$$10^6! \approx 10^{10^{6.7455}}$$

Whew! Now that is a large number, containing more than five and a half million digits. Readers who have done the calculation themselves will have noticed that, as mentioned above, the square root term doesn't affect the result in this stacked exponential notation in the slightest. Getting that answer took me about sixty seconds of writing down a couple of lines to keep track of the logarithms and typing the figures into the calculator. Finding the answer without Stirling's approximation would have been utterly beyond my grasp. Thanks, Stirling!
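For the curious (and at the risk of spoiling the exercise), here's one way the logarithm trick can go. Take the base-10 log of Stirling's formula:

$$\log_{10}\left(10^6!\right) \approx \tfrac{1}{2}\log_{10}\left(2\pi\cdot 10^6\right) + 10^6\log_{10}\left(10^6\right) - 10^6\log_{10}(e) \approx 3.4 + 6{,}000{,}000 - 434{,}294.5 \approx 5{,}565{,}708.9$$

That says 10^6! is about 10^5,565,708.9, a number with a bit over five and a half million digits, and taking the log once more, log10(5,565,708.9) ≈ 6.7455, gives the stacked form above.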


For what it's worth, Mathematica actually calculates the approximation to be 8.2639*10^(5565708) in 1.735 seconds, and a separate run got the full value in 1.703 seconds.

Though, naturally it only showed me part of the full output. When I asked it for the full value, it broke into tears.

I heart Mathematica!

That will teach me to doubt Mathematica! But we can crank up the pain and try again. How about the factorial of a googol (10^100)?

Using Stirling's approximation I get that it's about 10^(10^101.99), so clearly, since there's something like a hundred googol zeros required just to write the number down, full output on a computer screen would be challenging.
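For anyone who wants to check that figure, the arithmetic goes roughly like this. The dominant terms in the log of Stirling's formula give

$$\log_{10}\left(10^{100}!\right) \approx 10^{100}\left(\log_{10}10^{100} - \log_{10}e\right) \approx 10^{100}\times 99.57 \approx 9.96\times 10^{101},$$

and log10(9.96 x 10^101) ≈ 102.0, which is where the 10^(10^101.99) comes from. The square root term only adds about 50 to an exponent of nearly 10^102, so it's completely invisible at this scale.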

The free software SAGE, on my MacBook Pro, gave 1000000! in about 15 seconds. Not blindingly fast, but it didn't choke, and (did I say this?) SAGE is free.
Works great with LaTeX too.

Stirling's formula actually generalizes to a nice series. One interesting property is that the series as a whole never converges if you take all the terms, but any truncated part of it forms a really good approximation of n! for large n.
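For reference, the first few terms of that series (the standard asymptotic expansion of the factorial) look like this:

$$n! \sim \sqrt{2\pi n}\,\frac{n^n}{e^n}\left(1 + \frac{1}{12n} + \frac{1}{288n^2} - \cdots\right)$$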

Stirling's formula has some nice applications. For example, it is helpful for getting tight bounds on Chebyshev's theorem on the distribution of the primes without much pain.


Actually even the brute-force algorithm only takes half an hour on a fast CPU. Say you represent a big integer by an array of 32-bit unsigned integers (2^32 is about 4 billion). Then the largest "big integer" you would need would have a length of (log 10^6!) / (log 2^32) ~ 578,000 machine integers (about 2.2 MB). In the naive factorial algorithm that loops through the integers in the range 1..10^6 and multiplies them into an accumulator,

acc = 1
for i in range(1, 10**6 + 1):
    acc = acc * i

i never exceeds 10^6, so it fits in a single machine integer. So the operation "acc * i" is a multiplication of a big integer by a machine integer: we can do this by looping through the components of the big integer, multiplying each by i, and adding the overflows to the left. That's about 2*n arithmetic operations (where n is the length of the big integer). So in all we need no more than 2 * 578,000 * 1,000,000 arithmetic operations, about 1 trillion. A modern CPU at 20 billion ops/sec (dedicated ALU) would do this in 10^12 / (2*10^10 /s) = about one minute, if it weren't for pointer arithmetic and memory access. In my attempt it took half an hour.
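For concreteness, here is roughly what that multiply-by-a-machine-integer step looks like, sketched in Python with the base-2^32 limbs made explicit (purely illustrative; Python's own integers already do all of this under the hood):

BASE = 2 ** 32   # each "limb" is one 32-bit unsigned machine integer

def mul_small(limbs, m):
    # Multiply a big integer (stored as base-2^32 digits, least significant first)
    # by a machine-sized integer m, carrying the overflows to the left.
    out, carry = [], 0
    for limb in limbs:
        prod = limb * m + carry
        out.append(prod % BASE)
        carry = prod // BASE
    while carry:
        out.append(carry % BASE)
        carry //= BASE
    return out

def factorial_limbs(n):
    acc = [1]
    for i in range(2, n + 1):
        acc = mul_small(acc, i)
    return acc

Each call to mul_small does on the order of 2*(number of limbs) arithmetic operations, which is where the trillion-operation estimate above comes from.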

The answer is 1.C3EF * (10h)^4,622,221. (Here 10h = 16 in decimal, so the exponent (10h)^4,622,221 is comparable to 10^5,565,708.)

By easily amused (not verified) on 09 Nov 2008 #permalink

Well, that's amazing. I just did Stirling's approximation on a calculator and got

8.26393134 x 10^5,565,708.

I wrote a short computer program to sum: log10(2) + log10(3) + ... + log10(1000000)

and got 5,565,708.91718666 which means

8.26393052 x 10^5,565,708 (very close!)

It took only a fraction of a second to run. I don't know what Mathematica was doing for Foxy to take 1.7 seconds.

By Tim Gaede (not verified) on 10 Nov 2008 #permalink
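A program along those lines is only a few lines of, say, Python (a sketch of what Tim describes, not his actual code):

import math

# Sum log10(2) + log10(3) + ... + log10(1000000); the total is log10(1000000!)
total = sum(math.log10(k) for k in range(2, 1000001))
exponent = math.floor(total)
mantissa = 10 ** (total - exponent)
print(total)               # about 5565708.917
print(mantissa, exponent)  # about 8.264 x 10^5565708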

Actually there is a difference in the computation speed of Factorial in Mathematica depending on when you attempt to do the numerical (i.e., floating point) evaluation. So, on my machine for

N[Factorial[(10^6)]]

it takes 0.76 seconds.

But

Factorial[N[10^6]]

takes 0.000093 seconds, giving the same result in both cases. This all has to do with how Mathematica handles exact calculations versus numerical ones, and hence the order in which you set things up can make a difference.
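The same exact-versus-floating-point distinction shows up outside Mathematica, too. For instance, in Python (an analogy, not a claim about what Mathematica does internally):

import math, time

t = time.perf_counter()
exact = math.factorial(10**6)                   # the exact integer, all 5,565,709 digits
print("exact:", time.perf_counter() - t, "s")

t = time.perf_counter()
approx = math.lgamma(10**6 + 1) / math.log(10)  # floating point, using ln Gamma(n+1) = ln(n!)
print("float:", time.perf_counter() - t, "s")
print(approx)                                   # about 5565708.917

Asking for every digit of the exact integer takes a noticeable amount of time; asking only for a floating-point logarithm of it is essentially instant.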