We’ve spent more than a few Sunday Function features discussing the properties of the prime numbers. They’re just so important and interesting in number theory that they’re an irresistible target. Let’s set some scenery before getting to the actual function this week.

There are infinitely many prime numbers. Whatever gargantuan prime you find, there are always infinitely many larger ones waiting to be found. However, they do become relatively scarcer as the numbers get larger and larger. For instance, while the ten thousand numbers from 1 to 10,000 contain 1,229 primes, the ten thousand numbers from 990,001 to 1,000,000 contain only 753. In general, the density of primes in the vicinity of the number n is inversely proportional to log(n). The log of one million is about 13.8, so around 1 number out of every 14 will be prime in that region. Sure enough, 10,000/13.8 = 723.8, which is reasonably close to the actual value of 753.
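If you want to check these counts yourself, a basic sieve of Eratosthenes will do it. Here's a quick Python sketch (the `sieve` function is my own naming, just for illustration):

```python
import math

def sieve(limit):
    """Return a boolean list where is_prime[k] is True iff k is prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return is_prime

is_prime = sieve(1_000_000)
print(sum(is_prime[1:10_001]))       # primes in 1..10,000 -> 1229
print(sum(is_prime[990_001:]))       # primes in 990,001..1,000,000
print(10_000 / math.log(1_000_000))  # density estimate, about 723.8
```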

The numbers that aren’t prime are composite. Every composite number can be decomposed into prime factors, which are just the prime numbers that produce that composite number when multiplied together. Each composite number has one and only one such set of prime factors, so the prime factors form a unique representation of that number. To take an example, I was born in 1985. The number 1985 is a product of exactly two primes: 5 and 397. This year is 2010, also a composite number, but one with a more complicated factorization: 2*3*5*67 = 2010.
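Finding these factorizations takes nothing fancier than trial division. A minimal Python sketch (my own helper, not anything standard):

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(1985))  # [5, 397]
print(prime_factors(2010))  # [2, 3, 5, 67]
```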

While we’re picking dates, 1776 illustrates another point well: 2*2*2*2*3*37 = 1776. If we’re counting prime factors, it’s unambiguous that 1985 has two and 2010 has four. But 1776 has either three or six, depending on whether or not we count duplicates. Both counts are interesting and important mathematical properties, but just to keep things simple let’s assume we’re talking about distinct prime factors. Let’s invoke a function to tell us how many distinct prime factors a number has, and let’s plot it. This is our Sunday Function, and it’s usually denoted ω(n). Here it is plotted for the first hundred n:
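The two counting conventions are easy to sketch side by side in Python. In standard notation the distinct count is little omega, ω(n), and the count with multiplicity is big Omega, Ω(n); the function names below are my own:

```python
def omega(n):
    """Count the DISTINCT prime factors of n (little omega)."""
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            count += 1          # count the prime d once...
            while n % d == 0:
                n //= d         # ...but divide it out completely
        d += 1
    if n > 1:
        count += 1
    return count

def big_omega(n):
    """Count the prime factors of n WITH multiplicity (big Omega)."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            count += 1          # count every repetition of d
            n //= d
        d += 1
    if n > 1:
        count += 1
    return count

print(omega(1776), big_omega(1776))  # 3 6
```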

It’s a pretty wild function. It looks like maybe on average the value of the function increases with increasing n, but it’s pretty hard to tell. It’s going to continue to vary wildly even at large n. Primes will always have a value of 1, and higher n can have pretty much anything. But if we’re just interested in large-scale averages, can we say anything about that?

Sure we can. While I’m having trouble thinking of an adequately convincing non-technical “plausibility” argument for the truth of the expression I’m about to write (good ones in the comments would be appreciated!), it turns out that there is a good approximation for the average number of distinct prime factors for large n:

ω(n) ≈ log(log(n)), on average

For instance, the average number of prime factors for the numbers between 1,000,000 and 1,000,100 is 2.9. And after some calculator work, we find that log(log(1,000,000)) = 2.62579. We’re pretty close.
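That calculator work is easy to reproduce in Python. Here ω is an ordinary trial-division count of distinct prime factors (a quick sketch, with my own function name):

```python
import math

def omega(n):
    """Count the distinct prime factors of n by trial division."""
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        count += 1
    return count

# Average number of distinct prime factors over 1,000,000..1,000,100
avg = sum(omega(n) for n in range(1_000_000, 1_000_101)) / 101
print(avg)
print(math.log(math.log(1_000_000)))  # about 2.63
```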

It also turns out that this same expression serves as an approximation for the total number of non-distinct prime factors as well. The higher-order terms are different, but the large-scale behavior of the averages is the same. Log(log(n)) is a very slowly growing function. If we apply it to one googol (10^100), we see that numbers in that vicinity will only have around 5.4 prime factors on average. Not so many, considering we’re dealing with 100-digit numbers.
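One way to appreciate just how slowly log(log(n)) grows is to evaluate it at a googol directly (a sketch using Python's standard math library; log here is the natural log, as in the estimates above):

```python
import math

googol = 10 ** 100

# Two applications of log collapse a 100-digit number to about 5.4
print(math.log(googol))            # about 230.3
print(math.log(math.log(googol)))  # about 5.44
```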

But that function does increase without bound. If you want to blow your mind, imagine how large n would have to be for it to have 100 factors on average…
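For the curious, we can at least estimate that size. Setting log(log(n)) = 100 gives n = e^(e^100), a number far too large to write down, but whose digit count is computable (a back-of-the-envelope sketch of my own, not from the original argument):

```python
import math

# If log(log(n)) = 100, then n = e^(e^100).
# The number of decimal digits of n is roughly e^100 / ln(10).
inner = math.exp(100)            # e^100, about 2.7e43
digits = inner / math.log(10)
print(digits)                    # about 1.2e43 decimal digits
```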