Sunday Function

We've spent more than a few Sunday Function features discussing the properties of the prime numbers. They're just so important and interesting in number theory that they're an irresistible target. Let's set the scene before getting to this week's actual function.

There are infinitely many prime numbers. Whatever gargantuan prime you find, there's always an infinite supply of larger ones waiting to be found. However, they do become relatively scarcer as the numbers get larger and larger. For instance, while the ten thousand numbers 1-10,000 contain 1,229 primes, the ten thousand numbers 1,000,000-1,010,000 contain only 753. In general the density of primes in the vicinity of the number n is inversely proportional to log(n). The log of one million is about 13.8, so around 1 number out of every 14 will be prime in that region. Sure enough, 10,000/13.8 is about 724, which is reasonably close to the actual count of 753.
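To make that concrete, here's a minimal sketch in plain Python (trial division; the helper names is_prime and count_primes are mine, not library calls) that counts the primes in those two stretches and compares against the 1/log(n) estimate:

```python
import math

def is_prime(n):
    """Return True if n is prime, using trial division up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def count_primes(lo, hi):
    """Count the primes n with lo <= n <= hi."""
    return sum(1 for n in range(lo, hi + 1) if is_prime(n))

print(count_primes(1, 10_000))              # 1,229 primes in 1..10,000
print(count_primes(1_000_000, 1_010_000))   # the post quotes 753 for this stretch
print(10_000 / math.log(1_000_000))         # roughly 724 -- the 1/log(n) density estimate
```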

The numbers that aren't prime are composite. Composite numbers can be decomposed into prime factors: the prime numbers that produce the composite number when multiplied together. Every composite number has one and only one such factorization, so the prime factors form a unique representation of that number. To take an example, I was born in 1985. The number 1985 is a product of exactly two primes: 5 and 397. This year is 2010, also a composite number, but one with a more complicated factorization: 2*3*5*67 = 2010.
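For anyone who wants to check factorizations like these, a small trial-division sketch does the job for numbers of this size (the prime_factorization helper is my own, not a library call):

```python
def prime_factorization(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                  # whatever is left over is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorization(1985))   # {5: 1, 397: 1}
print(prime_factorization(2010))   # {2: 1, 3: 1, 5: 1, 67: 1}
```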

While we're picking dates, 1776 illustrates another point well; 2*2*2*2*3*37 = 1776. If we're counting prime factors, it's unambiguous that 1985 has two and 2010 has four. But 1776 has either three or six, depending on whether or not we count duplicates. Both counts are interesting and important mathematical properties, but just to keep things simple let's assume we're talking about distinct prime factors. Let's invoke a function to tell us how many distinct prime factors a number has, and let's plot it. This is our Sunday Function, and it's usually denoted ω(n). Here it is, plotted for the first hundred n:

[Figure: ω(n) plotted for n = 1 to 100.]
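The data behind that plot is easy to regenerate. Here's a minimal sketch in plain Python (trial division again; omega and big_omega are my own names for the distinct and with-multiplicity counts):

```python
# Distinct prime factors, omega(n), versus prime factors counted with
# multiplicity, here called big_omega(n), both by simple trial division.
def omega(n):
    """Number of distinct prime factors of n."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def big_omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    return count + (1 if n > 1 else 0)

print(omega(1776), big_omega(1776))       # 3 6 -- the two counts discussed above
print([omega(n) for n in range(1, 101)])  # the hundred values behind the plot
```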

It's a pretty wild function. It looks like maybe the value of the function increases on average with increasing n, but it's pretty hard to tell. It's going to keep varying wildly even at large n: primes will always have a value of 1, while higher n can have pretty much anything. But if we're just interested in large-scale averages, can we say anything about that?

Sure we can. While I'm having trouble thinking of an adequately convincing non-technical "plausibility" argument for the truth of the expression I'm about to write (good ones in the comments would be appreciated!), it turns out that there is a good approximation for the average number of prime factors for large n:

\[ \omega(n) \approx \log(\log(n)) \]

For instance, the average number of prime factors for the numbers between 1,000,000 and 1,000,100 is 2.9. And after some calculator work, we find that log(log(1,000,000)) = 2.62579. We're pretty close.
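Here's the same check in code, a quick sketch reusing the trial-division idea from above (not necessarily how the figure of 2.9 was originally computed):

```python
import math

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

window = range(1_000_000, 1_000_101)                 # 1,000,000 through 1,000,100
print(sum(omega(n) for n in window) / len(window))   # about 2.9, as quoted above
print(math.log(math.log(1_000_000)))                 # 2.6257...
```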

It also turns out that this same expression serves as an approximation for the total number of prime factors counted with multiplicity as well. The higher-order terms are different, but the large-scale behavior of the averages is the same. Log(log(n)) is a very slowly growing function. If we apply it to one googol (10^100), we see that numbers in that vicinity will only have around 5.4 prime factors on average. Not so many, considering we're dealing with 100-digit numbers.

But that function does increase without bound. If you want to blow your mind, imagine how large n would have to be for it to have 100 factors on average...
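To put rough numbers on both of those claims (natural logs throughout, as elsewhere in the post):

\[ \log\big(\log\big(10^{100}\big)\big) = \log(100\,\log 10) \approx \log(230.3) \approx 5.4, \]

and, turning the last question around, an average of 100 distinct prime factors requires

\[ \log(\log(n)) = 100 \;\Longrightarrow\; n \approx e^{e^{100}} \approx 10^{\,10^{43.07}}, \]

which matches the figure a commenter works out below.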

More like this

Due to some homework which is taking longer than anticipated, today's Sunday Function will have to be a bit quick. It's no less interesting for that. Define a function Q(n) on the natural numbers, such that Q(n) = 1 for a prime number and Q(n) = 0 for a composite number. In other words, it's just…
In this week's edition of Monday Math we look at what I regard as one of the prettiest equations in number theory. Here it is: \[ \sum_{n=1}^\infty \frac{1}{n^s} = \prod_p \left( \frac{1}{1-\frac{1}{p^s}}\right) \]   Doesn't it just make your heart go pitter-pat? You are probably familiar with…
Sometimes in math we'll understand one aspect of a problem very well, while at the same time we understand another aspect of a problem very poorly. For instance, take the prime numbers. According to the prime number theorem, the number of prime numbers below x is approximately given by: Where…
In my last math post I casually mentioned that the sum of the reciprocals of the primes diverges. That is \[ \frac{1}{2}+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{11}+\frac{1}{13}+ \dots=\infty \]   That seems like a hard thing to prove. Certainly none of the traditional convergence tests…

That function is like a bush growing against a wall.

That's pretty nifty, actually. It makes some sense: the mth prime number is roughly m·log(m), and since the smallest number with m distinct prime factors is the product of the m smallest primes, even the smallest number with m distinct factors grows pretty fast, faster than m!.

This makes log(log(n)) start to sound like a reasonable estimate, since the factorial function grows at a rate in between exponentiation and double exponentiation.

The proof though is well beyond me.
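A quick numeric illustration of that growth claim (a sketch in plain Python; first_primes is my own helper): the product of the first m primes, which is the smallest number with m distinct prime factors, pulls ahead of m! almost immediately.

```python
# The primorial (product of the first m primes) is the smallest number with
# m distinct prime factors, and it overtakes m! almost immediately.
import math

def first_primes(m):
    """Return the first m primes, found by trial division."""
    primes = []
    candidate = 2
    while len(primes) < m:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

for m in (3, 5, 8, 10):
    primorial = math.prod(first_primes(m))
    print(m, primorial, math.factorial(m))
# m = 3:  30 vs 6;   m = 5: 2,310 vs 120;   m = 10: 6,469,693,230 vs 3,628,800
```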

About 10^(10^43.06723250162571)

Much larger than Avogadro's number but much smaller than Skewes' number. Very very very roughly comparable to the factorial of Avogadro's number in the grand scheme of "large number" classification, I think.

Most numbers with names are much smaller than Skewes' number though, aren't they?

Do not despair! Chairman of the Federal Reserve Benjamin "BS" Bernanke has laid out extensive plans (almost two paragraphs) establishing sub-prime numbers. Most of the number line will then fairly benefit as do the privileged few.

Expect to run short of integers in the next couple of years - the sub-prime number collapse. This is a fine time to stockpile discrete quantities against floating point Federal fiats.

The approximation w(n) ~ log log n is a good approximation in multiple ways. It gives the average value (in the sense that if you take the average of w(k) for k from 1 to n you get a quantity asymptotic to log log n), but it is also a good approximation in the sense that the function itself is close to log log n for almost all n. In particular, for any epsilon > 0, one has |w(n) - log log n| < epsilon (log log n) outside a set S of density 0 (S depending on epsilon). So this is a good approximation under multiple ways of thinking about it. It turns out that there are other ways of thinking about average value that also make it a good approximation.
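For the first of those senses, the running average is easy to play with numerically. A small sieve sketch (the omega_sieve helper and the choice of cutoffs are mine):

```python
# Average of omega(k) for k <= N versus log(log(N)), for a few cutoffs N.
# The sieve marks, for each prime p, every multiple of p, so w[k] ends up
# as the number of distinct prime factors of k.
import math

def omega_sieve(N):
    """Return a list w with w[k] = number of distinct prime factors of k, 0 <= k <= N."""
    w = [0] * (N + 1)
    for p in range(2, N + 1):
        if w[p] == 0:                      # nothing has marked p yet, so p is prime
            for multiple in range(p, N + 1, p):
                w[multiple] += 1
    return w

for N in (1_000, 10_000, 100_000):
    w = omega_sieve(N)
    avg = sum(w) / N                       # w[0] and w[1] are 0 and contribute nothing
    print(N, round(avg, 3), round(math.log(math.log(N)), 3))
# The average sits a small, roughly constant amount above log(log(N)),
# consistent with the two being asymptotic as N grows.
```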

Also, MPL, the proofs aren't that hard, although they aren't the sort of thing one will be able to work out by oneself without a fair bit of number theory background. There are highly readable proofs of these and related facts in Hardy and Wright's "An Introduction to the Theory of Numbers".
