Sunday Function

If you look at an incandescent light in a spectroscope, you'll see a broad and continuous range of light emitted over a large portion of the visible spectrum. This combination of colors looks white to us. At the other extreme, laser light generally consists of just a tiny slice of the frequency spectrum, and so it appears highly monochromatic. Laser light is not literally all of exactly the same frequency; instead, it's distributed closely about the laser frequency according to some probability distribution that varies with the specifics of the laser.

Very frequently that distribution is not the Gaussian distribution with its famous bell curve of yore. Instead, it might be the Lorentz distribution (also known to mathematicians as the Cauchy distribution). It's our Sunday Function, and when normalized it's expressed in the following way:

$$ f(x) = \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2} $$

The constants x0 and gamma represent the center and width (more on that in a second), and if we set the former to 0 and the latter to 1 we can plot a representative example:

[Figure: the Lorentz distribution with x0 = 0 and gamma = 1, a single symmetric peak at the origin with heavy tails.]
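If you want to reproduce the plot yourself, here's a minimal sketch in Python (numpy and matplotlib assumed; the plotting details are mine, not anything canonical):

```python
# A minimal sketch of the plot above, assuming numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

x0, gamma = 0.0, 1.0  # center and half-width at half maximum
x = np.linspace(-10, 10, 1000)
pdf = gamma / (np.pi * ((x - x0) ** 2 + gamma ** 2))  # normalized Lorentzian

plt.plot(x, pdf)
plt.xlabel("x")
plt.ylabel("f(x)")
plt.title("Lorentz (Cauchy) distribution, x0 = 0, gamma = 1")
plt.show()
```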

So, what are the variance and standard deviation of this distribution? If you do the integral to find out, you'll run into the brick wall that makes this distribution so weird. The integral diverges, and so the variance is undefined. So are all the higher moments of the distribution. In fact, to be technical, even the mean isn't defined, though we can take the center x0 of the distribution to more or less fill the same role in some cases. The parameter gamma is the half-width at half maximum, which is about the best we can do to quantify the width without a standard deviation.
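You can watch the divergence happen numerically. Here's a quick illustrative sketch (Python with scipy; the cutoffs are arbitrary) that integrates x² times the density out to ever-larger limits:

```python
# Numerically integrate x^2 * f(x) over [-L, L] for growing cutoffs L.
# For the standard Cauchy the truncated integral grows roughly like 2L/pi,
# so the variance integral never converges.
import numpy as np
from scipy.integrate import quad

def second_moment_integrand(x):
    return x ** 2 / (np.pi * (1.0 + x ** 2))  # x^2 * f(x), x0 = 0, gamma = 1

for L in [10, 100, 1000, 10000]:
    value, _ = quad(second_moment_integrand, -L, L)
    print(f"integral over [-{L}, {L}] = {value:.1f}")
```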

This is a pretty seriously weird situation for a physically realized probability distribution, because without those moments the distribution fails the hypotheses of much of what we think of as "standard" probabilistic behavior. For instance, the central limit theorem doesn't apply to the Lorentz distribution.

Weirder still, the strong law of large numbers doesn't apply either. If you didn't know x0, you might try to estimate it by taking a large number of samples and averaging them. If you did, you'd find that no matter how many samples you took, the average would refuse to settle down and converge. In fact, the mean of n samples from a Lorentz distribution is itself Lorentz-distributed with exactly the same width gamma, so the uncertainty of the sample mean stays the same no matter how many times you sample. It's seriously bizarre.
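This is easy to see in simulation. A minimal sketch, assuming numpy (the sample sizes are arbitrary):

```python
# Running means of Cauchy samples never settle down; Gaussian ones do.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
cauchy = rng.standard_cauchy(n)
normal = rng.standard_normal(n)

for k in [100, 10_000, 1_000_000]:
    print(f"n = {k:>9}: Cauchy mean = {cauchy[:k].mean():8.3f}, "
          f"Gaussian mean = {normal[:k].mean():8.3f}")
# The Gaussian running mean homes in on 0; the Cauchy one keeps jumping.
```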

In math as in life, we like to pretend that the things we encounter are well-behaved and play by the rules to which we're accustomed. In math as in life, they don't always oblige us.

Comments

I don't get it, and I think I'd like to. But I may not have a deep enough statistics background (even though I teach it at a community college). I don't know what higher moments are, for one thing. Know anything very readable that would teach me more about this?

Read the second to last paragraph more closely.

Most distributions admit sampling pretty well -- this means that if you sample the population and compute the mean of the sample, it approaches the mean of the actual distribution. This is the assumption all polling and survey science is based on. For this distribution, that doesn't work! You can't really learn anything useful from sampling it, even though intuitively you should be able to.

Benoit Mandelbrot (of fractal fame) got his start (and continues) looking at the distributions of things like stock prices and the futures market. What he discovered is that most of those distributions are not Gaussians, but something similar to your Sunday Function here.

The trouble is that the quants can do their calculations with Gaussians, so that's what they applied. However, if the risk is really Lorentzian, the odds of bad things happening don't die out as quickly as you think (by a long shot). That's one reason we get market crashes of horrendous proportions much, much more often than the "experts" say we should.

Ref: "The (Mis)Behavior of Markets", by Mandelbrot.

Fortunately you can recover x0 accurately by calculating the median rather than the mean. That value does converge, and with reasonable speed.
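A quick simulation bears this out (an illustrative numpy sketch, not from the original comment; the center value 3.0 is made up):

```python
# The sample median converges on x0 even though the sample mean does not.
import numpy as np

rng = np.random.default_rng(1)
x0 = 3.0  # a made-up center we pretend not to know
for k in [100, 10_000, 1_000_000]:
    samples = x0 + rng.standard_cauchy(k)
    print(f"n = {k:>9}: median = {np.median(samples):7.4f}, "
          f"mean = {samples.mean():10.4f}")
# The median zeroes in on 3.0; the mean wanders.
```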

Sue, actually my stats background is not so strong either; it's just the basic skill set to do the data analysis for the numbers pouring out of our instruments. For understanding distributions like this, though, I'd skip the dedicated statistics books and look for an introductory book on probability. I can't really recommend a specific one, as I only have my old textbook and honestly it wasn't very good. If I were you I'd just look for the most straightforward and understandable book you can find, since the basic concepts are all you really need.

By the way, are you aware of the excellent "An Atlas of Functions" by Spanier and Oldham? I own two copies, kept separately for safety.

Sivia's _Data Analysis: A Bayesian Tutorial_, in addition to being a gentle introduction to its subject, contains a nice illustrated example of inferring the parameters of a Cauchy distribution despite its infinite moments (see "the lighthouse problem").

By Ambitwistor (not verified) on 23 Nov 2009
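To get a flavor of that lighthouse-style inference: with gamma known, a brute-force grid posterior for x0 from Cauchy data behaves perfectly well, infinite moments notwithstanding. A toy numpy sketch, not Sivia's actual code, with all numbers invented for illustration:

```python
# Toy lighthouse-style inference: gamma is known, flat prior on x0 over
# a grid. The posterior is well behaved despite the undefined moments.
import numpy as np

rng = np.random.default_rng(2)
true_x0, gamma = 1.5, 1.0
data = true_x0 + gamma * rng.standard_cauchy(200)

grid = np.linspace(-5, 5, 2001)
# Log-likelihood of each candidate x0 (dropping x0-independent constants):
loglike = -np.sum(np.log(1.0 + ((data[None, :] - grid[:, None]) / gamma) ** 2),
                  axis=1)
post = np.exp(loglike - loglike.max())
post /= post.sum() * (grid[1] - grid[0])  # normalize on the grid

print(f"posterior peak at x0 = {grid[np.argmax(post)]:.3f} (true value {true_x0})")
```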

However, if the risk is really Lorentzian, the odds of bad things happening don't die out as quickly as you think (by a long shot).

Which leads to things like the following August 2007 quote from David Viniar, then CFO of Goldman Sachs: "We are seeing things that were 25-standard deviation events, several days in a row." (Referenced in this article.) I do not think that word means what Mr. Viniar thinks it means.
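For scale, a back-of-envelope check (not from the original comment): under a Gaussian model, the chance of a single 25-standard-deviation move is around 10^-138, so seeing several in a row should essentially never happen in the lifetime of the universe. A one-liner with scipy:

```python
# Tail probability of a 25-sigma event under a Gaussian model.
from scipy.stats import norm

p = norm.sf(25)                          # P(X > 25) for a standard normal
print(f"P(move > 25 sigma) = {p:.2e}")   # on the order of 1e-138
print(f"expected wait = {1 / p:.2e} days")
```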

In my line of work we sometimes encounter so-called kappa distributions, which are generalizations of the Lorentzian to simulate a power-law tail. Raise that denominator to some power and normalize it to get a kappa distribution. (The Gaussian is the kappa -> infinity limit of the kappa distribution.) Kappa distributions are a bit better behaved, but there is still a lot more probability density in the tail than you would expect if you are modeling things with Gaussians.
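Here's a rough numerical sketch of that construction (the exponent convention used below, -(kappa + 1), is an assumption for illustration; several conventions appear in the literature):

```python
# "Raise that denominator to some power and normalize": one common form
# of a kappa distribution, normalized numerically on a grid.
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

for kappa in [2, 10, 100]:
    f = (1.0 + x ** 2 / kappa) ** (-(kappa + 1))  # unnormalized, theta = 1
    f /= f.sum() * dx                             # normalize numerically
    tail = f[np.abs(x) > 3].sum() * dx            # weight beyond 3 theta
    print(f"kappa = {kappa:>3}: P(|x| > 3) = {tail:.2e}")
# The tail weight shrinks as kappa grows, approaching the Gaussian limit.
```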

By Eric Lund (not verified) on 24 Nov 2009

I thought it wasn't possible to pick a random number between -inf and +inf? (You run into paradoxes.) That being the case, this idea of sampling the values of the function doesn't make any sense.

?

By Paul Murray (not verified) on 25 Nov 2009