If you look at an incandescent light in a spectroscope, you’ll see a broad and continuous range of light emitted over a large portion of the visible spectrum. This combination of colors looks white to us. At the other extreme, laser light generally consists of just a tiny slice of the frequency spectrum, and so it appears highly monochromatic. Laser light is not literally all of one exact frequency; instead, it’s distributed closely about the laser frequency according to some probability distribution that varies with the specifics of the laser.
Very frequently that distribution is not the Gaussian distribution with its famous bell curve of yore. Instead, it might be the Lorentz distribution (also known to mathematicians as the Cauchy distribution). It’s our Sunday Function, and when normalized it’s expressed in the following way:
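Written out in the standard parametrization:

```latex
f(x;\, x_0, \gamma) \;=\; \frac{1}{\pi}\,\frac{\gamma}{(x - x_0)^2 + \gamma^2}
```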
The constants x0 and gamma represent the center and width (more on that in a second), and if we set the former to 0 and the latter to 1 we can plot a representative example:
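A few lines of Python make the shape concrete. The helper name `lorentz_pdf` here is just an illustrative function for this post, not a library call; it evaluates the normalized density above.

```python
import math

def lorentz_pdf(x, x0=0.0, gamma=1.0):
    """Normalized Lorentzian (Cauchy) density with center x0 and scale gamma."""
    return gamma / (math.pi * ((x - x0) ** 2 + gamma ** 2))

# Peak value at the center, x = x0:
peak = lorentz_pdf(0.0)   # 1/pi

# At x = x0 +/- gamma the density falls to exactly half the peak,
# which is why gamma is the half-width at half maximum:
half = lorentz_pdf(1.0)
print(peak, half, half / peak)
```

Running this prints a ratio of exactly 0.5, confirming the half-maximum interpretation of gamma.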
So, what’s the variance and standard deviation of this distribution? If you do the integral to find out, you’ll run into the brick wall that makes this distribution so weird. The integral diverges, and so the variance is undefined. So are all the higher moments of the distribution. In fact, to be technical, the mean itself isn’t defined either, though we can take the center x0 of the distribution to more or less fill the same role in some cases. The parameter gamma is the half-width at half maximum, which is the best we can do in terms of quantifying width without a standard deviation.
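You can watch the divergence happen numerically. A quick sketch, comparing a standard Cauchy (x0 = 0, gamma = 1) against a standard normal using NumPy's built-in samplers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Standard Cauchy (Lorentz with x0 = 0, gamma = 1) vs. standard normal:
cauchy = rng.standard_cauchy(n)
normal = rng.standard_normal(n)

# The Gaussian sample variance settles near its true value of 1.
# The Cauchy sample variance is dominated by a handful of enormous
# outliers, and it grows without bound instead of converging.
print(normal.var())
print(cauchy.var())
```

The normal sample variance comes out very close to 1; the Cauchy one comes out huge, and drawing more samples only makes it worse on average.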
This is a pretty seriously weird situation for a physically realized probability distribution, as the absence of those moments means the distribution fails the hypotheses behind much of what we think of as “standard” behavior for probability. For instance, the central limit theorem, which requires a finite variance, doesn’t apply to the Lorentz distribution.
Weirder still, the strong law of large numbers doesn’t apply either. If you didn’t know x0, you might try to estimate it by taking a large number of samples and averaging them. If you did, you’d find that no matter how many samples you took, the average would refuse to settle down and converge. In fact, the average of n Lorentz-distributed samples has exactly the same Lorentz distribution as a single sample, so the uncertainty of the sample mean stays the same no matter how many times you sample. It’s seriously bizarre.
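This is easy to see numerically too. The sketch below repeats the averaging experiment 1000 times for Gaussian and Cauchy samples, then measures the spread of the resulting sample means with the interquartile range (since, again, there's no standard deviation to use for the Cauchy case):

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n = 1000, 10_000

# For each of 1000 trials, average n = 10,000 samples:
cauchy_means = rng.standard_cauchy((reps, n)).mean(axis=1)
normal_means = rng.standard_normal((reps, n)).mean(axis=1)

def iqr(a):
    """Interquartile range: spread between the 25th and 75th percentiles."""
    q1, q3 = np.percentile(a, [25, 75])
    return q3 - q1

print(iqr(normal_means))  # tiny: shrinks like 1/sqrt(n)
print(iqr(cauchy_means))  # still about 2, same as a single sample
```

The Gaussian sample means cluster ever more tightly as n grows, while the Cauchy sample means are spread exactly as widely as the raw samples (a single standard Cauchy draw has an IQR of 2). Averaging bought you nothing.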
In math as in life, we like to pretend that the things we encounter are well-behaved and play by the rules to which we’re accustomed. In math as in life, they don’t always oblige us.