The 2007 Abel Prize: Professor S. Varadhan and the Theory of Large Deviations

As an alert reader pointed out, a major mathematical prize was awarded recently. In 2002, the government of Norway established a prize modeled on the Nobel, but in mathematics. The prize was originally suggested by Sophus Lie, he of the Lie group, back in 1897, when he heard that Nobel was setting up his awards and would not be including mathematics. The prize is named after Niels Abel, the Norwegian mathematician who discovered the class of functions now known as Abelian functions; the same Abel after whom Abelian groups are named.

Anyway, this year the Abel prize was awarded to Srinivasa Varadhan, an Indian mathematician who is currently a professor at the NYU Courant Institute (http://math.nyu.edu/faculty/varadhan/).
Professor Varadhan's specialty is probability theory - in particular, the theory of large
deviations. In honor of Professor Varadhan's award, I thought it would be interesting to
very briefly explain what the theory of large deviations is, and why it's so
important that it justified the award of a million-dollar prize.

In basic statistics, we generally talk about things like "normal distributions". The idea is that most of the time, when we look at random processes, we'll see a certain
pattern emerge. The traditional example of this is coin flipping: if we flip a fair coin enough times, we expect to see the ratio of head-flips to tail-flips get very close to 1:1. Not just that, but we also expect that if we repeatedly flip a coin 1000 times, and plot a histogram showing how many heads we flipped each time, we'll get a set of results that form a bell curve around 500.

Much of the basic study of probability focuses on the common cases, reasoning about what we can predict about the most likely outcomes. So in our coin-flipping example, basic statistics would talk about how we'd compute the expected outcome (500 heads in a series of 1000 flips), what we'd expect the standard deviation to be, and so on.
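To make those numbers concrete, here's a minimal sketch in Python (my own illustration, not from the post): for 1000 flips of a fair coin, the expected count of heads is n·p = 500, the standard deviation is sqrt(n·p·(1-p)) ≈ 15.8, and a simulation agrees.

```python
import math
import random

n, p = 1000, 0.5

# Expected number of heads, and the standard deviation of that count.
mean = n * p                         # 500.0
std = math.sqrt(n * p * (1 - p))     # about 15.8

# Simulate many 1000-flip experiments; the average head count over the
# experiments should land very close to the expectation.
random.seed(1)
trials = [sum(random.random() < p for _ in range(n)) for _ in range(2000)]
sample_mean = sum(trials) / len(trials)
```

Plotting `trials` as a histogram would show exactly the bell curve around 500 described above.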

Large deviations theory asks a very different question. It asks: what about the unlikely outcomes? Given information about probability distributions, what can we say about outcomes that are significantly different from the expectation? In particular, what can we say about how the probability of an outcome varies as it gets increasingly distant from the expectation?
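For the coin example we can watch this fall-off directly. The following is a rough illustration in Python (mine, not the post's): exact binomial tail probabilities for 1000 fair flips shrink at a dramatically accelerating rate as the threshold moves away from the expected 500.

```python
import math

# Exact probability of getting at least k heads in n flips of a fair
# coin, computed from the binomial distribution.
def tail_prob(n, k):
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2**n

n = 1000
p550 = tail_prob(n, 550)  # a modest deviation
p600 = tail_prob(n, 600)  # much further out
p700 = tail_prob(n, 700)  # an extreme outlier
```

Each step of 50 or 100 extra heads doesn't just lower the probability; it lowers it by many orders of magnitude, which is the exponential decay that large deviations theory quantifies.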

Why is this important? Well, the field was founded by a mathematician who worked for an
insurance company. In general, an insurance company's premiums are set based on the
statistics of past years. For example, a flood insurance company predicts how much money it
will need to pay out based on how much it had to pay to cover flood damage over a range of
past years. But, as we all learned after Hurricane Katrina, there are years that are
dramatically different from the norm - years where the company has to pay out a whole lot more money than it would in a normal year.

So an insurance company needs to know not just the expected amount that it will probably
have to pay out in a given year, but also the likelihood of the year being one of those
bizarre outliers where the damage is an order of magnitude or two outside of the norm.

Large deviation theory provides a way of analyzing large quantities of data, and using them to predict not just what the most likely value of some statistic is, but also what the probability is of large deviations from that value. Doing that is hard, and
requires a lot more information, and a lot more mathematical work, than basic probability theory.

Professor Varadhan derived something called the Large Deviation Principle, which provides a way of working out probabilities of these kinds of large deviations. And as I said - it's incredibly hard. Varadhan's work takes nonlinear analysis, partial differential equations, and about a half-dozen other incredibly difficult fields of math -
and puts them all together to derive a comprehensive theory of large deviation probability. This is probably the single most important advance in the mathematics of probability since Bayes.
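To give a taste of what the principle says in its very simplest instance (this example is my own, and vastly simpler than Varadhan's general theory): for n fair coin flips, Cramér's theorem says the probability of getting at least a fraction a > 1/2 of heads behaves like exp(-n·I(a)), where I is a "rate function" computable from the distribution. A Python sketch:

```python
import math

# Cramér rate function for coin flips with heads-probability p:
# the relative entropy between a coin biased to a and the true coin.
def rate(a, p=0.5):
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

# Exact probability of at least k heads in n fair flips.
def tail_prob(n, k):
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2**n

# The observed exponential decay rate -log(P)/n should approach the
# rate function's prediction I(a) as n grows.
n, k = 1000, 600
a = k / n
observed = -math.log(tail_prob(n, k)) / n
```

For n = 1000 the observed exponent already agrees with rate(0.6) to leading exponential order, and the match tightens as n grows; the rate function captures exactly how fast the outliers become rare.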

Aside from the financial applications of Professor Varadhan's work, it has also been a
major contribution to modern physics. Large deviation probability lets us better understand
the possible behaviors of groups of particles whose individual behaviors can only be
predicted probabilistically. In particular, when we look at things like thermodynamics,
what we get is a description of the probabilistic behaviors of individual particles; and what we want is a description of how a huge number of those particles will behave in aggregate. Professor Varadhan has collaborated with physicists in applying his theory of large deviations to derive results in thermodynamic physics.


Great post. I will definitely be reading more on this area of probability theory.

By madsocialscientist (not verified) on 12 Apr 2007 #permalink

Thanks for the heads up. I work in insurance and I definitely have a need for this. My focus lately is the distribution of equity returns. Lots of people use lognormal distributions, but some of the large single-day drops we actually see are just too extreme to ever occur under a lognormal.

Nice post Mark. Although the theory of large deviations is of tremendous importance, I wouldn't go so far as to say that it's the most important advance in probability since Bayes. Also, aside from unifying the theory of large deviations, Varadhan also contributed extensively (with Stroock) to the theory of stochastic differential equations. This work was of great importance as well.

I should mention here that Kiyoshi Ito won the inaugural Gauss Prize last year for the development of the Ito integral (an integral with respect to a random process). This achievement, back in the '40s, is the basis upon which SDEs rest, and hence much of mathematical finance. I guess it's been a good year for probabilists :)

For example, a flood insurance company predicts how much money it will need to pay out

And long before that, communities' civil engineers have needed to dimension their works against the statistics of the odd outliers, such as the 100-year flood water level, et cetera.

I didn't know of this interesting work, but it seems like the rare case where great theoretical advances have great practical applications as well. Actually, my usual complaint here is that these awards are backwards - I consider math noble and science able. Ah, well.

By Torbjörn Larsson (not verified) on 12 Apr 2007 #permalink

Dr. Varadhan's elder son was killed in the terrorist attacks on the WTC on 9/11. Varadhan studied with C. R. Rao at the Indian Statistical Institute and fielded some very interesting questions from a person in the audience at his defence. Later he learnt that the questioner was none other than Kolmogorov, whose visit to the ISI Rao had scheduled to coincide with his student's defence. And isn't the Cramer of large deviations the same one as in the Rao-Cramer (Cramer-Rao) inequality? Coincidences all the way! Now is that a large or small deviation!

Minor nits: We don't usually omit "Henrik" from the name of Niels Henrik Abel. The name "Niels Abel" elicits the same "huh?" response in me as "Edgar Poe" would.

And the first Abel prize was awarded in 2003, not 2002.

Mark,

How does statistics differ from mathematics?

This type of probability theory likely also has applications in stochastic game theory [cooperative and noncooperative] and Max-Plus Algebras.

(1) "How does statistics differ from mathematics?" Sometimes they are two separate departments of the same university. For example, Dr. Gary Lorden, Executive Officer of Mathematics at Caltech and Math Advisor to the hit CBS-TV show NUMB3RS, told me that he technically considers himself a "Professor of Statistics," although I did take an advanced probability course from him. And Probability is not the same as Statistics.

(2) "... Abel group work was so important that he was honored by lower case rather than upper" as also with boolean for George Boole.

(3) "English writers do not usually capitalize the eponyms "shrapnel" (Henry Shrapnel, 1761-1842), "diesel" (Rudolf Diesel, 1858-1913), "saxophone" (Adolphe Sax, 1814-1894), "baud" (Emile Baudot, 1845-1903), "ampere" (Andre Ampere, 1775-1836), "chauvinist" (Nicolas Chauvin, 1790-?), "nicotine" (Jean Nicot, 1530-1600) or "teddy bear" (Theodore Roosevelt, 1858-1916).

However, we do capitalize "Darwinian evolution", "Victorian morality", "Elizabethan plays", "Dickensian stories", "Machiavellian politicians" and "Orwellian surveillance", so perhaps we should capitalize "Boolean logic".

I think the actual guideline to follow here is that we capitalize eponyms only if they are adjectives. Once they become nouns, we quickly stop capitalizing them.

Of course, that then doesn't explain why we do not capitalize "caesarian section" or "draconian measures". Expecting that much consistency from English is asking rather too much."

http://blogs.msdn.com/ericlippert/archive/2006/10/31/boolean-or-or-bool…


Missed this...

One question: how does the theory of large deviations differ from Extreme Value Theory (something I've been learning about from my students)?

As to how statistics differs from mathematics: well, as statisticians, we rely on mathematics to give us the foundation for our analyses, e.g. giving the equations for estimating the slope in a regression. But statistics is much more than mathematics: for example, we have to know how to present our analyses (graphs etc.), and to understand the different approaches to analysis, their strengths and weaknesses, and how to check our models for adequacy.

And that's before we get on to extraneous things like programming, and communication....

Bob