If you’re reading this shortly after it’s posted, you may notice ads for this book popping up in the sidebar and on top of the page. This is probably not entirely a happy coincidence– I was offered a review copy by email from the author and his publisher, and I suspect that they had ScienceBlogs on their radar as a likely forum for web publicity.

With a title like The Drunkard’s Walk, the book could be about one of two things, and the subtitle “How Randomness Rules Our Lives” pretty much rules out any Hunter S. Thompson style gonzo ranting. This is a book about probability and statistics, and how they are widely misunderstood and misapplied.

I jumped at a review copy of this because these are topics I don’t understand nearly as well as I should, and I was hoping for a nice accessible discussion of the topic. I wasn’t disappointed– Mlodinow writes clearly and compellingly about the basic rules of probability and how they apply to everyday situations. And his discussion of Bayesian statistics is the first thing I’ve read on the topic that makes it make sense.

At times, I actually found myself wishing that the book contained more equations. Not because the writing was unclear without them, but because the plain-language explanations were better than anything else I’ve read on the subject. If I had had a textbook that put together detailed equations with explanations this clear, I would probably understand probability and statistics as well as I ought to.

The book mixes straightforward explanations of the basic concepts with historical anecdotes about the development of mathematical techniques for coping with randomness, going all the way back to the ancient Greeks. These include a few obvious people– Pascal, the Bernoullis– but also a number of less well-known figures, or at least people whom I had never heard of, like Cardano and Quetelet.

The book is also rich with references to recent applications of the key ideas, from the ever-popular Monty Hall problem to the behavioral-economics studies of Kahneman and Tversky, to high-profile Hollywood studio firings and quirks of national lotteries. If there’s any weakness in the book, it’s that these examples sometimes come a little too thick and fast, but he keeps returning to a few recurring themes, and these serve to anchor the book pretty well.
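For anyone who has never quite believed the Monty Hall answer, the quickest way to convince yourself is to simulate it. This isn’t from the book– just a short sketch of the standard version of the puzzle (prize behind one of three doors, host always opens a goat door, player may switch):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: prize behind a random door; player picks door 0;
    host opens a door hiding a goat; player optionally switches."""
    prize = random.randrange(3)
    pick = 0
    # Host opens a door that is neither the player's pick nor the prize.
    host_opens = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in range(3) if d != pick and d != host_opens)
    return pick == prize

random.seed(0)
n = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(n)) / n
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(n)) / n
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```

Staying wins only when your first guess was right (probability 1/3), so switching wins the remaining 2/3 of the time– which the simulation confirms.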

All in all, I think this is an excellent layman-level introduction to the subjects of probability and statistics. Of course, I’ve already admitted the deficiencies of my knowledge in these areas, so it may be that people with a better background will find nits to pick, but as far as I can tell it is mathematically solid. And the writing is clear, easily readable, often funny, and never intimidating. I definitely recommend it to anyone looking for a fun book about the mathematics of random chance.

Comments

  1. #1 dave X
    May 30, 2008
  2. #2 Ambitwistor
    June 3, 2008

    If you want to read more about Bayesian statistics, at a level with equations, I highly recommend Data Analysis: A Bayesian Tutorial by Sivia and Skilling. It is a mercifully thin and relatively gentle text, aimed at undergraduates. Sivia is a chemist and Skilling is a physicist.

    Other Bayesian books by physicists are Bayesian Reasoning in Data Analysis: A Critical Introduction by D’Agostini and Probability Theory: The Logic of Science by Jaynes. The book by D’Agostini, a HEP experimentalist, is the most useful as far as practical analysis goes. It gets polemical in places against “orthodox” statistics. The book by Jaynes, who did stat mech, is more on the philosophical underpinnings of Bayesian theory and the logical arguments which lead one to this type of inference. Jaynes was very polemical.

    The key thing to realize about Bayesian statistics is that it’s not really about prior knowledge or subjectivity. Everyone obsesses about those aspects of it, but they’re side effects. The main point of Bayesian statistics is that you condition your inferences on the observed data, and infer the probabilities of possible hypotheses. In traditional statistics, you condition on an assumed hypothesis, and deduce the probabilities of different kinds of data you could have observed. They answer different questions. Arguably, scientists are more often interested in the former than the latter.

    I myself have used Bayesian methods to good effect in a nonlinear parameter estimation / changepoint problem for thin film magnetoresistance.
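    The conditioning distinction above is easy to see in a toy example. This is just an illustrative sketch (the numbers are made up): suppose a coin is either fair (p=0.5) or biased (p=0.8), with equal prior probability, and we observe 8 heads in 10 flips.

    ```python
    from math import comb

    # Hypothetical setup: coin bias is either 0.5 or 0.8, equal priors.
    heads, flips = 8, 10
    hypotheses = {0.5: 0.5, 0.8: 0.5}  # {bias: prior probability}

    def likelihood(p, k, n):
        """P(data | hypothesis): chance of k heads in n flips given bias p."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # Traditional framing: condition on an assumed hypothesis and
    # compute the probability of the observed data under it.
    for p in hypotheses:
        print(f"P(8 heads in 10 | p={p}) = {likelihood(p, heads, flips):.4f}")

    # Bayesian framing: condition on the observed data and infer the
    # probability of each hypothesis via Bayes' theorem.
    evidence = sum(likelihood(p, heads, flips) * prior
                   for p, prior in hypotheses.items())
    posterior = {p: likelihood(p, heads, flips) * prior / evidence
                 for p, prior in hypotheses.items()}
    for p, post in posterior.items():
        print(f"P(p={p} | 8 heads in 10) = {post:.3f}")
    ```

    The likelihoods answer “how probable is this data under a fixed hypothesis?”, while the posterior answers “given this data, how probable is each hypothesis?”– the question the scientist usually cares about.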
