Over at the New York Times’ Freakonomics blog, Justin Wolfers gets into the March Madness spirit by reporting on a study of basketball games that yields the counter-intuitive result that being slightly behind at halftime makes a team more likely to win. It comes complete with a spiffy graph:

Explained by Wolfers thusly:

The first dot (on the bottom left) shows that among those teams behind by 10 points at halftime, only 11.8 percent won; the next dot shows that those behind by 9 points won 13.9 percent, and so on. The line of best fit (the solid line) shows that raising your halftime lead by two points tends to be associated with about an 8 percentage-point increase in your chances of winning, and this is a pretty smooth relationship.

But notice what happens when we contrast teams that are one point behind at halftime with teams that are one point ahead: the chances of winning suddenly fall by 2.4 percentage points, instead of rising by 8 percentage points.

This has an explanation drawn from behavioral economics, which you can go read for yourself. Like all behavioral just-so stories, it seems really plausible. Plus, the trend in the data is really striking. I mean, just look at that graph!

However, I took the liberty of re-plotting their data:

I reconstructed the data by the brute-force method of counting pixels in the GIMP and plugging the results into SigmaPlot. It's not quite perfect, but it's close enough for government work. Then I fit a straight line to the whole data set (slope of 0.0377, intercept of 0.5157, R^2 = 0.98398; other statistical measures available upon request).
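For the curious, there's nothing exotic about the fit itself. Here's a minimal sketch of an ordinary least-squares line fit in Python with numpy; since I'm not reproducing the pixel-counted values here, the data are synthetic stand-ins scattered around the quoted fit line, not the real numbers:

```python
# Minimal sketch of the straight-line fit described above.
# The data are SYNTHETIC stand-ins generated from the quoted fit line
# (slope 0.0377, intercept 0.5157) plus noise -- not the actual
# pixel-counted values from the graph.
import numpy as np

rng = np.random.default_rng(0)
margin = np.arange(-10, 11)  # halftime margin, -10 through +10 points
win_prob = 0.5157 + 0.0377 * margin + rng.normal(0, 0.01, margin.size)

# Ordinary least-squares fit of a degree-1 polynomial (a straight line)
slope, intercept = np.polyfit(margin, win_prob, 1)

# R^2 computed from the residuals of the fitted line
pred = slope * margin + intercept
ss_res = np.sum((win_prob - pred) ** 2)
ss_tot = np.sum((win_prob - win_prob.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope={slope:.4f} intercept={intercept:.4f} R^2={r_squared:.4f}")
```

Note that a slope of roughly 0.038 per point of halftime margin is consistent with Wolfers's own gloss that two extra points of halftime lead buy about an 8 percentage-point gain in win probability (2 × 0.0377 ≈ 7.5 points).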

And, funny enough, with the straight line there, the difference between leading by one and trailing by one doesn’t look so dramatic, does it? Amazing how excluding the “tie score” point, doing a complicated polynomial fit, and extending it to the un-physical value of a half-point deficit guides the eye, no?
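To see how much work that presentation does, here's a toy demonstration of the trick with made-up data (a perfectly smooth line plus noise, not Berger and Pope's numbers): fit each side of zero separately, drop the tie-score point, and extrapolate both fits out to the impossible half-point margins. The extrapolated "gap" you read off at ±0.5 is a creature of the fitting procedure, and with noisy data it generically differs from the smooth one-point step the underlying line actually implies:

```python
# Toy demonstration of the guide-the-eye trick: data generated from a
# perfectly smooth line (no discontinuity at zero), then fit in two
# halves with the tie point excluded and extrapolated to +/- 0.5.
# All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
margin = np.arange(-10, 11)
win_prob = 0.5157 + 0.0377 * margin + rng.normal(0, 0.02, margin.size)

behind = margin < 0
ahead = margin > 0  # margin == 0, the tie point, is excluded from both

# Separate cubic fits to each side, then extrapolated past the data
fit_behind = np.polyfit(margin[behind], win_prob[behind], 3)
fit_ahead = np.polyfit(margin[ahead], win_prob[ahead], 3)
apparent_gap = np.polyval(fit_ahead, 0.5) - np.polyval(fit_behind, -0.5)

# On the true underlying line, going from -0.5 to +0.5 changes win
# probability by exactly one point's worth of slope -- no jump at all:
true_step = 0.0377 * 1.0
print(f"apparent gap: {apparent_gap:.4f}, true step: {true_step:.4f}")
```

The point is that any apparent discontinuity at zero produced this way is an artifact of split fits and out-of-range extrapolation, since the data it was fit to contain no discontinuity whatsoever.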

This is not to say that the researchers whose paper Wolfers is drawing on (Jonah Berger and Devin Pope) don't have a real point. Wolfers describes some laboratory tests of the supposed phenomenon that certainly sound more scientific (he also links to their full paper, which I don't have time to read, but knock yourself out).

The problem is, this sort of how-to-lie-with-graphical-presentation horseshit makes it much harder for me to take the whole thing seriously. And, by extension, makes me cast a more skeptical eye on the whole field of behavioral economics.

(A tip of the hat to Matthew Merzbacher on a mailing list, who pointed out the fit extension to half-a-point, thus triggering this post.)