Last year, a Cornell University economist named Michael Waldman noticed a strange correlation: the more precipitation a region received, the more likely children were to be diagnosed with autism.
This correlation soon led Prof. Waldman to conclude that something children do more during rain or snow — perhaps watching television — must influence autism. Last October, Cornell announced the resulting paper in a news release headlined, “Early childhood TV viewing may trigger autism, data analysis suggests.”
The resulting paper was a nifty trove of complicated statistics and unexpected correlations. But it was the rumor of causation — the possibility that television might trigger autism — that made the paper so instantly notorious. An interesting article in the WSJ explores whether economists should even be asking such questions:
Prof. Waldman’s willingness to hazard an opinion on a delicate matter of science reflects the growing ambition of economists — and also their growing hubris, in the view of critics. Academic economists are increasingly venturing beyond their traditional stomping ground, a wanderlust that has produced some powerful results but also has raised concerns about whether they’re sometimes going too far. …
Such debates are likely to grow as economists delve into issues in education, politics, history and even epidemiology. Prof. Waldman’s use of precipitation illustrates one of the tools that has emboldened them: the instrumental variable, a statistical method that, by introducing some random or natural influence, helps economists sort out questions of cause and effect. Using the technique, they can create “natural experiments” that seek to approximate the rigor of randomized trials — the traditional gold standard of … research. …
But as enthusiasm for the approach has grown, so too have questions. One concern: When economists use one variable as a proxy for another — rainfall patterns instead of TV viewing, for example — it’s not always clear what the results actually measure. Also, the experiments on their own offer little insight into why one thing affects another.
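The instrumental-variable logic the WSJ describes can be made concrete with a toy simulation. The sketch below is purely illustrative and uses invented numbers, not Waldman's data: a variable `Z` (think rainfall) shifts an endogenous regressor `X` (think TV hours) but affects the outcome `Y` only through `X`, while an unobserved confounder `U` drives both `X` and `Y`. Naive regression is biased by `U`; the IV estimator recovers the true effect by using only the `Z`-driven variation in `X`.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: Z is the instrument (e.g. rainfall), U is an
# unobserved confounder, X is the endogenous regressor (e.g. TV hours).
Z = rng.normal(size=n)
U = rng.normal(size=n)
X = 0.8 * Z + U + rng.normal(size=n)        # X depends on Z and on U
Y = 2.0 * X + 3.0 * U + rng.normal(size=n)  # true causal effect of X is 2.0

# Naive OLS slope is biased upward, because U pushes X and Y together.
ols = np.cov(X, Y)[0, 1] / np.var(X)

# IV (Wald) estimator with one instrument: cov(Z, Y) / cov(Z, X).
# Since Z is independent of U, this isolates the causal effect of X.
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"OLS estimate: {ols:.2f}")  # biased well above 2.0
print(f"IV estimate:  {iv:.2f}")   # close to the true 2.0
```

The catch the article raises is visible here too: the IV estimate is only valid if the instrument really affects the outcome through no other channel — an assumption the data alone cannot verify.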
My own opinion is that, as long as we recognize the limitations of these economic approaches, other fields (like neuroscience) should welcome them. Detecting statistical correlations among vast data sets is a valuable tool, especially when it comes to generating surprising hypotheses. (Of course, we need to remember that correlation is not causation.) Whether or not these hypotheses turn out to be true is a completely separate matter.* But the worst that can happen is that an intriguing idea gets falsified.
What do you think? Should scientists welcome the statistical speculations of economists?
*Perhaps I’m being too sanguine about the empirical potential of economics. After all, economists still assume humans are rational agents, a psychological premise that was debunked decades ago.