Why the Op-Ed Pages Should Not Be the Sole Purview of Humanities Majors

Nicholas Kristof has done some excellent reporting on the issues facing the developing world. But he is a case study in how reporting and analysis are not necessarily part of the same skill set. In Thursday's column, Kristof writes (italics mine):

When I was in college, I majored in political science. But if I were going through college today, I'd major in economics. It possesses a rigor that other fields in the social sciences don't -- and often greater relevance as well. That's why economists are shaping national debates about everything from health care to poverty, while political scientists often seem increasingly theoretical and irrelevant.

Economists are successful imperialists of other disciplines because they have better tools. Educators know far more about schools, but economists have used rigorous statistical methods to answer basic questions: Does having a graduate degree make one a better teacher? (Probably not.) Is money better spent on smaller classes or on better teachers? (Probably better teachers.)

Oh boy. If I hadn't been barfing my guts out earlier this week, I would have hit the bar--at noon. Let's leave aside the basic problem that Brad DeLong and Dean Baker have both discussed (with DeLong in the role of penitent)--most of the economics profession that was "shaping national debates" fundamentally missed the collapse of Big Shitpile and the ensuing economic pandimensional clusterfuck. Better tools, indeed.

No, this was the part that nearly drove me to drink:

Economists are successful imperialists of other disciplines because they have better tools. Educators know far more about schools, but economists have used rigorous statistical methods to answer basic questions.

You mean like value-added testing? A method so imprecise that estimates of the same teacher's performance can range from utter failure to 'grant her tenure'? A method which, depending on the test used, can rank the same teacher in the top fifth of performers or in the bottom fifth? A method that concludes fifth-grade teachers have significant effects on fourth-grade performance--which means that either the methodological assumptions of the tests are violated or school systems are able to fundamentally alter space-time.
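To make the imprecision concrete, here's a toy simulation (mine, not taken from any of the studies above) of what happens when a teacher's 'value added' is just the average gain of a class of 25 noisy test-takers. The effect sizes and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 200, 25  # hypothetical numbers, chosen for illustration

# Each teacher has a modest 'true' effect (in units of student-score
# standard deviations); each student's measured gain adds a lot of noise.
true_effect = rng.normal(0.0, 0.1, n_teachers)

def observed_value_added(effects):
    """A teacher's measured value added: the class-average student gain."""
    noise = rng.normal(0.0, 1.0, (len(effects), n_students))
    return (effects[:, None] + noise).mean(axis=1)

year1 = observed_value_added(true_effect)
year2 = observed_value_added(true_effect)  # same teachers, a fresh class

top_fifth = year1 >= np.quantile(year1, 0.8)
print("year-to-year correlation:",
      round(float(np.corrcoef(year1, year2)[0, 1]), 2))
print("share of year-1 top-fifth teachers below the median in year 2:",
      round(float((year2[top_fifth] < np.median(year2)).mean()), 2))
```

In this toy world the year-to-year correlation comes out around 0.2, and a sizable share of the 'top fifth' teachers land below the median the following year--exactly the kind of instability the critiques describe.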

I don't think Kristof is a bad guy (even if I don't always agree with him). But if you're going to argue for analytical superiority, you need some statistical bona fides of your own. Sadly, Kristof appears to be impressed by math that merely looks complex (much of it isn't, once you've had some training).

To top it all off, Kristof confuses statistics--which is a very useful thing to understand--with economists who use statistics (not always wisely). Plenty of other disciplines use statistics: biology, for one. And, of course, there are statisticians.

Better pundits, please.


I blame the muppethumping Swedish bank, for giving an illusory legitimacy to economics as a discipline in which it is possible to make a contribution that helps humanity.
If you do *applied* economics, and actually, you know, *help* people, we can always give you a *real* Nobel (like the Peace Prize for microfinance). I see no purpose for the Swedish bank prize other than to make insufferable wonks at U of Chicago believe their institution is more prestigious than it is.

Economics? Rigorous? What a fscking joke.

By Andrew G. on 22 May 2011

1. Your disdain for macroeconomists has nothing to do with whether microeconomic studies that use econometrics have merit.

2. The paper you cite, showing that 5th-grade teachers can sometimes predict 4th-grade value added, is by an economist using econometric methods.

3. Kristof is reacting in particular to the random assignment experiments in development economics policy described in recent books by Dean Karlan and Esther Duflo. These books describe some social experiments that I think do make some considerable contribution to human knowledge about what works in Third World anti-poverty policy. If you don't want to read the books, TED has an excellent video by Esther Duflo that summarizes some of this work.

4. As an economist, my impression is that economics has done a better job than most other scientific disciplines of using statistical methods to explore how non-experimental data can and can't be used to examine issues of causation. This has been forced on economists by the frequent lack of experimental data on many important economic topics. Natural scientists can in many cases use simpler statistical methods because of the greater availability of experimental data.

Now, economists can get carried away and assume that sophisticated econometrics applied to non-experimental data always yields conclusions as robust as those based on experimental data. On the other hand, the econometrics literature that grapples with selection bias, omitted variable bias, and endogeneity, and develops statistical techniques to overcome these problems, is a genuine contribution of economics, or at least of econometricians, to scientific methodology.
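A minimal sketch of the problem and one of the fixes (my own construction, with made-up coefficients, not from any particular study): regress wages on schooling when unobserved ability drives both, and the naive estimate is biased; an instrument that shifts schooling but has no direct effect on wages recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Unobserved confounder ('ability') raises both schooling and wages.
ability = rng.normal(size=n)
# The instrument shifts schooling but has no direct effect on wages.
instrument = rng.normal(size=n)
schooling = 0.5 * ability + 0.8 * instrument + rng.normal(size=n)
wage = 1.0 * schooling + 1.0 * ability + rng.normal(size=n)  # true effect: 1.0

# Naive OLS omits ability and absorbs part of its effect into schooling.
naive = np.cov(schooling, wage)[0, 1] / np.var(schooling)

# IV (Wald) estimator: the instrument's effect on wages divided by
# its effect on schooling.
iv = np.cov(instrument, wage)[0, 1] / np.cov(instrument, schooling)[0, 1]

print(f"naive OLS: {naive:.2f}   IV: {iv:.2f}   (true effect: 1.00)")
```

In this simulation the naive estimate lands around 1.26, while the IV estimate recovers roughly 1.0, because the instrument is, by construction, unrelated to ability.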

@Tim Bartik: my snark toward economists aside, thank you for bringing Esther Duflo to my attention. Her TED talk is fantastic.
That said, wouldn't the studies she describes be examples of economics taking a tool from an experimental science (the randomized controlled trial, from experimental medicine) and applying it to a (micro?) economic system, rather than of economics contributing tools of statistical analysis to another non-experimental field?
Also, while it's easy when talking with biomedical researchers to say 'you can use simpler statistics because you can do experiments', I don't think the same reasoning applies to the bulk of astronomers, geologists, ecologists, or theoretical physicists, who cannot examine causation in the same way. Even I wouldn't say economists have made no useful contributions, but are you really saying that economists do a better job than all those other disciplines?

Becca:

You are quite right to note that the Duflo/JPAL/Karlan/IPA approach is an application of classical experimental methods. This is one of its strengths.

I can hardly claim expertise on the exact statistical methods used by the various natural science disciplines. My impression is that psychologists have done a great deal with statistical methods for handling measurement error and for working with multiple measures of concepts that cannot be measured directly. Economists are not particularly strong on that front, because historically the discipline assumed it was dealing with data on things like tons of steel and the price per ton of steel, which can be objectively measured.

However, with respect to inferring causation from non-experimental data, my impression is that econometrics (the economics branch of statistics) has done much more on this front than the natural sciences you mention. This is the bulk of what is covered in econometrics courses and texts, with techniques such as instrumental variables, selection-bias corrections, regression discontinuity analysis, difference-in-differences analysis, etc. "Quasi-experimental" analysis is currently one of the main focal points of policy-oriented applied microeconomists.
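As a worked toy example of the simplest of these techniques, here is difference-in-differences with invented numbers: subtract the comparison group's before/after change from the treated group's, so that time trends shared by both groups cancel out.

```python
# Difference-in-differences with made-up numbers (illustration only).
treated_before, treated_after = 60.0, 68.0  # e.g. mean scores in a state adopting a policy
control_before, control_after = 61.0, 64.0  # a comparison state without the policy

# Shared trends cancel; what remains is attributed to the policy.
effect = (treated_after - treated_before) - (control_after - control_before)
print(f"difference-in-differences estimate: {effect:.1f}")  # (68-60) - (64-61) = 5.0
```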

Now, perhaps there is some similar body of work in some of these natural science disciplines that focuses on uncovering causation from non-experimental data. The various disciplines go their own way, and I certainly would not claim to be any expert on exactly what, for example, astronomers do in applied analysis of empirical data. However, my impression is that economists do more than other disciplines in seeking to infer causation from non-experimental data. But I'm not quite sure how I can prove that proposition.

This is by no means meant as a put-down of other disciplines. Different disciplines develop the mathematical tools they need to fulfill their main goals. At least for policy-oriented economists, identifying causation from non-experimental data has a special importance that I suspect it does not have in other scientific disciplines. If you want to do policy analysis, identifying how the world differs due to policy X, compared to a hypothetical world that is otherwise identical but without policy X, is of prime importance.

Because Kristof and many others care about this type of causal policy analysis, I can see why he is drawn to what econometric methods have to offer.

To add a specific example of the power of non-experimental methods, consider regression discontinuity analysis. Among its many uses, it has been applied to the effects of state pre-k programs on kindergarten readiness. These analyses use test scores measured at entry to pre-k and at entry to kindergarten, together with the age cut-off for pre-k and kindergarten eligibility, to estimate the causal effects of state pre-k programs from non-experimental data. The studies have frequently found large effects of state pre-k programs on kindergarten readiness.
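A schematic version of the idea, on simulated data (not the actual NIEER or Tulsa data): scores trend smoothly with age, children past the cut-off attended pre-k, and the program's effect shows up as a jump in the fitted scores at the cut-off.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Age relative to the eligibility cut-off; children past it attended pre-k.
age = rng.uniform(-1.0, 1.0, n)
attended = age >= 0.0
# Scores trend smoothly with age, plus a jump at the cut-off (true effect: 8 points).
score = 50.0 + 5.0 * age + 8.0 * attended + rng.normal(0.0, 5.0, n)

# Fit a line on each side of the cut-off and measure the discontinuity at zero.
left = np.polyfit(age[~attended], score[~attended], 1)
right = np.polyfit(age[attended], score[attended], 1)
jump = np.polyval(right, 0.0) - np.polyval(left, 0.0)
print(f"estimated effect at the cut-off: {jump:.1f} points (true: 8.0)")
```

Because nothing else about children just below and just above the cut-off should differ systematically, the jump at zero can be read as the program's causal effect.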

I'm not sure if these comments allow links, but interested readers can look up "state pre-k studies" at the National Institute for Early Education Research at Rutgers, or the work on Tulsa pre-k by Bill Gormley and his colleagues at the CROCUS Center at Georgetown.