Today, Inside Higher Ed has an article about the recent decline of peer-reviewed papers authored by professors in top five economics departments in high profile economics journals. A paper by MIT economics professor Glenn Ellison, “Is Peer Review in Decline?,” considers possible explanations for this decline, and the Inside Higher Ed article looks at the possible impacts of this shift.
The alternative threatening the peer reviewed journals here is the web, since scholars can post their papers directly to their websites (or blogs) rather than letting them languish with pokey referees. But I think the issues here go beyond the tug-of-war between old media and new media and bring us to the larger question of just what is involved in building new scientific* knowledge.
One concern Ellison raises is that well-established economics professors who use their websites to post their new papers are tapping into a large audience (potentially much larger than they would have just publishing in the journals in their field), and they can get their work out quickly, but in the process they are circumventing the quality-control function peer review is supposed to play. The Inside Higher Ed article quotes Ellison:
“What I think is potentially problematic is if more top authors start withdrawing from the peer-reviewed journals, then the peer-reviewed journals become a less impressive signal of the quality of the paper.”
“What I worry about,” Ellison said, “is you get to a point where you can’t make a reputation for yourself by publishing in the peer-reviewed journals. That locks in today’s elite.”
Ellison says that the economists opting out of publishing in the peer-reviewed journals tend to be at the top of their profession, not those climbing the academic ladder and trying to make a name for themselves. Indeed, Ehud Kalai, a professor at Northwestern University’s Kellogg School of Management and editor of Games and Economic Behavior, points out that the internet won’t be putting the journal publishers out of business just yet:
“The other thing that’s a bit puzzling in this whole theory, it seems to me, is that with this explosion of information on the Internet, peer review has become even more needed because there are so many more papers,” Kalai said, adding that the number of economics journals has exploded in recent years. “They’re just multiplying like mad. If there is a trend not to publish, why are so many starting them?”
Of course, my interest in this story has less to do with the particular dynamics at work in the tribe of academic economists and the sorts of strategies to which these dynamics give rise than with the larger issue of how a scientific community understands the process of building and communicating knowledge.
It’s part of the standard picture of science that you can’t say you’ve built knowledge about a piece of the world until your results and interpretations have withstood the scrutiny of others who are working to understand the same piece of the world. Here’s how I described peer review in an earlier post:
The reviewer, a scientist with at least moderate expertise in the area of science with which the manuscript engages, is evaluating the strength of the scientific argument. Assuming you used the methods described to collect the data you present, how well-supported are your conclusions? How well do these conclusions mesh with what we know from other studies in this area? (If they don’t mesh well with these other studies, do you address that and explain why?) Are the methods you describe reasonable ways to collect data relevant to the question you’re trying to answer? Are there other sorts of measurements you ought to make to ensure that the data are reliable? Is your analysis of the data reasonable, or potentially misleading? What are the best possible objections to your reasoning here, and do you anticipate and address them?
While aspects of this process may include “technical editing” (and while more technical scrutiny, especially of statistical analyses, may be a very good idea), good peer reviewers are bringing more to the table. They are really evaluating the quality of the scientific arguments presented in the manuscript, and how well they fit with the existing knowledge or arguments in the relevant scientific field. They are asking the skeptical questions that good scientists try to ask of their own research before they write it up and send it out. They are trying to distinguish well-supported claims from wishful thinking.
Methodologically, peer review puts scientists into a dialogue with other scientists and presses them to be more objective. It’s not enough that you’re convinced of your finding — you have to convince someone who has officially assumed the role of the guy trying to find problems with your claims.
But, as we’ve discussed before, there are ways in which peer review as it happens on the ground departs from the idealized version of peer review:
In many instances, the people peer reviewing your manuscripts may well be your scientific rivals. Even if peer review is supposed to be anonymous, in a small enough sub-field people start recognizing each other’s experimental approaches and writing styles, making it harder to keep the evaluation of the content of a manuscript objective. And, peer reviewing of manuscripts is something working scientists do on top of their own scientific research, grant writing, teaching, supervision of students, and everything else — and they do it without pay or any real career reward. (This is not to say it’s only worth doing the stuff you get some tangible reward for doing, but it can end up pretty low in the queue.)
Some of this may explain why the typical submission-to-publication interval for economics papers is now around three years.
Is peer review an indispensable step that certifies a finding as “knowledge” before it’s disseminated? Or is it a force that just slows the release of knowledge that a vibrant research community could be using as the foundation for more knowledge?
One of the things I find striking about Ellison’s comments is their focus on the score-keeping aspect of the tribe of academic economists. In some ways, it sounds like these journals are primarily of value in building reputations, rather than communicating knowledge. Top professors who are drifting away from the journals are undercutting the prestige of those journals, thus hurting the prospects for an up-and-coming economist who hopes a publication in one of these journals will boost her reputation. Those who are currently the elites in the tribe are locked into their elite positions — positions that ensure that the papers they publish on their own websites will get plenty of attention within the tribe.
I’m not denying that the score-keeping dynamic is a real feature of one’s life in the academic world. However, the fact that the well-established scholars in a scientific community can get more of a hearing for their ideas based on the authority they’ve built up from prior works doesn’t mean that the scientific community should start accepting arguments from authority. Indeed, one of the features that is supposed to make science different from other human activities is resistance to arguments from authority. (And, as we admire the long and productive careers of the established scholars in our fields, we shouldn’t forget where crackpots come from.)
Another thing that’s odd here is that the internet has been heralded as a democratizing force, bringing information to more people and letting more of those people enter a conversation about that information — yet Ellison sees signs that the internet is entrenching existing hierarchies in economics. Perhaps this is inevitable when there is such an explosion of information; the only sensible plan is to get your information from a reliable source, rather than having to evaluate it all your own self. I wonder to what extent peer review may already be lulling readers, making them think, “Someone has already put this paper through the wringer, so I don’t need to be so skeptical myself as I read it.” Could it be that the caution scholars bring to papers published on the internet (without peer review) might better engage the skeptical faculties that, ideally, should always be running?
Finally, if it would be better for the scientific community if papers weren’t endpoints, serving mostly to add another notch to your CV, but rather parts of ongoing conversations meant to advance the knowledge of the community, could there be a real advantage to quick dissemination of findings coupled with something like peer review that happens in the open, as part of the conversation?