The Misuse of Journal Impact Factors

I've written about journal impact factors before, largely to argue that there are better statistics than the traditional impact factor. But an excellent editorial in the Oct. 10 issue of Science by Kai Simons points out a very obvious problem with how impact factors are used (italics mine):

Research papers from all over the world are published in thousands of Science journals every year. The quality of these papers clearly has to be evaluated, not only to determine their accuracy and contribution to fields of research, but also to help make informed decisions about rewarding scientists with funding and appointments to research positions. One measure often used to determine the quality of a paper is the so-called "impact factor" of the journal in which it was published. This citation-based metric is meant to rank scientific journals, but there have been numerous criticisms over the years of its use as a measure of the quality of individual research papers. Still, this misuse persists. Why?

That really is a basic misuse of the statistic, particularly when you consider the following:

This algorithm is not a simple measure of quality, and a major criticism is that the calculation can be manipulated by journals. For example, review articles are more frequently cited than primary research papers, so reviews increase a journal's impact factor. In many journals, the number of reviews has therefore increased dramatically, and in new trendy areas, the number of reviews sometimes approaches that of primary research papers in the field. Many journals now publish commentary-type articles, which are also counted in the numerator. Amazingly, the calculation also includes citations to retracted papers, not to mention articles containing falsified data (not yet retracted) that continue to be cited. The denominator, on the other hand, includes only primary research papers and reviews.
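The asymmetry Simons describes is easy to see if you write the calculation out. Here's a minimal sketch of the standard two-year impact factor with entirely hypothetical counts, showing how commentary and even retracted papers pad the numerator while only "citable items" appear in the denominator:

```python
# Sketch of the two-year journal impact factor (hypothetical counts).
# Numerator: citations received in year Y to anything the journal
# published in years Y-1 and Y-2 -- regardless of article type.
citations = {
    "research": 400,
    "review": 300,      # reviews attract disproportionate citations
    "commentary": 50,   # counted in the numerator...
    "retracted": 20,    # ...as are citations to retracted papers
}

# Denominator: only "citable items" (primary research and reviews)
# published in Y-1 and Y-2. Commentary doesn't count here.
items = {"research": 180, "review": 20, "commentary": 40}

numerator = sum(citations.values())                # 770
denominator = items["research"] + items["review"]  # 200

impact_factor = numerator / denominator
print(round(impact_factor, 2))  # 3.85
```

Note that a journal can raise this number without publishing better research: adding commentary pieces adds citations to the top of the fraction at no cost to the bottom.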

At some point, to accurately assess a scientist's body of work, you have to know the field. It can't be reduced to numbers.


I absolutely loathe the whole concept of an impact factor, and place little trust in it as a measure of the quality of work. Some journals publish mostly papers that are of interest to a broad range of disciplines. Others have a narrow scope, and are aimed at specialists. Of course the first type of journal is going to be cited more frequently.

By Julie Stahlhut (not verified) on 16 Oct 2008 #permalink

And of course, let's not forget that when you say "Bob et al. (2004) failed to quantify a key variable, making their conclusions dubious at best.", that's *still* a citation for that paper, no matter how bad it is.

A number of journal editors have taken to "suggesting" that authors reference other papers from their journal during the revision process. I think the eigenfactor is better, but it is still subject to such unethical abuse. I forget whether eigenfactors down-weight references to papers in the same journal, but they should. Consider: if the papers in a journal only reference other papers in the same journal, would you take that as a sign of a good journal?
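(As it happens, the Eigenfactor method does exclude journal self-citations.) A toy sketch of that kind of down-weighting, with made-up citation counts between three hypothetical journals:

```python
# Hypothetical citation counts: rows = citing journal, cols = cited journal.
# Journal A (index 0) cites mostly itself.
cites = [
    [90, 5, 5],
    [10, 40, 30],
    [10, 30, 40],
]

# Zero the diagonal so self-citations carry no weight,
# as the Eigenfactor method does before computing its rankings.
adjusted = [
    [0 if i == j else c for j, c in enumerate(row)]
    for i, row in enumerate(cites)
]

# Score each journal by citations received from *other* journals.
scores = [sum(adjusted[i][j] for i in range(3)) for j in range(3)]
print(scores)  # journal A's heavy self-citation no longer inflates it
```

With the diagonal zeroed, journal A's 90 self-citations count for nothing, and it scores below the two journals that actually cite each other.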

Also, it seems to me that impact factors discourage lengthy articles that include all the details. Sometimes a short article is appropriate, but science isn't always simple. There are several journals that I feel have reduced their important scientific content in pursuit of higher impact factors. At this point, I feel that impact factors are bad for science.
