The Misuse of Journal Impact Factors

I've written about journal impact factors before, largely to argue that there are better statistics than the traditional impact factor. But an excellent editorial in the Oct. 10 issue of Science by Kai Simons points out a very obvious problem with how impact factors are used (italics mine):

Research papers from all over the world are published in thousands of science journals every year. The quality of these papers clearly has to be evaluated, not only to determine their accuracy and contribution to fields of research, but also to help make informed decisions about rewarding scientists with funding and appointments to research positions. One measure often used to determine the quality of a paper is the so-called "impact factor" of the journal in which it was published. This citation-based metric is meant to rank scientific journals, but there have been numerous criticisms over the years of its use as a measure of the quality of individual research papers. Still, this misuse persists. Why?

That really is a basic misuse of the statistic, particularly when you consider the following:

This algorithm is not a simple measure of quality, and a major criticism is that the calculation can be manipulated by journals. For example, review articles are more frequently cited than primary research papers, so reviews increase a journal's impact factor. In many journals, the number of reviews has therefore increased dramatically, and in new trendy areas, the number of reviews sometimes approaches that of primary research papers in the field. Many journals now publish commentary-type articles, which are also counted in the numerator. Amazingly, the calculation also includes citations to retracted papers, not to mention articles containing falsified data (not yet retracted) that continue to be cited. The denominator, on the other hand, includes only primary research papers and reviews.
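To make that asymmetry concrete, here is a minimal sketch (in Python, with made-up numbers; none of the figures come from the editorial) of how a two-year impact factor is computed: citations to everything the journal published in the preceding two years go into the numerator, while only primary research papers and reviews are counted in the denominator.

```python
# Minimal sketch of the two-year journal impact factor calculation,
# illustrating the numerator/denominator asymmetry described above.
# All item counts below are invented for illustration.

def impact_factor(citations_to_all_items, citable_items):
    """Citations received this year to anything the journal published in
    the previous two years, divided by the number of 'citable items'
    (primary research papers and reviews only) from those two years."""
    return citations_to_all_items / citable_items

# The numerator counts citations to *everything* the journal published...
citations = {
    "research_articles": 400,
    "reviews": 300,          # reviews attract disproportionately many citations
    "commentaries": 80,      # commentary-type articles count in the numerator...
    "retracted_papers": 20,  # ...as do citations to retracted work
}

# ...but the denominator counts only research articles and reviews.
citable_items = 150 + 50  # 150 research articles + 50 reviews

numerator = sum(citations.values())
print(f"Impact factor: {impact_factor(numerator, citable_items):.2f}")        # 4.00
print(f"Without the asymmetry: {(400 + 300) / citable_items:.2f}")            # 3.50
```

With these toy numbers, the commentary and retracted-paper citations alone push the score from 3.5 to 4.0, which is exactly the kind of inflation the editorial is complaining about.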

At some point, to accurately assess a scientist's body of work, you have to know the field. It can't be reduced to numbers.


I absolutely loathe the whole concept of an impact factor, and place little trust in it as a measure of the quality of work. Some journals publish mostly papers that are of interest to a broad range of disciplines. Others have a narrow scope, and are aimed at specialists. Of course the first type of journal is going to be cited more frequently.

By Julie Stahlhut on 16 Oct 2008

And of course, let's not forget that when you say "Bob et al. (2004) failed to quantify a key variable, making their conclusions dubious at best.", that's *still* a citation for that paper, no matter how bad it is.

A number of journal editors have taken to "suggesting" that authors reference other papers from their journal during the revision process. I think the Eigenfactor is better, but it is still subject to that kind of unethical abuse. I forget whether Eigenfactor scores down-weight references to papers in the same journal, but they should. Consider: if the papers in a journal only ever cited other papers in that same journal, would you take that as a sign of a good journal?
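For what it's worth, the down-weighting mentioned above is easy to picture. Here is a toy sketch — not the actual Eigenfactor algorithm, and the journal names are invented — that simply ignores same-journal citations when counting how often a journal is cited:

```python
# Toy sketch of excluding journal self-citations from a citation count.
# This is NOT the Eigenfactor algorithm; it only illustrates the idea
# that coercive self-citation should not inflate a journal's score.

# Hypothetical citation records: (citing_journal, cited_journal)
citations = [
    ("J. Foo", "J. Foo"),   # self-citation: ignored below
    ("J. Bar", "J. Foo"),
    ("J. Baz", "J. Foo"),
    ("J. Foo", "J. Bar"),
]

def external_citation_count(journal, records):
    """Count citations a journal receives from *other* journals only."""
    return sum(1 for citing, cited in records
               if cited == journal and citing != journal)

print(external_citation_count("J. Foo", citations))  # -> 2, not 3
```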

Also, it seems to me that impact factors discourage lengthy articles that include all the details. Sometimes a short article is appropriate, but science isn't always simple. There are several journals that I feel have decreased their important scientific content in the pursuit of higher impact factors. At this point, I feel that impact factors are bad for science.