From the Archives: Like prices and hemlines, why do impact factors always go up?

It's that time again: the 2008 Journal Citation Reports are out from Thomson Reuters. The e-mails to listservs and the press releases have started already, too. So I'm re-posting one of my posts from my old blog for those of you who might not have seen it.

Like prices and hemlines, why do impact factors always go up?
Ever notice that certain time of year when every journal publisher announces that the impact factors of their journals are up? When the Journal Citation Reports (JCR) come out... the press releases follow. The impact factor is a measure of how important a journal is, as judged by how often its articles are cited. It's a rolling measure, so journals can't rest on their laurels (much), but there is time for the articles to actually be cited after they're published.

Impact factors of journals are a perennial discussion topic: used by libraries (along with other measures) for collection development and by researchers to decide where to publish. They're also misused, abused, and misunderstood. But this article isn't about all that. This article looks at whether impact factors are going up, which component of the impact factor contributes most to (or best explains) the increase, and whether the increase differs across disciplinary categories.

I'll try to use the same notation used in the article. Let's define the impact factor (IF) first. The IF of journal i takes the number of citations (n) that its articles from the previous two years (t-1 and t-2) receive in the current year (t), divided by the total number of articles (A) the journal published in those two years.
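In symbols (my paraphrase, so the exact notation may differ a bit from theirs):

$$\mathrm{IF}_i(t) = \frac{n_i(t)}{A_i(t-1) + A_i(t-2)}$$

where n_i(t) is the number of citations received in year t by journal i's articles from t-1 and t-2, and A_i(t-1), A_i(t-2) are the journal's article counts in those years.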

It's pretty clear from the numbers that impact factors have gone up in absolute terms, but the authors are interested in the average rate of change, so they create a weighted impact factor to account for the fact that some journals publish a lot more articles than others. The weighting factor for a journal is the number of articles it published in the previous two years divided by the total number of articles all of the indexed journals published over those two years.

The weighted impact factor (where S is the set of all JCR journals in year t) then works out to something like this (my reconstruction from the definitions above):
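$$w_i(t) = \frac{A_i(t-1) + A_i(t-2)}{\sum_{j \in S}\left[A_j(t-1) + A_j(t-2)\right]}, \qquad \mathrm{IF}_w(t) = \sum_{i \in S} w_i(t)\,\mathrm{IF}_i(t)$$

(The w_i(t) label is mine, not necessarily theirs.) Notice how it collapses: the per-journal article counts cancel, so the weighted impact factor is just the total citations received by JCR journals in year t divided by the total articles they published in the previous two years.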

So with that: yeah, pretty consistent growth, about 2.6% per year from 1994 to 2005.

To find out what caused the increase, they decompose it into these four factors:

1. whether there really are just more articles published in year t than in t-1 and t-2 (alpha sub t)
2. to what extent the new articles cite the past two years rather than older work (p sub t)
3. to what extent the new articles cite non-JCR journals (newer, regional, less well-respected, or too specialized) (v sub t)
4. how many references the new articles cite, i.e., how long the reference lists are (c sub t)

and through some math they get here:
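If I've followed their derivation correctly, the decomposition comes out to roughly

$$\mathrm{IF}_w(t) \approx \alpha_t\, c_t\, p_t\,(1 - v_t)$$

where alpha sub t is the ratio of articles published in year t to those from t-1 and t-2, c sub t is the mean number of references per article, p sub t is the fraction of those references pointing to the previous two years, and v sub t is the fraction going to non-JCR journals.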

I don't want to give away the whole article, so I encourage you to check out the math and the tables, but it turns out that c sub t is the only one that really matters. The increase is almost entirely due to growth in the number of references cited per article!

As for disciplinary differences: we know that the cited half-life in math, for example, is more than 10 years; mathematicians cite older stuff. In immunology it's under 6 years. We know that biomedical researchers cite a lot more articles than engineers do. There are just different ways of doing science, applied science, and math, so we expect some difference in impact factor, and it's really not at all a good idea to compare journals that serve different fields by impact factor. The authors make some interesting choices here, no doubt because they run the Eigenfactor site (stated explicitly as a competing interest). Instead of going with the familiar JCR subject categories, which are somewhat disputed, they use the 50 largest of the 88 non-overlapping categories they found using a random-walk method (see their PNAS article).

They calculate the weighted impact factor for each of these categories and, sure enough, math has a 0.56 weighted impact factor while molecular and cell biology has 4.76. The growth rate is highest for pharmacology (0.098) and negative for history (hm?). After assorted linear regressions and a hierarchical partitioning, it turns out that v sub t accounts for the largest part of the difference between the disciplines. Scroll up; that's right, citing non-JCR journals. CS and math suffer while biomed wins out.

An interesting article, and I really recommend reading it: it's very understandable, and you almost feel like your professor is carefully walking you through the steps. I always see the press releases, so it's nice to connect them to some sort of reason.

Reference:
Althouse, B.M., West, J.D., Bergstrom, C.T., Bergstrom, T. (2009). Differences in impact factor across fields and over time. Journal of the American Society for Information Science and Technology, 60, 27-34. DOI: 10.1002/asi.20936
