For those of you interested in the science publishing business, there is an interesting paper out about Impact Factors, in which the authors do the math to try to explain why IFs are apparently always rising from year to year, and to figure out the differences between disciplines. They remain pretty much agnostic about the whole hot controversy over the validity of the IF, but their data explain some facts about the IF that can be added, IMHO, to the growing list of reasons why the IF should be abandoned:
Althouse, B. M., West, J. D., Bergstrom, C. T., & Bergstrom, T. (in press). Differences in impact factor across fields and over time. Journal of the American Society for Information Science and Technology. DOI: 10.1002/asi.20936
The bibliometric measure impact factor is a leading indicator of journal influence, and impact factors are routinely used in making decisions ranging from selecting journal subscriptions to allocating research funding to deciding tenure cases. Yet journal impact factors have increased gradually over time, and moreover impact factors vary widely across academic disciplines. Here we quantify inflation over time and differences across fields in impact factor scores and determine the sources of these differences. We find that the average number of citations in reference lists has increased gradually, and this is the predominant factor responsible for the inflation of impact factor scores over time. Field-specific variation in the fraction of citations to literature indexed by Thomson Scientific's Journal Citation Reports is the single greatest contributor to differences among the impact factors of journals in different fields. The growth rate of the scientific literature as a whole, and cross-field differences in net size and growth rate of individual fields, have had very little influence on impact factor inflation or on cross-field differences in impact factor.
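(For a rough sense of the mechanism: a journal's two-year impact factor for year Y is, in essence, the number of citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. As an illustration, a journal that published 100 articles over 2006-2007 and whose articles picked up 250 citations in 2008 would have a 2008 IF of 2.5. Because the numerator counts citations received, longer reference lists across the whole literature push every journal's numerator, and hence its IF, upward - which is exactly the inflation the paper documents.)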
Christina looks at their math and summarizes the paper as well:
Impact factors of journals are a perennial discussion topic - used by libraries (with other measures) for collection development and by researchers to decide where to publish. They're also misused, abused, and misunderstood. But this article isn't about all that. This article looks at whether impact factors are going up, what aspect of the impact factor contributes most to or explains the increase, and whether the increase differs across disciplinary categories.
I've read that paper and at first I also thought this would be one more reason to abandon the IF. But after some more thought, I'm not so sure any more. If citation rates go up on average, and if citation rates differ between fields, shouldn't *any* citation-based measure reflect these trends?
So I think, on the contrary, this paper shows that the IF is indeed correlated with citation behaviour, despite its many flaws (how strong that correlation is depends, among other things, on exactly how the researchers counted citations).
What this research could do, however, is provide a way to normalize citation data, yielding a measure that neither rises with the average citation rate nor differs between fields. In other words, calculate a relative citation metric, relative to the time period and field of the publication.
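A minimal sketch of what such a relative metric could look like, assuming we simply divide each article's citation count by the average citation count for its field and publication year (the fields and numbers below are made up for illustration; none of this comes from the paper itself):

```python
# Minimal sketch (not from the paper) of a field- and year-normalized citation
# metric: each article's citation count divided by the average citation count
# for articles in the same field and publication year.
from collections import defaultdict

def relative_citations(articles):
    """articles: list of dicts with 'field', 'year', and 'citations' keys.
    Adds a 'relative_citations' key, where 1.0 means "cited exactly as much
    as the average article in its field and year"."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [citation sum, article count]
    for a in articles:
        key = (a["field"], a["year"])
        totals[key][0] += a["citations"]
        totals[key][1] += 1

    for a in articles:
        total, count = totals[(a["field"], a["year"])]
        baseline = total / count  # average citations for that field and year
        a["relative_citations"] = a["citations"] / baseline if baseline else 0.0
    return articles

# Made-up example: the same raw citation count means very different things
# in a heavily citing field than in a sparsely citing one.
papers = [
    {"field": "molecular biology", "year": 2006, "citations": 40},
    {"field": "molecular biology", "year": 2006, "citations": 10},
    {"field": "mathematics",       "year": 2006, "citations": 10},
    {"field": "mathematics",       "year": 2006, "citations": 2},
]
for p in relative_citations(papers):
    print(p["field"], round(p["relative_citations"], 2))
```

On these made-up numbers, 10 citations puts a mathematics paper well above its field-and-year average but a molecular biology paper well below it, which is the kind of distinction a raw count (or the IF) hides.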
Hmmm, I did not attempt to wade through the math, but Bjoern makes me rethink....
@Bjoern - more than correlate - citations are what the impact factor is calculated from. People have tried to normalize or standardize IFs and citations across disciplines - but it's a really tricky proposition with limited utility (why do we need a single number to compare journals from different time periods and disciplines?). There are plenty of journals that don't sort cleanly into a single field. There's the idea, too, that this sort of thing will mask the limitations of the IF and make it even more tempting to abuse. IMHO.