Correlation is not causation: which came first -- high Impact Factor or high price?

Bill decided to take a look:

Fooling around with numbers:

Interesting, no? If the primary measure of a journal's value is its impact -- pretty layouts and a good Employment section and so on being presumably secondary -- and if the Impact Factor is a measure of impact, and if publishers are making a good-faith effort to offer value for money, then why is there no apparent relationship between IF and journal prices? After all, publishers tout the Impact Factors of their offerings whenever they're asked to justify their prices or the latest round of increases in same.
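If you want to poke at this yourself, the basic check is just a correlation between IF and price across the journal list. A minimal sketch in Python, assuming a hypothetical table with one row per journal (the file and column names here are made up, not the actual dataset):

```python
import pandas as pd
from scipy import stats

# Hypothetical input: one row per journal, with list price and Impact Factor.
journals = pd.read_csv("elsevier_life_sciences.csv")  # columns: title, price_usd, impact_factor

# Keep only the journals that actually have an IF.
with_if = journals.dropna(subset=["impact_factor"])

# Pearson correlation between Impact Factor and subscription price.
r, p = stats.pearsonr(with_if["impact_factor"], with_if["price_usd"])
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p:.3g} (n = {len(with_if)})")
```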

There's even some evidence from the same dataset that Impact Factors do influence journal pricing, at least in a "we can charge more if we have one" kinda way. Comparing the prices of journals with and without IFs indicates that, within this Elsevier/Life Sciences set, journals with IFs are higher priced and less variable in price:

Fooling around with numbers, part 2:

The relationship here is still weak, but noticeably stronger than for the other two comparisons -- particularly once we eliminate the Nature outlier (see inset). I've seen papers describing 0.4 as a "strong correlation", but I think for most purposes that's wishful thinking on the part of the authors. I do wish I knew enough about statistics to be able to say definitively whether this correlation is significantly greater than those in the first two figures. (Yes yes, I could look it up. The word you want is "lazy", OK?) Even if the difference is significant, and even if we are lenient and describe the correlation between IF and online use as "moderate", I would argue that it's a rich-get-richer effect in action rather than any evidence of quality or value. Higher-IF journals have better name recognition, and researchers tend to pull papers out of their "to-read" pile more often if they know the journal, so when it comes time to write up results, those are the papers that get cited. Just for fun, here's the same graph with some of the most-used journals identified by name:
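For what it's worth, the usual way to ask whether one correlation is significantly larger than another, independent one is Fisher's r-to-z transformation. Here's a rough sketch; the correlations and sample sizes below are made-up illustrations standing in for the real values, not numbers from the figures:

```python
import math
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Two-sided test of whether two independent Pearson correlations differ,
    using Fisher's r-to-z transformation."""
    z1 = math.atanh(r1)  # Fisher transform of each correlation
    z2 = math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p

# Made-up example: r = 0.4 (IF vs. use) against r = 0.1 (IF vs. price), ~250 journals each.
z, p = compare_correlations(0.4, 250, 0.1, 250)
print(f"z = {z:.2f}, p = {p:.3f}")
```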

An important addition -- part 3:

The curve fits are for the whole of each dataset, even though it's a zoomed view; the Nature set excludes the British Journal of Pharmacology (the only NPG title that recorded zero uses) and Nature itself. Colour coding by publisher is the same for each figure in this post. As in part 2, the correlation between price and use is weak at best and doesn't change much from publisher to publisher. Also, each publisher subset shows a stronger correlation than the entire pooled set -- score another one for Bob O'Hara's suggestion that finer-grained analyses of this kind of data are likely to produce more robust results. Since cutoffs improved the apparent correlation for the pooled set, I tried that with the publisher subsets:
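If you want to replicate that kind of per-publisher breakdown with a cutoff applied, something along these lines would do it. The file name, column names and the cutoff value are all hypothetical placeholders, not the actual dataset:

```python
import pandas as pd
from scipy import stats

# Hypothetical input: one row per journal, with publisher, price and annual online uses.
journals = pd.read_csv("journal_usage.csv")  # columns: title, publisher, price_usd, uses_2008

# Drop zero-use titles, then apply an illustrative price cutoff as in the pooled analysis.
subset = journals[journals["uses_2008"] > 0]
subset = subset[subset["price_usd"] < 5000]  # arbitrary example cutoff

# Price-versus-use correlation within each publisher's titles.
for publisher, grp in subset.groupby("publisher"):
    if len(grp) < 4:
        continue  # too few titles to say anything useful
    r, p = stats.pearsonr(grp["price_usd"], grp["uses_2008"])
    print(f"{publisher:20s} n={len(grp):3d}  r={r:.2f}  p={p:.3g}")
```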