The reverse-AOL maneuver and possible futures for serials

Back in the day, Time Warner merged with AOL. It turned out to be one of the worst merger ideas in the history of merger ideas, and the evidence suggests that most mergers turn out to be clunkers!

AOL was simply at the top of its arc, with nowhere to go but down.

I wonder, I do, whether Time Warner learned from that experience, and that's why they started shopping exclusive deals to aggregators. (For the record, exclusive deals aren't new in this market.) Grab all the money you can with the exclusivity flag—before the market value of your product declines with a vengeance. Sort of a reverse-AOL maneuver!

I won't recap the death-of-journalism battles here; the basic question "is news being Clayton-Christensen–disrupted by search engines and blogs?" is well-known. I'll just ask you to look over the stable of periodicals at stake in the EBSCO exclusivity deal and form your own conclusions about their future. If I'm right (and I don't know that I am, of course), EBSCO and Gale will wind up with all sorts of egg on their faces.

So, remembering that the news-magazine and scholarly-publishing markets are rather different, can we take any additional lessons about potential publishing-market disruption from this? Perhaps.

The Scholarly Kitchen recently posted an entry arguing (my paraphrase; apologies for any inadequacies therein) that scholarly-journal publishing has fended off an attempted Clayton-Christensen–style market disruption by the open-access movement through fruitful and timely adaptation.

My knee-jerk response at the time that entry was posted was "So far." Market disruption may seem quick in hindsight, but it can take time; OA has had less than a decade (dating from the Budapest Open Access Initiative in early 2002) to make a dent in a rather conservative, traditionalist market whose hegemony took three centuries or thereabouts to form. I genuinely believe it's much too early to call this one.

In fact, I'd lay cautious odds on the following scenario:

  1. Now that libraries have well and truly run out of Big Deal money, the big publisher-aggregators will have to look around for ways to keep their profit margins fat.
  2. One obvious way is to eliminate small, low-profit journals and their associated production and overhead costs. Some of these journals will just plain fold (not necessarily a bad thing!). Others will find a way to move open-access, setting up potential market disruption (arguably, this is already a "move upmarket" for open access compared to the typical open-access journal).
  3. More to the point, libraries will protest that they aren't getting as much product as they were for their Big Deal dollar and insist on lowered prices.
  4. At this point the cycle repeats, destructively from the point of view of a big publisher-aggregator or a low-profit journal. If this cycle really gets going, the resulting bloodbath in journals could be tremendous.

Science Online, the EBSCO furor, and the reaction to my previous posts made me rethink things a bit. My question now is "Which market are we talking about here?"

The scholarly publishing enterprise comprises two markets. (Maybe more than two, but let's stick with two for simplicity's sake.) One is where all the money is sloshing around: libraries, publishers, aggregators, societies, Big Deals, et cetera. The EBSCO-Time Warner market. The other is a prestige market, with different players and different rules.

Why do researchers publish articles? "To communicate their results" is the facile but wrong answer. I know any number of researchers who would be happy to be left alone in their labs to do experiments without having to write those pesky papers. They can't get away with that, though, and that's because they have to prove their worth to their institution and (if they rely on grants) their funders if they wish to remain employed, much less be promoted. In other words, they publish for prestige.

Now, prestige used to be almost entirely subjective. If you were on the tenure track, you went to your mentor or observed your departmental colleagues' publication patterns to sort out where to publish. If you were evaluating a colleague for tenure, you looked down their list and compared it against your mental rankings.

Aside from costing a lot of time and effort to accomplish, this process is messy, as subjectivity tends to be, and messiness in a career-life-or-death tenure situation breeds lawsuits and other such unpleasantness, and who needs that? So along came Thomson/ISI with the "journal impact factor," based on how often a given journal is cited, and researchers all over the world breathed heavy sighs of relief. Here was a number they could use to gauge the importance of a journal, and by extension, the researchers who publish in it.
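(For the record, the two-year impact factor is just an average, and a crude one. Here's a back-of-the-envelope sketch, with numbers invented purely for illustration, of roughly how it's computed:)

```python
# Rough sketch of the two-year journal impact factor: citations received
# in year Y by items the journal published in years Y-1 and Y-2, divided
# by the count of "citable items" from those two years. All numbers are
# invented for illustration.
citations_in_2009_to_2007_08_items = 210
citable_items_2007 = 40
citable_items_2008 = 45

impact_factor_2009 = citations_in_2009_to_2007_08_items / (
    citable_items_2007 + citable_items_2008
)
print(round(impact_factor_2009, 2))  # 2.47 -- one number standing in for a whole journal
```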

(The parallels with the role of the FICO score in the US credit bubble are left as an exercise for the reader.)

I cannot overstate the bogosity of the journal impact factor. It is ridiculous, especially as a yardstick for an individual researcher. It should be banned. (Seriously, accreditors, why haven't you told departments that being dumb enough to use it in tenure and promotion decisions counts against them?) But just at present, it has cornered the prestige market, which means that the journals it favors have cornered the prestige market too. (One important reason the impact factor is bogus: the way it's calculated is heavily tilted in favor of certain classes of journals, and even certain kinds of articles.)

How does this affect the money market? Simple. It's extremely hard for a library to cancel a high-impact-factor journal, or to walk away from a Big Deal when an EBSCO/Time-style exclusive-online-access arrangement covers a high-impact-factor journal within it. The price can go through the roof; libraries' hands are—if not tied, certainly encumbered.

So, the Scholarly Kitchen maintains that the money market in scholarly publishing hasn't been disrupted. What about the prestige market?

Well, one argument in favor of open access is that open-access articles are cited more often than those available only through payment. There are plenty of ongoing disputes about whether and why this may be true, but whatever its truth value (I tend to believe it, based partly on my own experiences), there's no denying that this is an argument aimed directly at the prestige market rather than the money market.

Has it been disruptive? … Not so much, really. A few savvy scholars use green open access plus publishing in high-impact-factor journals to raise their personal citation numbers, but only a few. I argued in an article of mine (does this link make me a savvy scholar? perhaps!) that the open-access advantage is counterintuitive to researchers, who want (rather naively) to believe that prestige measures correlate highly with quality rather than with icky questions like money or access.

So much for increased citation impact as a disruptive force. Maybe it should work, but it hasn't.

Smart publishers (both toll- and open-access) and open-access repositories report per-article download numbers, because that's another number, and numbers have a hypnotic effect on the psyche. I have heard stories of download counts showing up in tenure portfolios, and I have also heard opposition from health researchers to the NIH Public Access Policy on the grounds that openly available articles reduce download counts on publisher sites.

Again, we're firmly in the prestige market here… but notice one difference. Impact factor is a journal-level measure. Downloads are reported by article (or sometimes by author, via elementary addition).

Aha. Could that small change, from journals to articles as the unit of measure, be disruptive to the prestige market? It certainly could. What researcher wouldn't be more interested in his or her own results than in a journal-level proxy? Moreover, some bibliometric investigations have suggested that journal impact factor mostly doesn't derive from the general quality of the published content as a whole, but from a few superstar must-cite articles. Once article-level statistics make that clear, it's a significant blow to the journal impact factor and potentially even to journals as brands.
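Here's a toy illustration of that superstar effect, with invented citation counts: the journal-level mean is exactly the kind of number an impact factor rewards, while the median shows what a typical article in the same journal actually earns.

```python
# Invented per-article citation counts for one journal: two superstar
# papers plus a long tail of barely-cited ones.
citations = [310, 120] + [2, 1, 0, 3, 1, 0, 2, 1] * 10

journal_average = sum(citations) / len(citations)        # the JIF-style number
median_article = sorted(citations)[len(citations) // 2]  # the typical article

print(round(journal_average, 1))  # 6.5 -- the flattering journal-level average
print(median_article)             # 1 -- what most articles actually get
```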

How does that brand-level blow land? Well, in the current environment many researchers, willingly or forced, chase high-impact journals and journal brands at all costs, ignoring other competitive factors like reach, quality of service, speed of publication, excellence of text artisanry, and so forth. Once the impact factor's back is broken by article-level statistics (should that happy day ever arrive), those other factors return to the playing field. If your vaunted "brand" didn't get my last article read, says Dr. Helen Troia, up-and-coming basketologist, why should I bother submitting my next article to you and waiting a year and a half for it? The New Journal of Basketology has a six-month turnaround!

Has anyone else cottoned on to the potential disruptive force of the article-level measurement? Why, yes, as a matter of fact PLoS has! (Genius move, PLoS, by the way. Kudos.) I can't imagine that BMC and Hindawi and others of the gold-open-access ilk won't follow suit; it's too obvious a competitive advantage. Likewise, nothing's stopping toll-access journals from hopping on the bandwagon—adapting, in Scholarly Kitchen's parlance—save perhaps the fear that the numbers would be ugly by comparison, which for some journals wouldn't surprise me at all.

Publishers of high-impact-factor toll-access journals then find themselves in a bind. If they don't provide article-level metrics, they've fallen behind the state of the art and will hemorrhage authors to journals that do. If they do, they dissuade more Helen Troias, and given the current problems of access, they may measure up fairly poorly against open-access competitors, at which point they hemorrhage even more authors, not to mention their prior prestige.

And at some point, I should think, the prestige market feeds into the money market. Subscriptions decline; rent-seekers' locking knowledge away from readers looks more and more like a losing proposition… utopia? Well, maybe not, but certainly a much more level playing field for gold open access.

For my own part… I published an article last year in Cataloging and Classification Quarterly. You can find its postprint available open access (see, there I go again!), but that's not because Taylor and Francis usually allows this; it was a special deal, struck when another of the authors in the issue pointed out that it was downright weird for a themed issue on open access not to, er, allow open access.

The journal has another CFP out. I have an idea I really like for an article written to that theme, right down to a gimmicky title stemming from devouring a lot of Fables graphic novels all at once.

But rather than go through negotiations with Taylor and Francis again (the last time, I had to remind them that they'd decided to allow the SPARC Author's Addendum for that issue) or accept preprint-only OA, I think I'll write the article and send it to D-Lib instead.


I don't know too much about the practice and politics of the impact factor. But both ISI and Scopus can tell you how many times (**) an individual article has been cited. It's, I'd guess, just these same individual-article citation counts that get averaged to calculate the journal-title-level impact factor.

So I don't understand why those captivated with impact factors as a shortcut for gauging prestige/influence aren't already using individual-article citation metrics instead of simply title-level citation metrics. What am I missing?

Note that actual citation metrics, whether individual-article OR title-level, are only calculable by someone who has a giant corpus of articles and the resources to apply clever algorithms (or human labor) to linking them all: ISI, Scopus, etc. So this is still a potential hurdle for open access journals; it's going to be difficult for them to do this on their own. So what open access journals really ought to be doing, it seems, is trying to get Scopus or ISI to _index_ and _citation count_ them too. I don't see why they wouldn't want to do this, to increase their own value (unless their parent companies' conflicting interests as publishers of non-open-access journals are involved).

(**) Note well that the individual-article "times cited" counts are of course subject to some of the same methodological validity problems as the journal-level "impact factor"; naturally, because they are calculated similarly. You can only measure what you know about, and neither ISI nor Scopus knows about the whole universe. If they can be persuaded to 'know about' more of the open access universe, then not only will they be able to offer citation counts and impact factors for open access literature, but citations _in_ open access literature will affect _others'_ citation counts and impact factors, which seems beneficial too.

Re-reading, I realize I used a confusingly ambiguous pronoun there:

"i don't see why they wouldn't want to do this"

The 'they' there was Scopus/ISI. I don't see why Scopus or ISI would not want to include open access journals in their A&I indexing and citation-counting metrics, especially as open access journals become a larger share of the pie. Unless Scopus's and ISI's parent companies' roles as publishers of non-open-access journals give them an interest that percolates down into avoiding giving open access journals any limelight. I doubt it -- even EBSCO indexes open access journals (they index the Code4Lib Journal now).

My guess is one-number convenience. Though article-level metrics do exist, adding them up for a given author is something of a chore, especially since nobody's worked out anything even remotely resembling a fair way to do it.
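To make the chore concrete, here's a toy sketch (all numbers invented, both rollup rules deliberately naive): even a simple question like how to split credit among co-authors gives the same author two quite different scores.

```python
# Per-article (citations, co-author count) pairs for one hypothetical
# author. Two naive rollups of the same data: "whole counting" gives
# each author full credit, "fractional counting" splits credit evenly.
articles = [(40, 1), (12, 3), (7, 5)]

whole_counting = sum(cites for cites, _ in articles)           # 40 + 12 + 7 = 59
fractional_counting = sum(cites / n for cites, n in articles)  # 40 + 4 + 1.4 = 45.4

print(whole_counting, fractional_counting)  # same articles, different "scores"
```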

If you look at PLoS's metrics pages, though, they're making a start, at the very least by aggregating a lot of information on every article -- not just downloads, but blog and media citations and whatever else they can algorithmically pull together.

ORCID might have something to say on the subject as well, once they get past their initial standards-building effort.

For that matter, Google Scholar gives individual article citation counts as well (though the numbers have to be considered approximate).

I'm not sure impact factors and other journal-level metrics will go away easily, though. I think they're treated to some extent as a proxy for selectivity: that is, high-IF journals are seen as more competitive for submissions, and therefore a paper that manages to run the gauntlet and get published in them must be something that Respected Colleagues think is particularly good. Because of this, a researcher gets a prestige boost with their close peers as soon as the paper is accepted, and with the community at large as soon as the paper is published.

In contrast, the prestige boost from article citations is much slower and more drawn out, because you have to wait for the citing articles to be written and published. Eventually you might get the same boost of prestige, but if you're looking at a ticking-down tenure clock or a short-range hiring or promotion decision, you may well prefer something quicker.

Download counts do come quicker than citation metrics, but I'm not sure that academic communities are all that comfortable using them as a primary basis of evaluation. We who have been on the web for a while, and have followed site stats, know that there are all kinds of ways to get big hit counts, many of which have little to do with quality.

Quicker, generally recognized forms of peer feedback could help weaken the lock-in effect of high-prestige, high-cost journals. Blogs are one type of quick feedback; it only took a few days after publication for a recent post here to be BoingBoinged, and as a result this blog may well enjoy a bigger readership and higher visibility for some time. If more scholars blog, or provide other forms of quick online feedback and promotion, the links and recommendations from those forms might eventually carry value comparable to that of being accepted by the Journal of Prestigious Rent Seeking.

Scopus indexes 1200 open access journals at last count. The process for getting indexed is exactly the same for open access journals as for any others. Whether a journal is open access or not isn't taken into consideration.

Notice the Scopus cited-by count on display in the PLoS article metrics.

Scopus also just released some alternative journal-level metrics this week. These are available for the open access journals it indexes.

Those metrics are available in Scopus or at http://journalmetrics.com/

Best,
Michael Habib
Scopus Product Manager

So if article-level metrics become important, and you won't find me disagreeing, what role do citation managers like Mendeley have to play in this? They're positioned such that they can see everything their users can see, not limited, as an aggregator might be, to only the items in its own catalog.

Not quite sure I get this. The JIF is also entirely based on an article-level metric. What exactly will stop journals from simply accumulating these newer article-level metrics into newer instant journal-level metrics, the way per-paper citation counts became the JIF?

Granted, these newer article-level metrics probably work better for OA articles, since they're easier to access. But in that case, if anything, the market that will likely be disrupted is the money market, not the prestige market; and that's assuming full/global OA even happens anytime soon. Journal-level metrics will likely still dominate, and article-level metrics will still be used prominently to make the journal-level metrics happen.