Browsing the vast pile of unread clutter I came across a copy of the Economist from October, featuring How science goes wrong and Trouble at the lab. Somewhere - but I don't know where - I discussed these, but since I can't find it I'll repeat myself.
The first point is that whilst HSGW notices the pernicious effects of publish-or-perish in its analysis, it doesn't mention it in its how-to-fix-it. And yet, in my humble and now totally disinterested opinion, it's the core of the problem. People are judged by their number of papers, and by the citations of those papers. The more senior you get the more important quality is, and (I'm talking about natural sciences here) at any level a publication in Nature or Science is a prize worth fighting for, but certainly in the lower ranks raw weight of papers is valuable, and anyway you might get lucky and get cited a lot. So you do your level best to publish a lot. Besides which, whilst in the long term you might need good papers, in the short term of a year or two your performance target is likely to be paper-based.
There's a reason for this, of course: weight of papers, whilst acknowledged by all to be crude to the point of uselessness, is at least an objective measure; and since nowadays no-one in charge of grants trusts anyone down below, they need - or feel a need - to insist on each grant delivering so many papers. And so on.
The system is pernicious, and very hard to change, since so many interlocking things now depend on it. You could, I would say, usefully delete at least 75% of all published papers and lose very little (not mine, obviously). That's being conservative, but also a bit unfair: many of the ones I'd throw out would be largely repeating previous ones, with just a nugget of novelty; waiting longer and collating would improve quality. But if you did that a pile of journals would collapse due to lack of grist for their mill, and the academic ranking system would need to be rejigged. The best way to move towards such a system would be to make the criteria for evaluating folk better; as in, not weight-of-papers-based.
The second point is that the complaints in TATL are nearly entirely statistics-based, which points to a failure at the Economist to get out enough. There are, indeed, lots of biomed papers (waves hands vaguely) that depend on stats to detect tiny-but-nominally-significant effects; but there are many climatology papers that don't really need stats at all to see the effects. So the stuff they are pushing - the traditional "if you do lots of experiments and only publish the positives, then you end up with lots of false positives" - only applies to a subset of science. I've no idea how big that subset is, but I'm pretty sure it's much smaller than the Economist thinks.
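For what it's worth, the arithmetic behind that traditional worry is easy to sketch. Here is a minimal, purely illustrative calculation (the prior, power and significance threshold are made-up numbers, not anything taken from the Economist piece): if only "positive" results get published, the fraction of published findings that are false positives can be surprisingly large.

```python
# Minimal sketch of the publish-only-the-positives arithmetic.
# All numbers below (prior, power, alpha) are hypothetical, chosen for illustration.

def false_discovery_rate(prior, power, alpha):
    """Fraction of 'significant' results that are false positives,
    assuming only positive results get written up."""
    true_positives = prior * power           # real effects correctly detected
    false_positives = (1 - prior) * alpha    # null effects passing p < alpha by chance
    return false_positives / (true_positives + false_positives)

# Say 10% of tested hypotheses are really true, studies have 80% power,
# and the usual p < 0.05 threshold is used:
print(false_discovery_rate(prior=0.1, power=0.8, alpha=0.05))  # ~0.36
```

On those invented numbers roughly a third of the published positives would be wrong - which is the sort of figure the statistics-heavy fields worry about, and which the no-stats-needed results largely escape.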
[Update: Bronte Capital discusses this point in another context.]
[Disclaimer: I've been out of science for 5+ years now: things change. And I only knew a subset of one field and a couple of institutions. And indeed the Met. Office, which I knew a bit, was a pretty good example of somewhere that didn't force you to publish and did understand other values. OTOH it was also a cunning trap, since without a steaming pile of papers it was hard to escape.]
The German Science Foundation has a rule that helps a little. For a research proposal you can only mention 3 of your own articles per year requested. And in your bio you can only mention your 5 best articles.
That provides some incentive to produce quality over quantity. Although the reviewer may well look at your homepage or the Web of Science for the quantity.
Would that Victor were correct. Everybunny has access to WoS, Scopus, or even Google Scholar, all of which do citation analysis on demand. However, even members of the Evil Weasel MC (TM ER) miss. It is not just your h-number, but also what the biomed types call CNS (Cell, Nature, Science). Only those count. In physics, substitute PRL.
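For anyone unfamiliar with the h-number: it is the largest h such that h of your papers each have at least h citations. A quick sketch of the computation, with invented citation counts:

```python
# Sketch of the h-number (h-index) mentioned above: the largest h such that
# h papers each have at least h citations. Citation counts here are invented.

def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([42, 17, 9, 6, 3, 1, 0]))  # -> 4: four papers with at least 4 citations each
```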
Every human enterprise has a temptation to cheat or exaggerate. The whole point of science is to filter out the false, the mislabeled, and the exaggerated. The article is bad because it assumes that once a paper is published it is accepted, which is not true. I have seen too many poor papers appear, be publicized, and then disappear below the waves.
The Economist is only viewing the problem after half the filter has been completed. See how many of the bad hypotheses are still around after a decade.
[I agree that the ultimate filter is just to ignore bad papers -W]
"I agree that the ultimate filter is just to ignore bad papers -W"
Easier said than done. I've seen (and occasionally participated in) a few actual scientific controversies, and those fights have a tendency to persist well past their sell-by date. Some years ago a scientist admitted to me that his then most-cited paper achieved that distinction because a rival would include a sentence in his papers along the lines of, "Fulano et al. [19xx] did this wrong." Often, as a matter of politics, you have to mention the other side of the controversy, because you reasonably fear that somebody from that side might be a referee. The phenomenon is particularly bad with GlamourMag publications, because those are often selected to have a big splash. There are other cases where a hypothesis looks reasonable based on the available data but is disproven many years later ("The Seven Percent Solution" from Surely You're Joking, Mr. Feynman! gives an example from physics). Not to mention cases where a publication turns out to be fraudulent, but not before it draws a bunch of citations.
On the flip side, some perfectly good papers end up with low citation rates. Fashions have been known to change, even in science. Sometimes a paper is simply ahead of its time, e.g., Haar invented his eponymous wavelet basis about 70 years before anybody knew what a wavelet was. Or the topic may be sufficiently esoteric that only a few people are working on that problem.
[Cited-for-being-wrong isn't great; but at the level we're talking about here, even cited for being wrong is better than the great morass that don't even attract that much notice: "not even wrong" syndrome. Somewhere I saw the statistic that about 50% of papers are never cited even once (ah: http://garfield.library.upenn.edu/ci/chapter10.pdf says 25%; http://www.sicot.org/?id_page=695 says 55% "in the first 5 years") -W]
Also, measuring a paper's worth by its number of citations essentially limits the stakeholder community to other publishing researchers. When a paper is used by people who are doing anything other than publishing papers in conventional journals - say, when an ecology paper is read by a land manager who uses its conclusions to help make practical decisions - no citation that counts for anything is generated. Yet one could well argue that that sort of use is more important than providing a line of citation in another manuscript, both because it has substantive real-world impact and because, in an age of limited funding, science that is perceived as useless to the broader community is not likely to be funded.
Cheer up - provincial science convinced it's wildly right is scarier still:
http://vvattsupwiththat.blogspot.com/2013/11/many-of-wilder-tales-of-im…
Enjoyed that immensely, Russell. Maybe it's just me, but that veritable fusillade of pseudo-science made more sense than a lot of what Willis promulgates on WUWT. Though, as with the majority of... stuff that appears there (and is mercifully whisked away rather quickly by the WUWT fire hose, unless maître d' Tony makes it 'sticky'), it's difficult to describe exactly how.