Browsing the vast pile of unread clutter I came across a copy of the Economist from October, featuring “How science goes wrong” and “Trouble at the lab”. Somewhere – but I don’t know where – I discussed these, but since I can’t find it I’ll repeat myself.
The first point is that whilst HSGW notices the pernicious effects of publish-or-perish in its analysis, it doesn’t mention it in its how-to-fix-it. And yet, in my humble and now totally disinterested opinion, it’s the core of the problem. People are judged by their number of papers, and by the citations of those papers. The more senior you get, the more important quality is, and (I’m talking about natural sciences here) at any level a publication in Nature or Science is a prize worth fighting for; but certainly in the lower ranks raw weight of papers is valuable, and anyway you might get lucky and get cited a lot. So you do your level best to publish a lot. Besides which, whilst in the long term you might need good papers, in the short term of a year or two your performance target is likely to be paper-based.
There’s a reason for this, of course: weight of papers, whilst acknowledged by all to be crude to the point of uselessness, is at least an objective measure; and since nowadays no-one in charge of grants trusts anyone down below, they need – or feel a need – to insist on each grant delivering so many papers. And so on.
The system is pernicious, and very hard to change, since so many interlocking things now depend on it. You could, I would say, usefully delete at least 75% of all published papers and lose very little (not mine, obviously). That 75% is conservative, but also a bit unfair: many of the ones I’d throw out would be largely repeats of previous ones, with just a nugget of novelty; waiting longer and collating would improve quality. But if you did that, a pile of journals would collapse due to lack of grist for their mill, and the academic ranking system would need to be rejigged. The best way to move towards such a system would be to make the criteria for evaluating folk better – as in, not weight-of-papers-based.
The second point is that the complaints in TATL are nearly entirely statistics-based, which points to a failure at the Economist to get out enough. There are, indeed, lots of biomed papers (waves hands vaguely) that depend on stats to detect tiny-but-nominally-significant effects; but there are many climatology papers that don’t really need stats at all to see the effects. So the stuff they are pushing – the traditional “if you do lots of experiments and only publish the positives, then you end up with lots of false positives” line – only applies to a subset of science. I’ve no idea how big that subset is, but I’m pretty sure it’s much smaller than the Economist thinks.
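To make that false-positives mechanism concrete, here’s a minimal sketch (Python; purely illustrative, and the sample sizes and threshold are made-up numbers, not anything from the Economist pieces) of what happens when lots of labs test an effect that doesn’t exist and only the “significant” results get published:

```python
# Illustrative only: simulate many experiments where the true effect is zero,
# then "publish" only the ones that clear p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_experiments = 1000   # labs each running one experiment on a null effect
n_per_group = 30       # samples per arm

published = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(0.0, 1.0, n_per_group)  # identical: no real effect
    _, p = ttest_ind(control, treatment)
    if p < 0.05:
        published += 1  # only these reach the journals

print(f"{published} of {n_experiments} null experiments came out 'significant'")
# Roughly 5% get "published", and every one of them is a false positive.
```

The point being that this failure mode needs lots of small, noisy, near-threshold experiments to bite; a field where the effect is visible without any significance testing largely escapes it.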
[Update: Bronte Capital discusses this point in another context.]
[Disclaimer: I’ve been out of science for 5+ years now: things change. And I only knew a subset of one field and a couple of institutions. And indeed the Met. Office, which I knew a bit, was a pretty good example of somewhere that didn’t force you to publish and did understand other values. OTOH it was also a cunning trap, since without a steaming pile of papers it was hard to escape.]