Peer review

Seldom in the field of human conflict has so much been written by so many people on a subject about which they know nothing. Or so I'd like to hope: in the sense that I'd hope that the denialist chatter about peer review was the nadir. But I do know something about peer review, though my knowledge is 7 years out of date. Nonetheless, I don't hesitate to comment. If you're wondering (or I'm wondering, coming back to this later) all this kicks off from the ship of fools nonsense, which has elevated peer review to super-star status for its 15 minutes in the blog-o-light.

For a working scientist, peer review is just part of the job. You write up your work, you show it to your colleagues (if you work somewhere like BAS, it's likely mandatory that it gets passed around a bit just to make sure you're not saying anything really dumb; your division head corrects a couple of typos. If you're new, this is a really helpful part of the process; if you're not new then likely the internal review becomes a formality), you send it to the best journal you think you can get away with, and eventually you get the reviews back. These will be a mixture of "please cite my paper" (usually disguised as "you need to consider X"), typos, and the occasional well-considered thoughtful comment that genuinely improves things. You sigh, you happily incorporate the thoughtful stuff, you work out how much of the not-very-helpful stuff you can get away with blowing off, and you resubmit (naturally, I don't know what happens when someone senior in the field submits, since I never was). And sometimes you get a reviewer who really, really doesn't like your paper for what you regard as invalid reasons, and you have to decide whether to fight to the death or go elsewhere.

For a "skeptic" - many of whom are on display at JoNova - peer review is a process about which they know nothing, except that it produces answers they don't like (note: for those who read my previous censorship post and didn't see the update, I'll say that I was wrong about her site: I'm being allowed to comment freely). What's probably most striking about that post is the level of ignorance on display: about peer review itself, and how it works, but also about prior art. You'd think that problems with PR had only just been discovered. I did try to point that out but as you'd expect, it fell on stony ground.

Peer review is nothing more than argument from authority and should be considered entirely irrelevant when evaluating the science

Comment #1 at JoNova, by "Truthseeker". Of course, you know the old proverb: if someone calls themselves Truthseeker, then... But anyway: TS's argument is a very common one: what we really care about is the quality of the science, so what do we need a bunch of anonymous gatekeepers for?

Weeeellll... there are several answers to this. Let's start with the most obvious: there's an avalanche of papers out there, and not enough eyeballs to read them all. Journals like Nature publish less than 10% of what's submitted (although note they are sifting for (ideally) both high quality and "excitement"; arguably, they veer off towards the latter when pushed and may sometimes neglect the former). I found Rejection rates for journals publishing in the atmospheric sciences, from which I've taken the figure, and this quote: "Seventy-nine percent of journals have rejection rates of 25%–60%".

OK, so hopefully you accept that we need some kind of gatekeepers to staunch the flow, but how then do we account for the common notion that peer review improves or proves quality or scientific merit? I have two answers:

* in practice, we find it does. Science is what works, bitches. Compare it to other ways of doing things.
* it doesn't prove merit. Many many papers languish unread and uncited in reviewed journals. The ultimate test of the worth of your work is whether people choose to read and then build on what you've done. All peer review does is help you (the reader) by removing some drivel and pointing you towards some hopefully interesting stuff; and you (the writer) by providing a higher chance of people reading your stuff. There's a reason people fight like rats in a sack to get their work into Nature, after all.

For a completely opposite approach, we already have full open-access no-peer-review publication: blogs. Anyone can write what they like and reach the entire world (I'm ignoring arXiv, about which I know nothing). Which suffer from the obvious problems.

“Peer-review” is an ENCLOSED system that no one can challenge

Comment #4.2 from Joe Lalonde. If you're a "skeptic" seeing all your favourites shot down and reduced to producing their own journals, this is likely to seem true. As a normal scientist faced with some silly reviewer who refuses to see it your way and who is mysteriously backed up by the journal editor, it sometimes seems the same. But actually it isn't.

Example from my own humble oeuvre: On the Consistent Scaling of Terms in the Sea-Ice Dynamics Equation by me and a cast of luminaries. That was initially rejected by not one but two referees. Ref 1 said it was true, but so obviously true that it wasn't worth publishing. Ref 2 said it was obviously false. We managed to persuade the editor that ref 2 was wrong, but that because of ref 2, ref 1 must also be wrong (you might have hoped that the editor would have noticed this contradiction by himself, but editors are busy people).

And of course, there are already open-review journals. They aren't in a majority, but they exist.

The other point is that most reviewers have experience of the bastard review system themselves, and can be quite sympathetic.

Anyway: as an outsider, who thinks of themselves as an outsider, and talks to no-one but outsiders, it's very easy to get the wrong idea.

The effectiveness, and the desirability, of peer review is negated where a ostensibly scientific subject is politicized

Comment 5.3 by Eric Simpson.

Continuing: If one side ends up controlling peer review, and if that side is pushing for a “cause” that has nothing to do with the science, peer review is worse than worthless. Again, this is what it looks like from the denial-o-sphere: they are all so distant from the real science, that all the scientists look to them to be clustered together. But in reality there is vibrant discourse - well, sometimes. The comment is ill-posed, because the stuff about "sides" doesn't really work. And there are so so few decent test-cases. I can't think of a single paper that the "skeptics" can put forward that should have been published, that wasn't. I suppose they'd retreat to "but the system is so biased against us we don't even try" sort of paranoia. But that's just paranoia.

Let the free market review the papers

(this is JoNova's idea). Hmm, well, maybe. It's easy to forget sometimes that PR has evolved into its current form, and perhaps things have changed. The rise of the internet makes swift and open feedback entirely practical. But what is JN's programme?

one named editor solely makes the decision to publish, and they can ask advice from reviewers, whomever they should choose. The reputation of that one editor should depend on the value of the papers they pass... They need to be paid, and the best ones, more. Editors are, currently, usually unpaid. They do the work either out of love, or because it reflects credit on their career. Paying them - presumably, significant sums - would change the game (one obvious problem: institutes generally OK people taking time off to edit, because it's for the general good, and because they aren't being paid. If they *were* being paid, that might change). I'd be concerned that editors would then have a (strong, financial) incentive to stuff papers into the journal. Which is the last thing you want. I don't find the rest of her suggestion terribly well thought-out either.

Disappointingly, people on that blog haven't taken up her idea and subjected it to constructive criticism, which is a semi-ironic implicit comment on the entire "skeptic" worldview.

What would you do then?

Pah, you mean in an ideal world or in practice? As I and many, many other people have said, addressing the flow at its source would be best: which would mean stopping judging people on sheer paper count. But that's wrapped up in so many things, including the centralisation of decision-making, that it's hard to see as realistic.

In practice I think a transition to an open review system would be very helpful, and solve some of the existing problems. That could also include journals listing papers they rejected before review, if you could work out some way around the copyright or priority problems.

[Update: thanks for your comments. I find it fairly amusing that the majority of commenters here are able to say My experience of peer review has been... as opposed to the denial-o-sphere's fairy tales about what they imagine peer review is like. Of course, since they aren't in any way restricted by reality, the d-o-s puts in far more comments.]

[Update: see-also VV's Peer review helps fringe ideas gain credibility.]


* Their own private reality
* Bad Science
* Unless you plan to do something really bad, why do you insist on being anonymous?
* Some links from Eli
* Fix the incentive structure and the preprints will follow - David L. Stern
* Peer review: Troubled from the start Alex Csiszar, Nature, 19 April 2016


Yup, you've pretty much exactly described peer review as I've experienced it. There really is a remarkable lack of understanding of peer review (of science/academia in general I would say) within the "skeptic" community. From some discussions I've had, if they had their way they'd spend all the research money on auditors/overseers and none on the actual research.

[That's a bit of a relief. I had a terrible feeling that I'd be out of date, or my own experiences would have been atypical -W]

By And Then There… (not verified) on 20 Jan 2014 #permalink

I published through peer review and arXiv. You are correct; many unfamiliar with the process get it wrong. However, I suggest one other dimension to the process. The dimension is novelty or, rather, disagreement with prevailing standard models. This is the social dimension.
H. Arp book “Seeing Red” explores the reaction of the science community to a nonstandard interpretation of the data.
One criterion you stated is the internal review that presumes working in a group. Members of groups get published more than individual researchers. The group dynamic helps advance some things, but retards out-of-the-box ideas.
Another part of peer review is the multiple submissions required. This is a real pain and takes away from research time.
If you are paid for your research, it's publish or perish. The formula is: become highly published, then strike out with out-of-the-box material. For my part, I am retired. I don't need the groupthink. So I get to be wild. So, I've migrated to a forum that has minimal review (I was invited; my prior work qualified me). I have thought quite a bit about the tradeoff between ease and wide distribution. I think I am the crazy scientist with way-out ideas working alone. I think I like it that way, but sometimes I wonder.

It takes considerable time to prepare a paper for peer review.

Its prospect can act as a deterrent against lazy carelessness and the use of intentional obscurity*. If you don't believe me consider the difference in quality between the two kinds of work from some authors who are well known for both kinds of communication.
* Although reviewers vary enormously. Some want to understand every step; others just look for originality and interest.

By deconvoluter (not verified) on 20 Jan 2014 #permalink

Although there is much to agree with in this post, you would choke if I left it at that.

Although isolated instances of peer review date back to the 17th Century, as a systematic practice it is a much more recent phenomenon. Most of the important science we look to was not in fact peer reviewed--hence its role cannot be considered essential.

As a gate-keeping mechanism it has obviously failed--bad, even fraudulent science gets past peer review regularly. You speak of it as a filtering mechanism, blocking out the noise, but there are so many journals publishing so many papers that that is no longer true, if ever it was.

Of course skeptics will bitch about peer review--but the subtext is that they're really bitching about Mann and Co. trying to lean on a couple of journal editors (a tactic that worked, you may recall), or about the publicized cat fight over the dueling Antarctic ice papers a year or so ago. You are correct that a scientist can venue-shop until s/he finds a congenial editor, and that skeptic climate scientists such as Lindzen, Christy and even non-skeptic but non-consensus scientists such as von Storch are not shut out of the literature. But they haven't made recent news, which is what drives blog conversations.

Now that Elsevier is cracking down on scientists who are guilty of the heinous crime of putting their published papers on their own blog, the fledgling movement to open access may gain some momentum. What would work best would be a combination wiki with spawning blogs, or some way to provide electronic marginalia for comments. We'll see.

Side question: Why do you make so much sense in your comments over at Planet 3 and champion such marginal arguments elsewhere?

[If by "elsewhere" you mean the denial-o-sphere, the problem is the low base all the conversations start from, and the poor reasoning ability of those commenting, so its pretty hard to have an intelligent conversation. JoNova's is a fair example. If you had a specific instance in mind, it would be easier to be give a specific answer -W]

By thomaswfuller2 (not verified) on 20 Jan 2014 #permalink

As a gate-keeping mechanism it has obviously failed–bad, even fraudulent science gets past peer review regularly.

Well, it's obviously not perfect, but do you really think it doesn't at least limit the amount of "bad, even fraudulent science" published?

My negative experience with peer review is mostly lazy reviewers. "How did that one get by?" seems more common. The second is the particular "figure of merit" types who either can't think beyond their own old paper or want a citation. The third is the ladder-climbing department head who basically dominates a particular field through subordinate publishing and review to build their CV. They become the walled garden of topics that no one will ever mention again (or at least publish in the journals they do). Patents in some instances count as peer review for the "publish or perish" group, and university patent trolls are one of my pet peeves now.

By Tim Beatty (not verified) on 21 Jan 2014 #permalink

bad, even fraudulent science gets past peer review regularly

Peer review is not designed to catch fraud: the presumption is that the authors of the paper performed the described experiment(s) and obtained the described result(s). Most reviewers don't have the time or resources to duplicate the experiment.

I take a Churchillian view of peer review: it's the worst possible system, except for everything else that's been tried. The current peer review system does have problems, but not of the kind you see in denialist circles. The biggest defect I see is the perverse incentive: there is no explicit reward for doing the job well; instead, those who do it well get to do more of it (generally without compensation), while those who do it badly tend not to get asked again. This is particularly an issue in countries like the US where success is measured in terms of winning grants: those proposals are peer reviewed, too, and success rates are low and declining (depending on field, there will be between 4 and 20 proposals submitted for each award made). One of the effects of this phenomenon is the "lazy reviewer" issue Tim @6 mentions.

By Eric Lund (not verified) on 21 Jan 2014 #permalink

I've gotten to experience peer review on 3 sides now - published articles, reviewing articles, and being a journal guest editor. I've used the expression "peer review as a spam filter" before - weeds out a bunch of dross (though some gets through), and lets through the good stuff (though some gets blocked erroneously). I think that this does some discredit to the role of peer review as an improvement process: I find there's actually a decent percentage of "the occasional well-considered thoughtful comment that genuinely improves things" and have seen one of my own papers improve dramatically through the peer review process, have seen one Nature editorial that I reviewed change (and improve) dramatically (all 3 peer reviewers had similar concerns, which helped), and as guest editor have tried to convince the authors to take into account the most important comments and seen less dramatic changes but definitely strengthening.

I will also say that I have not seen serious evidence of the "enclosed system" in the climate change field. I did, however, have a friend in a sub-field of chemistry where apparently there were two camps about a certain phenomenon, and if you got peer reviewers from the wrong camp, your paper would be doomed. Which was unfortunate, and made life difficult for my friend's research group... but they did still publish, so not impossible.

I've tended to find peer review a positive, albeit tedious, process in the main. My papers have always benefitted from a bit of tweaking from less-involved parties. This is particularly the case when embarking on projects that touch on areas outside my group's core specialities.

Reviewing papers is always rewarding, both in helping other researchers and also as a means of keeping abreast of developments in my field that might otherwise have been missed. Although it can be a pain if a paper is particularly bad or non-innovative.

My main bugbear is having to review papers from researchers whose first language isn't English - I've no problem with receiving them direct from the research groups, but some have apparently gone through "scientific editing" to improve the language used (at some cost to the research group) with no demonstrable impact on the quality of the English. There's definitely room for improvement in the scientific-editing business if anyone is casting around for work...

By Quiet Waters (not verified) on 21 Jan 2014 #permalink

The most ferocious peer review process I've ever seen was at Bell Labs, internally, because they did not wish poor papers to get out. If a paper got through that, it tended to get published.

Suppose a Member of Technical Staff writes a paper for external publication. It went up their management chain:
a) Supervisor
b) Dept Head
c) Director
d) Executive Director (who would manage 1000-1500 people)

The ED would send it to 2 other ED's, who would send it down their chains to review, reviews come back up to them, then back to the original ED, and back down the chain through your line of management.

One of my MTS's (I was a Supervisor) savaged a paper that came to us, I signed off on the review (I'd read the paper and agreed), sent it up. My ED copied me on the note he attached: "Dear ED XX: once again, I'm afraid some of my people think a paper by some of yours is junk and I agree with them."

Anyone with any sense knew they'd better publish internal memos* first, pass them around, and talk to experts if needed, given the visibility of this process at all levels, which was taken seriously. Career-limiting moves could be obvious. Imagine doing a paper with junk statistics ... it would likely end up with John Tukey or somebody in his organization and meet a dire fate, visible to all.

* There was an extensive system for this, already well emplaced in the early 1970s. One would write a Technical Memorandum, get it reviewed/approved, probably up to DH or DIR level, and attach subject codes and the names of other people you thought should be alerted. Each person would maintain a profile of codes of interest.
The Library would combine all this, so that every week you'd get a bunch of cover pages/abstracts of memos. If you wanted one, you circled your name, folded the page over, stapled it, and threw it in the internal mail. Then a few days later, the full memo would arrive.

By John Mashey (not verified) on 21 Jan 2014 #permalink

Somewhat tangential, but there is an example of a particular orthodoxy suppressing publication of opposing points of view. Just not in science. Economics. Krugman has talked about it.

[{{cn}} -W]

The market idea of Jo Nova could be a good idea, but I see one major problem: how do you determine how good an article is? You could use the number of citations, and then set up a market trading in futures for the number of citations 10 years after publishing.

However, if I look at my own publications, I would say that the number of citations is only weakly correlated with the quality and contribution to the field (in my subjective estimate).

[Yup. And the last thing we need is more people writing "citation ready" papers rather than unshowy but good stuff. I think the idea using "the market" is appealing in principle, but rather hard to see how it could work in practice -W]

You could send the papers to multiple reviewers and pay them for how well they predict the median. But that would be highly dangerous: it would be hard to stop people collaborating, and this could easily lead to gate-keeping.

[Plus no-one wants to pay reviewers anything at all, and I don't see that changing. Unless... one could conceive of the payment structure of science changing. Less base salary, more people expected to top their income up from reviewing? -W]
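As a thought experiment only (this is my own toy construction, not anything VV or JoNova actually proposed): the "pay reviewers for getting the median right" scheme could be mocked up as reviewers predicting a paper's ten-year citation count, with a fixed budget split in favour of whoever lands closest to the panel median. All names and the inverse-distance payout rule here are illustrative assumptions.

```python
# Toy sketch of "pay reviewers by closeness to the median prediction".
# Everything here (names, the payout rule) is an illustrative assumption.
from statistics import median

def reviewer_payouts(predictions, budget=100.0):
    """Split a fixed budget among reviewers by closeness to the median.

    predictions: dict of reviewer name -> predicted 10-year citation count.
    Returns a dict of reviewer name -> payout.
    """
    m = median(predictions.values())
    # Inverse-distance weights; the +1 avoids division by zero for exact hits.
    weights = {r: 1.0 / (1.0 + abs(p - m)) for r, p in predictions.items()}
    total = sum(weights.values())
    return {r: budget * w / total for r, w in weights.items()}

payouts = reviewer_payouts({"alice": 40, "bob": 10, "carol": 35})
# The median prediction is 35, so carol (an exact hit) earns the largest share.
```

The gaming problem is visible immediately: since payment depends only on the panel's own median, a colluding majority can drag the median wherever it likes and get paid for doing so, which is exactly the gate-keeping worry raised above.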

Maybe some social-media type of system could work, like Slashdot. But it would be hard to make sure that important articles in niche fields get sufficient attention relative to average articles in large fields.

[This sort-of feels like the kind of thing traditional publication is missing: at the moment, journals are moving online, but it's just a bulk shuffling of dead trees into dead electrons: it's not using the power of the new meeja. I almost like the idea of no journals, no pre-review at all. People just stuff papers online, and then some reputation-based system pulls out interesting ones. Perhaps journals could just become places that review existing stuff, and publish reviews, not papers? -W]
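A minimal sketch of that "reputation-based system pulls out interesting ones" idea, under my own assumptions (no such system is described in the thread): papers are ranked by the summed, log-damped reputation of whoever endorses them, so one heavyweight can't single-handedly dominate.

```python
# Toy sketch of reputation-weighted paper ranking; the data model and the
# log damping are my own illustrative assumptions, not an existing system.
import math

def rank_papers(papers, reputation):
    """Order paper ids by the summed reputation of their endorsers.

    papers: dict of paper id -> list of endorser names.
    reputation: dict of endorser name -> reputation score.
    log1p damping stops one heavyweight endorser from dominating.
    """
    def score(endorsers):
        return sum(math.log1p(reputation.get(e, 0)) for e in endorsers)
    return sorted(papers, key=lambda p: score(papers[p]), reverse=True)

ranking = rank_papers(
    {"p1": ["ann"], "p2": ["bob", "cid", "dee"], "p3": []},
    {"ann": 100, "bob": 5, "cid": 5, "dee": 5},
)
# Three modest endorsements (p2) outrank one heavyweight endorsement (p1);
# a paper nobody notices (p3) sinks to the bottom.
```

The niche-field worry from the comment above shows up at once: a sound paper in a small field, with few potential endorsers, sinks just like an ignored bad paper does.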

I have the feeling I get better reviews at European journals than at US journals. Maybe because "publish or perish" is stronger in the USA, maybe because in Europe people know me.

By Victor Venema (not verified) on 21 Jan 2014 #permalink

Josh Nicholson, What is so special about the Winnower?

In the Geosciences we already have a range of Open Review Open Access journals.…

The only novelty I see is that the Winnower would not have anonymous reviewers, only named ones, for increased transparency. The Copernicus journals invite several anonymous reviewers, and named reviewers can also comment. Experience tells me that it is very rare for named reviewers to show up.

By Victor Venema (not verified) on 21 Jan 2014 #permalink

Pear review is kinda fruity.

By Eli Rabett (not verified) on 22 Jan 2014 #permalink

Less base salary, more people expected to top their income up from reviewing?

This would help with the perverse incentives I mentioned above.

I almost like the idea of no journals, no pre-review at all. People just stuff papers online, and then some reputation-based system pulls out interesting ones.

There is an implementation of this idea in physics: the arXiv. In some subfields of physics (though not the one I'm in), the arXiv is the main vehicle for publishing papers, and subsequent peer review by a journal (e.g., Physical Review Letters or the Astrophysical Journal) is considered a formality. The system isn't perfect: people try to time submissions so that their article is near the top of the list in the weekly e-mail. But it seems like a good starting point toward what you have in mind.

By Eric Lund (not verified) on 22 Jan 2014 #permalink

Paper came back from review today. Lead author's Email, good reviews except #3 and we know who he is. (no joke)

By Eli Rabett (not verified) on 23 Jan 2014 #permalink

Everyone has read Michael Nielsen's "Reinventing Discovery", right?

By Nick Barnes (not verified) on 23 Jan 2014 #permalink

If I have seen further it is by standing on the shoulders of giants. - Isaac Newton

Information wants to be free. -- Stewart Brand

" amount of scientific research, carefully recorded in books and papers, and then put into our libraries with labels of secrecy, will be adequate to protect us for any length of time in a world where the effective level of information is perpetually advancing." -- Norbert Weiner

Capitalism wants information to be expensive; a commodity to be bought and sold. The unhampered exchange of scientific information was seriously curtailed during WWII. In the aftermath, governments and corporations were loath to lose the control they had gained over scientists in the name of national security. Few were willing to resist; Norbert Wiener was one who did.

Scientists have always built upon the work of their predecessors; what we're really talking about is making that communication of ideas more efficient.

Digital libraries, open access journals, scientific blogs, web-seminars, etc all make the communication of ideas more efficient. Why think when you can just look it up?

A brief example: I saw a job ad for a programmer that detailed a coding exercise. The employer wanted code that would return all the occurrences of a certain phrase in Wikipedia. I could sit down and write the code, but it would take *me* at least a couple hours. Instead I did a couple of Google searches for freeware that would do the same thing. Within 5 minutes I had installed two software packages and not only had all of the Wiki pages with the phrase, but had them sorted by page rank.

The efficiency of information grows exponentially when it is openly shared. I doubt even an experienced programmer could beat that time :)

All we need do is address the concerns of Norbert Wiener -- that information should not be controlled by economics, but by what's in society's best interest.

By Kevin O'Neill (not verified) on 23 Jan 2014 #permalink

Pier review should be the name of the tide gauge version of the surfacestations project.

This is especially for John Mashey, some personal history about Bell Labs that I've had to scrub a bit:

"... an idealised, early version of Bell Labs review process ... in the '50's, which later broke down in physics, to soome extent at the hands of careerist types ... and certainly had completely broken down by the Schon scandal. It may have lingered more iin some departments. ... ED having shouting matches ... [with someone who] put out nonsense, even into the 90's. But ... a claque supporting never could be completely controlled."

By Susan Anderson (not verified) on 23 Jan 2014 #permalink

Don't forget the world some people live in:…

"... take neoliberal theorists like Hayek at their word when they state that the Market is the superior information processor par excellence. The theoretical impetus behind the rise of the natural science think tanks is the belief that science progresses when everyone can buy the type of science they like, dispensing with whatever the academic disciplines say is mainstream or discredited science."

Mirowski, Philip, "The Rise of the Dedicated Natural Science Think Tank" (New York: Social Science Research Council, July 2008).…

By Hank Roberts (not verified) on 23 Jan 2014 #permalink

Sad, but may be true, especially after my time.
I was only there 1973-1983, and generally worked for some of the savviest lines of management at the Labs, for supervisors to Executive Directors, in some sense at Bell Labs' peak.

The world started changing ~1983, as the 1984 Bell System breakup was nearing. Inside Bell Labs, people with entrepreneurial streaks could once get huge leverage to make things happen if they knew how to do it ... but then things got harder and harder as the effects of the oncoming breakup started to appear. Some of us foresaw frustrating times ahead, and headed to Silicon Valley, thus forced to move to places like Palo Alto. :-)

As for Schon, I strongly recommend Eugenie Samuel Reich's well-written Plastic Fantastic: fascinating, and very sad, even for those of us who had been gone two decades by the time of the fraud revelations.

By John Mashey (not verified) on 23 Jan 2014 #permalink

If I have seen further than others, it is by treading on the toes of giants - James "PRP Killer" Annan

[Or, in this case, of pygmies. But I don't think James is claiming to see any further because of this. Maybe the PRP people would like the alternative version: "if I haven't seen further, its because giants were standing on my shoulders" -W]

By Eli Rabett (not verified) on 23 Jan 2014 #permalink

James and the Giant PRP Impeachment.

[:-). It is interesting that a few of the more awake denialists - those who've read far enough to notice his involvement - have decided he is Da Man. Whereas I presume he is merely the one who wrote about it on his blog; many others must have complained -W]

It is indeed likely many complained, but James has made it public he complained. And as we all know, the septics love to play the Mann^^man.

Another fundamental role of peer review that is pertinent to the amusing "Pattern Recognition in Physics" journal escapade:

Peer review further induces the manuscript submitter to pay a huge amount of attention to making his/her paper as solid as it can possibly be - especially if s/he chooses to submit to a decent journal. I submit on average 2-3 papers a year, but could probably submit 5 or 6 if I knew that there wasn't a quality-control hurdle to jump. Of course I wouldn't do so, since I have the internalised critical faculty that most serious scientists have/develop. But if one knew that any old stuff could make it into publication, what might be the point of trying to produce something good? This in fact is a minor problem in contemporary science publishing, with its massive tail of largely rubbish in a huge flush of new, largely junk journals. At least this stuff is easily ignorable.

That's something of which the "Pattern Recognition in Physics" shower are likely oblivious. The point of science publishing is not simply to publish, but to publish something good and meaningful. The expectation of a robust peer review in a decent journal focuses the mind towards quality. The peer review itself reinforces this imperative towards quality, and weeds out those who have deliberately or unwittingly over-reached themselves or have tried to present their mutton as if it's lamb...

You don't have to be a biased climate scientist to know which way the wind blows. Take catastrophic rising sea levels: you don't have to ask 97% of climate scientist about this, just ask someone with no axe to grind; say a seaside ice cream van driver. I mean these people are extremely territorial but yet there hasn't been one news story throughout the breadth and depth of Britain since the end of WWII where a summer time Mr Whippy route has been forced to change due to rising sea levels!!!!!!

That's all you need to know, that Rossi's Ice Cream van still pulls up at those sacred summer coastal spots. I'd call that an inconvenient Mivvy for the alarmists, wouldn't you Mr Cone olley?

Warmist year ? 99 with a flake

Now that's what I call Pier review

By Lawrence13 (not verified) on 25 Jan 2014 #permalink

Ahhh the climate wars...

So peer review is what it is - sometimes helpful, sometimes not, and if your advisor knows the journal editor your paper will be better received than if not. And that has been my experience. It is ritualistic, inefficient, flawed - but not the worst way for journals to manage their content. It is more-or-less irrelevant to science.

What matters is reproducibility. Considerable science is done in secret behind closed doors by evil capitalists who then use those discoveries to make money. No peer review required. Nor patents, BTW. Perfect science, though.

The question at hand is this: Do certain elements of the climate sciences constitute science or not - specifically, the forward-looking projections about a scary future.

The answer, of course, is NO. Or at least not yet. At best it is an excuse to do computer science.

That doesn't mean it has no value, or that the work should stop, or that it isn't entertaining, or that it shouldn't be published, or even peer reviewed. And whether it should drive expensive policy is a slightly different argument.

It just ain't science. And peer review doesn't make it so. And that's really the point.

[I don't see any argument from you re the "ain't science" point; just an assertion. Do you have an argument? -W]

I would argue that peer review is actually most important for outsiders. First of all, it forces the peer reviewers to study the manuscript and in this way brings fringe ideas to their attention that may otherwise have been ignored. Also, the reader is much more likely to take notice of, and to take seriously, a fringe idea in a peer reviewed journal than in a blog post.

Hawking just wrote a new manuscript on black holes and posted it to ArXiv. Scientists will read it, and even the press noticed. Hawking does not need peer review to give his paper some initial credibility, to suggest to people that it is worthwhile to invest their time in trying to understand it. People on the fringe do need that help; peer review helps them.

P.S. Maybe PR, for "peer review", is not such a good abbreviation in an article about climate ostriches. Each time I see it, I first read "Public Relations".

[I thought Monckton's "peer review" was funnier: or… -W]

By Victor Venema (not verified) on 25 Jan 2014 #permalink

WMC, nice links. I had already seen the link to WND. Monckton actually has a weekly column there. A great resource for his extremist positions and foul language.

The discussion at NoTricksZone is revealing. I was not aware there was so much conflict in ostrich-land. There is normally almost no critique at WUWT, but apparently that is not for lack of critique, but because of a policy not to make it public. NoTricksZone calls on his readers to forgive and forget the conflict between WUWT and Pattern Recognition, to make sure that critique does not lead to public debate about the merits of ostrich positions.

By Victor Venema (not verified) on 25 Jan 2014 #permalink

I just want to thank all you hive-bozos... [Um. Well, after that unpromising start it didn't get better, or have any obvious relevance to the topic. But its over at the Burrow if you want to see for yourself -W]

Oh dear. Still in moderation.


[Apologies - released -W]

As I recall, this piece opened with
"Seldom in the field of human conflict has so much been written by so many people on a subject about which they know nothing."
for which unidentifiable commenters kindly offer more evidence.

By John Mashey (not verified) on 26 Jan 2014 #permalink


It just ain’t science. And peer review doesn’t make it so. And that’s really the point.

[I don't see any argument from you re the "ain't science" point; just an assertion. Do you have an argument? -W]

I'd like to see your argument too, kdk33, starting with what you think "science" is.

By Mal Adapted (not verified) on 26 Jan 2014 #permalink

Can we first agree that peer review is just the way journals manage their content?

[No. I think that's naive and simplistic -W]

The work could be technically right, wrong or mixed; could be an incremental improvement, a breakthrough, or just another example; could be boring, exciting, or scary... And all of these are independent of peer review.

[No. Generally, there's a relationship between the quality of a published paper and the results of the review. In that good papers get published in good quality journals, and poor ones are rejected or improved to a better standard. Noting that rarely bad papers are published, or good papers rejected, doesn't change that -W]



[Do you think asking the same question twice is more likely to make people agree with you? -W]

That then is the crux of the problem.

[What problem? -W]

You've incorrectly linked peer review with technical accuracy (I'll give up the word "science", as I foresee intractable semantics).

The statement "peer review is how journals manage their content" is factually, perfectly, accurate.

[Its nice to know that you know that you're correct. But notice the statement you've made now is not the same as the statement you made earlier. Spot the difference -W]

I suppose by "naive" and "simplistic" you are implying incomplete. I further suppose that incompleteness is linked to "quality" and "good". These are perfectly subjective - simply the judgements of the reviewers who determine the journal content, and judgements vary, as you know.

Yes, I am correct. Do you disagree? If so, can you formulate a response?

I'm not interested in your spot-the-difference game.

Editor of a major journal in my field was on my dissertation committee. He gave me my first paper to review, because it was like what I had done in my dissertation. It was immediately obvious that the whole thing was based on an arithmetic mistake. I knew the coauthors personally, and felt good about keeping them from looking foolish. I have a similar, but less important, mistake in one of my papers. I missed it, my coauthors missed it, and the reviewers missed it. It was discovered by a colleague in France. I have reviewed papers from colleagues where English was not the first language. I helped them out, and their use of English improved over time. I like to think I helped them along.

By Jim Thomerson (not verified) on 26 Jan 2014 #permalink

My most cherished honour is a simple note at the end of a paper that I reviewed:
"We wish to thank the anonymous reviewer for their comments that have greatly improved this paper."

We went through several rounds of reviews that turned a piece of junk into a modest but useful contribution to the literature. They had (and still have) no idea who I am, so there was no effect of reputation. Just the acknowledgment that they had received god advice & had gained insight as a result.

I count that as a win.

god == good, of course

I wish I could give god advice...

In the old days, before my time, the editor did all that; no referees.

By David B. Benson (not verified) on 26 Jan 2014 #permalink

[Snip] I click on the link to "the Burrow" and, contrary to the promise made in your no. 37 comment, my "zinger" is not there [Snip. You're wrong; it is there. Perhaps you need to refresh your browser? -W]

I stand corrected, WMC--thank you!

"Seldom in the field of human conflict has so much been written by so many people on a subject about which they know nothing."
Indeed, I think this rivals the 100%-confident assertions by people who are obviously clueless about defamation law, widespread on blogs about this case, which grinds along.

By John Mashey (not verified) on 27 Jan 2014 #permalink

I have developed the idea mentioned above a bit further and made a blog post out of it: "Peer review helps fringe ideas gain credibility".

To stay on topic, I did not start on the market of ideas. But you could see the peer reviewed literature as the regulated top market, the Dow Jones, and WUWT & Co. as an unregulated grey market with penny stocks.

[Apologies for the delay (and the previous lost comment). You ended up in the spam bin. I don't know how that happened. Its a bit worrying in fact. Perhaps "market" or "Dow Jones"? -W]

By Victor Venema (not verified) on 28 Jan 2014 #permalink

Again, along the same line as #50, but over at Retraction Watch we see many comments by those who know nothing, or less, although at least one (correctly) started:
"I am not well enough informed to argue specific cases,..."

By John Mashey (not verified) on 28 Jan 2014 #permalink