Over at the ARN blog, Denyse O'Leary has a four-part article up attacking the peer-review system. Rob Crowther, of the Discovery Institute's
Media Complaints Division, has chimed in with his own post on the topic. There's a great deal of humor in watching anti-evolutionists try to dismiss peer review as not worth the effort anyway. It bears an amazing resemblance to this really cute old fable about a fox, but I'll be kind and pretend that there is actually something more to the O'Leary and Crowther rants than good old sour grapes.
Their major complaint about peer review is, of course, that their stuff, for some bizarre and unaccountable reason, has a really hard time surviving the process. In Crowther's words:
To sum up, science journals that are wedded to Darwinian evolution refuse to publish authors who explicitly advocate intelligent design. Then Darwinists attack intelligent design as unscientific because it isn't published in peer-reviewed journals.
O'Leary puts it a bit differently, but the basic concept is the same:
There is a modest but growing number of ID-friendly peer-reviewed publications. But - given the woeful state of peer review - papers that support or undermine ID hypotheses would probably be neither better nor worse recommended if they were never peer reviewed, just published, amid cheers and catcalls.
Of course, they try to justify their criticism of peer review on grounds other than their inability to reach the grapes. Peer review, they claim, doesn't identify fraud. It's not that good at catching incorrect findings. It squelches new ideas. It places "intellectual pygmies" in judgement of intellectual giants. It favors consensus. It sucks the life out of people, and is entirely responsible for global hunger and bad hair days. OK, I made the last two up, but you should still get a taste for the basic strategy that's being employed here - it's an oldie, but a goodie. Throw as much crap as you can at the wall, and hope that some of it sticks.
In this case, some of it does stick. It should. Peer review is not a perfect system. It is absolutely flawed. It is, in fact, not good at catching fraud. It does not catch many flawed studies. It does make it more difficult to publish new ideas, and it is absolutely capable of sucking the will to live from people. (Just because I made that one up doesn't mean it isn't right.) To paraphrase Churchill, peer review is the worst system out there, except for all the others that have been tried.
At this point, I must in fairness note that O'Leary, in Part 2 of her "critique" of peer review, specifically takes exception to the Churchillian analogy. However, she does so in a breathtakingly (yet unsurprisingly) asinine manner:
But the convenient analogy to democracy fails. In the first place, the secrecy in which peer review operates make it a poor analogue to democracy. Second, democracy aims primarily to give every citizen a vote. The fact that some citizens vote for cranks or criminals does not mean that democracy has failed. But peer review's primary aim has been quality control, and it has been failing for decades. It squelches too many good ideas while failing to prevent too many frauds.
To begin with, the reference to the Churchill quote does not attempt to compare the way the peer review system operates with the way that democracy operates. The analogy suggests that, like democracy, peer review is a flawed system but one that is less flawed than the alternatives. That much should be obvious to anyone with better reading comprehension skills than the average functional illiterate.
Any attempt to compare the way the peer review system operates with the way democracy operates would be stupid. Peer review is not democratic. It makes no pretense at being democratic. Science is a meritocracy. Peer review is an attempt to ensure that published scientific papers have at least some merit.
As an aside, it's worth mentioning that O'Leary's analysis of the target of the analogy is almost as skilled as her characterization of the goals of democracy. The goal of democracy has never been to give every citizen a vote. If it was, it probably wouldn't have taken quite so long to extend that right to women and minorities, and the right wouldn't be stripped from felons. The guiding principle of democracy, as it is generally practiced today, is the belief that a government of the people will be more able to meet the needs of the people than a government run by a monarch (or other dictatorial leader). This usually works out quite well, but the fact that total bloody idiots still get elected is a flaw. It's an acceptable flaw, however, since (a) there is no other system available that is capable of ensuring that no bloody idiots get elected; (b) later elections can (usually) correct much of the damage caused by the idiot; and (c) democratically elected governments are in fact better at representing the people than appointed or anointed governments.
Similarly, the goal of peer-review is to provide a basic check on the quality of papers prior to publication. It doesn't always work. But, on average, it works better than the other quality control options.
It is possible, however, that O'Leary's critique of peer review has more merit than her analyses of written English and political science, so let's look at a couple of her specific criticisms.
One of her major complaints is that peer review does not detect fraud. Personally, I don't believe that it was intended to do so. Preventing fraud would require the external examination of all of the raw data in every published study, and that is quite simply not feasible - and, in many cases, not possible. O'Leary believes that, "embarrassing frauds have created a demand for a system that can detect fraud." To support this claim, she baldly asserts that:
A host of individual acts of sloppiness (or malice!) can get lost in the smoke generated by a really big fraud like the stem cell scam. Defenders of the system can then safely claim that the Big One is unrepresentative. That is usually not true. It would be more accurate to say that the ensuing uproar is unrepresentative. With scandals, as with rats, if you see one, there are probably a dozen, and the rat that caught your headlights was just unlucky. And a big one always gets more attention than a bunch of little ones.
Even if we accept that assertion - and there is absolutely no evidence that it is the case - it is difficult to see that the numbers demonstrate a significant problem. A search for articles on PubMed that were published during 2005 returns 617,207 results. Limiting that search to show just articles written in 2005 that have been retracted reveals 34, most of which were retracted by their own authors after they discovered the problems themselves. However, even if we assume that each and every retracted article represented a case of fraud - which is clearly not the case - and that there are three dozen (triple O'Leary's assertion) that weren't caught for each of the ones that was, the grand total is still only about two tenths of one percent of the total number of papers published. 99.8% isn't perfection, but it ain't too shabby. Balance that against the massive amount of effort that would be required to check all of the data in every study prior to publication, and it's hard to see how the benefits would outweigh the costs.
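For what it's worth, the back-of-the-envelope arithmetic here is easy to check. A minimal sketch, with the figures taken from the PubMed search described above and the 36-to-1 multiplier being the deliberately pessimistic assumption (triple O'Leary's "dozen rats"), not a measured value:

```python
# Worst-case estimate of the fraudulent/retracted fraction of the 2005 literature.
# The 36x "uncaught per caught" multiplier is an assumption for argument's sake.
published_2005 = 617_207     # PubMed articles published in 2005
retracted = 34               # of those, later retracted
uncaught_per_caught = 36     # assume three dozen undetected problems per retraction

worst_case_bad = retracted * (1 + uncaught_per_caught)  # caught + assumed uncaught
rate = worst_case_bad / published_2005

print(f"{worst_case_bad} suspect papers -> {rate:.2%} of the literature")
# -> 1258 suspect papers -> 0.20% of the literature
```

Even under that worst-case assumption, roughly 99.8% of the published record comes out clean, which is the point being made.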
Some of the other concerns raised by O'Leary do have a bit more merit. It is sometimes difficult to get new ideas published, in large part because reviewers are more likely to scrutinize every detail of a paper that does not match up with what they think they know about the subject. That's just human nature, and it's hard to get around. O'Leary and Crowther point out that ideas that eventually resulted in Nobel Prizes were originally rejected for publication, but they miss one important detail - the ideas were, in fact, eventually published. The authors might well have needed to do a lot more work to get them published, and provide a lot more evidence, but they did eventually succeed.
It would be nice if there was a system that could get around this, and one might eventually be developed, but we definitely aren't there yet. Some of the proposed changes to peer review - publishing the reviewers' names, publishing reviews along with papers - have merit, but do not address that basic problem (as rejected papers won't be published). Suggestions to publish every submitted paper, along with critiques, suffer from a different limitation. Right now, more papers are published than can possibly be read. That 617,207 figure for 2005 came from a single database that primarily indexes journals with biomedical applications. It doesn't capture the entire biological literature, much less the entire scientific literature. Peer-review, while imperfect, does serve to keep the worst papers out of the mix. If everything was published, every individual scientist would effectively have to conduct his or her own review on every paper of possible interest. With so many scientists doing and publishing so much work, that just isn't remotely practical.
Peer-review, while imperfect, does serve to keep the worst papers out of the mix. If everything was published, every individual scientist would effectively have to conduct his or her own review on every paper of possible interest. With so many scientists doing and publishing so much work, that just isn't remotely practical.
I'd like to add that if you know enough about a subject (and you should if you are working in that field) then you too can spot blemishes in papers that you are working with or citing.
An area that routinely pisses me off is the Experimental section of most papers. There should be enough detail in them to be able to reproduce the experiments, but usually this is not the case. Authors want to thwart reproduction in many cases in order to stay ahead of the competition in that field. Referees should require more detailed Experimental sections, and do so occasionally, but not often enough.
I'll simply echo your sentiments that the peer review system is far from perfect, but it's the best we have.
Peer review is a never-ending process that only begins with publication in a peer-reviewed journal. Go to:
http://www.ohioscience.org/CommonGround.shtml
Evolution vs. Young Earth and Intelligent Design Creationism in Ohio's Public School Curriculum:
Finding the Common Ground
Ted Scharf and Phil Geis
Excerpts published in the Cincinnati Enquirer, 1/29/2005.
http://www.ohioscience.org/CGSidebar6.shtml
The scientific peer-review process
A common complaint by scientists is that proponents of IDC do not publish in peer-reviewed science journals. Thus in the past year, the Discovery Institute has loudly proclaimed a few new journal publications. And to be fair, The Origin of Species (1859) was not submitted for peer-review by a journal prior to publication, although shorter letters by Darwin and Wallace were read publicly and published by the Linnean Society in London (1858). The heavy emphasis on peer-reviewed journal publications, while very important for scientific careers, tends to obscure the much more comprehensive process of scientific peer-review, which is never-ending.
Peer-review is integral to the processes of science, but it does not end with publication in a peer-reviewed journal. Actually, such publication is just the start of scientific peer review. For example, Stephen J. Gould and Niles Eldredge first proposed a new theory of evolutionary change termed "punctuated equilibria" at a scientific meeting in 1971 and published a paper in 1972. This theory is a response to the strictly gradual approach to evolutionary change of the "modern synthesis" of evolution, proposed in the 1930s and '40s. Punctuated equilibria has shown itself to be a remarkably productive and valuable theory in evolutionary biology and paleontology. However, despite thousands of peer-reviewed journal publications and books, in 2005 punctuated equilibria remains under discussion in comparison to a strictly gradual approach to evolutionary change. With respect to the competing theories of punctuated equilibria and strictly gradual change, the peer-review process is still actively engaged and seeking new evidence.
The key is surviving and winning scientific peer-review over a period of years, decades, or even centuries, through the accumulation of new evidence. Intelligent Design Creationism has its origins in the works of William Paley (1802) and Georges Cuvier (1812), among others. Yet, two hundred years after some of the ideas central to IDC were first proposed, there is still no replicable scientific evidence supporting IDC that has survived peer review.
Correction: Cincinnati Enquirer publication: 1/29/2006.
"Limiting that search to show just articles written in 2005 that have been retracted reveals 34, most of which were retracted by their own authors after they discovered problems themselves."
Here's another glaring contrast between legitimate science and pseudoscience like ID/creationism. When was the last time you heard of a creationist/IDer retracting or correcting a paper or article after discovering their own mistake? They regularly go in the opposite direction, obstinately refusing to correct errors even when they are pointed out by others.
A substantial part of O'Leary's piece has been lifted from the New Atlantis article.
Here's the link to the New Atlantis piece: http://www.thenewatlantis.com/archive/13/soa/peerreview.htm
I went scanning through O'Leary's article to see if she claims that ID articles are being rejected. Didn't seem to be the case. So here's the question: How can peer review be a problem for people sympathetic with ID if they never submit anything?
What's stopping these ID "researchers" from posting all their unfairly rejected manuscripts on line?
Other than not actually having anything to post, that is.
That O'Leary's concerns are unfounded is suggested by some research from the early 1990s (by psychologist Robert Bornstein) that found that somewhere between 75% and 80% of articles submitted for publication within the hard sciences eventually do find peer-reviewed light of day. (Publication rates within the social sciences are lower because of much more constrained resources.)
This puts the failure of ID to publish in peer reviewed settings in sharp relief: It just isn't all that difficult to get something into print so long as one is willing to publish in third and fourth (and below) tiers of publication prestige.
Face it, O'Leary: ID leaves no marks because it shoots blanks.
These ID/DI guys want to have it both ways: on the one hand, they claim that they have published in peer-reviewed journals; on the other hand, they try to reject peer review itself.
In the end they just want our souls, i.e., blind acceptance of their beliefs.
Identifying fraud and catching incorrect findings requires the data in a publication to be reviewed. Since the majority of ID papers contain claims rather than any data, they cannot be reviewed at all.
Peer reviewing has disadvantages, and every scientist has experienced some of its drawbacks. However, to get an impression of what a lack of reviewing leads to, just have a look at the current posts over at UD. It's like adolescents trying to impress each other with cursory knowledge about cars without even having a driving license.
Oops, it must be O'Leary, not O'Liary - sorry for that.
Gary Hurd over at PT
The result of editorial fiat can be observed in Rivista di biologia
(I've put this here because, due to the MM troll's actions, the thread at PT went a little bit off topic)
So, did she....suggest any good alternatives?
Some of the journals now provide data repositories.
How many of the creationist journals provide critical comments in issues subsequent to the one in which the original article appeared? Heck, just look at how many of the creationist blogs censor comments and blacklist critics.
Then there's the real test--after an article is published, how much subsequent work is based on the findings of the article?
I personally think more heavily scrutinizing results that conflict with existing knowledge is far from a flaw; it's a necessity. The current understanding became the current understanding because it is heavily supported. Anything that conflicts with it also has a high probability of conflicting with that support, and thus of being wrong. Therefore, it is only prudent to be extra careful with these results, because they have a higher probability of being wrong. They may not be wrong, but most are, and if they are right they should be able to show it once they have enough evidence. That does not mean that every reviewer really does understand the current consensus, but I think it is a necessary safeguard to prevent junk from being published.
This leads to another point. The IDers want it both ways in another respect as well. They want it to be easier to publish in journals, but they also want journals to catch flawed and fraudulent results more often. These goals are mutually exclusive. You cannot make it easier to publish without reducing the rigor of the review process, and you cannot catch more flawed studies without increasing it. I think, judging by the actual data presented (and not some IDers' intuition), that we have found a very good compromise: most studies can be published eventually, and the vast majority of those that are published are sound.
Also, I do not know in general but I am pretty sure that research funded under the NIH must release its raw data some time after completion, 1 year I think. Hubble data is also under a 1 year deadline.
In the world of theoretical physics, one of the bywords for peer-review flakiness is "Bogdanov Affair". (Yes, this is one of the few times I'm willing to link to a Wikipedia article, because I personally put effort into cleaning it up and making it good.) That muddled business is still continuing, in a low-key way, but one can draw a few morals out of it. One such point is an idea I try to phrase in terms of hypothesis testing and error types.
The peer-review system is optimized to detect sloppy science, though not wholly effectively and not to my knowledge with definite goals originally in mind. It is not bad at doing this job, and several of the various improvements suggested (e.g., open reviews) are intended to make the system better at detecting this type of error.
Our system is not optimized to detect outright fraud. Chicanery of the Bogdanov type can slip through, upon occasion. It is not clear that improving the ways we screen for bad but basically honest science would also help screen out people trying to game the system. These two polarizing filters are not exactly parallel, although I doubt they are completely orthogonal either.
With regard to that affair, Steve Carlip said something worth repeating: "referees give opinions; the real peer review begins after a paper is published."
Peer review is there, in part, to help authors. Rarely does a paper not benefit from the critical analysis of outsiders. The worst referee reports are usually those that say "This is a good paper; publish as is." In reality, a bigger problem for authors is lazy referees rather than overzealous ones. A reality check before publication is a darn sight better than one after.
Peer review has nothing directly to do with detecting fraud - unless the fraud manifests itself in some way. The only deterrent for fraud is the scientific process itself which guarantees that important or big frauds will always be discovered - not usually, but always.
If ID papers were freely published in scientific journals it wouldn't, in the end, matter scientifically, because nothing would ever come of them - assuming, even, that there was anything they could cobble together in the first place. But it would be damaging for many reasons, primarily because the ID people would use publication in scientific journals for purely political ends.
There is an easy way for IDers to get published, and that would be to disprove papers which support evolution. There is no requirement that a wrong paper be replaced with a right one - if evolution is wrong, then a good start would be to disprove papers supporting it, with no reference to ID. That way the paper would not be labelled out of hand as a crank article. That would be in the true spirit of the scientific method and would actually be a useful thing to have done, especially if it worked.
Bob King | November 18, 2006 04:22 PM said,
The problem with that idea is that many people argue that a scientific theory should not be criticized unless a plausible alternative scientific theory is presented at the same time.
As Thomas Edison said, "I have not failed. I've just found 10,000 ways that won't work."
Regarding peer review Mike Dunford writes: "It does not catch many flawed studies."
Yes it does. It doesn't catch *all*, but I've seen a fair share of flawed papers get nailed in the review process. The trouble is that at the cutting edge, it's not always clear at the time of review that the results are flawed. A good test for seeing how a paper is judged over time is to count the number of citations over the years. The flawed ones tend to get weeded out by the field and drop out of the citations.
Larry Fafarman: "The problem with that idea is that many people argue that a scientific theory should not be criticized unless a plausible alternative scientific theory is presented at the same time."
In the primary literature? I don't think that's the case. There are many scientific theories, including many theories related to evolution, that are critically scrutinized in the literature. Although I should note that in many of the papers where authors specify that ID can better explain biology, I can't say I've seen this claim demonstrated with any rigor. That is the problem which is killing ID.
O'Leary writes: There is a modest but growing number of ID-friendly peer-reviewed publications.
Yeah, we've seen the sort of things on that list. What does it mean for a paper to be "ID friendly"? A suggestion below:
Lewis Carroll, Through the Looking Glass
"Limiting that search to show just articles written in 2005 that have been retracted reveals 34, most of which were retracted by their own authors after they discovered problems themselves."
To be fair, this could be construed as suffering from a common form of bias - one that appears in the analysis of bugs in software releases and of the profitability of casualty insurance schemes. One year may not be enough to reveal problems. While I strongly suspect that rates after 5 years are not that much higher, it would be interesting to look at, say, published papers from 1995, and see how many have been retracted (or otherwise exposed as fraudulent), and how long it took for this to occur.
I don't understand why society would want to replace what is testable and provable with mere faith and belief. Thus far science has been our one and only savior. It teaches the farmer to feed the masses. It teaches our military how to keep our enemies at bay. It teaches the physician to heal the sick and it delivers us from the evil of ignorance. To coin a phrase, science has always "walked the walk" while religion has "talked the talk".
Hi Mike;
Great post :-)
I hope you don't mind that I have linked to this on my own blog in an ID-related post, By request: Review of a Chick Tract. I have shamelessly copied the sentence "As Thomas Edison said, 'I have not failed. I've just found 10,000 ways that won't work.'", which was too good not to use :-)
best regards
- pwe