Why negative results are usually not published

Cameron comes up with several persuasive reasons in "Why good intentions are not enough to get negative results published":

The idea is that there is a huge backlog of papers detailing negative results that people are gagging to get out if only there was somewhere to publish them. Unfortunately there are several problems with this. The first is that actually writing a paper is hard work. Most academics I know do not have the problem of not having anything to publish; they have the problem of getting around to writing the papers, sorting out the details, making sure that everything is in good shape. This leads to the second problem: getting a negative result to a standard worthy of publication is much harder than for a positive result. You only need to make that compound, get that crystal, clone that gene, get the microarray to work once, and you've got the data to analyse for publication. To show that it doesn't work, you need to repeat the experiment several times, make sure your statistics are in order, and establish your working conditions. Partly this is a problem with the standards we apply to recording our research; designing experiments so that negative results are well established is not high on many scientists' priorities. But partly it is the nature of the beast. Negative results need to be much more tightly bounded to be useful.

---------------------------

Fundamentally, my personal belief is that the vast majority of "negative results" and other journals trying to expand the set of publishable work will not succeed. This is precisely because they are pushing the limits of the "publish through journal" approach by setting up a journal. To succeed, these efforts need to embrace the nature of the web and act as web-native resources, not as printed journals that happen to be viewed in a browser. This does two things: it reduces the barrier to authors submitting work, making the project more likely to be successful, and it can also reduce costs. It doesn't in itself provide a business model, nor does it provide quality assurance, but it can provide a much richer set of options for developing both of these in ways appropriate to the web. Routes towards quality assurance are well established, but they suffer from the ongoing problem of getting researchers involved in the process, a subject for another post. Micropublication might work through micropayments, the whole lab book might be hosted for a fee with a number of "publications" bundled in, research funders may pay for services directly, or, more interestingly, the archive may be able to sell services built on top of the data, truly adding value to it.

Read the whole thing.


This is an argument for at least publishing these negative results as raw data, as is, without peer review. Then people could at least have a sense of what test results have in fact been ignored.

In general, I do not understand why everything that happens in science must be turned into a paper. Why not blog about all the data you collect during your work? Even better, publish it in some standardized format.

I guess I'll just have to wait for that. :)

The sheer amount of negative data generated around the clock, around the world, is reason enough not to publish it. Or, as Vadlo has put it here, in a nutshell!