I just got back from a 75-minute ethics seminar for summer researchers (mostly undergraduates) at a large local center of scientific research. While it was pretty hard to distill the important points of ethical research into just over an hour, I can't tell you how happy I am that they're even including ethics training in this program.
Anyway, one of the students asked a really good question, which I thought I'd share:
Let's say you discover that a published result is irreproducible. Who do you tell?
My answer after the jump.
First, of course, you want to make sure you've done everything you should to reproduce the experiment. Have you followed the described methods precisely? Have you tried it more than a few times (to make sure you have sufficient practice with all the necessary techniques)? Have you brought in a colleague to help you troubleshoot the protocol, to make sure there's not some crucial step you're missing? Remember how many tries it could take to get those "canned experiments" from your lab classes to work? Experiments that are part of actual research are frequently more sensitive. So start by making sure you're not giving up prematurely.
Next, if you've gotten to the point where you've honed your technique and you're following the procedures to the letter, it's time to get in touch with the authors who published the result to ask for help. Tell them you're trying to reproduce their results but that you're having trouble. Describe what you've tried and ask for advice. It's quite possible that the experimental outcome depends on controlling seemingly little details that weren't fully described in the "materials and methods" section because the authors didn't realize how important they were. (This is one reason it's important to reproduce experimental findings -- it gives us better information about which bits of the system are causally relevant.)
If the authors who published the original finding can't help you figure out how to reproduce their results, a couple of different things could happen. Those authors may re-examine the system, decide the finding isn't as firm as they thought it was, and contact the editor of the journal where the results were published with this information. Or, those authors may decide they've helped you as much as they can -- at which point, you need to decide whether your research really establishes a different outcome for the experiment they described. Depending on the significance of this finding, you could either convey it in a letter to the editor of the journal, or you could work it up into a full experimental paper of your own.
Of course, first you'll want to make sure your results are reproducible.
As a non-scientist, it sometimes seems to me that this was the original inspiration for between 1/4 and 1/3 of the peer-reviewed papers I've read. (However, I suspect much of what I read is research on topics that are more contentious than average.)
To emphasize your point that it's "quite possible that the experimental outcome depends on controlling seemingly little details that weren't fully described in the 'materials and methods' because the authors didn't realize how important they were," I would direct you to the literature about Intel and its production lines.
Intel uses a process called "copy exactly" in its manufacturing. Essentially, every manufacturing facility is built to be identical to the prototype facility. You would be amazed at how detailed they get in their mimicking, and even then they have an incredibly difficult job getting fully reproducible results. This is a multi-billion-dollar company with hundreds of engineers, and even they have trouble reproducing a result from one location to another.
Reproducibility is not only an issue for experimental results, but also for data analyses. And while experimental methods are often described in considerable detail, it's quite common for analytical methods to be given very little attention. Fortunately, there has been some progress on this front recently: see, for example, this paper on statistical analyses and reproducible research and the CONSORT statement (full disclosure: David Moher, who directs the group I work in, was instrumental in the development of CONSORT). A small illustration of the kind of housekeeping that helps is sketched below.
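To make that concrete, here's a minimal sketch of what "describing the analytical method" can mean in practice: fix the random seed and record the exact software versions alongside the results, so another analyst can rerun the same computation and compare outputs. This is a generic illustration in Python, not the approach from the cited paper or CONSORT; the dataset and the bootstrap "analysis" are hypothetical stand-ins.

```python
# Minimal sketch of reproducible-analysis housekeeping (hypothetical example):
# fix the random seed and record the software environment with the results.
import platform
import random

import numpy as np

SEED = 20240601  # arbitrary but fixed: every rerun uses the same seed
random.seed(SEED)
rng = np.random.default_rng(SEED)

# Stand-in "analysis": a bootstrap estimate of a mean from simulated data.
data = rng.normal(loc=5.0, scale=2.0, size=100)
boot_means = [rng.choice(data, size=data.size, replace=True).mean()
              for _ in range(1000)]
estimate = float(np.mean(boot_means))

# Report everything a reader would need to rerun the analysis exactly.
print(f"bootstrap mean estimate: {estimate:.4f}")
print(f"seed: {SEED}")
print(f"python: {platform.python_version()}")
print(f"numpy: {np.__version__}")
```

Run twice on the same machine, this prints identical estimates; run elsewhere, the recorded seed and version numbers tell a reader exactly what to match before concluding that a discrepancy is real.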
Wouldn't the publication of a new paper contradicting the previous authors be a problem for those prior authors? Are they expected to re-examine their experiments to validate their conclusions and results?