Do scientists see themselves, like Isaac Newton, building new knowledge by standing on the shoulders of giants? Or are they most interested in securing their own position in the scientific conversation by stepping on the feet, backs, and heads of other scientists in their community? Indeed, are some of them willfully ignorant about the extent to which their knowledge is built on someone else’s foundations?
That’s a question raised in a post from November 25, 2008 on The Scientist NewsBlog. The post examines objections raised by a number of scientists to a recent article in the journal Cell:
The paper in question, published in a June issue of Cell, described a model for understanding the genetic and cellular machinery underlying planar cell polarity (PCP), the cell-to-cell communication that epithelial cells use to align and arrange themselves to function as an organized tissue.
Developmental biologist Jeffrey Axelrod, the paper’s main author, defended the work, writing in an email to The Scientist, “our paper (Chen et al. June 2008) underwent Cell’s rigorous process of peer-review prior to publication. We stand by our conclusions as stated in the paper, as well as by our use of citations, and I encourage your readers to look at the papers in question, as they speak for themselves.”
But [University of Cambridge biologist Peter] Lawrence claims that the Axelrod paper, which identifies a transmembrane protein called Flamingo (also known as starry night or stan) as a key signaling molecule in Drosophila PCP, is largely a rehash of his own group’s work, which was published in the journal Development in 2004 and has been cited 35 times, according to ISI. (Axelrod’s Cell paper has not yet been cited in any published papers.)
“The complaint is that the main point of the [Cell paper] is what we discovered and provided evidence for four years ago,” Lawrence said. “It pretends to be much more novel than it is.”
Let me make two smallish points in passing before taking on some larger issues. First, in this last quotation, I’m bothered by the word “pretends”. At least to my ear, pretending involves intent, and intent is a tricky thing to prove. (It’s worth noting, as we’ve discussed before, that plagiarism needn’t involve intent.)
Second, we should not overestimate the virtues certified by a manuscript’s having passed through peer review, even “Cell’s rigorous process of peer-review”. That the paper was published indicates that the peer reviewers judged its claims plausible (both with respect to prior knowledge in the field and with respect to the specific evidence offered to support the claims) and interesting. There is no guarantee, on the basis of peer review, that the claims offered in the paper are true, nor that the data were reported accurately, nor that the model used to interpret the data is on the right track, nor that the paper made proper mention of prior work.
The peer reviewers make the best judgments they can with the facts at hand. They may not themselves have encyclopedic knowledge of the literature in the field.
Now, on to the bigger questions.
How vital is novelty? Given the current rewards system within the community of science, very. That’s how you establish your priority, on the basis of which your work is supposed to be cited in further discussions of the system or scientific problem.
How important is citation of prior work in conducting (and reporting) new research results?
Drawing on the ideas and results that others have published* requires that you give them credit by citing their work. This is part of the social contract within the scientific community. People publish to share what they know, and their strategies for building that knowledge, and other scientists are welcome to draw on these published resources so long as they cite these resources when they contribute their own findings to the discussion. Failing to cite earlier work is failing to give props to the scientists who advanced the state of understanding to the point at which you jumped into the fray. To the extent that citations actually count for more than ego (e.g., playing a role in how scientists and their work are evaluated in tenure and grant reviews), failing to cite the work of other scientists can hurt their careers in tangible ways.
If you are caught not citing work upon which you are building your own work, your willingness to violate the social contract and to do tangible harm to the careers of other scientists may well be held against you. Those uncited scientists, after all, may be reviewing your next manuscript or grant proposal, or even answering a request from your department for an “outside letter” evaluating your standing in your research field.
1. If you know about prior work in the area, you ought to read up on it and cite it.
2. If you don’t know about prior work in the area, you ought to do a literature search to see whether any exists. If it does, return to step 1 (read up on it and cite it). (A rough sketch of such a search appears just after this list.)
3. If you can’t find any prior work in the area, you can go ahead and do your research to see what you can find out.
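For what it’s worth, the search in step 2 is easy enough to script these days. Here’s a minimal sketch in Python, assuming Biopython’s Entrez interface to PubMed; the query, the email address, and the result handling are placeholders I’ve made up for illustration, not anything drawn from the papers under discussion.

    # A sketch of step 2: check PubMed for prior work before diving in.
    from Bio import Entrez

    Entrez.email = "you@example.edu"  # NCBI asks for a contact address

    # Search for prior work on the topic (hypothetical query).
    handle = Entrez.esearch(db="pubmed",
                            term="planar cell polarity AND flamingo",
                            retmax=20)
    record = Entrez.read(handle)
    handle.close()

    ids = record["IdList"]
    if ids:
        # Prior work exists -- back to step 1: read up on it (and cite it).
        handle = Entrez.esummary(db="pubmed", id=",".join(ids))
        for doc in Entrez.read(handle):
            print(doc["Title"], "-", doc.get("PubDate", "?"))
        handle.close()
    else:
        # Step 3: nothing turned up; go ahead and see what you can find out.
        print("No hits; proceed, but keep searching as the project develops.")

No script, of course, substitutes for actually reading (and citing) the papers a search turns up, and a single query is no guarantee that you’ve found everything.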
It seems possible that you could be ill-informed about the state of the literature on a particular system or question, especially if you’re just moving into the area. Maybe you’re not clear on the best search strategies to turn up the key pieces of literature. Maybe the literature search took a back seat to getting a lab set up or getting a cranky experimental system to cooperate.
Having done the experimental research and drawn conclusions from it, you’d obviously like to report what you found.
Reporting your results (i.e., how you set up your system, what you observed, the conclusions you drew) conveys information to the rest of the scientific community. One thing your article might convey is that you don’t know how to do a good literature search, or that you can’t be bothered to consult the existing literature.
Usually, that’s not the main conclusion you want people to draw from your article. But authorial intent only gets you so far.
Now, from the point of view of what the scientific community collectively knows, instances in which a research group blows off the literature search and unintentionally replicates a finding that has already been reported might actually be useful — they offer truly independent replications of earlier findings, with the replicating group unbiased by what the earlier group reported, since they never saw those reports. When independent groups find the same thing, it suggests that the results are pretty robust, which is a nice feature in scientific knowledge.
And, given that only first place counts in a priority race, it seems unlikely that many research groups would have a good reason to try to replicate existing results that they know about. As helpful as replication may be from the point of view of establishing what the scientific community knows, there’s no career reward for it. In other words, the community might get something positive out of the efforts of groups who fail to do thorough literature searches — it’s just that these groups, and the earlier groups whose results they were unaware of, won’t necessarily get the career rewards they expected.
Should you do the research, publish the findings, and only after the fact discover that other scientists had been there first (and published about it), the sensible thing to do is to acknowledge those others late rather than not at all. Saying “I should have known about these earlier publications” is a classier move than denying you goofed by not finding them and acknowledging them in the first place. Of course, if you feel that the earlier references report substantially different results, you need to make the case that there are salient differences between that earlier work and your later work. But acknowledging a mistake, cleaning it up, and moving on is crucial as far as scientific life skills go.
To be fair, in this particular case worries beyond the failure to properly cite prior work have been raised. Again, from The Scientist NewsBlog:
Lawrence wrote in a letter to Cell that the paper was “seriously flawed both scientifically and ethically and in my opinion amounts to a theft of our intellectual property (especially the results and conclusions of our prior paper, Lawrence et al., 2004).” Lawrence’s letter was not published in Cell, but he sent it to The Scientist. At least four other researchers submitted letters independently – some also obtained by The Scientist – to the journal last July. Some of these also claimed that the Axelrod group’s science in constructing a model for PCP was subpar.
“I hope you will agree with me that (i) this paper is a disaster for the field (it will set the community back by several years) and (ii) it is not good for the journal either,” wrote Marek Mlodzik, chair of developmental and regenerative biology at Mount Sinai School of Medicine, in a letter to the editor of Cell, Emilie Marcus.
Mlodzik said that the Axelrod paper completely ignores some of his own previous research on PCP; specifically, a 2005 paper that proposed a similar model for PCP. “They should have cited it because of the model,” Mlodzik told The Scientist.
Mlodzik also takes issue with the science in the Cell paper, citing in his letter to the journal a couple examples where “the authors use wrong data or conceptually flawed experiments to give false credibility to their model.”
In addition to complaints from scientists who published earlier results on PCP, we have worries that the data in the paper are “wrong”, that the experiments discussed are “conceptually flawed”, and that the scientific reasoning used to construct or support the model is “subpar”. These may all be reasonable criticisms, or they may not. Indeed, they may be the sort of criticisms that scientists working on a particular problem could raise about other papers that passed through peer review and included meticulous citations of earlier work on the problem. Pre-publication peer review is not the end of the scientific community’s organized skepticism. Rather, published articles are fair game for critique — including critique of reported data, of the conceptual underpinnings of experimental design, of the logic behind the model offered to explain the results, and so forth. Because judgments of credibility start out feeling like a very individual matter, the scientific community needs to enter into discussions of published results together, helping its members establish what is persuasive to others within that community.
And, fair or not, scientists are likely to find more persuasive the testimony of those they take to be honest and meticulous. Scientists who can’t even do a good literature search may have other, as yet unexposed, gaps in their practice that render their testimony less credible.
_____
*Indeed, one should also cite the ideas and results of others that have been communicated by other routes. Publications leave a more obvious paper trail, so that’s where most attention is focused, but proper credit for intellectual contributions ought to be accorded whenever possible.