The Mismeasurement of Science



A friend, Ian, emailed me an opinion paper that laments the state of scientific research and the effect this has had on science itself. In this paper, Peter A. Lawrence, a Professor of Zoology at the University of Cambridge, argues that modern science, particularly biomedicine, is being damaged by attempts to measure the quality and quantity of the research produced by individual scientists. Worse, as this system has careened out of control, it has given rise to a new and more damaging trend: ranking scientists themselves based on the number of citations their papers garner. Of course, this has led to rampant citation bartering among scientists as they seek to improve their ratings.

All scientific journals have impact factors, and the larger this number, the greater the journal's perceived importance in the field. A journal's impact factor is computed from the number of times its average paper is cited within two years of publication, so it is not a measure of the quality of any particular paper; rather, it is a crude assessment of the journal in which a paper happens to appear.
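To make that arithmetic concrete, here is a minimal sketch of the two-year calculation described above; the journal and the citation and paper counts are hypothetical, chosen purely for illustration.

def impact_factor(citations_this_year, papers_prior_two_years):
    # Citations received this year by papers published in the previous two
    # years, divided by the number of papers published in those two years.
    return citations_this_year / papers_prior_two_years

# A hypothetical journal: 400 papers over the previous two years drew 1,200 citations this year.
print(impact_factor(1200, 400))  # 3.0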

There are a number of reasons that impact factors are not an accurate measure of the quality of an individual paper. For example, a particular paper might be proven wrong, after having wasted the time and effort of hundreds of scientists, not to mention their precious funds, but it will still look good on a CV solely because of the journal's impact factor. Further, it often takes longer than two years to appreciate truly imaginative and ground-breaking research. Additionally, whether a particular paper is cited depends more upon that paper's visibility than upon its actual content or quality.

These known problems with the impact factor system haven't stopped the scientific community from developing a similar system for comparing scientists themselves. The so-called H-index combines the number of papers a scientist has written with the number of citations they receive: a scientist has an H-index of h if h of his or her papers have each been cited at least h times, and the higher one's H-index, the better a scientist he or she is presumed to be.
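For concreteness, here is a minimal sketch of that calculation; the citation counts are made up for illustration.

def h_index(citations):
    # The largest h such that h papers have each been cited at least h times.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers each have at least 4 citations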

Predictably, this has led to a variety of unethical behaviors designed to improve the impact ratings of published papers and to inflate one's H-index, behaviors that make life more difficult for those scientists who actually behave in a principled manner.

First, it has become crucial to get one's papers into high-impact-factor journals, since doing so vastly improves one's chances of actually getting a job. In fact, two or three highly placed papers can mean the difference between tenure and unemployment. As a result, instead of solving interesting scientific challenges, many scientists now focus their efforts on the process of publication itself: getting their papers past the editors and reviewers of high-impact journals.

Second, scientists are changing their research strategies to follow existing fashions, switching topics and research organisms and making links (real or imagined) between their work and medicine. By doing so, they ensure that the resulting papers will have a wider pool of qualified referees and will collect more citations once published.

Third, because more papers make one look more productive than fewer papers do, many scientists divide up their findings into the smallest publishable niblets possible. Compounding this, most top journals provide little space for each paper, so "a typical Nature paper now has the density of a black hole". Worse, many authors ignore or hide results that do not fit the story being told in the paper, because doing so makes the paper less complicated and thus more appealing.

Fourth, there is the human factor: scientists are forming larger groups than ever before for the purpose of paper authorship. Thanks to the resulting lack of individualized attention, many young scientists are treated more as technicians than as colleagues, and are thus being set up for failure. Even those who manage to get their PhDs may be so disillusioned with the scientific process that they leave the field forever -- thereby never becoming future competitors for the group leader, their own advisor.

Fifth, established scientists are rewarded for devoting outrageous amounts of time to traveling to meetings, where journal editors are often found, so they network instead of staying in the lab and getting more work done.

Last but certainly not least, science is becoming a more ruthlessly self-selecting field in which those who are less aggressive and less self-aggrandizing are less likely to receive recognition and rewards for their work. This is especially applicable to women in the sciences: despite a huge influx of women into biomedical research, there has been little, if any, change in their numbers at the top of their fields. As a result, those who are less pushy are more likely to leave science altogether because they correctly perceive that they are being discriminated against in the workplace.

So what does Lawrence propose to remedy this situation? In short, he says it is time for the pendulum of power to swing back in favor of the bench scientist. To do this, hiring committees must remember that they are hiring people rather than numbers, that candidates possess a mix of important abilities and qualities, and that originality is the most important of all. Unfortunately, originality in research cannot be measured by impact factors or by previous citations; instead, it requires that the hiring committee spend time and effort learning more about each candidate.

Second, Lawrence proposes a scientific code of ethics that can be enforced to establish a standard of professional behavior, beginning with a discussion about what, exactly, justifies authorship. Additionally, the process of reviewing manuscripts must be reassessed so as to protect authors and their manuscripts from unscrupulous referees who "murder papers" for personal gain, who take advantage of privileged information, or who share the contents of a paper with others. Lawrence also proposes that, for example, large granting agencies set up an Ombudsman to whom scientists can appeal for redress when they feel they have been abused by reviewers or by a journal.

Hopefully, by implementing a few changes to the way that science is done, we will improve the quality and originality of the research itself while also creating a world that is conducive to attracting, nurturing and retaining science's best and brightest practitioners.

This paper was published in the latest issue of Current Biology.

Source

"The Mismeasurement of Science" by Peter A. Lawrence. Current Biology 17(15):R583-R585.


Lawrence argues against evaluating the productivity of scientists from the "H factor," saying that it (among other things) leads to scientists "dividing up their findings into the smallest publishable niblets possible" (in order to maximize the number of papers published from the work). That, however, is not correct; to the contrary, the whole point of the H-factor is that it emphasizes the number of papers that are actually cited in the literature. Unless a paper is worth being cited repeatedly, it's irrelevant to the H-factor. To some extent, the invention of the H factor is an attempt to prevent the strategy of inflating one's measured impact by publishing multiple worthless papers. See wikipedia
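To illustrate this point with made-up citation counts, using the h_index sketch from the post above:

h_index([25, 8, 5, 4])             # 4
h_index([25, 8, 5, 4] + [0] * 10)  # still 4: ten uncited papers add nothing

Splitting work into fragments only raises the index if each fragment goes on to be cited in its own right.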

I'm a PhD student in computer science and I'm dangerously close to giving up on the idea of an academic research position in the field due to these concerns. Frankly, I think I'd be better off trying to take my ideas to the commercial world and get funding for them, since that is effectively what I would be doing if I were to get into academia.

That would be on top of having to produce reams of papers. If papers in the ACM Digital Library from the last 5 years are any indication, I don't need to come up with much of anything; I just have to figure out how to say the same thing slightly differently. And don't dare try to do anything revolutionary. (Frankly, most of the interesting revolutions in computing are coming from industry.)

Then again, maybe I'm just disgruntled since I'm trying to get my dissertation written by the end of the year and my wife wants me to have some income soon.

I find this particularly interesting because it is essentially the exact same set of complaints I've seen echoed on several physics-related blogs of late. See this discussion of the journal system on Not Even Wrong, jumping off of a post about a large-scale plagiarism scandal. Or see the comments of this post on Backreaction, which discuss some recent unusual publication patterns centered on a mathematically interesting but probably not physically significant construct called "Unparticles".

I am an assistant professor in a top-10 science department in my field. I think it is admirable, but naive, to think that departments look for "originality". In fact, what academic departments really seek is "impact". If you discover something interesting but leave the follow-up work to a different lab, then you will have little impact. The other lab will get much of the credit and the lucrative position and funding. That is simply the way the system works, and perhaps the way it should work. Those of us who like to focus on science (as opposed to marketing our science) must realize that the self-promoters often can get away with less originality simply because they are more aggressive. If we want the system to change, first get into positions of power (e.g., by becoming a dean or department chair) and then choose originality over impact when offered the choice.

By Anonymous professor (not verified) on 03 Oct 2007

Predictably, this has led to a variety of unethical behaviors...

I'd note that nowhere in the article does he *demonstrate* that these behaviors are increasing at all, let alone that trends like increased group size are driven by impact metrics. And my impression is that Least Publishable Units have been steadily increasing, for all the complaining in recent decades about "salami science". (And as Geoffrey Landis says, addressing that perceived problem is the whole freaking *point* of the H-index.)

To me, impact factors are a useful way to get a rough estimate of the significance of a journal or paper. If scientists use them in any other way, I'd say that suggests a much bigger problem in their thought processes, one that goes way beyond the worries raised here.

This is totally depressing.

I wonder how (and whether) the trend toward open-source publishing will change this...although I think it'll be some time before the high-impact-factor journals really head in that direction.