tags: researchblogging.org, H-index, impact numbers, scientific journals
A friend, Ian, emailed me an opinion paper lamenting the state of scientific research and the effect that current evaluation practices have had on science itself. In this paper, Peter A. Lawrence, a Professor of Zoology at the University of Cambridge, argues that modern science, particularly biomedicine, is being damaged by attempts to measure the quality and quantity of research produced by individual scientists. Worse, as this system careened out of control, it gave rise to a new and more damaging trend: ranking scientists themselves by the number of citations their papers garner. Of course, this has led to rampant citation bartering among scientists as they seek to improve their ratings.
All scientific journals have impact factors, and the larger this number, the greater the journal’s perceived importance in the field. An impact factor is computed from the number of times the average paper in a particular journal is cited within two years of publication, so it is not a measure of the quality of any particular paper; rather, it is a crude assessment of the quality of the journal in which a paper was published.
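For concreteness, here is a minimal sketch of that calculation in Python, using an invented journal and invented numbers: the citations received this year by papers the journal published over the preceding two years, averaged over those papers.

    # A minimal sketch of the two-year impact factor described above.
    # The journal and its numbers are invented for illustration only.

    def impact_factor(citations_this_year, papers_last_two_years):
        """Average citations received this year per paper published in the
        preceding two years -- a journal-level average, not a measure of
        any individual paper."""
        return citations_this_year / papers_last_two_years

    # Hypothetical journal: 240 papers published over the last two years,
    # cited a total of 1,920 times this year.
    print(impact_factor(1920, 240))  # 8.0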
There are a number of reasons why impact factors are not an accurate measure of the quality of an individual paper. For example, a particular paper might be proven wrong, after having wasted the time, effort and precious funds of hundreds of scientists, yet it will still look good on a CV solely because of the journal’s impact factor. Further, it often takes longer than two years to appreciate truly imaginative and ground-breaking research. Additionally, whether a particular paper is cited depends more on that paper’s visibility than on its actual content or quality.
Despite these known problems with the impact factor system, the scientific community has gone on to develop a similar metric for comparing scientists themselves. The so-called H-index combines the number of papers a scientist has written with the number of citations they receive: a scientist has an H-index of h if h of his or her papers have each been cited at least h times, and the higher one’s H-index, the better a scientist he or she supposedly is.
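The calculation itself is simple; here is a minimal sketch in Python of the H-index as defined above, using invented citation counts.

    # A minimal sketch of the H-index: the largest h such that the scientist
    # has h papers with at least h citations each. Citation counts invented.

    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, citations in enumerate(counts, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([42, 17, 9, 6, 5, 3, 1]))  # 5: five papers cited at least 5 times each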
Predictably, this has led to a variety of unethical behaviors designed to improve the impact rating of published papers and to increase one’s H-index, behaviors that make life more difficult for those scientists who actually behave in a principled manner.
First, it has become crucial to get one’s papers into high impact-factor journals, since doing so vastly improves one’s chances of actually getting a job. In fact, two or three highly-placed papers can mean the difference between tenure and unemployment. As a result, instead of solving interesting scientific challenges, many scientists now focus their efforts on the process of publication and on getting their papers past the editors and reviewers of high impact journals.
Second, scientists are changing their research strategies to follow existing fashions by switching topics and research organisms and by making links (real or imagined) between their work and medicine. By doing so, they ensure that the resulting papers will have a larger pool of qualified referees and will receive a greater number of citations.
Third, because more papers make one look more productive than fewer papers do, many scientists divide up their findings into the smallest publishable niblets possible. Compounding this, most top journals provide little space for papers, and thus “a typical Nature paper now has the density of a black hole”. Worse, many authors ignore or hide results that do not fit the story being told in the paper, because doing so makes the paper less complicated and thus more appealing.
Fourth, there is the human factor: scientists are forming larger groups than ever before for the purpose of paper authorships. Due to the resulting lack of individualized attention, many young scientists are treated more as technicians than as colleagues, and are thus being set up for failure. Even those who manage to get their PhDs may be so disillusioned with the scientific process that they leave the field forever, thereby never becoming future competitors of the group leader, their own advisor.
Fifth, established scientists are rewarded for devoting outrageous amounts of time to traveling to meetings, where journal editors are often found, so that they can network instead of staying in the lab and getting more work done.
Last but certainly not least, science is becoming a more ruthlessly self-selecting field in which those who are less aggressive and less self-aggrandizing are also less likely to receive recognition and rewards for their work. This applies especially to women in the sciences: despite a huge influx of women into biomedical research, there has been little, if any, change in the number of women at the top of their fields. As a result, people who are less pushy are more likely to leave science altogether because they correctly perceive that they are being discriminated against in the workplace.
So what does Lawrence propose to remedy this situation? In short, he says that it is time for the pendulum of power to swing back in favor of the bench scientist. To do this, hiring committees must remember that they are hiring people rather than numbers, that candidates possess a mix of important abilities and qualities, and that originality is the most important of all. Unfortunately, originality in research is not something that can be measured by impact factors or by previous citations; instead, it requires that the hiring committee spend time and effort learning more about each candidate.
Second, Lawrence proposes a scientific code of ethics that can be enforced to establish a standard of professional behavior, beginning with a discussion of just what justifies authorship. Additionally, the manuscript review process must be reassessed so as to protect authors from unscrupulous referees who “murder papers” for personal gain, who take advantage of privileged information, or who share the contents of a paper with others. Lawrence also proposes, for example, that an Ombudsman might be set up by large granting agencies to whom scientists can appeal for redress when they feel they have been abused by reviewers or by a journal.
Hopefully, by implementing a few changes to the way that science is done, we will improve the quality and originality of the research itself while also creating a world that is conducive to attracting, nurturing and retaining its best and brightest practitioners.
This paper was published in the latest issue of Current Biology.
Source
“The Mismeasurement of Science” by Peter A. Lawrence. Current Biology 17(15):R583-R585.