Ethical research

Sean Cutler is an assistant professor of plant cell biology at the University of California, Riverside and the corresponding author of a paper in Science published online at the end of April. Beyond its scientific content, this paper is interesting because of the long list of authors, and the way they ended up as coauthors on this work. As described by John Tierney, Dr. Cutler ... knew that the rush to be first in this area had previously led to some dubious publications (including papers that were subsequently retracted). So he took the unusual approach of identifying his rivals (by…
Earlier this week, I found out about a pair of new case studies being released by The Global Campaign for Microbicides. These cases examine why a pair of pre-exposure prophylaxis (PrEP) clinical trials looking at the effectiveness of microbicides and antiretrovirals in preventing HIV infection were halted. Here are some details: Between August 2004 and February 2005 the HIV prevention world was rocked by the suspension and cancellation of PrEP effectiveness trials in Cambodia and Cameroon. To the considerable surprise of researchers, advocates, and donors, the trials became embroiled in…
As we continue our look at ways that attempted dialogues about the use of animals in research run off the rails, let's take up one more kind of substantial disagreement about the facts. Today's featured impediment: Disagreement about whether animals used in research experience discomfort, distress, pain, or torture. This disagreement at least points to a patch of common ground shared by the people disagreeing: that it would be a bad thing for animals to suffer. If one party to the discussion accepts the premise that animal suffering is of no consequence, that party won't waste time haggling…
One arena in which members of the public seem to understand their interest in good and unbiased scientific research is drug testing. Yet a significant portion of the research on new drugs and their use in treating patients is funded by drug manufacturers -- parties that have an interest in more than just generating objective results to scientific questions. Given how much money goes to fund scientific research in which the public has a profound interest, how can we tell which reports of scientific research findings are biased? This is the question taken up by Bruce M. Psaty in a Commentary…
The Independent reports that drug giant Pfizer has agreed to pay a $75 million settlement nine years after Nigerian parents whose children died in a drug trial brought legal action against the company. It's the details of that drug trial that are of interest here: In 1996, the company needed a human trial for what it hoped would be a pharmaceutical "blockbuster", a broad spectrum antibiotic that could be taken in tablet form. The US-based company sent a team of its doctors into the Nigerian slum city of Kano in the midst of an appalling meningitis epidemic to perform what it calls a "…
This week at Bloggingheads.tv, PalMD and I have a chat about science, ethics, and alternative medicine. Plus, we have a little disagreement about what constitutes paternalism. Go watch!
An article in the Wall Street Journal notes the collision between researchers' interests in personal safety and the public's right to know how its money is being spent -- specifically, when that money funds research that involves animals: The University of California was sued last summer by the Physicians Committee for Responsible Medicine, a group that advocates eliminating the use of animals in research, to obtain records involving experiments. In its complaint, the group said "only through access to the records...can it be determined how public funds are being spent and how animals are…
One of my correspondents told me about a situation that raised some interesting questions about both proper attribution of authorship in scientific papers and ethical interactions between mentor and mentee in a scientific training relationship. With my correspondent's permission, I'm sharing the case with you. A graduate student, in chatting with a colleague in another lab, happened upon an idea for an experimental side project to do with that colleague. While the side project fell well outside the research agenda of this graduate student's research group, he first asked his advisor whether…
Do scientists see themselves, like Isaac Newton, building new knowledge by standing on the shoulders of giants? Or are they most interested in securing their own position in the scientific conversation by stepping on the feet, backs, and heads of other scientists in their community? Indeed, are some of them willfully ignorant about the extent to which their knowledge is built on someone else's foundations? That's a question raised in a post from November 25, 2008 on The Scientist NewsBlog. The post examines objections raised by a number of scientists to a recent article in the journal Cell…
Over at DrugMonkey, PhysioProf notes a recent retraction of an article from the Journal of Neuroscience. What's interesting about this case is that the authors retract the whole article without any explanation for the retraction. As PhysioProf writes: There is absolutely no mention of why the paper is being retracted. People who have relied on the retracted manuscript to develop their own research conceptually and/or methodologically have been given no guidance whatsoever on what aspects of the manuscript are considered unreliable, and/or why. So, asks PhysioProf, have these authors…
Charles B. Nemeroff, M.D., Ph.D., is a psychiatrist at Emory University alleged by congressional investigators to have failed to report a third of the $2.8 million (or more) he received in consulting fees from pharmaceutical companies whose drugs he was studying. Why would congressional investigators care? For one thing, during the period of time when Nemeroff received these consulting fees, he also received $3.9 million from NIH to study the efficacy of five GlaxoSmithKline drugs in the treatment of depression. When the government ponies up money for scientific research, it has an interest…
Let's wrap up our discussion on the Martinson et al. paper, "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors". [1] You'll recall that the research in this paper examined three hypotheses about academic scientists: Hypothesis 1: The greater the perceived distributive injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (51) Hypothesis 2: The greater the perceived procedural injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (52) Hypothesis 3: Perceptions of injustice are more…
Last week, we started digging into a paper by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries, "Scientists' Perceptions of Organizational Justice and Self-Reported Misbehaviors". [1] The study reported in the paper was aimed at exploring the connections between academic scientists' perceptions of injustice (both distributive and procedural) and those scientists engaging in scientific misbehavior. In particular, the researchers were interested in whether differences would emerge between scientists with fragile social identities within the tribe of academic…
Regular readers know that I frequently blog about cases of scientific misconduct or misbehavior. A lot of times, discussions about problematic scientific behavior are framed in terms of interactions between individual scientists -- and in particular, of what an individual scientist thinks she does or does not owe another individual scientist in terms of honesty and fairness. In fact, the scientists in the situations we discuss might also conceive of themselves as responding not to other individuals so much as to "the system". Unlike a flesh and blood colleague, "the system" is faceless,…
In the 12 September, 2008 issue of Science, there is a brief article titled "Do We Need 'Synthetic Bioethics'?" [1]. The authors, Hastings Center ethicists Erik Parens, Josephine Johnston, and Jacob Moses, answer: no. Parens et al. note the proliferation of subdisciplines of bioethics: gen-ethics (focused on ethical issues around the Human Genome Project), neuro-ethics, nano-ethics, and soon, potentially, synthetic bioethics (to grapple with ethical issues raised by synthetic biology). Emerging areas of scientific research raise new technical and theoretical questions. To the extent that…
In the last post, we started looking at the results of a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1] in which they deployed focus groups to find out what issues in research ethics scientists themselves find most difficult and worrisome. That post focused on two categories the scientists being studied identified as fraught with difficulty, the meaning of data and the rules of science. In this post, we'll focus on the other two categories where scientists expressed concerns, life with colleagues and the pressures of production in science. We'll also look…
In the U.S., the federal agencies that fund scientific research usually discuss scientific misconduct in terms of the big three of fabrication, falsification, and plagiarism (FFP). These three are the "high crimes" against science, so far over the line as to be shocking to one's scientific sensibilities. But there are lots of less extreme ways to cross the line that are still -- by scientists' own lights -- harmful to science. Those "normal misbehaviors" emerge in a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1]: We found that while researchers were aware…
Back in June, I wrote a post examining the Hellinga retractions. That post, which drew upon the Chemical & Engineering News article by Celia Henry Arnaud (May 5, 2008) [1], focused on the ways scientists engage with each other's work in the published literature, and how they engage with each other more directly in trying to build on this published work. This kind of engagement is where you're most likely to see one group of scientists reproduce the results of another -- or to see their attempts to reproduce these results fail. Given that reproducibility of results is part of what…
In a post last week, I mentioned a set of standards put forward by Carol Henry (a consultant and former vice president for industry performance programs at the American Chemistry Council), who says they would improve the credibility of industry-funded research. But why does industry-funded research have a credibility problem in the first place? Aren't industry scientists (or academic scientists whose research is supported by money from industry) first and foremost scientists, committed to the project of building accurate and reliable knowledge about the world? As scientists, aren't they…
In the August 25, 2008 issue of Chemical & Engineering News, there's an interview with Carol Henry (behind a paywall). Henry is a consultant who used to be vice president for industry performance programs at the American Chemistry Council (ACC). In the course of the interview, Henry laid out a set of standards for doing research that she thinks all scientists should adopt. (Indeed, these are the standards that guided Henry in managing research programs for the California Environmental Protection Agency, the U.S. Department of Energy, the American Petroleum Institute, and ACC.) Here are…