Good ethics, good science, and the question of whether some knowledge is poisoned.

In comments to a pair of posts about research with animals, some issues germane to the subject of research with human subjects have come up. In particular, they raise the question of whether scientists ought to use results from ethically flawed experiments. And that question, in turn, raises the question of the extent to which ethically flawed research can still be scientifically sound.

Here, I want to dig into the first question, but I'll only make a first pass at the second.

First, here are the comments that precipitated this post.

On my first post on the lab group lock-out from the animal care facility at Laurentian University, Roman Werpachowski responds to my suggestion that halting suboptimal research with animals (rather than figuring out a way to make it better while continuing it) might amount to animal discomfort for nothing (i.e., with no usable results from the experiments):

Surely, there may be ironies in shutting down the research in the interests of the animals. Given that the research projects were already in progress, the animals likely had already experienced some discomfort.

Great. Given that I already stole your car, may I keep using it longer?

Now, however, since the experiments were stopped midway through, some of this may be discomfort that doesn't bring any new knowledge. Depending on the ways in which the animals were used, it's even possible that the animals might end up having to be destroyed -- which, arguably, could be a bigger harm to the animals than the continuation of the experiments.

Why do we say "destroyed" and not "killed"? Why hide the ugly reality?

This is not to say that the protocols might not have needed revision -- just that there might have been better ways to ride out the period during which they were being revised if the interests of the animals were really what was at stake.

And then we start another experiment, and the argument is carried on further...

While I was getting in touch with my empathy for the students involved (and wanting to find a way to rescue something usable from their doomed experiments), Roman suggests that animal welfare is more valuable here than the time or effort the students would lose. Indeed, perhaps animal welfare is of greater value than the knowledge that could be gained by harming the animals in the particular ways the ACC felt these animals were being harmed.

More recently, in comments on my musings about our attitudes toward use of animals for food compared to use of animals for scientific research, Nazi research came up. David Harmon commented:

The basic justification for animal experimentation is benefit for humanity. To forbid animal experimentation altogether would be to declare that *no* possible benefit to humanity would be worth harming any other species. This is contradictory to the practice of most of humanity, who blithely continue eating meat (when/if they can get it). At the same time, consider that various data gathered by the Nazis, via experiments on prisoners, was destroyed unpublished by the scientific council assembled for the question. That excludes the other extreme -- that is, there are "harms" that are flatly not justified by scientific progress.

[Bold emphasis added. Also, note that I haven't been able to nail down an authoritative reference on the claim that Nazi data was destroyed, but it doesn't sound implausible. If anyone can provide a reference on this, it would be appreciated.]

So here's the burning question: In a case where scientific research has been conducted but ethical standards have been violated in the conduct of that research, what should we do with the results of the research?

We're assuming, for the sake of argument, that the ethical standards that have been violated aren't standards that are relevant to whether the scientific knowledge produced is accurate or reliable. It's clearly a bad idea to rely on the results of research that includes fabrication or falsification; these violations undermine the reliability of the scientific knowledge good research is supposed to produce. But what if the execution of the experiments and the collection and analysis of the data are clean, but animals or humans are harmed? Is any knowledge produced in such experiments off-limits?

Making humans and animals suffer (in violation of regulations on scientific research, especially) is bad. Generally, scientists would say the production of more knowledge is good. Does the bad here outweigh the good? Is the good that comes from the knowledge a way to mitigate the bad? (Could the use of this knowledge for good be, perhaps, a way to redeem the suffering, to make sure it wasn't in vain?) Or is the bad here a line that must not be crossed, regardless of how valuable the knowledge produced might be?

Part of the worry, I think, is that using results from unethical experiments might encourage more unethical experiments. Look, here are some really good scientific results (even though they bent the rules about how to treat the human and/or animal subjects, but still ...). If your results are recognized as being good, and if other scientists use them, you're still a member-in-good-standing of the scientific club. It would be better to live within the ethical standards for use of animal and/or human subjects, but the science is still sound. And perhaps, you got your reliable results a bit quicker, or with less hassle.

So ... why not go that route again?

Should it come to the point where scientists are routinely working around the ethical standards rather than living up to them, we might well ask what the point of such standards could be. A rule that no one takes seriously isn't much of a rule. It doesn't help much if people who aren't doing scientific research take the rule seriously while those who are doing scientific research ignore it. In this case, it can seem to the researchers that the rule is imposed on them, unfairly, by people who don't understand what they have to do to get good scientific knowledge in a timely manner, or how important that knowledge is.

To prevent science from sliding into this state, some people argue that the results of ethically flawed research are poison. The scientific community can't touch them, else the rules will be taken lightly and scientists will make calculated decisions to bend the ethical rules when it speeds knowledge production.

Another reason for concern, of course, is that if the public finds out that scientists have been bending the ethical rules in their research, the public might well decide that scientific research ought not to be funded with public monies. Enforcement of the rules is a way to stay on the public's good side. Behaving as if rules are for little people ... not so much.

Discussing the Council of Biology Editors' position on how journal editors should deal with submissions that are scientifically sound but ethically lacking (don't publish them), Marcia Angell [1] gives a nice explanation of the rationale:

The policy of rejecting for publication reports of unethical research has three justifications. First, it is likely to deter unethical work. Because publication is central to the reward system in scientific research, very few researchers would knowingly jeopardize their chances for publication. On the other hand, any other policy would lead to more unethical work, because cutting ethical corners would often enable researchers to get clearer answers faster and thus give them a competitive edge. Second, refusing to publish reports of unethical work protects from erosion the principle of the primacy of the research subject. [They have in mind human subjects here.] This is true even when the violations are small. Small lapses, if permitted, invite bigger ones. Third, refusal to publish unethical work serves notice to society at large that even scientists do not consider scientific knowledge the ultimate good in society.

Some questions remain. What about scientists working in contexts where the main reward isn't publication and scientific recognition? (For example, what if you're working for a private company, or an organization bent on taking over the world?) Perhaps in research with human subjects researchers ought -- and do -- recognize the primacy of the research subject, but this isn't quite the relationship between researcher and subject in research with animals -- do the same rules on publication apply? And, in fact, are there scientists who do consider scientific knowledge the ultimate good in society? Could such scientists ever see ethical standards for use of animals and human subjects in experimentation as anything but an unwelcome imposition from without?

The last question is one that sticks with me, in part because I wonder about people's ability to uphold standards they are not themselves committed to. Are they really living up to the standards, or just doing a really good job of looking like they are? And if it's the latter ... how do we know these people aren't just doing a good job looking like they're doing their experiments rather than fabricating and falsifying to their heart's content?

In other words, what reason do we have to believe that scientists who treat animals and/or humans in ways we would count as unethical aren't doing other unethical things that undermine the quality of the scientific knowledge they're producing?
___
[1] Marcia Angell (1992) "Editorial Responsibility: Protecting Human Rights by Restricting Publication of Unethical Research," in George J. Annas and Michael A. Grodin (eds.), The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation. Oxford, 276-285; p. 282.

Also, note that I haven't been able to nail down an authoritative reference on the claim that Nazi data was destroyed, but it doesn't sound implausible.

AFAIR, at least some of the data was used later. Research on hypothermia, for example. But I don't have any reference.

OTOH, I know a professor who was asked to review a PhD thesis whose author cited Mengele's results on something.

By Roman Werpachowski (not verified) on 22 Mar 2006 #permalink

My POV would be: good ethics and good science are completely orthogonal to each other. It's perfectly possible to do good science whilst having no ethics whatsoever (the Nazi experiments being the classic example), and it's perfectly possible to behave ethically whilst having no grasp of basic scientific methodology.

Both, however, are necessary conditions for a healthy democracy, so it's still important for scientists to behave ethically as well as scientifically.

I would disagree that dubious ethical behaviour directly casts doubt on the research. It might suggest underlying biases that have caused a person to discard what are usually considered to be universal moral principles - for example in the case of the Nazi pamphlet "100 scientists against Einstein". But the one certainly doesn't imply the other.

I have come to believe that by slipping out of the bounds of our comfy disciplines we can find very useful information that can aid in answering questions like the one you pose here. As an example, consider that in the criminal justice system thousands of lawyers, politicians and judges have put their minds to a directly parallel problem: how to deal with illegally obtained evidence. While the parallels are not perfect (since the criminal law and science operate on different underlying premises), the result is a somewhat cogent set of rules that could inform your discussion here.

The overarching view of the criminal justice system has been that any encouragement of misbehaviour will only result in more misbehaviour and as a result tainted evidence is almost always excluded from discourse. For a quick overview check out "the exclusionary rule" at Wikipedia.org as it discusses some of the concepts that should be considered in your discussion including the concepts of "inevitable discovery", "attenuation", and "the independent source exception".

In the students' case, the attenuation principle can be read to indicate that if the misconduct was unintentional or an unexpected byproduct of the research, then it may be considered in a different light than if the misbehaviour was reckless or easily anticipated. The independent source exception would apply in that the data derived from the work could not be used as primary material but could be used as secondary material supporting ethically obtained results. This would bolster science without giving the misbehaving scientists any rewards for their misbehaviour and would have the benefit of not ignoring the sacrifice that was made by the experimental subjects.

Two questions spring to mind: If a piece of research that is in itself ethically conducted relies on previous unethically obtained results, is it itself tainted? And, if a piece of research was considered ethical at the time and place it was conducted, should it still be useable even if it would not be considered ethical by today's standards?

It is pretty clear that if you take a hard-line stance on both questions, there is probably no research going on today, in any science discipline, that isn't ethically bankrupt. Much of the early research in biology, chemistry and medicine was conducted in ways that range from merely not up to our standards down to hellish nightmares for its subjects. But of course, we still directly or implicitly make use of much of that early data.

Much later research, too, would today likely not be allowed, and yet has a great impact on current research (Milgram's psychological experiment on authority is a good example).

[Somewhat related to Blair's comment] This discussion immediately brought to mind a subject that has been in the news of late: the use of torture to obtain information about an imminent attack.

[And Corkscrew wrote] "I would disagree that dubious ethical behaviour directly casts doubt on the research."

It might not directly cast doubt on the research, but it casts doubt on the judgement of the researcher, and indirectly on the research.

The comparison with torture is probably not apposite - one of the major issues with torture (and, indeed, 'mere' psychological pressure) is that it often leads to factually (rather than morally) tainted data (I believe Umberto Eco, among others, has written a lot on this). While this may be the case with some scientific research, I really doubt it is a major risk factor.

However, to return to the 'real' question here: I would suggest that rather than rejecting the knowledge that is obtained in good but unethical research as morally tainted we should have robust ethical codes and penalize those who breach them. According to this model, if a scientist were to obtain good findings from unethical research the knowledge itself would enter our knowledge base but the scientist who breached the codes would be penalized - through professional ostracization, perhaps, plus imprisonment or fines if appropriate.

Since the knowledge itself would be of value, it would seem foolish to simply reject it unless it were factually flawed; it is the unethical aspect which is the problem, and that is a question of personal culpability.

This is a little off the thread of how to increase ethical standards of modern research, but this discussion stimulated a dormant memory...

I recall reading (sorry, don't have a reference, it was decades ago) about Nazi experiments with hypothermia. I still shudder at the idea of standing around a cold water tank with a stopwatch and clipboard recording time, body temperature and behavioral responses as people succumbed to hypothermia.

Those experiments provided data that keeping the brainstem out of the cold water significantly increased survival time. Essentially all modern life vests provide a high collar that keeps the back of the user's head out of the water even if they become unconscious.

If you wish to reject this information as tainted, then must you avoid using modern life vests? Personally, I choose to honor the sacrifice of the Nazis' "research subjects" and make use of the knowledge for which they gave their lives.

Personally, I choose to honor the sacrifice of the Nazis' "research subjects" and make use of the knowledge for which they gave their lives.

Hmm, not sure I could think of this as a 'sacrifice' made by the Nazis' victims!

I have no idea if this attribution of lifejacket design to the Nazi hypothermia experiments is true; stories abound of scientific developments attributed to the Nazis (it is commonly said, for example, that methadone was developed by Nazi wartime scientists as a terminal cure for junkie soldiers stealing medical heroin from army stores). Most such stories are false.

That having been said, if your story is true, it could stand as a good (if extreme) example of how we can make constructive use of unethically obtained data without endorsing either the research or the researchers. Even if false it makes a nice thought experiment:)

Addendum: Those who don't know the Nazi methadone story can learn from Tom Cruise, who is especially well-informed on this as on many other medical matters. He will tell you, for example, that when it was invented, methadone was originally called Adolphium, in tribute to Adolf Hitler (it's true - the claim that it was in fact given the trading name Dolphium, from the Latin dolor, pain, because it was developed for use as an analgesic in surgery is a pernicious myth).

Ho hum.

"And, in fact, are there scientists who do consider scientific knowledge the ultimate good in society? Could such scientists ever see ethical standards for use of animals and human subjects in experimentation as anything but an unwelcome imposition from without?"

I'd say that answering that question was the real point behind rejecting the Nazi data. Scientists have an old reputation for doing dubious things in pursuit of knowledge, probably since well before Galen's grave-robbing days. The scientists judging the Nazi experiments chose to draw a line in the sand, declaring that the means did *not* justify the end, and as punishment, the Nazis would be denied the only afterlife that science itself is concerned with -- they would lose the "scientific immortality" of their results. Of course, they were drawing that line well into "safe territory" -- any arguments would have been rather subdued just by the associations with the (other) atrocities of the Nazis.

Nowadays, we have more complicated questions, and we're starting to draw lines in the "gray zone". I'd say the important point is that this isn't really a scientific issue. As you point out, ethics (and morality) are external to science. They are not, however, external to humanity, and (so far) all our scientists are also humans. The question is whether scientific data should be rejected due to the "taint" of (morally) unacceptable means used to gain it. But that question isn't about science... it's about morality, and what it means to be a human doing science.

Just as a sidenote: It's really easy to construct "problem boxes" which purport to force such a choice. ("Oh, I bet you wouldn't be so picky if your kid was dying of....") But of course, the real constraint there is that you're trapped in a problem box, where all alternatives are excluded by fiat. ("No no no, suppose the only way you could ever find a cure was by torturing babies ...") In the real world, moral responses include creative solutions, negotiations, and tertium quids (plural?). It's not all about Standing Firm Against Evil.

By David Harmon (not verified) on 24 Mar 2006 #permalink

"And, in fact, are there scientists who do consider scientific knowledge the ultimate good in society? Could such scientists ever see ethical standards for use of animals and human subjects in experimentation as anything but an unwelcome imposition from without? ... I wonder about people's ability to uphold standards they are not themselves committed to. Are they really living up to the standards, or just doing a really good job of looking like they are? And if it's the latter ... how do we know these people aren't just doing a good job looking like they're doing their experiments rather than fabricating and falsifying to their heart's content?"

Such scientists are necessarily committed to doing scientifically valuable research. The ethical principles of upholding truth and not harming living organisms are entirely separable and independent. In addition, I do not see how one would hide ethical violations in one's methods without falsification that would compromise the value of the research -- not consistent with the goal of maximizing scientific knowledge. If there exist scientists who view knowledge as the ultimate good in society, they can be expected to spend their time doing valuable science that they can publish rather than scheming to violate ethical policies and then falsify their results.

By Thiotimoline (not verified) on 11 Aug 2007 #permalink