One of my students raised a really good question in class today, a question to which I do not know the answer -- but maybe you do.
We were discussing some of the Very Bad Experiments* that prompted current thinking** about what it is and is not ethically permissible to do with human subjects of scientific research. We had noted that institutions like our university have an Institutional Review Board (IRB) that must approve your protocol before you can conduct research with human subjects. At this point, my student asked:
Are there cases where researchers send protocols to the IRB that are clearly unethical -- not just a little unethical in the gray areas, but way on the satisfying-scientific-curiosity-with-no-benefit-to-society-and-great-risk-of-harm-to-the-subjects side of the line?
And if so, does anything happen to them beyond their protocols being rejected? In other words, do they get to pine for the satisfaction of their scientific curiosity without some kind of intervention in which members of their scientific community sit them down and explain how wrong their clearly unethical protocols are?
Please, don't violate any confidentiality oath in answering these questions. But, do you have experience of any scientists sufficiently disconnected from the current norms on research with human subjects that they have openly revealed their longings for the "good old days"?
*For example, the Nazi medical experiments and the Tuskegee syphilis experiment.
**Current thinking, for our purposes, follows a trajectory from the Nuremberg Code to the Declaration of Helsinki to the Belmont Report.
I thought the real question would be "Do evil scientists actually bother to submit their evil protocols to ethics committees in the first place?"
Anon, not submitting the protocol because you recognize it's evil enough that it won't get approved shows a certain level of awareness of the norms of the community. I think my student was really asking if there might be scientists who are somewhat clueless about these norms and thus aren't making any effort to conceal their inclination to see humans primarily as useful research material.
Great question, Janet. The IRB I serve on hasn't gotten any egregiously unethical proposals in the short time I've served on it, although there are the rare ones that raise obvious concerns. I think that because practically no one works in a vacuum, it's difficult for really inappropriate proposals to make it as far as the IRB... I suspect that they would almost always get weeded out before they reach us.
As for the second question, I don't suspect that most researchers wax nostalgic about the days before the Belmont Report, but they do see us as an often-annoying hurdle. One researcher who is an acquaintance of mine (and who does not have to go through my IRB) certainly sounded as if he thought our thoroughness was overdone, and I'm sure that he was being diplomatic in his criticism. What seems completely reasonable to the IRB may just seem like extra work to the researchers.
Sometimes evil is banal. Sometimes it's obvious. Sometimes it's not obvious. If a researcher knows enough to submit a protocol to an IRB, then s/he at least is aware of the need to not be obvious about any evil being proposed.
Sounds like a great undergraduate research experiment. Give your kiddos the necessary grant application forms, have them draft the most heinous protocols they can come up with, and submit away. Make the due date April 1, just to cover yourself.
By the time you get to the point in your career that you are actually writing up a proposal to IRB (i.e., you are your own PI most likely), you have already been immersed in your field of research for years and probably have a very good feel for what is and what is not acceptable.
Strange things can happen when the immersion is not enough because the standards are changing too fast. I think the standards for human research are not changing nearly as fast as standards for animal research.
So, I expect that there are many more "surprises" (and subsequent wars between PIs and IACUCs) in animal research proposals. And in such cases it is not an "oops" kind of deal, as in "sorry, I did not know it was not acceptable." It is more likely to be "who the heck are you to tell me what is acceptable when, unlike you paper-pushers, I actually know what is OK or not OK for this particular species" (and the PI is probably right about it).
Janet, wouldn't failure to submit to an IRB be an example of a scientist who is completely out of touch with the prevailing standards of ethical scientific care?
Now, assuming somebody found out about this, wouldn't that also constitute a situation similar to what your student was querying?
Does somebody sit them down and explain?
I've not been around very long and haven't heard anything about either situation occurring, or any responses to it.
PS: Another good example for your classes in future might be the "unfortunate experiment" with regard to cervical cancer in New Zealand. Apart from other factors, it also seems to be an example of someone who stepped or drifted outside the orthodox view but was not restrained by his colleagues.
This: "By the time you get to the point in your career that you are actually writing up a proposal to IRB"... is not true! I'm a doctoral student (still in coursework) and I'm submitting my third on Tuesday. I think many IRBs require departmental representative sign-offs first. The department reps are well indoctrinated and will catch most problems. Doctoral students have to have a faculty member as PI. I would think it would be more established researchers who would be more likely to try to submit something that didn't meet the criteria of the board -- but still not blatantly unethical.
I'd have to agree with Christina. In our neck of the woods IRB submissions are often written up by experienced research assistants, PhD students and postdocs.
Upon thinking about my question a bit more, one area where this seems possible to occur would be psychology/sociology.
It seems possible for a researcher to lack empathy, or not to share the emotional responses of the average person. Such a researcher could then propose an experiment without any instinctive sense of how it might be psychologically and/or emotionally harmful in the way most others would automatically view it.
I'm not sure I understand why you think that it's more likely to occur in PSYC/SOSC? Are psychologists and sociologists less likely to have empathy?
Unethical activity on the part of the researcher is more likely to occur after the approval process by the IRB or the IACUC. In those cases, someone else (a resident, doctoral or postdoctoral fellow, or a peer) must observe and report the culprit. Unfortunately, there are cases where observers do not report unethical activity, for various reasons -- none of which is justified, but some of which can be understood.
The reason I think it may be more likely in psychology or sociology is because in my opinion mental and emotional responses are much more difficult to predict than physiological ones. Here's a simple example: when you cut someone, they bleed, no matter whose body it is. Some may bleed more, some less, but the outcome is pretty predictable. But if you scream in ten people's faces, it's likely they will all have different responses: some may cower and become frightened, some might become angry and yell back, and some might be simply annoyed and apathetic.
What I'm trying to say is that a person who is not empathetic may not understand how some aspect of an experiment could cause mental harm/trauma to subjects if s/he does not experience those emotions him/herself. This person may not fully understand the potential consequences.
One example of this could be the infamous 1971 Stanford prison experiment.
I serve on an IRB at a research entity. I have never seen an obvious "evil" intent submission in the many years I have sat on the committee. I have seen submissions that I thought bordered on irresponsible or unnecessary, but those were rejected. I don't know if the researchers were held accountable.
The submissions I worry about are not from our folks, but are from other institutions with IRBs who rubber stamp materials. When we share responsibility and review the materials afterwards, it is fairly obvious that the materials were not reviewed very closely.
I also agree with the poster above who noted that unethical behavior would mostly follow after the material had been reviewed and approved. Most IRBs do not have audit resources or mandates.