Institutional review boards overreaching?

Institutional review boards (IRBs) are the cornerstone of the protection of human subjects in modern biomedical research. They were mandated by the federal government in the 1970s in the wake of the research abuses of the 20th century, in particular the horrors of the infamous Nazi biomedical experiments during World War II, documented during the Nuremberg trials, and the Tuskegee syphilis experiment, in which black men with syphilis in rural Alabama were followed without treatment in order to study the natural course of the disease, a study that lasted into the early 1970s. In response to these abuses, and based on the work of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1974-1978), the Department of Health and Human Services (HHS) published "Ethical Principles and Guidelines for the Protection of Human Subjects of Research" (otherwise known as the Belmont Report) in 1979. Based on the Belmont Report, the Common Rule was codified in 1991 and serves as the basis for all federal rules governing human subjects research. All federally funded research must abide by the Common Rule, and many states have laws requiring that even research not funded by the federal or state government abide by it as well. The Common Rule regulates the makeup and function of the IRBs that oversee human subjects research.

No one would argue that we should go back to the bad old days, before the Common Rule, when the rules governing human research were vague to nonexistent, and human subjects relied on the ethics of individual researchers, which would, quite naturally, vary from researcher to researcher. IRBs, as flawed as they sometimes are, represent the most potent patient advocacy and protection mechanism at present. However, in academic medicine, there has been a perception for a while now that IRBs are expanding their reach and making the approval of human subject research more and more onerous, that they are putting up unnecessary roadblocks to research with minimal or even no risk of harm to human subjects, and that they are expanding their purview to areas into which they were never intended to go. Now the American Association of University Professors is echoing the same concern that I've been hearing for a while:

Institutional review boards -- never designed for oversight of journalism programs or surveys by sociology majors -- have gone way beyond their mandates and purpose, to the detriment of scholarship, says a new report from the American Association of University Professors.

IRB's serve an important purpose when people who are the subjects of research can face real harm, said David Hyman, an author of the report and a professor of law and medicine at the University of Illinois at Urbana-Champaign. But in cases where the chance for harm is quite low, the IRB process is not needed, he said. IRB review, he pointed out, has been required for projects such as journalism study, oral history research, and simple surveys of family members.

"That's just nutty," he said. "People talk to their parents and relatives all the time without IRB approval."

The report recommends that IRB's cease reviewing a number of projects where the chance for physical injury to a human subject is slim to nonexistent. When adults are the subject of surveys, interviews, or publicly observed, there is no need for an IRB process, said Jonathan Knight, the AAUP's point person on academic freedom.

The report lists a number of "more or less familiar horror stories" to back up the claim that the process has gotten out of hand. In one case, a linguist had to get signed approval from the participants of a study who were not literate. In another, a white graduate student was told that he could not interview African-American students on career expectations because the interview might cause trauma.

None of this is surprising to anyone involved in clinical research. Over the last decade or so, IRBs have made the requirements for doing any sort of clinical trial progressively more onerous, in some cases going far beyond what is required to guarantee the protection of human subjects. You may think this is a good thing, and it is--to a point. However, there comes a point when requirements go beyond ensuring patient safety and autonomy and begin to stifle research, or at least make it far more difficult than it already is. It is not clear to me that we have reached that point, but if things keep going the way they are going, it cannot be far off. Indeed, virtually any study now, even one that involves nothing more than patient questionnaires, must receive IRB approval, and, at least at our institution, getting that approval is becoming more and more difficult. The AAUP report notes that, as any human institution with a lot of power tends to do, IRBs appear to be asserting power over areas that they were never intended to regulate:

A linguist seeking to study language development in a preliterate tribe was instructed by the IRB to have the subjects read and sign a consent form before the study could proceed.

A political scientist who had bought a list of appropriate names for a survey of voting behavior was required by the IRB to get written informed consent from the subjects before mailing them the survey.

A Caucasian PhD student, seeking to study career expectations in relation to ethnicity, was told by the IRB that African American PhD students could not be interviewed because it might be traumatic for them to be interviewed by the student.

An experimental economist seeking to do a study of betting choices in college seniors was held up for many months while the IRB considered and reconsidered the risks inherent in the study.

An IRB attempted to block publication of an English professor's essay that drew on anecdotal information provided by students about their personal experiences with violence because the students, though not identified by name in the essay, might be distressed by reading the essay.

A campus IRB attempted to deny an MA student her diploma because she did not obtain IRB approval for calling newspaper executives to ask for copies of printed material generally available to the public.

These horror stories are no surprise to academic physicians involved in clinical research. No one argues that IRBs shouldn't have jurisdiction over clinical trials, zealously guard patient safety, and make sure that the risks of the research do not outweigh its potential benefits. No one is saying that IRBs shouldn't make sure that informed consent is truly informed. However, in other sorts of research, such as outcomes research involving chart reviews, where the potential for harm is minimal to nonexistent given that cases are reviewed after the fact and the data are pooled, IRBs have developed a distressing tendency to question every detail of the proposed protocol. Protocols that involve nothing more than periodic blood draws, for example, are not uncommonly questioned and dissected ruthlessly. It is not at all surprising that IRBs would behave similarly as they move into regulating non-biomedical research. Part of the problem, as the AAUP recognizes, is that the power of IRBs is absolute. There is no appeal:

Under the IRB review procedure, an investigator must obtain prior IRB approval of his or her research protocol before the research can be undertaken. Members of a campus IRB are instructed by the regulations to decide, among other things, whether the risks the research would impose on its "subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result." Thus IRB members are instructed to form their own view of the risks their colleagues' research would impose on its subjects, and on the importance of the results that might be obtained from the research, and to deny permission to conduct the research if in their view the risks are not reasonable relative to the value of the likely results. There could hardly be a more obvious potential threat to academic freedom.

Moreover, no provision is made in the regulations for an appeal process in case a research protocol is rejected by a campus IRB. It is consistent with the regulations for an institution to provide an appeal process, but where the research is to be federally funded, or the institution has opted for a single review procedure that requires IRB approval, the appeal process would have to be to yet another IRB. We do not in fact know of any institution that makes explicit formal provision for such an appeal.

Lack of an appeal process is relevant in another way. An IRB may demand that a change be made in a research protocol as a condition of approval. Prospective researchers are given an opportunity to try to convince the IRB that the change need not be made, but scheduling difficulties often cause lengthy delays; and in any case, unless the prospective researcher is able to convince the IRB to rescind its demand, the IRB's demand settles the matter.

The consequences for research can be profound. For example, this occurred in a proposed study on substance abuse:

Nearly eighteen months and 17 percent of the total research budget had to be spent on obtaining the nine IRB approvals that were required for the study to be undertaken. The IRBs demanded many changes in the formatting and wording of the consent and survey forms, and each change demanded by one IRB had to be approved by all the others. The researchers claim that by the end of the process, no substantial change had been made in the protocol, and that the changes demanded had no discernible impact on the protection of human subjects.

It's a lament heard time and time again regarding clinical trials. One could make a crack about "absolute power corrupting absolutely," but this clearly isn't a matter of corruption. It's more about the all too human natural tendency of such regulatory bodies to expand their reach, even with the best of intentions. It's a classic case of "mission creep." Indeed, members of IRBs truly want to fulfill their charge of protecting human research subjects. (And, in fact, they are told again and again that any doubts they have must be aired, no matter how trivial.)

The worst thing is, the increased vigilance doesn't necessarily add to the protection of human subjects. In my personal experience observing what has occurred in the clinical trials in which my colleagues and I have been involved, most of the demands the IRB has imposed have involved questioning every sentence of the informed consent form that must be signed and harping on points that are only tangentially related to human subjects protection. In one protocol, even relatively minor core needle biopsies were questioned as totally unnecessary, even though such biopsies caused minimal pain, involved minimal risk, and would provide invaluable information about whether the study drug was working or not. All of this involved a lot of rewriting and a lot of argument with the IRB. Again, one can certainly argue that doing human subjects research should be difficult, but it's getting to the point that researchers are avoiding minimal-risk human subjects research because they simply don't think it's worth it to have to deal with the IRB. Any sort of tinkering with the rules governing IRBs is also fraught with risk. No government official or university president wants to be seen as advocating the loosening of patient protections, which is how any reform runs the risk of being perceived.

So what can be done? The AAUP quite correctly points out that simply exempting "social sciences" and humanities research from IRB oversight is not the right answer, and its reasoning rings true:

We believe that recommendation to be a mistake, on two counts. (1) It is arguable that some social science research has the potential to cause serious psychological harm. An example that generated public anger, and that has come in for much discussion since, is the experiment conducted by Stanley Milgram at Yale in the early 1960s. (In that experiment, the subjects were ordered to do what they were falsely told would cause pain to others as part of a study of learning; the aim of the experiment was to find out how many of the subjects would obey the orders.) We do not address this argument here. We point to it merely in order to bring out that an across-the-board exemption for all social science research is arguably overbroad. (2) Some biomedical research does not impose a serious risk of harm on its subjects--for example, biomedical research that involves no bodily interventions and consists entirely of an effort to acquire survey data.

Instead, the AAUP recommends exempting straightforward questionnaires and interviews or observation of behavior in public places, both of which seem like a reasonable start, as long as such studies are conducted so that results can't be linked to the identities of individual subjects. Indeed, even in biomedical research, we already have such an exemption for studies involving only human tissue (blood, pathology specimens, etc.) that has been deidentified, such that investigators can't link it to the patient from whom it came. Such studies do not undergo full IRB review, but are instead granted an administrative exemption, usually by the head of the IRB.

Another step that should be taken is an objective study of the efficacy and efficiency of IRBs. Remember, the plural of "anecdote" is not "data," and we have almost no solid, objective data regarding IRBs and how much delay they introduce into the clinical research enterprise. What is even worse, however, is that we have very little objective data to show that IRBs actually do protect human research subjects (other than in comparison with egregious abuses that happened decades ago) and some disturbing anecdotes that suggest that harm to human subjects is more common than we would like. For instance, presumably the gene therapy study that resulted in the death of Jesse Gelsinger had full IRB approval. Moreover, current law seems impotent when mercury militia activists like Mark and David Geier set up their own dubious IRB stacked with their associates and ideological compatriots to rubber-stamp their scientifically worthless and ethically challenged clinical trial using Lupron, a drug that shuts down sex hormone synthesis, to treat autistic children. (Indeed, the Geiers even have a blatant conflict of interest in that it is a "treatment" that they are trying to patent.) Certainly, no federal or state regulatory body seems to have called them on it, despite Kathleen Seidel's tireless efforts to publicize the abuse. Meanwhile, big pharmaceutical companies can go "IRB shopping" for the most lenient IRB to oversee their trials (Evans, D., M. Smith, and L. Willen, "Big Pharma's Shameful Secret," Bloomberg Markets, December 2005).

Under the current system, those who play by the rules (namely, the vast majority of university-based researchers) have to deal with an increasingly onerous set of expectations and requirements, while those who do not (such as the Geiers) or who try to game the system by using for-profit IRBs (some pharmaceutical companies) seem somehow able to bypass the increasingly zealous IRB or render it tame. Clearly, a system will always be needed to protect human subjects from overzealous, unethical, or just plain incompetent clinical researchers. The question is: How can we keep and build on what is good and what works in the present system while decreasing the burden on researchers and cracking down on those who attempt to game or bypass this important system for protecting patients? Now, more than ever, we need good objective data about how well IRBs are actually fulfilling their charge, and about the costs involved in both lost time and additional expense, on which to base recommendations for reform that protect patients but do not burden investigators unnecessarily with requirements that have little or no relevance to protecting human subjects. There is no doubt that IRBs (or some similar mechanism to review human subjects research) are necessary; the question is how to make them more effective at protecting human subjects with as little impediment to valuable research as possible. It's a difficult balancing act in the best of times, but impossible without better information.


Just like IACUC. No appeal mechanism, so they do whatever they want, which is mostly making life miserable for PIs and making research more and more impossible to do.

Full disclosure: I'm a licensed attorney.

Seems like many of the IRBs that are the subjects of the anecdotes above could use good lawyers (yes, I hear the groans) - the niggles with consent wording, privacy concerns re pooled data, etc., sound like classic cases of amateur lawyering. Amateur lawyering frequently causes problems similar to those caused by amateur doctoring (a/k/a "woo") - ineffective "treatment" that unfortunately may result in harm (in the IRB case, lack of protection against legal liability for the institution and/or its researchers).

"It's a lament heard time and time again regarding clinical trials. One could make a crack about "absolute power corrupting absolutely," but this clearly isn't a matter of corruption. It's more about the all too human natural tendency of such regulatory bodies to expand their reach, even with the best of intentions."

Actually, this is a classic example of "corruption" -- in the original sense of the proverb! The saying isn't referring to bribery, suborning, and the like -- that's a modern usage.

The point is simply that unchecked power tempts any person or group to extend their reach further and further, until their original purposes are undercut by the will to maintain control, and the reluctance to let something go by without "pissing on it" a la scent-marking.

By David Harmon (not verified) on 30 Oct 2006 #permalink

Regarding Tuskegee, a useful contextualization is available.

Regarding IRBs, the history profession has been struggling with them for some time, and it really could destroy recent history and archival work, if they don't start to follow national guidelines (or your own excellent suggestions).

Quis custodiet ipsos custodes?

By JohnnieCanuck (not verified) on 30 Oct 2006 #permalink

After working with a hospital IRB for a year getting approval for a physician to conduct a retrospective study on her own charts, I agree with Orac.

For Jud -- at least in this case, the hospital's in house attorney was one of the members of the IRB. I have no idea how common it is to have legal expertise in ready supply.

Very enlightening commentary, Orac! Thanks for the mention, and the links.

One thing that stuck out like a sore thumb when I first checked out the online registration for the Geiers' kitchen-table "Institute for Chronic Illnesses" IRB was the fact that the IRB had no Federal-Wide Assurances -- that is, they hadn't made the required formal commitment to follow the Common Rule that would be required if they were recipients of federal funding. Ensuring compliance with the Common Rule is the raison d'etre for an IRB. It looks like they're trying to expedite publication of their reports in journals that require an IRB statement; and to appear legitimate, as if they've committed to adhere to a set of ethical standards, though they haven't gone so far as to give their official, legally enforceable word.

I wonder if they realized when they erected their little facade that Maryland is one of those states that requires compliance with the Common Rule, regardless of an investigator's source of funding.

Not that there was any IRB statement on their most recent article, in which they discuss their experimentation with Lupron on autistic children; they claim to have treated about one hundred children so far, and two of those described are adolescent boys. Pretty slack on the part of the editors -- one of whom happens to be, like Mark Geier, an expert witness in the Omnibus Autism Proceeding. The whole business smells like a fish that's been dead for a week.

Outstanding post, doc...this is the kind of blogging that deserves to be in the 'news and views' section of one of our top biomedical journals.

The question you raise that I've never thought about is whether IRBs have been shown prospectively to actually protect human research subjects - a superb question.

I don't think anybody should be surprised that a committee with a mandate to "find ethical problems" will do so, even if they have to perform mental gymnastics in the process. The lack of accountability of IRBs is a serious problem, as is the lack of research into their actual effects on the research process. There's far too much "feel goodism" and far too little "validation" of so-called ethical requirements.

By bob koepp (not verified) on 01 Nov 2006 #permalink

For anyone who wants to see more from both sides of this debate, you might be interested in a conference that took place at Northwestern University back in April, Censorship and Institutional Review Boards. Conference co-organizer Philip Hamburger, of Columbia University Law School, had previously written a paper claiming that the existing system of IRB review violates the First Amendment. The full set of papers from the conference is going to be published in February 2007 in Volume 101, Issue 2, of the Northwestern University Law Review.

As to your worthwhile point that there is "very little objective data to show that IRBs actually do protect human research subjects," you might be interested in the information I provide in a recent book (What the Doctor Didn't Say: The Hidden Truth about Medical Research, Oxford University Press 2006). The book provides an overview of what is happening in the world of human subjects protection, especially regarding medical research. And, looking at current practices for obtaining informed consent, one might well conclude that we actually need more IRB oversight, not less. I provide the details of many clinical trials where subjects were being denied information they would normally find crucial in choosing whether or not to participate. (For example, there is the colon cancer study where the subjects were not told about the concern of a higher risk for the cancer coming back, nor that few of the doctors enrolling them in the study would have participated themselves.) Anyhow, people can look at what happened in these studies and make their own decisions about what it says about the need for greater or lesser protection by IRBs.