It’s Christmas in August for hopeful scientists. The National Institutes of Health is now sending out its annual “priority scores,” the indicators of whether grant requests will likely receive financing from the agency. And hearts are beating faster than ever, as the federal stimulus package has poured an additional $8.2 billion into the institutes’ budget specifically for research.
However, for those grant-winners whose studies will involve human volunteers, another big hurdle remains: federal ethics regulations. No one denies the need to shield human subjects from undue risk. But current regulations have become so stringent and unwieldy that the ethics oversight system often impedes the kind of careful research we should be promoting. As two highly regarded medical ethicists, Dr. Norman Fost of the University of Wisconsin and Dr. Robert J. Levine of Yale, put it in the Journal of the American Medical Association, the system regulating the use of human subjects is “increasingly dysfunctional.”
This sort of dysfunction was on display last year after the Office for Human Research Protections, a regulatory branch of the Department of Health and Human Services, halted a University of Michigan study of a simple five-step checklist to prevent infections at health centers. The checklist merely routinized procedures that should be performed anyway, including washing hands with soap and cleaning a patient’s skin with antiseptic. But because researchers intended to assess the checklist’s success, their work met the strict definition of “research,” and thus required patients’ informed consent. Because they had never obtained it, the human research office shut the study down.
Fortunately, the Michigan incident generated enough outcry among scientists, physicians and the public that the agency reversed itself a few weeks later. But such reversals are the exception rather than the rule.
At medical schools and other research facilities, scientists using human subjects seek approval from institutional review boards, which enforce compliance with federal regulations on risk, informed consent and subject confidentiality. Although some boards are flexible, many are too risk-averse, narrowly interpreting each word in research protocols lest they be second-guessed by Washington regulators.
Some scientists deliberately design their studies more conservatively than they would like in order to placate fastidious review boards. Onerous paperwork regularly delays projects for months and inflates costs. A report last year in the journal Science warned that such paperwork is at “risk of pricing large clinical trials out of reach.”
Large trials are often spread across many medical centers, each with its own review board. Before a 43-center study of hypertension could begin at Veterans Affairs medical centers, for example, a year and a half was spent getting all 43 boards to approve the same protocol. Each time a single board required a change to the protocol, the others had to review it and agree as well.
Stanford University researchers have estimated that it cost them about $56,000 in administrative wages, 18 months of delay and 10,000 pages of paper to make a small change to an already-approved research program that simply compared the progress made by patients attending different types of addiction-treatment programs.
The lengthy approval process cuts into scientific productivity. David Dilts, the director of clinical research at the Knight Cancer Institute in Oregon, found that 30 percent of all board-approved oncology trials are eventually abandoned.
To circumvent overbearing institutional review boards, researchers occasionally turn to for-profit independent review boards to approve their studies. These boards cost less and work quickly, but some have dangerously lax standards. This spring the Government Accountability Office announced that it had submitted a fabricated study of a fake surgical adhesive gel called Adhesiabloc to an independent review board, which approved the protocol by unanimous vote, saying the “gel is probably very safe.”
Even if a study gets approval from an institutional review board, it’s not clear that the subjects are all that protected. The consent forms may run to dozens of pages written in dense legalese. Thus they provide legal cover for the research institutions, but do little to inform subjects.
And some efforts to protect subjects veer into a sort of bureaucratic paternalism. After 9/11, mental health researchers’ efforts to get timely access to disaster victims were thwarted by review boards concerned that subjects would be “re-traumatized” by filling out questionnaires. Many review boards insist that subjects may be “coerced” into participating in a study if offered a token gratuity of, say, $25.
By late fall, the lucky grant winners are expected to have their studies up and running. But before volunteers can participate, the protocols must gain review board approval. With probably more studies than ever looking for a green light, researchers are preparing for a bottleneck. Clearly, the oversight system has fallen short of the ethical standard it purports to uphold: that the benefits of medical research exceed the costs.
Sally Satel, M.D., is a resident scholar at AEI.
© 2014 American Enterprise Institute for Public Policy Research