'Behind Closed Doors'
Author gets inside look at IRBs, and offers perspectives on how they operate, and how researchers can improve their chances of a smooth review.
Many researchers level complaints against institutional review boards (IRBs), which can delay or derail projects their members deem unethical, unrealistic or illegal. Seeking to understand how the boards work, a Wesleyan University assistant professor, Laura Stark, sat through hours of deliberations at boards across the country. Behind Closed Doors: IRBs and the Making of Ethical Research, published this year by the University of Chicago Press, explains through observation and interviews how and why IRBs function the way they do. Stark agreed to answer a few questions from Inside Higher Ed.
Q: You had to receive approval from two IRBs while conducting research for this book, one at Princeton University and one at the National Institutes of Health. Describe those experiences. What did you learn while researching this book that you’ll apply next time you go before an IRB?
A: I learned that one of the most important things that researchers can do to assure a smooth IRB review process is talk with IRB administrators. As many researchers know from conducting multisite research, different IRBs interpret regulations slightly differently. In chapter two of the book, I show that when presented with the same study, different IRBs will apply regulations differently because IRB members’ thinking is anchored by the previous cases and problems they have handled in the past. IRBs use case-based reasoning to make decisions. That is why it was useful for me to communicate informally with administrators at Princeton and NIH: they were able to explain, through conversations and e-mails, how their IRB would read my protocol. They decoded for me the distinctive concerns their board would have -- and the concerns they would not have -- based on their previous experiences (or lack of experience) with studies using similar methodologies or populations. [Stark began her research as a Princeton University graduate student and examined NIH records in studying the history of IRBs.]
Q: What inspired this project? Do you have any IRB horror stories from past projects?
A: I always feel I’m disappointing people when I report that I’ve never had a bad experience with an IRB. I like to think it’s because I now have a good sense of how they work, and know how to work with them. I’ve certainly heard a lot of IRB horror stories, though! ...
I will say that I was immediately excited by the project idea because it was uncharted territory: to observe IRB meetings and to reconsider the history of IRBs using new historical materials from NIH. The topic was new enough that a reviewer of an NSF grant application wrote that I should not be funded because I would not get permission to observe IRBs. For me, this comment was an example of how grant evaluation in science -- like IRB evaluation -- can curb or open up new areas and topics for exploration.
Q: Your entire book centers around the fact that IRBs are often shrouded in mystery or misconceptions. How were you able to get so many IRBs to open their doors and allow you to observe their debates? Did that openness surprise you?
A: This book is indebted to the IRB members who entrusted me to observe and audio record their meetings. They were brave, and their openness reflected a desire to reflect on and improve how they work, which I respect a great deal. The willingness of three IRBs to be studied showed the variability among boards and how important informal trust of researchers can be.
At first, I was declined by six IRBs, but this no doubt reflected my own hubris in approaching the boards out of the blue with a request to observe and audio record their meetings for a year. I attribute my eventual success in getting access to a good suggestion from Princeton sociologist Robert Wuthnow. In addition to the observations and historical research, I also planned to conduct one-time interviews with a national sample of IRB chairs to get a sense of the IRB landscape (which I draw on in the book, as well). Wuthnow suggested that I select IRBs to contact for the ethnographic research after I completed the interviews. This was sage advice. During the interviews, several board chairs were particularly eager to aid research about IRBs and to learn more about the social process of science decision-making. After completing the national interviews, I re-contacted those board chairs and asked whether their board members would consider allowing me to observe and audio record meetings for a year. They said yes.
Q: You mention that something as simple as spelling errors – one applicant’s incorrect use of “principal” drew the ire of an IRB member – can speak to the competency of the researcher and play a role in a project’s approval or denial. Is that fair?
A: I think the real question is whether “fairness” should be the most important criterion that committees use in evaluating applications -- whether for grant funding, college admissions, or IRB approval. It would seem that fairness is not the only criterion used in IRB evaluations. In focusing on written errors, board members are looking for signs that researchers are trustworthy, careful people who aren’t going to make a mistake in their studies (e.g., giving incorrect dosages or passing too much responsibility to students). As I argue in the book, the seemingly disproportionate concern over typos and written mistakes in applications is not a matter of fairness, but of trustworthiness. Is that a criterion worth considering? If so, is attention to detail in written documents a good way to evaluate trustworthiness? For that matter, should researchers be evaluated at all, or simply the studies being proposed? These are questions for the scientific and scholarly community to answer.
Q: Your hypothetical proposal in which companies would be tested on whether they screen ex-convicts based on race received “very different” responses from each of the 18 IRBs that reviewed it. Is some level of inconsistency inevitable between IRBs and to what degree is it acceptable?
A: This finding goes to show the many ways in which IRB administrators and members can interpret the rules. In Chapter Two I explore Devah Pager’s experiences in getting approval at several IRBs for her excellent work on employment discrimination. Pager’s account illustrates that when IRB members read new protocols, they conjure their local institutional history and use case-based reasoning to make decisions.
The main aims of the book are to document how our everyday experience of the law is simply a product of how people enact the law and, specifically, how people with the power to apply rules that affect science and scholarship are, in effect, shaping what we can know and say for both good and ill -- whether we are considering IRBs or film censorship boards.
Q: IRBs almost always vote unanimously. The most jarring form of dissent you observed was an abstention. Does that expectation of the minority voting with the majority stifle opposing views? What reasons did board members who voted against their own wishes give you?
A: IRB meetings are all about persuading colleagues. The near-ubiquity of unanimous votes demonstrates that an IRB decision is not an aggregation of competing opinions. I argue that votes are unanimous because IRB members actually do tend to agree with each other by the end of a meeting. They tend to arrive at meetings with disagreements and leave with genuine consensus. The big question is who counts as an “expert” on a topic (and who has the oratorical skill) and is thus most persuasive.
Q: If you were going to give fellow researchers five quick pieces of advice before their proposals go to an IRB, what would they be?
A: I am eager to use my book to help researchers, as well as to contribute to broader scholarly discussions of expertise and knowledge-production in science and medicine. In that spirit, here are a few pointers, which I’ve elaborated in a chapter (with Adam Hedgecoe) in the Sage Handbook on Qualitative Research in Health:
- Talk with administrators before submitting a study for review. Ask for points of clarification, offer to speak with primary reviewers if the study goes to the full board, or ask for suggestions on how to meet your research needs within the framework of the regulations.
- In the review application, cite or give examples of similar studies. Demonstrate that it is not unprecedented to base studies on the proposed research population and research method.
- In the review application, give evidence of research participants’ experiences in similar studies to justify the level and type of risks and benefits participants can expect. Consider using published studies; epilogues can be treasure troves. If there are no prior studies, propose to conduct a brief pilot study, or volunteer a follow-up deadline early in the full study, by which point the reviewers will receive a report on how participants are responding to the study.
- Prioritize consent procedures. There are many aspects of the review process one could focus on. IRB members' greatest concern is with the quality of the consent process. That said, remember that consent does not have to include a signed form.
- When possible, attend review meetings. Many ethics committee meetings are, by regulation, open to the public, even if the meetings are closed by custom. Attending meetings from distant field sites is of course difficult. Still, researchers can offer to be available remotely or ask a proxy to attend the meeting. Although visitors cannot observe the formal vote, meetings can fruitfully be used to assuage reviewers’ concerns, to speed the review process, and to teach students how to prepare studies.
- Read the regulations, then err on the side of caution. A familiarity with the regulations can be taken to be a good-faith effort to engage the ethics review process. In addition, the regulations mark the specific language and concepts that are open for debate, many of which also have colloquial meanings. (For example, in U.S. regulation it is not an option to state a study has "no" risk, but instead "no more than minimal risk.") Importantly, opting for a conservative interpretation of the regulations will allow researchers to propose ethics practices they find workable, rather than to have reviewers impose practices that the research will then be obliged to use or to negotiate.
Q: On a national level, is the current IRB model effective? What changes might improve it?
A: Thankfully, the Office of Human Research Protections is overhauling research regulations this year. I hope that my book can inform changes that are productive for the research community: scientists, scholars, students, and research participants alike.