Wanted: Consistency in Social and Behavioral Science Institutional Review Board Practices


by Jim Vander Putten - September 14, 2009

Why do some social and behavioral science IRBs require researchers at other institutions to obtain local IRB approval for low-risk data collected in person, but not for low-risk data collected over the internet?

Recently, higher education media have published a number of articles scrutinizing college and university Institutional Review Boards (IRBs). On Insidehighered.com, Scott Jaschik addressed issues such as sociologists’ frustrations with IRBs (August 15, 2005), the debate surrounding the appropriateness of IRB review for anthropologists conducting research for the U.S. military (October 22, 2007), and application of IRB research evaluation criteria to oral history projects (January 3, 2008). Paul D. Thacker reported on the American Association of University Professors report on IRB jurisdiction (October 16, 2006).


In the Chronicle of Higher Education, Karen Markin’s broad overview (The Chronicle, August 12, 2005) addressed institutional research regulations for human participants, animals and hazardous materials, Jeffrey Brainard reported on requirements for training in human participants research (The Chronicle, July 2, 2008), and Jennifer Howard briefly discussed the federal Office for Human Research Protections’ proposed expansion of IRB rules (The Chronicle, February 4, 2008). Jennifer Howard’s article on types of research eligible for IRB review (The Chronicle, November 10, 2006) briefly raised the important issue of inconsistent IRB practices across institutions, and it merits further discussion.


The scholarly attention devoted to social and behavioral science IRB practices has focused primarily on individual institutional policies and practices, rather than on the national system of higher education. In my view, this has created a ‘silo effect’ that has contributed to inconsistent IRB practices across institutions. The University of Illinois Center for Advanced Study’s 2006 White Paper on Improving the System for Protecting Human Subjects effectively used research scenarios and anecdotes of individual institutional IRB practices as a basis to call for the creation of a national clearinghouse of examples of social and behavioral research issues (risk, harm, practice vs. research, confidentiality vs. anonymity, etc.). However, the Illinois White Paper did not address two important issues in which I’ve observed wide national variation in IRB practice: requirements for informed consent and expectations for multiple institution IRB review.



Informed Consent for Online Surveys


First, the ethical practice of obtaining informed consent is a primary method of protection for research participants and involves communication between the researcher and the prospective participant regarding the details of the research study and participation in it. These details include explanations of the purposes of the research, the expected duration of participation, a description of the procedures to be followed, foreseeable risks or discomforts, benefits of participation, and confidentiality, among others. During my six years as Chair of a social and behavioral sciences IRB, the board required informed consent documents or survey cover letters containing these relevant study details for all university faculty, staff, and student activities that met the federal definition of human subjects research, regardless of whether they were federally funded.


I assumed that this foundational principle of the responsible conduct of research was a universally accepted IRB expectation. But then I received an emailed invitation to participate in a web-based national survey of college and university administrators being conducted by researchers at a Carnegie Doctoral/Research-Extensive institution in the Northeast. When I clicked on the survey URL in the email message, I was connected directly to the first page of the survey, and not a survey cover letter containing the relevant elements of informed consent. I contacted this institution’s IRB Chair asking if this research project had received IRB approval, and was amazed by the reply explaining that this study wasn’t identified as involving human participants and therefore was ruled as ineligible for IRB review. When I shared this response with an incredulous faculty colleague, she replied, “Where did the IRB think the data was going to come from?”


I thought this was an isolated incident, until a few months later when I received another emailed invitation to participate in a web-based national survey of faculty about online instructional practices being conducted by researchers at a Carnegie Master’s L (larger programs) University in the Southeast. Again, clicking on the survey URL in the email message connected me to survey page 1 instead of a cover letter detailing informed consent information. I also contacted this institution’s IRB Chair to inquire whether this research project had received IRB approval, and was informed that this study was originally approved in the ‘Exempt from Full Board Review’ category several years ago. In the recent IRB review approving the continuation of the study, no informed consent documents were judged to be required.


Although these were low-risk social and behavioral science studies, IRBs should not be absolved from ensuring that researchers fully inform prospective participants about the specifics of the research tasks. I deserve to know the answers to a number of questions before I decide whether to participate in a study: What is the nature of my involvement? Will survey completion take 5 minutes of my time or 35 minutes? How (not just if) will my identity be protected and confidentiality maintained in this study? In a web-based survey, am I required to respond to each question before proceeding to the next? Perhaps most importantly, what will researchers do with my responses if I decide to cease participation before completing the research task? Will they be kept or discarded? Do I have a voice in the matter? In the absence of this information, I declined to participate in both studies.


Furthermore, the intersection of human subjects research integrity and internet-based data collection has been the subject of several recent books and articles. With the proliferation of easy-to-use web-based survey administration sites and increasing interest in investigating various aspects of online instruction, the technological threats to the protection of research participants should not be minimized. In my view, requiring standard informed consent practices helps to assure prospective participants that their identities will be protected and their confidentiality secured. This increases trust, the likelihood of participation, and the quality and accuracy of the resulting data. Isn’t that an ultimate goal of the research?


Multiple Institution Review


The second issue involves IRB expectations for multiple institution review. During my service as social and behavioral sciences IRB Chair, I regularly received inquiries from researchers at other institutions informing me of their plans to collect data on my campus and requesting information on the required IRB training materials and protocol submission forms. Our board’s workload strain made it difficult to justify reviewing IRB proposals from researchers whose primary affiliations were with other institutions, and whose research had already been approved elsewhere. Instead, our practice was to reply to each inquiry and request a copy of the IRB approval from the researcher’s institution of primary affiliation. If the proposed research had been approved there, our board honored the other IRB’s decision. In retrospect, however, this practice has been called into question now that my assumption of consistent national informed consent practices has been proven inaccurate.


A few years ago, another faculty member and I conducted a qualitative research study at five different Doctoral/Research institutions in the Southeast. As a professional courtesy, we informed each institution of our plans to interview faculty and staff on their campuses, and noted that the study had already been approved by our institution’s IRB. I was surprised when each institution required us to submit ‘Exempt From Full Board Review’ IRB proposals for review and approval as a precursor to conducting the research on their campuses. With my knowledge and expertise as a sitting IRB Chair, I volunteered to complete the proposals to increase the likelihood of IRB approval upon first review.


You can imagine my surprise when several of the IRBs rejected the proposals on the basis of an inconsistent array of style issues: consent forms not being cumulatively paginated (e.g., 1 of 3, 2 of 3, etc.), or being rejected at one institution for use of the past tense and at another for failure to use it. The time delays associated with revision and re-submission of these IRB proposals (some of which were rejected a second time) were measured in months, and would have been even longer had we been required to complete each institution’s responsible conduct of research training program. These delays began a chain reaction of subsequent delays in data collection, research conference proposal submissions and presentations, and manuscript submissions for publication consideration. For untenured faculty, such delays can present formidable obstacles to meeting institutional expectations for scholarly productivity leading to tenure and promotion.


Need for Consistent IRB Practices


When these IRB expectations regarding small details of informed consent documents are compared with my experiences as an invited research participant who was provided no consent documents at all, the stark contrast raises serious questions about the consistency of IRB practices across institutions. One immediate question is: Why do some social and behavioral science IRBs require researchers at other institutions to obtain local IRB approval for low-risk data collected in person, but not for low-risk data collected over the internet? The obvious answers are IRB workload strain and lack of awareness of internet-based research being conducted, but the resulting discrepancy in IRB practices based on data collection method is glaring and inappropriate.


One analogy often drawn likens IRBs to local school boards with the authority and discretionary latitude to apply federal regulations as they best see fit. However, each IRB’s individual application of these regulations has contributed to the extreme variance in policies and practices detailed here. As noted earlier, one recommendation in the Illinois White Paper called for a national clearinghouse of social and behavioral sciences IRB best practices. Based on my experiences as an IRB Chair, researcher, and research study participant, this would be a useful development, and it should include guidance on consistent expectations for the use of informed consent documents regardless of research risk, data collection method, or funding source, to provide optimal protection for prospective research participants. In addition, several biomedical researchers have conducted large-scale studies documenting the significant additional costs in time and money incurred in the required training, protocol preparation, and local IRB review of multiple-site studies. Accordingly, another clearinghouse best practice must identify a limited set of circumstances in which multiple institution IRB review is necessary and appropriate (e.g., proposed research involving indigenous or other protected populations), regardless of whether data are collected in person or online.


However, I would go one step further and recommend the expansion of federal regulations requiring researchers to complete training in the responsible conduct of human participants research before conducting research. This expansion should include minimum requirements for researchers to actually implement the ethical practices regarding informed consent that they learned in their institution’s training program. In education terms, it is inappropriate to train researchers on the history of informed consent and on methods to incorporate specific elements of informed consent into their research, and then decline to hold researchers accountable for doing so. If a specified set of minimum requirements were implemented nationwide, IRB review would begin to approximate the peer review systems that are the bedrock of scholarly quality and integrity. As a result, IRBs could reduce their workloads by confidently honoring the approval decisions of other institutions’ IRBs in multiple-institution social and behavioral science research projects involving human participants.


In conclusion, it is widely acknowledged that IRB regulations written for biomedical research are a poor fit when applied to social and behavioral science research. But the absence of a consistent set of best practices for the responsible conduct of low-risk social and behavioral science research, and of consistent practices to protect research participants and thereby increase the likelihood of collecting high quality data, is equally inappropriate.




Cite This Article as: Teachers College Record, Date Published: September 14, 2009
https://www.tcrecord.org ID Number: 15767


About the Author
  • Jim Vander Putten
    University of Arkansas-Little Rock
    E-mail Author
JIM VANDER PUTTEN is an Associate Professor of Higher Education at the University of Arkansas-Little Rock (UALR), where he coordinates the Faculty Leadership concentration in the doctoral program in Higher Education. He served as Chair of the UALR Institutional Review Board from 2000 to 2006. His research agenda focuses on qualitative data analysis, faculty social origins, and the organizational culture and climate for research integrity.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS