Preparing the Educational Researchers the World Needs
by Bruce Torff - January 12, 2011
In the last decade, “data‐driven” assessment practices and quantitative research have risen to prominence in the activities and policies of school districts, state and federal agencies, accrediting bodies, and grant‐making organizations both public and private. This has resulted in an acute need for professionals with expertise in quantitative methods (including research design and statistics). But schools of education remain largely devoted to qualitative methods, turning out doctoral graduates who lack expertise in quantitative research. Doctoral programs in education should place greater emphasis on the quantitative methods now required by our society’s educational institutions.
Are doctoral students in education getting the training they need to serve schools and other educational institutions effectively in the 21st century? There is reason to think they aren't. In this commentary I suggest the time is right to reform these programs, bringing doctoral studies in education into alignment with the needs of the educational world.

For several decades, tension has been detectable between educational researchers who favor different kinds of research methods. On one side are advocates of qualitative research, who favor in-depth studies of small numbers of participants to investigate, in an ecologically valid manner, why and how participants think and act as they do (e.g., Berg, 2003; Bogdan & Biklen, 1998). On the other side are quantitative researchers, who prefer scientific research designs, large samples, and statistical procedures that allow research conclusions to be generalized to a target population (e.g., Hoy, 2010; Muijs, 2004). Research employing both sets of methods (i.e., mixed-methods or integrated research) has emerged, although such projects remain comparatively rare (e.g., Tashakkori & Teddlie, 2003).
The methods wars are often suspended in an uneasy truce, and not an altogether candid one. In public, the dispute is softened by the agreement that both sets of methods have merit, since different kinds of research questions call for different kinds of methods (e.g., Creswell, 2002; Torff, 2004). In private, however, there's plenty of trash talk to be heard, with quantitative research criticized as lacking ecological validity and deaf to participants' perspectives, and qualitative research disparaged as lacking generalizability and fraught with researcher bias.
Quantitative research rises to prominence in educational institutions. It was not always thus. In the first half of the 20th century, educational research was almost exclusively quantitative, albeit with fewer statistical procedures available than today. Qualitative methods rose to dominance in the 1960s and 1970s, diversified by such approaches as ethnography, phenomenology, grounded theory, and critical social research.
But the 21st century has brought forth a remarkable shift back toward the quantitative. The shift can be seen in the activities and policies of school districts, state and federal agencies, accrediting bodies, and grant‐making organizations both public and private.
School districts are extensively “data driven” these days, with curriculum and instruction informed by quantitative data from a variety of tests – some state-administered, some not (e.g., Deeb-Westerveldt & Thompson, 2010). This will likely increase as “response to intervention” practices gain traction in schools (e.g., Brown-Chidsey & Steege, 2010).
State and federal policies have much to do with this trend, of course. Since the standards-and-tests movement began in the 1990s, educational policies have increasingly mandated that quantitative data be used to hold everyone accountable, including students, teachers, schools, school districts, and state departments of education. At present a nationwide initiative is afoot to use students' test scores in teacher evaluation (especially with value-added statistical models), and more and more states are signing on. There can be little doubt that the future of educational-assessment policy is quantitative.
Emphasis on quantitative methods is reflected as well in grant-making organizations, including both governmental agencies (e.g., the National Science Foundation, the Institute of Education Sciences) and private foundations (e.g., the Spencer Foundation, the Grant Foundation). It is exceedingly difficult these days to procure funding without quantitative methods. Even projects that provide a direct educational service (e.g., ones that work to enhance the reading skills of underprivileged students) are required to include a quantitative evaluation component.
Similar predilections are evident in the activities of accrediting agencies. For example, the Middle States Association now requires colleges to provide quantitative evidence of the effectiveness of each and every course offered, along with a plan to marshal the data for course improvement. Finally, the American Educational Research Association has partnered with the federal government to offer several programs involving quantitative analysis of large datasets: AERA Dissertation Grants; AERA Research Grants; the AERA Institute on Statistical Analysis for Educational Policy; and AERA Faculty Institute for Teaching of Statistics with Large‐Scale Data Sets (see www.AERA.net). AERA (2008) has also issued a “definition of scientifically‐based research” that requires, among other things, “observational or experimental designs and instruments that provide reliable and generalizable findings.” This document also holds that “the examination of causal questions requires experimental designs using random assignment or quasi‐experimental or other designs that substantially reduce plausible competing explanations for the obtained results.”
Enduring emphasis on qualitative research in doctoral education. It's clear that the world has gone quantitative. But doctoral programs in education have not followed suit; most continue to emphasize qualitative methods. Of course, quantitative research is stressed in a handful of programs (e.g., UCLA's Advanced Quantitative Methods in Education Research program) and in some specialties within educational research at many institutions (e.g., educational psychology). But these are exceptions. Nationwide, the vast majority of doctoral dissertations at schools of education use qualitative methods, and many education professors are vocal in their advocacy of these methods (e.g., Walters, Lareau, & Ranis, 2009).
Quantitative methods are typically taught in doctoral programs, but thesis advisors rarely prompt doctoral students to use them, instead counseling students to take a qualitative approach, just as most do in their own work. This situation would not be problematic if doctoral graduates developed strong skills in quantitative research design and statistical analysis along the way. But in most cases they don't, because they haven't practiced these things much.
Reform in doctoral education. It adds up to this: most education schools turn out new doctorate‐holders lacking the quantitative skills the education world now requires. This situation adds to the shortage of appropriately prepared educational researchers and subtracts from the credibility of doctoral programs in education.
Needed is a reform initiative to strengthen doctoral students' quantitative skills. Such an initiative calls for greater emphasis not just on statistics, but also on research design. Doctoral students need explicit training in drafting research questions, establishing variables and measures, sampling procedures, and data-collection protocols. Requiring a doctoral course in research design seems a good idea, but many doctoral programs in education don't have one.
And sometimes coursework isn't enough – that's one of the main reasons why statistics courses in doctoral programs so often bounce off. Students need rich, in-depth apprenticeship in quantitative research. For example, a researcher awarded a contract by a state agency to analyze data from a racial-discrimination survey chose not to complete the project alone; she rounded up her doctoral students and completed the work with them, describing what she was doing (making her thinking visible) and delegating tasks to students appropriately (affording them hands-on experience). This kind of doctoral education is supported by a considerable literature on apprenticeship learning (e.g., Seely Brown, Collins, & Duguid, 1989; Rogoff, 2003). Like riding a bike, learning to conduct quantitative research requires experiential learning, but it's not clear many doctoral students gain such experience.
The goal of doctoral programs in education is to prepare professionals to conduct research, teach, and provide leadership in support of our society's efforts to educate its citizenry. Meeting this goal means turning out graduates with the skills in demand at our various educational institutions (e.g., schools, state and federal agencies, accrediting bodies, and grant-making organizations both public and private). At present, doctoral programs are ill equipped to do so, hewing to qualitative research methods when quantitative ones have become central to the activities and policies of educational institutions. Qualitative methods play a vital role in educational research and should continue to be taught in these programs. But doctoral programs need a stronger quantitative component if they are to provide what the world needs of them: well-educated professionals prepared to research, teach, and lead.
American Educational Research Association (2008). Definition of scientifically based research. Washington, DC: AERA. (http://www.aera.net/Default.aspx?id=6790)
Berg, B. L. (2003). Qualitative research methods for the social sciences (5th ed.). Boston: Allyn & Bacon.
Bogdan, R. C. & Biklen, S. K. (1998). Qualitative research for education: An introduction to theory and methods (3rd ed.). Boston: Allyn and Bacon.
Brown-Chidsey, R. & Steege, M. W. (2010). Response to intervention: Principles and strategies for effective practice (2nd ed.). New York: Guilford Press.
Creswell, J. W. (2002). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). New York: Pearson Education.
Deeb-Westerveldt, W. & Thompson, E. (2010). Data talk: Creating teacher and administrative partnerships around data. Deer Park, NY: Linus.
Hoy, W. (2010). Quantitative research in education: A primer. Thousand Oaks, CA: Sage Publications.
Muijs, D. (2004). Doing quantitative research in education with SPSS. Thousand Oaks, CA: Sage Publications.
Rogoff, B. (2003). The cultural nature of human development. New York: Random House.
Seely Brown, J., Collins, A. & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.
Tashakkori, A. & Teddlie, C. (2003). Handbook of mixed methods in social and behavioral research. Thousand Oaks, CA: Sage Publications.
Torff, B. (2004). No research left behind. Review of C. Glickman (Ed.) (2004), Letters to the next president: What we can do about the real crisis in public education (New York: Teachers College Press) and K. Cushman (2003), Fires in the bathroom: Advice for teachers from high school students (New York: The New Press). Educational Researcher, 33, 27‐31.
Walters, P. B., Lareau, A. & Ranis, S. (2009). Education research on trial: Policy reform and the call for scientific rigor. New York: Routledge.