Arts, Humanities, and Sciences in Educational Research and Social Engineering in Federal Education Policy


by Frederick Erickson - 2005

This article argues that the current promotion by the federal government of “science” as a unitary paradigm for educational research is a mistaken effort that may be well intended—or may not be.

The argument of this article is that the current promotion by the federal government of ‘‘science’’ as a unitary paradigm for educational research is a mistaken effort that may be well intended - or may not be. In either case, regardless of intentions, the consequences of this promotional effort are likely to be negative for educational research as a whole, because the promotion of the ‘‘scientific’’ not only misrepresents science itself, but it also undervalues and marginalizes the contributions of the arts and humanities to our understanding of educational aims and practices. The push for more ‘‘science’’ in educational research and of ‘‘scientific evidence’’ as a warrant for particular educational practices has involved Congress, the U.S. Office of Education, and the White House. It also manifests in a report issued in 2002 by the National Research Council (Shavelson & Towne, 2002). I see this policy orientation as wrong, leading toward dangerous kinds of social engineering, and this article explains why I believe this.


Let me begin the argument by repeating what I said in comments coauthored with my colleague Kris Gutierrez in last November’s issue of the Educational Researcher (Erickson & Gutierrez, 2002). For the most part the NRC report does an adequate job of describing scientific research in education, and it makes clear that certain kinds of qualitative research are ‘‘scientific.’’ In addition, the report makes many reasonable recommendations for the organization of educational research sponsorship in the federal Department of Education.


That said, it seems to me - and now even more than at my earlier writing - that the NRC report is seriously flawed, even when read as a standalone document. When it is read in the wider context of practices and policy being undertaken currently by the federal Department of Education, the NRC report ends up justifying scientism rather than science. There is real danger that the report may mislead a whole generation of educational researchers, and that is especially unfortunate because there is much good material in the report itself and its intent was to protect educational research from an extreme form of naive scientism.


The fundamental problem, in my judgment, is that the committee that produced the report accepted uncritically its charge from the entity that commissioned the report, the National Educational Policy and Priorities Board of the U.S. Department of Education. The charge reads (see Shavelson & Towne, 2002, p. 22):


This study will review and synthesize recent literature on the science and practice of scientific education research and consider how to support high quality science in a federal education research agency.


WHY SCIENCE AND NOT ARTS AND HUMANITIES ALSO?


In contemporary (high modernist) parlance ‘‘science’’ connotes rigor and certainty - it provides a warrant for true belief, a ground for master narratives. I believe that what we need most fundamentally in educational research is wisdom, insight, courtesy, empathy for the people we study (both in the ways in which we collect data and in the ways in which we characterize those people in our reporting), and scholarly care in the analysis of evidence and in reporting - indeed educational research at its best should be imbued with what the ancient Greeks called the cardinal virtues: prudence, temperance, fortitude, and justice.


I wouldn’t worry about calling for educational research to be ‘‘scientific’’ if the term were being used in a more generous, premodern sense. In the medieval university, theology was called the Queen of the Sciences. By this was meant that the subject matter of theology - the nature of God’s being and will - was the most important thing humans had to think about. Such thinking should be done with the greatest possible care and with the greatest possible scholarly accountability. That’s what ‘‘science’’ meant - careful thinking accountable to public expert criticism. Such individual theologians as Thomas Aquinas, Anselm, and William of Occam each set forth differing theoretical accounts of God’s being and will, and they wrote and spoke in the expectation of public accountability for their discourse. (Indeed, in the oldest extant university building at Oxford University, Duke Humphrey’s Library, we can see that the ground floor is furnished as a hall for public disputation between scholars - all of whom were clergy. Two lecterns are placed opposite each other on raised platforms at the center of the hall so that the disputants could face one another directly and be seen and heard clearly by all those present.)


Accountability and rigor also can be found in the scholarly study of the arts. In my own undergraduate and initial graduate education I studied historical musicology at Northwestern University. I hold a master’s degree in that field. It is the study of artistic practices in music and their changes over time. In German its name, Musikwissenschaft, makes my point even more strongly, since this term is literally translatable as ‘‘the scientific study of music.’’ (Wissenschaft is the word referring to physical science as well as to careful study in the human sciences and in the arts.) Let me give you an example of accountability and rigor in historical musicology.


I studied musicology in the heyday of close analytic reading of musical texts, analogous to ‘‘close reading’’ of poetic and fictional texts in literary criticism of the same era. (Things have changed since, but that’s a story for another time.) My instructors said, ‘‘We’re not interested in what Bach had for breakfast - look at the music itself.’’ They meant: pay attention to fine details of optional choice in musical form. The most senior professor in the department, John F. Ohl, was a stickler about levels of inference in formal pattern identification. I remember him saying of our claims to have identified a subtle pattern in melody or harmony, ‘‘Well - I just don’t see it.’’ By chance last week I found in a box one of my papers in a course on music in the middle ages, in which I was enrolled in the fall of 1961, my first paper in my first course with Professor Ohl. Here is what I wrote:


Many of the art forms of antiquity or of preliterate modern societies seem to contain basic stylistic patterns which, by alteration and rearrangement, make up the basic ‘‘vocabulary’’ of the art form. This is true of the graphic art of the North American Indian, the literature of the early Hebrews, and the musical idioms of the Post-Augustan Mediterranean area, which gradually became a part of ‘‘Gregorian’’ chant.


Therefore it seems appropriate to study the Third Mass for Christmas [the Christmas morning mass] in comparison with other chants, in order to determine whether or not the process of centonization, or patchworking of common stylistic motives, is evident.


I then went on to provide specific musical examples that showed centonization in the Introit ‘‘Puer natus est nobis,’’ which is the chant that is sung at the beginning of the Mass. Here is what Professor Ohl wrote in red pencil as a marginal comment: ‘‘I think your criteria for examples of centonization, as you cite them above, must be regarded as too easy-going; the resemblances seem less close to me than you imply. B-minus.’’


He then followed that general comment with a detailed, point-by-point critique of my analysis. From that point on I reined in my tendencies to take inferential leaps. And I went on to get A’s from Professor Ohl. But at no place in my entire career as a graduate student was I ever more careful with data, even when some years later as a doctoral student I was punching in sums of squares by hand on a Monroe calculator in an introductory statistics course.


I challenge anyone to come up with better examples of scholarly rigor than those to be found in that kind of musicology - higher standards for careful thinking, a more thorough process of accountability for intellectual claims. But is it ‘‘science?’’ Not in the sense meant by the NRC report. And in that, the report is at fault for impoverished imagination, for constricted intellectual reach.


CONSTRUCTION OF TOPICS FOR EDUCATIONAL RESEARCH


The second major flaw in the NRC report comes near the beginning of the introduction (Shavelson & Towne, 2002) in a list of illustrative research topics.


Knowledge is needed on many topics, including: how to motivate children to succeed; how effective schools and classrooms are organized to foster learning; the roots of teenage alienation and violence; how human and economic resources can be used to support effective instruction; effective strategies for preparing teachers and school administrators; the interaction among what children learn in the context of their families, schools, colleges, and the media; the relationship between educational policy and the economic development of society; and the ways that the effects of schooling are moderated by culture and language. In order that society can learn how to improve its efforts to mount effective programs, rigorous evaluations of innovations must also be conducted. (p. 12)


Granted, the report does not present this list as an exhaustive one. But it is called illustrative - and the list of nine topics above includes not one that is primarily descriptive. All invite some kind of causal analysis and the use of some sort of inferential statistics. This narrows the range of envisioning fruitful topics for educational research in a way that excludes a focus on what qualitative research is best at (more on that in a moment). Moreover, in a later chapter on research design (Shavelson & Towne, 2002, p. 99), as three main types of research questions are introduced, the descriptive one is limited in a significant way. The report presents three types of questions - ‘‘What is happening?’’ (concerning description), ‘‘Does x cause y?’’ (concerning the identification of cause), and ‘‘Why does x cause y?’’ (concerning explanation of causal relations).


An interpretive qualitative researcher would say that the question ‘‘What is happening?’’ is always accompanied by another question: ‘‘And what do those happenings mean to those who are engaged in them?’’ A critical qualitative researcher would add a third question to the previous two: ‘‘And are these happenings just and in the best interests of people generally - do they contribute to or inhibit human life chances and quality of life?’’ Answering such questions is what qualitative research is best at.


As the NRC’s list of illustrative topics stands, it privileges questions of efficiency and effectiveness over questions of hermeneutical or critical description and analysis. There is not a single question on that list in the form ‘‘What’s happening and what do those happenings mean?’’ If the NRC committee’s charge to define the ‘‘scientific’’ had not stacked the deck enough at the very outset of that committee’s deliberations, their report’s presentation of exemplary topics - by what it silences implicitly through omission as well as by what it says explicitly - completes the disadvantaging of interpretive and critical studies in education, marginalizing feminist research, history, and philosophy as well.


And for the federal Department of Education currently, never mind scrutinizing the ends of education, or considering relationships between ends and means. What’s most important is the determination of effects - what works. And these are obvious effects - as conceived by common sense, not subtle effects, and certainly not contradictory and unintended effects. The Department of Education is currently establishing a clearinghouse that will tell you what works - and only evidence derived from randomized field trials will be reported by that clearinghouse. Make no mistake: The Blue Meanies have taken over the Yellow Submarine.


What might other kinds of questions be? Here’s a list of fruitful ones, with theoretical foundations, mostly of the type ‘‘What’s happening and what do those happenings mean?’’: What is it like to be a child in the bottom reading group in a particular first grade class? How does Miss Smith set up her kindergarten classroom so that students learn to listen closely to what each other says? What happened as the math department at Washington High School seriously tried to shift their teaching away from math as algorithms to math as reasoning? Why do the Black kids sit together in the lunchroom, and should we as educators care about that? What about the highly scripted early elementary school reading program Open Court and its relation to teacher morale - not morale as an abstract outcome variable, but as a concrete, palpable state of being, the ‘‘life world’’ of daily work? How do teachers think and feel about teaching the Open Court reading program - experienced teachers and inexperienced teachers, skilled teachers and not-so-skilled teachers, advocates of whole language and those who have always followed the instructions in the teachers’ manual? Why do teachers call the Open Court monitors ‘‘the Open Court Police’’? (That’s not a question to be answered by experimental analysis of causal relations - it’s a matter of human meaning as causal for human social action, as such meaning is constructed in life experience.) What differences in per pupil expenditure between school districts in a given state are fair - how much difference does it take to be demonstrably unjust - what are morally defensible criteria for judgments about justice in school funding? How do children who come to school speaking Spanish use their mother tongue in learning to speak and read English? What, on a large scale, do teachers believe about the ability of poor children of color to do well academically in school - how do they explain the high rates of low achievement by such children? Do different kinds of teachers have different views of this? What are the ways of characterizing types of teachers that best fit the full range of variation in their beliefs about children’s capacities to learn? Does this look different in Japan, and Sweden, and South Africa? Has this looked different in America over the past 200 years? (And I mean sound evidence about beliefs: How do we get really good evidence about what teachers really think and feel?)


We won’t answer those questions by means of randomized assignment of students to treatment and control groups. There is a place in educational research for large scale experiments. But the questions such approaches know how to answer are not the only ones worth asking.


CONCLUSION


Let me close with a narrative, since stories about what happens in the world can help us think and come to know important things about what is going on around us. In the fall of 2002, in Palo Alto, California, a group of academics were gathered at a party. They were discussing the NRC report and the current federal policy of privileging randomized field trials as the ‘‘Gold Standard’’ for educational research. One of the people in the room was a physician. He mentioned a report published in a medical journal that quoted a researcher who had worked for many years at the top laboratory for polio research, the Salk Institute. The medical researcher said that if knowledge development in polio research had had to depend only on conclusive findings from experiments, research on polio would today consist mainly of studies of the treatment effects of the iron lung.


In sum, the current federal agenda for increasing the ‘‘science’’ in educational research is a mistake, a course which if continued is likely to result in tragic consequences for educational researchers, practitioners, students, and families. I believe that this agenda is not simply an intellectually neutral search for better knowledge but that it is about knowledge production for social engineering - and we should be aware that this is social engineering toward extreme right wing ends. Make no mistake; these are dangerous times.



References


Erickson, F., & Gutierrez, K. (2002). Culture, rigor, and science in educational research. Educational Researcher, 31(1), 21–24.


Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Committee on Scientific Principles for Education Research, National Research Council. Washington, DC: National Academy Press.



Cite This Article as: Teachers College Record, Volume 107, Number 1, 2005, pp. 4-9
https://www.tcrecord.org ID Number: 11683, Date Accessed: 10/23/2021 7:48:58 PM
