
An Elusive Science - The Troubling History of Education Research


reviewed by Les McLean - 2002

Title: An Elusive Science - The Troubling History of Education Research
Author(s): Ellen Condliffe Lagemann
Publisher: University of Chicago Press, Chicago
ISBN: 0226467724, Pages: 264, Year: 2000


History is important; reading this book is a vivid reminder that "those who would ignore history are doomed to repeat it." There is also at least some truth in Paul Valéry’s statement, "History is the most dangerous product that the chemistry of the intellect ever evolved. Its properties are well known. It makes us dream, it intoxicates people, creates false memories for them, exaggerates their reactions, keeps their old wounds open, torments their rest, leads them to delusions of grandeur or of persecution, and makes nations bitter, arrogant, insufferable and vain." If history in general does this, what are we to expect from a "troubling history"? With trepidation, we press on.

Alas, the author scooped my intended opening sentence with, "… relatively early in the nineteenth century, the very term education research seemed to be an oxymoron to many notable university leaders" (p. 232). There is much support in this book for such a view—a troubling history indeed. Before describing the book’s many strengths, however, I must object to the title. This is a history of education research in the United States of America—with one or two small exceptions, the rest of the world does not get a look-in. The stance is familiar, one I have come to label W=US (the world is the United States), as in "… the questions essential to my study—regarding the primary forces shaping the study of education over the last one hundred years; how the enclosure of education in special, distinct schools and departments affected the study of education both inside and outside those preserves; and what the history of educational scholarship can tell us that has value for the present situation of education in the United States—are what discipline history is about" (p. xiv). I beg to differ. A "discipline history" would not ignore the educational scholarship from, to cite only a few examples, De Landsheere’s laboratory in Liège, the London Institute of Education, Torsten Husén’s 50+ books, and CITO in the Netherlands. Howard Gardner got it mostly right on the dust jacket: "Ellen Lagemann’s groundbreaking book presents a revealing portrait of American education, funding agencies, contrasting research paths, and the broader society within which priorities are set and reset. A full, fair, and fascinating study of an important topic" (emphasis added). Important it is, fair it is, fascinating it often is, but full it is not.

In her opening section, Lagemann sets the tone, asserting that " … many of the most difficult educational problems that exist in the United States today are related to the ways in which the study of education has been organized and perceived within universities" (p. xv). She construes her history in three parts, punctuated by the two World Wars, the second part being the years between the wars. Part I (early 1890s to about 1920) is largely the story of the dominant universities: Chicago and Columbia Teachers College, with contributions from Harvard. Education at each university is dominated by powerful individuals: first Dewey at Chicago, then Judd (Dewey moving to Teachers College), and then Dewey’s eclipse by E. L. Thorndike at Columbia. "Thorndike’s triumph and Dewey’s defeat were essential to the early educationists’ quest to define a science that could help them rationalize the nation’s public schools" (p. 22). Europeans get a mention only when noticed by US "educationists," for example, "… to wit the debates among the followers of Pestalozzi, Froebel, Hegel, and Herbart" (p. 21). If Pestalozzi et al. are not part of the history in their own right, then how can there be followers? In their effort to define a science, the dominant figures turned to psychology, newly separated from philosophy and becoming an empirical science. G. Stanley Hall, a student of William James and a pioneer in the child-study movement, earned the first Ph.D. in Psychology in the USA and had a checkered history in academia. Lagemann accurately entitles this chapter "Reluctant Allies."

In Chapter Two the story moves back to Dewey and his laboratory schools (Chicago, then Teachers College), illustrating how some scholars wished for more specialization while some (e.g. Dewey) believed in working directly with teachers and students. Thorndike’s triumph was to de-emphasize practical study in favor of theory. It is said that he firmly believed that he could improve a teacher’s performance without ever seeing her teach. In the same year that Dewey left Chicago (1904), Thorndike published An Introduction to the Theory of Mental and Social Measurements, foreshadowing his famous dictum, "whatever exists at all exists in some amount. To know it thoroughly involves knowing its quantity as well as its quality" (cited on p. 57). As a teacher of statistics at Teachers College in the mid-60s, in a department with E. L.’s son, R. L. Thorndike, I felt the great man’s spirit still powerfully present. Lagemann brings this complex story to life, arguing persuasively that Judd at Chicago and Thorndike at Teachers College were critical in gaining acceptance for controlled experimentation and statistical measurements as essential methods of educational study. I am not quite persuaded. Acceptance may have been gained, but practice did not follow, at least as regards controlled experimentation, until the early sixties. Chapter Three continues the examination of quantitative techniques, giving mention to Binet, Simon, Galton, Pearson and Spearman (but not R. A. Fisher, who perfected the analysis of variance). Pearson’s correlation coefficient was an important development, but I disagree that it gives a "precise description" of relationships. Hall is more sensible than many psychologists, referring to the "quasi-exactness of quantitative methods" (p. 90). Concerning Terman’s development and widespread administration of intelligence tests, Lagemann asserts that Terman’s beliefs overcame his own data (p. 90). Was Terman a closet Bayesian?
Lagemann is critical (rightly in my view) of educational psychologists of the day who were enamored of mental tests in general: "But having found a technology that could be applied and tinkered with endlessly, they generally avoided questions concerning the value and necessity of sorting students in the first place" (p. 94). Alas, the situation has not improved much from 1920 until now.

Part II (the interwar years) is entitled "Cacophony," and concentrates on curriculum study. "Chaos" would be a possible title. The stories in Part II are interesting, but there is precious little educational research in evidence. It is asserted (without evidence) that we now have "efforts to study child and human development and to develop teaching materials and approaches to curriculum consistent with that research" (p. 130). Much talk and curriculum development are documented but little "study." The exception is the now legendary "Eight-Year Study," begun by the Progressive Education Association (PEA) and rescued by none other than Ralph Tyler; Tyler would dominate education measurement and research policy for another 50 years. One of several innovations in the Eight-Year Study was explicit attention to "opportunity to learn" (OTL), a concept we ahistorians thought we invented for the Second International Mathematics Study (SIMS) in 1980. Lagemann understates her case yet again when she writes, "new beliefs associated with evaluation would take decades to seep fully into educational policy and practice" (p. 139). Tyler recruited an amazing staff for his efforts to use learning objectives to guide test construction. Working with him were Hilda Taba, Bruno Bettelheim and Lee Cronbach. I read with amazement of the projects of the PEA. One of the many books that resulted was Louise Rosenblatt’s Literature as Exploration, which gave birth to "reader response." Margaret Mead was an intellectual leader. The climate also gave birth to the University of Chicago’s Committee on Human Development (with Rockefeller support), led by W. Lloyd Warner, who supported and encouraged Allison Davis, a prolific writer and the first Black full Professor in other than the all-Black colleges. The advent of WW II brought a dramatic climate change.
First proposed in the 30s, the Educational Testing Service (ETS) became irresistible with the huge growth of applicants to colleges and universities after the war. "… the conceptions of educational purpose that were advanced by the establishment of ETS and then by the advent of the Cold War for a time turned scholarship in education away from the progressive purposes that had been so central to it during the interwar era" (p. 157). Well said, but just "for a time"?

Lagemann sees the post-war years as a search for "Excellence and Equity," with equity issues competing with excellence. Problems continue for education research, as "The critics’ belief that the schools were mediocre because educationists were intellectually vapid carried the day" (p. 161). She still sees an academic emphasis in the 50s, just before my time in education research, but where was it? The 50s were the years of the great curriculum development projects—UICSM (the ‘new math’) and the Physical Science Study Committee (PSSC), based in the disciplines but not guided by education research. If Max Beberman and Jerrold Zacharias had only done more formative evaluation, they might have avoided the rejection and demise of their efforts. Careful field testing of Zacharias’s Man: A Course of Study (MACOS) would have revealed the deep passions of the creationists and conservatives and given a chance for revision and more gradual introduction. I had not realized that Jerome Bruner worked closely with Zacharias on MACOS. The initiative failed in spite of the sale of one million volumes of course material over 20 years, and one result was that the National Science Foundation withdrew its support for curriculum development. It is no accident that this same period saw the emergence of the field of program evaluation, beginning as curriculum evaluation. Many of us date that emergence from the appearance in 1963 of Lee Cronbach’s paper, "Course improvement through evaluation," followed closely by seminal works from Michael Scriven and Robert Stake. The story of educational evaluation as a field is missing from this history.

On the other hand, the "Theory movement" in Educational Administration gets much attention. It was initiated by Jacob Getzels (a psychologist) and was based in social science. This was a break from previous developments, which concentrated on surveys and improvement of practice based in experience. Later, Daniel Griffiths said it had "moved educational administration" (cited p. 182), but the movement failed after only 15 years. Educational Administration became enamored (again) with organization theories and grew increasingly apart from the feelings and beliefs of teachers and principals. Another story untold in this history is of the debate between Griffiths and Thomas Barr Greenfield about the direction of teaching and research in educational administration, with Greenfield stressing the importance of values and Griffiths holding out for a value-free discipline. The story from Greenfield’s perspective is told in Greenfield and Ribbins (1993). The late T. B. Greenfield was a Canadian.

Lagemann’s story begins to jump forward and backward in time and to become fragmented as she tries to mention as many as possible of the diverse developments from the 60s to the 90s: the Cooperative Research Program (Tyler again), the National Institute of Education, Research Centers, Regional Laboratories, ERIC, the National Defense Education Act, and Head Start, to name only a few. And yes, there appeared the National Assessment of Educational Progress (NAEP—Tyler again!). Initially blocked by the Chief State School Officers, Tyler was able to get it going by handing it over to the Education Commission of the States (dominated by the ‘Chiefs’). Lagemann’s account is insightful as she brings out the difficulties Tyler and colleagues faced in building tests that would explain variations in school achievement. The federal commissioner behind NAEP, Frank Keppel, wanted data to guide federal focus on "equal educational opportunity," but he did not realize "that the meaning of opportunity could not be really ‘clear’ until social scientists became more sophisticated in their understanding of outcome variables" (p. 193). Lagemann might have mentioned that we are not there yet! Another illustration of the limits of our understanding at the time (and still) was provided by the "Coleman Study." James Coleman’s monumental work is as good an example as any of the perils of large-scale surveys of schooling—measurement, statistics, interpretations, and applications. Coleman interpreted his results, as did others, as showing that compensatory education, or even education in general, did not help poor and minority students, much to the dismay of advocates. Every aspect of his study was attacked. It was never discredited, but it was marginalized. None of this slowed the expansion of Title I programs for the improvement of education of disadvantaged pupils because the programs were exceptionally popular with the public.
The National Institute of Education (a Nixon initiative) was not so lucky. It was plagued by low funding, indifference in Congress and turmoil over the bombing of Cambodia (Kissinger’s name is not mentioned), and a desire for quick, practical results. Professional associations and education groups were indifferent, leading to its demise in 1985-86, "just as promising new directions were emerging in education research" (p. 211).

"New directions" described in the final chapter begin with cognitive science and qualitative methods, "by the 1990s" (p. 212). Egon Guba, a pioneer in the development and use of qualitative methods since the early 60s, is not mentioned, another example of a gap in Lagemann’s perceptions. Of course there have to be some omissions, but some developments (and developers) should not be left out. Piaget finally gets a nod in his own right (under cognitive science). The Harvard Center for Cognitive Studies (Bruner and Miller) gets deserved recognition, and we find a section on "Systemic Research" with a superficial reference to the "Follow Through" program. Its scope, duration and controversy deserve more attention, as I elaborate later. Now that we are reaching the end, we can identify other gaps. Robert Boruch’s excellent book on randomized field experiments gets brief mention, but the most important publication for experimental methods does not—Donald T. Campbell and Julian C. Stanley’s Experimental and Quasi-experimental Designs for Research.

An important thread in the history of education research is statistical method, which for Lagemann seems to end with the correlation coefficient. During my doctoral studies in the early 60s, we debated at length the "unit of analysis" question—when and where should the unit of analysis be the student, or the classroom, or the school? We had no satisfactory answer. Two decades later, Professor Harvey Goldstein published the first book on "multilevel" analysis and went on to provide a comprehensive answer—how to include student, class and school-level data in the same analysis. Goldstein is British, but parallel work followed in the USA under the title "hierarchical models." Researchers are just beginning to learn these new methods. Another development important enough to deserve mention has been the emergence of item-response models (usually erroneously presented as item-response theory, IRT). Large-scale testing everywhere now derives scores and equates tests using item-response models. Computer-assisted instruction (CAI) does not appear in the history, in spite of the decades of work and millions of dollars spent on futile efforts. The University of Illinois’s PLATO program alone was significant enough to deserve mention. Efforts to link research and practice are justifiably treated, and Lawrence Stenhouse (another foreigner!) is recognized for his intellectual leadership at the University of East Anglia. Home town boy Patrick Suppes’s monumental volume is overlooked. One senses that Lagemann, who says she worked on the book for over nine years, was hurrying to finish. She does not stop here, however, to our benefit.

Since any work such as this will have blank spots, questionable emphasis and unavoidable superficiality, we turn to the "Conclusion" section, hoping for helpful overview and synthesis. We are not disappointed. The section opens with a discussion of Dewey’s The Sources of a Science of Education (1929), drawing from this late work some of the author’s insights into the problems of education research. Lagemann ends the introductory section with this paragraph:

With uncanny prescience, Dewey thus pointed to problems that became central to educational scholarship during the first decades of the century. Despite the new directions of recent years, those problems have remained central until century’s end (p. 232).

So what are these problems, according to Dewey and Lagemann? There are three clusters, named by Lagemann. Following them, we look at her advice, "What’s to be done?"

Problems of Status, Reputation, and Isolation

The uncertain status of "educationists" is described over and over again by Lagemann. Those of us laboring under that banner are all too familiar with it. Like others from "reputable" academic departments, I could often deflect it, coming as I did from mathematics and statistics, but never evade it completely. Lagemann calls on Carl Kaestle’s paper, "The Awful Reputation of Education Research" (reference 4, p. 282) to bolster her case. Our reputation in the larger scholarly community is, with few exceptions, poor to lousy. In part because of our status and reputation, we are often isolated, to our everlasting disadvantage. It is little solace to read that these problems have persisted for more than a century. Lagemann’s study, however, is richer for her straightforward inclusion of this aspect of our troubles.

A Narrow Problematics

Behaviorism and genetic determinism are cited as frameworks that have led us to define our research questions too narrowly. I agree but objected in my notes that Lagemann laid too much blame on Thorndike, only to find her admitting this on page 235. Who among us will disagree with her: "I find the early problematics of educational study deeply troubling" (p. 236). As a teacher of research methods over many, many years, I found myself pushing harder and harder to broaden the scope of students’ thinking, only to have to pull it back again to a problem achievable in finite time. To adapt a current catch phrase, "Think globally, research locally." Lagemann finds the field "shapeless" c. 1890 but well shaped by roughly 1920. The latter seems too kind. She again gives considerable attention to educational administration—too much, in my opinion—but I agree wholeheartedly that "Owing to the technical and individualistic orientation that was introduced into education as it became established in universities, scholars tended to investigate pedagogical, administrative, or policy questions in education without taking up their social implications" (p. 237).

Problems of Governance and Regulation

"… successful innovations in education are more dependent on entrepreneurship than on the validity of the research that supports them" (p. 238). Right on! Also indisputable is, "vaguely described innovations that are based on untested claims are worrisome" (p. 239, cf. Kilpatrick’s ‘Project method,’ c. 1920). I would argue that precisely described innovations based on narrow testing are also worrisome, e.g. Direct Instruction. Lagemann laments that in education we have no counterpart to the Better Business Bureau or an equivalent to the Food and Drug Administration. She offers reasons why we do not, but I am not persuaded that such organizations are models to which we should aspire. As with the more prestigious sciences, I believe that we have a reasonable system of peer review and critique that will eventually reveal bogus innovations and questionable practices. The House panel discussed earlier was an example of the latter, and there is a current example of considerable interest. A researcher at the Manhattan Institute published a study claiming to show that a Florida scheme that links average school test scores with vouchers has brought about impressive gains in school achievement. (Low-scoring schools that receive a grade of ‘F’ one year risk having all their students offered vouchers if they do not improve the next year. In the first two years, 100% of ‘F’ schools achieved at least a ‘D’ the next year. The issue is whether achievement really went up and, if so, whether the changes can be attributed to the voucher threat.) The report appears not to have received the usual peer review. Soon after its appearance, two university scholars undertook re-analyses of the data (published on the State of Florida website) and published critiques questioning the conclusions of the Manhattan Institute report. The debate is far from over, but policy-makers have much more information to use in considering whether provision of vouchers should be linked to test scores.

What’s to Be Done?

Intervene in the university. Lagemann argues persuasively for attempts at innovative organization of universities to foster relevant research. Here again, her own narrow problematics limit her perspective. There have been notable interventions in universities outside the USA, most ambitiously in the Ontario Institute for Studies in Education of the University of Toronto—one hour’s flight from New York, but apparently a world away. In 1965 the founders of OISE (as it was then) attempted strenuously to break the mold that kept education research from helping to make important improvements to elementary and secondary education. OISE was set up as a well-funded, independent college with a tripartite mandate: (a) Carry out research on educational problems, (b) Offer graduate education in research and practice, and (c) Establish "Field Centres" where researchers would work with teachers and school officials to identify problems, design research and communicate research results. Thirty-five years later, I (a charter faculty member) believe that the experiment has been at best a modest success. Reasonable colleagues may disagree, but I believe that such an ambitious effort, so close to the USA, could have illuminated the argument that attempts should be made to intervene in universities. The London Institute of Education, Deakin University in Australia and the Australian Council for Educational Research might also be examined. Attempts have been made and need to be taken into any comprehensive account.

Foster a stronger professional community. Here I part company with Lagemann. As a long-time member of AERA (membership number 500), a sometime member of APA (Division 10), an intermittent member of the National Council on Educational Measurement (NCME), a strong member of the Canadian Council on Research in Education (CCRE, Past-president of the affiliate Canadian Educational Research Association), and one-time member of the British Educational Research Association (BERA), I am not persuaded that a stronger professional community can solve our problems. The "professional community" suffers from the problems of status, reputation and isolation already mentioned. We need more detailed guidance if we are to build stronger professional communities that will strengthen education research more than they now do.

Scholars should more commonly come to acknowledge their responsibility to educate the public about education and about education research. Right on again, Lagemann, but there is nothing to prevent entrepreneurs with bogus innovations from embarking on their own "education" of the public. The researcher from the Manhattan Institute mentioned above has been very successful in gaining publicity for his conclusions, exaggerating them in the press according to the critics. There are numerous excellent examples, no doubt, but I only observed one at first hand, and it is on the public record. Superficial attention to the "Follow Through" program was mentioned above—superficial because a large-scale field experiment emerged in the 70s under the title "Planned Variations." Depending on who was counting, 17 to 21 different variations on early childhood education were given funding by the US government to demonstrate their approach in many schools as part of a very large comparative study. Evaluation of the variations was first contracted to the Stanford Research Institute (SRI, still affiliated with Stanford at that time) but subsequently taken away from SRI and awarded to Abt Associates. The evaluation mandated by the then Office of Education so enraged some of the sponsors of "variations" that the Ford Foundation asked Ernest House, Professor at the University of Colorado, Boulder, to convene a panel to review the evaluation. The story is too long to tell here, but it is fully reported. Relevant here is that Professor House met with staff of key members of Congress to brief them about his panel’s report and gave numerous interviews. Articles were published in popular journals and magazines as well as in scholarly journals.
No one can say with confidence what was the effect of this attempt to "educate the public," and perhaps more importantly educate some decision-makers, but it can be noted that the Follow Through program continued for a number of years in spite of negative evaluation findings by Abt Associates. Lagemann argues that schools and departments should discuss how such "training" might be provided. Such discussions should certainly be encouraged, perhaps by the strong professional communities.

Look at history. "History can perhaps become an instrument of reform" (p. 246). Agreed, provided one takes a broad view of history. I am skeptical that a careful examination of one’s own navel, however large and well developed it might be, is the best way to find improvements in one’s quality of life. I sense Lagemann agrees and means a wide examination of history. Her book, however, does not reflect this broad view. The troubling history of education research is yet to be written.

 



Cite This Article as: Teachers College Record Volume 104 Number 1, 2002, p. 99-108
https://www.tcrecord.org ID Number: 10742
