
State Assessment Becomes Political Spectacle--Part I: Introduction to Policy Stories and Policy Studies(i)

by Mary Lee Smith, Walter Heinecke & Audrey J. Noble - September 13, 2000

The authors set the stage for the story of assessment policy in Arizona in the 1990s that they will unfold in the segments to follow.


It is Friday, January 21, 1995, a typical brilliant winter day in Phoenix. Tired but satisfied, the policy researcher watches as the printer churns out the last pages of her report. For the last three years she has been studying the consequences of the Arizona Student Assessment Program -- everyone calls it "ASAP" -- the state assessment policy. The centerpiece of the program is a performance test, which its founder, former state Superintendent C. Diane Bishop, has touted as the cutting edge of the national movement to reform schools by imposing alternative forms of testing. ASAP tests "the way kids learn best" -- that’s been the slogan, by now familiar. It "makes teachers teach" toward application of knowledge rather than toward drilling bits of disconnected basic skills, toward "higher-order thinking skills," toward "real" reading and writing. The researcher’s findings show that this ambitious program has had mixed effects, not surprising given its short history and implementation difficulties. But definitely, these problems could be fixed, with some good collaborative work and more time.

The phone rings. Beth is calling. Beth, who is not only an informant to the research study, but a virtual poster child for ASAP. As an elementary school principal she has thrown her all into embracing its aims.

"Have you heard the news? ASAP is dead." In rueful and ironic tones, Beth relates that at a Capitol press conference, the newly elected state Superintendent, Lisa Graham Keegan, has "suspended" ASAP Form D, the performance assessment, which was scheduled to be administered the very next week. Keegan based her decision on a report, by the test publishers themselves, no less, that said that Form D did not have high enough reliability and validity to give the state accurate accountability information.

Beth muses, "How could Lisa do that? How could she just undo a policy that the legislature mandated? Does this mean the state is going backwards? Back to standardized tests and drill and kill? What about all the work we’ve done? Teachers at this school are going to freak."

The researcher is too taken aback to provide much consolation. Having long since given up on speaking truth to power, she has held the modest expectation that research could contribute to reasoned debate about the effects of state assessment policy on school practices. But if ASAP is no more, who cares what its consequences were? It will take her a year to regroup her research agenda.

But the spectacle is far from over. Two weeks later, Keegan will announce that ASAP is not just suspended, but due for a complete overhaul. She will commission an Academic Summit, which will develop new state standards -- in one month’s time -- with new state assessments to follow. These will emphasize basic skills and vocational skills and the accountability that has been sorely lacking up to now; that is, under the watch of her predecessor.



Fifteen months later, March 25, 1996, what we thought would be the finale to the Academic Summit turns out to be a mere complicating action. As the summiteers and other educational policy watchers take their seats in the board room, the members of the Arizona Board of Education assume their high-backed leather chairs behind a long curved table raised to imposing height above the spectators. The Board meets today to consider -- we expect them to vote, yea or nay -- the new academic standards developed during the Academic Summit. The summit and standard-setting process have dragged on more than a year already, whereas state Superintendent Keegan had originally thought that three months would be enough time to rewrite the state curriculum frameworks and develop tests to measure them. The best-laid plans had also imagined that the process would be simple and managed in such a way as to feature the concerns of parents and the corporate community rather than those of bureaucrats, professionals, and curriculum experts. Sweeping out the old meant rejecting the progressivism deeply embedded in the Essential Skills and ASAP, which, she believed, had mistakenly abandoned traditional drill and practice, the basal-and-textbook mode of instruction, and high-stakes testing in favor of student-centered, integrated, higher-order thinking and problem discovery and solving, and the like. The new standards and assessments would instead be clear, measurable, and focused on basic skills. But there was still a constituency for ASAP among the Summit participants that had pushed for progressive educational values, and the standard-setting process had turned contentious and drawn out as each faction struggled for purchase. We had followed this history with interest. But now, finally, at least the Language Arts and Mathematics design teams had worked out their final drafts.
We observers thought they represented compromises between progressive and traditional values -- there was something in them for both sides. So, it was now time for the Board to make them official. Members of the Board had already indicated that they wanted to go forward so that test development could begin. The agenda called for short testimonies from the floor, and then a vote. No one expected much out of the ordinary.

The buzz takes us all by surprise as Governor Symington strides purposefully and rapidly into the room and requests permission to address the Board. Trailing him are members of his entourage, who distribute copies of his prepared address to members of the press. His newly appointed educational advisor, C. Diane Bishop, the former Superintendent, accompanies him but remains silent, in spite of the fact that "her baby" -- ASAP, the centerpiece of her administration -- is about to be trashed. From the looks on their faces, we can see that the Board members and Keegan are just as surprised as the rest of us at this unprecedented intrusion and at what he has to say.

We will have to look later at the text to get the details, but the gist of his remarks is this: that the draft standards show the "reckless drift toward fads and foolishness" that characterizes most of professional educators’ work, and that the standards fail to mention phonics, spelling, or memorization of math facts and the state capitals. The arts standards come in for particular ridicule. Taking questions from the floor, he declares that the state should reject pointy-headed elitist professional jargon and just get back to basics and standardized testing for everybody.

Keegan is clearly flustered and tries to correct what she sees as Symington’s misreading of the standards, but the damage is already done. He leaves, the Board breaks, and the arguing goes on in the audience. "This just throws everything into a cocked hat," sighs a member of a standards design team. "All our work, all our compromises. This is just a signal to his appointees on the Board to follow through on his conservative educational agenda and not give in to the professionals. If we had any doubts before, we now know that ASAP is really dead." Actions of the Board will later prove her prescient.

Another observer offers a different interpretation, one that highlights the confluence of Symington’s bankruptcy and criminal indictments, his plan to seek reelection anyway, Keegan’s own gubernatorial ambitions, and her open criticisms of his administration and calls for his resignation. Word was, he had even tried to lure her out of town to a governors’ meeting on education that coincided with the board meeting. But that we’ll never know.


What do these stories of political intrigue have to do with assessment policy? The conventional view defines policy as the rules by which a society or institution is governed. Then assessment policy must be a state’s rules and programs for determining who and what will be tested, on what sets of content, by what instruments, on what schedules, and how the results of the tests will be aggregated and counted. In addition, assessment policy sets the functions that these testing programs will serve. In the contemporary scene, the functions of state assessments are likely to include accountability for achievement as well as reform of schooling (either to increase academic performance or impose curricular coherence across schools). Most often, assessment policy embraces multiple functions (McDonnell, 1994).

The conventional model of school reform assumes that states develop policies by consensus, state departments develop programs and policy instruments consistent with the goal consensus, and educators respond rationally and predictably to implement the programs. In the case of assessment policy, a state adopts academic standards and frameworks to bring about curricular coherence and increase achievement. Tests are chosen as the tools or instruments to attain policy goals (McDonnell & Elmore, 1987). By placing consequences (high stakes) on test results, the state expects that students, teachers, and school authorities will focus their energies on the policy targets. The conventional model proposes that schools change because of state policy, in mechanistic ways (though the precise mechanisms are unknown), rationally, with a sense of shared vision, and more or less predictably and uniformly. Even slow or variable response can be explained on rational grounds; e.g., by lack of resources or knowledge. Furthermore, a state’s standards and assessments are assumed to be a fixed object, an invariant target toward which schools aim their arrows.

Against this conventional model, we consider an alternative, political model. In the alternative model, assessment policy is more like a moving target that is variously constructed by political and policy actors as well as the educational practitioners who must respond to it. Politics at both a macro and micro level influence these constructions. The process contradicts assumptions of rationality and uniformity, occasionally exemplifying the spectacular, though often hiding the ugly pushing and tugging for power, resources and political agendas behind a facade of rationality. By politics we mean not the usual view of the contest between Republicans and Democrats but the dynamic process wherein partisans contend for power, prestige, position and ideology in official (governmental or institutional) capacity. The enactment of assessment policy is as much a symbol over which the partisans contend as it is a deliberate technique to change schools. We believe that this alternative model provides the best explanation for the birth and death of ASAP -- the fundamental alteration of a state’s assessment policy.

The term political spectacle comes from Murray Edelman (1985; 1988), whose ideas suffuse this paper. He argued that politics and policy are matters of symbol, myth, and spectacle constructed for and by the public. Far from being involved democratically in policy formation and implementation, the public is mere spectator to the political drama that unfolds before it. By extension, the policy researcher often is unaware that he or she is just another spectator, or worse, another actor in the play. According to Edelman, "the conventional view [assumes] that rational choice may never be optimal, but is a central influence in decision-making and policy making.... [But] the phrase rational choice is one more symbol in the process of rationalization.... [A]ny political analysis that encourages belief in a secure, rational, and cooperative world fails the test of conformity to experience and to the record of history" (Edelman, 1988, pp. 4-5).

Next – Part II: Theoretical and Empirical Foundations of the Research




Ball, S.J. (1987). The micropolitics of the school. London: Methuen.

Ball, S.J. (1990). Politics and policy making in education: Explorations in Policy Sociology. London: Routledge.

Berliner, D.C. & Biddle, B.J. (1995). The manufactured crisis: Myths, fraud, and the attack on America’s public schools. Reading, MA: Addison-Wesley.

Edelman, M. (1985). The symbolic uses of politics. Urbana, IL: University of Illinois Press.

Edelman, M. (1988). Constructing the political spectacle. Chicago, IL: University of Chicago Press.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. Wittrock (Ed.), Handbook of research on teaching (pp. 119-161). New York: Macmillan.

Flexer, R.J. & Gerstner, E.A. (1994). Dilemmas and issues for teachers developing performance assessments in mathematics. Los Angeles, CA: UCLA Center for the Study of Evaluation/Center for Research on Educational Standards and Student Testing.

Gaddy, B.B., Hall, T.W. & Marzano, R.J. (1996). School wars: Resolving our conflicts over religion and values. San Francisco: Jossey-Bass.

Hall, P.M. (1995). The consequences of qualitative analysis for sociological theory: Beyond the microlevel. The Sociological Quarterly, 36, 397-423.

Hero, R.E. (1998). Faces of inequality: Social diversity in American politics. New York: Oxford University Press.

House, E.R. (1991). Big policy, little policy. Educational Researcher, 20, 21-26.

Kingdon, J.W. (1995). Agendas, alternatives, and public policies. New York: HarperCollins.

Lipsky, M. (1980). Street-level bureaucracy: Dilemmas of the individual in public services. New York: Russell Sage Foundation.

Marshall, C., Mitchell, D. & Wirt, F. (1989). Culture and educational policy in the American states. New York: Falmer Press.

McDonnell, L. (1994). Assessment policy as persuasion and regulation. American Journal of Education, 102, 394-420.

McDonnell, L. & Elmore, R. (1987). Getting the job done: Alternative policy instruments. Educational Evaluation and Policy Analysis, 9, 133-152.

Miles, M. & Huberman, A.M. (1994). Qualitative data analysis. Newbury Park, CA: Sage.

Noble, A.J. (1994). Measurement-driven reform: The interplay of educational policy and practice. Tempe, AZ: Arizona State University.

Noble, A.J. & Smith, M.L. (1994). Old and new beliefs about measurement-driven reform: Build it and they will come. Educational Policy, 8, 111-136.

Nolan, S.B., Haladyna, T.M. et al. (1989). A survey of Arizona teachers and school administrators on the uses and effects of standardized achievement testing. Phoenix, AZ: Arizona State University-West Campus, Education and Human Services.

Rein, M. (1976). Social science and public policy. New York: Penguin Books.

Schön, D.A. & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. New York: Basic Books.

Shepard, L.A., Flexer, R., Hiebert, E., Marion, S., Mayfield, V. & Weston, T. (1995). Effects of introducing classroom performance assessments on student learning. Los Angeles, CA: UCLA Center for Research on Educational Standards and Student Testing.

Smith, M.L. (1996a). The politics of assessment: A view from the political culture of Arizona. Los Angeles, CA: UCLA Center for the Study of Evaluation/Center for Research on Educational Standards and Student Testing.

Smith, M.L. (1996b). Reforming schools by reforming assessment: Consequences of the Arizona student assessment program. Los Angeles, CA: UCLA Center for the Study of Evaluation/Center for Research on Educational Standards and Student Testing.

Smith, M.L., Edelsky, C. et al. (1989). The role of testing in elementary schools. Los Angeles, CA: UCLA Center for the Study of Evaluation/Center for Research on Educational Standards and Student Testing.

Smith, M.L., Noble, A.J., Cabay, M., Heinecke, W., Junker, M.S. & Saffron, Y. (1994). What happens when the test mandate changes? Results of a multiple case study. Los Angeles, CA: UCLA Center for the Study of Evaluation/Center for Research on Educational Standards and Student Testing.

Smith, M.L. & Rottenberg, C. (1991). Unintended consequences of external testing in elementary schools. Educational Measurement: Issues and Practice, 10, 7-11.

Stone, D. (1997). Policy paradox: The art of political decision making. New York: Norton.


i The work reported herein was supported under the Educational Research and Development Centers Program, PR/Award Number R305B60002, as administered by the Office of Educational Research and Improvement, U.S. Department of Education. The findings and opinions expressed in this report do not reflect the positions or policies of the National Institute on Student Achievement, Curriculum, and Assessment, the Office of Educational Research and Improvement, the U.S. Department of Education, or the National Center for Research on Educational Standards and Student Testing (CRESST). A more extensive version of this paper exists as a CRESST report, "Politics and Assessment Policy."


ii Examination of documents reveals that rightwing organizations typically extol the virtues of teaching reading by the phonics method and math by memorization of math facts. These pedagogical techniques are viewed as within the purview of the family and not the professions (Phyllis Schlafly claims that any mother or grandmother can teach a child to read, for example). They repudiate the teaching of higher-order thinking, whole language and bilingual education, and other recommended approaches of progressivism and constructivism (Dewey and Vygotsky’s communist leanings make these perspectives suspect). Especially, Outcomes-Based Education, which they define in an all-inclusive way, comes under fire. These preferences have the characteristics of fixed ideologies, in that they seem founded in biblical interpretation, are immune to fair debate, and tend to demonize the opposition. We do not imply, however, that all proponents of phonics-based education are part of this ideological community, only that there is a pattern that connects political and pedagogical conservatism.

iii A sample from the Generic Reading Rubric: "A 3 response demonstrates an adequate understanding of the text. There is evidence of understanding of both the gist and specific parts of the text. It is not as complex as a 4 response. It may include minimal extensions, such as connections to other texts, experiences, abstractions and/or generalizations. All elements of the question are addressed in the response."

iv A State Board member said this: "I have always been concerned that seat time should not be a graduation requirement. Ever since I was in high school, seat time was all you really needed to get a diploma. And we all agreed that we wanted a diploma to mean something, to have some stakes to it, some risks to it, perhaps even get to the point ultimately where there could be a guarantee to the business community that if our students have a diploma that they can count on them having certain skills."

v Data show that the SAT scores in Arizona were above the national average and had been level for four years.

vi An informant remarked, "There is a feeling at the department that Diane is advising Fife on who to appoint to the board, because of animosity toward Lisa. Lisa had this whole plan ready to go, and then the membership of the Board changed and she has been stymied in everything she has tried since then."

Cite This Article as: Teachers College Record, Date Published: September 13, 2000
https://www.tcrecord.org ID Number: 10454, Date Accessed: 10/24/2021 11:40:04 PM


About the Author
  • Mary Smith
    Arizona State University
    Mary Lee Smith is professor of Educational Leadership and Policy Studies in the College of Education at Arizona State University. Her research interests include the effects of state-mandated measurement-driven reform on schools. Among her publications is Analysis of Quantitative and Qualitative Data (Handbook of Educational Psychology).
  • Walter Heinecke
    University of Virginia
    Walter Heinecke is an Assistant Professor in the Department of Educational Leadership, Foundations and Policy in the Curry School of Education at the University of Virginia. His research interests include the impact of policy on practice in education. He has conducted research on the impacts of standardized testing on elementary school instruction, desegregation, educational technology and school reform policy. He is co-editor of Advances in Research on Educational Technology.
  • Audrey Noble
    University of Delaware
    Audrey J. Noble is the Director of the Delaware Education Research & Development Center at the University of Delaware. Her current policy research examines Delaware's efforts to reform education through standards, assessment, capacity-building, and governance. Among her recent papers is "Old and new beliefs about measurement-driven reform" in Educational Policy.