State Assessment Becomes Political Spectacle--Part II: Theoretical and Empirical Foundations of the Research


by Mary Lee Smith, Walter Heinecke & Audrey J. Noble - September 13, 2000

Part II of a serialized article on the evolution of the state assessment system in Arizona in the 1990s


...Continued from Part I: Introduction to Policy Stories and Policy Studies


PART II. THEORETICAL AND EMPIRICAL FOUNDATIONS OF THE RESEARCH

As strong as our statements about the political nature of assessment policy might be, they emerged rather late in the history of this project. Vignette 1 hints at a struggle to cope with the political events surrounding our conventional study of the effects of Arizona state assessment. Where we ended up was not where we started. In its ultimate form, this paper rests on both empirical and theoretical foundations.

To explain these data, we turned to several theoretical frameworks in the literature. Marshall, Mitchell, and Wirt (1989) argued that there is a reliable connection between policy and a state's political culture. They define culture as persistent patterns of values that can predict the behavior of policy actors who contend with each other for the power to allocate these values in the form of policies. They posited three alternative political cultures -- moralistic, individualistic, and traditionalistic. Through their empirical study, the authors identified Arizona's political culture as traditionalistic, wherein the dominant values behind state policy are efficiency and choice rather than quality or equity. Traditionalistic state policy cultures emphasize "the leading role of economic elites in shaping public decisions, with a consequent fusing of private and public sectors and a limitation on citizen participation" (p. 118), as well as a distrust of bureaucracy, labor unions, and professional (e.g., teacher and administrator) authority and concerns (e.g., professional development and certification standards). Furthermore, there are strong anti-taxation sentiments and persistent demands for accountability in political cultures such as Arizona's. Localism is valued over central control of public policy. Equity issues -- the recognition of disparities and injustices among groups and the use of policy to correct them -- receive low priority. Government is viewed as a means of maintaining the existing order (rather than as a marketplace or as a commonwealth, as it is viewed in the individualistic and moralistic cultures, respectively). Though specific policies and partisan political configurations may change, the dominant political culture persists and reasserts itself over time. We are mindful of the criticisms of political culture theory but still believe it has explanatory power. For our purposes, a state political culture predicts which policies are likely to persist.

Other structural issues that transcend state political culture must also be considered. First, the national discourse of crisis due to public school failure serves as a backdrop to assessment policy change, beginning with A Nation At Risk and so often reinforced that evidence and argument that ought to disconfirm the perspective go unnoticed or are even ridiculed. Second, the national discourse that relates education to the national economy (Ball, 1990; House, 1990) certainly shapes assessment policy as well. That is, the national discourse makes commodities of test scores, attributes economic prosperity to higher achievement test scores, and prioritizes the concerns of corporations over those, say, of the environment or community.

The partisan dynamics in the state during the period of this study also refract the data. Early on, the policy actors were split between the parties. Later, Arizona became virtually a one-party state. The governor, state superintendent, and legislative majorities were conservative Republicans, and the appointments they made to the State Board of Education, Arizona Department of Education staff, and various ad hoc groups reinforced their perspective. The dominant discourse was union-baiting and educator-bashing, federal mandate- and court order-defying. Right-wing extremists often made the news, as did religious conservatives. Assessment policy could hardly be immune from this climate, particularly because of the relationship between political and pedagogical conservatism.

Although political culture provides a matrix for understanding policy activity, it fails to account for the spectacular events in Arizona assessment policy. For that we take the perspective that policy becomes real at the point of interaction: at specific times and places in which particular actors encounter it. These interactions occur at many points: when a policy agenda reaches the table, when it is enacted, administered, and implemented. At each stage, the policy is interpreted by the actors involved in ways that may have little to do with the official, written policy itself. Controversial policies become part of the political spectacle. They take on the characteristics of symbols as much as they become technical instruments aimed at real changes. The garbage can theory of policy making suggests that policy entrepreneurs identify problems to link to available solutions in such a way as to advance careers or achieve other gains (largely out of sight of the public, Edelman would say). The linkages are possible only in a short policy window in which the frequently contradictory images and interests of policy constituencies can be obscured long enough to get a policy on the agenda. Once a policy is legislated, it can go through substantial transformations from its founders' original intentions as it is reinterpreted at the levels of administration, implementation, and reaction. These frameworks suggest categories and propositions for interpreting data in this study. Integration of the frameworks would provide a theory of politics in policy, but this paper does not go so far.

We base the validity of the narrative and analysis that make up the body of this paper on three sets of empirical evidence gathered over the entire period of ASAP history. The first set of data was generated by examining documents and interviewing key policy actors who contributed to the enabling legislation and translation of legislation and policy goals into program administration. A detailed account of the methods of collection and analysis is given in Noble (1994) and Noble and Smith (1994). Interviews were conducted in 1993 with 13 policy actors, including legislators, the State Superintendent and involved officials at the Arizona Department of Education, an ADE advisor from the local university, and officials at the Arizona Education Association and Center for Establishing Dialogue in Education. The aim of the interviews was to reveal the intentions of policy actors and their expectations for assessment policy and its consequences, the images they held about the nature of teaching, learning, assessment, and reform, and their perceptions about early reactions to the new policy. These interviews were recorded, and their transcripts provided data for the present analysis. We gathered an extensive array of documents, including legislation, rules and regulations, ADE announcements and newsletters, the assessments and rubrics themselves, and as many of the technical reports as were made available. Direct observations of ADE and State Board open meetings and workshops rounded out the data collection of the policy study and later fed into the analysis reported herein.

The second source of data was a two-year study of initial reactions by schools to ASAP. During the first year of this project, a multiple case study design placed researchers in four elementary schools for a full school year to understand the meanings and actions of educators in particular contexts as they came to terms with state assessment policy and tried to implement it. Classroom observation, interviews with teachers and administrators, and examination of curriculum and testing documents were the modes of data collection. We analyzed data within and across sites. We found that we could account for the local status of ASAP implementation by the following categories: financial and knowledge resources available, the compatibility of local images and ideologies with those of the state policy, and the accountability culture.

During the second year, which coincided with the second year of ASAP Form D administration, we conducted focus group interviews at the four original sites as well as five other, purposively selected schools. The second-year interview agenda consisted of the following parts. An opening statement laid out the direction for the participants: "As you know, the ASAP program was intended to change schools toward more holistic, integrated instruction and to make schools more accountable. We would like to know how schools have reacted to the ASAP program. What does the reading, writing, and math curriculum look like here? How do you see it as consistent or inconsistent with ASAP? What do you think a teacher needs to be able to know and do to implement the ASAP program? How does that fit or not fit with your own knowledge and teaching skill or philosophy? In your view, what has happened at this school as a reaction to ASAP? What if anything has gone on in this school or district in terms of helping teachers teach more holistically (consultants, in-service, collaboration, etc.)? What messages do you get from administrators or the public about the importance of high ASAP scores? What if anything do you do to make sure your students score well on the ASAP?" The transcripts of recorded interviews fed into the analysis of the present study.

In addition to the focus group interviews, we conducted surveys of educators sampled representatively from the state as a whole. Our research questions were as follows: What is the status of change toward ASAP policy ideals from the perspective of teachers? What is the meaning of mandated assessment and the role it plays in their practice? How do issues of resource availability, authority structures, assumptive worlds, and accountability relate to local change? What is the relationship of capacity development and equity to assessment? The questionnaire sent to a representative sample of Arizona teachers was the product of six developmental phases. In the first phase, the analysis from the case studies was used to construct items related to: 1) local status with respect to change toward ASAP ideals; 2) resources for change; 3) power to change; 4) consonant assumptive worlds; and 5) role of testing. In addition, items were constructed that would indicate the teachers' perceptions of equity issues in relation to mandated testing. Many of these items were statements taken directly from participants in the policy and case studies. Items were also constructed to measure teachers' knowledge of the curricular content and pedagogy relevant to ASAP ideals, the amount and kinds of relevant professional development they had experienced, and the opportunity their students had to learn material and tasks that ASAP measures. We also drew items and ideas from previous studies to enlarge our interpretive framework and provide a basis of comparison across time and sites. Telephone interviews of district and school administrators were conducted. Questionnaires were sent to teachers. Response rates were adequate at the teacher level and very high at the school and district level, so that we had confidence in the generality of the findings. Data were analyzed separately for the survey and synthesized across the various components of the study.

The final source of evidence for the present study mirrored the first. An extensive analysis of documents encompassed new technical reports and reports of assessment results, newspaper articles and press releases, legislation both introduced and passed, State Board of Education agendas and minutes, reports and briefing papers of advocacy groups and ADE advisors, archives of the Academic Summit that drafted standards, and the standards themselves as considered, revised, and approved by the State Board.

Interviews were conducted with policy actors: the current and former Superintendent and several deputies, officials and staff at the Arizona Department of Education (both current and former), members of the State Board of Education, legislators on the relevant education committees and their staff, and officials in the local affiliates of the American Federation of Teachers, National Education Association, School Boards Association, and Administrators Association. In the interviews we attempted to uncover the factual basis of the events around the change in assessment policy as perceived by these policy actors. For example, we asked what had led up to the Superintendent's decision to suspend ASAP Form D and the announcement of the Summit -- who was involved, what evidence and argument had contributed to the actions, and what reactions were noted afterwards. In addition, we aimed to uncover the intentions, ideologies, interests, and images that guided the work of these actors. We tried, for example, to ascertain whether the Superintendent's decisions were primarily political or primarily instrumental (Edelman, 1985; Rein, 1976) by asking about perceived differences between the old Essential Skills and the new standards, the old ASAP and the plan for the new standards-based testing, by asking about the contributions of technical vs. political advisors and documents, and by asking about the dichotomy between constructivist and basic skills philosophies, as well as concerns for equity, accountability, professional involvement and training, and opportunity to learn. In all, over 20 interviews with policy actors were conducted during late 1995 and early 1996. We have attempted to preserve the confidentiality of all these actors except for those public figures at the highest level.

In addition to interviews with policy actors, we had access to archives and information from several informants. A legislative intern assigned to the education committee documented the progress of bills related to assessment policy as well as teacher certification, finance, and charter schools that provided a broader political context and revealed the ideologies and actions of legislators and staff, and the sources of influence on their work. Another source of insider information was a videotape of a teleconference between staff and policy actors from Arizona and Delaware, during which issues of assessment policy were discussed. Several participants in the Academic Summit and the subsequent curriculum and technical subcommittees provided extensive time and insight into the standard-setting process and the influences of the ADE and State Board and made internal documents available to us.

Extensive observation supplemented document analysis and interviews. The Academic Summit was observed in all its general sessions. The open sessions of the Language Arts and Workplace Skills design teams were also observed. However, most of the work of the design teams was conducted privately, so that our only source of data on their work comes from the informants. We also observed and recorded four of the thirteen open meetings conducted to present the initial drafts of the Standards. In addition, we observed five meetings of the State Board of Education as they discussed and voted on the approval of the Standards and other aspects of assessment policy. Extensive hand-written notes were taken, and some meetings were tape recorded as well.

The ideas of Martin Rein influenced us as we amassed and addressed these accumulated data. He distinguished the consensual from the contentious and paradigm-challenging approaches to empirical policy research. The consensual approach "proceeds from agreed-upon aims; it asks whether policies and the specific programmes that implement them, work as intended" (Rein, 1976, p. 126). In the contentious approach, the researcher acts as a "moral witness" or "social critic" about government's aims, actions, or nonactions regarding social needs. Rein noted the limitations of positivist science in policy research, in particular the aspiration to produce generalizable, objective, value-free, and definitive propositions. Instead, the aim of the policy researcher should be to develop stories that weave together values, facts, images, and tentative explanations with the setting and characters -- the local context. The test of the story includes the verification of its facts and the coherence of facts and explanations. The researchers, though never completely free of values and biases, can nevertheless strive to restrain and postpone their intrusion into the story and seek review and reaction from others to examine the coherence and plausibility of the story.

The data analysis for this study began with three readings of the data. Next, the data were arranged in an event chronology. A time-by-policy-actor matrix was then developed, taking as major categories those constituencies that had a stake in assessment policy. Examining the data within these categories suggested a narrative line with characters, settings, a plot, and perhaps a moral. We tried as much as possible to represent the voices of the actors themselves and to quote extensively from their public pronouncements and reports. The major difficulty in this process was finding an end to the story, while events in Arizona assessment policy continued to unfold. We are mindful that plot lines are constructions, whereas elements of real life continue on, without inherent beginning, climax, and resolution. Two vignettes were written to portray the sense of spectacle (Edelman, 1988) that permeated the changes in assessment policy. Vignettes are artful and condensed reconstructions of actual data that particularize assertions from empirical research (Erickson, 1986; Rein, 1976). Having written the narrative that tied the pieces of the ASAP story together, we then engaged in a process of internal, structural corroboration. That is, we sought to challenge the facts of the story, accepting only those that could be corroborated by more than one source. We worked reflexively to make sure that the facts as we believe them and the explanations that we spun out fit one another. Finally, we subjected the report to review by two informants we believe are sympathetic to our point of view and one who is impartial.

Like the politics and policies we observe, our own actions as researchers -- our values, definitions, and categories, even our sense of what counts as a fact -- are open to critique. We have given our best efforts to strive for a complete and faithful portrayal of what we have seen and understood and to avoid creating heroes and villains. Much of what we learned was new and surprising. But in at least four respects we brought in prior descriptive and normative categories. First, we have stated elsewhere our belief that the intervention model the state employed in ASAP (based on the Field of Dreams assumption) was naive and self-defeating. We cling to the notion that some intervening processes have to occur if a mandated test is to have any instrumental effects, and that educators must, as interpretive beings, come to grips with the policy, understand it, learn how to change toward it, and reflect on their actions. Second, we have tried to raise issues of social equity in relation to policy from the background, even though they were seldom mentioned by policy actors in Arizona. Third, we have been convinced by experience and evidence that high stakes testing has both intentional and unintentional effects, some deleterious to the processes it is supposed to address. Finally, even with all its faults, there is a professional knowledge base connected to testing. Without being a slave to its technical minutiae, assessment policy makers ought to have at least passing familiarity with it or develop consultative relationships, not with the test developers alone (who themselves must be considered policy actors), but with relevant and thoughtful experts (among whom we do not count ourselves). When policy makers ignore technical issues, they confirm that it is political spectacle that we have before us, and the instrumental aims are a kabuki mask.

We have been immersed in research on ASAP for as long as ASAP itself has lasted. Our study of its consequences, though comprehensive in a conventional model of policy research, failed to tell the whole story or adequately explain what happened. We offer this report to round out the picture and to suggest that there are no policies without politics, and no sensible policy research without political analysis.

Next – Part III: Defining ASAP: Scripts and Readings



Cite This Article as: Teachers College Record, Date Published: September 13, 2000
https://www.tcrecord.org ID Number: 10457
