State Assessment Becomes Political Spectacle--Part III: Defining ASAP: Scripts and Readings
by Mary Lee Smith, Walter Heinecke & Audrey J. Noble - September 13, 2000
Part III of a serialized article on the evolution of the state assessment system in Arizona in the 1990s
...Continued from Part II: Theoretical and Empirical Foundations of the Research
PART III. DEFINING ASAP: SCRIPTS AND READINGS
WHAT WAS ASAP?
The Arizona Student Assessment Program (ASAP) was the official state assessment policy from its inception in 1990 to its radical revision in 1995. A complex program, it included content standards, or state curriculum frameworks, a set of state tests, and various accountability mechanisms. The Arizona Department of Education derived its authority for administering the program from Arizona Revised Statutes 15-741. Analysis of state documents shows that the formal intent of the program was to increase accountability to the state's curriculum frameworks and to move schools in the direction of greater emphasis on higher-order thinking, complex problem-solving on real-world problems, integrated subject matter, and application of basic skills. In its entirety, the program included the following elements.
1. Arizona Essential Skills. The state curriculum frameworks in reading, writing, mathematics, science, social studies, health, foreign language, music, performing and visual arts purported to reflect high levels of expectation for all students, application of basic skills, problem-solving abilities, and higher-order thinking. They included benchmarks of what pupils ought to know at grades 3, 8, and 12.
2. Performance Assessments. Assessments of clusters of Essential Skills were administered to all students in grades 3, 8, and 12, in the form of ASAP Form D, the on-demand or "audit" form of the state assessment program. The state published guidelines intended to ensure that Form D would be administered under standard conditions and scored under the auspices of ADE, using the "generic rubric," a 4-point scoring guide. Form D tested in integrated style (students had to write an essay or some other extended form in response to a reading assignment with an embedded math problem). Four variations of Form D (D-1 through D-4) were to be phased in over four years, each testing one-quarter of the Essential Skills. ASAP Forms A, B, and C consisted of performance assessments in reading, writing, and math to be used for preparing pupils to take Form D, as instructional packets, or for district assessments (see item 4). Forms A, B, and C tested the content areas separately, purported to measure all the Essential Skills, and were administered and scored by teachers, also using the generic rubric, rather than at a central scoring site, as Form D was. Spanish-language versions of all forms of the performance assessment were available. There were guidelines for teachers to modify test administration conditions for handicapped students. The performance tests were developed by Riverside Publishers, which also held contracts for scoring services and technical reports.
3. Norm-referenced testing. The Iowa Test of Basic Skills and Tests of Academic Proficiency provided a means for comparing the achievement of Arizona schools with that of a national norming sample. A limited battery of tests was given to students in grades 4, 7, and 10 during the fall months.
4. District Assessment Plans (DAP). The DAP served as a compliance tool. Every district had to submit a DAP each year to the Arizona Department of Education, which reviewed and approved it or asked for revisions. The plan specified the method by which each Essential Skill would be measured and the grade level at which it would be measured. DAPs provided assurances that districts would measure students' mastery of the Essential Skills by grades 3, 8, and 12 (each district set its own level of mastery). Districts could choose among three methods for their DAP testing: ASAP performance assessment Forms A, B, or C; a system of portfolio assessments; or criterion-referenced measures. Either of the latter two was acceptable to the ADE if the generic rubric could be applied to the results. In response to a 1994 policy adopted by the Arizona Board of Education, ADE planned to use a revised version of the Form A assessments as a graduation competency battery.
5. Essential Skills Reporting Documents (ESRS). Each district was required to report annually to ADE the number and percentage of pupils who had attained mastery of the Essential Skills, along with the results of achievement testing and non-test indicators (e.g., amount of time spent on homework, number of books read, and the like).
6. Report Cards. In June of each year the Arizona Department of Education issued report cards for each student, school, and district, as well as for the state as a whole. The state report card reported descriptive statistics on the assessments, aggregated to the county and state levels. Demographic data and non-test indicators were also reported. School and student report cards were proprietary, with individual reports to parents and school reports to the district. Other reports were public documents.
7. District Goal-setting. Districts were to report annually to ADE, detailing their goals for the subsequent school year, based on the results of all the assessments. In addition, the report listed the strategies for reaching those goals, along with budgets and timelines for implementing them.
Looking back at the paragraphs above, one gets the impression that ASAP was complex but clearly delineated. This impression is pure historical fallacy, however. It is doubtful that any one individual during 1995, whether policy actor, politician, or educator, had the whole list in mind. From the vantage point of the average teacher or principal, one would "know" ASAP only as a particular set of performance tests and perhaps a set of reform ideals. What one knew was what one had experienced directly. This included statements in the press or during meetings that "ASAP was the best we know about how students learn," or that ASAP represented authentic assessment, integrated learning, a new role for teachers, a way out of the quagmire of high-stakes standardized testing and the stale brand of teaching that seemed to follow it. The accountability aspects and intents of the program came to awareness later, and to some, not at all. Our surveys and qualitative studies showed that ASAP was not the same thing for everyone. Later, these ambiguous and multiple meanings would explain the confused reaction to Keegan's suspension of ASAP Form D. Those who equated Form D with ASAP found it difficult to grasp that the DAP, accountability, norm-referenced tests, and Form A provisions were still in place. The conceptual confusion also intruded on the research process. When anyone mentioned ASAP, we persistently had to deconstruct which ASAP they had in mind at that moment.