
Exit Strategies: How Low-Performing High Schools Respond to High School Exit Examination Requirements


by Jennifer Jellison Holme - 2013

Background: Over the past several decades, a significant number of states have either adopted or increased high school exit examination requirements. Although these policies are intended to generate improvement in schools, little is known about how high schools are responding to exit testing pressures.

Purpose: This study examined how five low-performing high-poverty high schools responded to the pressures of Texas’ exit testing policy. The goal of this study was to understand how schools responded to the pressures of Texas’ exit testing system (in terms of curriculum, instruction, and supports for low-achieving students) and how educators reconciled those pressures with other accountability pressures that they faced.

Research Design: This study employed qualitative case study design. Five low-performing high schools were sampled within the state of Texas, each of which served large concentrations of at-risk students. A total of 105 interviews were conducted across the five case study sites over the course of 2 years (2008–2009).

Conclusions: This study found that the Texas exit testing policy created a misalignment between educator and student-level accountability, which had particularly negative consequences for struggling students. The findings of this study suggest a need for policy makers to reconsider the assumptions on which exit tests are based and to more closely consider the goal of exit testing systems in the context of, and in relation to, the larger systems of accountability in which they are embedded.

Over the past several decades, a significant proportion of states have either adopted or increased high school exit examination requirements. These exams, which students must pass by the end of high school to receive a high school diploma, affect a growing majority of students in the United States. In 2001, the 18 states that had enacted exit test requirements enrolled approximately half (49%) of all public school students and 54% of students of color (Center on Education Policy [CEP], 2003, p. 6); by 2010, the 28 states that had instituted exit test requirements enrolled 74% of U.S. public school students, 78% of low-income students, 83% of students of color, and 84% of English language learners (CEP, 2010, p. 1).


Exit examination policies are viewed as a way to “restore value” to high school diplomas by holding high schools and their students to higher academic standards (Achieve, 2005). The policies are intended to push schools to increase the rigor of curricula and to improve instruction, leaving students better prepared on graduation for either college or the workforce. Yet research evidence to date indicates that these policies have yet to yield hoped-for gains in either achievement or attainment (see, e.g., Grodsky, Warren, & Kalogrides, 2009; Reardon, Arshan, Atteberry, & Kurlaender, 2010). Some studies, in fact, have found that high school dropout rates have actually increased in states with exit examination systems, particularly for low-income students and students of color (see, e.g., Ou, 2009; Papay, Murnane, & Willett, 2010).


Existing research, however, fails to provide insight into the reasons that these policies have yet to yield intended benefits: Is there something faulty about the design of the policies or about the ways in which schools are responding? Little is known about how teachers make sense of exit testing policies, how these policies impact instruction, or how exit testing policies impact the supports provided for the most at-risk students in schools.


This study examines high schools’ response to exit testing policies through in-depth case studies of five high schools across five school districts in Texas. The schools were selected because they were the very schools implicitly targeted by these policies: low-performing comprehensive high schools serving large concentrations of at-risk youth. This study explores how the pressures of the Texas exit testing system influenced organizational change and teacher practice through in-depth interviews with 105 educators across the five sites. As this study will illustrate, the exit testing system in Texas has created a misalignment between educator and student accountability, where schools tend to pursue the demands of the former while sidelining the needs of the latter.


THE EVOLUTION OF EXIT TESTING POLICIES


Exit exams have been a part of state testing policy for over three decades (Resnick, 1980). The first wave of exit tests was adopted in the 1970s and early 1980s, when many states began to require that students pass Minimum Competency Tests (MCTs) to receive a diploma. The MCTs were intended to ensure that high school graduates had mastered only minimum basic skills, and the tests were usually set at the eighth-grade level. By 1982, a total of 38 states had passed MCT policies, and 19 required passage of MCTs for the receipt of a diploma (Linn, Madaus, & Pedulla, 1982; Pipho, 1978, 1983; Resnick, 1980; Winfield, 1990).


This focus on minimum skills tests shifted in 1983 with the publication of “A Nation at Risk,” a report calling for higher standards and more rigorous curriculum to shore up national competitiveness (Holme, Richards, Jimerson, & Cohen, 2010). Policy makers responded by instituting tougher graduation requirements and increasing the rigor of high school classes (Tyack, 1993). As this call for higher standards evolved into the standards movement in the 1990s, the MCTs were generally phased out (Hamilton, 2003).


The most recent wave of exit examination policies was enacted in the late 1990s and early 2000s; these tests differ from their MCT predecessors in that they are set to higher standards (at least a 10th-grade level) and focus on higher order thinking skills (CEP, 2008). Much of the rationale for these policies has centered on the need for better educated workers in a “global economy” (Holme et al., 2010). As California’s state superintendent Jack O’Connell stated in explaining the rationale for the state’s exit exam policy, “We face a new economy driven by global innovation that will demand higher-level skills and knowledge to meaningfully enter the work force. It is imperative that all of California’s children reach at least the minimal bar set by our exit exam” (California Department of Education, 2006).


The new generation of exit tests is aimed at changing both school and student behavior: The tests are intended to push schools to target supports to the lowest performing students and to improve the rigor of the curriculum and instruction to which all students are exposed. Exit tests are also intended to motivate students to put forth more effort and master the higher standards being demanded within the tests. Yet relatively little is known about whether or how these expectations are playing out in schools.


UNDERSTANDING SCHOOL RESPONSES TO EXIT TESTING PRESSURES


Although a number of researchers have examined the effects of exit tests on student achievement and attainment, few scholars have investigated how schools respond as organizations when faced with exit testing requirements (Holme et al., 2010). Existing studies of school response have found mixed results: Although many schools have increased supports for lower achieving students (see, e.g., DeBray, 2005; Holme, 2008; Serow & Davies, 1982), studies have also found that many schools have engaged in self-protective behavior, “gaming” the system to create the appearance of compliance through a number of strategies (i.e., teaching to the test or encouraging low achievers to drop out; see, e.g., DeBray; Sipple, Killeen, & Monk, 2004; Vasquez Heilig & Darling-Hammond, 2008).


The majority of these studies, however, were conducted on schools subject to the “older” generation of exit testing systems (with fewer and less rigorous tests) and before schools were subjected to the requirements of No Child Left Behind (NCLB). Today, exit tests constitute just one component of a complex and relatively higher stakes set of federal and state accountability pressures facing schools. Researchers have yet to examine how educators reconcile the pressures added by the newer, more elaborate exit testing policies with other (often competing) accountability pressures that they face.


There has also been relatively little focus within the research literature on the way in which the lowest performing schools respond to those competing pressures. With few exceptions, existing studies have largely focused on the responses of schools across the performance spectrum (high, middle, and low performing) with little specific attention to the responses of the lowest performing schools (see, e.g., Holme, 2008; Sipple et al., 2004; Vogler, 2006, 2008). Most studies to date are also based on relatively superficial data (surveys, administrator interviews); few in-depth case studies have been conducted about how teachers and schools as organizations respond to exit testing policies (for an exception, see DeBray’s 2005 single-site case study) or how such responses may relate to the overall accountability systems of which exit tests are typically just one part.


This study draws on in-depth interview data from five high schools to examine how the various pressures produced by the Texas exit examination system unfolded in those very high schools that are the implicit targets of these policies: low-performing high schools serving large concentrations of at-risk youth. The goal of this study was to understand how schools responded to the pressures of Texas’ exit testing system (in terms of curriculum, instruction, and supports for low-achieving students) and how educators reconciled those pressures vis-à-vis other accountability pressures that they faced.


METHODOLOGY


To examine how educators interpreted and responded to exit testing policies, this study employed qualitative case study methodology, a method that provides a way to examine how educators make meaning of particular policies and how those meanings may relate to local contextual conditions (Merriam, 1988; Yin, 1994). Five low-performing high schools were sampled within the state of Texas, each of which served large concentrations of at-risk students. The sampling strategy first involved identifying all high schools in the state that served a student population that was at least 50% “at risk,”1 as well as at least 50% economically disadvantaged.2 In addition, because it was presumed that there would be some “press” resulting from cumulative performance on all four of the required exit-level tests, high schools were selected that were in the bottom quartile of exit testing performance in the state at the beginning of data collection, with less than 70% cumulative pass rates for 11th graders on all four exit-level exams.


From this initial group of identified schools, the sample was narrowed to seven counties in Texas sampled for geographic diversity. Schools were then classified and ultimately sampled for diversity to capture a range of schools by urbanicity, race, ethnicity, and size. Although the final sample included schools that had very high levels of poverty and were predominantly non-White, the schools represented a range of contexts and district types.


The final sample of schools, which were all given pseudonyms, was: Hudson High School, located in a core urban district; Stewart High School, a low-performing school within an otherwise high-performing district; Vanderbilt High School, located in a central city urban district; Morrison High School, located in a relatively small but very racially diverse suburb; and Kingsbridge High School, located in a segregated suburb adjacent to a large urban center (see Table 1).


Table 1. School Characteristics and Performance, 2008–2009 Academic Year

School Pseudonym | Geographic Location   | % Passing All 4 Exit Exams (First Attempt, Spring 11th Grade), 2007–08 | Enrollment | % African American | % Latino/a | % White | % At Risk | % Econ. Disadvantaged
Hudson HS        | Urban                 | 50% | 1,525 | 12.7% | 79.3% |  5.6% | 82.4% | 84.1%
Stewart HS       | Inner Ring Suburban   | 63% | 2,898 |  7.3% | 77.5% | 13.5% | 61.8% | 63.2%
Vanderbilt HS    | Urban                 | 62% | 1,923 |  2.3% | 92.3% |  4.8% | 68.6% | 78.6%
Morrison HS      | Inner Ring Suburban   | 46% | 2,528 | 36.2% | 49.1% | 12.4% | 44.2% | 58.3%
Kingsbridge HS   | Segregated Suburb     | 58% | 1,818 |  0.7% | 97.6% |  1.5% | 67.3% | 89.5%

Source: Texas Education Agency.



Semistructured interviews were conducted with school leaders, a cross-section of core subject teachers at each school, and district leaders. A total of 105 interviews were conducted across the five case study sites over the course of 2 years (2008–2009), with data collection in individual schools lasting approximately 10–12 months per school. Because the goal of the study was to understand how schools reconciled the pressures of the exit tests with other accountability pressures, interviews were conducted with both core subject teachers in exit testing grades (11th grade) as well as non-exit-tested grades (9, 10, and 12; see Table 2). Interviews were sought and obtained with nearly all the department chairs of the core academic departments across the five schools. Interviews were supplemented by targeted observations of department and leadership team meetings, as well as statistical data from the Texas Education Agency.


Table 2. Summary of Interviews

Role                  | Vanderbilt | Morrison | Hudson | Stewart | Kingsbridge | Total
English/Language Arts |  5 | 4 | 6 | 3 | 2 |  20
Math                  |  4 | 4 | 2 | 2 | 3 |  15
Social Studies        |  4 | 1 | 3 | 3 | 2 |  13
Science               |  1 | 1 | 1 | 2 | 2 |   7
Administrators        |  2 | 2 | 2 | 3 | 2 |  11
Counselors            |  2 | 1 | 1 | 3 | 4 |  11
Other                 |  1 | 2 | 3 | 1 | 2 |   9
District              |  4 | 2 | 4 | 5 | 4 |  19
Total                 | 23 | 17 | 22 | 22 | 21 | 105


Interviews were fully transcribed, and both interviews and observations were coded first for within-case themes, then for cross-case themes. Themes were clustered in multiple ways: by topic area, by level of analysis (district, school), and by role within the organization as appropriate (i.e., by academic department). Findings were checked and rechecked for both confirming and disconfirming evidence (Bogdan & Biklen, 2006; Miles & Huberman, 1994). In cases in which apparent discrepancies arose within analysis, interviews were reread and checked against the coded data (often multiple times) to ensure that interpretation was consistent with the preponderance of data and that conclusions were not overreaching.


As Yin (1993) noted, the goal of qualitative case study research is not about statistical generalization (i.e., making assertions about the average effect of a policy) but about theoretical generalization: that is, to “expand our understanding of theoretical propositions and hypotheses” (p. 39). The goal of this particular study, therefore, is not to draw generalizations about how the average low-performing school responds to exit testing pressure, but to examine whether and under what conditions the theory of action underlying exit testing policies plays out in expected (or unexpected) ways.


FINDINGS


Most accountability systems can be classified with respect to the way they answer the following questions: who is held accountable, for what, to whom, and with what consequences (Abelmann, Elmore, Even, Kenyon, & Marshall, 1999; Elmore, 2003; O’Day, 2002). Exit testing policies are unique in that they yield multiple answers to each of these questions; thus, they involve multiple levels of accountability, with multiple targets and pressures.


On the one hand, exit testing systems constitute a form of individual student-level outcomes-based accountability; students who fail to pass their respective states’ required exit exams are ineligible to receive a diploma in their state. Exit testing policies also constitute a form of bureaucratic school-level outcomes-based accountability (O’Day, 2002); the tests hold schools accountable for testing outcomes by higher levels of the bureaucracy (the parent district and the state). In the vast majority of states with exit test requirements, exit test results are used for school accountability rankings under either state or federal accountability systems or, in some states, both systems (CEP, 2009).


Under Texas’ exit testing system, both schools and students are held accountable, yet they are held accountable for different outcomes and face different consequences. Texas’ system currently requires students to pass four subject-area exit exams (in English language arts [ELA], mathematics, social studies, and science) known as the Texas Assessment of Knowledge and Skills (TAKS), which are taken initially at the end of 11th grade. Students are given retesting opportunities through the end of 12th grade and beyond. Although some states do offer alternative means for students who fail the test to obtain a diploma, in Texas, students who fail have no alternative means of obtaining a high school diploma from a public high school in the state.


Although students are held directly accountable for passing the four exit tests by graduation, schools are not held directly accountable for the percentage of students who do so. Instead, schools are held accountable for a combined average of test results across 9th, 10th, and 11th (exit level) grades within each subject area. The state requires no reporting of, and does not give schools credit for, the remediation of students who initially fail on their first attempt but who subsequently pass on a retest at some point before graduation. Although the state does hold schools accountable for graduation rates (an indirect measure of exit testing pass rates), this accountability is fairly weak, as will be discussed later in more detail.


Under the Texas state accountability system, therefore, schools suffer no specific consequences for low performance on exit tests alone. Schools are instead sanctioned for low average performance across multiple grades in each individual subject area. The sanction consists of a label of “academically unacceptable,” which comes with increasing levels of intervention by the state. Few formal reward mechanisms exist beyond achieving a positive accountability label of “acceptable,” “recognized,” or “exemplary” (the highest).


The way in which Texas has structured the requirements for accountability under NCLB also releases schools from accountability for exit test performance: Under NCLB requirements for Texas, schools are held accountable only for 10th-grade performance on tests in ELA and mathematics, as well as graduation rates (which indirectly, but only weakly, serve as a measure of cumulative passing rates on exit tests, for reasons described later in this article).


In sum, neither the Texas state accountability system nor the Texas requirements for NCLB hold schools directly accountable for the proportion of students who pass the exit test by the end of 12th grade (see Table 3 for a comparison of the state and federal systems). Yet although Texas high schools face little direct (or distinct) pressure from exit tests alone, they do face significant pressures from the broader state and federal accountability systems, of which exit tests are just one part. These broader systems are largely punitive in nature, providing few rewards for success while meting out increasingly severe consequences for poor performance.


Table 3. Misalignment Between School and Student Accountability: Who Is Held Accountable for What, and With What Consequences?

What Schools Are Held Accountable For

Under the State Accountability System:
- Percentage of students passing 9th-grade tests: Reading, Mathematics
- Percentage of students passing 10th-grade tests: ELA, Social Studies, Mathematics, Science
- Percentage of students passing 11th-grade exit tests (first attempt): ELA, Social Studies, Mathematics, Science
- Passing all 4 exit tests by 12th grade: not held accountable*
- Graduation or completion: Completion (% of students who graduate or who are reenrolled)
- Consequences: Negative label, sanctions or threat of sanctions

Under the Federal Accountability System:
- Percentage of students passing 10th-grade tests: ELA, Math
- Passing all 4 exit tests by 12th grade: not held accountable*
- Graduation or completion: Graduation (70% on-time graduates, or any improvement from prior year, no subgroups)
- Consequences: Negative label, sanctions or threat of sanctions

What Students Are Held Accountable For

Under an Exit Exam System:
- Passing all 4 exit tests by 12th grade (with retesting opportunities)
- Consequences: Diploma denial

* Misalignment between (gap in) school and student-level accountability.
ELA = English language arts.


Research has found that such negative incentives can cause schools that are threatened to focus on organizational survival by creating short-term strategies or engaging in “gaming” activities to satisfy accountability demands (see O’Day, 2002; Vasquez Heilig & Darling-Hammond, 2008). Such gaming behavior was reported in interviews across all five schools in this study: Educators across all the schools reported some type of gaming in an effort to avoid the threat of punishment or, in the case of several schools under corrective action, further punishment.


In the following sections, data are presented that illustrate a range of gaming behaviors by educators in each of the five schools: the narrowing of instruction to tested material; the strategic targeting of resources on students who are likely to “pay off” for accountability purposes; the manipulation of retention and promotion decisions to increase test scores in high-stakes grades; and the lack of attention to struggling 12th graders who do not “count” as much in the state accountability system. Although such self-protective responses have been documented in studies of school responses to accountability demands, this analysis illustrates how those responses were highly consequential for the curriculum and supports offered to those students most at risk of failing exit tests.


REDUCED RIGOR: LIMITED EXPOSURE TO THE 11TH-GRADE CURRICULUM


Texas’ exit testing system is a part of the “new generation” of exit tests, a movement by states to align exit tests to higher standards in the hope of increasing the rigor of the curriculum to which students are exposed. Texas was a leader in this movement when, in 2003, the state replaced its 10th-grade exit-level tests (covering two subjects) with a set of 11th-grade comprehensive exams in four subject areas.


Interviews for this study, however, revealed that in contrast to the hopes of increased rigor with the shift of tests to the 11th grade, the exit exams actually prevented many teachers from covering the full range of 11th-grade standards, particularly for the lowest achieving students. This outcome stemmed in part from the structure of the exit tests, which are “comprehensive exams” that test students on material learned throughout the 9th, 10th, and 11th grades, with a relative emphasis within the test on those standards learned in the 9th and 10th grades. (Similar “comprehensive exams” are currently used by 21 of the 28 states with exit testing systems; CEP, 2010.) Such broad-ranging tests, according to teachers, required that they tailor their instruction for the most struggling students to those standards that would be tested to ensure that those students had a good chance of passing the exit test. Because the exit tests primarily consisted of questions related to 9th- and 10th-grade standards, many students did not receive exposure to the full 11th-grade curriculum in those courses.


This practice was most prevalent in 11th-grade math and science courses, in which the material students were tested on was particularly wide ranging and in which the passing rates across the board were low. Nearly all the interviewed teachers who taught algebra 2 (normally taken in the 11th grade) said that they rarely covered much of the algebra 2 content during the semester because the exit test focused much more heavily on algebra 1 and geometry. An algebra 2 teacher from Hudson described this phenomenon in stark terms:


A lot of times we feel like . . . it’s like we’re raping the algebra 2 curriculum. . . . We just end up forcing a lot of it out, and  . . . it’s a mixture [of pressure] from the principals and then just looking at those kids’ faces at the end of the year when they’re like, sorry, you didn’t pass, I can’t help you, couldn’t help you all year long ‘cause we weren’t teaching you what was on the test, we were teaching you what was in our course, which are two very different things.


Similarly, an algebra 2 teacher at Kingsbridge said in response to a question about how the exit test impacted what he did in the classroom,


The way it impacts me in the classroom is as an algebra 2 teacher I don’t get to teach as much algebra 2. I have to make sure that the students remember their eighth grade objectives that are still tested on the TAKS, and the geometry objectives that will be tested on the TAKS. They come to us deficient in algebra 1 skills to begin with and so it makes it very challenging to get to the point where you can cover any algebra 2.


This pressure to focus instruction on tested material was also felt by 11th-grade (exit level) science teachers. A Kingsbridge teacher of physics (an 11th-grade science course) admitted that he often teaches students less physics than is contained within the 11th-grade curriculum standards as a result of the pressures imposed by the exit test. When asked how the exit tests have affected what he does in his classroom, he answered,


I think that truly we spend a lot of time going back and covering other content. You know, if I teach physics, which is what I teach now, I have to go back and review a lot of chemistry and biology, make the time for that, rather than really being able to go in depth with the physics, just so that the kids can stay and have all that information available.


Although the pressure to reduce standards coverage was felt most keenly by the 11th-grade algebra 2 and physics teachers, whose students were tested on a particularly wide range of subject matter, the pressure was also felt by social studies and English teachers. As an 11th-grade English teacher at Morrison HS observed, “They say, don’t teach the test, but yet there are such high stakes attached to that test.”


A social studies teacher at Hudson noted that these pressures to teach to the test were getting more intense because the standards are rising each year. He observed of the rising pass rates required by the state:


I mean, unfortunately it means that in the spring we’re gonna teach even more to the test than we’ve been teaching before, and that’s now how I . . . that’s not my preferred teaching style, and so I’m still . . . you know, I’m fighting as hard as I can to be as creative as I can and still fit in the, you have to be able to regurgitate this for a test.


Although other studies have found evidence of “teaching to the test” and “narrowing of the curriculum” in response to accountability pressures, the findings of this study illustrate how such pressures had the most negative effects on the curriculum of the “regular” (non-Advanced Placement) students subject to exit exams, particularly the students in those courses who were actually prepared for grade-level curriculum. Although these students ended up more prepared for the test, they received, as a result of this pressure, an 11th-grade curriculum that diverged significantly from the state’s intended curriculum.


Some teachers noted that the intense curricular focus on the content of exit tests left students in the “regular” courses in particular less prepared for college-level work. The chair of the English language arts department at Kingsbridge noted this problem in describing how they prepare students for the essay portion of the exit test, which is graded on a scale from 1 (lowest) to 4 (highest):


The [exit test] test prompts are usually something like “explain the importance of friendship” or something like that. Through research and studying past answers . . . one thing I’ve noticed is if the student purely does a scientific explanation of the importance of friendship without making it personal or a narrative, they don’t get the score. They could write a brilliant paper on the sociological importance of bonds between human beings and get a 1. Then a student writes a narrative about their best friend during summer camp or whatever and gets a 4 mainly because [the test evaluators are] looking for elements such as dialogue and detail and things that a narrative facilitates, so that’s what we focus on, a narrative. . . . So we spend 4 years teaching them how to write a narrative, and then they get to college and have to do expository or comparison/contrast or something like that and they struggle with it.


LACK OF SUPPORT FOR STRUGGLING STUDENTS


In theory, exit testing systems are intended to prompt schools to provide intensive tutoring and supports for the lowest achieving students, who are most at risk of failing the exam. However, in three of the five schools in this study (Stewart HS, Morrison HS, and Vanderbilt HS), administrators reported that they made a decision to focus their limited resources on interventions for those students who were just below the cutoff score for the state tests (termed the “bubble students”) to increase the number of 11th-grade students classified as proficient in each subject, which would improve their state accountability rankings. This meant that the students most vulnerable to failing the exit exam were explicitly denied needed supports in several contexts. One of the schools where this practice was reported was Stewart HS, where the administrator in charge of testing and interventions admitted that they openly focused some of the interventions on those borderline students. He reflected,


One of the things we did is we met together as a math department, we had given a benchmark that was basically like a practice TAKS, and we looked at that score, along with their grades in the class, and had the teachers decide, ok, this kid I think will definitely pass, this kid might pass with interventions, and this kid I don’t think is gonna make it. So the “might pass with interventions,” lots of schools call them the “bubble kids,” we took those kids and put them in a tutorial program, . . .  The kids that we didn’t think were gonna pass, most of them . . . we didn’t enroll them in the tutorials ‘cause we had a limited number that we could play with there.


The chair of the social studies department at Stewart HS similarly noted of the school’s focus on those “bubble” students:


What we do with our students is as we go through the benchmarking, you know, you’re gonna have your students that pass, then you’re gonna have your low end, hopefully it’s not that many, and then in the middle you’re gonna have what we call our bubble kids, and they’re the ones that could go, you know, either way, so we put more emphasis on, obviously, the low end and the bubble, but the bubble ones, you know, those are the ones that, you know, if we get them to cross over then, you know, it’s gonna, in the long run, pay dividends for us, and the success rate’s gonna be better.


Similar strategies were reported at Morrison HS. When asked whether all students who failed the state tests were placed in remediation courses, a math teacher responded,


Not all of them. What they do, they kids they select to do [the remediation class] is what they call the “bubble kids,” the kids that missed it like two, three, four questions because we just wouldn’t have enough teachers to get all of them. Again, it is for the game that is played, it’s for the numbers.  


The administrators of these three schools, therefore, targeted their limited resources, in the context of a system with intense negative pressures, away from the most struggling students and toward those students who were at the margin of passing. Thus, in contrast to the expected response by schools to exit tests, in these schools, the students who struggled the most academically—those least likely to pass the exit test—were less likely to receive supports.


MANIPULATING RETENTION AND PROMOTION DECISIONS


Although exit tests are intended to ensure that schools monitor and support low-achieving students in their progress toward graduation, administrators in two schools (Stewart HS and Hudson HS) actually interfered with that progress by intervening in student retention and promotion decisions to protect school accountability rankings. This practice emerged in direct response to the pressures of NCLB, which in Texas holds high schools accountable only for 10th-grade test scores in ELA and mathematics. As a result of this pressure, educators at both schools reported that their administrators would seek to avoid classifying the weakest students as 10th graders at testing time by retaining them in the 9th grade for longer than necessary and then aggregating credits only once students had earned enough to be moved directly to the 11th grade. For example, the head counselor at Stewart HS, which ironically did not receive Title I funds and was therefore not subject to NCLB sanctions, noted that the mere threat of a negative label (“not meeting AYP”) was enough to motivate these practices at the school:


The priority is 10th grade because that’s our AYP [adequate yearly progress]. There is a big focus on working with those kids and I’ll be honest with you; making sure that if they’re in ninth grade they don’t get up to 10th grade because there’s a certain percentage, and I don’t even know what the numbers are, that we can’t go over. It’s becoming more and more of a problem as the No Child Left Behind raises up the scores that are required. A lot of the focus goes to the 10th grade as far as what they can do to help improve their scores.


Similar strategies were used at Hudson HS, which was in Stage 4 of Program Improvement under NCLB and thus was working particularly hard to avoid further sanction. One teacher who worked with second language learners noted that such practices meant that these students were denied a much-needed testing opportunity in the 10th grade to better prepare for the high-stakes exit test:


It’s very disturbing to us to see how many kids’ transcripts are manipulated so that they don’t ever take the 10th-grade test. For example, we will aggregate in December. We will aggregate credits earned halfway through the year to see if we can get enough credits to move a kid from the ninth grade to the 11th grade or from the 10th grade to the 11th grade so that they will not be on our annual yearly progress. . . . So now you are in exit test [year]. Now, what that does to the child is it removes one more opportunity for him to take the test, you know.  


Schools therefore strategically manipulated retention and promotion decisions to help improve outcomes for their “highest stakes” accountability system, NCLB. One effect of this practice was to interrupt students’ progress and keep them separated from their age-level cohort longer than necessary, a practice that has been shown to lead to academic disengagement and increased propensity for dropping out (Roderick, 1994). Another effect of those practices was to deny students an opportunity to take the 10th-grade exams, which limited the information available to those students and their teachers in terms of areas of remediation required prior to taking the high-stakes 11th-grade tests.


UNEVEN SUPPORTS FOR STRUGGLING 12TH-GRADE “RETESTERS”


Exit tests are also intended to prompt schools to focus particularly intensive resources on those students who fail the “first-chance” exit test, which in Texas is given in the spring of 11th grade. Students who fail the first-chance test are supposed to receive intensive supports to ensure that they master the material and pass the test by graduation. In contrast to this expectation, however, these “first-time failers” received uneven attention across the five schools in the study: Whereas some schools indeed provided intensive supports to help students pass, in others those students were essentially “off the radar” organizationally; supports were offered, but the schools did not monitor whether students took advantage of them or how effective they were.


This uneven focus on the failers can be traced to the structure of the accountability system, which does not hold schools accountable for those students’ outcomes. As stated previously, under the state accountability system, schools are not held accountable for the degree to which they help students who fail in the 11th grade pass at the 12th-grade retesting opportunity; schools are held accountable only for the percentage of students who pass the exit test the first time it is given, in the spring of the 11th-grade year. The state does not require schools to meet any thresholds for cumulative pass rates (11th-grade combined with 12th-grade retest pass rates), and schools are not required to report retest or remediation rates.


Schools are held accountable for a single 12th-grade outcome: graduation. Yet the measures and targets for graduation provided little direct pressure on schools to remediate students who initially failed in the 11th grade. Under the Texas state accountability system, for example, schools are held accountable for a measure of “completion” that counts the percentage of entering ninth graders who, 4 years later, either receive a diploma or reenroll and continue their education (into the fall of their 13th year).


This loophole meant that three schools in the study with low retest pass rates (Vanderbilt, Hudson, and Stewart) were able to satisfy the completion rate requirement (set at 75% in 2008 and 2009) by tracking down and reenrolling students who had completed their coursework but had not passed their exit tests. Vanderbilt, which had an on-time graduation rate of 62.4% for the Class of 2008, reenrolled 12.3% of its ninth-grade cohort as continuers, and Hudson, which had an on-time graduation rate of 56.0%, reenrolled nearly one quarter of its ninth-grade cohort (24.4%) as continuers (see Table 4). Stewart HS also met the state requirement through this metric: With an on-time graduation rate of 73.8%, it fell shy of the required 75% but made up the difference by reenrolling 9.1% as continuers. As the principal of Hudson told us,


The state . . . [says], ok, your kids have to graduate in four years, but if you can get them back in school the next year, as a continuing student, then they will count in what’s called the completion rate, not the graduation rate. Like I said, we’ve always done real good about getting our students back, so we’re pretty ok with that number.


Table 4. The Gap Between State and Federal Graduation Rate Accountability (Class of 2008)

                  State Accountability        NCLB Accountability       Gap Between State and NCLB
                  Completion rate: % of       Graduation rate: % of     Continuers: % of 9th-grade
                  9th graders who graduate    9th graders who           cohort who are nongraduates
                  or are still enrolled by    graduate 4 years later    and who are enrolled in
                  the fall of the 5th year                              13th year

Vanderbilt HS     74.7%*                      62.4%**                   12.3%
Hudson HS         80.4%*                      56.0%**                   24.4%
Morrison HS       n/a                         n/a                       n/a
Stewart HS        82.9%*                      73.8%                     9.1%
Kingsbridge HS    88.1%*                      79.5%                     8.6%

* Met state “completion rate” threshold (70%) for class of 2008.
** Did not meet NCLB AYP requirement for graduation (70% or any improvement) for class of 2008.
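To make the mechanics of the completion metric concrete, the following is a minimal sketch (in Python; an illustration by the editor, not part of the original study) of how the state “completion” rate combines on-time graduates with reenrolled continuers, using the Table 4 figures:

```python
def completion_rate(grad_rate, continuer_rate):
    """State 'completion' rate: percentage of the 9th-grade cohort who either
    graduated on time or reenrolled as 13th-year continuers."""
    return grad_rate + continuer_rate

# Class of 2008 figures from Table 4 (percentages of the 9th-grade cohort)
schools = {
    "Vanderbilt HS": (62.4, 12.3),
    "Hudson HS": (56.0, 24.4),
    "Stewart HS": (73.8, 9.1),
    "Kingsbridge HS": (79.5, 8.6),
}

for name, (grad, cont) in schools.items():
    total = completion_rate(grad, cont)
    print(f"{name}: {grad}% graduated + {cont}% continuers = {total:.1f}% completion")
```

Note how Stewart HS, at 73.8% on-time graduation, clears the 75% bar only once its 9.1% of continuers is added: The continuer share, not an improved graduation rate, closes the gap.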


Principals of these schools reported that the majority of these 13th-year continuers were students who had completed coursework but who had not yet passed the exit-level test. The state accountability system, therefore, allowed these schools to obtain an “acceptable” rating with large numbers of students having not yet passed the exit test. As a teacher at Hudson observed, once students are reenrolled and the schools are given credit for them as continuers, the school tends to give little attention to them: “[The administrators] desperately try to get these kids to reenroll . . . they need a lot of kids enrolled [to show] they’re attending school and that they’re enrolled, and they haven’t dropped out, and then it’s kind of like after that time no one really cares where they go [emphasis added].”


The sole formal pressure on schools to successfully remediate 12th-grade retesters was the federal graduation rate requirement under NCLB. To satisfy the graduation rate requirement for AYP, schools were required (in 2008 and 2009) to graduate 70% of entering ninth graders within four years. However, this metric had a significant loophole: It allowed schools to meet AYP if their graduation rate fell below 70% as long as the school showed any improvement at all from the prior year (therefore meeting the “required improvement”). Of the five schools in this study, two (Vanderbilt and Hudson) were not able to meet the AYP standard because their graduation rates had actually declined slightly from the prior year.
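The AYP graduation rule just described can be sketched roughly as follows (an editorial illustration of the rule’s logic, not the statute’s exact language; the second example pairs Vanderbilt’s actual 62.4% graduation rate with a hypothetical prior-year rate):

```python
def meets_ayp_graduation(grad_rate, prior_year_rate, threshold=70.0):
    """A school meets the AYP graduation requirement if it reaches the
    threshold, or if it shows any improvement at all over the prior year."""
    return grad_rate >= threshold or grad_rate > prior_year_rate

# A school far below the threshold still passes with a 0.1-point gain ...
print(meets_ayp_graduation(56.1, 56.0))   # True
# ... while a slight decline fails, as happened at Vanderbilt and Hudson.
print(meets_ayp_graduation(62.4, 62.5))   # False
```

The loophole is visible in the disjunction: Any nonzero improvement satisfies the requirement regardless of how far the rate sits below 70%.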


Although NCLB’s graduation rate pressure was relatively weak, schools did respond to it by offering course credit to 12th graders who had not yet passed an exit test for enrolling in organized remediation courses, an effort to keep those students from falling behind on credits while they received remediation. Vanderbilt, Stewart, and Hudson High Schools each gave their 12th-grade TAKS test prep courses (for retesters) names that allowed students to count them for grade-level core course credit, such as “literary genres” (the class for seniors who had not yet passed the exit-level ELA test) and “math concepts” (the class for seniors who had not passed the exit-level math test). Although labeled as core 12th-grade content courses, these classes were centered purely on test preparation for an 11th-grade test. A math TAKS teacher at Hudson HS noted of his remediation class, “The class is 100% of test prep, every class period. The class should be called ‘TAKS math prep’ but then, of course, if it were . . . they wouldn’t give any credit to the kids.” An ELA TAKS remediation teacher at Hudson HS similarly noted of her course,


Because it is a TAKS class,  . . . we are very focused, and we don’t make any kind of excuses, this is purely TAKS, that’s what we do, and we all know why we are in the class. We all know we failed, and there is nobody who didn’t fail in that class. It’s pretty blunt and straightforward. Very much, very much taught to the test.


Because these test prep classes were focused on exit tests that assessed students on content in Grades 9, 10, and 11, students in these courses were exposed to very little 12th-grade curriculum—or even the full 11th-grade curriculum—even though they were earning 12th-grade credit for those courses. The principal of Stewart High School noted that this focus on test prep comes at a cost for the students who have not yet passed: “Those kids’ senior year is all . . . is remediation or teaching the test to them, I don’t think we’ve added value on the senior year to them.” Thus, although the remediation efforts helped some students pass the exit test and graduate because they had earned the requisite credits (thereby helping schools’ AYP), those students in the test prep courses graduated with limited exposure to the 12th-grade level curriculum.


The lack of explicit accountability for 12th-grade remediation within the state accountability system also meant that the effectiveness of these test prep courses (and other remediation) was not always monitored: Although two schools (Stewart and Kingsbridge) did keep close track of remediation success rates and tracked student progress, in the other three schools (Vanderbilt, Hudson, and Morrison), there was less vigilant tracking of remediation success rates. The credit recovery teacher at Vanderbilt, for example, told us that


remediation rates are not even tracked . . . there’s no number on a database that says this is the remediation rate, this is the program the kids are in, this is what they were doing. . . . There’s not a number on a database that I can point to and say [what remediation rates look like]. Who cares, you know.


These findings illustrate the mismatch between school and student-level accountability within the state accountability system: Although students are required to pass all four exit tests by the end of 12th grade, schools are not held directly accountable for ensuring that students who fail in the 11th grade succeed on a retest opportunity (see Table 3 for an illustration of this gap). Although schools did respond to federal graduation rate accountability by offering credit for test prep courses as a way to get students to pass the exit tests and graduate on time, because these courses focused on prepping for the 11th-grade exit tests, they left students with limited exposure to either the full 11th-grade or the 12th-grade curriculum. These responses directly contradict many of the hoped-for outcomes of the newer exit testing policies, which are intended to graduate students who have mastered more rigorous coursework in preparation for college and the workforce.


DISCUSSION AND IMPLICATIONS


The findings in this article have highlighted a significant gap in exit testing systems between school and student-level accountability: Although exit testing systems hold students accountable for passing all four tests by the end of the 12th grade, in Texas and in many other states, schools are not held directly accountable for those outcomes. As a result, this study found, schools focused on the demands of the system that placed the most direct pressure on them, giving relatively less attention to—or making decisions that went against the interests of—the most at-risk students who were least likely to pass the exit test by the end of the 12th grade. Indeed, as this article illustrated, many schools relied on short-term gaming strategies to cope with the pressures of the accountability system.


That this policy has yielded few benefits is troubling because the negative consequences of the exit testing system fell hardest on the most at-risk students. Consider the pathway of struggling 11th graders in these schools facing the exit test by the end of their 11th-grade year. Throughout the 11th grade, many of those students would have been enrolled in 11th-grade core courses that consisted (particularly in math and science) of extensive review of 9th- and 10th-grade material, with significantly reduced coverage of the 11th-grade curriculum. Some struggling 11th graders (at Hudson HS and Stewart HS) may have been enrolled in 11th-grade courses only halfway through the year, having been intentionally retained in ninth grade and then skipped at semester break straight to 11th grade by administrators seeking to be positively evaluated on federal AYP requirements. As a result, those students would have had just one opportunity, in the ninth grade, to take a state exam prior to the exit test. By testing time, in the spring, the weakest students in three of the case study schools (Vanderbilt HS, Morrison HS, and Stewart HS) may have received only limited intervention or supports because they were not considered “bubble” students; their scores were not close enough to the pass rate to be deemed worthy of the school’s limited remediation resources. In the fall of 12th grade, those students who failed at least one portion of the exam would likely have been enrolled in remediation courses closely aligned to the test. In several of these schools, the quality and effectiveness of those courses were not monitored because the results did not “count” for accountability purposes. Because these remediation courses (which counted for grade-level credit) focused so closely on test prep, the most struggling students in them would likely graduate able to pass the test but without exposure to much of the 12th-grade curriculum in that subject area. Those students who still failed the test in the 12th grade might be asked by the school to reenroll in October but receive little follow-up once they were counted officially for accountability purposes.


The findings of this analysis provide some potential insights into the findings of large-scale quantitative analyses of the effects of exit tests, which have found that exit tests have not yielded gains in achievement, as measured by (nonexit) state or national assessments (see Holme et al., 2010). The interviews for this study suggest that although initial pass rates on exit tests have been improving over time in Texas and many other states (CEP, 2009), those gains are likely due largely to targeted, exit-test-specific test preparation strategies.


The findings from this study also provide some indication as to why larger scale studies have documented increases in dropout rates in states with exit testing systems, particularly among students who fail the first-chance test (see, e.g., Holme et al., 2010; Ou, 2009; Papay et al., 2010). This study indicates that schools under accountability pressure may engage in practices (strategic retention, targeting tutoring toward bubble students, and so on) that directly or indirectly discourage, and ultimately push out, the students who are academically most at risk of failing.


What lessons do these findings hold for policy? At first glance, these findings may indicate a need for a closer alignment between student and school accountability to eliminate the gap that has been described. Yet the findings on gaming behavior indicate that better alignment alone will not be sufficient, because as long as schools are threatened with sanctions for failure to meet specific performance thresholds, incentives for gaming and strategic responses will exist.


This study does indicate the need for policy makers to consider reducing or eliminating the high stakes (the “diploma penalty”) for students within exit testing systems. As this study illustrated, when schools are threatened with sanctions for failing to meet their high-stakes benchmarks, the students most at risk of failing exit tests may not receive the supports they need to meet theirs.


The findings of this study ultimately suggest a need for policy makers to reconsider the assumptions on which exit tests are based and to more closely consider the goal of exit testing systems in the context of, and in relation to, the larger systems of accountability in which they are embedded. Without modifications, exit testing policies may not only fail to prompt improvement in the education of the most at-risk students but also push the very students these policies are intended to serve into the student–school accountability “gap.”


Acknowledgments


The research reported herein was supported by a grant from the Spencer Foundation. The author would like to thank the editors of Teachers College Record, as well as the anonymous reviewers, for their helpful feedback on the manuscript. The author would also like to thank Meredith Richards and Rebecca Cohen for their assistance with data collection for the project.


Notes


1. The Texas Education Agency (TEA) classifies a student as “at-risk” if he or she meets one of 12 identified criteria. For a list of those criteria, see http://ritter.tea.state.tx.us/perfreport/aeis/2010/glossary.html. Under these criteria, a student would be identified as “at risk” if he or she was retained, failed a state assessment, was limited English proficient, was pregnant, or met a number of other risk factors.

2. TEA labels a student as “economically disadvantaged” if he or she is eligible for free or reduced price lunch, Temporary Assistance for Needy Families, or the Supplemental Nutrition Assistance Program.


References


Abelmann, C., Elmore, R., Even, J., Kenyon, S., & Marshall, J. (1999). When accountability knocks, will anyone answer? Philadelphia, PA: Consortium for Policy Research in Education.


Achieve, Inc. (2005). Action agenda. Retrieved August 12, 2005, from http://www.achieve.org


Bogdan, R., & Biklen, S.K. (2006). Qualitative research for education: An introduction to theories and methods. Boston, MA: Allyn & Bacon.


California Department of Education. (2006, May 30). Schools chief Jack O’Connell visits Central Valley Schools to discuss options for students yet to pass exit exam [Press release]. Retrieved June 7, 2010, from http://www.cde.ca.gov


Center on Education Policy. (2003, August). State high school exit exams: Put to the test. Washington, DC: Author.


Center on Education Policy. (2008, August). State high school exit exams: A move toward end-of-course exams. Washington, DC: Author.


Center on Education Policy. (2009, November). State high school exit exams: Trends in test programs, alternate pathways, and pass rates. Washington, DC: Author.


Center on Education Policy. (2010, December). State high school tests: Exit exams and other assessments. Washington, DC: Author.


DeBray, E. (2005). A comprehensive high school and a shift in New York State policy: A study of early implementation. High School Journal, 89, 18–45.


Elmore, R. F. (2003). Accountability and capacity. In M. Carnoy, R. F. Elmore, & L. S. Siskin (Eds.), The new accountability: High schools and high stakes testing (pp. 195–209). New York, NY: RoutledgeFalmer.


Grodsky, E. S., Warren, J. R., & Kalogrides, D. (2009). State high school exit examinations and NAEP long-term trends in reading and mathematics, 1971–2004. Educational Policy, 23, 589–614.


Hamilton, L. (2003). Assessment as a policy tool. Review of Research in Education, 27, 25–68.


Holme, J. J. (2008). High stakes diplomas: Organizational responses of California’s high schools to the state’s exit examination requirement. Research in the Sociology of Education, 14, 157–188.


Holme, J. J., Richards, M., Jimerson, J., & Cohen R. (2010). Assessing the effects of high school exit exams. Review of Educational Research, 80, 476–526.


Linn, R. L., Madaus, G. F., & Pedulla, J. J. (1982). Minimum competency testing: Cautions on the state of the art. American Journal of Education, 91(1), 1–35.


Merriam, S. B. (1988). Case study research in education: A qualitative approach. San Francisco, CA: Jossey-Bass.


Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage.


O’Day, J. (2002). Complexity, accountability, and school improvement. Harvard Educational Review, 72, 293–327.


Ou, D. (2009). To leave or not to leave? A regression discontinuity analysis of the impact of failing the high school exit exam (CEP Discussion Paper No. 907). London, England: Center for Economic Performance.


Papay, J. P., Murnane, R. J., & Willett, J. B. (2010). The consequences of high school exit examinations for low-performing urban students: Evidence from Massachusetts. Educational Evaluation and Policy Analysis, 32, 5–23.


Pipho, C. (1978, May). Minimum competency testing in 1978: A look at state standards. Phi Delta Kappan, 59, 585–588.


Pipho, C. (1983, January 3). Student minimum competency testing. Denver, CO: Education Commission of the States.  


Reardon, S. F., Arshan, N., Atteberry, A., & Kurlaender, M. (2010). Effects of failing a high school exit exam on course taking, achievement, persistence, and graduation. Educational Evaluation and Policy Analysis, 32, 498.


Resnick, D. P. (1980). Minimum competency testing historically considered. Review of Research in Education, 8, 3–29.


Roderick, M. (1994). Grade retention and school dropout: Investigating the association. American Educational Research Journal, 31, 729–759.


Serow, R. C., & Davies, J. J. (1982). Resources and outcomes of minimum competency testing as measures of equality of educational opportunity. American Educational Research Journal, 19, 529–539.


Sipple, J. W., Killeen, K., & Monk, D. M. (2004). Adoption and adaptation: School district responses to state imposed high school graduation requirements. Educational Evaluation and Policy Analysis, 26, 143–168.


Tyack, D. (1983). Seeking common ground: Public schools in a diverse society. Cambridge, MA: Harvard University Press.


Vasquez Heilig, J., & Darling-Hammond, L. (2008). Accountability Texas-style: The progress and learning of urban minority students in a high-stakes testing context. Educational Evaluation and Policy Analysis, 30, 75–110.


Vogler, K. E. (2006). Impact of a high school graduation examination on Tennessee science teachers’ instructional practices. American Secondary Education, 35, 33–57.


Vogler, K. E. (2008). Comparing the impact of accountability examinations on Mississippi and Tennessee social studies teachers’ instructional practices. Educational Assessment, 13, 1–32.


Winfield, L. F. (1990). School competency testing reforms and student achievement: Exploring a national perspective. Educational Evaluation and Policy Analysis, 12, 157–173.


Yin, R. K. (1993). Applications of case study research. Newbury Park, CA: Sage.


Yin, R. K. (1994). Case study research (2nd ed.). Thousand Oaks, CA: Sage.




Cite This Article as: Teachers College Record Volume 115 Number 1, 2013, p. 1-23
https://www.tcrecord.org ID Number: 16736


About the Author
  • Jennifer Holme
    University of Texas at Austin
    JENNIFER JELLISON HOLME is an assistant professor of education policy and planning in the Department of Educational Administration at the University of Texas at Austin. Her research focuses on the politics and implementation of educational policy, with a particular interest in the relationship between school reform, equity, and diversity in schools. She recently published “Assessing the Effects of High School Exit Exams” (with M. Richards, J. Jimerson, & R. Cohen) in the Review of Educational Research (2010).
 