
Under Pressure in Atlanta: School Accountability and Special Education Practices During the Cheating Scandal

by Brittany Aronson, Kristin M. Murphy & Andrew Saultz - 2016

A 2011 report by the Georgia Bureau of Investigation (GBI) confirmed a widespread cheating scandal among teachers, principals, and administrators in the Atlanta Public School system (APS) from 2009–2011. To date, it is the largest cheating scandal of its kind in the United States. The vast public investigation of this scandal provides an opportunity to gain an in-depth understanding of school accountability practices, particularly as they pertain to the education of students with disabilities. The purpose of this article is to draw from the lessons learned from the APS scandal, with particular attention paid to the unintended consequences of high-stakes accountability practices, especially for students with disabilities. First, we examine the policies and practices related to disability referrals and identification practices at the federal level during the 2009–2011 school years. Second, we explore what literature on accountability practices and disabilities suggests about the APS scandal. Finally, we discuss broader implications for policy, practice, and future research related to the education of students with disabilities in high-stakes, test-driven classrooms.


A 2011 report by the Georgia Bureau of Investigation (GBI) confirmed a widespread cheating scandal among teachers, principals, and administrators in the Atlanta Public School system (APS). Evidenced by erasure analysis of students' Criterion-Referenced Competency Tests (CRCT), the investigation implicated 178 teachers and principals, 82 of whom confessed to cheating (Vogell, 2011). On April 1, 2015, 11 educators were convicted for their roles in the cheating scandal (Blinder, 2015). To date, it is the largest cheating scandal of its kind in the United States. Due to the vast public investigation, the APS cheating scandal provides a unique opportunity to gain an in-depth understanding of school accountability practices, particularly as they pertain to the education of students with disabilities. Thus, the purpose of this article is to draw from the lessons learned from the APS scandal of 2009–2011 and to explore the unintended consequences of high-stakes accountability practices, especially for students with disabilities.

Many have written that accountability policy has been detrimental to students, teachers, and schools. One popular criticism is that the pressure on teachers and schools to produce results on standardized tests has led to many unintended consequences (Lavigne, 2014; Nichols & Berliner, 2007). Schools, which are increasingly at risk for losing funding or facing a variety of sanctions under No Child Left Behind (NCLB), have often transferred that pressure onto teachers to produce results. An analysis of what happened in APS during the cheating scandal provides more confirming evidence of these unintended and damaging consequences.

To explore school accountability and disability practices, we are guided by three questions: First, what were the policies and practices related to disability referrals and identification practices at the federal level during the 2009–2011 school years? Second, what does the literature on accountability practices and disabilities suggest about the APS scandal? Finally, what are the broader implications for policy and practice, as well as for future research related to the education of students with disabilities in high-stakes, test-driven classrooms?


Literature on educational accountability has examined the effects of various reforms on students' standardized test performance. For example, multiple studies have examined the relationship between NCLB policy mandates and math and reading achievement (Dee & Jacob, 2011; Wong, Cook, & Steiner, 2015). Some of this evidence suggests that math achievement, particularly for students from lower socioeconomic backgrounds and students of color, has increased (Dee & Jacob, 2011). While some have attributed these broad changes in teacher practice and schools to the increased pressure that the AYP system created (Manna, 2010; McDermott, 2011), others point to the fact that math curricula are more easily manipulated (e.g., amenable to teaching-to-the-test effects; see Nichols, Glass, & Berliner, 2006, 2012).

In spite of these questionable gains, many hold the position that the overemphasis on testing leads to unreasonable and unattainable expectations for student performance (Ravitch, 2010; Zhao, 2009). This seemed to be the case in APS, where unattainable achievement goals inspired a culture of fear and intimidation foisted onto teachers to get students to pass tests. This situation led many teachers to believe they had no other option than to cheat if they were going to raise test scores. Bowers, Wilson, and Hyde (2011) found:

The pressure to meet targets and improve students' CRCT scores was the single most frequent explanation given by teachers for why they cheated. Most teachers, and many principals, described an oppressive environment at APS where the entire focus of the district had become achieving test scores rather than teaching children. (p. 356)

Our analysis revealed that teachers in APS experienced undue pressure to get low-performing students to meet unattainable achievement goals too fast. This put teachers in a moral dilemma: ignore the pressures, teach students according to best practices, and risk facing administrative ire, public humiliation, and potential loss of their jobs; or cheat to ensure that they looked good to their bosses and their students looked good to the nation.

One type of pressure in APS was the distribution of administrative (not teacher) incentive bonuses for high test score performance (Hing, 2011; Mitchell, 2015). In this climate, administrators were given greater reasons to cheat than the teachers they directed. Importantly, we know that incentive pay systems, especially under the broader umbrella of mandated high-stakes testing under NCLB, do not work. For example, Springer et al. (2013) conducted a three-year study in the Metropolitan Nashville School System from the 2006–2007 to the 2008–2009 school year. Middle school math teachers voluntarily agreed to participate in a controlled experiment that assessed the effects of incentive rewards for teachers whose students showed unusually large gains on the Tennessee standardized TCAP test. Springer et al. (2013) explained, "The experiment was intended to test the notion that rewarding teachers for improved scores would cause scores to rise" (p. xi). Findings indicated that teachers assigned to the treatment group (those receiving bonuses) did not outperform those assigned to the control group (those not competing for bonuses). Elsewhere, findings on incentive pay effects are mixed: some found that incentive pay works to improve student outcomes under some circumstances but not others (e.g., Figlio & Kenny, 2007; Ladd, 1999; Lavy, 2002). As demonstrated, in the aftermath of NCLB, with the intensity of testing, merit pay does not appear to have reliably positive impacts on student achievement.

We know high-stakes testing pressures result in a range of unintended (and mostly negative) outcomes, such as differential treatment of and attention to students denoted as "on the cusp" or "bubble" students and those deemed "unteachable" (e.g., Booher-Jennings, 2005). Others emphasize the relation between pressures to teach to the test and the resultant narrowing of curricula and undermining of creativity (Zhao, 2009). Teachers also report feeling increasingly demoralized and marginalized (e.g., Ravitch, 2010). High-stakes testing accountability erodes educational practices for all students, but especially for our most marginalized and needy students.


Cheating is one type of unintended consequence that seems to have become more prevalent throughout public schools under NCLB, and it assumes many different forms (Nichols & Berliner, 2007). Hursh (2007) highlighted some of the ways states and school districts have exploited arbitrary policies to mask the realities of student achievement. For example, in New York there was evidence of changing test score cut-off criteria in order to meet some predefined state goal. That is, policymakers changed the exam's "degree of difficulty . . . depending on whether the State Education Department (SED) wants to increase the graduation rate and therefore makes the exam easier or wants to appear rigorous and tough and therefore makes the exam more difficult" (p. 299). In other words, some states changed the degree of difficulty simply by changing the proficiency cut scores that determine how many students pass or fail. Specifically, in New York, students of color and students with disabilities have suffered from accountability politics the most, as evidenced by increased dropout rates since the passage of NCLB (Hursh, 2007).

Arizona provides another example of a state that has manipulated cut scores simply to make the state's reputation suggest that rigor and achievement are results of higher accountability measures. Before 2005, the Arizona Instrument to Measure Standards (AIMS) assessment held an acceptable passing rate of 71% in math and 72% in reading; however, after high school students' poor performance, the state board decided to lower the passing rate to 60% in math and 59% in reading. These types of manipulations of cut scores further marginalize already neglected students, whether tests are arbitrarily made harder or easier (Nichols & Berliner, 2007).

There have been other accusations of altering student achievement data in Texas. The Texas Assessment of Academic Skills (TAAS) took effect in 1994 under then-Governor George W. Bush. The model Texas adopted was what molded NCLB, since Texas exhibited drastic improvements that became known as the "Texas Miracle." Students' test scores increased each year and graduation rates were at an all-time high. Attempting to understand what made Texas schools successful, McNeil (2000) set out to study several Houston schools. She did not find schools demonstrating best teaching practices or higher-order thinking. Instead, she found schools with increasing regulations, phony curricula simply covering topics rather than going into depth, and a rote ritual of teaching and learning. Consequently, students from lower socioeconomic schools that also recorded lower test scores suffered the most from the standardization reforms. As Hursh (2007) contended, "Because culturally advantaged middle-class and upper-class students are likely to rely on their cultural capital to pass the exams, it is disadvantaged students who receive the additional drilling" (p. 301), as seen in APS and other urban school districts with extreme testing pressures. What's more, later evidence exposed that dropout rates in Texas were not actually decreasing; rather, creative bookkeeping had altered the evidence (Hursh, 2007). The superintendent of Houston Schools ordered districts not to report students as dropping out, but rather to report that they had left to attend another school. In 2001–2002, the reported dropout rate was an astonishing 1.5%. Other falsities in Texas schools included missing special education scores, as well as increased retention rates so schools would gain an additional test-prep year (Hursh, 2007). Haney (2000) concluded that what had become known as the Texas Miracle was really a myth.

Another scandal involving the manipulation of student achievement data occurred in Columbus City Schools in Ohio. A report by the Ohio Department of Education found that district officials withdrew students from school enrollment records because they were worried that the schools' test scores and attendance statistics would be too low on the school report cards (Richards, 2014). The report on officials changing student data explicitly cited the accountability pressure on school employees and the financial incentives embedded within the state policy as reasons the alterations may have occurred (Richards, 2014). As cheating scandals continue to occur throughout the country, we wonder how students, including those with disabilities, are affected, particularly given the history of dismissing students with disabilities from accountability data (Hursh, 2007; McNeil, 2000).


The idea of access to a free and appropriate public education (FAPE) for students with disabilities is itself a young phenomenon in U.S. public education. Before the passage of the Education for All Handicapped Children Act (EAHCA) in 1975, schools and states alike could deny students with disabilities entrance into school simply because of their disability. It has only been 40 years since that legal landmark, yet the landscape of schooling for students with disabilities and the responsibilities of their educators have shifted dramatically. As the pressures rise to ensure students with disabilities receive equitable opportunities and are held to high standards in public schools, there is a growing concern about the potential for cheating in an attempt to meet high-stakes goals, particularly if teachers are left feeling unprepared for the work at hand (Mehta, 2013).



In 1989, the National Council on Disability initiated dialogue about the role of accountability for students with disabilities, moving beyond access to public education and toward examining the quality of education and student outcomes (McLaughlin, 2010). Accountability for students with disabilities was driven by the goals set forth in the Individualized Education Program (IEP). Goals were not necessarily standards-based and were highly individualized, making it hard to compare students and impossible to look at students as groups. There was no accountability in place for students' progress on IEP goals, and because the goals did not have to align with standards, students were seldom given access to the general education curriculum (McLaughlin, 2010). Many students were denied access to the general education curriculum because there were no laws requiring their participation; this was left to the discretion of the local school community (Koretz & Barton, 2003).


While the battle was initially centered on granting access to school buildings, it was not until the Individuals with Disabilities Education Act (IDEA) reauthorization in 1997 that federal provisions began to take shape to include students in local and statewide assessments. The 1997 reauthorization mandated that students with disabilities be included in the general education curriculum, and participate in local and statewide assessments where school districts deemed appropriate. However, because of the autonomy granted to school districts to include students with disabilities where appropriate, the districts could continue to selectively exclude students whom they perceived may not score well on assessments (Schulte & Villwock, 2004).  


With the passage of No Child Left Behind in 2001, accountability for students with disabilities changed dramatically. The law stipulated that public schools include students with disabilities in the general education curriculum and track how they are performing, and that the schools would be held accountable for their academic outcomes (West & Schaefer-Whitby, 2008). For the first time, students with disabilities were recognized as a distinct subgroup, meaning that their progress on tests was monitored collectively instead of at the individual level. Furthermore, the performance of students with disabilities could be measured against other subgroups and overall average scores in their school, and other schools, districts, and states. This opportunity opened the door for stakeholders to begin asking why students with disabilities from one school, district, or state perform better or worse than their counterparts. Are disabilities the primary driver of outcomes, or is it the quality of services provided by a school? Why do students with similar disabilities from one school, district, or state exhibit differences in performance?  

The era of school accountability introduced new data and changed the experiences of students with disabilities and the educators who serve them (Koretz & Barton, 2003; West & Schaefer-Whitby, 2008). For the first time, schools were responsible for the academic achievement of their students with disabilities. Duncan (2010) asserted that this new data gives teachers the information they need to adjust their instructional practices to ensure student success. However, the ways in which teachers ultimately respond through instructional practice to policy and data are largely driven by their prior beliefs and knowledge, particularly when it comes to vulnerable student populations including English language learners (ELLs) and students with disabilities (Bertrand & Marsh, 2015).


NCLB (2001) introduced adequate yearly progress (AYP) into public education. States are required to design standards and develop or adopt standardized tests to measure the degree to which students are meeting the standards. Every year, schools must administer state tests to assess student proficiency in reading and math, and report data for all students in their school, including the following subgroups: (a) students who are economically disadvantaged; (b) students from racial and ethnic subgroups; (c) students with limited English proficiency; and (d) students with disabilities (Yell, Katsiyannis, & Shiner, 2006). Ninety-five percent of all students and of each subgroup must participate in testing, in an effort to ensure that schools do not selectively exclude certain students. Each school, and each qualifying subgroup within it, must have a specified percentage of students reach a proficiency score to make AYP. Because some grades and/or schools may have subgroups that are too small to reliably measure progress for AYP, states must identify what constitutes a minimum number of students per subgroup to be measured for AYP. For example, in order for students with disabilities to be counted as a subgroup in any given school, the number of identified students with disabilities must meet or exceed that minimum threshold (Harr-Robins, Song, Garet, & Danielson, 2015).

States set proficiency rates for the exams, which serve as the goal posts for AYP. NCLB set consequences for schools that failed to make AYP in consecutive years (Katsiyannis, Zhang, Ryan, & Jones, 2007). After the second year a school fails to make AYP, the school must develop an improvement plan and students receive the right to transfer to a satisfactorily performing school. After three years, low-income students can receive supplemental education services free of charge. After a fourth year of failure to make AYP, corrective measures begin, including reorganization of administration and replacement of staff members, in addition to assistance from external experts. After five years of inability to make AYP, the school must be restructured, by being converted into a charter school, replacing the entire staff, or both.

One of the criticisms of states developing and setting their own standards for AYP is that it results in wide variation across states, wherein some create conceivably very easy goals and others institute very difficult targets (McCombs, Kirby, Barney, Darilek, & Magee, 2004). As a result, some stakeholders became concerned about the potentially negative ramifications of measuring performance of subgroups.

Response to Intervention

The Individuals with Disabilities Education Improvement Act (IDEIA) of 2004 recommended a Response to Intervention (RTI) framework for disability identification and referral (Swanson, Solis, Ciullo, & McKenna, 2012), largely to reduce the number of inappropriate referrals and cultural and linguistic disproportionality in special education (National Center on Response to Intervention, 2010). However, RTI is designed to support all students with a focus on prevention and early identification of academic difficulties, including but not limited to those related to a disability.  

RTI aligns with both NCLB (2001) and IDEIA (2004). NCLB requires that schools use evidence-based instructional practices for all students, and IDEIA further specifies expectations in requiring that evidence-based instructional strategies and interventions be utilized with struggling students and documented before a special education referral is made, as a safeguard to ensure that a lack of satisfactory academic progress did not occur due to inadequate instruction (Swanson et al., 2012). In RTI frameworks, decisions about appropriate support and placement of students are made over an extended period of time and supported by a variety of assessment and observation data, rather than by a time-limited and often decontextualized special education assessment.

The instructional framework for RTI occurs across a multi-tiered system, typically consisting of three tiers (National Center on Response to Intervention, 2010). While there is no single mandated formula, there is a generally agreed upon model (Harvey, Yssel, & Jones, 2015). In Tier 1, evidence-based practices are provided in the general education classroom. RTI research asserts that approximately 80% of students should respond to instruction in the general education classroom provided by a general education teacher. Their growth is evaluated via formative and summative assessments conducted by the general education teacher. In any given classroom, approximately 10%–15% of students will not respond to Tier 1 instruction. In those areas in which students are struggling, they will transition to Tier 2 instruction.

Tier 2 instruction focuses more closely on the specific areas in which students are not making satisfactory progress during Tier 1 instruction, but does not indicate that they have a disability. Instead, it provides an opportunity for small-group instruction tailored to areas of need. Tier 2 instruction is typically provided by the general education teacher, but may be done collaboratively, or exclusively, by the special education teacher depending on the school infrastructure. Tier 2 instruction typically occurs as a small group inside the general education classroom or as a pull-out (Swanson et al., 2012). Monitoring of student progress occurs with much greater frequency. Approximately 5% of students will not respond to, or make satisfactory progress in, Tier 2 instruction and will then transition to Tier 3 instruction.  

Tier 3 instruction is the most intensive, and is typically individualized. Accordingly, progress-monitoring frequency increases. If special education professionals have not yet become involved, they typically do so at Tier 3. Depending on the state, Tier 3 may be a final tier of instructional work prior to referral for special education, or referral may begin at this tier (National Center on Response to Intervention, 2010).

Response to Intervention Preparation and Practice in Reality

Research does not necessarily translate to practice in schools when it comes to implementation and sustainability of RTI without specialized preparation for multi-tiered instruction (Brownell, Sindelar, Kiely, & Danielson, 2010). RTI has dramatically changed the roles and responsibilities of educators, particularly those of the general educator. From 2009 to 2011, APS was newly adopting its RTI policies and practices, and many teachers were unaware of the role they played in this process (E. Harper, personal communication, May 20, 2015). In order for schools to be successful in sustained implementation of RTI, school leaders, teachers, and related service providers require targeted preservice and ongoing professional learning opportunities and support (White, Polly, & Audette, 2012). In addition, there needs to be an overhaul of preservice preparation. Current research indicates that many institutions of higher education may not be providing adequate preparation in RTI across all teacher preservice programs, including but not limited to special education (Harvey et al., 2015).

A 2008 national survey underscored inconsistencies in RTI-related knowledge and implementation across the country in K–12 schools (Hoover, Baca, Wexler-Love, & Saenz, 2008). Forty-four states responded to the survey, and all indicated that there was some degree of RTI implementation occurring in their state. However, of the 44, 17 states responded that less than 10% of districts were engaging in RTI-related activities. Even less is known when trying to ascertain the extent and fidelity of implementation of RTI activities in individual schools (White et al., 2012). Due to limited guidance on implementation, variability in practice, and the high-stakes decisions tied to the process (i.e., differentiated and often targeted instructional supports and special education referral), this is an area ripe for pressures to exploit the process (Harvey et al., 2015).

This scenario can have serious implications for fair education, support, and the avenue to referral for special education services. When teachers and school leaders have limited knowledge and training opportunities on evidence-based practices such as RTI, and school infrastructures are not built to support sustainable practices in this framework, unsuccessful and even detrimental academic and behavior management practices may occur in their place (Alvarez, 2007).



RTI practices in Atlanta mirror the findings from Hoover et al.'s (2008) national survey in that many teachers and school psychologists in APS were unclear about RTI or how it might be implemented. It is unclear from the records what kinds of RTI practices, if any, were in place during the 2009–2011 school years. It was reported that RTI was still being newly implemented in APS, and the procedures were often not clearly understood by classroom teachers. One former school psychologist in APS reported that classroom teachers were unclear about RTI implementation. The skewed statewide assessment results, in conjunction with teachers' lack of knowledge regarding RTI, more than likely yielded negative consequences for students who may have qualified for special education support and/or services (E. Harper, personal communication, May 20, 2015).

Records from the Georgia Department of Education (GDOE, 2010) reveal no specific language around multi-tiered instructional frameworks like RTI. There is only language around referral for services, which would occur after Tier 3. The GDOE records from 2009–2011 state that upon completion of assessments and other measures, a group of qualified professionals and parents known as the Eligibility Team determine whether or not a student qualifies for special education services. Eligibility is determined when the team:

Draw[s] upon information from a variety of sources, including aptitude and achievement tests, parent input, and teacher recommendations, as well as the information about the child's physical condition, social or cultural background, and adaptive behavior. (GDOE, 2010)

In APS, the eligibility team was referred to as the Student Support Team (SST) and typically consisted of the school administrator assigned to lead the SST, the classroom teacher, special education teacher, school psychologist, guidance counselor, and other stakeholders that might be involved in the welfare of the student. The SST used the psychological assessments (including an IQ test) as well as other measures of assessment (e.g., classroom assessments, teacher/psychologist observations, records analysis, and interviews). The team was designed to work together to evaluate all of the data and draw a comprehensive conclusion from multiple measures of success. Given the lack of clarity regarding the implementation of RTI in APS between 2009 and 2011, it is hard to determine the SSTs role in successfully referring students for special education services and/or students who may have qualified if certain measures of assessment had not been altered.


The inclusion of students with disabilities in the classroom is still relatively new, and their inclusion in accountability measures and our reframing of pathways to disability referral, such as the RTI approach, are even more recent. Researchers, policymakers, and practitioners alike are still trying to determine the most effective practices and the ways in which to support practitioners in effective implementation and sustainability. Regardless, the pressure and potential for consequences are very real, as are the ramifications for how educators and administrators feel and choose to behave at work, especially in schools across the country populated with the most vulnerable students (King-Sears & Baker, 2014), such as the Atlanta Public Schools. APS provides a unique opportunity to further explore the consequences of high-stakes accountability, particularly for students with disabilities.


APS is a large urban school district serving nearly 50,000 students in 52 elementary schools, 10 middle schools, nine high school clusters, two single-gender academies, and 14 charter schools (Atlanta Public Schools, 2015). During 2008–2009, the National Center for Education Statistics reported the racial/ethnic makeup of APS as 0.1% American Indian, 0.8% Asian American/Pacific Islander, 4.8% Hispanic, 83.9% Black, and 10.4% White, with 76.3% of the student population on free and reduced-price lunch. Approximately 8.9% of students in APS were on IEPs (National Center for Education Statistics, 2009). In the following sections, we review the timeline of events that led up to the nation's largest known cheating scandal in public school history and discuss the implications of this event for students with disabilities.


The Criterion-Referenced Competency Test (CRCT) was a multiple-choice exam given annually to all students in Georgia between 2000 and 2014, and was used to comply with NCLB mandates. The five tested subjects included reading, English/language arts, math, science, and social studies. Students' scores on this exam fell into one of three categories: (a) does not meet standards, (b) meets standards, or (c) exceeds standards. NCLB mandated that states set, measure, and report the passage rates for all students, including students (a) who are economically disadvantaged, (b) who represent racial and ethnic subgroups, (c) with limited English proficiency, and (d) with disabilities (McDermott, 2011).

Starting in the early 2000s, students' academic performance seemed to increase substantially. For example, in 2009, 46.4% of sixth graders in Fulton County (which includes APS) exceeded benchmark standards/criteria in reading, an increase from the 41.6% reported the previous year. With such a high percentage of students of color and/or lower socioeconomic status, APS's success in raising student achievement gained attention. In fact, APS soon became a national phenomenon that served as a model for "transforming low-performing urban school system[s]" (Judd, 2015).

That same year, the American Association of School Administrators awarded Beverly Hall the 2009 National Superintendent of the Year Award for outstanding gains in test scores and graduation rates (Judd, 2015). However, simultaneously, the Atlanta Journal-Constitution (AJC) began publishing articles suggesting such gains in achievement were statistically impossible. In 2008, Heather Vogell, an investigative journalist for the AJC, and John Perry, one of the nation's leading journalistic database analysts, began their routine look at the CRCT scores and found that the data suggested a surprising number of schools had made gains. Ultimately, when more closely examining the CRCT results, they found that many of these gains were mathematically impossible. For example, at one Atlanta school, 19 of the 19 fifth graders who took the CRCT math retest passed; on average for the rest of the state, only about half of the students passed at each school. The APS school increased an average of 48 points, compared to the state average of 16 points (Perry & Vogell, 2012). Hall was quoted as saying, "[S]keptics didn't believe poor [B]lack children could learn" (Judd, 2015), suggesting such allegations were deficit-oriented and racist. Unfortunately, in the case of APS, the skeptics were right: tests had been altered. It was not the case that the APS students were incapable of learning, but rather that their test scores were not accurate indicators of their ability to demonstrate what they did know on the CRCT assessment. In April of 2015, the Fulton County Superior Court found that teachers and administrators in APS had widely participated in falsifying student exams.


Initial speculation of cheating on the CRCT in APS began in 2001, when the Atlanta Journal-Constitution published CRCT results showing gains and losses that seemed unlikely for a single year (Vogell, 2011). For example, in 2000 only 21% of fourth graders at Dobbs Elementary School passed the math portion of the CRCT, but by the following year, 85% of students passed. This initial story did not receive much media attention, perhaps because it ran shortly after the September 11th terrorist attacks. However, as time went on, reporters at the Atlanta Journal-Constitution continued to question the validity of such drastic gains in test scores. It was the newspaper's stories that laid the groundwork for an investigation (Riley, 2015). Repeated coverage of the alleged cheating prompted then-Governor Sonny Perdue to request an official investigation into the allegations in 2008, an investigation that continued under the next governor, Nathan Deal (2011–present).


In 2011, the Georgia Bureau of Investigation (GBI) issued a report confirming a widespread cheating scandal among teachers, principals, and administrators in APS (Bowers et al., 2011). The investigation initially focused on the high quantity of erasure marks on students' test booklets, but was complemented by nearly 2,000 interviews and analyses of more than 800,000 documents from the 2008–2009 academic school year. In the end, the investigation implicated 178 teachers and principals, 82 of whom confessed to cheating (Vogell, 2011).

The GBI consulted with analysts at McGraw-Hill, who quantified the number of wrong-to-right (WTR) erasures to compare the state average with the average for APS. The WTR erasure counts were standardized to estimate the probability of a school falling outside the normal range of expected values. Schools were expected to fall within three standard deviations of the mean (a threshold corresponding to roughly a one in 370 chance that the erasures occurred by coincidence); 35 school districts in Georgia were flagged with more than 5% of classes falling outside the three-standard-deviation threshold. However, APS exceeded every other district in the state. Bowers et al. (2011) explained that although the erasure analysis only flagged classes that departed from the state average by three or more standard deviations, APS had classes that departed by 20 to more than 50 standard deviations. The report concluded it was virtually impossible for this pattern of WTR answers to occur without some form of human intervention. In addition to the erasure analysis, after a thorough investigation, Bowers, Wilson, and Hyde found that the following events occurred in APS schools due to a culture of intimidation and fear (see Figure 1).

Figure 1. Georgia Bureau investigative report findings

Teachers and administrators erased students' incorrect answers after the test was given and filled in the correct answers

The changing of answers by teachers and administrators was, in some cases, so sophisticated that plastic transparency answer sheets were created to make changing the test answer sheets easier

Changing of answers was often done at weekend gatherings, and in at least one instance at a teacher's home in Douglas County, Georgia

A principal forced a teacher with low CRCT scores to crawl under a table at a faculty meeting

Teachers arranged classroom seating for tests so that lower-performing children could cheat off the higher-scoring students

Children were denied special educational assistance because their falsely reported CRCT scores were too high

Students requested that they be assigned to a certain teacher because that educator was said to cheat

First- and second-grade teachers used voice inflection while reading the test to identify the answer

Teachers pointed to the correct answer while standing at students desks

Teachers gave the answers aloud to students

Some teachers allowed students to change the previous day's incorrect responses after giving them correct answers

Teachers looked ahead in the test booklet to discuss the next day's questions

In one classroom a student sat under his desk and refused to take the test. This child passed

This culture of cheating may have impacted children who were "denied special educational assistance because their falsely-reported CRCT scores were too high" (Bowers et al., 2011, pp. 18–19). It was through this highly publicized report and the accompanying media attention that charges were filed against the APS system and the specific educators indicted.
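The three-standard-deviation flagging rule from the erasure analysis can be sketched in a few lines. This is a minimal illustration using hypothetical counts and class names; the actual McGraw-Hill analysis worked from the real answer documents and its own statistical model.

```python
from statistics import mean, stdev

def flag_outliers(state_counts, class_counts, threshold=3.0):
    """Flag classes whose wrong-to-right (WTR) erasure counts fall more
    than `threshold` standard deviations above the statewide average.
    Under a normal distribution, a two-tailed three-sigma cutoff leaves
    roughly a 1-in-370 chance that a flagged result is coincidental."""
    mu = mean(state_counts)      # statewide average WTR erasures per class
    sigma = stdev(state_counts)  # statewide spread
    return {name: round((count - mu) / sigma, 1)
            for name, count in class_counts.items()
            if (count - mu) / sigma > threshold}

# Hypothetical data: a statewide sample of per-class WTR counts, and two
# classes in one district, one of which is an extreme outlier.
statewide = [4, 5, 6, 5, 4, 6, 5, 4, 5, 6]
district = {"Class A": 5, "Class B": 30}

flagged = flag_outliers(statewide, district)
# Only "Class B" is flagged, at roughly 30 standard deviations above the
# statewide mean; the GBI report found APS classes departing from the
# state average by 20 to more than 50 standard deviations.
```

The key design point is that each class is standardized against the statewide distribution, not against its own district, which is what makes a district-wide pattern of extreme scores so difficult to explain by chance.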


On September 29, 2014, the trial began. On April 1, 2015, the jury found 11 educators, including both teachers and administrators, guilty of artificially inflating test scores by changing answers on the 2009 CRCT. Because bonuses and raises had been awarded to administrators based on test scores, some of the convicted were charged with violating the state's Racketeer Influenced and Corrupt Organizations (RICO) Act, a statute usually associated with organized crime, and could be sentenced to 20–30 years in prison (Mitchell, 2015). Fulton County Superior Court Judge Jerry Baxter ordered immediate jail time, pending sentencing, for 10 of the 11 convicted. On April 14, 2015, eight teachers and administrators were sentenced to between one and seven years in prison. However, on April 30, 2015, Baxter reduced the sentences of those serving seven years in prison to seven years on probation, $10,000 in fines, and 2,000 hours of community service work (Ellis & Lopez, 2015).

More than a dozen former and current APS students testified in the trial. One student, 16-year-old Vantricia Haynes, and her mother shared a story about receiving a letter announcing Vantricia's qualification for the gifted program after she received "exceeding" scores on the CRCT in fourth grade. Vantricia was born with a medical condition that doctors believed would affect her ability to learn, so her mother felt it was odd for her daughter to receive such a letter and declined to enroll her in the gifted program. After fourth grade, however, Vantricia struggled excessively in school, playing catch-up all through middle and high school, often in tears over material she did not understand. In an interview with the AJC, Vantricia shared that her former teacher hinted at correct answers and told students to redo their work. In response to how the cheating scandal impacted her, she stated, "[W]hen you go out into the real world, you really wouldn't know what to do since you have things handed to you all the time." Her mother expressed concern that because of the cheating scandal, her daughter's academic struggles have been hidden (Tagami, 2015). This is just one story highlighting the damaging effects of the APS cheating scandal on students, particularly students with disabilities who did not receive the additional support they likely needed.

The CRCT is not the only source of data examined by the SST, and thus low scores alone should not determine whether a student receives a referral for special education services. However, the GBI report found through extensive interviews that children were "denied special educational assistance because their falsely-reported CRCT scores were too high" (pp. 18–19). We see this in the case of Vantricia, who was referred to the gifted program for her "exceeding" test scores rather than for the special educational services that should have supported the medical condition impacting her ability to learn. So while CRCT scores should not have been the only factor influencing an educator's decision to refer a student to the SST, the GBI finding suggests that the false reporting of those scores may have prevented some students from receiving the additional assessments and services they needed. Additionally, it is difficult to determine the actual number of students who received and/or were referred for special education services in APS throughout the time of the scandal, as "[t]he school district [APS] often failed to comply with Georgia's open records laws, withheld public information and gave false data to an agency of this state" (Bowers, Wilson, & Hyde, 2011, p. 4). These data suggest that APS students' learning and academic needs were not being met, as their eligibility or ineligibility was predicated on false and inaccurate data. This is particularly problematic for vulnerable and historically marginalized student populations who are already at risk of not receiving appropriate instruction. It is our goal to assess the implications of this finding for school-level practitioners working with students with disabilities and for policy development.


On the surface, the APS scandal appears to be a unique situation that demonstrates a failure of leadership, a culture of fear, and a district in peril. In fact, it presents an extreme example of a district that reacted in a predictable way to a system focused exclusively on test scores. With the media, government, and the public watching test scores with bated breath, APS employees knew their futures relied on children performing at ever-higher levels on state exams. The narrowly focused accountability system led APS employees to obsess over test scores. This predictable reaction is likely occurring in other places; indeed, cheating scandals elsewhere (e.g., Columbus, Ohio) share many factors with APS. The National Center for Fair & Open Testing (2015), an organization that is publicly critical of standardized testing, has documented cases of cheating in up to 40 states. Robert Schaeffer, the organization's Public Education Director, also points out what is missed in the media regarding the public's perception of the cheating scandal in Atlanta:

Many policymakers still ignore the most important lesson to be learned from Atlanta. Cheating is an inevitable consequence of the overuse and misuse of standardized exams. Federal, state and local testing policies put intense pressure on teachers, principals and other administrators. They create a climate in which educators believe scores must soar by whatever means necessary, as the Georgia Bureau of Investigation concluded. (National Center for Fair & Open Testing, 2015, para. 6)

In agreement with Schaeffer, we recognize the larger "collateral damage" caused by the high-stakes testing and accountability movement (Nichols & Berliner, 2007). As a consequence of this climate of accountability, the students most affected by the pressures trickling down to meet impossibly high expectations are often those who come from poverty, have special learning needs, and represent a wide array of ethnically and linguistically diverse populations.

While the details of this particular case are important to holding the people involved accountable, the case illustrates some of the major problems with the current accountability structure across the country. We hope to use this case to demonstrate that Atlanta, in many ways, showcases the challenges that many other urban districts in America face in terms of rapidly changing student demographics, policies, and expectations. Thus, the case provides us with an opportunity to reflect on the broader implications for policy and practice in order to help inform future practices that will benefit all students, and the administrators and educators serving them.  

In light of what happened in Atlanta, policymakers must carefully consider, now more than ever, the implications of the measures they choose, and the weight given to those measures, in their evaluation of schools, administrators, and teachers. By moving beyond a heavy emphasis on test scores, evaluations may yield a more representative picture of school performance and environment, and administrators and educators may feel able to focus on the whole of their responsibilities and not just final test scores. Regardless, the high-stakes accountability era in which our public schools exist is likely here to stay. So the questions arise: How do we prepare administrators and educators to effectively meet the needs of all students in this high-pressure, outcomes-oriented environment? And how can this preparation help educators navigate a climate of accountability without succumbing to pressures such as cheating?



During the 2012–2013 school year, there were approximately 6.4 million children with disabilities attending public schools across the United States, representing 13% of students (National Center for Education Statistics, 2015). The largest share of those 6.4 million students, 35%, had a specific learning disability (SLD), "a disorder in one or more of the basic psychological processes involved in understanding or using language, spoken or written, that may manifest itself in an imperfect ability to listen, think, speak, read, write, spell, or do mathematical calculations" (IDEA, 2004). The large proportion of students with SLDs in our public school classrooms represents both a challenge and an opportunity for rethinking how we support school leaders and teachers as they support students taking high-stakes exams and facing the accompanying pressure.

Although students with disabilities represent a large portion of our total student population, research indicates that many administrators and educators at the school level may be underprepared to serve them effectively (DiPaola & Walther-Thomas, 2003; King-Sears, Carran, Dammann, & Arter, 2012; Villani, 2006). As more students with disabilities receive instruction in the general education environment, there are important implications for the preparation of all teachers. This is the work of all administrators and general educators in a school building, not just the special education teacher.  

School Leadership Preparation

School leadership preparation programs often focus on general academics and lack adequate coursework and field-based experiences related to working with students with disabilities (DiPaola & Walther-Thomas, 2003; Villani, 2006). This can create unintentional noncompliance with policy mandates at the school level. School leaders need learning opportunities in this area in order to develop strategies to support and enact an infrastructure that promotes optimal school conditions complying with policy and research-based practices for all students (Grissom & Harrington, 2010).

Teacher Preparation

When it comes to teacher preparation, teacher education programs have historically funneled teaching candidates into separate pathways for general and special education (Pugach & Blanton, 2009). It is imperative that all preservice teachers have the opportunity to engage in coursework that fosters knowledge pertaining to evidence-based practices for working with students with disabilities. Additionally, they must receive field-based opportunities to develop and demonstrate competent practices and dispositions with all students (King-Sears et al., 2012).

As it currently stands, many general education teachers report feeling underprepared to adequately meet the needs of students with disabilities in their classrooms (Laarhoven, Munk, Lynch, Bosma, & Rouse, 2007). For example, many institutions of higher education fall short when it comes to providing content knowledge and field-based practice opportunities for gaining expertise in RTI (Harvey et al., 2015). When teachers enter classrooms feeling underprepared, they are likely to develop a lower sense of self-efficacy, which in turn leads educators to feel that their actions in the classroom are unlikely to lead to successful outcomes (Cook, 2007). We do not have evidence confirming whether the teachers in APS felt equipped to support students with disabilities or knew whether students needed referrals for testing. However, given the existing research on general education teacher preparation, it is likely that they felt underprepared to support students with disabilities and unsure whether their students should be referred for special education. In addition to the culture of fear created in APS, navigating special education policies and evidence-based practices more than likely added pressure for many teachers. For teachers to be successful in the face of high-stakes accountability, they need to be better prepared to understand and implement practices that meet the needs of all students within an RTI framework. Only then can teachers be advocates for their students and be aware of the agency that they hold.

Teachers have the greatest in-school effect on student achievement (Goldhaber, 2002). When a teacher does not feel successful, he or she faces what Osher et al. (2007) refer to as a "burnout cascade," a cycle of deteriorating behavior that ultimately affects both students and teachers. Under such conditions, the teacher becomes reactive and engages in more negative behavior. The implications of this cycle are serious: A teacher may burn out and leave the school. If the teacher stays, his or her performance quality likely diminishes, and the learning environment yields negative effects for students, particularly the most vulnerable students with learning and behavioral difficulties (Jennings & Greenberg, 2009). When historically marginalized children are in classrooms characterized by less support, they experience poorer academic achievement and increased conflict in the classroom. However, when historically marginalized students are placed in classrooms offering strong instructional and emotional support, they can attain academic achievement on par with their classmates (Hamre & Pianta, 2005).


In the case of school leaders and teachers in Atlanta, the stakes and the potential consequences were very real and severe. How adequately prepared did educators feel to support the needs of an increasingly diverse body of students? How confident did educators feel that their instructional strategies would lead to satisfactory test scores for all students? At this moment in time, the answer is: We do not know. In a review of newspaper articles, press interviews, and the Georgia Bureau of Investigation report, we found little to no mention of educators' work with students with disabilities. In order to better understand the ways in which practices with students with disabilities were affected during the cheating scandal, we must return to the source. We recommend that future research seek to understand how confident and competent school leaders and teachers alike felt in helping all students, including those with disabilities, to be successful during the Atlanta cheating scandal.

Educators' responses have important implications for understanding how teachers and school leaders in urban districts across the country may feel as they try to meet the demands of high-stakes accountability, and could inform the creation and direction of future pre- and in-service professional learning opportunities. When school leaders build school infrastructures that are conducive to learning for all students, and general and special educators alike feel well versed in the implementation of evidence-based strategies for all students, we open the doors to success for all.


Aronson, Murphy, and Saultz wrote this paper assuming equal authorship. Author names appear in alphabetical order.


Alvarez, H. (2007). The impact of teacher preparation on responses to student aggression in the classroom. Teaching and Teacher Education, 23, 1113–1126.

Atlanta Public Schools. (2015). About our schools. Retrieved from http://www.atlantapublicschools.us/domain/7725

Bertrand, M., & Marsh, J. A. (2015). Teachers' sensemaking of data and implications for equity. American Educational Research Journal, 52, 861–893.

Blinder, A. (2015, April 1). Atlanta educators convicted in school cheating scandal. New York Times. Retrieved from http://www.nytimes.com/2015/04/02/us/verdict-reached-in-atlanta-school-testing-trial.html?_r=0.

Booher-Jennings, J. (2005). Below the bubble: Educational triage and the Texas accountability system. American Educational Research Journal, 42(2), 231–268.

Bowers, M. J., Wilson, R. E., & Hyde, R. L. (2011). Report of the overview and findings of the special investigatory committee set up to investigate cheating in Atlanta Public Schools (Vols. I–III). Office of the Governor, Special Investigations. Retrieved from http://graphics8.nytimes.com/packages/pdf/us/Volume-1.pdf


Brownell, M. T., Sindelar, P. T., Kiely, M. T., & Danielson, L. (2010). Special education teacher quality and preparation: Exposing foundations, constructing a new model. Exceptional Children, 76, 357–377.

Cook, L. (2007). When in Rome…: Influences on special education student teachers' teaching. International Journal of Special Education, 22, 119–130.

Dee, T., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3), 418–446.

DiPaola, M. F., & Walther-Thomas, C. (2003). Principals and special education: The critical role of school leaders (COPPSE Document No. IB-7). Gainesville, FL: University of Florida, Center on Personnel Studies in Special Education.

Duncan, A. (2010, July). Unleashing the power of data for school reform. Paper presented at the STATS-DC 2010 National Center for Education Statistics Data Conference, Bethesda, MD.

Ellis, R., & Lopez, E. (2015, April 30). Judge reduces sentences for 3 educators in Atlanta cheating scandal. CNN.  Retrieved from www.cnn.com/2015/04/30/us/atlanta-schools-cheating-scandal/

Figlio, D., & Kenny, L. (2007). Individual incentives and student performance. Journal of Public Economics, 91(5–6), 901–914.

Georgia Department of Education. (2010). Eligibility determination and categories of eligibility. Retrieved from https://www.gadoe.org/Curriculum-Instruction-and-Assessment/Special-Education-


Goldhaber, D. (2002). The mystery of good teaching: Surveying the evidence on student achievement and teachers' characteristics. Education Next, 2, 50–55.

Grissom, J. A., & Harrington, J. R. (2010). Investing in administrator efficacy: An examination of professional development as a tool for enhancing principal effectiveness. American Journal of Education, 116, 583–612.

Hamre, B. K., & Pianta, R. C. (2005). Can instructional and emotional support in the first-grade classroom make a difference for children at risk of failure? Child Development, 76, 949–967.

Haney, W. (2000). The myth of the Texas miracle in education. Education Policy Analysis Archives, 8(41). Retrieved from http://epaa.asu.edu/ojs/article/view/432/828

Harr-Robins, J., Song, M., Garet, M., & Danielson, L. (2015). School practices and accountability for students with disabilities (NCEE 2015-4006). Washington, DC: Institute of Education Sciences.

Harvey, M. W., Yssel, N., & Jones, R. E. (2014). Response to Intervention preparation for preservice teachers: What is the status of Midwest institutions of higher education? Teacher Education and Special Education, 38, 105–120.

Hing, J. (2011, August 15). Cheating Atlanta schools received $500k in bonuses, what now? Colorlines News for Action. Retrieved from http://colorlines.com/archives/2011/08/cheating_atlanta_schools_received_500k_in_bonuses_what_now.html

Hoover, J. J., Baca, L., Wexler-Love, E., & Saenz, L. (2008). National implementation of Response to Intervention (RTI): Research summary. Retrieved from www.nasde.org/Portals/0/NationalImplmentationofRTI-ResearchSummary.pdf

Hursh, D. (2007). Exacerbating inequality: The failed promise of the No Child Left Behind Act. Race Ethnicity and Education, 10, 295–308.

Individuals with Disabilities Education Act of 2004, Public Law 108–446, 118 Stat. 2658 (2004).

Jennings, P. A., & Greenberg, M. T. (2009). The prosocial classroom: Teacher social and emotional competence in relation to student and classroom outcomes. Review of Educational Research, 79, 491–525.

Judd, A. (2015, April 4). APS cheating case: From first hint of scandal to jury verdict. Atlanta Journal-Constitution. Retrieved from http://www.ajc.com

Katsiyannis, A., Zhang, D., Ryan, J. B., & Jones, J. (2007). High-stakes testing and students with disabilities: Challenges and promises. Journal of Disability Policy Studies, 18, 160–167.

King-Sears, M. E., & Baker, P. H. (2014). Comparison of teacher motivation for mathematics and special educators in middle schools that have and have not achieved AYP. International Scholarly Research Notices: Education, 2014, 1–14.

King-Sears, M. E., Carran, D. T., Dammann, S. N., & Arter, P. S. (2012). Multi-site analyses of special education and general education student teachers' skill ratings for working with students with disabilities. Teacher Education Quarterly, 39, 131–149.

Koretz, D. M., & Barton, K. (2003). Assessing students with disabilities: Issues and evidence (CSE Technical Report 587). Los Angeles, CA: University of California, Center for the Study of Evaluation, National Center for Research on Evaluation, Standards, and Student Testing.

Laarhoven, T., Munk, D., Lynch, K., Bosma, J., & Rouse, J. (2007). A model for preparing special and general education preservice teachers for inclusive education. Journal of Teacher Education, 58, 440–455.

Ladd, H. (1999). The Dallas school accountability and incentive program: An evaluation of its impact on student outcomes. Economics of Education Review, 18(1), 1–16.

Lavigne, A. L. (2014). Exploring the intended and unintended consequences of high-stakes teacher evaluation on schools, teachers, and students. Teachers College Record, 116(1), 1–29.

Lavy, V. (2002). Evaluating the effect of teachers' group performance incentives on student achievement. Journal of Political Economy, 110(6), 1286–1317.

Manna, P. (2010). Collision course: Federal education policy meets state and local realities. Washington, DC: CQ Press.

McCombs, J., Kirby, S. N., Barney, H., Darilek, S., & Magee, S. J. (2004). Achieving state and national literacy goals: A long uphill road. New York, NY: Rand Corporation for the Carnegie Foundation.

McDermott, K. (2011). High-stakes reform: The politics of educational accountability. Washington, DC: Georgetown University Press.

McLaughlin, M. J. (2010). Evolving interpretations of educational equity and students with disabilities. Exceptional Children, 76, 265–278.

McNeil, L. (2002). Contradictions of reform: Educational costs of standardized testing. New York, NY: Routledge.

Mehta, J. (2013). The allure of order. New York, NY: Oxford.

Mitchell, C. (2015, April 2). Atlanta educators convicted in test-cheating trial. Education Week. Retrieved from http://www.edweek.org/ew/articles/2015/04/02/atlanta-educators-convicted-in-test-cheating-trial.html

National Center for Education Statistics. (2009). Table A9. Racial/ethnic compositions as a percentage of public elementary and secondary school district membership and free and reduced-price lunch eligibility in the 100 largest school districts in the United States and jurisdictions, by school district: School year 2008–2009. Retrieved from http://nces.ed.gov/pubs2010/100largest0809/tables/table_a10.asp

National Center for Education Statistics. (2009). Table A10. Number of students in public elementary and secondary schools in the 100 largest school districts in the United States and jurisdictions, by types of school and number and percentage of students with Individualized Education Programs (IEPs): School year 2008–2009. Retrieved from http://nces.ed.gov/pubs2010/100largest0809/tables/table_a10.asp

National Center for Education Statistics. (2015). Children and youth with disabilities. Retrieved from https://nces.ed.gov/programs/coe/indicator_cgg.asp

National Center for Fair & Open Testing. (2015, April 6). Bob Schaeffer of FairTest on Atlanta Cheating Scandal. Retrieved from http://dianeravitch.net/2015/04/06/bob-schaeffer-of-fairtest-on-atlanta-cheating-scandal/

National Center on Response to Intervention. (March 2010). Essential components of RTI: A closer look at Response to Intervention. Washington, DC: U.S. Department of Education, Office of Special Education Programs, National Center on Response to Intervention.

Nichols, S. L., & Berliner, D. C. (2007). Collateral damage: How high-stakes testing corrupts Americas schools. Cambridge, MA: Harvard Education Press.

Osher, D., Sprague, J., Weissberg, R. P., Axelrod, J., Keenan, S., Kendziora, K., & Zins, J. E. (2007). A comprehensive approach to promoting social, emotional, and academic growth in contemporary schools. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., Vol. 5, pp. 1263–1278). Bethesda, MD: National Association of School Psychologists.

Perry, J., & Vogell, H. (2012, March 26). Surge in CRCT results raises big red flag. Atlanta Journal-Constitution. Retrieved from http://www.myajc.com/news/news/local/surge-in-crct-results-raises-big-red-flag-1/nQSXD/

Pugach, M., & Blanton, L. P. (2009). A framework for conducting research on collaborative teacher education. Teaching and Teacher Education, 25, 575–582.

Ravitch, D. (2010). The death and life of the great American school system: How testing and choice are undermining education. New York, NY: Basic Books.

Richards, J. S. (2014, October 9). Columbus schools taking back bonuses based on cheating. The Columbus Dispatch. Retrieved from http://www.dispatch.com/content/stories/local/2014/10/08/Columbus_schools_taking_back_bonuses_from_schools_that_cheated.html

Riley, K. (2015, April 4). Seeing truth through to the end. Atlanta Journal-Constitution. Retrieved from http://www.ajc.com

Schaeffer, B. (2015, April 6). Bob Schaeffer of FairTest on Atlanta cheating scandal. [Web blog post]. Retrieved from http://dianeravitch.net/2015/04/06/bob-schaeffer-of-fairtest-on-atlanta-cheating-scandal/

Schulte, A. C., & Villwock, D. N. (2004). Using high-stakes tests to derive school-level measures of special education efficacy. Exceptionality, 12, 107–126.

Springer, M. G., Hamilton, L., McCaffrey, D. F., Ballou, D., Le, V. N., Pepper, M., & Stecher, B. M. (2013). Teacher pay for performance: Experimental evidence from the project on incentives in teaching (Executive summary). Nashville, TN: National Center on Performance Incentives.

Swanson, E., Solis, M., Ciullo, S., & McKenna, J. W. (2012). Special education teachers' perceptions and instructional practices in response to intervention implementation. Learning Disability Quarterly, 35, 115–126.

Villani, S. (2006). Mentoring and induction programs that support new principals. Thousand Oaks, CA: Sage.

Vogell, H. (2011, July 26). Investigation in APS cheating finds unethical behavior across every level. Atlanta Journal-Constitution. Retrieved from http://www.ajc.com/news/news/local/investigation-into-aps-cheating-finds-unethical-be/nQJHG/

West, J. E., & Schaefer-Whitby, P. J. (2008). Federal policy and the education of students with disabilities: Progress and the path forward. Exceptional Children, 41, 1–16.

White, R. B., Polly, D., & Audette, R. H. (2012). A case analysis of an elementary school's implementation of Response to Intervention. Journal of Research in Childhood Education, 26, 73–90.

Wong, M., Cook, T. D., & Steiner, P. M. (2015). Adding design elements to improve time series designs: No Child Left Behind as an example of causal pattern-matching. Journal of Research on Educational Effectiveness. Retrieved from


Yell, M. L., Katsiyannis, A., & Shiner, J. G. (2006). The No Child Left Behind Act, adequate yearly progress, and students with disabilities. Teaching Exceptional Children, 38, 32–39.

Zhao, Y. (2009). Catching up or leading the way: American education in the age of globalization. Alexandria, VA: ASCD.

Cite This Article as: Teachers College Record Volume 118 Number 14, 2016, p. 1-26
https://www.tcrecord.org ID Number: 21552, Date Accessed: 10/23/2021 2:34:46 PM

About the Author
  • Brittany Aronson
    Miami University
    E-mail Author
    BRITTANY ARONSON earned her doctorate in 2014 in Learning Environments and Educational Studies. She currently serves as a Visiting Assistant Professor at Miami University and teaches undergraduate and graduate coursework in teacher leadership and multicultural education. She recently published “Culturally Relevant Education: A Synthesis Across Content Areas” in Review of Educational Research. Her research interests include critical teacher preparation, social justice education, multicultural education, and educational policy.
  • Kristin Murphy
    University of Massachusetts Boston
    E-mail Author
    KRISTIN M. MURPHY is an Assistant Professor of Special Education at University of Massachusetts Boston. Her research interests include enactment of policy in exclusionary schools including correctional, hospital, and other alternative settings, and professional learning opportunities for urban school leaders and teachers. She recently served as a co-author on “Responsibilities and Instructional Time: Relationships Identified by Special Educators in Self-Contained Classes for Students with Emotional and Behavioral Disabilities,” published in Preventing School Failure.
  • Andrew Saultz
    Miami University
    E-mail Author
    ANDREW SAULTZ is an Assistant Professor of Educational Leadership at Miami University. His research focuses on how school accountability impacts the relationships between levels of government, the teacher labor market, and parental satisfaction with schools. He recently served as a co-author on articles including “Parent Trigger Policies, Representation, and The Public Good,” published in Theory and Research in Education, and “Exploring the Supply Side: Factors Related to Charter School Openings in NYC,” published in Journal of School Choice.