
Moving Children, Distorting Data: Changes in Testing of Students With Disabilities in Connecticut from 2000–2013


by Robert Cotto Jr. - 2016

Connecticut experienced two major changes in testing policy for children with disabilities that played a major role in conclusions about educational progress in the state. First, the No Child Left Behind Act (NCLB) of 2001 required that all students with disabilities participate in grade-level, standardized tests. This movement of students deepened a crisis of stagnant and disparate achievement indicators. Policy reversed in 2007, when the federal Department of Education opened the door for modified assessments based on grade-level content and standards. This reversal excluded many students with disabilities from the standard tests and temporarily resolved the crisis by artificially inflating test results in math and reading. This article provides an overview of testing data from the Connecticut State Department of Education within its historical context. Fluctuations in standard test participation often tracked closely with overall results and produced misinterpretations of educational and racial progress over time. Responses to these changes in testing policy make Connecticut an illuminating case regarding the problem of high-stakes testing and changes in policies for students with disabilities in a state characterized by deep racial and economic inequity. Rather than raising questions, moving children helped reinforce the legitimacy of high-stakes testing and nationally touted educational reforms.

INTRODUCTION


In October 2005, Connecticut's Commissioner of Education, Betty J. Sternberg, reflected on apparent declines in reading test results. She told The Hartford Courant, "It raises a red flag for me" (Frahm, 2005). The following year, a national think tank speculated that this decline resulted from Connecticut's resistance to requirements of the No Child Left Behind Act of 2001 (Frahm, 2006). Indeed, Connecticut had recently sued the federal government over parts of the NCLB law, particularly over the testing of children with disabilities and emergent bilingual students and the requirement for yearly testing (Blumenthal, 2006). However, Sternberg argued that the actual reason for mixed test results was a tremendous increase in the number of children taking the tests, particularly children with disabilities and emergent bilingual students. These disputes faded as Connecticut's outlook changed with its revised 2007 policy that allowed students with disabilities to take modified assessments in reading and math. As a result, Connecticut's average test performance increased, particularly in its cities with large proportions of Black, Puerto Rican, and Latino students.


In this article I examine changes in testing policy for children with disabilities and their role in conclusions about educational progress in Connecticut. As the details above suggest, the 2000s saw two major policy changes in the testing of children with disabilities that resulted in shifting (and distorted) conclusions about public education. First, the NCLB Act of 2001 required that all students with disabilities participate in grade-level, standardized tests. This policy was reversed in 2007, when the federal Department of Education opened the door for modified assessments based on grade-level content and standards. The resulting fluctuations in test participation produced test results that were misleading about Connecticut students' educational progress over the last decade. Premature conclusions, both negative and positive, about Connecticut's public schools and reforms had state and national implications. The responses to these changes in testing policy make Connecticut an illuminating case regarding the problem of high-stakes testing as a means toward educational progress for marginalized children, particularly those with disabilities.


As special education and testing policies changed over the course of 2000–2010, the resultant movement of children (most of whom were poor and of color) from grade-level testing to accommodated/alternative testing dramatically distorted overall state results. The two major movements of children with disabilities on and off the standard tests had opposite consequences. Moving children with disabilities into the pool of regular standardized test takers deepened a crisis of stagnant and disparate achievement indicators early in the decade. When children with disabilities moved off the standard tests, the inflated results temporarily resolved that crisis. Moving children with disabilities distorted data so profoundly that it reinforced the legitimacy of high-stakes testing and related reforms as a method to improve achievement and public education in Connecticut.


EXAMINING NCLB, HIGH-STAKES TESTING, AND IMPLICATIONS FOR STUDENTS WITH DISABILITIES


A primary goal of NCLB was to close the achievement gap between affluent and low-income children, as well as between White children and children of color. The law aimed to reduce disparities in achievement test results through "accountability, flexibility, and choice" (No Child Left Behind Act, 2001). By requiring that every student participate in standardized tests and attaching penalties for not meeting group targets, the law purported to leave no child behind academically. Punishments for failure and school choice would force educators to get all children to reach proficiency on standardized tests (Nichols & Berliner, 2007; Nichols, Glass, & Berliner, 2006, 2012). The law also represented a major shift from state to federal control of education. Although this type of federal involvement was novel, high-stakes testing was not: NCLB came after experiments with high-stakes testing in places across the country such as Chicago and Texas (Cuban, 2010; Lipman, 2004; Valenzuela, 2005). The promise of universally high achievement would be met through these "common sense" reforms (Apple, 2006). Higher test results indicated improvement, while stagnant or declining measures indicated a lack of improvement. With overwhelming bipartisan support in the U.S. Congress, the NCLB Act was enacted in 2002.


The premise of NCLB was simple: if all children of different racial and ethnic backgrounds, including children with disabilities, were held to the same standards on state testing systems, and penalties were attached to the test results, then schools and districts would improve all students' achievement (Kennedy, 2006). To accomplish this, NCLB also required that states implement assessment systems that tested all children in grades 3–8 and 10. This requirement included the testing of children with disabilities and emergent bilingual students, known over the years as Limited English Proficient (LEP) and, most recently, English Language Learners (ELL) (Bartlett & Garcia, 2011; No Child Left Behind Act, 2001). While disability advocates and scholars raised questions about the law, there was a general sense that this change would benefit traditionally underserved students, particularly those with disabilities, who had long been excluded from various aspects of public education (Hehir, 2006). In a way, NCLB continued the trajectory toward the fuller inclusion of children with disabilities in all educational opportunities in public schools. For disability advocates, including students with disabilities in mass testing systems might secure their full inclusion in learning opportunities and a high-quality education.


Many disability scholars argue in favor of high-stakes testing and standards-based reforms as they relate to the educational experiences of students with disabilities (Hehir, 2006; Thurlow, 2002). For example, many argue that the mandate to include students with disabilities in the state test system is a step toward promoting overall inclusion (McGrew, Thurlow, & Spiegel, 1993). Others examine how test accommodations or alternate or modified assessments for students with disabilities support their learning. These concerns include the design of modified or alternate assessments and accommodations (Elliot, Kettler, & Roach, 2008), procedures for the selection of accommodations and assessments (Katsiyanis, Zhang, Ryan, & Jones, 2007), and access to high-quality curriculum and instruction (Roach & Elliot, 2006). In short, many remain committed to including students with disabilities in high-stakes standardized testing systems, as long as there are modifications to participation.


Despite this support for high-stakes testing systems, many scholars remain concerned. Hehir (2006) acknowledged that standards-based reforms with resource deficiencies could be detrimental to opportunities to learn. Other scholars have raised questions about the unintended consequences and challenges of implementing high-stakes testing systems for children with disabilities (Browder et al., 2005; Cawthon, 2007; Langenfeld et al., 1996; Nagle, Yunker, & Malmgren, 2006), as well as for all students (Berliner, 2011; Darling-Hammond & Vasquez-Heilig, 2008). A particularly interesting line of reasoning has explored whether the standardization associated with the NCLB Act can ever be compatible with the individualization of the Individuals with Disabilities Education Act of 2004 (Annino, 2007). In rare instances, disability scholars have raised critical questions about the basic premise of high-stakes testing regimes: the idea that more difficult standards, or learning goals, along with mass standardized tests and threats of penalties will, by themselves, improve educational outcomes for students in general and for students with disabilities in particular.


TESTING OF CHILDREN WITH DISABILITIES IN CONNECTICUT: POLICY AND PRACTICE


The Northeast, including Connecticut, had among the highest overall achievement results in the United States on the National Assessment of Educational Progress (NAEP) when Congress passed the NCLB Act (U.S. Department of Education, 2001). However, like several other Northeastern states, Connecticut's neighborhoods, and therefore its schools, are highly segregated by socioeconomic status and race (Dougherty, 2011). The state's predominantly White and wealthy suburbs contrast with its major cities, which house mostly Black, Puerto Rican, and Latino residents who are among the lowest in the nation in median income (CT DECD, 2013; Sacks, 2008). In Connecticut, as in the rest of the nation, growing disparities in achievement and educational opportunity between more affluent White students and generally lower-income students of color resulted in demands for improvement from above and below over the last 30 years (Dougherty, 2011).


Before the NCLB Act, Connecticut had near-universal standardized testing for students in grades 4, 6, 8, and 10 (Sternberg, 2006). These tests took place in September of each school year, and the Connecticut State Department of Education (CT SDE) reported the data later in the school year. Test results in English/language arts, math, and writing were disaggregated by student demographic group even before the NCLB Act required it. Children with disabilities participated in regular state testing in Connecticut with accommodations and, where relevant, participated in alternative assessments.


OUT-OF-GRADE LEVEL TESTING IN CONNECTICUT PRIOR TO NCLB


Many states, including Connecticut, offered an entirely different version of their standardized tests, called an Out-of-Level (OOL) test. The OOL test might include the typical accommodations, but the most important difference was that it tested students on content and skills below the grade level in which they were enrolled at the time of the test. For example, a sixth-grade child with a disability could have taken an OOL test that assessed learning of content at the fourth-grade level. This alternative assessment could be viewed as both an accommodation and an entirely different test. Presumably, instruction and curriculum for students matched this OOL test selection. If a student's Individualized Education Program (IEP) recommended off-grade-level standards, then the student would be assessed on the corresponding OOL test: the hypothetical sixth-grade student whose IEP recommended the fourth-grade OOL test would be taught and assessed on fourth-grade content. Because students took the standard or OOL tests in September, the information gathered from them could potentially inform instruction for at least part of the school year.


For legitimate reasons, disability scholars questioned this practice (Minnema & Thurlow, 2003). A key issue was the practice of testing students on content grades below their actual classroom grade (e.g., Theoharis, Causton, & Tracy-Bronson in this volume). This was a problem because students expected to take a test below their actual grade level would likely receive classroom instruction on very different content and skills (Connecticut State Department of Education, 2004; Minnema & Thurlow, 2003). In 2002, Connecticut was one of 14 states that offered an OOL test for students with disabilities (Minnema & Thurlow, 2003). Although they made up only a small portion of the state's overall population, students taking the OOL test were arguably those most in need of academic support.


In terms of reporting, achievement data for children with disabilities participating in the Connecticut OOL tests remained separate from the overall results of the standard tests. The standard and OOL test data were both visible in state participation reports (Connecticut State Department of Education, 2004), but each test had separate reports and information. The benefit of separating the results was that different tests would not be inappropriately compared with each other. The major drawback was that results of the standard tests did not account for the small but meaningful portion of students with disabilities taking the OOL tests. Without careful consideration of both standard and OOL test results as parts of a whole testing system, the public could not see the full picture of achievement on tests in Connecticut. Because the State Department of Education reported the standard and OOL information separately, it understood the distinction; the public could not as easily see it.


This testing system in Connecticut, including the provisions for students with disabilities, did not satisfy the requirements of NCLB. The law required all students, including those with disabilities, to take the standard, grade-level assessments every year in grades 3 through 8, as well as in grade 10. For Connecticut, NCLB meant eliminating the OOL tests for students with disabilities and placing those students on standard tests. Eliminating the OOL tests also meant blending the previously separate test results of two groups of students: those taking the standard tests and those taking the OOL tests. Importantly, there is evidence that this shift began in Connecticut immediately before the NCLB Act, perhaps in anticipation of a change in testing policy. However, the movement of students with disabilities from the OOL to standard tests accelerated after the law passed. The law definitively ended the official policy of teaching students with disabilities off-grade-level content and administering OOL tests to them.


ENTER NCLB AND CONNECTICUT'S REACTION


The state's Attorney General, Richard Blumenthal, and former Commissioner of Education, Betty Sternberg, disagreed with the federal law's new testing requirements. They expressed their respective opinions in separate essays in the Harvard Educational Review. In their view, the NCLB law was not an improvement on Connecticut's state testing system, which Sternberg (2006) called a "gold standard" in testing. First, Sternberg and Blumenthal objected to the financial and educational consequences of testing all students in the designated grades each year. Second, they raised concerns about the use of standard, grade-level assessments for children with disabilities and emergent bilingual children (Blumenthal, 2006).


According to Sternberg, Connecticut's progressive system of standards-based testing was a better option than the NCLB Act's requirements. The Connecticut system of biennial formative testing early in the academic year was preferable to yearly, summative testing of all students at the end of the school year. Teachers and schools could use formative tests to modify and tailor instruction for students, whereas summative tests could not be used for this purpose because their results were reported after the close of the school year. Furthermore, Connecticut's standardized tests allowed open-ended responses, so they were presumably higher-quality assessments than those recommended by the U.S. Department of Education.


Sternberg acknowledged the common achievement disparities for children along lines of race, class, and disability. However, the "gold standard" approach of testing students out of their grade level arguably left students with disabilities at a perpetual academic disadvantage: they could spend substantial amounts of time learning and testing on content below their actual grade level. Sternberg never addressed this point in her essay, but she rejected the idea that more testing would reduce academic disparities. The technical issues around testing concerned Sternberg more than the high stakes of testing per se. For some groups, this message could have read as an endorsement of the status quo rather than urgency to address the needs of disadvantaged students. Unlike Sternberg, Blumenthal provided a clear example of his concern about NCLB for children with disabilities. He stated:


A tenth grade special education student who is learning fractions and decimals, for example, should not be required to take an algebra test, but that is exactly what federal officials demand. Such a requirement provides no benefit and skews results, undermining the tests' very purposes of assessing students' progress and identifying academic weaknesses. (Blumenthal, 2006, p. 567)


Taken together, Blumenthal and Sternberg suggested that individual students' academic needs (e.g., the Individualized Education Program) should guide decisions about testing rather than a blanket requirement for grade-level testing. For them, providing the option of OOL or standard testing best responded to the individual needs of students with disabilities. Writing in 2006, they also knew firsthand that combining the results of all students skewed individual and overall results in Connecticut. In spite of these concerns, Connecticut lost its lawsuit against the federal government. The decision sealed Connecticut's move toward the near-complete inclusion of students with disabilities in standard assessments and reporting, and the state moved forward with yearly mass testing.


MOVING STUDENTS WITH DISABILITIES AND A DEEPENING EDUCATIONAL CRISIS


At the beginning of the decade, overall test results were mixed as children with disabilities increasingly participated in the standard Connecticut Mastery Test (CMT) rather than the OOL tests. Public data from successive cohorts of students taking the CMT version 3 (2000–2004) and version 4 (2006–2013) provided a nearly uninterrupted record of participation and achievement on these state tests. A review of data for successive cohorts of students in grades 4, 6, and 8 over five years showed a close relationship between participation and the percentage of children at or above the proficient level in some subjects and grades, but not all. The very close relationship between participation and proficiency rates suggests that changes in the participation of students with disabilities likely affected proficiency rates. As more students with disabilities participated in the standard tests above their instructional level, Connecticut's average proficiency rates often declined or stagnated. The meaning of these state averages was muddled by an overrepresentation of students taking tests for which they were not prepared or equipped. Then, between 2004 and 2008, nearly every student in the state participated in the standard tests. At the same time, overall achievement data appeared stagnant. This state of affairs deepened a perceived educational crisis.


FROM OUT-OF-LEVEL TO STANDARD TESTS FOR STUDENTS WITH DISABILITIES: 2000–2004


From 2000 to 2004, a smaller percentage of children took the Out-of-Level tests compared to the standard state assessments (CMT 3). In 2000, roughly one third of children with a disability in grades 4, 6, and 8 took an OOL test. The remaining two thirds of all students with disabilities took the standard assessments in reading, math, and writing, with a smaller portion taking a Skills Checklist. At the highest point of participation, students with disabilities taking the OOL tests accounted for approximately 3% of all students in Connecticut in each tested grade (Connecticut State Department of Education, 2004).


Within only a few years, participation of students with disabilities shifted rapidly to the standard tests. Connecticut's final administration of the OOL tests was in the 2003–2004 school year; the CT SDE eliminated the OOL tests for students with disabilities in 2004–2005, primarily in response to the NCLB Act's requirements. Participation shifted from roughly 60% to more than 90% of all children with disabilities taking the standard, grade-level CMT (Connecticut State Department of Education, 2004). The 2004–2005 school year marked a testing milestone in Connecticut: for the first time, virtually every student participated in the state's standard assessments, and less than 1% of all students participated in a Skills Checklist assessment reserved for a small portion of students with the most severe disabilities (see Table 1).


The state's data reporting and disclosure policies prevented any calculation of the precise impact of moving these students with disabilities, or any description of their demographic profile. Based on the relatively lower participation rates of Black and Latino students in the standard tests, it was likely that these groups of students were disproportionately represented among those taking the OOL tests (Connecticut State Department of Education, 2004). Since administrators and teachers selected these students for the OOL tests, it was also likely that these students were among those most in need of support. Without more detailed state information, however, it was difficult to answer many basic questions about the students who participated in the OOL tests and then moved to the standard tests.


Table 1. Participation of Students with Disabilities in Fourth-Grade CMT Math

CMT 3 Participation (%)      2000     2002     2004
Standard CMT                 63.6     78.2     92.8
Absent                       --       0.8      0.3
No Valid Score               1.1      0.5      0.7
Skills Checklist             7.3      5.8      6.1
Out of Level (Grade 2)       27.3     14.4     --
Modified Assessment          --       --       --
ELL Exempt*                  0.7      0.1      --
Total                        100      99.8     99.9

Note. Adapted from Connecticut Mastery Test 3 participation reports. Dashes indicate categories not offered or reported in that year.


Importantly, comparing different groups of students in the same grade across years on a pass/fail metric is fraught with potential problems. Comparing successive cohorts of students is a flawed feature of NCLB high-stakes testing because the composition of cohorts can quickly change. Also, NCLB used a single cut-off point (e.g., the proficient level) to identify success or failure (Ho, 2009). Comparing different groups of students in the same grade on the proficient metric from year to year compounded the issues with both approaches. Connecticut's testing system experienced a compositional effect (Koretz, 2008) as it measured success on a pass/fail basis. Koretz defined compositional effects as "changes in performance arising from changes in the composition of the tested group" (p. 87). As an example of a compositional effect, Koretz described a school district where students with disabilities, who previously took separate assessments, were suddenly required to take the same regular tests as an otherwise constant population of students. With all other factors remaining equal, Koretz (2008) predicted a decline in overall test results from this introduction of students with disabilities. The change in test results would indicate "a change in the selection of students who were tested" (p. 87) rather than a decline in learning or the effectiveness of schools.
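Koretz's compositional effect can be illustrated with a few lines of arithmetic. The sketch below uses entirely hypothetical counts, not Connecticut data, to show how an overall proficiency rate can fall when a lower-scoring group joins the tested pool even though no group's own performance changes.

```python
# A minimal sketch of a "compositional effect" (Koretz, 2008): the overall
# proficiency rate falls when a lower-scoring subgroup joins the tested
# pool, even though no group's performance changes. All counts hypothetical.

def proficiency_rate(proficient, tested):
    """Percent of tested students scoring at or above proficient."""
    return 100.0 * proficient / tested

# Year 1: students with disabilities (SWD) take a separate (OOL) test.
general_tested, general_proficient = 40000, 30000   # 75% proficient

# Year 2: 1,200 SWD move onto the standard test; suppose 300 score proficient.
swd_tested, swd_proficient = 1200, 300              # 25% proficient

before = proficiency_rate(general_proficient, general_tested)
after = proficiency_rate(general_proficient + swd_proficient,
                         general_tested + swd_tested)

print(f"Before: {before:.1f}%  After: {after:.1f}%")
# Before: 75.0%  After: 73.5% -- a drop driven purely by who was tested.
```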


COMPARING PARTICIPATION AND PROFICIENT RATES: 2000–2004


In several grades and subjects, test results appeared worse when comparing successive cohorts of students from the start to the end of this period. The percentage of students at or above the proficient level was lower in 2005 than in 2000 for both the fourth and eighth grades on the standard math and reading tests (Connecticut State Department of Education, 2004). Thus, comparing the proficiency rates of fourth-grade students in 2000 to fourth-grade students in 2005 shows a potentially troubling trend. For instance, 82% of all students scored at or above the proficient level on the standard eighth-grade math test in 2000; in 2005, 78.9% of eighth-grade students were at or above proficient on the standard test.


On the other hand, these types of comparisons could appear positive. For example, the percentage of students at or above the proficient level was higher in 2005 compared to 2000 in sixth-grade math and reading, and fourth- and eighth-grade writing. Comparing successive cohorts meant comparing entirely different groups of students in the same grade from one year to the next. Depending on the grade and subject, comparisons could be positive or negative. Nevertheless, this was the same method that NCLB prescribed to measure educational progress.


From 2000 to 2005, in some grade levels and for some subjects, test participation was predictive of proficiency rates. For example, there was a moderate to strong negative correlation between the percentage of all students participating in the standard test and the percentage of students at or above the proficient level on the fourth- and eighth-grade math and reading tests (see Table 2). However, this negative correlation between percent participation and percent at or above the proficient level was only significant for the fourth-grade reading (r = -0.898, p = 0.038) and math (r = -0.962, p = 0.009) tests. This strong association was part of a frequent but not universal pattern: an inverse relationship between participation and proficiency rates.


Table 2. Correlations Between Participation Rate and Percentage At/Above Proficient Level: 2000–2004

CMT 3
Grade     Reading     Math        Writing
4         -0.898*     -0.962**    0.550
6         0.513       0.889*      0.606
8         -0.598      -0.571      0.392

*p < .05. **p < .01. ***p < .001
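For readers who want to reproduce this kind of analysis, each cell is a simple Pearson correlation over five year-level data pairs. The sketch below uses made-up participation and proficiency values as stand-ins; the real figures come from CT SDE participation and performance reports.

```python
# Pearson correlation between standard-test participation and the percent
# of students at or above proficient, over five test years (n = 5 pairs).
# The values below are hypothetical stand-ins for CT SDE report data.
from scipy.stats import pearsonr

participation = [85.2, 88.9, 91.5, 95.1, 97.4]  # % taking the standard test
proficient = [70.1, 68.9, 67.2, 65.0, 63.8]     # % at or above proficient

r, p = pearsonr(participation, proficient)
print(f"r = {r:.3f}, p = {p:.3f}")  # strongly negative with these inputs
```

With only five pairs, a correlation must be very strong to reach statistical significance, which is why several of the moderate correlations in Table 2 are not starred.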


For other subjects and grades, the relationship between participation and proficiency rates pointed the other way. There was a moderate to strong positive correlation between the percentage of all students participating in the standard test and the percentage at or above the proficient level during these five years for sixth-grade reading and math, as well as for the writing tests in all three tested grades. Additionally, there was a large, statistically significant positive correlation between participation in the standard test and the percentage of all students at or above the proficient level on the sixth-grade math test (r = 0.889, p = 0.044) between 2000 and 2004. For these subjects and grades, higher participation correlated with higher rates of students at or above the proficient level over this five-year period. In other words, a larger proportion of students with disabilities taking the standard tests did not uniformly mean lower average achievement results during these years.


Although based on only a small sample of five data pairs, the mixed results of calculating simple correlations between the percentage of students participating and the percentage scoring at or above the proficient level offer key insight. At best, this period offered mixed achievement results over several years. A strong relationship between participation and the percentage of students at or above the proficient level suggests that moving children with disabilities onto the standard assessments (the main source of increased participation on the standard tests) likely distorted overall performance measures in a number of grades and subjects, but perhaps not all. In some subjects, the exact opposite pattern emerged. This mixed record complicated the story of early NCLB implementation in Connecticut.


ASSESSING THE CHANGE FROM OOL TO STANDARD TESTS FOR STUDENTS WITH DISABILITIES


For several of the key data points that mattered under the NCLB Act, Connecticut schools appeared to regress as the state implemented the law. Although there were other achievement indicators, such as average scale score, that could have offered alternate, broader views of academic progress, they were not used in NCLB calculations of "Adequate Yearly Progress." In the hierarchy of educational reporting, an apparent decline in the percentage at or above the proficient level on fourth- and eighth-grade reading and math tests trumped the more positive sixth-grade results. News about the fourth- and eighth-grade achievement data often dominated educational reporting (Frahm, 2005). For instance, the apparent decline in fourth-grade reading results made major news between 2004 and 2006. Years after the implementation of the NCLB Act, people demanded to know why fourth-grade reading results had declined. Yet media coverage overlooked the more positive sixth-grade results. These mixed achievement outcomes spawned critical media coverage and a defensive administrative response.


Connecticut education officials acknowledged mixed results during this period, and their response was instructive. On the one hand, administrators touted improved overall test results in some areas. On the other hand, they attempted to explain the uncertainty caused by a shifting population of test takers. According to CT SDE, urban school districts benefitted the most from this improvement among non-disabled students (Frahm, 2005). The stagnating achievement of students with disabilities on regular tests tempered this news. Officials noted that the shifting composition of test takers distorted overall results even as the results of non-disabled students improved. With prominent news articles such as "Study Says Test Scores Going in the Wrong Direction," persuading the public was a difficult task (Frahm, 2006). After decades of working toward progressive improvement of Connecticut's public schools and defending her response to NCLB as Commissioner, Sternberg abruptly resigned in 2006. These mixed achievement results coincided with heightened demands for improvement and opened doors for more intense responses.


ADDITION THROUGH SUBTRACTION: REMOVING CHILDREN WITH DISABILITIES FROM STANDARD TESTS AND INFLATED ACHIEVEMENT RESULTS


In 2009, Connecticut piloted its modified assessment system (MAS) for students with disabilities in reading and mathematics. The movement of students from the standard test to the modified assessment in 2009 effectively removed them from the standard assessment reports. As a result, participation on the standard tests in math and reading also declined compared to 2008. In 2010, the state fully implemented the MAS. Students participating in the different tests appeared on two separate state reports, one for the modified assessment and the other for the standard tests (Connecticut State Department of Education, 2014). This change in the participation of children with disabilities in the state's testing system mirrored, in reverse, the phase-out of the OOL test earlier in the decade.


Comparing the participation reports of successive cohorts over consecutive years showed a dramatic movement of students with disabilities off the standard tests. Participation of children with disabilities in the standard tests dropped from roughly 90% to 66%, varying slightly by grade and subject (Connecticut State Department of Education, 2014). Nearly one third of all children with disabilities went from taking the standard test to taking the modified assessment pilot in math and reading. Modifications on this new assessment included different typeface, removal of distractors, fewer items on a page, graphic organizers, key text underlined and/or bolded, larger font size, simplified graphics, and simplified language (Hodgson, Lazarus, & Thurlow, 2010). Over the next several years, the participation of students with disabilities in the modified assessment increased as participation in the standard test decreased (see Table 3). Participation in the Skills Checklist also inched upward. By 2013, only 50% to 60% of students with disabilities took the standard tests in math and reading.


Table 3. Participation of Students with Disabilities in Fourth-Grade CMT Math

CMT 4 Participation (%)      2006     2008     2010     2012
Standard CMT                 92.1     89.5     63.1     60.7
Absent                       0.5      0.3      0.2      0.1
No Valid Score               0.5      0.6      0.3      0.2
Skills Checklist             6.7      9.4      10.2     11.2
Out of Level (Grade 2)       --       --       --       --
Modified Assessment          --       --       26.0     27.6
ELL Exempt*                  0.2      0.2      0.2      0.1
Total                        100      100      100      99.9

Note. Adapted from Connecticut Mastery Test 4 participation reports. Dashes indicate categories not offered in that year.


Planning and Placement Teams (PPTs) selected students with disabilities for the modified assessment using criteria provided by the Connecticut State Department of Education. As Lazarus, Cormier, and Thurlow (2011) have shown, Connecticut's criteria for participation in the modified assessment were comparable to other states' in number and type. The state's modified assessment eligibility worksheet (2009) asked three major questions:


(1) Does the student receive special education services with an active IEP?

(2) Does objective evidence show with reasonable certainty that the student will not make grade-level proficiency in math and/or reading this year?

(3) Is the student unable to reach grade-level proficiency due to his or her disability and not due to lack of accommodations and modifications, lack of instruction, or other factors?


As the worksheet showed, the most important criterion for participation in the MAS for a student with an Individualized Education Program (IEP) was the stipulation that the student was not likely to meet the proficient level on math and/or reading tests at the student's grade level because of his or her disability (Connecticut State Department of Education, 2009). Because these decisions were made during PPT meetings, the decision-making process was opaque and records were not available. Indeed, the validity of decisions regarding this specific criterion (i.e., that the student would not pass the standard test because of a disability) becomes murky as the stakes on the tests rise. Under NCLB, there was a clear incentive to approve a separate, modified test for students with disabilities.
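Read literally, the worksheet reduces to a three-way conjunction, which the hypothetical sketch below makes explicit. The field and function names are mine, not the state's; in practice these judgments were made by PPT members in meetings, not by any mechanical rule, which is precisely why the third criterion left so much room for discretion.

```python
# An illustrative encoding of the three worksheet questions. Field and
# function names are hypothetical; actual eligibility was decided by a
# Planning and Placement Team using professional judgment.
from dataclasses import dataclass

@dataclass
class StudentRecord:
    has_active_iep: bool             # (1) receives services under an active IEP
    unlikely_grade_proficient: bool  # (2) objective evidence of not reaching
                                     #     grade-level proficiency this year
    gap_due_to_disability: bool      # (3) gap stems from the disability, not
                                     #     lack of accommodations or instruction

def eligible_for_mas(student: StudentRecord) -> bool:
    """True only when all three worksheet criteria are met."""
    return (student.has_active_iep
            and student.unlikely_grade_proficient
            and student.gap_due_to_disability)
```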


The process of identifying students with disabilities becomes fuzzy when the stakes of that identification are high. For instance, there was a potential discrepancy between the overall state proportions of students with disabilities in each category and the proportions among students taking the modified assessment. Nearly half of all students taking the modified assessment came from the broadest disability category, specific learning disability (see Table 4). However, only one third of all students with disabilities in the state were labeled as having a learning disability (Connecticut State Department of Education, 2012). Unlike narrower categories (e.g., visual impairment), the learning disability category was arguably the most diffuse. In the context of high-stakes testing, it seems reasonable to predict that struggling students with disabilities might be selected at higher rates in order to enhance a school or district's overall test results. In this case, there appeared to be an over-selection of students in the disability category that offered teachers and administrators the most latitude in selecting instructional and assessment practices, such as the MAS.


Table 4. Students with Disabilities by Category in 2011

Type of Disability              Grade 4 and 8 Modified CMT     Grade K-12 Connecticut
Intellectual Disability         2.3%                           3.9%
Hearing Impairment              1.3%                           --
Speech or Language Impairment   18.9%                          18.6%
Visual Impairment               0.3%                           --
Emotional Disturbance           5.6%                           8.4%
Orthopedic Impairment           0.1%                           --
Other Health Impairment         6.9%                           18.4%
Specific Learning Disability*   46.6%                          33.3%
Deaf-Blindness                  0.2%                           --
Multiple Disabilities           2.5%                           --
Autism                          5.6%                           9.2%
Traumatic Brain Injury          0.1%                           --
ADD/ADHD                        9.7%                           --
Other Disability                --                             8.2%
Total                           100%                           100%

*CT SDE used "Specific Learning Disability" on the fourth- and eighth-grade report and "Learning Disability" on the state report. There is a possibility that these categories do not match exactly.


By removing these specific students, Connecticut inflated its overall test results by several percentage points in math and reading during the period from 2009 to 2013 (Cotto, 2012a). This inflation can be estimated by reinserting the students who took the modified assessment into the standard testing sample as non-proficient students in the denominator and recalculating the percentage of students at or above proficient. For example, the percentage of students at or above the proficient level on the fourth-grade standard CMT in reading was reported as 74.7% in 2011. That year, 36.3% of all students with disabilities took the MAS in reading. The 1,875 students with disabilities who took the MAS made up 4.5% of the entire fourth-grade population that year (Connecticut State Department of Education, 2014). When we reintroduce these 1,875 students into the sample of all test takers as not proficient, the revised percentage of students at or above the proficient level in 2011 would have been 71.23%. This inflation accounted for a substantial portion, but not all, of the yearly change in the percentage of students at or above the proficient level (Cotto, 2012a).
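The adjustment is straightforward arithmetic, as the sketch below shows using the 2011 fourth-grade reading figures from the text. The cohort size is inferred from the reported shares, so the result lands near, though not exactly on, the 71.23% cited above.

```python
# Reinsertion adjustment: add MAS takers back into the denominator as
# non-proficient and recompute the rate. Figures from the 2011 fourth-
# grade reading example; cohort size is inferred, so totals are approximate.

mas_takers = 1875        # students with disabilities who took the MAS
mas_share = 0.045        # their share of the whole fourth-grade cohort
reported_rate = 0.747    # reported % at/above proficient on the standard CMT

cohort = mas_takers / mas_share           # roughly 41,700 fourth graders
standard_takers = cohort - mas_takers     # ignores other small exclusions
proficient = reported_rate * standard_takers

adjusted = proficient / (standard_takers + mas_takers)
print(f"Adjusted rate: {adjusted:.1%}")   # about 71.3%, versus 74.7% reported
```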


During these years, there was also a strong association between participation rates and achievement measures. For the years between 2006 and 2013, there was a large, statistically significant negative correlation between the percentage of students participating in the standard tests and the percentage of students at or above the proficient level (see Table 5), as well as average scale scores, in math and reading. Put another way, lower participation, caused by moving students with disabilities to the modified assessment, was associated with higher achievement data between 2006 and 2013. This inverse relationship existed for grades 4, 6, and 8, and was nearly one-to-one in a number of instances (Cotto, 2012a). The only exception to this pattern was on the writing tests, for which there was no modified assessment in Connecticut.


Table 5. Correlations Between Participation Rate and Percentage At/Above Proficient Level: 2006–2013

CMT 4
Grade     Reading      Math         Writing
4         -0.909**     -0.987***    0.197
6         -0.946***    -0.859**     0.427
8         -0.970***    -0.925**     0.076

*p < .05. **p < .01. ***p < .001


The students with disabilities taking the modified assessment in math and/or reading (see Table 6) were disproportionately children of color and students from low-income families. As a single racial or ethnic group, non-Hispanic White students represented the largest number and greatest proportion of students taking the modified assessment. However, more than half of all students taking the MAS were either Black or Latino. For example, 36% of students taking the fourth-grade MAS were White, while 35% were Puerto Rican and Latino, and 25% Black (Connecticut State Department of Education, 2014). In contrast, only 13% of Connecticut fourth-graders were Black and 19% were Puerto Rican and Latino (Connecticut State Department of Education, 2015). Furthermore, only 16% of all fourth-grade students with disabilities were Black and 22% were Puerto Rican and Latino in 2011 (Connecticut State Department of Education, 2015).


Table 6. Student Characteristics on Fourth-Grade Math Tests: 2011

Characteristic               Modified     Standard
Gender
  Male                       64.3%        51.1%
  Female                     35.7%        48.9%
Race
  Black                      24.7%        12.2%
  Latino                     34.7%        17.8%
  White                      36.0%        62.9%
Free/Reduced Meals
  Eligible                   69.1%        35.2%
  Not Eligible               30.9%        64.8%
Special Education
  SWD                        100.0%       8.2%
  Non-SWD                    --           91.8%
English Language Learner
  ELL                        --           4.8%
  Non-ELL                    100.0%       95.2%

Note. Adapted from CMT 4 modified assessment and standard test performance reports.


Roughly two thirds of all students taking the modified assessment in math and reading were also eligible for free and reduced-price meals. This rate surpassed the state's overall eligibility rate of 37% and the 45% rate among students with disabilities (Connecticut State Department of Education, 2014). Finally, the majority of students taking the MAS were male, exceeding their overall proportion of roughly 60% in special education statewide. Interestingly, the characteristics of Connecticut students who participated in the MAS were similar to those of students in one Midwestern state documented by Shaftel and Rutt (2012) in their study of modified assessment demographics.


THE GEOGRAPHY OF HIGH-STAKES TESTING AND THE MAS: INFLATED RESULTS AND REFORMS


After years of stagnation and frustration from 2000 to 2008, test results finally appeared to increase. When students with disabilities began taking the modified assessment in 2009, several high-profile cities appeared to improve test results more quickly than the state overall. Because Black and Latino students disproportionately participated in the MAS, results on the standard tests appeared to rise more quickly for these groups in the cities and the state. The same year that the modified assessment pilot was introduced, Connecticut Commissioner Mark McQuillan stated, "I am pleased to see improvement in the performance of students across the board, including somewhat larger gains by minority and economically disadvantaged students which helps to close Connecticut's large achievement gaps" (Connecticut State Department of Education, 2009).


The compositional effect of moving children off the standard test and into the modified assessment played a major role in this apparent increase (Cotto, 2012a; Koretz, 2008). Connecticut's overall test results appeared better because nearly all districts in Connecticut selected students to take the MAS. Across the state, 271 town, city, charter, and special school districts had 20 or fewer students taking the fourth-grade MAS in reading in 2011 (Connecticut State Department of Education, 2014). Some school districts had small percentages of their students taking the MAS, while other districts had up to 12% of all students in a grade taking it (Cotto, 2012a). In 2011, average district participation on the standard fourth-grade tests was 94.9% of all students in reading (SD = 3.5) and 95.9% in math (SD = 3.0). For the MAS, average district participation was 3.6% (SD = 2.8) in reading and 2.6% (SD = 2.3) in math. Other grades showed similar results.


District-level participation in the MAS correlated positively with the percentage of children of color, eligibility for free and reduced-price meals, and special education prevalence rates (see Table 7). For example, there were moderate, statistically significant positive correlations between the percentage of students participating in the fourth-grade MAS reading test and district-level indicators such as special education prevalence (r = 0.348, p < 0.001), percentage of children of color (r = 0.466, p < 0.001), and eligibility for free/reduced-price meals (r = 0.524, p < 0.001) in 2011. Individual school districts were more or less left alone to select students for the MAS as long as they followed the listed criteria. Facing the pressure of sanctions tied to group-based achievement targets (e.g., Black, Hispanic, Economically Disadvantaged), the state and federal departments of education offered school districts a testing loophole, but few if any new resources or supports for students with disabilities (Lee, 2008). This dynamic may help explain why race and income factors seemed more predictive of MAS participation than special education prevalence alone.


The districts that had experienced the greatest number of years under NCLB penalties had the second-highest rates of MAS participation. The seven cities with large student enrollments (Bridgeport, Danbury, Hartford, New Haven, New Britain, Stamford, and Waterbury) each had more than 50 students taking the fourth-grade MAS in reading. Combined, these seven districts accounted for slightly more than 40% of all students statewide taking the fourth-grade MAS in reading (Connecticut State Department of Education, 2012). In sharp contrast, they accounted for only 21.3% of all Connecticut fourth-grade students in 2010–2011. By that year, all of these districts were in their eighth year of being "in need of improvement" under NCLB (Connecticut State Department of Education, 2011), the maximum number of years possible.


Table 7. Connecticut School District Characteristics by NCLB Group Average in 2011 (Grade 4)

Years in        Number of   Participation   Participation   Students    Students With   Free and
Improvement     Students    Standard CMT    Modified CMT    of Color    Disabilities    Reduced Meals
8 (N = 14)      781         89.0%           7.3%            70.7%       13.1%           66.6%
7 (N = 3)       309         87.8%           9.4%            39.7%       15.9%           49.1%
6 (N = 7)       434         93.8%           4.3%            45.4%       12.0%           39.8%
5 (N = 1)       --          --              --              76.0%       --              48.9%
4 (N = 0)       --          --              --              --          --              --
3 (N = 2)       400         95.2%           2.3%            23.9%       11.5%           24.6%
2 (N = 7)       301         94.4%           3.6%            38.8%       11.3%           34.3%
1 (N = 1)       134         95.0%           4.2%            58.5%       11.6%           44.4%
0 (N = 127)     160         95.8%           2.9%            15.0%       10.9%           14.5%
Average         236         94.9%           3.6%            25.0%       11.3%           23.2%

Note. Dashes indicate values not reported in the source data.


INFLATED DATA VALIDATES REFORMS IN HARTFORD AND NEW HAVEN, CONNECTICUT  


In Hartford and New Haven, the resulting inflation in test results was used to validate and justify educational reform efforts in those cities (the use of high-stakes testing, the proliferation of school choice and charter schools). Hartford and New Haven were two of the three largest Connecticut cities, in which nearly all students were racial and ethnic minorities and eligible for free and reduced-price meals. By 2011, roughly 9% of all students in the Hartford and New Haven school districts (all identified as having disabilities) participated in the MAS in either math or reading. This high participation rate inflated their results on standard assessments in those subjects (Cotto, 2012a). To be clear, the MAS did not cause or account for all of the increases in achievement test results, but moving children coincided with the distorted results after 2008.


In 2006, Hartford embarked on expanded inter- and intra-district school choice programs and hyper-accountability in response to years of relatively low achievement results and inconclusive reform efforts. Specifically, Hartford adopted a portfolio model approach that offered public school choice, closed schools with low test results, and opened new schools in their place, often under private management (e.g., school turnarounds). School employees also earned bonuses for improved state test results (CMT and CAPT). Making sense of this effort is complicated by the fact that the State also infused substantial resources and new suburban students because of the Sheff v. O'Neill desegregation settlement.


Only a 30-minute drive down Interstate 91, New Haven adopted similar reforms in late 2009 with its School Change Initiative. The city also forged a new collective bargaining agreement that paved the way for a nationally touted teacher evaluation system incorporating student test results into teacher rankings. New Haven also expanded its public school choice options during this period. Like Hartford, New Haven created a local school ranking system based on state test results. These reform initiatives continued as of this writing and through the final year of the fourth version of the Connecticut Mastery Test (CMT 4) in 2013.


When the modified assessment caused participation to drop and inflated overall achievement on standard tests, administrators and pundits in New Haven and Hartford were quick to attribute these results to their local and state reforms. Including Hartford and New Haven, 15 towns or cities in Connecticut with high proportions of children of color posted increases during the MAS pilot year (2009). The Commissioner of Education noted that "for the first time in a long time, we are seeing the beginning to close the achievement gap (sic) in grades 5 and 8 proficiency in math and reading."


Hartford and New Haven stood out. In 2009, ConnCAN, a charter school advocacy organization, called the City of Hartford a "bright spot" (Merritt, 2009). The group cited Hartford's school choice programs and, somewhat indirectly, its hyper-accountability policies. When New Haven posted unprecedented gains, administrators cited teacher training and the expanded use of student assessment data in making decisions about interventions as reasons for the improvement. These practices quickly became the models of urban educational reform in the state.


In Hartford, district administrators called improved achievement results in reading "the greatest increase in history" (Hartford Public Schools, 2010). Alex Johnston, the Executive Director of ConnCAN, also credited reform efforts in Hartford and New Haven for improving test results. Johnston stated, "I think overall, we saw some encouraging things happening in some of the large school districts, particularly Hartford and New Haven, but in many others they are not closing the gap" (Merritt, 2009). While test results improved in these two cities and, more slowly, in others, participation in the standard tests continued to decline. The change in the population of students taking the standard test went unmentioned.


The Chairman of the Connecticut General Assembly's Education Committee attributed the improvement in test results to local and state initiatives. In particular, he cited the statewide Connecticut Accountability for Learning Initiative (CALI) and high-stakes, test-based accountability in the form of school turnarounds, which were happening in places like New Haven and Hartford (Merritt, 2010). In addition to the CALI initiative, these cities were early adopters of educational reforms in Connecticut such as inter/intra-district choice, hyper-accountability, and mayoral control of their Boards of Education. The suddenly improved test results in New Haven and Hartford validated their reforms. However, this improvement was temporary and coincided with the removal of children with disabilities from standard testing in favor of a separate modified assessment.


DISTORTED DATA PRODUCES MISLEADING CONCLUSIONS ABOUT EDUCATIONAL REFORM


Despite improving test results, the lived experiences of children with disabilities may have been far different than the test results indicated. Behind the scenes, Hartford Public Schools (HPS) received major complaints of special education violations. As a result, the Connecticut State Department of Education Bureau of Special Education conducted a monitoring visit in 2010 and produced a report the following year. Monitors visited six elementary schools and nine high schools in Hartford. The report (2011) found that children with disabilities in the emotional disturbance category were not provided the support services they needed, that files for IEPs were incomplete or missing, that academic programs did not coordinate with the special education office, and that the student-based funding system was inadequate to meet these students' needs. Concurrently, Greater Hartford Legal Aid and the Center for Children's Advocacy alleged delays in neuropsychological, reading, speech, and language evaluations for 13 children under the jurisdiction of HPS (Cochrane & Benton, 2011). In 2011, HPS was forced to follow a State of Connecticut corrective action plan for special education violations, and the district's IDEA funds were subjected to additional conditions (Connecticut State Department of Education, 2011). In 2012, HPS also settled complaints regarding violations of bilingual education law.


The temporary boost in test results in these cities may have also created more hospitable conditions for popular, neoliberal reforms (charters, high-stakes testing), particularly in places desperate for educational improvement for children of color. Hartford and New Haven's apparent success made comparable cities such as Bridgeport appear to be lagging and in need of similar reforms. For example, New Haven had 9.4% and Hartford 9.9% of all students taking the eighth-grade MAS in reading, while Bridgeport had only 3.5% of all students taking that test in 2011 (Connecticut State Department of Education, 2014). The differences in participation and the subsequent test inflation may have partially inspired the state and local reformers behind the coup d'état that eliminated the elected Bridgeport Board of Education, with the goal of implementing reforms similar to those of Hartford and New Haven.


Indeed, it was ConnCAN's Johnston, also a member of New Haven's Board of Education, who connected the supporters of the coup with state officials in order to eliminate Bridgeport's elected Board of Education (Conner Lambeck, 2011). Only the year before, Johnston had lauded Hartford and New Haven's sudden improvement on state tests, reportedly a result of their school choice, hyper-accountability, and governance policies. Bridgeport attracted national attention, and the Connecticut Supreme Court eventually found the coup to be illegal. Test results marginally improved in Bridgeport under the administration of Paul Vallas. During that period, MAS participation increased substantially in Bridgeport, and the State also found that the school district violated state and federal special education laws (Connecticut State Department of Education, 2013).


Nevertheless, the State Department of Education moved forward with high-stakes testing policies in its NCLB waiver in 2012, while adopting many of Hartford and New Haven's educational reforms (Cotto, 2012a). Not surprisingly, U.S. Secretary of Education Arne Duncan lauded Hartford and New Haven as successes and models for the rest of the country (Bailey, 2012; Pappano, 2010). Rather than raising questions about whether high-stakes testing improved education, the movement of children with disabilities, particularly those of color, reinforced the legitimacy of high-stakes testing and related reforms such as public school choice and mayoral control. It also deepened Connecticut's commitment to local and state reforms associated with high-stakes testing as a means toward educational and racial progress.


THE SHIFTING RULES OF THE HIGH-STAKES TESTING GAME IN CONNECTICUT


The new modified assessment and related rules set up a loophole in high-stakes testing for districts in Connecticut. Up to 2% of students who participated in the MAS and achieved the proficient level could be counted in school and district calculations of Adequate Yearly Progress (Cotto, 2012a). The remaining students taking the MAS were effectively removed from the high-stakes testing system and standard test reporting. The MAS therefore always helped a district: it removed students more likely to score poorly on the standard tests while counting a small portion of those doing well enough on the MAS.
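One plausible reading of this rule is sketched below: MAS-proficient students count toward proficiency only up to a 2% cap, while the remaining MAS takers drop out of the calculation entirely. The function and the illustrative counts are mine; actual AYP formulas had additional components (participation rates, subgroups, confidence intervals).

```python
# A simplified sketch of the MAS loophole under one reading of the 2% rule.
# Hypothetical counts; real AYP calculations involved many more factors.

def ayp_proficiency(std_tested: int, std_proficient: int,
                    mas_tested: int, mas_proficient: int) -> float:
    """Percent proficient when MAS scores count only up to a 2% cap."""
    all_tested = std_tested + mas_tested
    cap = 0.02 * all_tested                 # at most 2% of all tested students
    counted_mas = min(mas_proficient, cap)  # MAS-proficient up to the cap
    # Remaining MAS takers drop out of both numerator and denominator.
    return 100 * (std_proficient + counted_mas) / (std_tested + counted_mas)

# Same 1,000 students: without the MAS, 650 of 1,000 are proficient (65.0%).
# Move 100 low scorers (20 of whom would have passed) onto the MAS, where
# 40 score proficient, and the reported rate rises to about 70.7%.
without_mas = ayp_proficiency(1000, 650, 0, 0)
with_mas = ayp_proficiency(900, 630, 100, 40)
print(f"{without_mas:.1f}% vs. {with_mas:.1f}%")
```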


The NCLB Act counted a single measure: the percentage of students at or above proficient on standard tests in math, reading, and writing. Under this high-stakes testing system, it did not matter how that measure was achieved. For the U.S. Department of Education, students taking the MAS counted as participants in annual testing even though they took a separate test, so their movement onto separate tests did not trigger any response for falling below the NCLB Act's requirement of 95% assessment participation.


These policies sent conflicting messages. On the one hand, modified assessment participation counted as participation for NCLB. On the other hand, only a portion of students doing well on the modified assessments counted toward a district's accountability ranking. If the modified assessment was a legitimate assessment, then it should have been fully recognized in the accountability system. In the context of high-stakes testing, where penalties came with poor rankings, there was a clear incentive to use, and perhaps overuse, the modified assessment, particularly for students who posed a threat to accountability rankings (Valenzuela, 2005). But this incentive was created by the high-stakes testing rules themselves.


The Individuals with Disabilities Education Act (IDEA) of 2004 offered only a limited check on the modified assessment as well. The law required that states create policies, collect data, review procedures, and monitor data on the disproportionality of students' identification in special education and placements in districts by categories such as race and ethnicity. In fact, the U.S. Department of Education determined that Connecticut met the requirements of IDEA Parts B and C between 2008 and 2011 (U.S. Department of Education, 2010–2013). But these procedures did not apply to the modified assessment. Extending this monitoring to test-selection decisions could have provided key information about racial disproportionality on the MAS.


Finally, as part of its first NCLB waiver (2012), the State of Connecticut abandoned the Adequate Yearly Progress measure and adopted a different achievement index that partially included MAS results. However, results from the MAS counted for fewer points than standard test results at the same performance levels (Cotto, 2012b; Connecticut State Department of Education, 2012). Although the intended goal appeared to be a disincentive to offer the MAS to students, this policy reproduced disadvantage among school districts that might have had high proportions of students with disabilities legitimately needing a modified assessment. This unusual policy showed that the CT SDE viewed the treatment of modified assessments as a technical issue rather than a broader problem related to students with disabilities and other disadvantaged groups in high-stakes testing systems (Valenzuela, 2005).
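

The sketch below illustrates the general design of such an index. The point scale and the size of the discount are hypothetical assumptions; only the lower weighting of MAS results at the same performance level comes from the sources cited above.


    # A minimal sketch (Python) of a weighted achievement index in which
    # MAS results earn fewer points than standard-test results at the same
    # performance level. Point values and the discount are hypothetical;
    # the actual weights in Connecticut's 2012 waiver index differed.

    STANDARD_POINTS = {"below basic": 0, "basic": 33, "proficient": 67, "goal": 100}
    MAS_DISCOUNT = 0.5  # assumed: a MAS result is worth half the points

    def achievement_index(results):
        # `results` is a list of (performance_level, took_mas) pairs.
        total = 0.0
        for level, took_mas in results:
            points = STANDARD_POINTS[level]
            total += points * MAS_DISCOUNT if took_mas else points
        return total / len(results)

    # Two districts with identical performance profiles; District B simply
    # tested more of its students on the MAS.
    district_a = [("proficient", False)] * 90 + [("proficient", True)] * 10
    district_b = [("proficient", False)] * 80 + [("proficient", True)] * 20
    print(achievement_index(district_a))  # 63.65
    print(achievement_index(district_b))  # 60.3


Because the discount applies regardless of why a student took the MAS, a district that serves more students who legitimately need the modified assessment scores lower on the index even when performance is identical.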


With new testing requirements in Connecticut, students with disabilities will be reabsorbed into standard testing as they participate in the Smarter Balanced Assessment Consortium (SBAC) test. The CT SDE reported that there would be modifications for students with disabilities on the new computer-based SBAC test. As part of a subsequent NCLB waiver application, the CT SDE claimed that it would train educators on how to prepare for students with disabilities moving back onto standard tests. In their analysis of various states' plans for phasing out the modified assessment, Lazarus, Thurlow, and Edwards (2013) called the transition "a wonderful opportunity to really think thoughtfully about how to best instruct and assess low performing students with disabilities and other struggling learners." To be sure, it is always important to reflect on best practices for the instruction and assessment of all children, including those with disabilities. However, this unmitigated optimism about reintroducing students with disabilities into standard assessments ignored the recent history of high-stakes testing and students with disabilities. When students with disabilities take the standard, computer-based SBAC test, perhaps with some modifications, the shift will likely mirror the movement of students with disabilities from out-of-level (OOL) tests to standard tests in the early 2000s.


A FINAL COMMENT


Connecticut's experience with high-stakes testing is instructive. The movement of students with disabilities was closely related to distorted achievement results. This relationship between participation and achievement allowed groups to make exaggerated claims of regress and progress for children of color. As students with disabilities moved to standard tests, NCLB accountability exacerbated the educational crisis. Mixed achievement results led the public to conclude that disparities had worsened or that school administrators were ineffective. Similar to McSpadden McNeil's (2005) findings in Texas, Connecticut's high-stakes testing system appeared to work only when large proportions of students with disabilities disappeared. Although a majority of children taking the modified assessment were either Black or Latino, their movement helped policymakers claim that high-stakes testing promoted educational equity, based on very narrow goals and skewed metrics. This moving of children and the inflated test results resolved local and state educational crises, at least temporarily.


PRACTICE AND POLICY IN RESPONSE TO CHANGING ASSESSMENT POLICIES


There are two areas where teachers and educators can negotiate these shifts in assessment for students with disabilities: robust instruction and careful advocacy. Classroom teachers, special educators, and support staff have a primary responsibility to provide instruction that is as robust, inclusive, and differentiated as possible for the students in their care. With adequate support and resources, students with disabilities can reach or surpass their learning goals. To be sure, not every district has adequate support and resources in every classroom. Furthermore, there is a potential contradiction here: The goals set by students and their Planning and Placement Team (PPT) must be connected to grade-level content standards and skills. In other words, an Individualized Education Program can often appear as a tailored experience toward a standard outcome (Annino, 2007). This is a contradiction that teachers must negotiate.


In order to negotiate this terrain, teachers and educators must engage in careful, perhaps even subversive, advocacy alongside their students with disabilities and their families. In addition to their responsibility for instruction, teachers are in a position to clearly explain formative and summative assessment options to students and families. As instructors, classroom and special educators should be able to inform the discussion about which accommodations and assessments might best allow their students to demonstrate their skills and knowledge. This type of intervention by teachers might be seen as subversive in schools and districts with limited resources or adversarial views toward special education.


In terms of policy development, there is a need for alternative versions of accountability. The premise that attaching stakes to test results improves educational quality is questionable, and the promise of higher test results as a measure of progress is simplistic. Instead, holistic accountability with authentic assessment of children's knowledge, skill, and talent is a promising alternative (Valenzuela, 2005). However, this alternative remains elusive given the dominance of high-stakes testing. Abandoning high-stakes testing in favor of smarter assessment practices, qualitative reviews of educational quality, and analysis of equitable resources in schools is both possible and needed (Rothstein, Jacobsen, & Wilder, 2008). Recall that multiple site visits to Hartford schools, the force of the IDEA law, and financial interventions, rather than high-stakes testing, ultimately promoted equal educational opportunity for children with disabilities.


The movement of children with disabilities and the resulting distorted results are cause for serious review of how those students fit into an accountability and testing system. If students with disabilities require individualized education programs, then perhaps any blanket policy of inclusion in or exclusion from testing systems is contradictory. A simple rule that all students with disabilities are included or excluded, without regard for individual students and their contexts, will continue to be insufficient. When considering children with disabilities, there is a critical need for educators and policymakers to practice and articulate alternate forms of accountability as well, despite the limited space for these practices in the current moment. For teachers and educators, assessing the academic progress of students with disabilities should be an ongoing classroom affair and must go beyond a yearly test, modified or standard.


References


Annino, P. G. (2007). Final regulations on school assessments: An attempt to align the NCLBA and IDEIA. Mental and Physical Disability Law Reporter, 31(6), 830–833. Retrieved from http://www.jstor.org/stable/20786408.


Apple, M. (2006). Educating the right way: Markets, standards, God, and inequality (2nd ed.). New York, NY: Taylor & Francis Group.


Bailey, M. (2012, May 29). Teachers do the talking. The New Haven Independent. Retrieved from www.newhavenindependent.org.


Berliner, D. (2011). Rational responses to high stakes testing: The case of curriculum narrowing and the harm that follows. Cambridge Journal of Education, 41(3), 287–302.


Blumenthal, R. (2006). Why Connecticut sued the government over No Child Left Behind. Harvard Educational Review, 76(4), 564–569.


Browder, D., Ahlgrim-Delzell, L., Flower, C., Karvonen, M., Spooner, F., & Algozzine, R. (2005). How states implement alternative assessments for students with disabilities: Recommendations for national policy. Journal of Disability Policy Studies, 15(4), 209–220. doi:10.1177/10442073050150040301


Cawthon, S. (2007). Hidden benefits and unintended consequences of No Child Left Behind policies for students who are deaf or hard of hearing. American Educational Research Journal, 44(3), 460–492. doi:10.3102/0002831207306760


Cochrane, L., & Benton, H. (2011, March 11). [Letter to CT State Department of Education, Bureau of Special Education]. Greater Hartford Legal Aid & Center for Children's Advocacy, Hartford, CT.


Connecticut State Department of Economic and Community Development. (2013). Connecticut per capita income, median household income, and median family income at state, county, and town level: ACS 2013 data. Retrieved from www.ct.gov/ecd/cwp/.


Connecticut State Department of Education. (2004). Connecticut mastery test, third generation, summary performance results, grades 4, 6, and 8 [Online data set]. Retrieved from http://cmt3.cmtreports.com/.


Connecticut State Department of Education. (2011). Connecticut Adequate Yearly Progress [Online database]. Retrieved from http://ctayp.emetric.net/.


Connecticut State Department of Education. (2012). 2011 MAS Reading Final & 2011 MAS Math Final [Excel file]. Received from CT SDE on September 19, 2012 via e-mail.


Connecticut State Department of Education. (2012). Connecticut Education Data and Research [Online database]. Retrieved from http://sdeportal.ct.gov/Cedar/WEB/ct_report/CedarHome.aspx


Connecticut State Department of Education. (2014). Data Interaction for Connecticut Mastery Test, 4th generation [Online data set]. Retrieved from http://solutions1.emetric.net/cmtpublic/Index.aspx


Connecticut State Department of Education, Bureau of Special Education. (2011). Hartford public schools: Monitoring visit report. Retrieved from http://www.hartfordinfo.org/



Conner Lambeck, L. (2011, August 4). Outside interests were working behind the scenes to reconstitute school board. The Connecticut Post. Retrieved from www.ctpost.com


Cotto, R. (2012a). Addition through subtraction: Are rising test scores in Connecticut school districts related to the exclusion of students with disabilities? New Haven, CT: Connecticut Voices for Children.


Cotto, R. (2012b). Understanding Connecticut's application for a waiver from the No Child Left Behind Act. New Haven, CT: Connecticut Voices for Children.


Cuban, L. (2010). As good as it gets: What school reform brought to Austin. Cambridge, MA, & London, England: Harvard University Press.


Darling-Hammond, L., & Vasquez Heilig, J. (2008). Accountability Texas-style: The progress and learning of urban minority students in a high-stakes testing context. Educational Evaluation and Policy Analysis, 30(2), 75–110. doi:10.3102/0162373708317689


Dougherty, J. (2011). On the line: How schooling, housing, and civil rights shaped Hartford and its suburbs. Hartford, CT. Retrieved from http://ontheline.trincoll.edu/author/jdoughe2/.


Elliott, S. N., Kettler, R. J., & Roach, A. T. (2008). Alternative assessments of modified achievement standards: More accessible and less difficult tests to advance assessment practices. Journal of Disability Policy Studies, 19(3), 140–152. doi:10.1177/1044207308327472


Frahm, R. (2005, March 29). Urban schools make gains in test scores: Over five years, gap shrinks between poor and middle-class regular education pupils. The Hartford Courant. Retrieved from http://courant.com.


Frahm, R. (2005, October 20). 4th grade reading results take downturn. The Hartford Courant. Retrieved from http://courant.com


Frahm, R. (2006, March 3). Study says test scores going in wrong direction: Commissioner questions report's findings. The Hartford Courant. Retrieved from http://courant.com.


Hartford Public Schools. (2010, July 15). Connecticut Mastery Test and Connecticut Academic Performance Test 2010 Preliminary results: Closing the achievement gap. Retrieved from http://www.shipmangoodwin.com/


Hehir, T. (2006). New directions in special education: Eliminating ableism in policy and practice. Cambridge, MA: Harvard Education Press.  


Ho, A. D. (2008). The problem with proficiency: Limitations of statistics and policy under No Child Left Behind. Educational Researcher, 37(6), 351–360. doi:10.3102/0013189X08323842


Hodgson, J. R., Lazarus, S. S., & Thurlow, M. L. (2010). Characteristics of states' alternate assessments based on modified academic achievement standards in 2009–2010 (Synthesis Report 80). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://eric.ed.gov/

Katsiyannis, A., Zhang, D., Ryan, J. B., & Jones, J. (2007). High-stakes testing and students with disabilities: Challenges and promises. Journal of Disability Policy Studies, 18(3), 160–167. doi:10.1177/10442073070180030401


Kennedy, E. M. (2006). Foreword. Harvard Educational Review, 76(4), 453–456.


Koretz, D. (2008). Measuring up: What educational testing really tells us. Cambridge, MA, & London, England: Harvard University Press.


Langenfeld, K. L., Thurlow, M. L., & Scott, D. L. (1996). High stakes testing for students: Unanswered questions and implications for students with disabilities (Synthesis Report No. 26). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Synthesis26.htm.


Lazarus, S. S., Cormier, D. C., & Thurlow, M. L. (2011). States' accommodations policies and development of alternate assessments based on modified achievement standards: A discriminant analysis. Remedial and Special Education, 32(4), 301–308. doi:10.1177/0741932510362214


Lazarus, S. S., Thurlow, M. L., & Edwards, L. M. (2013). States' flexibility plans for phasing out the Alternate Assessment Based on Modified Academic Achievement Standards (AA-MAS) by 2014–15 (Synthesis Report 89). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Lazarus, S. S., Thurlow, M., Lail, K. E., & Christensen, L. (2009). A longitudinal analysis of state accommodations policies: Twelve years of change, 1993–2005. The Journal of Special Education, 43(2), 67–80. doi:10.1177/0022466907313524


Lee, J. (2008). Is test-driven external accountability effective? Synthesizing the evidence from cross-state causal-comparative and correlational studies. Review of Educational Research, 78(3), 608–644. doi:10.3102/0034654308324427


Lipman, P. (2004). Inequality, globalization, and urban school reform. New York, NY, & London, England: Routledge Falmer.


McGrew, K. S., Thurlow, M. L., & Spiegel, A. N. (1993). An investigation of the exclusion of students with disabilities in national data collection programs. Educational Evaluation and Policy Analysis, 15(3), 339–352. doi:10.3102/01623737015003339


McSpadden McNeil, L. (2005). Faking equity: High-stakes testing and the education of Latino youth. In A. Valenzuela (Ed.), Leaving children behind: How Texas-style accountability fails Latino youth. Albany, NY: State University of New York Press.


Merritt, G. E. (2009, July 30). Scores up for all but 10th-graders: Minority students narrow achievement gap. State's top educator sees need for high school reforms. The Hartford Courant. Retrieved from http://courant.com.


Minnema, J., & Thurlow, M. (2003). Reporting out-of-level test scores: Are these students included in accountability programs? (Out-of-Level Testing Project Report 10). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/onlinepubs/OOLT10.html.


Nagle, K., Yunker, C., & Malmgren, K. W. (2006). Students with disabilities and accountability reform: Challenges identified at the state and local levels. Journal of Disability Policy Studies, 17(1), 28–39. doi:10.1177/10442073060170010301


Nichols, S. L., & Berliner, D. C. (2007). Collateral damage: How high-stakes testing corrupts America's schools. Cambridge, MA: Harvard Education Press.


Nichols, S. L., Glass, G. V., & Berliner, D. C. (2006). High-stakes testing and student achievement: Does accountability pressure increase student learning? Education Policy Analysis Archives, 14(1). http://dx.doi.org/10.14507/epaa.v14n1.2006.


Nichols, S. L., Glass, G. V., & Berliner, D. C. (2012). High-stakes testing and student achievement: Updated analyses with NAEP data. Education Policy Analysis Archives, 20(20). http://dx.doi.org/10.14507/epaa.v20n20.2012.


No Child Left Behind Act of 2001, Public Law 107–110, §§ 101, 601, 115 Stat. 1425 (2002).


Pappano, L. (2010). Inside school turnarounds: Urgent hopes, unfolding stories. Cambridge, MA: Harvard Education Press.


Roach, A., & Elliott, S. (2006). The influence of access to general education curriculum on alternate assessment performance of students with significant cognitive disabilities. Educational Evaluation and Policy Analysis, 28(2), 181–194. doi:10.3102/01623737028002181


Rothstein, R., Jacobsen, R., & Wilder, T. (2008). Grading education: Getting accountability right. Washington, DC, & New York, NY: Economic Policy Institute and Teachers College Press.


Sacks, M. (2008). Inequality and suburbanization in the Hartford metropolitan area, 1980–2000. Retrieved from Trinity College Center for Urban and Global Studies: http://www.trincoll.edu/UrbanGlobal/CUGS/Faculty/research/Documents/Presentation%20by%20Michael%20Sacks.pdf.


Schierberl, M. J. (2014, January 9). [Letter to Robert Arnold, Special Education Director, Bridgeport Public Schools]. Connecticut State Department of Education, Bureau of Special Education, Hartford, CT.


Shaftel, J., & Rutt, B. T. (2012). Characteristics of students who take an alternate assessment based on modified achievement standards. Journal of Disability Policy Studies, 23(3), 156–167. doi:10.1177/1044207311424909


Sternberg, B. (2006). Real improvement for real students: Test smarter, serve better. Harvard Educational Review, 76(4), 557–563.


Thurlow, M. (2002). Positive educational results for all students: The promise of standards-based reform. Remedial and Special Education, 23(4), 195–202. doi:10.1177/07419325020230040201


U.S. Department of Education. (2010–2013). Determination letters on state implementation of IDEA. Retrieved from http://www2.ed.gov/fund/data/report/idea/partbspap/allyears.html.


U.S. Department of Education, Office of Educational Research and Improvement, National Center for Education Statistics. (2001). The Nation's Report Card: Mathematics 2000 (NCES 2001-517). Washington, DC: Authors. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2000/2001517.pdf.


U.S. Department of Education, Office of Educational Research and Improvement, National Center for Education Statistics. (2001). The Nation's Report Card: Fourth-Grade Reading 2000 (NCES 2001-499). Washington, DC: Authors. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2000/2001499.pdf.


Valenzuela, A. (Ed.). (2005). Leaving children behind: How Texas-style accountability fails Latino youth. Albany, NY: State University of New York Press.




Cite This Article as: Teachers College Record, Volume 118, Number 14, 2016, pp. 1–30. https://www.tcrecord.org, ID Number: 21545.


About the Author
Robert Cotto Jr., Trinity College
    ROBERT COTTO is the Director of Urban Educational Initiatives and a Visiting Lecturer in the Educational Studies program at Trinity College in Hartford, CT. His academic work focuses on K–12 educational policy. His research focuses on educational reform movements in the United States and Puerto Rico. In particular, he studies the history and current impact of educational testing, school choice, teacher-led innovation, and management policies, particularly with respect to marginalized and racialized groups.
 