
Does the Organization of Instruction Differ in Charter Schools? Ability Grouping and Students' Mathematics Gains

by Mark Berends & Kristi Donaldson — 2016

Background: Although we have learned a good deal from lottery-based and quasi-experimental studies of charter schools, much of what goes on inside of charter schools remains a “black box” to be unpacked. Grounding our work in neoclassical market theory and institutional theory, we examine differences in the social organization of schools and classrooms to enrich our understanding of school choice, school organizational and instructional conditions, and student learning.

Purpose / Objective / Research Question / Focus of Study: Our study examines differences in students’ mathematics achievement gains between charter and traditional public schools, focusing on the distribution and organization of students into ability groups. In short, we ask: (1) How does the distribution of ability grouping differ between charter and traditional public schools? And (2) What are the relationships between ability group placement and students' mathematics achievement gains in charter and traditional public schools?

Research Design: With a matched sample of charter and traditional public schools in six states (Colorado, Delaware, Indiana, Michigan, Minnesota, and Ohio), we use regression analyses to estimate the relationship between student achievement gains and school sector. We analyze how ability grouping mediates this main effect, controlling for various student, classroom, and school characteristics.

Findings: We find significant differences in the distribution of students across ability groups, with a more even distribution in charter compared to traditional public schools, which appear to have more selective placements for high groups. Consistent with prior research on tracking, we also find low-grouped students to be at a significant disadvantage when compared with high- and mixed-group peers in both sectors.

Conclusions: Although we find some significant differences between ability group placement and student achievement gains in mathematics, these relationships do not differ as much by sector as market theory (with its emphasis on innovation and autonomy) would predict. Consistent with institutional theory, both sectors still group students by ability and have similar relationships between gains and grouping.

Amid the myriad of educational reforms that have occurred over the past several decades in the United States, one that receives a great deal of attention is school choice, which refers to a variety of programs providing families the option to choose the school their children attend. One choice option that has grown significantly in the past two decades is parents sending their children to charter schools—schools that are publicly funded but run under a charter by parents, educators, community groups, universities, or private organizations to encourage school autonomy and innovation. Charter schools are the fastest growing area of school choice: there are currently nearly 6,500 of them serving nearly 2.5 million children across the United States (National Alliance for Public Charter Schools, 2015).

The charter school sector has grown tremendously in the United States, but the evidence base has been weak for scaling up charter school reform via federal policy and programs. However, in the last decade, the quality of the research has increased with the growth of state longitudinal data on student-level test scores (Betts & Tang, 2014). In addition, lottery-based studies of charter schools have allowed for a randomized design so researchers can compare achievement growth of lottery winners and lottery losers over time. Overall, experimental and quasi-experimental studies find positive, negative, and neutral effects, resulting in mixed and heterogeneous evidence about achievement effects (Berends, 2015; Betts & Tang, 2014; Bifulco & Bulkley, 2015; Teasley, 2009).

Some studies using randomized designs to compare students who win and lose lotteries in oversubscribed charter schools show positive effects on academic achievement gains for charter lottery winners vis-à-vis students who do not win in the lotteries (Abdulkadiroglu et al., 2009; Abdulkadiroglu, Angrist, Dynarski, Kane, & Pathak, 2011; Angrist et al., 2011; Dobbie & Fryer, 2009; Hoxby, Murarka, & Kang, 2009). For example, in New York, Dobbie and Fryer (2011) examined students who won and lost the charter school lotteries in the Harlem Children’s Zone, and they found that the effects of charter elementary schools were large enough to close the racial achievement gap across subjects—i.e., students gained approximately 0.20 of a standard deviation per year in both mathematics and English/language arts (see also Abdulkadiroglu et al., 2009; Abdulkadiroglu et al., 2011; Angrist et al., 2011; Hoxby et al., 2009). Other studies relying on broader samples of schools with lottery-based randomized designs reveal more mixed effects on student achievement (Furgeson et al., 2012; Gleason, Clark, Tuttle, & Dwoyer, 2010). In the largest lottery-based study of charter schools across the nation, Gleason et al. (2010) examined 36 charter schools in 15 states and found no significant effects on mathematics and reading achievement (see also Clark et al., 2015).

Researchers relying on quasi-experimental methods find mixed results for the effect of charter schools on student achievement (Bifulco & Ladd, 2006; Booker, Sass, Gill, & Zimmer, 2011; CREDO, 2013; Davis & Raymond, 2012; Hanushek, Kain, Rivkin, & Branch, 2007; Sass, 2006; Zimmer & Buddin, 2006; Zimmer et al., 2009, 2012). These types of studies reveal that students in charter schools perform similarly to, but not better than, students in traditional public schools.

In a recent meta-analysis of the more rigorous studies—i.e., lottery-based and quasi-experimental value-added analyses—Betts and Tang (2014) find overall that charter schools produce higher achievement gains in mathematics than traditional public schools do. Although most of the effects are positive, they find no overall statistically significant differences in reading achievement between charter and traditional public schools. In addition, they emphasize the heterogeneity of effects across different types of charters.

In our view, placing charter schools in a horserace between sectors (i.e., comparing charter schools to traditional public schools) is not as helpful as understanding the conditions under which school effects—traditional, charter, private, etc.—occur. However, comparisons between sectors can yield lessons learned about school and schooling effects, as well as inform theories that provide a framework for understanding the possibilities of charter school reform for students of differing achievement levels. Along these lines, we examine differences in students’ mathematics achievement gains between charter and traditional public schools, focusing on the distribution and organization of students into ability groups as a way to potentially explain any differences in student gains.

The analyses of this paper are embedded in a conceptual framework that aims to further our understanding of what goes on inside schools of choice. This framework is grounded in the sociological literature on the social organization of schools and classrooms (Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010; Bryk, Gomez, Grunow, & LeMahieu, 2015; Hallinan, Gamoran, Kubitschek, & Loveless, 2003), and also addresses how neoclassical market theory and institutional theory enrich our views of school choice, school organizational and instructional conditions, and student learning.


A number of theories inform the effects of school choice on student outcomes (see Berends, Springer, Ballou, & Walberg, 2009; Henig, 1995). Economists tend to rely on market theory and studies that examine the impacts of charter schools compared with traditional public schools. By contrast, sociologists tend to rely on theories about the social organization and institutional aspects of schools and studies that focus on the social context of charter schools. Here, we rely on market theory and institutional theory because they predict different outcomes for schools and students (Berends, 2015; Berends, Goldring, Stein, & Cravens, 2010). Many reformers maintain that market mechanisms of consumer choice and competition between autonomous schools will encourage diverse and innovative approaches to school organization, teaching, and learning (e.g., Chubb & Moe, 1990; Walberg & Bast, 2003). The assumption is that as school choice undercuts bureaucratic political control of public education, it provides educators in schools of choice the opportunity and motivation to experiment with new organizational and instructional strategies for improving student achievement.

Proponents of choice argue that providing this freedom not only diversifies educational opportunities, but also creates incentives for the improvement of traditional public schooling through increased market competition for services (Chubb & Moe, 1990; Friedman, 1962). In large part, this argument is about how market competition decreases the amount and influence of historical bureaucratic structures to increase the opportunities to better relate to and address parents' demands. Chubb and Moe (1990, p. 67) argue that choice schools “operate in a very different institutional setting distinguished by the basic features of markets—decentralization, competition, and choice.” As the theory goes, such privatization and increased choice will lead to more schooling innovations, better outcomes, lower costs, and greater satisfaction of employees, parents, and students.

Critics of the market model, however, raise questions about the empirical validity of its key assumptions about parent-consumers (demand-side), schools (supply-side), and the products that a market in education would generate (Henig, 1999; Orfield & Frankenberg, 2013). Such criticism points to an alternative theory about the implementation and consequences of school choice: institutional theory. Stemming from broader organizational analysis, this new institutionalism characterizes schools as institutions with persistent patterns of social action that individuals take for granted (Meyer, 2010; Meyer & Rowan, 1977, 1978; Powell & DiMaggio, 1991; Scott, 2008; Scott & Davis, 2007; Scott & Meyer, 1994).

Both market and institutional theorists agree that the bureaucratic form of schooling dominates the public school sector in the United States (and other countries), but institutional theorists take a different tack in their analysis of the education environment. For instance, the increase in bureaucratization of schools has led to an increase in rational coordination among the nested layers of the school—from the federal government to the state, districts, schools, and classrooms. According to institutional theorists such as Meyer and Rowan (1977, 1978), this bureaucratic, rational network has resulted in a system of categories or rules, called “ritual classifications,” that define the actions of schools, teachers, and students. Over time, these ritual classifications become institutionalized and accepted as the norm for what constitutes a legitimate school and schooling activities (Bidwell & Kasarda, 1980).

Examples of taken-for-granted classifications include certified teachers, instructional time, standardized curriculum subjects, age-based classes of reasonable size, and use of curricular materials. In large part, these rules have shaped schools—whatever the type in whatever sector—with the consequence that they look much more alike than different. Institutional theorists refer to this as isomorphism and have documented its diffusion both in the United States and throughout the world (Meyer & Ramirez, 2000). To legitimize themselves within the broader community, school compliance to ritual classifications is important—more important, according to institutional theorists, than maximizing efficiency and innovations of school operations (Meyer & Rowan, 1977; Scott & Meyer, 1994). In other words, schools adapt to their environments by adopting accepted rules and structures.

When applied to school choice, institutional theory emphasizes that all schools operate within highly institutionalized environments, which define what counts as legitimate schooling. All types of schools, no matter the sector or organizational form, adopt rituals, norms, and myths to support their validity (Meyer & Rowan, 1977, 1978; Scott & Davis, 2007). Thus, even schools of choice pay attention to these institutional rules and classifications.

In short, institutional theorists argue that the institutional environment of schooling is so strong that significant changes in the organization of instruction are likely to be rare or short lived (see Elmore, 2007), while market theorists claim that increased choice will result in widespread autonomy promoting innovation, competition, and increased satisfaction and outcomes. Though charter school reform has existed for over 20 years in the United States, it remains an open question whether this form of school choice results in innovation, competition, and improved outcomes. The limited empirical research is mixed on improved and differentiated instruction and in-school organizational conditions, curriculum content, and pedagogy in schools of choice, supporting neither market theory nor institutional theory (see Berends, 2015; Berends et al., 2010; Lubienski, 2003; Preston et al., 2012).

Researchers have long looked at the organizational aspects of schools, particularly the grouping of students for instructional purposes, but often not in the context of school choice or sector differences. Because students come to school with different levels of achievement (Le, Kirby, Barney, Setodji, & Gershwin, 2006), because grouping students remains widespread in schools (Gamoran, 2010), and because grouping has important consequences for students’ opportunities to learn (Hallinan et al., 2003), it is important to examine whether the charter school sector, which claims autonomy and innovation, is in fact using instructional grouping arrangements differently than traditional public schools.


Though previous studies shed some light on the main effects of charter schools on student achievement in different locales, most of these studies provide limited information about the schools as organizations and the conditions within them that may promote student achievement. The organizational practices remain a “black box,” particularly those that are most likely to affect student learning. Many researchers and policymakers advocate looking inside schools to better understand the conditions under which schools of choice have (or do not have) positive effects on achievement, including examining curriculum, instruction, organizational conditions, and teacher characteristics and qualifications (Berends, 2015; Betts et al., 2006; Furgeson et al., 2012; Gill, Timpane, Ross, & Brewer, 2007; Gleason, Clark, & Tuttle, 2012; Zimmer & Buddin, 2007). To date, however, social scientists have largely neglected this area of research.

Many have argued that school improvement processes may work better in schools of choice (like charter schools) than in traditional public schools (Bryk, Lee, & Holland, 1993; Chubb & Moe, 1990; Gamoran, 1996; Walberg & Bast, 2003). Specifically, proponents of choice argue that characteristics long touted in the effective schools movement—e.g., flexible instructional grouping arrangements, instructional innovation, leadership, teacher professional communities, and teacher efficacy—are all characteristics that school choice will promote, so that schools of choice should exhibit higher levels of these conditions than traditional public schools (see Chubb & Moe, 1990). Yet few have examined these claims empirically.

School effectiveness research indicates that the aspects of schooling closest to students—teaching, instruction, and curriculum—have the greatest impact on student learning (Gamoran, Nystrand, Berends, & LePore, 1995). In particular, one school organizational aspect that has received a great deal of attention in the empirical literature over the years is ability grouping and tracking—the assignment of students to different curricular programs purportedly based on their interests and academic achievement. Though the two terms are often used interchangeably, tracking—the division of students into separate classes for all of their academic subjects—has become much less common in the United States over the past several decades; much more common now is the organizational practice of ability grouping—the assignment of students into classes on a subject-by-subject basis (Gamoran, 2010; Lucas, 1999).

According to its proponents, ability grouping is an effective response to students’ diverse academic needs, allowing teachers to adapt their instructional approaches accordingly. Critics, however, argue that ability grouping has harmful consequences. For instance, separating students according to social and economic characteristics contradicts many important social goals of schools (Oakes, 2005; Oakes, Gamoran, & Page, 1992). In addition, it may cause students in lower groups to receive inferior educational resources and low-quality instruction (Gamoran et al., 1995; Oakes, 2005).

There has been a great deal of research on ability grouping and tracking (for reviews, see Gamoran, 2010; Oakes et al., 1992; Slavin, 1987, 1990). Much of the research has focused on academic outcomes, although a large body of research examines other issues, such as placement into groups (e.g., Gamoran & Mare, 1989; Kelly, 2004a, 2004b, 2009; Lucas, 1999), tracking as a form of within-school segregation (e.g., Clotfelter, 2004; Kelly, 2009; Kelly & Price, 2011; Mickelson, 2001), tracking differences by school sector (e.g., Bryk et al., 1993; Gamoran, 1996; Gleason et al., 2010; Hallinan, 1994; Kelly, 2009), teacher assignments to different tracked classes (Finley, 1984; Kelly, 2004a), school-to-school differences in the structure of ability grouping and tracking (Kelly & Price, 2011; Lucas & Berends, 2002), and effects of tracking on outcomes beyond achievement (e.g., Berends, 1995; Oakes, 2005).

Most of the research, however, has focused on students’ academic achievement. Comparisons of high-group students to low-group students reveal an advantage of high-group placement on academic achievement (Gamoran & Mare, 1989; Gamoran et al., 1995; Tach & Farkas, 2006). For example, Tach and Farkas (2006) examine the determinants and consequences of ability grouping in kindergarten and first grade in their analysis of the nationally representative Early Childhood Longitudinal Study-Kindergarten Cohort of 1998–99 (ECLS-K). Estimating multilevel models with an array of controls, they find that in classes where ability grouping is used, students placed in higher groups exhibit greater achievement gains in both reading and mathematics—a finding consistent with previous research. As the studies below reveal, this positive effect is offset by a negative low-group effect, resulting in an overall net effect of zero (Gamoran, 2010). Though some have questioned these effects (Betts & Shkolnik, 2000; Figlio & Page, 2002; Slavin, 1990), the balance of evidence suggests that the learning gap between high- and low-group students increases over time (see Gamoran, 2010; Gamoran & Berends, 1987; Oakes et al., 1992).

Twenty-five years ago, Slavin’s (1990) review of the ability grouping research concluded that comparisons of secondary students who are grouped homogeneously to those who are grouped heterogeneously reveal that separating students by purported abilities and interests has no effect on their achievement levels. Yet, some researchers have found differential effects when comparing students placed in high, middle, and low groups to students who were not grouped. In data from Britain, Kerckhoff (1986) found that after controlling for initial achievement, students in the high-ability classes experienced greater test score gains, and students in low-ability classes exhibited lower gains when compared to students who were not grouped for instruction in mathematics and reading. In the United States, Hoffer (1992) examined middle school students and found that in mathematics, students in the high group learned at a greater rate, and students in the low group learned at a slower rate, when compared with students who were not grouped for mathematics instruction.

More recent analyses of ECLS-K 1998–99 also support the differential effects hypothesis (Condron, 2008; Lleras & Rangel, 2008). Lleras and Rangel (2008) examine the reading achievement differences of students in grouped and non-grouped classes in first and third grades. Using multilevel models, they find students in the lower groups learn less and students in the higher groups learn more when compared to students in non-grouped classrooms. Analyzing the same data with multilevel models and propensity scores predicting group placement to reduce the selection bias of students into such groups, Condron (2008) also finds support for the differential effects hypothesis—compared with students who were not grouped, students in the lower groups learn less and students in the higher groups learn more.


In our analyses below, we examine whether the differential effects hypothesis differs between charter and traditional public schools. If charter schools are given the freedom and autonomy to organize their schools in innovative ways, we might expect them to use ability grouping in ways that differ from traditional public schools. Few studies have examined ability grouping within charter schools. As noted above, in their national lottery-based charter school study, Gleason et al. (2010) found no significant impact of winning a charter school lottery on student achievement. Upon examination of school-level organizational factors, however, Gleason et al. (2010) found that ability grouping was positively correlated with charter school impacts on mathematics scores (but not English). This finding is informative only to an extent, because the “use of ability grouping” measure was based on school reports about whether grouping was used in “some or all mathematics and/or English courses” (p. D-18) and because the analyses involved correlations between this measure and the school-level impact measures for mathematics and English.

To further our understanding of instructional stratification within and between schools, it is advantageous to examine ability grouping between school types. Our study builds on prior findings by using more specific measures of ability grouping across classrooms in schools and directly links these grouping arrangements to student achievement gains. Does the distribution of ability grouping differ between charter and traditional public schools? What are the relationships between ability group placement and students' mathematics achievement gains in charter and traditional public schools? We address these questions in our analyses that follow.


The data we use for this paper are part of a larger research project conducted by the National Center on School Choice at Vanderbilt University that aimed to open up the “black box” of charter and traditional public schools. Unpacking this “black box” helps us to understand not only achievement differences if they occur, but also other distinctions among these school types, such as curriculum, instruction, and organizational conditions that promote achievement. The specific focus in this paper is on the grouping of students for instructional purposes.

Making charter-traditional public school comparisons is a challenging task if researchers want to examine schools across a variety of contexts in a cost-effective manner. Our approach was to collaborate with the Northwest Evaluation Association (NWEA), a nonprofit testing organization that currently partners with more than 4,300 districts and 12,300 schools in 50 states to provide computer-based, vertically equated assessments in mathematics, reading, and English/language arts. Taking advantage of the NWEA partnerships with large numbers of charter and traditional public schools, we constructed a matched sample for administering teacher and principal surveys in the 2007–2008 school year, which we linked to student achievement gains (for technical details see Cannata & Engel, 2012; Cannata & Peñaloza, 2012; Cravens, Goldring, & Peñaloza, 2012; Goff, Mavrogordato, & Goldring, 2012). We returned to these schools in the late spring of the 2008–2009 school year to administer an end-of-year Survey of the Enacted Curriculum (SEC) (Porter, 2002) to those teachers who taught mathematics in grades two through eight. Given resource constraints and the significant burden on participants of responding to the SEC for multiple content areas, the project team decided to focus only on mathematics. Mathematics is also typically confined to particular classrooms and teachers, whereas reading is often taught across the curriculum.

Traditional public schools were matched to charters in two stages. At the first stage, we used the Common Core of Data (CCD) of the National Center for Education Statistics to identify the best match for schools. For schools to match, they had to be located in the same state and be the closest possible traditional public school (TPS) to the charter public school (CPS) in terms of geographic distance, grade range served, racial-ethnic composition, socioeconomic status, and size. Distance was a crucial criterion because we wanted the CPS and the matched TPS to reflect the choice of schools that families and students had in the same geographic area. Thus, we restricted matches to within 20 miles to ensure matched schools would be within the same choice set for parents; 79% of the matched schools were within 15 miles of each other (Bifulco & Ladd, 2006; Holmes, DeSimone, & Rupp, 2003). We wanted to model choice with reasonable comparisons—a critical condition of examining any potential differences between school types, according to market theory. This is also consistent with a study that found matched comparison groups based on geographically defined criteria (rather than across states) produced estimates closer to randomized experiments (Cook, Shadish, & Wong, 2008).

For the matching process, we did not use propensity score matching because the different models we tested produced inconsistent matches, and we wanted a method by which we could weigh certain matching variables (e.g., distance between schools) differentially. Instead, we constructed an overall difference index, which measured the difference between the CPS and the TPS in terms of racial/ethnic composition, socioeconomic status, and school size.1 The index gave equal weight to racial/ethnic composition and socioeconomic status differences and much lower weight to school size differences. An index value of zero indicated a perfect match. Then, we sorted the school pairs by distance brackets and the index, and selected the pair with the smallest index and the greatest tested grade overlap within the closest distance bracket.2
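The first-stage matching logic can be illustrated in code. This is a minimal sketch, not the study's actual implementation: the field names, the example values, and the specific weights (0.4/0.4/0.2) are assumptions for illustration, chosen only to reflect the stated design of equal weight for racial/ethnic composition and socioeconomic status and much lower weight for size; the distance-bracket sorting and grade-overlap tie-breaking are omitted.

```python
# Illustrative sketch of a weighted difference index for matching a charter
# school (CPS) to candidate traditional public schools (TPS).
# Weights and field names are hypothetical; an index of 0 is a perfect match.

def difference_index(cps, tps, w_race=0.4, w_ses=0.4, w_size=0.2):
    """Weighted absolute difference between a CPS and a candidate TPS."""
    # Mean absolute difference in racial/ethnic composition (percentages)
    race_diff = sum(abs(cps["race"][g] - tps["race"][g])
                    for g in cps["race"]) / len(cps["race"])
    # Difference in percent free/reduced lunch (SES proxy)
    ses_diff = abs(cps["pct_frl"] - tps["pct_frl"])
    # Enrollment difference, rescaled so it contributes on a comparable scale
    size_diff = abs(cps["enrollment"] - tps["enrollment"]) / 100.0
    return w_race * race_diff + w_ses * ses_diff + w_size * size_diff

charter = {"race": {"black": 36.4, "white": 51.8, "hispanic": 7.6},
           "pct_frl": 54.0, "enrollment": 500}
candidates = [
    {"id": "TPS-A", "race": {"black": 28.2, "white": 56.1, "hispanic": 10.1},
     "pct_frl": 53.0, "enrollment": 464},
    {"id": "TPS-B", "race": {"black": 5.0, "white": 90.0, "hispanic": 3.0},
     "pct_frl": 20.0, "enrollment": 700},
]

# Select the candidate with the smallest index (the demographically
# similar TPS-A, rather than the dissimilar TPS-B)
best = min(candidates, key=lambda t: difference_index(charter, t))
print(best["id"])
```

In the actual procedure, this index selection would be applied within each geographic distance bracket, with tested-grade overlap used to break near-ties.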

The second stage of the matching process involved obtaining school participation in the teacher and principal survey. Once charter schools agreed to participate, we then approached the best-matched traditional public school (and its district) to participate in the study. All participating schools submitted teacher rosters, and those teachers filled out confidential, online surveys. We administered the mathematics version of the SEC to all mathematics teachers in grades two through eight in the school with an overall response rate of 70.8%, which is quite good when compared to other studies that utilized the SEC (e.g., Polikoff & Porter, 2014).  

We compared our sample of charter and traditional public schools to a subset of the 2008–09 CCD, using the population of schools in the six states: Colorado, Delaware, Indiana, Michigan, Minnesota, and Ohio. Based on CCD characteristics, the sample charter schools are generally similar to charter schools in the selected states in terms of racial/ethnic composition, percentage of students who qualify for free and reduced lunch, and student–teacher ratios. One notable difference is that the charter schools in our sample are larger, on average, than charter schools in the sampled states. The sampled traditional public schools served slightly different student populations than traditional public schools in the six states: Black students comprised, on average, 28% of enrollment in sampled schools, compared to 13% in traditional public schools across the six states, and roughly 53% of students in sampled schools qualified for free and reduced lunch, compared with 41% for schools in the sampled states. Although the schools are broadly similar to the general population of schools in the six states, any differences between the schools in the matched sample are controlled for in the multivariate analyses that follow. See Tables 1 and 2 for CCD and sample comparisons.

Table 1: Means and Standard Deviations for Charter Schools by Analytic Sample vs. CCD Subset of Schools in Sampled States

School Measures                             Analytic Sample      CCD Subset
Percentage of American Indian Students      0.589 (0.627)        1.509 (8.185)
Percentage of Asian Students                3.592 (6.596)        3.022 (10.796)
Percentage of Hispanic Students             7.561 (10.804)       8.639 (16.715)
Percentage of Black Students                36.411 (35.201)      42.560 (39.817)
Percentage of White Students                51.847 (33.140)      44.270 (37.366)
Percent of Free/Reduced Lunch Students                           54.215 (34.991)
Student to Teacher Ratio                    16.626 (4.187)       18.136 (13.723)
Number of Students per Grade                57.573 (24.144)      45.440 (42.485)
Student Enrollment                          499.720 (238.726)    311.245 (433.693)

Note: N(sample) = 25; N(CCD) = 986

* p ≤ 0.05

Table 2: Means and Standard Deviations for Traditional Public Schools by Analytic Sample vs. CCD Subset of Schools in Sampled States

School Measures                             Analytic Sample      CCD Subset
Percentage of American Indian Students      1.227 (1.714)        1.116 (5.309)
Percentage of Asian Students                4.344 (6.614)        2.257 (4.301)
Percentage of Hispanic Students             10.100 (13.882)      8.273 (14.887)
Percentage of Black Students                28.187 (29.958)      13.011 (23.457)
Percentage of White Students                56.142 (35.500)      75.344 (28.078)
Percent of Free/Reduced Lunch Students                           41.289 (25.336)
Student to Teacher Ratio                    16.773 (2.376)       16.697 (8.531)
Number of Students per Grade                100.116 (78.935)     113.665 (113.491)
Student Enrollment                          464.074 (200.792)    475.025 (370.174)

Note: N(sample) = 27; N(CCD) = 11,962

* p ≤ 0.05

Our measurement of ability grouping and classroom instruction relies on the work of Porter (2002) and colleagues, who have examined teachers’ content decision-making over the past 35 years. This research resulted in the development of tools for measuring the content and cognitive complexity of instruction in subjects like mathematics, based on surveys of teachers’ reports of their instruction. These Surveys of the Enacted Curriculum (SEC; see http://www.seconline.org) have undergone several studies over the years to verify their reliability and validity (for a review, see Porter, 2002). A major focus behind the SEC tools is the development of common languages of topics and categories of cognitive demand for describing content in different subject areas (e.g., mathematics, reading, and science).

When teachers complete the SEC, they describe their instruction of specific classes. In our study, mathematics teachers also selected students from target class lists presented to them during the online survey to ensure that teachers could be directly linked to the students they taught and students’ NWEA mathematics assessment scores. Teachers were given a $50 gift card if they completed the survey.

Because our focus is on ability grouping, the organization of instruction, and student achievement gains (controlling for prior achievement), we further restricted the sample to regular classroom teachers who taught in grades four through eight and had complete data on the instructional measures. We also restricted the sample to students who had three seasons of mathematics scores (spring 2008, fall 2008, and spring 2009), meaning these students remained in the same school from the 2007–2008 to 2008–2009 school years. These decision rules result in an analytic sample of 2,955 students who were taught by 188 mathematics teachers in 47 schools. When broken down by sector, there were 1,800 students and 117 teachers in 25 charter schools, and 1,155 students and 71 teachers in 22 traditional public schools.



Student achievement in mathematics is based on vertically equated scores from spring 2008, fall 2008, and spring 2009, which allow us to examine gains and growth among students in different classrooms and school types (Kingsbury, 2003; Northwest Evaluation Association, 2008, 2010). We use the gain score (fall 2008 to spring 2009) as the dependent variable and control for prior achievement with the spring 2008 mathematics score. We also measure student demographic characteristics, such as gender, race/ethnicity, free-and-reduced-lunch status, time between testing administrations, and grade level. Including students' prior test scores helps further reduce selection bias in the estimates (Ballou, Sanders, & Wright, 2004).

Ability Grouping

One of the important developments in research on ability grouping and tracking has been the increasing awareness of measurement problems and their implications for school practice and policy (e.g., Gamoran, 1989; Lucas, 1999; Lucas & Gamoran, 2001). We have no interest in identifying the "best" way to measure students' track or ability group placements because that may depend on the question under investigation. Our focus here is on students' structural location in mathematics (i.e., where a student sits in the curricular structure, as distinct from where they perceive themselves to be in the academic hierarchy), so we rely on teacher reports of class placement to examine whether these structural indicators are related to mathematics achievement gains.

Structural measures of organizational stratification are important for predicting student achievement because they are more likely to reflect students' experiences within and between different classes. If schools use ability grouping to make teaching easier and more efficient by allowing for individualized instruction that fits the needs of each group, then school records of ability group placement should be associated with student learning (see Slavin, 1990). Moreover, because school staff tend to agree more often about which students are in the advanced, regular, and remedial classes than about students' curricular programs (i.e., tracks), relying on school records results in more consistent grouping indicators across schools (Gamoran, 1989).

For their target class, teachers reported whether the mathematics class was composed of students of high, average, low, or mixed (heterogeneous) achievement levels.3 We created dummy measures for each, with the low group or the mixed group as the reference category in the analyses that follow.

Teacher and Classroom

To control for classroom contexts, we included other covariates measured at the classroom level, including: the percent of the students who are racial/ethnic minorities, percent of students who are English language learners (ELL), class size, teacher certification (i.e., temporary certification), and school grade-level configuration. For descriptive statistics on these measures overall and by school type, see Table 3.

There has been a debate in the literature about the effects of class size. Although experimental studies, such as Tennessee's Project STAR, have found positive effects of small class sizes on students' achievement (Nye, Hedges, & Konstantopoulos, 2000, 2002), there are questions as to whether this intervention would produce similar results or be as effective if scaled up (Stecher, McCaffrey, & Bugliari, 2003). Quasi-experimental approaches have not found an effect of class size on achievement (Milesi & Gamoran, 2006). Some meta-analyses of research on class size have found consistent positive effects of smaller classes (Greenwald, Hedges, & Laine, 1996), while others have found no systematic positive effects (Hanushek, 1999). Although the size and direction of effects have been contested depending on methods, approaches, and controls, one consistent takeaway from many of these studies is that class size on its own may not have as much of an impact on student achievement as it does in tandem with other factors, such as classroom instruction, additional resources, or teacher aides. Questions regarding how smaller classes are used and under which conditions they are most effective are ultimately of interest (Hanushek, 1996).

Teacher certification has been examined in terms of setting the context for instruction that may differ in traditional and charter public schools (Podgursky, 2008). Studies have shown that teacher quality matters, and teachers who lack preparation in subject matter and teaching methods are significantly less effective in promoting student achievement than those who are fully certified (Darling-Hammond, 2007). Analyses of nationally representative longitudinal data have shown that teachers who hold standard subject certification have a positive impact on student achievement in that subject (Goldhaber & Brewer, 2000). Moreover, in some states like North Carolina, researchers have found that teacher certification and licensure contribute to both student achievement gains and achievement gaps by race/ethnicity and socioeconomic status (Clotfelter, Ladd, & Vigdor, 2010). Because of these findings and the national gap in state licensure between charter and traditional public school teachers (70% of charter and 93% of traditional public school teachers are licensed; Podgursky, 2008), we control for teacher certification in our models as a classroom context measure.

We also included dummy variables for sector and school grade configurations. Though the extant literature has mixed findings and explanations for school grade configuration (see Byrnes & Ruby, 2007; Kieffer, 2013; Rockoff & Lockwood, 2010; Schwartz, Stiefel, Rubenstein, & Zabel, 2011), this feature may have an impact on how students are organized within schools. In our sample, all of our traditional public schools fall into the elementary- and middle-only grade configurations. Of the 22 traditional public schools, 17 were elementary and five were middle schools. Our charter school sample differed from the sample of traditional public schools (see Endnote 2). Most of our charter schools combined different grade levels, either K–8 or K–12. Of the 25 charter schools, five were elementary-only, two were middle-only, 16 were K–8, and two were K–12. Since we cannot causally separate the effects of sector from grade configuration with these data, we included grade configuration as a control with dummies for K–8 and K–12 with K–5 as the reference group.


We conduct a series of descriptive analyses to examine whether ability grouping differs between charter and traditional public schools. After these descriptive analyses, we estimate regression models to assess the relationship between sector and student achievement gains, exploring how ability grouping mediates this effect while controlling for student, teacher, and classroom characteristics. To account for the nesting of students in classrooms, we use the cluster command in Stata to produce robust standard errors in our models.

Specifically, we estimate the following model:

ΔMathGain_ijk = X_ijk β + C_ijk θ + e_ijk

ΔMathGain_ijk is the measure of students' gains in NWEA mathematics achievement from the fall of 2008 to the spring of 2009 for student i within teacher j's classroom in school k. X_ijk is a vector of student-level controls that includes indicators for gender, race/ethnicity, grade level, and prior NWEA mathematics scores from spring of 2008. C_ijk is a vector of classroom and teacher characteristics, including ability group level (i.e., dummies for high, average, mixed, and "don't know" compared with low, as well as high, average, low, and "don't know" compared with mixed), classroom composition (percent minority, percent English language learners, class size), teacher certification, and school grade-level configuration. The error term is e_ijk.
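The estimation strategy described here (OLS on gain scores with standard errors clustered on classrooms) can be sketched outside of Stata as well. The following is an illustrative sketch on simulated data; every variable name and magnitude below is hypothetical, not the study's actual data, and it uses numpy to compute a cluster-robust "sandwich" covariance analogous to Stata's cluster option:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 classrooms of 15 students each.
n_class, n_per = 40, 15
cluster = np.repeat(np.arange(n_class), n_per)
charter = np.repeat(rng.integers(0, 2, size=n_class), n_per)   # sector dummy
prior = rng.normal(213, 16, size=n_class * n_per)              # spring 2008 score
gain = 8 + 1.5 * charter + 0.02 * (prior - 213) + rng.normal(0, 7, size=n_class * n_per)

# Design matrix: intercept, charter dummy, prior achievement
X = np.column_stack([np.ones_like(prior), charter, prior])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
resid = gain - X @ beta

# Cluster-robust (sandwich) covariance, clustering on classroom
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((X.shape[1], X.shape[1]))
for g in range(n_class):
    in_g = cluster == g
    score = X[in_g].T @ resid[in_g]   # per-cluster score contribution
    meat += np.outer(score, score)
cov = bread @ meat @ bread
se = np.sqrt(np.diag(cov))
print("charter coefficient:", beta[1], "clustered SE:", se[1])
```

Clustering on classrooms allows the errors of students taught by the same teacher to be correlated, which inflates the standard errors relative to the i.i.d. assumption.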


In the results that follow, we first highlight descriptive differences in the distribution of ability grouping within charter and traditional public schools. We then provide estimates for the associations between ability group placement and students' mathematics gains (overall and by school type), controlling for student characteristics, classroom composition, teacher characteristics, and school characteristics, and then go on to examine whether these relationships differ by sector by including interaction terms. Although we do not purport to make causal claims with these cross-sectional survey data, the following results describe relationships between school sector and school organization to begin to understand and unpack differences, if any, between the organization of charter and traditional public schools.

Ability Grouping Differences Between Charter and Traditional Public Schools

Does the distribution of ability grouping differ between charter and traditional public schools? Overall, charter schools have a more even distribution of students across ability groups (high, average, low, mixed), whereas traditional public schools have many students clustered in the average and few in the high ability groups. Specifically, we find some statistically significant differences in the proportion of students placed in different ability groups when comparing charter (CPS) and traditional public school (TPS) classrooms. For example, when examining descriptive differences (not controlling for other measures), a higher proportion of CPS students are placed in the high ability group (0.213 CPS compared with 0.085 TPS), but a lower proportion is placed in the average ability group in CPS classrooms (0.357) compared with TPS classrooms (0.484) (see Table 3). Other ability group differences are not statistically significant.

Table 3: Means and Standard Deviations for Variables by School Type

(Columns: Charter, Traditional Public, Total)

Student Measures
Math Achievement Gain                         9.662 (7.692)      8.010 (7.768)      9.052 (7.758)
Fall 2008 IRT Scores                          …
Spring 2009 IRT Scores                        …
…
Free and Reduced Lunch                        …
Grade 4                                       …
Grade 5                                       …
Grade 6                                       …
Grade 7                                       …
Grade 8                                       …
Days Between Tests                            231.899 (17.258)   224.023 (21.598)   228.821 (19.452)
Spring 2008 Math Score                        214.481 (16.568)   210.438 (14.196)   212.901 (15.805)

Classroom Measures
Target Class Demographics (percent)
Percent ELL                                   2.293 (4.283)      2.981 (4.745)      2.562 (4.481)
Percent Minority                              42.356 (32.959)    36.221 (32.228)    39.958 (32.806)
Class Size                                    22.322 (4.974)     23.945 (5.079)     22.956 (5.076)
Target Class Achievement Level (proportion)
…
Don't Know                                    …
Teacher Characteristics (proportion)
Temporary Certification                       …
Grade-Level Configuration (proportion)
…

Note: N_total = 2,955; N_charter = 1,800; N_TPS = 1,155

*** p ≤ 0.001, ** p ≤ 0.01, * p ≤ 0.05

Moreover, CPS student gains as a whole (Table 3) and for each ability group (Figure 1) are larger than those of TPS students in similar groups. CPS students have math gains of 9.66 compared with 8.01 for TPS students, a statistically significant difference. Within groups, CPS students make higher mathematics gains (on average, about 1.65 points, or about 0.11 of a standard deviation [SD]) in each of the ability groups, except for the high group, where the difference was not significant even though the TPS high group gained more than the CPS high group. For example, low-group students in CPS have gains of 8.14 over the school year, compared with 6.88 for TPS (a difference of about 0.16 of a SD). When comparing the average ability groups, CPS students have an average gain of 10.13 compared with 8.18 for TPS students (a difference of about 0.25 of a SD), and the mixed ability group in CPS had gains of 9.74 points compared with 7.98 points for TPS students in the mixed ability group (a difference of about 0.23 of a SD).

If we compare the simple differences within school type, we find that the high ability group gains are greater than those of the low group. For example, CPS students in the high group make average mathematics gains about two points larger than their low-group peers: 10.21 compared with 8.14. Similarly, TPS students in the high group make greater average gains (11.24) than TPS students in the low group (6.88), a difference of about 4.5 points (0.57 of a SD). In TPS, the high group has the greatest average gain of all groups. Looking within sector and comparing across ability groups, we see a more even distribution of gains in CPS and much more variation and disparity in gains in TPS.

Comparisons within sector also reveal that the achievement gains of charter school students in the mixed group (9.74) look much more like the gains of charter students in the high (10.21) and average groups (10.13); CPS students in the low group score 1.6 points below students in the mixed group. By contrast, TPS student gains in the mixed group (7.98) are more similar to public school students in the average (8.18) and low groups (8.14) compared with students placed in the high group (11.24).

Figure 1. Mathematics gains in charter and traditional public schools, by ability group (2008–2009 academic year, grades 4–8).


Instructional Context and School Type

In addition to ability grouping differences across charter and traditional public school classrooms, the other classroom measures that we examine include the percentage of the class designated as English language learners, the percentage of the classroom that is Black or Latino (percent minority), class size, and whether or not teachers have temporary certification.

Several of these classroom measures differ by school sector in statistically significant ways. The descriptive comparisons between charter and traditional public school classrooms in Table 3 reveal that charter classrooms have a larger percentage of Black and Latino students (42% in CPS compared with 36% in TPS). Charter classrooms also tend to be slightly smaller (22 students) than traditional public school classrooms (24 students). The proportion of teachers in charter schools who have temporary certification is significantly higher than in traditional public schools: about 21% of charter public school teachers report having temporary certification, compared with 9% of teachers in traditional public schools.

We also examined these additional classroom-level measures by school sector and by ability group levels. Figure 2 shows the number of students per classroom in charter and traditional public schools by ability group levels. Compared with traditional public schools, classroom size in charter schools is smaller across all ability group levels with the exception of the low ability group level, which tends to be somewhat larger.

Figure 2. Number of students per classroom in charter and traditional public schools by ability group (2008–2009 academic year, grades 4–8)


When considering the distribution across ability group levels of teachers with temporary certification in charter and TPS, some dramatic differences emerge (see Figure 3). There are no traditional public school teachers who teach high-ability group students and have temporary certification. By contrast, about one third of charter public school teachers have temporary certification and teach high-ability group students. In addition, about a quarter of the CPS teachers are temporarily certified and teach students in the average ability group, compared with 12% of TPS teachers. When considering teachers of low-ability group students, about 7% of TPS teachers compared with 3% of CPS teachers have temporary certification. These descriptive analyses provide some evidence of differences in the organization and distribution of resources in charter and TPS. Next, we turn to multivariate analyses to examine the conditions under which these differences hold.

Figure 3. Proportion of teachers who are not certified in charter and traditional public schools by ability group (2008–2009 academic year, grades 4–8)


Student Achievement and Ability Group Placement, School Type, and Instruction

What are the relationships between ability group placement and students' mathematics achievement gains in charter and traditional public schools, controlling for other measures of students and classroom context? We find that there are some significant differences between ability group placement and student achievement gains in mathematics, but these relationships do not differ as much by sector as market theory (with its emphasis on innovation and autonomy) would predict. In Table 4, we estimate a series of models that examine estimates for charter school classrooms, student characteristics, classroom measures, and ability group placements, with low ability classrooms as the reference category. Prior research on ability grouping has relied on group comparisons, similar to the results presented in Table 4 (i.e., low as reference group). However, researchers have called for comparisons of grouped (or tracked) classrooms to mixed-ability classrooms (e.g., Condron, 2008; Lleras & Rangel, 2008; Tach & Farkas, 2006; see Gamoran, 2010, for a review). In Table 5, we present estimates for the same models as Table 4, but use mixed ability classrooms as the reference category to facilitate grouped-to-mixed comparisons. In both, we first estimate a baseline model with only school sector, and then add school organizational and demographic characteristics, ability group, classroom measures, and interaction terms to explore how the relationship between gains and school type varies by each of these factors.

Table 4. Relationship of Ability Grouping to Student Mathematics Gains in Charter and Traditional Public Schools (Grades 4–8 with Low Ability Group as Reference Category)

[Coefficient estimates for this table were not preserved. Rows: Spring 08 score; time between tests; other race; free and reduced lunch; Grade 5; Grade 6; Grade 7; Grade 8; middle only; average ability; high ability; mixed ability; don't know ability; temp. teacher; class percent measures; class size; charter x average ability; charter x high; charter x mixed; charter x don't know ability; charter x temp.; adj. R-squared.]

Robust standard errors in parentheses

*** p < 0.001, ** p < 0.01, * p < 0.05, + p < 0.10


In Model 1, our baseline model, charter school students gain 1.563 points more (0.20 of a SD) in mathematics than their TPS counterparts, a statistically significant difference. In Model 2, we control for prior achievement and time between tests (between fall and spring); though this somewhat attenuates the sector relationship, CPS students still make statistically significant higher gains (1.095 points, or 0.14 of a SD). Model 3 adds student demographic characteristics, which increases the estimated CPS advantage (1.410 points, or 0.18 of a SD).

In Model 4, after including dummy variables for school grade configuration, the charter school estimate is no longer statistically significant. In this and subsequent models, students in K–8 schools (which, in our sample, are all charter schools) make significantly larger gains than students in elementary schools. Students in other school grade configurations did not make statistically different gains from students in elementary schools.4 Since we do not have any TPS K–8 schools in our sample, we cannot make comparisons between the two sectors on this specific organizational dimension. However, it is clear that the grade-level configuration of schools is associated with students' mathematics achievement gains.

Next, examining the estimates for ability grouping, net of all other controls, we find that students in all but the average-ability group make significantly larger gains over the year than their peers in low-ability groups (see Model 5). Specifically, compared with students placed in the low-ability group, high-group students gain 1.716 points more (0.22 of a SD), mixed-group students 1.569 points more (0.20 of a SD), and don't-know-group students 1.844 points more (0.24 of a SD) from fall to spring. Average-group students do not make statistically different gains compared with their low-group peers. With the mixed group as the reference category (Table 5), only low-group students are statistically different, making gains about 1.569 points (0.20 of a SD) lower than mixed-group students. This difference between low- and mixed-group students persists in the rest of the models as well.
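As a quick arithmetic check, the SD-unit effect sizes quoted here follow from dividing each point estimate by the pooled gain-score SD reported in Table 3 (7.758 points):

```python
# Converting the Model 5 point estimates (gain-score points) into SD units,
# using the pooled gain-score SD of 7.758 from Table 3.
gain_sd = 7.758
point_gains = {"high": 1.716, "mixed": 1.569, "don't know": 1.844}
effect_sizes = {k: round(v / gain_sd, 2) for k, v in point_gains.items()}
print(effect_sizes)  # matches the 0.22, 0.20, and 0.24 SD figures in the text
```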

Table 5. Estimates for Relationship of Ability Grouping to Student Mathematics Gains in Charter and Traditional Public Schools (Grades 4–8 with Mixed Ability Group as Reference Category)

[Coefficient estimates for this table were not preserved. Rows: low ability; average ability; high ability; don't know ability; charter x average ability; charter x high ability; charter x mixed ability; charter x don't know ability.]

Robust standard errors in parentheses. Controls for grade level not shown

*** p < 0.001, ** p < 0.01, * p < 0.05, + p < 0.10


Model 6 adds classroom measures for temporary teacher certification, classroom composition (percent ELL and percent minority), and class size. When controlling for other student background and ability grouping measures, none of these classroom measures is significantly related to students' mathematics gains over the course of the school year. Yet adding these measures increases the estimated advantages of high-, mixed-, and don't-know-group students relative to low-grouped students. Once we control for classroom measures, average-grouped students also make statistically greater gains (though marginally so) than low-grouped students.

To examine whether the organization of ability grouping differs between charter and traditional public schools in regard to the relationships between mathematics gains and ability group placement, we add interaction terms for charter by ability group level in Models 7 and 9. Here, only the charter x high group interaction approached statistical significance (p < 0.10), suggesting that the use of high-ability grouping in CPS differs from that in TPS when considering student achievement gains in mathematics. In Model 9, the interaction term between charter schools and high group placement reveals that, while high-ability group charter students have greater achievement gains than charter students in the low group (-9.553 = -10.340 - 0.389 + 3.910 - 2.734), this high-low group gap is larger in magnitude than the high-low gap for traditional public school students (-8.637 = -10.677 + 2.040). In short, CPS students in high ability groups do not receive the additional boost from their structural location in high-grouped classrooms that students in TPS do (the absolute difference in gaps is 9.553 - 8.637 = 0.916, or 0.12 of a SD).
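The gap arithmetic above can be checked directly, using the coefficients exactly as quoted in the text:

```python
# Reproducing the high-low gap arithmetic reported for Model 9
# (coefficient values and sign conventions follow the text).
cps_high_low = -10.340 - 0.389 + 3.910 - 2.734   # charter school gap components
tps_high_low = -10.677 + 2.040                   # traditional public school gap
gap_difference = abs(cps_high_low) - abs(tps_high_low)  # 0.916 points, ~0.12 SD
print(round(cps_high_low, 3), round(tps_high_low, 3), round(gap_difference, 3))
```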

Models 8 and 9 add interaction terms for sector with teacher certification. These interactions are positive, suggesting that teachers with temporary certification in charter schools differ from traditional public school teachers who have temporary certification. Our final Model 9 explains about half of the between-classroom variance in achievement gains, reducing it from 16.62% to 8.39%.


Proponents of choice argue that students' educational needs will be better served if parents have the opportunity to select their children's school. Yet ability grouping arrangements are often neither addressed within the context of curricular innovation nor fully transparent to policymakers, parents, or even students (Hallinan, 1994; Lucas, 1999). Understanding the various forms of grouping students for instruction thus addresses one key aspect of potential innovation between charter and traditional public schools.

When examining relationships of ability group placements to students' mathematics gains over the school year, we find statistically meaningful differences between ability group levels, similar to prior research. That is, students placed in high-ability groups learn at a significantly greater rate than students placed in low-ability groups (by about a quarter of a SD, on average, over the course of the school year). When considering the comparison with mixed-group students, those placed into low-ability classrooms consistently make lower gains (about a fifth to a quarter of a SD over the course of the school year). Consistent with prior research, we find that low-grouped students are at a significant disadvantage in mathematics achievement gains when compared with peers in both grouped and ungrouped (or mixed) classrooms.

In our analyses, we find significant differences in the distribution of ability grouping placements in charter and traditional public schools. Charter schools have a more even distribution of students across groups, whereas traditional public schools have large proportions of students in some groups (i.e., the average level) and small percentages in others (i.e., the high-ability group). This difference could be due to a number of factors, including greater selectivity in traditional public schools and more inclusivity in charter schools in sorting students into the high ability group; only 9% of TPS students are placed in high-grouped classrooms, compared with about 21% of CPS students. Future work opening up the "black box" of charter schools will want to examine schools' procedures and requirements for placing students into ability groups to tease out the motivations behind the more even distribution of students across ability groups in CPS.

The interaction term between charter schools and high group placement suggests that students in high groups do not receive the same returns in CPS as they do in TPS. One possible explanation relates to the previous point: CPS may be less selective in high-group placement than TPS are. If charter schools, on average, are less selective in placing students in the high group, and if there is an interaction between high-group placement and charter schools, then the finding that the high-low group gaps are larger in CPS than in TPS is worthy of further investigation in future research.

The findings reported here suggest the continuing importance of ability grouping for students' opportunities to learn, whether they are in a charter or traditional public school classroom. Near the start of the charter school movement in the early 1990s, Hallinan (1994) argued, particularly within the context of school choice debates, that schools need to be more public and transparent about their ability grouping and tracking arrangements:

Since schools differ in the impact of tracking on achievement, and since tracking can create unequal learning opportunities for students, parents need to evaluate a school's tracking practices and policies when selecting a school. However, the relationship between tracking and achievement is complex and difficult to discern. Parents seldom have access to data on a school's tracking practices and policies. This limits their ability to evaluate a school's instructional program. Yet tracking [or ability group] information is critical to an informed decision about schools, and educators have a responsibility to make it available to the public. The more aware parents are of the differential impact of tracking across schools, the more capable they will be of making judicious decisions about their child's schooling. (p. 819)

Even though she was arguing within the context of the traditional public school and Catholic sectors, her argument is consistent with the evidence provided in this paper. Some ability grouping arrangements (distribution of students, class size, and teacher certification) do differ between charter and traditional public schools. Although the multivariate relationships do not reveal significant differences across sector, it is important within the current context of choice options for parents to have more information about the organizational and instructional conditions within schools—public, private, charter, etc.

Our findings also complicate the theoretical backdrop of market and institutional theory for predicting whether charter schools are more innovative in their instructional practices and whether such innovation leads to greater gains in student achievement. On the one hand, charter schools appear to be grouping students for instruction in ways that differ from traditional public schools. Moreover, even though charter schools hire a greater percentage of teachers with temporary certification, this does not appear to inhibit student learning. Further research into what is going on with such hires (e.g., different substantive qualifications, training, professional development) is warranted to further examine how the context of learning differs across school sectors and ability group levels.5

On the other hand, charter schools, like traditional public schools, group students for instruction, which is consistent with institutional theory. Even innovative charter schools have classrooms and grade levels, and use ability grouping to meet the educational needs of students, whether successfully or not (Berends et al., 2010; Preston et al., 2012). Moreover, evidence presented here, consistent with prior research, reveals increasing inequality in achievement gains between students placed in high- and low-ability groups and between mixed- and low-ability groups. This overall relationship of ability grouping to achievement gains does not statistically differ between charter and traditional public schools, further evidence that charter schools are no more innovative than traditional public schools in the ability grouping processes related to learning mathematics.

Perhaps the picture is more complicated than market theory and institutional theory would predict. Future research should look further into the instructional processes that occur across ability groups and sectors to shed light on these theoretical perspectives, as well as empirically grounded theories of what is going on in schools and classrooms. Instructional variation among ability groups has been examined as one of the key mechanisms for explaining increases in achievement gaps between groups. For example, studies have shown that high-ability group students experience more challenging curricula, move at a quicker pace, and have more experienced teachers (and teachers who compete for high-ability group teaching assignments). By contrast, students in low groups experience low-quality instruction, such as worksheets, an overemphasis on basic skills, fragmented instruction, and less experienced teachers (for reviews of this research, see Oakes et al., 1992; Gamoran, 2004). In his review of international research on ability grouping and tracking, Gamoran (2010, p. 224) summarizes, "Ultimately, how students are arranged matters less than the instruction they encounter, so bringing together research on tracking with research on teaching offers the most useful way to continue to shed light on this topic of continuing interest." Though beyond the scope of the current paper, which focuses on the distribution of ability grouping and its relationship to student achievement in charter and traditional public schools, further research on instruction as a mediating variable is part of our ongoing research agenda.

Although additional research is warranted, the descriptive and multivariate analyses to date that attempt to open up the black box of charter schools and examine innovative practices have shown that charter schools are no more innovative than traditional public schools, a finding consistent with institutional theory and inconsistent with market theory (see also Berends et al., 2010; Lubienski, 2003; Preston et al., 2012).


1. The formula for the Overall Difference Index was: 0.48*(socioeconomic difference) + 0.48*(racial/ethnic composition difference) + 0.04*(school size difference). The socioeconomic difference was calculated as the absolute value of the difference in the percentage of students qualifying for free and reduced lunch in a given matched school pair, school A (the charter school) and school B (the traditional public school). The racial/ethnic composition difference was calculated similarly, by summing the absolute differences in the percentages of American Indian, Asian, Hispanic, Black, and White students between matched schools. Finally, the school size difference was calculated by taking the absolute value of the difference in students per grade between school A and school B, dividing by the number of students per grade in school A (again, the charter school in the pair), and multiplying the result by 100. Matches reflect schools with the smallest index scores and the greatest tested-grade overlap within the closest geographical distance.
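As a concrete illustration, the weighted index in this note can be expressed as a short function. This sketch is not the authors' code; the function name, data layout, and example figures are hypothetical, and it simply reproduces the arithmetic described above.

```python
def difference_index(charter, tps):
    """Overall Difference Index for a matched school pair.

    `charter` and `tps` are dicts (a hypothetical layout) with:
      frl        -- percentage of students on free/reduced lunch
      race       -- dict of percentages by racial/ethnic category
      per_grade  -- number of students per grade
    Weights (0.48, 0.48, 0.04) follow the formula in note 1.
    """
    # Socioeconomic component: absolute difference in % free/reduced lunch.
    ses = abs(charter["frl"] - tps["frl"])

    # Racial/ethnic component: summed absolute differences across categories.
    race = sum(abs(charter["race"][g] - tps["race"][g]) for g in charter["race"])

    # Size component: absolute difference in students per grade,
    # expressed as a percentage of the charter school's per-grade size.
    size = abs(charter["per_grade"] - tps["per_grade"]) / charter["per_grade"] * 100

    return 0.48 * ses + 0.48 * race + 0.04 * size

# Hypothetical pair; lower index scores indicate a closer match.
a = {"frl": 55.0, "race": {"Black": 40.0, "White": 35.0, "Hispanic": 25.0}, "per_grade": 60}
b = {"frl": 50.0, "race": {"Black": 45.0, "White": 30.0, "Hispanic": 25.0}, "per_grade": 75}
print(round(difference_index(a, b), 2))  # → 8.2
```

Here the components are 5 (socioeconomic), 10 (racial/ethnic), and 25 (size), yielding 0.48*5 + 0.48*10 + 0.04*25 = 8.2; pairs would be ranked by this score when selecting matches.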

2. There are 61 matches in our sample. Because grade configurations differ between charter and traditional public schools, some charter schools required more than one match to cover all of their grade levels. For instance, a K–8 charter could be matched to both an elementary (K–5) and a middle (6–8) traditional public school. Some traditional public schools were also used as matches for more than one charter school. Nearly 20% of these matches are located within 5 miles of each other; about 45% are within 10 miles; and approximately 79% are within 15 miles of one another.

3. Specifically, teachers were asked, “What is the achievement level of most of the students in the target class, compared to national norms?” Response options included “high achievement levels,” “average achievement levels,” “low achievement levels,” and “mixed achievement levels.” A small fraction also reported “don’t know,” which could indicate that the classroom is not grouped at all. We control for these “don’t know” classes to preserve the sample size but do not report their coefficients because their interpretation is uncertain.

4. We ran the models with different reference groups for grade-level configurations. Students in K–8 schools had significantly higher gains than those in elementary schools (as shown in Table 4) and those in K–12 schools, but not than those in middle schools.

5. In an attempt to tease out some of the differences between teachers with and without temporary certification, we compared the two groups on educational background (highest degree held) and the number of formal math courses taken at the undergraduate or graduate level. There were no significant differences between the two groups on these dimensions. We also compared the two groups within and between charter public schools (CPS) and traditional public schools (TPS). We found no significant differences between CPS and TPS for teachers with temporary certification. However, among teachers with professional (non-temporary) certification, those in TPS had, on average, higher educational attainment than those in charter schools: about 71% of professionally certified teachers in TPS held master’s degrees, compared to about 37% in CPS. Future work should analyze these relationships further, taking into account professional development and other training dimensions that may benefit teachers and may be unique to the environment and personnel of charter schools.


Abdulkadiroglu, A., Angrist, J. D., Cohodes, S. R., Dynarski, S. M., Fullerton, J. B., Kane, T. J., & Pathak, P. A. (2009). Informing the debate: Comparing Boston’s charter, pilot and traditional schools. Boston, MA: The Boston Foundation.

Abdulkadiroglu, A., Angrist, J. D., Dynarski, S. M., Kane, T. J., & Pathak, P. A. (2011). Accountability and flexibility in public schools: Evidence from Boston’s charters and pilots. Quarterly Journal of Economics, 126, 699–748.

Angrist, J. D., Cohodes, S. R., Dynarski, S. M., Fullerton, J. B., Kane, T. J., Pathak, P. A., & Walters, C. R. (2011). Student achievement in Massachusetts' charter schools. Boston, MA: Center for Education Policy Research at Harvard University.

Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 27–65.

Berends, M. (1995). Educational stratification and students' social bonding to school. British Journal of Sociology of Education, 16(3), 327–351.

Berends, M. (2015). Sociology and school choice: What we know after two decades of charter schools. Annual Review of Sociology, 41(15), 1–22.

Berends, M., Goldring, E., Stein, M., & Cravens, X. (2010). Instructional conditions in charter schools and students’ mathematics achievement gains. American Journal of Education, 116(3), 303–335.

Berends, M., Springer, M. G., Ballou, D., & Walberg, H. J. (Eds.). (2009). Handbook of research on school choice. New York: Routledge.

Betts, J. R., Hill, P., Brewer, D. J., Bryk, A., Goldhaber, D., Hamilton, L., Henig, J. R., Loeb, S., & McEwan, P. (2006). Key issues in studying charter schools and achievement: A review and suggestions for national guidelines. Seattle, WA: Charter School Achievement Consensus Panel, National Charter School Research Project, Center on Reinventing Public Education.

Betts, J. R., & Shkolnik, J. L. (2000). The effects of ability grouping on student achievement and resource allocation in secondary schools. Economics of Education Review, 19, 1–15.

Betts, J. R., & Tang, Y. E. (2014). A meta-analysis of the literature on the effect of charter schools on student achievement. Seattle, WA: National Charter School Research Project, Center on Reinventing Public Education, University of Washington Bothell.

Bidwell, C. E., & Dreeben, R. (2006). Public and private education: Conceptualizing the distinction. In M. T. Hallinan (Ed.), School sector and student outcomes (pp. 9–37). Notre Dame, IN: University of Notre Dame Press.

Bidwell, C. E., & Kasarda, J. D. (1980). Conceptualizing and measuring the effects of school and schooling. American Journal of Education, 88, 401–430.

Bifulco, R., & Bulkley, K. (2015). Charter schools. In H. F. Ladd & M. E. Goertz (Eds.), Handbook of research on education finance and policy (pp. 423–443). New York: Routledge.

Bifulco, R., & Ladd, H. F. (2006). The impacts of charter schools on student achievement: Evidence from North Carolina. Education Finance and Policy, 1(1), 50–90.

Booker, K., Sass, T., Gill, B., & Zimmer, R. (2011). The effects of charter high schools on educational attainment. Journal of Labor Economics, 29, 377–415.

Bryk, A. S., Gomez, L., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.

Bryk, A. S., Lee, V., & Holland, P. (1993). Catholic schools and the common good. Cambridge, MA: Harvard University Press.

Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S. & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: The University of Chicago Press.

Byrnes, V., & Ruby, A. (2007). Comparing achievement between K–8 and middle schools: A large-scale empirical study. American Journal of Education, 114(1), 101–135.

Cannata, M., & Engel, M. (2012). Does charter status determine preferences? Comparing the hiring preferences of charter and traditional public school principals. Education Finance and Policy, 7(4), 455–488.

Cannata, M., & Peñaloza, R.V. (2012). Who are charter school teachers? Comparing teacher characteristics, job choices, and job preferences. Education Policy Analysis Archives, 20(29), 1–21.

Center for Research on Education Outcomes (CREDO). (2013). National Charter School Study, 2013. Stanford, CA: Stanford University.

Chubb, J. E., & Moe, T. M. (1990). Politics, markets and American schools. Washington, DC: Brookings Institution.

Clotfelter, C. T. (2004). After Brown: The rise and retreat of school desegregation. Princeton, NJ: Princeton University Press.

Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2010). Teacher credentials and student achievement in high school. The Journal of Human Resources, 45(3), 655–681.

Condron, D. J. (2008). An early start: Skill grouping and unequal reading gains in the elementary years. The Sociological Quarterly, 49, 363–394.

Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27(4), 724–750.

Cravens, X. C., Goldring, E., & Peñaloza, R. (2012). Leadership practice in the context of U.S. school choice reform. Leadership and Policy in Schools, 11(4), 452–476.

Darling-Hammond, L. (2007). Third annual Brown lecture in education research: The flat earth and education: How America's commitment to equity will determine our future. Educational Researcher, 36(6), 318–334.

Davis, D. H., & Raymond, M. E. (2012). Choice for studying choice: Assessing charter school effectiveness using two quasi-experimental methods. Economics of Education Review, 31(2), 225–236.

Dobbie, W., & Fryer, R. G. (2011). Are high quality schools enough to increase achievement among the poor? Evidence from the Harlem Children’s Zone. American Economic Journal: Applied Economics, 3(3), 158–187.

Elmore, R. F. (2007). School reform from the inside out: Policy, practice, and performance. Cambridge, MA: Harvard Education Press.

Figlio, D. N., & Page, M. E. (2002). School choice and the distributional effects of ability tracking: Does separation increase inequality? Journal of Urban Economics, 51, 497–514.

Finley, M. K. (1984). Teachers and tracking in a comprehensive high school. Sociology of Education, 57, 233–243.

Friedman, M. (1962). Capitalism and freedom. Chicago: Chicago University Press.

Furgeson, J., Gill, B., Haimson, J., Killewald, A., McCullough, M., Nichols-Barrer, I., Teh, B., Verbitsky-Savitz, N., Bowen, M., Demeritt, A., Hill, P., & Lake, R. (2012). Charter-school management organizations: Diverse strategies and diverse student impacts. Cambridge, MA: Mathematica Policy Research/Center on Reinventing Public Education.

Gamoran, A. (1989). Measuring curriculum differentiation. American Journal of Education, 97, 129–143.

Gamoran, A. (1996). Student achievement in public magnet, public comprehensive, and private city high schools. Educational Evaluation and Policy Analysis, 18(1), 1–18.

Gamoran, A. (2004). Classroom organization and instructional quality. In H. J. Walberg, A. J. Reynolds, & M. C. Wang (Eds.), Can unlike students learn together? Grade retention, tracking, and grouping (pp. 141–155). Greenwich, CT: Information Age.

Gamoran, A. (2010). Tracking and inequality: New directions for research and practice. In M. W. Apple, S. J. Ball, & L. A. Gandin (Eds.), The Routledge international handbook of the sociology of education (pp. 213–228). London: Routledge.

Gamoran, A., & Berends, M. (1987). The effects of stratification in secondary schools: Synthesis of survey and ethnographic research. Review of Educational Research, 57, 415–435.

Gamoran, A., & Mare, R. D. (1989). Secondary school tracking and educational inequality: Reinforcement, compensation, or neutrality? American Journal of Sociology, 94, 1146–1183.

Gamoran, A., Nystrand, M., Berends, M., & LePore, P. C. (1995). An organizational analysis of the effects of ability grouping. American Educational Research Journal, 32(4), 687–715.

Gill, B. P., Timpane, P. M., Ross, K. E., & Brewer, D. J. (2007). Rhetoric versus reality: What we know and what we need to know about vouchers and charter schools. Santa Monica, CA: RAND.

Gleason, P., Clark, M., Tuttle, C., & Dwoyer, E. (2010). The evaluation of charter school impacts. Washington, DC: U.S. Department of Education.

Gleason, P., Clark, M., & Tuttle, C. (2012). What factors explain variation in charter school impacts? Paper presented at the Society for Research on Educational Effectiveness (SREE), Washington, DC.

Goff, P. T., Mavrogordato, M., & Goldring, E. B. (2012). Instructional leadership in charter schools: Is there an organizational effect or are leadership practices the result of faculty characteristics and preferences? Leadership and Policy in Schools, 11(1), 1–25.

Goldhaber, D., & Brewer, D. (2000). Does teacher certification matter? High school teacher certification status and student achievement. Educational Evaluation and Policy Analysis, 22, 129–145.

Greenwald, R., Hedges, L. V., & Laine, R. D. (1996). The effect of school resources on student achievement. Review of Educational Research, 66(3), 361–396.

Hallinan, M. T. (1994). School differences in tracking effects on achievement. Social Forces, 72(3), 799–820.

Hallinan, M. T., Gamoran, A., Kubitschek, W., & Loveless, T. (2003). Stability and change in American education: Structure, process, and outcomes. New York: Eliot Werner Publications, Inc.

Hanushek, E. A. (1996). A more complete picture of school resource policies. Review of Educational Research, 66(3), 397–409.

Hanushek, E. A. (1999). Some findings from an independent investigation of the Tennessee STAR experiment and from other investigations of class size effects. Educational Evaluation and Policy Analysis, 21(2), 143–163.

Hanushek, E. A., Kain, J., Rivkin, S., & Branch, G. (2007). Charter school quality and parental decision-making with school choice. Journal of Public Economics, 91(5-6), 823–848.

Henig, J. R. (1995). Rethinking school choice: Limits of the market metaphor. Princeton, NJ: Princeton University Press.

Henig, J. R. (1999). School choice outcomes. In S. D. Sugarman & F. R. Kemerer (Eds.), School choice and social controversy (pp. 37–53). Washington, DC: Brookings Institution Press.

Hoffer, T. B. (1992). Middle school ability grouping and student achievement in science and mathematics. Educational Evaluation and Policy Analysis, 14(3), 205–227.

Holmes, G. M., DeSimone, J., & Rupp, N. G. (2003). Does school choice increase school quality? Cambridge, MA: National Bureau of Economic Research.

Hoxby, C. M., Murarka, S., & Kang, J. (2009). How New York City’s charter schools affect achievement. Cambridge, MA: New York City Charter Schools Evaluation Project.

Kelly, S. (2004a). Are teachers tracked? On what basis and with what consequences? Social Psychology of Education, 7, 55–72.

Kelly, S. (2004b). Do increased levels of parental involvement account for the social class difference in track placement? Social Science Research, 33, 626–659.

Kelly, S. (2009). The black-white gap in mathematics course taking. Sociology of Education, 82(1), 47–69.

Kelly, S., & Price, H. (2011). The correlates of tracking policy: Opportunity hoarding, status competition, or a technical-functional explanation? American Educational Research Journal, 48(3), 560–585.

Kerckhoff, A. C. (1986). Effects of ability grouping in British secondary schools. American Sociological Review, 51(6), 842–858.

Kieffer, M. J. (2013). Development of reading and mathematics skills in early adolescence: Do K–8 public schools make a difference? Journal of Research on Educational Effectiveness, 6(4), 361–379.

Kingsbury, G. (2003). A long-term study of the stability of item parameter estimates. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

Le, V., Kirby, S. N., Barney, H., Setodji, C. M., & Gershwin, D. (2006). School readiness, full-day kindergarten, and student achievement: An empirical investigation. Santa Monica, CA: RAND.

Lleras, C., & Rangel, C. (2009). Ability grouping practices in elementary school and African American/Hispanic achievement. American Journal of Education, 115, 279–304.

Lubienski, C. (2003). Innovation in education markets: Theory and evidence on the impact of competition and choice in charter schools. American Educational Research Journal, 40(2), 395–443.

Lucas, S. R. (1999). Tracking inequality: Stratification and mobility in American high schools. New York: Teachers College Press.

Lucas, S. R., & Berends, M. (2002). Socioeconomic diversity, correlated achievement, and de facto tracking. Sociology of Education, 75, 328–348.

Lucas, S. R., & Gamoran, A. (2001). Track assignment and the black-white test score gap: Divergent and convergent evidence from 1980 and 1990 sophomores. Invited paper presented at the Brookings Institution, Washington, DC.

Meyer, J. W. (2010). World society, institutional theories, and the actor. Annual Review of Sociology, 36, 1–20.

Meyer, J. W., & Ramirez, F. (2000). The world institutionalization of education. In J. Schriewer (Ed.), Discourse formation in comparative education (pp. 111–132). Frankfurt: Peter Lang.

Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83, 340–363.

Meyer, J. W., & Rowan, B. (1978). The structure of educational organizations. In M. W. Meyer & Associates (Eds.), Environments and organizations (pp. 78–109). San Francisco: Jossey-Bass.

Mickelson, R. A. (2001). Subverting Swann: First- and second-generation segregation in Charlotte-Mecklenburg schools. American Educational Research Journal, 38, 215–252.

Milesi, C., & Gamoran, A. (2006). Effects of class size and instruction on kindergarten achievement. Educational Evaluation and Policy Analysis, 28(4), 287–313.

National Alliance for Public Charter Schools. (2015). Overview of public charter schools. Retrieved from http://www.publiccharters.org/dashboard/schools

Northwest Evaluation Association. (2008). RIT scale norms. Lake Oswego, OR: Northwest Evaluation Association.

Northwest Evaluation Association. (2010). Technical manual for measures of academic progress and measures of academic progress for primary grades. Lake Oswego, OR: Northwest Evaluation Association.

Nye, B., Hedges, L. V., & Konstantopoulos, S. (2000). The effects of small classes on academic achievement: The results of the Tennessee class size experiment. American Educational Research Journal, 37(1), 123–151.

Nye, B., Hedges, L.V., & Konstantopoulos, S. (2002). Do low-achieving students benefit more from small classes? Evidence from the Tennessee class size experiment. Educational Evaluation and Policy Analysis, 24(3), 201–217.

Oakes, J. (2005). Keeping track: How schools structure inequality (2nd ed.). New Haven, CT: Yale University Press.

Oakes, J., Gamoran, A., & Page, R. N. (1992). Curriculum differentiation: Opportunities, outcomes, and meanings. In P. W. Jackson (Ed.), Handbook of research on curriculum (pp. 570–608). New York: Macmillan.

Orfield, G., & Frankenberg, E. (2013). Educational delusions? Why choice can deepen inequality and how to make schools fair. Berkeley, CA: University of California Press.

Podgursky, M. (2008). Teams versus bureaucracies: Personnel policy, wage setting, and teacher quality in traditional public, charter, and private schools. In M. Berends, M. G. Springer, & H. J. Walberg (Eds.), Charter school outcomes (pp. 61–79). New York: Taylor and Francis.

Polikoff, M. S., & Porter, A. C. (2014). Instructional alignment as a measure of teaching quality. Educational Evaluation and Policy Analysis, 36(4), 399–416.

Porter, A. C. (2002). Measuring the content of instruction: Uses in research and practice. Educational Researcher, 31(7), 3–14.

Powell, W. W., & DiMaggio, P. J. (1991). The new institutionalism in organizational analysis. Chicago, IL: The University of Chicago Press.

Preston, C., Goldring, E., Berends, M., & Cannata, M. (2012). School innovation in district context: Comparing traditional public schools and charter schools. Economics of Education Review, 31(2), 318–330.

Rockoff, J. E. & Lockwood, B. B. (2010). Stuck in the middle: Impacts of grade configuration in public schools. Journal of Public Economics, 94, 1051–1061.

Sass, T. R. (2006). Charter schools and student achievement in Florida. Education Finance and Policy, 1(1), 91–122.

Schwartz, A. E., Stiefel, L., Rubenstein, R., & Zabel, J. (2011). The path not taken: How does school organization affect eighth-grade achievement? Educational Evaluation and Policy Analysis, 33(3), 293–317.

Scott, W. R. (2008). Approaching adulthood: The maturing of institutional theory. Theory and Society, 37, 427–442.

Scott, W. R., & Davis, G. F. (2007). Organizations & organizing: Rational, natural and open system perspectives. Englewood Cliffs, NJ: Prentice Hall.

Scott, W. R., & Meyer, J. W. (1994). Institutional environments and organizations: Structural complexity and individualism. Thousand Oaks, CA: Sage.

Slavin, R. E. (1987). Ability grouping and achievement in elementary schools: A best-evidence synthesis. Review of Educational Research, 57, 293–336.

Slavin, R. E. (1990). Achievement effects of ability grouping in secondary schools: A best-evidence synthesis. Review of Educational Research, 60(3), 471–499.

Stecher, B. M., McCaffrey, D. F., & Bugliari, D. (2003). The relationship between exposure to class size reduction and student achievement in California. Education Policy Analysis Archives, 11(40). Retrieved from http://epaa.asu.edu/ojs/article/view/268

Tach, L. M., & Farkas, G. (2006). Learning-related behaviors, cognitive skills, and ability grouping when school begins. Social Science Research, 35, 1048–1079.

Teasley, B. (2009). Charter school outcomes. In M. Berends, M. G. Springer, D. Ballou, & H. J. Walberg (Eds.), Handbook of research on school choice (pp. 209–226). New York: Routledge.

Walberg, H. J., & Bast, J. (2003). Education and capitalism: How overcoming our fear of markets and economics can improve America’s Schools. Stanford, CA: Hoover Institution Press.

Zimmer, R., & Buddin, R. (2006). Charter school performance in urban districts. Journal of Urban Economics, 60(2), 307–326.

Zimmer, R., & Buddin, R. (2007). Getting inside the black box: Examining how the operation of charter schools affect performance. Peabody Journal of Education, 82(2–3), 231–273.

Zimmer, R., Gill, B., Booker, K., Lavertu, S., Sass, T. R., & Witte, J. (2009). Charter schools in eight states: Effects on achievement, attainment, integration, and competition. Santa Monica, CA: RAND.

Zimmer, R., Gill, B., Booker, K., Lavertu, S., Sass, T. R., & Witte, J. (2012). Examining charter school achievement effects across seven states. Economics of Education Review, 31(2), 213–224.

Cite This Article as: Teachers College Record Volume 118 Number 11, 2016, p. 1-38
http://www.tcrecord.org ID Number: 21636

About the Author
  • Mark Berends
    University of Notre Dame
    E-mail Author
    MARK BERENDS, PhD, is a professor of sociology at the University of Notre Dame and director of the Center for Research on Educational Opportunity (CREO). His areas of expertise are the sociology of education, research methods, school effects on student achievement, and educational equity. Throughout his research career, Professor Berends has focused on how school organization and classroom instruction are related to student achievement, with special attention to disadvantaged students. Within this agenda, he has applied a variety of quantitative and qualitative methods to understanding the effect of school reforms on teachers and students. Professor Berends serves on numerous editorial boards, technical panels, and policy forums; he is currently co-editor of the American Educational Research Journal and recent editor of Educational Evaluation and Policy Analysis; a fellow of the American Educational Research Association; current (and former) vice president of the American Educational Research Association's Division L, Educational Policy and Politics; and the AERA Program Chair for the 2014 annual meeting. His books include Leading with Data: Pathways to Improve Your School (Corwin, 2009), the Handbook of Research on School Choice (Routledge, 2009), and School Choice and School Improvement (Harvard Education Press, 2011).
  • Kristi Donaldson
    University of Notre Dame
    E-mail Author
    KRISTI DONALDSON is a Ph.D. candidate in sociology and a research assistant in the Center for Research on Educational Opportunity at the University of Notre Dame. Her research interests include the expansion of advanced programs and courses; equity, access, and persistence in such programs; and the racial-ethnic stratification of students within and between schools.