Comment on "Schools and Inequality: A Multilevel Analysis of Coleman's Equality of Educational Opportunity Data"


by Benjamin Domingue, Susan Thomas, Ruhan Circi Kizil & Gregory Camilli - September 16, 2011

Borman and Dowling's (2010) Schools and Inequality is a re-analysis of reading achievement data originally collected for the Coleman Report (Coleman et al., 1966) using multilevel modeling techniques. In this review, we suggest that there are relatively small differences between Borman and Dowling and the Coleman Report when it comes to how and why schools impact student achievement. However, Borman and Dowling are able to use modern analytic techniques to show a substantial difference in how well schools serve different populations, an important result. We also situate the findings of Borman and Dowling with respect to trends in the racial achievement gap and the literature on school integration.


The Equality of Educational Opportunity study (EEO), or “Coleman Report” (Coleman et al., 1966), remains a highly influential document in education research. Borman and Dowling (2010) conducted a partial reanalysis of the EEO data using multilevel modeling techniques that were not available in 1966, intending to reexamine the fundamental finding “that a student’s family background is far more important than school social characteristics and school resources for understanding student outcomes” (Borman & Dowling, p. 1202). They appear to offer evidence that the social context of a school is a stronger predictor of student achievement when compared to family background than originally suggested. However, we argue that the EEO conclusions are reframed, rather than substantively altered, because there is nothing in the reanalysis to alter the conclusion that schools have little intentional effect on student learning in terms of their inputs.


Borman and Dowling wrote that, “even after adjusting for students’ family background, a large proportion of the variation among true school means is related to differences that are explained by school characteristics” (p. 1238). Analytically, this statement is framed by first partitioning the total variance in reading scores into between- and within-school components, and then examining how, and how much of, the between-school variance can be explained. In step 1, variables measuring student socio-economic status (SES) explained about 42% of the between-school variance. Figure 1 illustrates the subsequent steps in the analysis, which consist of adding blocks of predictors. The vast majority of the between-school variance beyond the individual SES variables is explained in step 2, when school context variables are added (percent African American, school mean family resources, and parental education). The addition of “intentional services” in steps 3-5, as measured by curriculum, facilities, school resources, and teacher education, explained only about two percent of the between-school variance beyond step 2. Thus, nearly all of the between-school variance is explained by family and neighborhood characteristics rather than by intentional school inputs.
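In multilevel terms, the proportion of between-school variance explained is obtained by comparing the school-level variance component (τ) across nested models. The following is a minimal sketch, with hypothetical τ values chosen only to reproduce the 42% figure quoted above; it is not drawn from the paper's tables.

```python
# Hypothetical variance components for illustration only; the paper reports
# proportions, not these raw values.
tau_unconditional = 100.0   # between-school variance with no predictors
tau_after_ses = 58.0        # between-school variance after adding student SES

# Proportion of between-school variance explained by student SES
pct_explained = 1 - tau_after_ses / tau_unconditional
print(round(pct_explained, 2))  # 0.42, matching the figure quoted in the text
```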



Figure 1. The percent increase in between-school variance for Models 2-5 relative to Model 1 is shown. Note that even though the scale starts at 40% rather than 0%, there are only slight increases after the variables in Model 2 are introduced.



This result was clearly recognized by Coleman et al.: "Some considerable part of the school-to-school variation … is attributable to differences in the composition of the student body in different schools, and not to differences in school effectiveness…." (p. 293). Furthermore, Coleman et al. observed that, “comparison of school-to-school variations in achievement at the beginning of grade 1 with later years indicates that only a small part of it is the result of school factors, in contrast to family background differences between communities” (p. 297). From this perspective, it is misleading to conclude that “schools matter” without the substantial qualification that individual schools have little leverage over the primary factors that explain school differences. Borman and Dowling do acknowledge that “many of the traditional production function measures of school features may be ineffectual or irrelevant for understanding how school social context matters” (2010, p. 1239), but there does not appear to be a substantial difference between their findings and many of the conclusions of Coleman et al. in this respect.


On the other hand, Borman and Dowling’s multilevel modeling (MLM) analysis offers evidence that schools are differentially able to meet the needs of students with different racial backgrounds and family resources. This evidence can be given additional context using other information. Consider the standard deviation of the between-school effects for Black students (2.41 points in Borman and Dowling’s Model 3). This can be interpreted as variation between schools that is not explained by either context effects or student SES variables. Taking advantage of the assumed normal distribution of the random effects, this implies a roughly 3-point gap between schools at the 25th and 75th percentiles of this distribution. Compared to the overall reading standard deviation (s = 13) on the EEO verbal test, this corresponds to an effect size of 0.23.1 This effect size can be interpreted relative to trends obtained from the National Assessment of Educational Progress (NAEP). In 1971, there was a 39-point gap between White and Black 13-year-olds on the NAEP Reading Test (Rampey, Dion, & Donahue, 2009). By 2008, this had fallen to a 21-point gap. Compared to the 1999 standard deviation of the NAEP Reading long-term trend scale for 13-year-olds, which was 39 (Allen, McClellan, & Stoeckel, 2005), this reduction corresponds to an effect size of 0.46. Thus, the gain from moving a student from a school at the 25th percentile to one at the 75th percentile of the school effectiveness distribution (in 1966) is equivalent to about half of the reduction in the achievement gap that occurred between 1971 and 2008. Although this may be an overestimate, their analysis suggests that some schools possess impressive capacities for alleviating racial differences in achievement.
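The effect-size arithmetic above can be reproduced in a few lines. This sketch uses only the figures quoted in the text; the text rounds the interquartile gap to 3 points, which yields the reported 0.23.

```python
from statistics import NormalDist

school_sd = 2.41    # SD of between-school effects for Black students (Model 3)
eeo_sd = 13.0       # overall SD of the EEO verbal test

# Gap between schools at the 25th and 75th percentiles of a normal distribution
z_75 = NormalDist().inv_cdf(0.75)        # ~0.674
iqr_gap = 2 * z_75 * school_sd           # ~3.25 points
school_effect = iqr_gap / eeo_sd         # ~0.25 (0.23 with the gap rounded to 3)

# Reduction in the NAEP reading gap for 13-year-olds, as an effect size
naep_effect = (39 - 21) / 39.0           # ~0.46

print(round(school_effect / naep_effect, 2))  # roughly half
```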


Wortman’s (1983) meta-analysis of the effects of school desegregation on Black students offers additional context for this effect. Segregated schools in the core studies (the highest-quality studies in the meta-analysis) had an average of 82% Black students, while desegregated schools had 15% Black students. After accounting for pretest differences, these schools showed an effect size of 0.14 in reading. Prorating the effect for African American students to this level of school composition results in an effect size of roughly 0.5 relative to the overall standard deviation of the EEO reading exam.2 The substantial difference between the effect sizes estimated from the observational EEO data (0.5 or 0.23) and the meta-analysis (0.14) suggests that drawing causal inferences from the EEO data may not be possible.3
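The prorating described in note 2 is itself a one-line computation; the following sketch uses only the figures quoted in the text and notes.

```python
race_gap_coef = -9.66              # % Black coefficient, Model 2, Table 4
composition_spread = 0.82 - 0.15   # 82% vs. 15% Black in Wortman's core studies
eeo_sd = 13.0                      # overall SD of the EEO reading exam

# Prorate the composition coefficient to the meta-analytic spread,
# expressed relative to the overall EEO standard deviation
prorated_effect = abs(race_gap_coef) * composition_spread / eeo_sd
print(round(prorated_effect, 2))   # ~0.50, versus 0.14 from the meta-analysis
```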

The above discrepancy highlights that causal claims must be made quite carefully when based upon observational data. At a minimum, this requires the specification of a causal framework (Rubin, Stuart, & Zanutto, 2004; Reardon & Raudenbush, 2009). When the treatment is ill-defined, it becomes difficult to establish a reasonable control condition. Consider the effect of integration. Are we interested in the effect of busing a student from an impoverished neighborhood to a more integrated school, or the effect of a student attending the more integrated school while also having their family move to the school’s neighborhood? The school part of the treatment is the same, yet we would expect radically different effects from the two treatments.


This leads to an important issue beyond the scope of Borman and Dowling’s analysis. We argue that the claim that schools matter is unavoidably a claim about an outcome relative to a control condition. In this regard, the claim that reducing a school’s percentage of Black students would lead to higher school achievement is too poorly specified to be useful. The difficulty lies in the nature of school composition variables: are they school effects or neighborhood effects? It is theoretically possible to integrate schools, but integrating neighborhoods creates a metaphysical counterfactual.4 This is the same conceptual issue that defeats so many interventions aimed at the achievement gap: the gap exists within a community, not a school, and closing the achievement gap would be tantamount to closing the housing gap, the crime gap, and so on. It is important to emphasize that while placing a student in a different school is a plausible counterfactual, placing a student in a different neighborhood is beyond the realm of current education policy.


Borman and Dowling remind us that the social composition of a school is relevant beyond a student’s individual social background and that certain school characteristics differentially affect students. They also demonstrate that schools differ considerably in their capacities for overcoming pre-existing differences between students. We heartily endorse the notions that schools are powerful institutions in the lives of their students and that understanding how “schools matter” requires attention to the context in which those schools are situated.


Notes


1. Using the within-school standard deviation of 10.3 leads to an estimated effect size of 0.29, but we work with the more conservative figure.


2. To obtain an equivalent result for the Borman and Dowling analysis, the -9.66 race-gap coefficient from Model 2 in Table 4 (estimated for % Black students) can be multiplied by 67% (the spread of percentages in the meta-analysis).


3. It may also indicate that comparing data from the period of integration to the EEO data (collected prior to substantial integration) is not feasible.


4. We do grant that this can be done on a small scale (Ludwig, Ladd, & Duncan, 2001), but bringing such an idea to scale is another matter entirely.


References


Allen, N.L., McClellan, C.A., & Stoeckel, J.J. (2005, April). NAEP 1999 Long-term trend technical analysis report: Three decades of student performance. (NCES 2005–484). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, Washington, D.C.


Borman, G.D., & Dowling, M. (2010). Schools and inequality: A multilevel analysis of Coleman’s Equality of Educational Opportunity data. Teachers College Record, 112(5), 1201-1246.


Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of Educational Opportunity. Washington, DC: US Department of Health, Education and Welfare.


Ludwig, J., Ladd, H., & Duncan, G. (2001). The effects of urban poverty on educational outcomes: Evidence from a randomized experiment. In W. G. Gale & J. R. Pack (Eds.), Brookings-Wharton papers on urban affairs (pp. 147–201). Washington, DC: Brookings.


Rampey, B.D., Dion, G.S., & Donahue, P.L. (2009). NAEP 2008 Trends in Academic Progress (NCES 2009–479). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, Washington, D.C.


Reardon, S.F., & Raudenbush, S.W. (2009). Assumptions of value-added models for estimating school effects. Educational Finance and Policy, 4(4), 492-519.


Rubin, D.B., Stuart, E.A., & Zanutto, E.L. (2004). A potential outcomes view of value-added assessment in education. Journal of Educational and Behavioral Statistics, 29(1), 103-116.


Wortman, P.M. (1983). School desegregation and black achievement: An integrative review. Washington, DC: National Institute of Education. (ERIC Report # ED 239 003).





Cite This Article as: Teachers College Record, Date Published: September 16, 2011
https://www.tcrecord.org ID Number: 16544
About the Author
  • Benjamin Domingue
    University of Colorado at Boulder
    E-mail Author
    BENJAMIN DOMINGUE is a doctoral student at the University of Colorado at Boulder studying research and evaluation methodology through the School of Education.
  • Susan Thomas
    University of Colorado at Boulder
    E-mail Author
    SUSAN THOMAS is a doctoral student at the University of Colorado at Boulder studying research and evaluation methodology through the School of Education.
  • Ruhan Circi Kizil
    University of Colorado at Boulder
    E-mail Author
    RUHAN CIRCI KIZIL is a candidate for the Ph.D. in the Research Methodology and Evaluation Program at the University of Colorado Boulder. Her research areas are educational measurement and testing, including test design and validation and design issues related to large-scale and international assessments. She recently published: Dogan, E., & Circi, R. (2010). A blind item-review process as a method to investigate invalid moderators of item difficulty in translated assessment items. IERI Monograph Series (3), 157-172.
  • Gregory Camilli
    University of Colorado at Boulder
    E-mail Author
    GREGORY CAMILLI is a Professor at the University of Colorado Boulder. In addition to teaching classes in statistics, measurement, and meta-analysis, his research areas have focused on test fairness, equity, preschool interventions, school factors in mathematics achievement, and multilevel IRT models. Recent publications include Camilli, G., & Welner, K.G. (2011). Is there a mismatch effect in law school, why might it arise, and what would it mean? 37 J.C. & U.L. 491; and Allington, R. L., McGill-Franzen, A., Camilli, G., Williams, L., Graff, J., Zeig, J., Zmach, C., & Nowak, R. (2010). Addressing summer reading setback among economically disadvantaged elementary students. Reading Psychology, 31(5), 411-427.
 