
Increasing Racial Isolation and Test Score Gaps in Mathematics: A 30-Year Perspective


by Mark Berends & Roberto V. Peñaloza - 2010

Background/Context: Although there has been progress in closing the test score gaps among student groups over past decades, that progress has stalled. Many researchers have speculated why the test score gaps closed between the early 1970s and the early 1990s, but only a few have been able to empirically study how changes in school factors and social background characteristics relate to that convergence. The main reason for this is the lack of data for multiple student cohorts—information necessary for the examination of such relationships.

Purpose/Objective/Research Question/Focus of Study: We analyzed nationally representative data from 1972, 1982, 1992, and 2004, examining the mathematics achievement of four high school senior cohorts, and several school and family background characteristics. We examined how changes in these measures (in terms of means and coefficients) relate to the black-white and Latino-white test score gaps and to changes in school minority composition.

Population/Participants/Subjects: For our analysis, we used the following nationally representative data sets, which are part of the Longitudinal Studies (LS) program at the National Center for Education Statistics (NCES): National Longitudinal Study of the High School Class of 1972 (NLS-72); High School and Beyond senior cohort of 1982 (HSB-82); National Education Longitudinal Study senior cohort of 1992 (NELS-92); and Educational Longitudinal Study senior cohort of 2004 (ELS-04).

Research Design: Conducting secondary data analyses of these nationally representative data, we estimated a series of regressions for each senior cohort, entering the race dummy variables to estimate the unadjusted predicted mathematics test score difference between black and white students and between Latino and white students. Next, we estimated a series of multilevel regressions for each cohort to analyze how trends in school and social background measures are related to trends in the black-white and Latino-white mathematics test score gaps. Finally, we used the pooled coefficients in the decomposition of the difference between the predicted means of white and black test scores.

Findings/Results: Our estimates revealed that between 1972 and 2004, increases in school segregation corresponded to significant increases in the black-white and Latino-white test score gaps, outweighing the positive changes in family background measures for these minority groups.

Conclusions/Recommendations: Understanding how our society can address these countervailing forces—the improving socioeconomic conditions for black and Latino families on the one hand, and the increasing racial isolation of these students in schools on the other—necessitates innovative ideas and experimentation.

It has become common to refer to the United States as increasingly racially diverse. Yet the implications of this diversity have yet to be fully realized, particularly in the world of education. On the face of it, schools appear to be remarkably integrated. In 2004, public school enrollment figures revealed that across the nation, schools were 57% white, 16% black, and 19% Latino. These statistics, however, mask growing racial isolation in many of America’s schools. In 2004, the average white student attended a school that was about 80% white; the average black student attended a school that was 34% white; and the average Latino student attended a school that was 28% white (National Center for Education Statistics 2006). Researchers have noted that such racial isolation has increased over the past two decades (Orfield and Frankenberg 2008; Orfield and Lee 2007). In part, this is due to the recent expansion of the Latino population in the United States and to the isolation of Latino and black families in segregated neighborhoods (see Blank 2001; Massey 2007).


What are the consequences of this increasing racial isolation and segregation for the achievement gaps between black and white students, and between Latino and white students? Because of a lack of data, it has been difficult to answer this question—that is, to tease out the net contributions of these school composition factors while accounting for changes in other factors, such as family background characteristics. Our aim in this article is to do just that: to identify the relative contributions of changes in school composition to the test score gaps over three decades.


Using national data available for four high school senior cohorts between 1972 and 2004, we empirically examined several school and family factors commonly related to black-white and Latino-white test score differences in mathematics. Our interest was in the following research questions: How did the test scores of blacks, Latinos, and whites change between the early 1970s and 2004? How did school minority composition change over this time period? To what extent were changes in school minority composition associated with the black-white and Latino-white test score gaps during this time period?


TEST SCORE GAPS AMONG BLACK, WHITE, AND LATINO STUDENT GROUPS


Data from the National Assessment of Educational Progress (NAEP) trend assessment reveal that, in terms of achievement proficiency in mathematics and reading, high school students in the United States were scoring about the same in 2004 as they were in the early 1970s. But these overall trends mask significant progress made by certain groups. Over the past 30 years, minority students have made substantial strides toward closing the minority-nonminority test score gap in both mathematics and reading. In 2004, black students scored 15 points higher (or 15 percentile points) on the NAEP mathematics test and about 25 points higher (or 20 percentile points) in reading than black students did in the early 1970s. Similarly, Latino students improved their achievement, gaining 12 percentile points on the NAEP mathematics test and 10 percentile points in reading. In the late 1990s, black and Latino students’ reading gains did not continue at the pace of the 1970s and 1980s. Even so, minority students are still performing markedly higher than similar students did over 25 years ago (see Porter 2005).


Although many researchers have speculated as to why the test score gaps closed between the early 1970s and the early 1990s (e.g., Ferguson 1998; Koretz 1986; Neal 2006; Porter 2005), only a few have been able to empirically study how changes in school factors and social background characteristics relate to that convergence (Berends et al. 2005; Cook and Evans 2000; Grissmer et al. 1994; Grissmer, Flanagan, and Williamson 1998; Hedges and Nowell 1998). The main reason for this is the lack of data for multiple student cohorts—information necessary for the examination of such relationships.


In a review of extant research, for example, Grissmer, Flanagan, and Williamson (1998) assessed the school factors that may have contributed to the closing of the black-white and Latino-white test score gaps. Specifically, they considered what may have changed between the early 1970s and early 1990s, such as desegregation, secondary school tracking, changes in the curriculum, per-pupil expenditures, pupil-teacher ratios, teachers’ educational background and experience, and school violence. The researchers concluded that both social investment in the 1960s and 1970s (i.e., the civil rights movement and the War on Poverty programs) and the school-based changes of desegregation, secondary school tracking, and smaller class size were the likely factors that explain the test score convergence. Their argument remains speculative, however, because Grissmer, Flanagan, and Williamson were not able to conduct original empirical analyses of these factors.


Limiting their analyses to black-white test scores, Cook and Evans (2000) were specifically interested in whether changes in family characteristics, changes in school quality, or both were the influential factors in closing the gap. Using the NAEP trend assessment, their research focused not only on how changes in the mean levels of family and school characteristics were related to test score trends, but also on how changes in the effects (coefficients) of family and school measures on achievement were related to those trends. They found that only about 25% of the overall convergence in black-white test scores could be attributed to changing family and school characteristics, arguing that the remainder was due to changes within schools.


The Cook and Evans (2000) study has several strengths. First, this study made fewer assumptions than previous studies. For example, in contrast to Grissmer et al. (1994), Grissmer, Flanagan, and Williamson (1998), and Hedges and Nowell (1998, 1999), Cook and Evans examined tests that were stable over time. In addition, their methods allowed them to examine how changes in the relationships between their measures and student achievement differ over time, again in contrast to Grissmer and colleagues (1994, 1998), who assumed stability of these relationships. Finally, Cook and Evans extended the previous research on changes in families to include changes in school quality.


That said, Cook and Evans’ (2000) study also has its drawbacks. The researchers were limited to examining family background changes as measured by parent educational attainment; the NAEP lacks other family measures such as parent income, occupational status, and other family characteristics (Berends and Koretz 1996; Grissmer, Flanagan, and Williamson 1998). In addition, Cook and Evans’ measure of school quality fell short in assuming that “school quality is the effect that attending a given school has on student performance after controlling for the student’s observable characteristics” (732). Their analyses lacked direct measures of schools, how these school measures changed, and how these changes (both in means and coefficients) were associated with student test score gaps.


Thus, despite this important past research, questions remain about minority-nonminority achievement differences, specifically how changes in school minority composition correspond to achievement gaps over time. Our study addresses these questions, extending previous research with longitudinal data analyses to understand what happened between 1972 and 2004 and provide results about how specific family and school factors are related to student achievement trends.


This is the twofold strength of our study. First, we used data that include several direct measures of students’ family and school characteristics, measured consistently over time. Second, we relied on methods that allowed us to examine changes in these characteristics and in their relationships to student achievement. No prior study has comprehensively analyzed such information across nationally representative data for different cohorts of high school seniors using comparable achievement outcomes, with a specific focus on changes in school composition and the increasing racial isolation that has occurred over time.


DATA AND METHODS


For our analysis, we used the following nationally representative data sets, which are part of the Longitudinal Studies (LS) program at the National Center for Education Statistics (NCES): National Longitudinal Study of the High School Class of 1972 (NLS-72); High School and Beyond senior cohort of 1982 (HSB-82); National Education Longitudinal Study senior cohort of 1992 (NELS-92); and Educational Longitudinal Study senior cohort of 2004 (ELS-04).


In what follows, we discuss the data sets—hereafter referred to as LS data—as compared with the trend assessment of NAEP; examine the operationalization of the individual, family, and school measures analyzed across each set; and outline our methodological approach. (For more details about variable justification and operationalization, see Berends et al. 2005.)


NATIONAL LONGITUDINAL STUDY OF THE HIGH SCHOOL CLASS OF 1972 (NLS-72)


NLS-72 was designed to produce nationally representative data on a cohort of high school seniors who graduated in 1972. The base-year sample is a stratified, two-stage probability sample of students from all public and private schools in the United States, with schools as the first-stage units and students within schools as the second-stage units. The result is a nationally representative sample of 19,000 seniors in 1,061 high schools (Riccobono et al. 1981). NLS-72 provides student, school administrator, and test score data for measuring students’ academic achievement and individual, family, and school characteristics. With a final sample of 14,469 students in 875 schools, we examined student mathematics test scores, student questionnaires, and school information forms that provide details about each school.


HIGH SCHOOL AND BEYOND (HSB)


Similar to NLS-72, HSB is a two-stage stratified probability sample with schools as the first-stage units and students within schools as the second-stage units. In the first stage 1,100 schools were selected, and in the second stage, about 36 students were randomly selected in each school. Some types of schools were oversampled to ensure that adequate numbers of students were available in subpopulations of interest. HSB is unique in that it gathered student, school, and test score data on two high school grade levels (10th and 12th grades) in 1980 (HSB-80). The sophomore cohort was followed up two years later, when the students were seniors (HSB-82).


We analyzed about 26,000 students who were sophomores in HSB-80 and seniors in HSB-82. The follow-up sample retains the essential features of the base-year design: multistage, stratified, and clustered (see Jones et al. 1983). Although we previously analyzed the 1980 senior cohort (HSB-80), our descriptive and multivariate analyses of the effects of family and school measures on student achievement revealed no significant differences between the 1980 and 1982 senior cohorts (see Berends et al. 2005). For the sake of parsimony and presentation, we thus present the 1972, 1982, 1992, and 2004 comparisons when examining how the trends in the mathematics gap related to changes in family and school measures.


NATIONAL EDUCATION LONGITUDINAL STUDY (NELS)


NELS is a nationally representative data set that includes detailed information from students, schools, and parents (Ingels et al. 1993). The 1988 base-year NELS includes about 25,000 eighth-grade students in 1,035 schools. Students were followed up in the 10th grade (1990), 12th grade (1992), two years after high school (1994), and finally in 2000. These data contain extensive information about the achievement and school experiences of students prior to high school entry; school organization in middle and high school; students’ family and demographic characteristics; and students’ experiences beyond high school. In each of the first three waves of NELS, students were tested in various subject areas.


EDUCATIONAL LONGITUDINAL STUDY (ELS)


The ELS data comprise achievement scores and the educational perceptions and experiences of 10th-grade students in a national, clustered probability sample of 15,362 students in 752 public, Catholic, and other private schools that offered 10th grade in the spring of 2002. The base-year data collection of ELS is the first wave of a new longitudinal study of high school students; students were followed up in 2004, when they were seniors. The base-year study includes surveys of parents, teachers, school administrators, library media specialists, and 10th-grade students in the spring term of the 2001–2002 school year. In 2004, during the first follow-up of the study, those base-year students who remained in the sample schools were surveyed and assessed again. The sample was freshened in the follow-up with 171 seniors to keep it representative of 2004 high school seniors. Base-year nonparticipants (n = 756) were also given a new opportunity to participate in the follow-up.


The ELS 2002 sampling design is a two-stage cluster design with stratification. The sampling frame of schools, obtained from the Common Core of Data (CCD) and the Private School Survey (PSS), was first divided into over 350 strata; two or three schools were then selected from each stratum, with probability proportional to size. Finally, a random sample of students was selected from the roster of each selected school. Some types of schools (e.g., private schools) and students from particular racial-ethnic groups (e.g., Asians) were oversampled. For our analysis, we restricted the sample to students who met the following criteria: They completed the 2004 follow-up assessments; they were seniors in the spring of 2004 and enrolled in any of the base-year schools; and they had survey information on the measures examined. Some students known to be seniors but who had transferred to other schools—preventing us from knowing the characteristics of those schools—were excluded. Our final sample contained a total of 12,267 students in 740 schools.


NATIONAL ASSESSMENT OF EDUCATIONAL PROGRESS


When examining test score trends, we compared our estimates in the LS data sets with the NAEP trend assessment, which contains longitudinal information on the same set of test score items for nationally representative samples of students. Although NAEP covers the same items over time, its data lack critical information about individual, family, and school characteristics that is necessary to examine family and school-based explanations over time (see Berends and Koretz 1996). However, NAEP provides a useful benchmark to compare the test score trends in NLS-72, HSB-82, NELS-92, and ELS-04 (Green, Dugoni, and Ingels 1995).


DEPENDENT MEASURE: MATHEMATICS ACHIEVEMENT


The dependent variable in our models was the individual student mathematics test score (the reading, science, and social studies tests did not have items in common across the cohorts). We assumed these mathematics scores to be a function of individual, family, and school predictor variables that were directly comparable across the senior cohort data sets. The group differences that are the focus of this article are those between black and white students and between Latino and white students during their senior year of high school.


To more accurately measure the extent of group differences within each of the senior cohorts, we linked the mathematics tests over time and calibrated them to the same scale, so that it is as though students across cohorts had taken the same test (see Berends et al. 2005 for details on the linking procedures). This is a vital step because the tests were not identical across cohorts,1 and because mathematics tests are sensitive to school effects and subject to variation in scores across schools (Sørensen and Morgan 2000). Understanding mathematics achievement trends, and how family and school changes relate to them, therefore depends on placing the scores on a common scale, particularly for students from different racial-ethnic groups.


Although the equating, or linking, methods provide accurate measures of student scores throughout the proficiency distribution, it is important to remain aware that the tests did differ; they were not identical across the different cohorts. However, the tests were similar in structure and in the domains tested, and they did contain some common items to use for equating purposes. Moreover, research to date suggests that the tests across these cohorts were reliable and valid measures of students’ mathematics achievement in secondary school (see Berends et al. 2005; Koretz and Berends 2001; Rock et al. 1985; Rock and Pollack 1995). In addition, although we have not yet completed the linking to the 2004 mathematics test, NCES has a linked score that we can rely on in our analyses.
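The linking itself was IRT-based and is documented in Berends et al. (2005). Purely as an illustration of the general idea of placing scores from different test forms on a common scale via common items, a simple linear linking based on common-item means and standard deviations can be sketched as follows; the function and variable names are ours, and this is not the procedure the study actually used.

```python
# Conceptual sketch only: rescale a new test form onto a reference scale so that
# the common items match the reference mean and standard deviation. The study's
# actual linking was IRT-based (see Berends et al. 2005); names are illustrative.
import numpy as np

def linear_link(new_form_scores, common_items_new, common_items_ref):
    """Return new_form_scores expressed on the reference form's scale."""
    slope = np.std(common_items_ref) / np.std(common_items_new)
    intercept = np.mean(common_items_ref) - slope * np.mean(common_items_new)
    return slope * np.asarray(new_form_scores) + intercept
```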


SCHOOL AND FAMILY MEASURES


Our selection of variables was dictated by the need for measures defined comparably across the data sets for the senior cohorts (NLS-72, HSB-82, NELS-92, and ELS-04).


Because we focused on changes in minority composition, we used a measure comparable to those others have used to argue that schools have become more racially isolated over the past two decades (Orfield and Frankenberg 2008; Orfield and Lee 2007). Across the four cohorts, school administrators were asked about the percentage of various population groups that attended the school. Based on this information, we created a school-level measure of the percentage of black and Latino students who attended each school. We included these students in our school minority composition measure because they have been historically disadvantaged within the United States.


Other measures are individual characteristics (race-ethnicity and gender), family background (parents’ educational attainment, occupational status, and family income), and school characteristics (socioeconomic composition, sector, urban locale). (See Appendix A; for a more detailed description and comparison of measures, see Berends et al. 2005.)


METHODOLOGY


Methods to assess the effects of individual, family, and school measures over time need to factor in both changes in the characteristics of interest (means) and changes in the associations of these characteristics with achievement scores (coefficients) at different points in time. To decompose such effects, we relied on a technique widely used in labor economics, the Oaxaca decomposition (Oaxaca 1973; Cain 1986; Corcoran and Duncan 1979). Although attributed to Oaxaca, the technique was used earlier by sociologists (Duncan 1967, 1968) and has since been applied in sociological research (Cancio, Evans, and Maume 1996; Sayer, Bianchi, and Robinson 2004). It has been a primary tool for explaining differences in wages across groups in cross-sectional data (Cain 1986; Corcoran and Duncan 1979) and the time-series pattern of wages in repeated cross sections (Sahling and Smith 1983). Education researchers have applied it as well (Cook and Evans 2000; Goldhaber 1996; Gill and Michaels 1992). For example, as previously noted, Cook and Evans (2000) used the Oaxaca decomposition to investigate how changes in the mean differences and coefficients of family and school measures were related to the convergence of the black-white and Latino-white test score gaps. Our analyses aim to build on their findings with a similar approach.


We began by estimating a series of regressions for each senior cohort, entering the race dummy variables to estimate the unadjusted predicted mathematics test score difference between black and white students and between Latino and white students. Next, we estimated a series of multilevel regressions for each cohort to analyze how trends in school and social background measures were related to trends in the black-white and Latino-white mathematics test score gaps (Kreft and De Leeuw 1998; Raudenbush and Bryk 2002; Snijders and Bosker 1999).2 These regressions estimated the relationship of mathematics achievement to mother’s and father’s educational attainment, the higher of the mother’s or father’s occupational status (Duncan’s socioeconomic index [SEI]), the family income quintile dummies, and the minority and socioeconomic composition, sector, and urban locale of the school. Gender was also included as a covariate. Finally, we used the pooled coefficients in the decomposition of the difference between the predicted means of white and black test scores (Equation 1; see Cotton 1988). Again, the LS data allowed for this analysis over four time intervals.
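As a sketch of these two estimation steps, the following Python code (using statsmodels) fits an unadjusted race-gap regression and then a random-intercept multilevel model with students nested in schools. The data file and all column names are hypothetical placeholders, not the restricted-use LS variables.

```python
# Sketch of the estimation steps described above; file and column names are
# hypothetical placeholders rather than the actual restricted-use LS variables.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("seniors_cohort.csv")  # one senior cohort at a time

# Step 1: unadjusted gaps -- regress math scores on race-ethnicity dummies alone.
unadjusted = smf.ols("math ~ black + latino", data=students).fit()
print(unadjusted.params[["black", "latino"]])  # unadjusted black-white and Latino-white gaps

# Step 2: random-intercept multilevel model adding family and school measures,
# with students nested within schools.
adjusted = smf.mixedlm(
    "math ~ black + latino + female + mother_ed + father_ed + parent_sei"
    " + faminc_q1 + faminc_q2 + faminc_q4 + faminc_q5"
    " + school_pct_minority + school_mean_ses + private + urban + rural",
    data=students,
    groups=students["school_id"],
).fit()
print(adjusted.summary())
```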


By looking at the results of these decompositions, we can begin to understand how black students’ mathematics scores changed relative to those of whites over this 30-year span. Moreover, we can examine in which decade the most notable changes occurred. Mathematically, for each of these intervals, we employed the following decomposition:


[Equation 1 appears here as an image in the original.]
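A standard pooled Oaxaca-style decomposition of the form described in the text can be written as follows; the notation is ours and is offered as a sketch rather than a verbatim reproduction of Equation 1.

```latex
% Change in the white-black gap between Time 0 (1972) and Time 1 (2004), split into
% a component explained by changes in group means (weighted by the pooled Time 0
% coefficients) and an unexplained remainder. Notation is ours, not the original's.
\begin{aligned}
\underbrace{(\bar{Y}_{w1}-\bar{Y}_{b1})-(\bar{Y}_{w0}-\bar{Y}_{b0})}_{\text{change in the test score gap}}
  ={}& \underbrace{\hat{\beta}_{0}'\left[(\bar{X}_{w1}-\bar{X}_{b1})-(\bar{X}_{w0}-\bar{X}_{b0})\right]}_{\text{explained by changes in means}}
  \;+\; \underbrace{R}_{\text{unexplained}}
\end{aligned}
```

Here the Y-bar and X-bar terms denote group means of the test score and of the predictors, and β̂0 denotes the pooled 1972 coefficients; substituting the pooled 2004 coefficients in their place yields the 2004-weighted results shown in Tables 2 and 3.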


The explained component of this decomposition has two features of note. First, it weights the change in differences between white and black student means by the coefficient estimates from Time 0 (or 1972). Thus, the explained component represents the change in the test score gap that we would expect to see if the black and white students at Time 0 had the mean characteristics of black and white students at Time 1, holding everything else constant. The same applies when analyzing Latino and white students. The decomposition can also be calculated using the estimated coefficients from 2004 as weights, so we show the results from both the 1972 and 2004 estimations.


Second, we relied on the pooled student cohort coefficient estimates, as opposed to racial-ethnic-specific coefficient estimates within a cohort. Since black, Latino, and white students in a given cohort are not schooled in total isolation from one another, nor indeed from students of other races, they are not distinct populations but rather part of the same population. Thus, using the pooled coefficient estimates for each student cohort seemed more appropriate, since the pooled coefficient would be the estimate if race-ethnicity were not an issue (Cook and Evans 2000; Cotton 1988). This choice also avoids weighting the change in mean differences by either the black or the white student coefficient estimates, or estimating separate sets of coefficients for each group and then attempting to reconcile the two sets of results. (The results from the regression models appear in Appendix C.)


RESULTS


Before we examine the results of this decomposition, it is important to address the trends: that is, the trends in the black-white and Latino-white test score differences in the senior cohorts and their comparison with other national achievement trends in the NAEP, and the trends in family and school measures. From there, we decompose the effects of family and school characteristics on the black-white and Latino-white achievement gaps into changes in the levels of the family background measures and their effects on achievement across cohorts.


TEST SCORE DIFFERENCES AMONG RACIAL-ETHNIC GROUPS OVER TIME


Consistent with other national data, the NLS-72, HSB-82, NELS-92, and ELS-04 senior cohorts show that black and Latino students have made considerable achievement gains in narrowing the minority-nonminority test score gap. Figure 1 illustrates the estimates for the black-white convergence in mathematics. Note that the estimates for the four LS senior cohorts are plotted against those in the NAEP trend assessment, which is the strongest trend measure available in the United States and offers an important benchmark for the LS cohorts. Note, too, that the black-white difference is over a standard deviation (SD = 1.01) in the NLS-72 data but narrows by about 20% (SD = 0.81) in NELS-92 and remains stable through 2004 in ELS-04.


Similarly, in NAEP, the black-white difference in 1973 is over a standard deviation (SD = 1.14), narrowing by 22% (SD = 0.89) in 1996 and narrowing further (SD = 0.80) by 2004. Thus, the narrowing of the gap appears in both the NAEP and the LS data, showing consistent patterns overall.


Figure 1. Black-white mathematics differences (SD units) in the senior cohorts compared with the NAEP trend assessment




Over this same time period, Latino students also made gains in student achievement and in closing the gap with white students. Estimates for the Latino-white convergence in mathematics appear in Figure 2. The Latino-white gap is not quite as large as the black-white mathematics achievement gap, but it follows a similar pattern. For example, in the NLS-72 data, the Latino-white difference is 0.79 of a SD, which narrows by 23% to 0.61 of a SD in NELS. In NAEP, the Latino-white difference in 1973 was 0.94 of a SD, narrowing by 27% to 0.69 of a SD in 2004. As with the black-white differences, the steady reduction in the test score gaps between Latino and white students remains consistent between the LS and NAEP samples and is worthy of examination.


Figure 2. Latino-white mathematics differences (SD units) in the senior cohorts compared with the NAEP trend assessment


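The gaps in Figures 1 and 2 are expressed in standard deviation units. Under a simple normal approximation (our illustration, not the article's own conversion, which may rest on the empirical score distributions), an SD-unit gap can be translated into roughly the percentile-point differences cited elsewhere in the article:

```python
# Rough conversion of a standard-deviation gap into percentile points under a
# normal approximation; results can differ slightly from conversions based on
# the actual score distributions.
from scipy.stats import norm

def sd_gap_to_percentile_points(gap_in_sd):
    # Distance (in percentile points) between the 50th percentile and where a
    # mean that is gap_in_sd standard deviations lower would fall.
    return 100 * (norm.cdf(gap_in_sd) - 0.5)

print(round(sd_gap_to_percentile_points(0.80), 1))  # ~28.8 percentile points
print(round(sd_gap_to_percentile_points(0.61), 1))  # ~22.9 percentile points
```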


CHANGES IN SCHOOL MINORITY COMPOSITION


Examining changes in school minority composition in the data sets spanning 1972 to 2004, we found that students increasingly attended schools with higher proportions of black and Latino students. Specifically, in 1972, the average student attended a school in which the proportion of black and Latino students was 0.19; by 2004, that average had grown to 0.27.


Table 1. Racial-Ethnic Differences in School Minority Composition in LS Data, 1972–2004

|  | 1972 | 1982 | 1992 | 2004 | Change (2004−1972) |
| --- | --- | --- | --- | --- | --- |
| Proportion Minority Composition | 0.19 | 0.26 | 0.25 | 0.27 | +0.08 |
| Whites | 0.17 | 0.21 | 0.18 | 0.14 | -0.03 |
| Blacks | 0.36 | 0.37 | 0.42 | 0.60 | +0.24 |
| Latinos | 0.33 | 0.28 | 0.37 | 0.60 | +0.27 |
| B-W Difference | +0.19 | +0.16 | +0.24 | +0.46 | +0.27 |
| L-W Difference | +0.16 | +0.07 | +0.19 | +0.46 | +0.30 |


Breaking down the numbers by race, Table 1 shows that the average proportion of minority composition for schools attended by white students was 0.17 in 1972, compared with 0.36 for schools attended by black students. For white students, the number changed slightly to 0.14 in 2004, while for black students, the average increased to 0.60. Comparing minority composition in the typical schools between 1972 and 2004, we see that the difference between blacks and whites increased significantly, from 0.19 to 0.46.


Regarding Latino students over these time periods, Table 1 illustrates that the average Latino student in 1972 attended a school in which the proportion of minority students was 0.33, compared with the average Latino in 2004, who attended a school in which the proportion of minority students was 0.60—a 0.27 increase over these decades. In 1972, the average minority composition for schools attended by Latino students was 0.33, compared with 0.17 for white students. The Latino-white gap in minority composition increased markedly from 0.16 in 1972 to 0.46 in 2004, suggesting that changes in minority composition were unlikely to benefit Latino students.


If a high minority composition is viewed as a proxy for schools that have been historically underserved by the education system in terms of receiving high-quality resources, services, and instruction, then the increasing proportion of high-minority schools suggests a lack of progress for black and Latino students. Our results raise a further question: How do the changes in school minority composition relate specifically to black-white and Latino-white test score trends? It is to this important issue that we now turn our attention.


DECOMPOSING CHANGES IN THE BLACK-WHITE AND LATINO-WHITE GAPS IN MATHEMATICS


The methods we used allowed us to disentangle the changes that occurred for black and white students and Latino and white students between 1972 and 2004. We examined the changes in levels (means) of the individual, family background, and school measures. We then scaled these changes by the 1972 and 2004 regression coefficients to learn how school and family changes corresponded to changes in the test score gap between black and white students. We also examined the changes in the means of school minority composition (and other factors) and considered how they corresponded to mathematics achievement (as measured by the 1972 and 2004 coefficients).

Table 2. Decomposition of the Black-White Mathematics Score Gap to Changes in School Minority Composition, 1972–2004

| 1972–2004 | Δ (1972 coefficients as weights) | % (1972 coefficients as weights) | Δ (2004 coefficients as weights) | % (2004 coefficients as weights) |
| --- | --- | --- | --- | --- |
| **Individual and Family Measures Total** | -0.119 | 62.37 | -0.069 | 36.24 |
| Female | -0.015 | 7.73 | -0.007 | 3.68 |
| Family income | -0.021 | 11.14 | -0.004 | 2.29 |
| Parental education | -0.058 | 30.24 | -0.048 | 25 |
| Occupational status | -0.025 | 13.26 | -0.01 | 5.27 |
| **School Measures Total** | 0.156 | -81.95 | 0.146 | -76.51 |
| School mean SES | 0.011 | -5.93 | 0.025 | -13.25 |
| School percent minority | 0.119 | -62.5 | 0.116 | -60.57 |
| Private school | 0.01 | -4.99 | 0.004 | -2.2 |
| Urban school | 0.013 | -6.78 | 0 | -0.16 |
| Rural school | 0.003 | -1.75 | 0.001 | -0.33 |
| **Total Explained** | 0.037 | -19.57 | 0.077 | -40.28 |
| **Unexplained** | -0.228 | 119.57 | -0.268 | 140.28 |
| **Total Change** | -0.191 |  | -0.191 |  |


Table 3. Decomposition of the Latino-White Mathematics Score Gap to Changes in School Minority Composition, 1972–2004

| 1972–2004 | Δ (1972 coefficients as weights) | % (1972 coefficients as weights) | Δ (2004 coefficients as weights) | % (2004 coefficients as weights) |
| --- | --- | --- | --- | --- |
| **Individual and Family Measures Total** | 0.018 | -9.887 | -0.107 | 59.048 |
| Female | 0.002 | -1.119 | 0.004 | -2.351 |
| Family income | 0.006 | -3.199 | -0.017 | 9.141 |
| Parental education | 0.008 | -4.446 | -0.002 | 1.108 |
| Occupational status | 0.002 | -1.123 | -0.092 | 51.150 |
| **School Measures Total** | 0.216 | -119.450 | 0.183 | -101.407 |
| School mean SES | 0.078 | -43.262 | 0.035 | -19.372 |
| School percent minority | 0.134 | -74.044 | 0.138 | -76.398 |
| Private school | 0.004 | -2.041 | 0.008 | -4.618 |
| Urban school | 0.000 | -0.013 | 0.001 | -0.535 |
| Rural school | 0.000 | -0.091 | 0.001 | -0.482 |
| **Total Explained** | 0.234 | -129.340 | 0.077 | -42.360 |
| **Unexplained** | -0.415 | 229.340 | -0.257 | 142.360 |
| **Total Change** | -0.181 |  | -0.181 |  |



The results of these decompositions for black-white mathematics achievement scores appear in Table 2; the results for Latinos and whites appear in Table 3. In both tables, the Δ columns denote the change in the minority-nonminority test score gap over the period that is associated with the change in the means of the variable in that row. The percent (%) columns express that change as a percentage of the total change in the minority-nonminority test score gap over the period; positive percentages indicate that the changes would have narrowed (converged) the predicted gaps, while negative percentages indicate that they would have widened (diverged) them.


In both tables, the rows in bold highlight aggregate results. For example, the numbers in Individual and Family Measures Total are the sum of the cells below for the female, family income, parental education, and occupational status measures; that is, these numbers give the overall change in the estimated minority-nonminority gap that corresponds to the individual and family measures taken together. We follow a similar logic for presenting the school measures (school mean SES, percent minority, private, urban, and rural). Because the decomposition can be calculated using the estimated coefficients from either 1972 or 2004 as weights, we show the results from both estimations. The last rows in each table represent the total change in the gap that is being explained. In Table 2, note that the estimated total change in the black-white mathematics gap is -0.191, which implies that the gap closed by about one fifth of a standard deviation. In Table 3, the estimated total change in the Latino-white mathematics gap is -0.181, which also implies a significant closing of the gap.
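As a minimal sketch of how each row's Δ and its percentage share can be computed under the conventions just described, consider the following Python function; all names and example values are illustrative rather than taken from the actual estimation.

```python
# For one variable k, Delta is the change in the white-minority mean difference
# between 1972 and 2004 scaled by a pooled coefficient; dividing by the total
# change in the gap gives its percentage share. All values here are illustrative.
def explained_component(beta_k, xw_1972, xm_1972, xw_2004, xm_2004, total_gap_change):
    delta = beta_k * ((xw_2004 - xm_2004) - (xw_1972 - xm_1972))
    pct = 100 * delta / total_gap_change  # positive = contributed to convergence
    return delta, pct

# Example with a total change of -0.191 in the black-white gap (as in Table 2):
delta, pct = explained_component(beta_k=1.0, xw_1972=0.5, xm_1972=0.0,
                                 xw_2004=0.381, xm_2004=0.0,
                                 total_gap_change=-0.191)
print(round(delta, 3), round(pct, 1))  # -0.119, about 62% (convergence)
```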


As the tables reveal, between 1972 and 2004, the increases in racial isolation and segregation in U.S. schools corresponded to an increase in the minority-nonminority test score gap. For instance, if one considers only changes in the mean school minority composition measure when scaled by either the 1972 or 2004 regression coefficients in Table 2, one can see a corresponding substantial increase in the black-white test score gap between 1972 and 2004. Specifically, the increases in black students’ likelihood of being segregated in high-minority schools corresponded to a 62.50% increase in the black-white mathematics gap when scaled by the 1972 coefficients, and to a 60.57% increase when relying on the 2004 coefficients.


Note, however, that not all changes were detrimental to black students. Relative to white students, black students’ individual and family characteristics—parent education level, family income, and parent occupational status—improved. These changes were large, particularly when scaled by the 1972 regression coefficients, corresponding to 62.37% of the change in the test score gap. If scaled by the 2004 coefficients, the changes corresponded to 36.24% of the change in the test score gap.


As with black students, Table 3 shows that the increases in racial isolation and segregation for Latino students in U.S. schools between the early 1970s and 2004 corresponded to an increase in the Latino-white test score gap. Changes in school minority composition corresponded to a 74% divergence in the Latino-white gap when we use the 1972 coefficients as weights, and to a 76% divergence when we rely on the 2004 coefficients. Also similar to black students, not all changes in the measures we analyzed were disadvantageous for Latino students. Compared with white students, Latino students experienced positive mean changes in family background characteristics, such as parents’ educational attainment, occupational status, and income. However, when scaled by the coefficients for the 1972 and 2004 cohorts, we found inconsistent results. Scaled by the 2004 coefficients, changes in family background characteristics corresponded to a 59% convergence in the Latino-white test score gap, but scaled by the 1972 coefficients, they were associated with a 10% divergence in the Latino-white mathematics gap. These differences were due mainly to the negative 1972 coefficients for students with missing data on family background measures.


DISCUSSION


Our analyses consider several school and social background factors related to black-white and Latino-white test score differences in mathematics over a 30-year period. We set out to address some limitations of past research by analyzing nationally representative data between 1972 and 2004, focusing on how selected family background and school measures changed during this time period and how these changes corresponded to the minority-nonminority mathematics gaps.  


Overall, we found that changes in the minority composition of schools—relative to white students, black and Latino students were more likely to attend high-minority schools in 2004 than in 1972—meant increased racial isolation for black and Latino students and corresponded to a divergence in the racial-ethnic test score gaps over the past 30 years. Other authors have commented on the growing segregation of minority students in recent years (Orfield and Frankenberg 2008; Orfield and Lee 2007), but our results place this trend within the context of achievement gaps.


As highlighted previously, there were some positive changes in social background measures for black and Latino students relative to whites. But because the effects of desegregation changed the racial-ethnic composition of schools most dramatically during the 1960s and 1970s (Grissmer, Flanagan, and Williamson 1998; Armor 1995), our analyses may have missed the especially remarkable aspects of these changes. That said, changes in composition did not immediately result in changes in school activities and culture that benefitted black students. As Grissmer, Flanagan, and Williamson (1998) showed, black seniors who were tested in the early 1970s entered school in the early 1960s, a time when 60% of the black population were educated in schools in which more than 90% of the students were from minority backgrounds. Because of the dramatic desegregation in schools that occurred between 1968 and 1972 (especially in the South), students who entered school in the early 1970s were the first to experience a K–12 schooling career in less segregated schooling circumstances. These were the students who would be taking tests as seniors in the mid-1980s. However, as our analyses suggest, changes in the minority composition of high schools did not correspond to a narrowing of the black-white or Latino-white achievement gap, but rather had the opposite effect.


Thus, our analysis revealed a mixed picture of the progress of black and Latino students relative to white students. On the one hand, family background measures have changed across cohorts, and this corresponds for the most part to the narrowing of the black-white and Latino-white mathematics score gaps between 1972 and 2004. On the other hand, significant test score disparities remain between blacks and whites, and between Latinos and whites, just as substantial inequalities remain in family background measures despite the progress that has occurred. Notwithstanding the 20% reduction in the black-white mathematics test score gap in the LS databases we examined, the unadjusted differences remained large, at about 0.80 of a standard deviation in mathematics (28 percentile points). So too, for Latino students: Despite a 23% reduction in the mathematics gap, unadjusted differences remained large, at 0.61 of a standard deviation (nearly 21 percentile points). Moreover, as previously stated, despite the large gains in the family background measures considered here, black and Latino students continue to attend schools that are high-minority and low SES. Thus, while a great deal of progress has been made in improving some family background conditions of minority students relative to whites, substantial inequalities remain (see Neal 2006).


Several decades ago, Coleman et al.’s (1966) findings revealed that individual and family background measures are more important in promoting educational outcomes than the characteristics of the schools students attend. Although our analyses support these findings more than 40 years later, it would be a mistake to continue to interpret Coleman et al. as saying that schools don’t matter.


School context does matter, particularly when examining achievement gaps among students over time. Certainly, the improvement of family background measures of black and Latino students vis-à-vis white students plays a noteworthy role in closing the achievement gap. But segregation, so prominent in today’s schools, stands as a major obstacle to continued progress toward equal education for all.


As our nation becomes increasingly diverse, there appears to be a lack of societal awareness of, and will to examine and test, alternative arrangements in our schools—arrangements that maximize diversity to benefit every student’s achievement. Still, there are developments over the past several decades that offer hope if decreased inequality is the goal (Rothstein 2004). Societal reforms expanding social, economic, and educational opportunities for students have contributed to the closing of the gap. Yet other forces surrounding schools in general—and students of color attending racially and ethnically homogeneous schools in particular—either stall what little progress has been made or, worse, reverse it.


Understanding how our society can address these countervailing forces—the improving socioeconomic conditions for black and Latino families on the one hand, and the increasing racial isolation of these students in schools on the other—necessitates innovative ideas and experimentation. Schools must be organized in new ways that address inequalities in student achievement and the stalled progress toward closing the achievement gap (Magnuson and Waldfogel 2008). Educational policy and reform must be attentive to differences in opportunity between schools by addressing the increasing isolation of minority students in schools.


Recent Supreme Court decisions do not bode well for supporting this effort. State and local policy makers will no doubt need to tread carefully, but initiatives that tackle school funding and diversity could be worthwhile in improving the racial balance of schools. For example, though hotly debated as criteria for college admissions (see Kahlenberg 1996; Kane 1998; Wilson 1999), using socioeconomic circumstances for elementary and secondary school admissions may hold some promise in diversifying schools racially and ethnically (see Flinspach, Banks, and Khanna 2003). Although the correlation between the racial-ethnic and socioeconomic composition of schools is not perfect, such innovations may not only help preserve racially diverse schools but also ameliorate some school problems related to poverty (see also Kahlenberg 2001).


These ideas and others are worth examining within different communities. As such alternatives are tested, research needs to come alongside and document successes and failures so that future efforts can build on those of the past. Over time, implementation and analysis will yield useful lessons not only for policy makers and researchers but also for reformers and educators working to provide an equal education for every American child.


Acknowledgment


Please direct all correspondence to Mark Berends, University of Notre Dame, 1014 Flanner, Notre Dame, IN 46556 (mark.berends.3@nd.edu). We are grateful to Samuel Lucas, Thomas Sullivan, and R. J. Briggs, who collaborated on our previous research on test score gaps, and to Ann Primus for her comments and editing of our paper. We appreciate the generous cooperation of the National Center for Education Statistics, which provided the restricted-use data; of Donald Rock and Judith Pollack of ETS, who shared with us their IRT expertise and data; and of Mathilde Dutoit of Scientific Software International, who provided technical support. Of course, this paper does not reflect the views of these agencies or individuals; any errors are the responsibility of the authors.


Notes


1. To measure a broader range of abilities and the extent of cognitive gains between 8th and 12th grades, NELS included various forms of the 10th- and 12th-grade tests to avoid floor and ceiling effects. For example, 10th graders in the first follow-up test administration were given different forms of the test depending on how they scored in the eighth-grade base year. In mathematics, there were seven forms, and in reading there were five forms—all differing in difficulty to provide better estimates of achievement throughout the proficiency distribution (for further details on the psychometric properties of the NELS tests, see Rock and Pollack 1995). Specific test score information allowed us to link scores across all these NELS mathematics forms and the NLS and HSB cohorts. There were no common items to equate the reading scores in the senior NELS sample to the previous cohorts.


2. We conducted multilevel regressions because students were nested in schools. The intraclass correlations indicated a nonignorable amount of clustering of mathematics scores within schools; these were statistically significant, ranging from .07 to .20, depending on the model. We also estimated the models in this article with OLS and found that, by and large, OLS overestimated the associations between mathematics scores and race-ethnicity, family background, and school measures. Thus, not only are multilevel models appropriate given the design of the data, but their results also provide more conservative estimates.
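As an illustration of the intraclass correlation reported here, one common way to compute it is from an unconditional (intercept-only) random-effects model. A sketch using statsmodels follows; the file and column names are hypothetical placeholders.

```python
# ICC from an unconditional (intercept-only) multilevel model: the share of total
# math-score variance that lies between schools. File and column names are
# hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("seniors_cohort.csv")
null_model = smf.mixedlm("math ~ 1", data=students, groups=students["school_id"]).fit()

between_school_var = null_model.cov_re.iloc[0, 0]  # random-intercept variance
within_school_var = null_model.scale               # residual (student-level) variance
icc = between_school_var / (between_school_var + within_school_var)
print(f"ICC = {icc:.3f}")  # the note reports ICCs of roughly .07 to .20
```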




References


Armor, David J. 1995. Forced justice: School desegregation and the law. New York: Oxford University Press.


Berends, Mark, and Daniel Koretz. 1996. Reporting minority students’ test scores: How well can the National Assessment of Educational Progress account for differences in social context? Educational Assessment 3 (3): 249–85.


Berends, Mark, Samuel R. Lucas, Thomas Sullivan, and R. J. Briggs. 2005. Examining gaps in mathematics achievement among racial-ethnic groups, 1972–1992. Santa Monica, CA: RAND.


Blank, Rebecca M. 2001. An overview of trends in social and economic well-being, by race. In America becoming: Racial trends and their consequences, Volume I, ed. N. J. Smelser, W. J. Wilson, and F. Mitchell, 21–29. Washington, DC: National Academy Press.


Cain, Glen G. 1986. The economic analysis of labor market discrimination: A survey. In Handbook of labor economics, vol. 1, ed. O. Ashenfelter and R. Layard, 693–785. New York: Elsevier Science.


Cancio, Silvia A., T. David Evans, and David J. Maume. 1996. Reconsidering the declining significance of race: Racial differences of early career wages. American Sociological Review 61 (4): 541–56.


Coleman, James S., Ernest Campbell, Carol Hobson, James McPartland, Alexander M. Mood, Frederic Weinfield, and Robert York. (1966). Equality of educational opportunity. Washington, DC: U.S. Government Printing Office.


Cook, Michael, and William N. Evans. 2000. Families or schools? Explaining the convergence in white and black academic performance. Journal of Labor Economics 18 (4): 729–54.


Corcoran, Mary E., and Greg J. Duncan. 1979. Work history, labor force attachment, and earnings differences between the races and sexes. Journal of Human Resources 14 (1): 3–20.


Cotton, Jeremiah. 1988. On the decomposition of wage differentials. The Review of Economics and Statistics 70 (2): 236–43.


Duncan, Otis Dudley. 1967. Discrimination against Negroes. The Annals of the American Academy of Political and Social Science 371:85–103.


Duncan, Otis Dudley. 1968. Inheritance of poverty or inheritance of race? In On understanding poverty: Perspectives from the social sciences, ed. D. P. Moynihan, 84–110. New York: Basic Books.


Ferguson, Ronald F. 1998. Can schools narrow the black-white test score gap? In The black-white test score gap, ed. C. Jencks and M. Phillips, 318–74. Washington, DC: Brookings Institution Press.


Flinspach, S. L., K. Banks, and R. Khanna. 2003. Socioeconomic integration as a tool for diversifying schools: Promise and practice in two large school systems. Paper presented at the Color Lines Conference, The Civil Rights Project, Harvard University, Cambridge, MA.


Gill, A. M., and R. J. Michaels. 1992. Does drug use lower wages? Industrial & Labor Relations Review 45 (3): 419–34.


Goldhaber, Dan D. 1996. Public and private high schools: Is school choice an answer to the productivity problem? Economics of Education Review 15 (2): 93–109.


Green, Patricia J., Bernard L. Dugoni, and Steven J. Ingels. 1995. Trends among high school seniors, 1972–1992. Washington, DC: National Center for Education Statistics. (NCES 95-380)


Grissmer, David, Ann Flanagan, and Stephanie Williamson. 1998. Why did the black-white score gap narrow in the 1970s and 1980s? In The Black-White Test Score Gap, ed. C. Jencks and M. Phillips, 182–226. Washington, DC: Brookings Institution Press.


Grissmer, David W., Sheila N. Kirby, Mark Berends, and Stephanie Williamson. 1994. Student achievement and the changing American family. Santa Monica, CA: RAND.


Hedges, Larry V., and Amy Nowell. 1998. Black-white test score convergence since 1965. In The black-white test score gap, ed. C. Jencks and M. Phillips, 149–82. Washington, DC: Brookings Institution Press.


Hedges, Larry V., and Amy Nowell. 1999. Changes in the black-white gap in achievement scores. Sociology of Education 72 (2): 111–35.


Ingels, Steven J., Kathryn L. Dowd, John D. Baldridge, James L. Stipe, Virginia H. Bartot, and Martin R. Frankel. 1993. NELS:88 Second Follow-Up Student Component Data File User’s Manual. Washington, DC: National Center for Education Statistics.


Jones, Calvin, Miriam Clark, Geraldine Mooney, Harold McWilliams, Ioanna Crawford, Bruce Stephenson, and Roger Tourangeau. 1983. High School and Beyond 1980 Sophomore Cohort First Follow-Up 1982 Data File User's Manual. Washington, DC: National Center for Education Statistics.


Kahlenberg, Richard D. 1996. Class, race, and affirmative action. New York: Basic Books.


Kahlenberg, Richard D. 2001. All together now: Creating middle-class schools through public school choice. Washington, DC: Brookings Institution Press.


Kane, Thomas J. 1998. Racial and ethnic preferences in college admissions. In The black-white test score gap, ed. C. Jencks and M. Phillips, 431–56. Washington, DC: Brookings Institution Press.


Koretz, Daniel. 1986. Trends in educational achievement. Washington, DC: Congressional Budget Office.


Koretz, Daniel, and Mark Berends. 2001. Changes in high school grading standards in mathematics, 1982–1992. Santa Monica, CA: RAND.  


Kreft, I., and J. De Leeuw. 1998. Introducing multilevel modeling. Thousand Oaks, CA: Sage Publications.


Magnuson, Katherine A., and Jane Waldfogel, eds. 2008. Steady gains and stalled progress: Inequality and the black-white test score gap. New York: Russell Sage Foundation.


Massey, Douglas S. 2007. Categorically unequal: The American stratification system. New York: Russell Sage Foundation.


National Center for Education Statistics. 2006. Digest of education statistics. Washington, DC: National Center for Education Statistics.


Neal, Derek A. 2006. Why has black-white skill convergence stopped? In Handbook of economics of education, ed. Eric Hanushek and Finis Welch, 512–76. Cambridge, MA: Elsevier.

Oaxaca, Ronald. 1973. Male-female wage differentials in urban labor markets. International Economic Review 14 (3): 693–709.


Orfield, Gary, and Erica Frankenberg. 2008. The last have become first: Rural and small town America lead the way on desegregation. Los Angeles: Civil Rights Project.


Orfield, Gary, and Chungmei Lee. 2007. Historic reversals, accelerating resegregation, and the need for new integration strategies. Los Angeles: Civil Rights Project.


Porter, Andrew C.  2005. Prospects for school reform and closing the achievement gap. In Measurement and research in the accountability era, ed. C. A. Dwyer, 59–95. Mahwah, NJ: Lawrence Erlbaum Associates.


Raudenbush, Stephen W., and Anthony S. Bryk. 2002. Hierarchical linear models: Applications and data analysis methods. 2nd ed. Thousand Oaks, CA: Sage.


Riccobono, J., L. B. Henderson, G. J. Burkheimer, C. Place, and J. R. Levinsohn. 1981. National Longitudinal Study: Base Year (1972) through Fourth Follow-Up (1979) Data File User’s Manual. Washington, DC: National Center for Education Statistics.


Rock, Donald A., Thomas L. Hilton, Judith Pollack, Ruth B. Ekstrom, and Margaret E. Goertz. 1985. Psychometric analysis of the NLS and the High School and Beyond test batteries. Washington DC: National Center for Education Statistics.


Rock, Donald A., and Judith M. Pollack. 1995. NELS: 88 Base Year through Second Follow-Up Psychometric Report. Washington DC: National Center for Education Statistics.


Rothstein, Richard. 2004. Class and schools: Using social, economic, and educational reform to close the black-white achievement gap. Washington, DC: Economic Policy Institute.


Sahling, Leonard G., and Sharon P. Smith. 1983. Regional wage differentials: Has the South risen again? Review of Economics and Statistics 65 (1): 131–35.


Sayer, Liana C., Suzanne M. Bianchi, and John P. Robinson. 2004. Are parents investing less in children? Trends in mothers’ and fathers’ time with children. American Journal of Sociology, 110 (1): 1–43.


Snijders, Tom A. B., and Roel J. Bosker. 1999. Multilevel analysis. London: Sage.


Sørensen, Aage B., and Stephen L. Morgan. 2000. School effects: Theoretical and methodological issues. In Handbook of research in the sociology of education, ed. M. T. Hallinan, 137–60. New York: Kluwer Academic Press.


Wilson, William Julius. 1998. The role of environment in the black-white test score gap. In The black-white test score gap, ed. C. Jencks and M. Phillips, 501–10. Washington, DC: Brookings Institution Press.


Wilson, William Julius. 1999. The bridge over the racial divide: Rising inequality and coalition politics. Berkeley: University of California Press.



APPENDIX A: OPERATIONALIZATION OF FAMILY AND SCHOOL MEASURES


Race-Ethnicity

All the surveys included items for students to report their racial-ethnic group. We included dummy variables to classify students into nonoverlapping categories for African American or black, Latino or Latina, non-Latino white, and other (mostly Asian and American Indian). In our analyses, we focused on the nonoverlapping student groups as blacks, Latinos, and whites; our overall sample estimates for the senior cohorts included the “other” category.


Gender

In the analysis, gender was included as a dummy variable equal to 1 if the student was female.


Parents’ Education

Both mother’s and father’s educational attainment were included as separate variables in our analysis. Each senior cohort survey provided information to create a measure for parents’ years of education, coded as 10 years if the parent did not finish high school, 12 if the parent was a high school graduate, 14 if the parent attended some college, 16 if the parent received a four-year college degree, and 18 if the parent received a graduate or professional degree.


Parent Occupational Status

We included a measure of parents’ socioeconomic index (SEI), or occupational status, based on the maximum status reported for the father or mother (range in the data sets: 7.33–70.21). On the surveys, respondents selected from a list of comparable occupations, which were then translated into Duncan’s SEI scores (Duncan 1961). All four surveys, NLS-72 through ELS-04, used this SEI measure, which was based on the 1960 census; thus, the estimates of change reported here use that earlier time frame as the reference point for the SEI.


Family Income

The income variable poses a particular challenge because each survey asked students to choose from a different set of income intervals. Initially, we aimed to rescale all the income variables to 1972 dollars using the annual average Consumer Price Index (CPI) value for each year. However, many categories in the upper tail of the income distribution for NELS-92 had no counterpart in the other cohorts. As an alternative, we grouped each cohort’s income values into five quintiles by assigning the midpoint of each income category to the responses and then locating those midpoints within the corresponding population quintiles reported by the Census Bureau. We created dummy variables for each quintile; the median income quintile in each cohort served as the reference group.1
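A minimal sketch of this quintile construction follows; the category midpoints, Census-based cut points, and column names below are placeholders, not the values used in the study.

    import pandas as pd

    # Hypothetical midpoints for each survey income category and hypothetical
    # Census-based quintile cut points (in dollars of the survey year).
    category_midpoints = {1: 1500, 2: 4500, 3: 7500, 4: 10500, 5: 15000}
    census_quintile_cuts = [0, 3000, 6000, 9000, 13000, float("inf")]

    students = pd.DataFrame({"income_category": [1, 3, 5, 2, 4]})
    students["income_midpoint"] = students["income_category"].map(category_midpoints)
    students["income_quintile"] = pd.cut(
        students["income_midpoint"], bins=census_quintile_cuts, labels=[1, 2, 3, 4, 5]
    ).astype(int)

    # Dummy-code the quintiles, dropping the middle (reference) quintile.
    dummies = pd.get_dummies(students["income_quintile"], prefix="incq").drop(columns="incq_3")
    students = pd.concat([students, dummies], axis=1)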


School Socioeconomic Composition

The student-level measures for parent income and mother’s education level were aggregated to the school level. Thus, we were able to calculate the percentage of students within each school in the income quintiles as well as the average parent education level in the school.
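The sketch below illustrates this kind of school-level aggregation in pandas; the toy data and the column names (school_id, mother_ed_years, incq_1) are hypothetical, and the same pattern would apply to the other income-quintile indicators.

    import pandas as pd

    # Hypothetical student records with a school identifier, mother's years of
    # education, and an indicator for the bottom income quintile.
    students = pd.DataFrame({
        "school_id":       [101, 101, 101, 102, 102],
        "mother_ed_years": [12, 16, 14, 10, 12],
        "incq_1":          [0, 0, 1, 1, 1],
    })

    # Aggregate to the school level: mean parent education and the proportion
    # of students in the bottom quintile.
    school_composition = students.groupby("school_id").agg(
        school_mean_mother_ed=("mother_ed_years", "mean"),
        school_prop_quintile1=("incq_1", "mean"),
    ).reset_index()

    # Merge the school-level measures back onto the student records for the
    # multilevel analysis.
    students = students.merge(school_composition, on="school_id", how="left")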


School Minority Composition

School administrators in NLS-72, HSB-82, and NELS-92 were asked about the percentages of various population groups attending their school. Based on this information, we created two school-level variables measuring the percentages of black and Latino students in each school. In ELS-04, the percent minority was based on aggregating students’ own reports of black and Latino group membership to the school level.


School Sector

Schools were classified into public or private schools. The categories were not directly comparable across NLS-72, HSB-82, NELS-92, and ELS-04, because NELS and ELS differentiated the private sector into additional categories. However, all the databases included a composite measure from which we were able to create a simple dummy variable for private schools (public schools as reference group).


School Urban Locale

Schools were located in urban, rural, or suburban locales.2 We created dummy variables for each, with rural as the reference category.


Notes

1. Because of missing data for the family background measures, we first replaced missing values for mother’s and father’s education, parent occupational status, and family income with the cohort-level mean computed from students with nonmissing values. Our initial estimation of multilevel models that included dummy variables flagging imputed values showed large t-values for the slope estimates of these flags, which indicated to us that the imputation was inadequate. We then imputed with the means based on other students within the same school and cohort and found that the t-values were still large. We next tried a multiple imputation routine and found similar results (see Little and Rubin 2002). As a final step, we replaced missing values with the mean from students within the same school and cohort and adjusted the imputed values based on the resulting slope coefficients on the imputation flags (if a flag had a negative slope, we reduced the imputed value to try to offset it). We then repeated the process and further adjusted the imputed values until successive iterations had no impact. We found that we needed to put bounds on the imputed values based on the maximum and minimum values of possible responses in the original survey (e.g., an imputed value for mother’s education could not be less than 10). Without the bounds, we could drive the slope coefficients closer to zero, but only at the expense of nonsensical imputation values (e.g., a negative value for SEI). With the bounds, however, we found that the imputation values that minimized the t-values on the imputation flags were those at the minimum response level; for example, nearly all imputed values for mother’s education ended up being 10, the minimum. This suggests that students whose parents had lower income, lower educational attainment, and lower Duncan SEI scores were less likely to respond to the background questionnaire items.
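The procedure described in this note is, in effect, a bounded, iterative adjustment of mean-imputed values. The sketch below shows one way such a loop could look on toy data, using a single OLS check in place of the study's multilevel models; the variable names, step size, and data are hypothetical and purely illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "school_id": rng.integers(1, 5, 200),
        "mother_ed": rng.choice([10, 12, 14, 16, 18, np.nan], 200),
        "math":      rng.normal(50, 10, 200),
    })

    LOWER, UPPER = 10, 18  # bounds from the original response scale

    # Flag missing values and start from the within-school mean.
    df["mother_ed_flag"] = df["mother_ed"].isna().astype(int)
    school_mean = df.groupby("school_id")["mother_ed"].transform("mean")
    df["mother_ed_imp"] = df["mother_ed"].fillna(school_mean)

    for _ in range(20):
        X = sm.add_constant(df[["mother_ed_imp", "mother_ed_flag"]])
        flag_coef = sm.OLS(df["math"], X).fit().params["mother_ed_flag"]
        if abs(flag_coef) < 0.01:  # stop once further iterations have no real impact
            break
        # As in the note: a negative flag slope leads to reducing the imputed
        # values (and vice versa), never outside the bounds of the response scale.
        adjust = np.sign(flag_coef) * 0.1
        imputed = df["mother_ed_flag"] == 1
        df.loc[imputed, "mother_ed_imp"] = (
            df.loc[imputed, "mother_ed_imp"] + adjust
        ).clip(LOWER, UPPER)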


2. Locale is a seven-category code on the Common Core of Data (CCD), defined as: 1. Large City: a central city of a consolidated metropolitan statistical area (CMSA) or metropolitan statistical area (MSA), with the city having a population greater than or equal to 250,000; 2. Mid-Size City: a central city of a CMSA or MSA, with the city having a population less than 250,000; 3. Urban Fringe of a Large City: any incorporated place, census-designated place, or nonplace territory within a CMSA or MSA of a large city and defined as urban by the Census Bureau; 4. Urban Fringe of a Mid-Size City: any incorporated place, census-designated place, or nonplace territory within a CMSA or MSA of a mid-size city and defined as urban by the Census Bureau; 5. Large Town: an incorporated place or census-designated place with a population greater than or equal to 25,000 and located outside a CMSA or MSA; 6. Small Town: an incorporated place or census-designated place with a population less than 25,000 and greater than or equal to 2,500 and located outside a CMSA or MSA; 7. Rural: any incorporated place, census-designated place, or nonplace territory designated as rural by the Census Bureau. The usual practice is to combine these into three categories: urban = 1, 2; suburban/large town = 3, 4, 5; and rural/small town = 6, 7.
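As an illustration of the three-category recode described at the end of this note, the sketch below collapses the seven CCD locale codes and creates dummy variables with rural as the reference; the data frame and column names are hypothetical.

    import pandas as pd

    # Collapse the seven CCD locale codes into urban, suburban, and rural.
    locale_to_category = {
        1: "urban", 2: "urban",                       # large and mid-size cities
        3: "suburban", 4: "suburban", 5: "suburban",  # urban fringe and large towns
        6: "rural", 7: "rural",                       # small towns and rural areas
    }

    schools = pd.DataFrame({"ccd_locale": [1, 3, 7, 5, 2, 6]})
    schools["locale3"] = schools["ccd_locale"].map(locale_to_category)

    # Dummy variables with rural as the reference category.
    dummies = pd.get_dummies(schools["locale3"]).drop(columns="rural")
    schools = pd.concat([schools, dummies], axis=1)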


Reference

Little, Roderick J. A., and Donald B. Rubin. 2002. Statistical analysis with missing data. 2nd ed. Hoboken, NJ: Wiley.

APPENDIX B


Family Background, Individual, and School Measures for Longitudinal Studies High School Senior Populations

Entries in each panel are listed in the order All; Black; Latino; White. For each measure, the mean is followed by the standard deviation in parentheses; the Number of Students row reports counts.

1972 High School Seniors
Number of Students: 14,469; 1,719; 642; 11,370
Math IRT: 51.94 (10.00); 43.10 (9.48); 45.04 (9.67); 53.24 (9.40)
Female: 0.50 (0.50); 0.57 (0.50); 0.51 (0.50); 0.50 (0.50)
Income quintile 1: 0.34 (0.47); 0.61 (0.49); 0.57 (0.50); 0.30 (0.46)
Income quintile 2: 0.16 (0.37); 0.19 (0.39); 0.20 (0.40); 0.16 (0.36)
Income quintile 4: 0.13 (0.34); 0.04 (0.20); 0.05 (0.22); 0.14 (0.35)
Income quintile 5: 0.12 (0.32); 0.03 (0.16); 0.03 (0.18); 0.13 (0.33)
Missing income data: 0.21 (0.41); 0.19 (0.39); 0.22 (0.42); 0.21 (0.41)
Father's education: 12.54 (2.43); 11.27 (1.83); 11.32 (2.12); 12.73 (2.44)
Missing father's education: 0.01 (0.11); 0.04 (0.20); 0.32 (0.18); 0.01 (0.10)
Mother's education: 12.31 (2.04); 11.57 (1.92); 11.04 (1.72); 12.45 (2.03)
Missing mother's education: 0.01 (0.10); 0.02 (0.15); 0.03 (0.16); 0.01 (0.09)
Parents' maximum SEI: 36.93 (26.81); 19.72 (24.07); 30.75 (17.97); 39.55 (26.23)
Missing SEI data: 0.19 (0.40); 0.44 (0.50); 0.35 (0.44); 0.16 (0.37)
School mean SES: -0.05 (0.51); -0.21 (0.47); -0.12 (0.48); -0.03 (0.50)
School percent minority: 19.08 (25.94); 36.21 (28.01); 32.53 (26.34); 16.60 (22.13)
Private school: 0.07 (0.25); 0.05 (0.21); 0.06 (0.23); 0.07 (0.25)
Suburban school: 0.48 (0.50); 0.38 (0.48); 0.04 (0.49); 0.49 (0.50)
Urban school: 0.29 (0.46); 0.44 (0.50); 0.48 (0.49); 0.27 (0.45)


Appendix B (continued)

1982 High School Seniors (M (SD), in the order All; Black; Latino; White)
Number of Students: 20,888; 2,593; 14,255; 3,494
Math IRT: 49.66 (9.99); 43.22 (9.48); 43.89 (9.46); 51.56 (9.40)
Female: 0.51 (0.50); 0.34 (0.50); 0.45 (0.50); 0.52 (0.50)
Income quintile 1: 0.29 (0.45); 0.51 (0.50); 0.25 (0.49); 0.24 (0.43)
Income quintile 2: 0.13 (0.33); 0.14 (0.35); 0.14 (0.35); 0.12 (0.33)
Income quintile 4: 0.15 (0.36); 0.08 (0.28); 0.12 (0.32); 0.17 (0.37)
Income quintile 5: 0.16 (0.37); 0.06 (0.24); 0.11 (0.31); 0.19 (0.39)
Missing income data: 0.10 (0.30); 0.13 (0.34); 0.08 (0.27); 0.10 (0.30)
Father's education: 12.88 (2.51); 11.76 (2.04); 11.98 (2.23); 13.19 (2.53)
Missing father's education: 0.09 (0.28); 0.23 (0.42); 0.12 (0.33); 0.06 (0.24)
Mother's education: 12.65 (2.13); 12.22 (2.12); 11.90 (2.05); 12.84 (2.10)
Missing mother's education: 0.05 (0.23); 0.10 (0.31); 0.08 (0.28); 0.04 (0.19)
Parents' maximum SEI: 47.79 (22.26); 38.47 (24.72); 41.02 (22.29); 50.64 (20.77)
Missing SEI data: 0.03 (0.16); 0.07 (0.26); 0.04 (0.20); 0.01 (0.12)
School mean SES: -0.05 (0.56); -0.04 (0.56); -0.06 (0.55); 0.04 (0.54)
School percent minority: 26.11 (31.13); 36.67 (31.87); 28.25 (26.33); 20.82 (25.32)
Private school: 0.12 (0.32); 0.10 (0.30); 0.10 (0.31); 0.12 (0.33)
Suburban school: 0.47 (0.50); 0.44 (0.50); 0.47 (0.50); 0.50 (0.50)
Urban school: 0.25 (0.43); 0.36 (0.48); 0.26 (0.44); 0.21 (0.41)


Appendix B (continued)

1992 High School Seniors (M (SD), in the order All; Black; Latino; White)
Number of Students: 11,661; 1,022; 8,442; 1,287
Math IRT: 50.50 (9.88); 43.94 (8.66); 46.43 (8.94); 51.92 (9.55)
Female: 0.51 (0.50); 0.54 (0.50); 0.50 (0.50); 0.50 (0.50)
Income quintile 1: 0.25 (0.43); 0.41 (0.49); 0.49 (0.50); 0.19 (0.39)
Income quintile 2: 0.14 (0.34); 0.18 (0.39); 0.17 (0.38); 0.13 (0.33)
Income quintile 4: 0.19 (0.39); 0.11 (0.32); 0.09 (0.29); 0.21 (0.41)
Income quintile 5: 0.13 (0.33); 0.04 (0.19); 0.04 (0.20); 0.15 (0.36)
Missing income data: 0.16 (0.37); 0.14 (0.35); 0.30 (0.46); 0.14 (0.35)
Father's education: 13.67 (2.46); 12.96 (2.13); 12.33 (2.30); 13.92 (2.44)
Missing father's education: 0.14 (0.35); 0.25 (0.43); 0.22 (0.42); 0.11 (0.31)
Mother's education: 13.29 (2.30); 12.96 (2.26); 12.03 (2.19); 13.50 (2.25)
Missing mother's education: 0.11 (0.31); 0.12 (0.32); 0.21 (0.41); 0.09 (0.28)
Parents' maximum SEI: 47.19 (21.55); 40.63 (22.70); 36.80 (21.94); 49.58 (20.57)
Missing SEI data: 0.05 (0.21); 0.06 (0.24); 0.10 (0.30); 0.03 (0.18)
School mean SES: 0.05 (0.76); -0.08 (0.69); -0.15 (0.70); 0.13 (0.72)
School percent minority: 25.37 (29.67); 42.10 (31.90); 37.20 (27.35); 18.12 (22.10)
Private school: 0.16 (0.37); 0.11 (0.31); 0.12 (0.34); 0.17 (0.38)
Suburban school: 0.37 (0.48); 0.33 (0.47); 0.34 (0.48); 0.40 (0.49)
Urban school: 0.36 (0.48); 0.44 (0.50); 0.45 (0.48); 0.30 (0.46)


Appendix B (continued)

2004 High School Seniors (M (SD), in the order All; Black; Latino; White)
Number of Students: 12,267; 1,434; 7,285; 1,517
Math IRT: 50.51 (9.97); 43.90 (8.27); 45.69 (8.26); 51.93 (9.38)
Female: 0.50 (0.50); 0.52 (0.50); 0.53 (0.50); 0.50 (0.50)
Income quintile 1: 0.12 (0.33); 0.26 (0.44); 0.23 (0.42); 0.07 (0.25)
Income quintile 2: 0.17 (0.38); 0.24 (0.43); 0.28 (0.45); 0.14 (0.34)
Income quintile 4: 0.37 (0.48); 0.24 (0.43); 0.23 (0.42); 0.42 (0.49)
Income quintile 5: 0.14 (0.35); 0.06 (0.24); 0.06 (0.24); 0.18 (0.38)
Missing income data: n.a.; n.a.; n.a.; n.a.
Father's education: 13.74 (2.44); 13.33 (2.30); 12.56 (2.42); 14.05 (2.35)
Missing father's education: n.a.; n.a.; n.a.; n.a.
Mother's education: 13.62 (2.24); 13.40 (2.11); 12.43 (2.27); 13.93 (2.14)
Missing mother's education: n.a.; n.a.; n.a.; n.a.
Parents' maximum SEI: 55.97 (19.19); 51.31 (21.35); 45.81 (22.61); 59.55 (16.33)
Missing SEI data: n.a.; n.a.; n.a.; n.a.
School mean SES: 0.06 (0.41); -0.11 (0.35); -0.19 (0.43); 0.15 (0.37)
School percent minority: 27.17 (29.72); 59.52 (28.74); 59.76 (30.37); 13.52 (16.64)
Private school: 0.08 (0.28); 0.03 (0.17); 0.05 (0.21); 0.10 (0.30)
Suburban school: 0.52 (0.50); 0.43 (0.49); 0.43 (0.50); 0.55 (0.50)
Urban school: 0.28 (0.45); 0.47 (0.50); 0.12 (0.31); 0.20 (0.40)

APPENDIX C


Relationships of Individual, Family Background, and School Measures to Seniors’ Mathematics Achievement in Longitudinal Studies Data, 1972–2004 (Weighted)

Entries are coefficients, with standard errors in parentheses, listed in the order 1972; 1982; 1992; 2004.

All Students
Intercept: 43.167 a (0.592); 37.076 a (0.492); 35.391 a (0.711); 37.668 a (0.704)
Female: -2.532 a (0.149); -0.965 a (0.118); -0.645 a (0.162); -1.205 a (0.160)
Income quintile 1: -2.417 a (0.264); -1.213 a (0.182); -1.982 a (0.280); -2.463 a (0.302)
Income quintile 2: -0.691 b (0.235); -0.363 (0.200); -1.139 a (0.264); -1.382 a (0.264)
Income quintile 4: 0.216 (0.251); 0.270 (0.189); 0.480 a (0.243); 1.058 a (0.229)
Income quintile 5: -0.237 (0.276); -0.001 (0.193); 1.215 b (0.297); 2.817 a (0.307)
Missing income data: 0.088 (0.271); -0.835 a (0.234); -0.035 (0.285); ---
Father's education: 0.506 a (0.039); 0.619 a (0.031); 0.673 a (0.045); 0.457 a (0.042)
Missing father's education: -1.018 (0.809); -0.447 (0.251); -0.644 b (0.281); ---
Mother's education: 0.401 a (0.044); 0.326 a (0.035); 0.423 a (0.047); 0.471 a (0.045)
Missing mother's education: -3.975 a (0.916); -1.845 a (0.311); -0.282 (0.316); ---
Parents' maximum SEI: 0.027 a (0.004); 0.049 a (0.003); 0.037 a (0.005); 0.024 a (0.005)
Missing SEI data: -5.008 a (0.202); -1.544 a (0.390); -0.341 (0.405); ---
School mean SES: 1.334 a (0.249); 1.963 a (0.241); 1.078 a (0.242); 2.980 a (0.391)
School percent minority: -0.045 a (0.005); -0.052 a (0.004); -0.037 a (0.005); -0.044 a (0.005)
Private school: 1.874 a (0.369); 1.540 a (0.357); 0.739 (0.453); 0.828 b (0.404)
Urban school: -0.346 (0.247); 0.001 (0.272); 0.494 (0.350); -0.008 (0.297)
Rural school: 0.381 (0.273); 0.070 (0.235); 0.036 (0.322); 0.072 (0.328)
a p < .001.  b p < .05.  






Cite This Article as: Teachers College Record Volume 112 Number 4, 2010, p. 978-1007
https://www.tcrecord.org ID Number: 15657


About the Author
  • Mark Berends
    University of Notre Dame
    E-mail Author
    MARK BERENDS (Ph.D. Sociology, University of Wisconsin-Madison) is professor of sociology at the University of Notre Dame, director of the Center for Research on Educational Opportunity, director of the National Center on School Choice, funded by the U.S. Department of Education’s Institute of Education Sciences (http://www.vanderbilt.edu/schoolchoice/), and vice president of the American Educational Research Association’s Division L, Policy and Politics in Education. His areas of expertise are the sociology of education, research methods, school effects on student achievement, and educational equity. Throughout his research career, Professor Berends has focused on how school organization and classroom instruction are related to student achievement, with special attention to disadvantaged students. Within this agenda, he has applied a variety of quantitative and qualitative methods to understanding the effect of school reforms on teachers and students. Recent publications: Goldring, E., & Berends, M. (2009). Leading with data: Pathways to improve your school. Thousand Oaks, CA: Corwin Press; Berends, M., Springer, M. G., Ballou, D., & Walberg, H. J. (Eds.). (2009). Handbook of research on school choice. Mahwah, NJ: Lawrence Erlbaum Associates/Taylor & Francis Group; Berends, M., Springer, M. G., & Walberg, H. J. (Eds.). (2008). Charter school outcomes. Mahwah, NJ: Lawrence Erlbaum Associates/Taylor & Francis Group; Berends, M., Lucas, S. R., & Penaloza, R.V. (2008). How changes in families and schools are related to Black-White test score trends. Sociology of Education; and Stein, M., Berends, M., Fuchs, D., McMaster, K., Saenz, L., Yen, L., et al. (2008). Scaling up an early reading program: Relationships among teacher support, fidelity of implementation, and student performance across different sites and years. Educational Evaluation and Policy Analysis.
  • Roberto Peñaloza
    Vanderbilt University
    ROBERTO PEÑALOZA (Ph.D., Economics, Vanderbilt University) is an economist and statistician at the National Center on School Choice at Vanderbilt University, Peabody College. His current research focuses on statistical methodology and data analysis techniques for exploring student achievement patterns in traditional public schools and charter schools. Dr. Peñaloza has collaborated on multiple research projects in development economics, economic growth, international economics, applied statistics, and public institutions and health policy. After completing his undergraduate studies in Ecuador, Dr. Peñaloza received his master of arts and doctor of philosophy degrees from the Department of Economics at Vanderbilt University. Immediately thereafter, he served as visiting assistant professor of economics in the Williams School of Commerce, Economics, and Politics at Washington and Lee University. Recent publications: Berends, M., Lucas, S. R., & Penaloza, R.V. (in press). How changes in families and schools are related to Black-White test score trends. Sociology of Education; and Berends, M., & Penaloza, R. V. (2008). Changes in families, schools, and the test score gap. In K. A. Magnuson & J. Waldfogel (Eds.), Steady gains and stalled progress: Inequality and the Black-White test score gap (pp. 66–109). New York: Russell Sage Foundation.
 