
Understanding the Institutional-Level Factors of Urban School Quality


by Michael A. Gottfried - 2012

Background/Context: This article addresses which school-level factors contribute to school quality. Previous research has focused on assessing the effects of school-level variables on student-level quality (e.g., achievement). However, the field has been limited in not evaluating the effects of school-level factors directly on measured school-level quality. This present study takes this next step.

Purpose/Objective/Research Question/Focus of the Study: The purpose of this study is to determine which school-level factors across three categories—schoolwide programs, school-level personnel, and institutional environment—are significant predictors of school quality.

Population/Participants/Subjects: Two data sets from the School District of Philadelphia are employed. The first data set is longitudinal and comprises elementary school student data linked to teachers, classrooms, and neighborhoods. The second data set, linked to the first by way of school and year data, is longitudinal and comprises school-level variables for all elementary schools within the district over 3 years.

Research Design: This investigation first derives four quantifiable measures of school quality based on the student- and classroom-level data set. These measures are based on student reading achievement scores, math achievement scores, yearly attendance, and behavior grades. In the main analyses, this study separately tests each measure of quality in an empirical model that relates school-level inputs to school-level outputs. Each model does so while holding constant student, teacher, classroom, and neighborhood covariates as well as principal and school budget data.

Findings: Looking across all four measures of school quality, the study finds school quality to be higher in schools that have music and language programs, more disciplinary resources per student with a behavior problem, more special education resources per special education student, and a school nurse, and that occupy smaller campuses and serve grades K–5 (versus K–8). Although there is some consistency in the predictors of school quality, this research also indicates that differentiating between all four measures of quality is critical: School-level factors provide distinct outcomes depending on the measure of school quality itself.

Conclusions: By identifying those school-level factors that directly relate school quality to its programs, personnel, and environment, this study has differentiated between the particular institutional resources of urban elementary schools that can potentially influence schooling experiences, above and beyond student or classroom factors. As such, this study can be used to more effectively identify those significant institutional challenges faced by urban schools, how these challenges are actualized, and, moreover, the types and levels of resources necessary to enhance school quality.

In the United States, students attending urban public schools are characterized by having low levels of educational attainment, high probability of dropout, and inadequate preparation for postsecondary opportunities (Tighe, Wang, & Foley, 2002). Further, the National Center for Education Statistics (NCES; 1996) has defined the institutional attributes of urban schools themselves with an analogously dire characterization. Compared with their suburban or rural counterparts, urban public schools are much larger in student body size: This is true for urban elementary, middle, and high schools (NCES, 1996). Urban schools also contain significantly larger concentrations of high-poverty students who are most likely of racial or ethnic minority groups and may have limited English language proficiency (NCES, 1996). Finally, urban schools are found in high-poverty neighborhoods that lack social service resources and that have high incidence of illicit activities (NCES, 1996).1


With the number of children in poverty rising, coupled with evidence that urban schools are disproportionately composed of high concentrations of students in poverty as well as ethnic and racial minorities, urban districts are increasingly being populated with students at the lowest levels of academic achievement. As such, researchers and policy makers have identified urban minority children as particularly vulnerable to educational failure, and over the last few decades, evidence indicates that urban schools in the United States require serious improvement.


Simultaneously over the past several decades, the traditional notion of improving urban school performance—that the way to boost student achievement is through an increased allocation of funding—has been challenged. Although student expenditures have risen dramatically, it is not clear whether school quality has improved. Several studies have specifically examined the effects of school financial resources on improving educational outcomes (e.g., Ehrenberg & Brewer, 1994, 1995; Hanushek, 1986, 1996). However, they found that boosting school financial resources, such as increased per-pupil spending, did not necessarily increase school effectiveness.


Nonetheless, schools as institutions continue to be upheld as critical factors in increasing student achievement (Firestone, 1991; Mortimore, Sammons, Stoll, Lewis, & Ecob, 1988; Reynolds & Creemers, 1990; Rutter, Maughan, Mortimore, Ouston, & Smith, 1979). As a result, researchers for several decades have consistently aimed to identify what it is that drives the success of effective schools (e.g., Good & Brophy, 1986; Gray, 1989; Murnane, 1975; Rowan, Bossert, & Dwyer, 1983). Mortimore (1991) has defined an “effective” school as one in which students progress further than might be expected from consideration of its student population. That is, an effective school adds value to its students’ outcomes in comparison with other schools serving similar students. By contrast, an ineffective school has students who progress less than expected. Ineffective schools, in other words, do not add value.


In this framework, given that some schools evidently add value more efficiently than others, and yet increased financial resources do not provide a definitive answer as to how, both academic researchers and policy makers should be asking: What school-level factors do make a difference? Perhaps it is a question of how those finances are spent rather than the finances themselves, and examining this possibility is the focus of the present study. In particular, this study uses school-level variables to assess the added value (or “quality”) of school performance over 3 years for the entire set of public elementary schools in the School District of Philadelphia. This study asserts that, after controlling for student, teacher, classroom, and residential neighborhood characteristics as well as school budget, there still remain school-level resources that relate positively to school quality—that is, school-level factors that can contribute to how a school adds value. Three categories of school-level inputs are put forth as significant in this study above and beyond the school’s budget: schoolwide programs, school-level personnel resources, and institutional environment.


Using an empirical model of education production, this study constructs four different quantifiable measures of school quality, derived from a first stage of results of multilevel analyses at the student and classroom levels. From the results of this first stage, measures of school quality are thus constructed based on the values of the coefficients of the school and school-by-year fixed effects for models of reading and math standardized achievement as well as for models of attendance and behavior. The results indicate that institutional-level resources are significantly related to school quality across three categories of school inputs (programs, personnel, and environment) and across multiple measures of school quality. Although there is some consistency in the predictors of school quality, this research also indicates that differentiating between subject tests and between attendance and behavior is critical: School resources provide distinctive outcomes depending on the measure of school quality itself.


FRAMEWORK AND BACKGROUND


In research and practice, schools as institutions have consistently been attributed with having the capability to influence student outcomes (Firestone, 1991; Mortimore et al., 1988; Reynolds & Creemers, 1990; Rutter et al., 1979). To guide further research on assessing the school-level inputs to education, this study adopts a framework for school effectiveness as developed by Scheerens and Creemers (1989) and Creemers and Scheerens (1994). In this framework, school quality is interpreted as being derived from a multilevel set of factors, in which student outcomes are driven by the interplay of inputs at three distinct levels: student, classroom, and school.


Student-level factors include individual attributes, such as background, demographic information, socioeconomic status, and measured ability. A second level is the classroom. According to Creemers and Scheerens (1994), factors at the classroom level include instructional effectiveness and peer effects (e.g., behavior and disruption). The final level, according to Scheerens and Creemers (1989), is at the level of the school, where the school is characterized by its institutional attributes and includes factors such as organization, structure, and management. In this study, these school-level inputs are grouped into three categories: programs, personnel, and environment.


Together, these student-, classroom-, and school-level inputs can affect a range of student outcomes, such as standardized testing achievement. However, there has also been a long-standing interest in how these multilevel inputs can affect noncognitive student outcomes, including attendance and behavior: Early on, both Mortimore et al. (1988) and Rutter et al. (1979) included these two outcomes in their research on school effectiveness. Thus, to be consistent with these seminal articles, this study will create school quality measures based not only on achievement but also on attendance and behavior.


As will be described in the Method section, this study uses the input factors at the first two levels—that is, student and classroom—to construct four quantifiable measures of school quality based on achievement, attendance, or behavior. The focus of this study, then, is to understand which inputs at this third level of analysis—that is, the school as an institution—are significant factors of school quality.


Several early studies provided little evidence directly linking school quality with school-level inputs (Betts, 1995; Grogger, 1995; Hanushek, 1986). However, more recently, an increasing body of literature has found some institutional-level variables to exert significant effects on measures of school quality. That being said, these studies have evaluated school-level factors in isolation from one another rather than considering how all may jointly influence school quality. Nonetheless, each prior study has provided insight into which factors have a significant relationship with school quality, even when assessed in isolation. The studies relevant to this investigation involve those analyses that have evaluated institutional-level inputs pertaining to schoolwide programs, personnel, and environment.


First, this study evaluates how school quality is related to having school-level (non-classroom-specific) special needs programs. For instance, Rolstad, Mahoney, and Glass (2005) conducted a meta-analysis on research of program effectiveness for English language learners (ELLs). Their results indicated that schoolwide bilingual education programs are effective in increasing achievement and that policy should encourage schools without these resources to develop and implement schoolwide ELL programs. Hanushek, Kain, and Rivkin (2002) employed data from the University of Texas at Dallas Texas Schools Project to track special education students who transferred in and out of targeted special education programs, thereby providing a measure of school-level program effectiveness over time for the same set of students. They found that schools with special education programs boosted effectiveness in mathematics achievement for special education students without inhibiting the performance of non-special-needs students.


Several studies have also examined school-level programs outside the scope of addressing special needs. For instance, some research has suggested that elementary school music programs may improve current and future educational outcomes. Specifically, much of this literature correlates music exposure and mathematics success. For example, Gardiner, Fox, Knowles, and Jeffrey (1996) found that first- and second-grade students who received 7 months of supplemental music classes at school achieved higher standardized math scores than children in the control group who did not receive the treatment. Similarly, Graziano, Peterson, and Shaw (1999) found that the mathematical reasoning scores of children who received music instruction were significantly higher than those of their counterparts.


A second set of school-level factors evaluated in the literature includes noninstructional personnel. Several articles have demonstrated that shared administrative responsibility among the principal and other administrative staff in the school can lead to increased school quality (e.g., Sammons, Hillman, & Mortimore, 1995). The justification cited is that by expanding the responsibilities to a cabinet of administrative personnel rather than having to rely on the time constraints of a single principal, there may be increased efficiency in school operations. For instance, Flessa (2003) reported that having a specialized staff within a school’s administrative structure, such as a designated community liaison or disciplinarian, allows the principal to focus on envisioning and executing school curriculum and student learning while others focus on school-specific operations. In the same vein, Grubb and Flessa (2006) reported that an expanded management staff may lead to closer attention being paid to instructional practices, for which principals in the study asserted that they often do not have time. If principals can allot their own resources to improving curriculum and instructional practices, another manager, such as an assistant principal, can focus on support and disciplinary services, which are often neglected in many schools, thereby bringing down the level of quality.


Other studies have examined the effectiveness of nonadministrative school-level personnel. For instance, Allen (2003) examined the relationship between health-related student issues and testing performance for schools that had nurses versus those that did not. The results indicated that elementary schools with nurses had fewer children absent for medical reasons. The findings suggested that declines in student medical leaves of absence increased the quality of in-classroom instruction and reduced the squandering of schooling resources on otherwise absent students. Similarly, Guttu, Engelke, and Swanson (2004) evaluated the number of school nurses in public schools in 21 counties in North Carolina and determined that the presence of a school nurse increased medical screenings and follow-ups for students with health issues. The results indicated a decline in the spread of sickness within schools as well as an increased capability of nursing personnel to match sick students with particular educational needs.


Finally, recent literature has also focused on school-level variables pertaining to the institutional environment. For instance, Tighe, Wang, and Foley (2002) employed multilevel models and found that higher total student enrollment was related to a greater degree of aggregate school obstacles to learning. Offenberg (2001) found a relationship between school structure and academic attainment. Specifically, he relied on a series of natural experiments to determine that on average, students in K–8 schools had higher levels of achievement than students in schools with only middle grades. Byrnes and Ruby (2007) used multilevel modeling to examine the educational outcomes of five cohorts of students in Philadelphia. Consistent with other literature, the authors also found a higher level of achievement for middle school students in K–8 schools, compared with their counterparts in 6–8 schools.


This present research builds on the extant literature in three major capacities. First, it is clear that a diverse set of school-level characteristics has been evaluated separately within the field: Each study has independently yielded evidence that the relationship between a specific institutional factor and subsequent educational outcomes is a significant one. However, each institutional characteristic has thus far been studied in isolation from the others. Thus, this investigation consolidates the evaluation of school-level characteristics into a single research agenda, thereby allowing for an evaluation of their joint influences on school quality. In particular, this study unifies different school effects into a single empirical model to allow for the evaluation of each while simultaneously controlling for the effects of the others.


Second, the field has predominantly focused on how school-level factors have significant effects on student-level outcomes. Although that approach provides insight as to how factors at the school level may be related to testing performance at the student level, it does not directly assess what contributes to a school’s measurable “added value”; rather, these prior studies have certainly estimated the effectiveness of particular school factors but have not necessarily assessed their influence on an institutional-level measure of school quality. To contribute in this capacity, this study examines how school-level factors predict quantifiable measures of school-level quality for each institution. These measures, as described next, are derived directly from student standardized testing performance, per school, in both reading and math subject areas, as well as for nonacademic outcomes, including attendance and behavior.


Finally, this investigation is based on a large-scale, longitudinal data set of all schools in the School District of Philadelphia. To derive measures of school quality, this study relies on multilevel observations in the district: students, teachers, classrooms, and residential neighborhoods. Thus, this study evaluates the relationship between a range of institutional factors and school quality, but does so while relying on a nonselective sample and controlling for a host of pertinent factors at the student and classroom levels. This study’s approach provides the capacity to parse out in more depth what in particular can predict how an institution adds value.


DATA


The analysis of school quality is facilitated by a unique and comprehensive data set of school-level characteristics. The data encompass all elementary schools in the School District of Philadelphia over the academic years spanning 1997 through 2000. Overall, the sample contains 175 schools with elementary grades, either K–5 or K–8. Over the time span of the data, there are approximately 674 school-year observations employed in the analysis of school quality, as measured by reading, math, attendance, or behavior.2 Information regarding school characteristics was provided by the administrative offices of the School District of Philadelphia.


DEPENDENT VARIABLES


Measures of school quality were derived from the analysis of a student- and classroom-level data set that was linked to a school-level data set by way of deidentified school and year information contained in both data sets. The student-level data set contains student, teacher, classroom, and residential neighborhood variables as well as school, classroom, and grade assignment information for each academic year. These data comprised all students within the entire elementary school system of the School District of Philadelphia. This student-level analytical data set in its entirety consisted of a total of approximately N = 31,000 student observations within elementary grades over the time period 1994/1995 through 1999/2000. Proceeding forward, the analyses rely on the fact that student, teacher, classroom, and neighborhood characteristics are held constant by the very construction of the school quality measure.


As will be described in more detail in the Method section, the dependent variables are constructed as the sum of school and school-by-year fixed effects coefficients from each of four unique regressions, depending on the selected measure of school quality. To derive reading and math measures of school quality, respectively, the school and school-by-year fixed effects coefficient estimates are found by regressing student-level normal curve equivalent (NCE) SAT9 reading and math standardized achievement scores on a set of student, classroom, teacher, and residential neighborhood variables along with those school and school-by-year fixed effects (see the appendix). The process is analogous for deriving attendance and behavior measures of school quality.
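

To make this two-step construction concrete, the following sketch illustrates the first step in Python with statsmodels. It is a minimal illustration rather than the study's actual code: the file and column names (student_level.csv, nce_reading, school_id, and so on) are hypothetical stand-ins for the district's variables, and the covariate list is abbreviated.


import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level panel: one row per student-year.
students = pd.read_csv("student_level.csv")

# Step 1: regress NCE reading scores on student, classroom, teacher, and
# neighborhood covariates plus school, year, and school-by-year fixed effects.
fit = smf.ols(
    "nce_reading ~ lag_nce_reading + female + free_lunch + class_size"
    " + teacher_experience + neighborhood_poverty"
    " + C(school_id) + C(year) + C(school_id):C(year)",
    data=students,
).fit()

# Extract the school (d_k) and school-by-year (u_kt) fixed effects
# coefficients; their sums become the school quality measures.
params = fit.params
school_fe = params.filter(regex=r"^C\(school_id\)\[[^:]+\]$")
school_year_fe = params.filter(regex=r"^C\(school_id\)\[.+\]:C\(year\)")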


Table 1 presents the descriptive statistics for all dependent variables and independent variables from the school-level data set employed in this study. The dependent variables are measures of school quality, as defined by each respective regression model as the sum of school and school-by-year fixed effects for each school-year observation. These measures provide indicators of year-specific contributed quality of school k in year t, based on its students’ performance on SAT9 reading and math, attendance, or behavior. The measures of school quality across all outcomes are constructed in this study to have a mean of zero. Hence, within this analysis, the average school in the district has been assigned a relative numerical value of school quality of zero so that other schools in the district can be compared with an interpretable average benchmark. Less-than-average-quality schools in the district will yield negative values, whereas greater-than-average-quality schools in the district will have positive values.


[Table 1]


Note that the correlation between reading and math school quality is approximately 0.40. This implies that a school with strong reading quality tends to also have strong quality in mathematics. The same can be stated about lower performing schools: Those at the lower end of the performance spectrum tend to perform worse across both reading and math. Hence, there is consistency within schools.


INDEPENDENT VARIABLES


Based on the factors identified in the framework and background on school quality, institutional-level inputs relating to school quality fall into one of three categories in this study: programs, personnel resources, and environment.  First, there are several variables related to schoolwide programs in the data set. As presented in Table 1, these programs include music, language skills,3 and English instruction for non–native speakers (ELL). Each program variable is binary, indicating whether or not a school has a designated non-classroom-specific program in music, language, or ELL, respectively. Note that there were no school-level math programs in the data set.


The first three columns of Table 2 present the results of three logistic regressions related to these schoolwide programs. The dependent variables in this table are binary, and each indicates whether or not a school had a music, language skills, or ELL program. The independent variables include school-level measures of student demographics and special needs. Following the methodology of Sacerdote (2001), the dependent variables are regressed on these school-level demographic and academic characteristics in order to determine if a significant predictive relationship exists. If it does, then this would provide evidence of a systematic relationship between programs at the school and school-level characteristics (i.e., perhaps lower poverty schools were more likely to have music programs, hence biasing the estimates of music on school quality). However, the results are methodologically consistent with the conclusions of Sacerdote: The lack of significant coefficients on school characteristics in Table 2 indicates that no systematic relationship is present between school characteristics and the presence of schoolwide programs. That is, certain schools with particular attributes in the district are not more inclined to have certain programs. Hence, the data are not unduly biased in any particular direction.
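

As a hedged illustration of this balance check, the sketch below fits one such logistic regression in Python with statsmodels; the file and variable names (school_level.csv, has_music_program, and so on) are hypothetical, and the actual covariate set in Table 2 may differ.


import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-year panel with program indicators and characteristics.
schools = pd.read_csv("school_level.csv")

# Sacerdote-style balance check: regress a binary program indicator on
# school-level demographics. Insignificant coefficients suggest that programs
# are not systematically sorted across school types.
balance = smf.logit(
    "has_music_program ~ pct_free_lunch + pct_minority + pct_ell"
    " + pct_special_ed + enrollment",
    data=schools,
).fit()
print(balance.summary())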


[Table 2]

A second set of school-level inputs includes personnel resources. First is the number of special education teachers per special education student in a given school. This measure describes the breadth of school-level special education resources within the institution: a larger value implies more special education teachers per special education student and hence greater availability of schoolwide resources for special education students. On average, an elementary school in the data set has approximately three special education teachers.


Another group of personnel inputs includes disciplinary resources. First, there are indicators for whether schools have assistant principals or safety officers. However, to ascertain a measure of the breadth of disciplinary resources, the total number of school disciplinarians (constructed as the total number of assistant principals and safety officers per school) is divided by the number of students with behavior problems in that school. This measure indicates the availability of disciplinary resources in a school that can address behavioral issues. Students are deemed to have behavior problems in school year t if they received a grade of D in behavior on their report cards from year t-1. It should be noted that teachers overidentify minority students as behavior problems more than White students and overidentify their behavior as intentional (O’Connor, Hill, & Robinson, 2009). Thus, although a larger count of teacher-assigned behavior problems probably does reflect a larger number of students with genuine behavior problems, it may also serve as a signal of a negative issue regarding school culture.
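

The following sketch shows how such a ratio could be constructed in pandas; it is an illustration under assumed file and column names (school_staffing.csv, behavior_grade coded 1 = D through 4 = A), not the district's actual schema.


import pandas as pd

# Hypothetical inputs: school-year staffing counts and student-year behavior
# grades, sorted by year within student so that shift(1) yields year t-1.
schools = pd.read_csv("school_staffing.csv")
grades = pd.read_csv("behavior_grades.csv").sort_values(["student_id", "year"])

# A student is flagged as a behavior problem in year t if he or she received
# a D (coded 1) in behavior in year t-1.
grades["behavior_problem"] = (
    grades.groupby("student_id")["behavior_grade"].shift(1).eq(1)
)
problems = (
    grades[grades["behavior_problem"]]
    .groupby(["school_id", "year"])
    .size()
    .rename("n_behavior_problems")
    .reset_index()
)

# Disciplinarians = assistant principals + safety officers, expressed per
# student with a behavior problem (clipping avoids division by zero).
schools = schools.merge(problems, on=["school_id", "year"], how="left")
schools["n_behavior_problems"] = schools["n_behavior_problems"].fillna(0)
schools["discipline_per_problem"] = (
    (schools["n_assistant_principals"] + schools["n_safety_officers"])
    / schools["n_behavior_problems"].clip(lower=1)
)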


A final set of personnel resources includes additional staff pertaining to school health and parental support and outreach. The data set includes indicators as to whether a school, in a given year, had a school nurse or school community liaison.


Table 2 also presents regression results regarding the assignment of school personnel resources to school characteristics. The institutional-level dependent variables are binary indicators as to whether a school had a particular staff member. As with programs, the results in this second portion of Table 2 indicate a lack of significance between school characteristics and variables pertaining to having particular personnel at the school. There does not appear to be evidence that certain schools are in some way more likely to have particular personnel simply based on school-level characteristics (i.e., lower poverty schools are no more or less likely to have an assistant principal than are higher poverty schools4).


Finally, Table 1 presents the means and standard deviations for variables measuring school environment. The first is a measure of the number of teachers per student in a school. This metric provides an indication of the overall adult presence in the school environment. On average, there are approximately 590 students per school and 20 teachers. Second, a measure of the school’s physical capital is constructed as the physical square footage per student as an indicator of school size. Third, a binary variable indicates if a school is K–8 (versus K–5).


Additional covariates are included in the analyses that follow to serve as control variables. First, two principal characteristics are incorporated. The first is a binary variable, indicating if the principal holds a doctoral degree (either Ph.D. or Ed.D.). Approximately 22.6% of all principals in district schools with elementary grades had doctorates over the time period of the data set. The second variable is an indicator for principal gender. Approximately 52.8% of principals were male. Other characteristics of principals were not included in the data set, though the future inclusion of additional principal-specific covariates is addressed in the discussion.


A measure of school financial capacity is included in the model as total school budget dollars per student each year. Having this covariate in the model allows this study to control for the fact that larger budgets may be related to having the ability to bring particular programs or personnel into the school or the capacity to make school improvements. Once controlling for school budget, however, the remaining research question concerns the school-level contributors to school quality, holding constant the financial capacity of the institution.


METHOD


A BASELINE MODEL


To examine empirically the direct link between school-level inputs and school quality, this study begins with a standard education production function, as first developed by Summers and Wolfe (1977), and Henderson, Mieszkowski, and Sauvageau (1978), and later revised by Todd and Wolpin (2003). This model is typically used to evaluate the relationship between school inputs and output measures of individual student achievement. In this study, it will be employed strictly as a foundation for a more rigorous assessment of the input-output process for schools at the institutional level. A basic form of the model is expressed as follows:


A = f(G, F, N, T, C, S)                (1)


where A represents student achievement (used here as the example outcome); G, the student’s characteristics; F, family characteristics; N, neighborhood characteristics; T, teacher characteristics; C, classroom characteristics; and S, school characteristics.


The linear representation of the education production function in Equation 1 hypothetically requires all current and past inputs pertaining to a student’s schooling history. However, acquiring every input of a student’s educational background is a difficult task. As such, a solution to this problem is to take the difference of Equation 1 with respect to year t and year t-1 (e.g., Gottfried, 2010). With this result, all input requirements reduce to current inputs plus achievement from the t-1 period:


aijkt = β0 + β1aijk(t-1) + β2Git + β3Fit + β4Njkt + β5Tjkt + β6Cjt + β7Skt + εijkt                   (2)


where achievement a is for student i in classroom j in school k in year t as the dependent variable and in year t-1 as a lagged measure of ability5; G is a vector of student-level characteristics in year t; F is a function of family inputs for student i in year t; N includes neighborhood characteristics for student i in year t; T are teacher effects in classroom j in school k in year t; C are classroom-specific characteristics for classroom j in year t; S are school characteristics and institutional-level resources in year t; and the error term ε includes all unobserved determinants of achievement.


Incorporating a one-year lagged value of achievement is critical because this measure is assumed to capture the influences of prior years’ inputs for student i, thereby leaving only current measures to be estimated. Thus, biases that were created by omitted past variables only bias the estimated coefficient of lagged achievement (Gottfried, 2010; Hanushek, Kain, Markman, & Rivkin, 2003; Zimmer & Toma, 2000).


ACCOUNTING FOR SCHOOL-LEVEL FACTORS


According to Goldhaber and Brewer (1997), school-level resources in Equation 2, Skt, might consist of school-, teacher-, and classroom-specific variables that directly relate to institutional quality. Furthermore, in a seminal piece on measuring the effects of managers or administrators, Mundlak (1961) asserted that any term in the model that pertains to institutional-level inputs (i.e., Skt) would also contain the effects of administration. If the variables in Skt in Equation 2 are correctly specified to incorporate all these institutional-level variables, then ordinary least squares regressions would yield consistent estimates of β1 through β7.


However, the reality is that the set of school-level variables Skt can be decomposed into two parts: observable characteristics, such as school size, and unobservable characteristics, such as administrative effort. Observable school characteristics can be directly included in the model, whereas unobservable factors cannot, given the lack of any observable measure of administrative effort or other similarly difficult-to-measure institutional-level characteristics (Mundlak, 1961).


The omission of these unobservable school characteristics, however, can create two potential problems in estimating the effects of school-level variables. First, the coefficients on the observable school variables in the model may be understated because omitted school factors are not being included in the explained portion of the variance of student achievement. Second, the regression equation may yield biased estimates of any specific effects of particular schooling resources on student outcomes. If repeated measures over time are available for each school, one standard technique to account for this omitted variable bias is to estimate a school fixed effects model (Goldhaber & Brewer, 1997). In a sample of schools with multiple observations per school, it is possible to estimate the following equation to account for the potential unobserved school-level biases:


aijkt = β0 + β1aijk(t-1) + β2Git + β3Fit + β4Njkt + β5Tjkt + β6Cjt + γijkt                (3)


The fixed effects are identified in the final term, which is decomposed as:


γijkt = dk + wt + ukt + eijkt                (4)


where dk are school fixed effects (i.e., binary indicators for school), wt are year fixed effects (i.e., binary indicators for year), and ukt are school-by-year fixed effects (i.e., binary indicators for school in a given year); eijkt is a random error capturing individual variations and variations that are common to all members of the same classroom. Note the absence of the term Skt in Equation 3. Observable measures of school-level inputs cannot be incorporated into this model along with school and school-by-year fixed effects; this would lead to perfect collinearity. This issue is addressed in the following subsection.


In more detail, school fixed effects dk control for the influences of school-level resources by capturing systematic differences across each unique institution. By, in essence, holding constant those time-invariant school-specific characteristics, such as curriculum, school neighborhood, leadership, organization, and hiring practices, the school fixed effects control for time-invariant school quality. In the estimation of institutional-level effects, Mundlak (1961) assumed that administration (among other related institutional-level resources) did not change over time during the period of estimation. Thus, he captured unobserved administrative effort when he employed school fixed effects in his analyses.


However, with the implementation of repeated school-level measures in a longitudinal data set in this present study, incorporating school-by-year fixed effects—in addition to school and year fixed effects—avoids having to rely on Mundlak’s (1961) assumption of strict time invariance of unobservable institutional-level characteristics. Because school-by-year fixed effects allow the model to control for systematic year-to-year changes to institutional quality at the school level, Equation 4 accounts for time variance in school factors, such as those related to managerial effort or curriculum changes. In other words, any pattern of school-level quality that is unique to a particular school in a given year will be estimated (and therefore held constant) by school-by-year fixed effects. In addition, those time-invariant factors that contribute to a school’s quality are estimated by school fixed effects.


QUANTIFYING SCHOOL QUALITY


Although this fixed effects framework does control for the influence of time-invariant and time-variant school-level influences in estimating student-level achievement, the specification thus far does not allow for the identification or evaluation of how school-level inputs relate to school quality. Consequently, for the purposes of this study, it is necessary to take a new step in order to differentiate between the specific influences of particular school-level inputs.


The solution is two-part. The first step is to estimate Equation 3, which includes student, residential neighborhood, teacher, and classroom covariates, and to estimate the school, year, and school-by-year fixed effects coefficients (Equation 4). The school and school-by-year fixed effects coefficients derived from this model provide the basis for the measure of institutional quality. The appendix provides the estimated coefficient results from this first step, from which it is possible to estimate a second relationship—one between school quality and school-level inputs.


The second step in this analysis involves constructing a measure of school quality for school k in year t (Qkt). In this second empirical model, which is the focus of this study, the coefficient estimates of school and school-by-year fixed effects from the regression in the first step (appendix) are combined into a single measure of school quality for each institution in a given year and are subsequently regressed on observable school-level variables (Goldhaber & Brewer, 1997; Rausch, 1993). In other words, because the school and school-by-year fixed effects coefficients are numerically derived from the estimation of Equations 3 and 4, the following expression presents the dependent variable to be evaluated in this study:


dk + ukt = Qkt                     (5)


Both dk and ukt are the same measures as those presented in Equation 4. These are the dependent variables of the analysis, and they account for particular school quality over time, while simultaneously holding constant student demographics, neighborhood information, classroom environments, and teacher variables as empirically derived in Equation 3 and as depicted in the appendix. Equation 5 underscores the motivation behind this empirical method: Similar to what Mortimore (1991) described, the fixed effects coefficient estimates in Equation 5, as derived from Equation 4, added values to or subtracted values from student achievement outcomes based on how effective or ineffective each school was estimated to be in a given year. In other words, the dependent variable in Stage 2 provides a quantifiable measure of school quality (McPherson, 1992).


The task of evaluating school quality is conducted with an analogous education production function specification that links the output of education at the school level to various school-level inputs. Like the student-level education production function explaining student educational outcomes through a series of inputs, the school-level education production function also has its roots in the economics of education literature (Cohn, 1968; Hanushek, 1986; Lee & Barro, 1997; Riew, 1966). In this study, it is expressed as follows:


Qkt = β0 + β1Pkt + β2Rkt + β3Ekt + ekt                (6)


where Q denotes school quality based on the school and school-by-year fixed effects coefficients. As an output, school quality is derived from a multitude of observable institutional-level inputs: P, schoolwide programs; R, personnel resources; E, the school environment; and e, an error term incorporating unmeasured factors affecting school quality.
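

A minimal sketch of this second stage, assuming the first-stage fixed effects estimates have already been merged into a school-year panel (all file and column names below are hypothetical):


import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical school-year panel: one row per school k and year t, holding
# the first-stage fixed effects estimates and the school-level inputs.
panel = pd.read_csv("school_year_panel.csv")

# Equation 5: Q_kt = d_k + u_kt, recentered so the average school scores zero.
panel["quality"] = panel["school_fe"] + panel["school_year_fe"]
panel["quality"] -= panel["quality"].mean()

# Equation 6: regress quality on programs (P), personnel (R), and environment
# (E), controlling for principal characteristics and budget per student.
stage2 = smf.ols(
    "quality ~ has_music + has_language + has_ell"
    " + sped_teachers_per_sped_student + discipline_per_problem"
    " + has_nurse + has_liaison"
    " + teachers_per_student + sqft_per_student + is_k8"
    " + principal_doctorate + principal_male + budget_per_student",
    data=panel,
).fit(cov_type="HC1")  # robust standard errors, as reported in Tables 3-6
print(stage2.summary())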


RESULTS


In this section, year-specific school quality, as constructed by the sum of the coefficients of school and school-by-year fixed effects, is regressed on the three categories of school-level independent variables—programs, personnel resources, and school environment—while also controlling for principal demographics and budgetary resources. This empirical specification allows for the evaluation of the estimate of one school-level input on school quality while controlling for the joint influence of the other school-level inputs in the model. In this way, it is possible to examine how each institutional-level variable is related to the added value that a school provides. For academic school quality measures, the analysis is conducted twice, once for school quality in reading and once for math. Doing so enables a differentiation of the effectiveness of each institutional factor based on two subject areas. Additionally, school quality is derived and then assessed based on nonacademic outcomes, including attendance and behavior.


SCHOOL QUALITY IN READING


Table 3 presents parameter estimates, robust standard errors, and approximate p values from fitting the model in Equation 6 for school quality in reading. For comparability, the table presents two versions of the results. The first column provides unstandardized regression coefficient estimates, in which the results correspond to absolute gains or losses in school quality for school k in year t. The second set of estimates presents standardized regression coefficients (fully standardized for continuous independent variables and partially standardized for binary independent variables). This allows for the evaluation of effect sizes in nonexperimental empirical research (e.g., Hoxby, 2000; McEwan, 2003). Standardized betas represent the magnitude of the unique effect of a particular independent variable on the dependent variable, controlling for the effects of other independent variables in the model. Because the variables for the analysis in this second column are standardized, it is possible to compare the relationships of the effects of each independent variable with each other.
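

As an illustration of the standardization convention just described, the following sketch converts an unstandardized coefficient into an effect size, under the assumption that "partially standardized" means scaling only by the outcome's standard deviation for binary predictors; names are illustrative.


import pandas as pd

def standardized_beta(b: float, x: pd.Series, y: pd.Series,
                      binary: bool = False) -> float:
    """Convert an unstandardized OLS coefficient b into standard deviations
    of outcome y: b * sd(x) / sd(y) for continuous predictors (full
    standardization), or b / sd(y) for binary predictors (partial)."""
    if binary:
        return b / y.std()
    return b * x.std() / y.std()


Under this reading, the coefficient on a binary input such as having a school nurse is interpreted directly in standard deviations of quality.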


[Table 3]

Of the coefficients pertaining to school-level programming, one program is marginally statistically significant in its relationship to school quality. Specifically, schools with language skills programs tend to have a marginally higher measure of school quality in reading (β = 2.21, p < 0.10) than do schools without such programs, holding all else equal. The standardized estimates suggest an effect size of 0.19σ, indicating that having a language program predicts higher reading quality by approximately 19% of a standard deviation in quality.


The results of personnel resources provide two significant findings. First, schools with a greater degree of disciplinary resources per behavior problem have higher levels of school quality in reading than do schools with fewer disciplinary resources per behavior problem (β = 24.57, p < 0.05). Recall that this variable is constructed as the number of disciplinarians per behavior problem per school, and the mean from Table 1 suggests that about 1% of a school’s disciplinary resources is allotted per behavior problem. This explains the large unstandardized coefficient, which can be interpreted at the sample average as 1% of 24.57. The standardized coefficient of this variable suggests that a one standard deviation increase in school disciplinary resources per behavior problem is related to a 0.06 standard deviation increase in school quality in reading. That is, increasing the allotment of disciplinary resources per behavior problem by 5% (i.e., one standard deviation) predicts higher reading school quality by 6% of a standard deviation in quality.
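

As a quick worked check of this interpretation, evaluating the unstandardized coefficient at the sample-average allotment of roughly 0.01 disciplinarians per behavior problem gives


β × x̄ = 24.57 × 0.01 ≈ 0.25


or about a quarter of a point of reading quality at the average level of disciplinary resources.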


A second significant personnel resource in the reading model is the presence of a school nurse. Specifically, schools with nurses have higher quality than do schools without nurses (β = 4.61, p < 0.05), holding all else equal. The standardized coefficient yields a result of 0.39σ, which implies that schools with nurses have higher reading school quality by approximately 39% of a standard deviation in quality.


The third set of independent variables pertains to school environment. However, none are significant predictors of school quality in reading. Finally, neither principal characteristics nor budget covariates demonstrate a statistically significant relationship with the reading measure of school quality. In other words, the size of the school’s budget does not directly relate to a school’s measure of added value, consistent with Ehrenberg and Brewer (1994, 1995) and Hanushek (1986, 1996).


SCHOOL QUALITY IN MATH


Table 4 presents parameter estimates, robust standard errors, and approximate p values from fitting the model in Equation 6 for math school quality. Note that measures of math school quality were derived from SAT9 math achievement in the second column of the appendix. Similar to Table 3, there are two sections of results: one with unstandardized regression coefficients and one with standardized betas in order to interpret effect sizes.


[Table 4]

To begin, schools with music programs have higher math quality compared with schools that do not (β = 5.77, p < 0.01). The standardized regression coefficient on this parameter suggests an effect size of 0.33σ, indicating that having a music program predicts higher math quality by about 33% of a standard deviation in quality. Unlike reading, school-level language programs are not significantly related to school quality in math. As with reading, having an ELL program is not significantly related to school quality.


Results from the set of variables pertaining to personnel resources indicate that, holding all else equal, schools with nurses have higher school quality in math (β = 6.44, p < 0.05) than do schools without nurses. In terms of standardized betas, the associated effect size is 0.37 standard deviations, thereby suggesting that schools with nurses have higher math school quality by approximately 37% of a standard deviation in quality. This effect size of having a school nurse is consistent with the results of the reading analysis.


Finally, results pertaining to school environment indicate that being a K–8 school has a marginally significant, negative association with elementary schooling quality in mathematics (β = -3.82, p < 0.10). The associated standardized beta coefficient is -0.22σ. Other school-level environmental inputs are not significant, consistent with the analysis of reading school quality. Principal characteristics and budget are not significant predictors of school quality measures in math.


ASSESSING ADDITIONAL MEASURES OF SCHOOL QUALITY


Recent research in the evaluation of school-level resources has supported the premise that school factors need not affect all outcomes in the same way (Rumberger & Palardy, 2005). Although the analysis has thus far focused on school quality measures as derived from standardized test performance, it is also useful to develop school quality measures beyond the scope of achievement. As such, this section presents the relationships between the set of school-level inputs and additional measures that can compose school quality.


Attendance


Evidence in the literature has suggested a significant relationship between missing school (or attending school) and achievement (Caldas, 1993; Gottfried, 2009; Lamdin, 1996). Indeed, Gottfried (2009) has suggested from his findings that schools ought to use patterns of attendance and absences as a gauge of schoolwide performance rather than strictly relying on standardized achievement measures. Based on this premise, this study develops a measure of school quality as derived from records of student attendance. The process to construct a school-level measure of quality based on attendance is analogous to the two-step process described in the Method section: First, a student- and classroom-level regression model is assessed in which individual school attendance is the outcome, and school, year, and school-by-year fixed effects in Equation 4 are independent variables (in addition to student and classroom variables presented in the appendix). The sums of the school and school-by-year fixed effects coefficients become the dependent variable in Step 2, in which they are regressed on the set of independent school-level variables. Here, the measure of school quality gauges how school-level variables relate to school quality as determined by attendance.


Table 5 presents parameter estimates, robust standard errors, and approximate p values from fitting the model in Equation 6 for this school quality measure. The results show a great deal of consistency with the previous analyses of school quality, when quality was previously derived as school and school-by-year fixed effects from models of standardized testing performance.


[Table 5]

Consistent with the results for reading and math, Table 5 demonstrates that schools that have music programs (β = 1.63, p < 0.01) and language programs (β = 0.22, p < 0.01) tend to have higher school quality, as now measured in terms of attendance. The standardized regression coefficient for music programs suggests an effect size of 0.24σ, indicating that having a music program predicts higher quality by about 24% of a standard deviation in quality. For language programs, the effect size is approximately 0.03σ, so that schools with language programs have higher quality by about 3% of a standard deviation in quality. Recall that this model controls for budgetary resources as budget per student; thus, it suggests that noninstructional programs have significant relationships with school quality above and beyond school finances.


In the category of personnel, schools with greater disciplinary resources per behavior problem (β = 1.70, p < 0.01) and schools with nurses (β = 0.99, p < 0.05) tend to have higher measures of school quality when quality is constructed in terms of attendance. The associated effect sizes are 0.25σ for disciplinary resources and 0.15σ for school nurses, thereby suggesting that schools with greater disciplinary resources have higher quality by about 25% of a standard deviation in quality, and schools with nurses have higher quality by about 15% of a standard deviation in quality.


Finally, square footage per student has only a marginally statistically significant, negative relationship to school quality in attendance: Physically larger schools tend to have lower school quality when quality is measured in terms of attendance.


Behavior


The study of noncognitive outcomes has gained recent momentum in empirical research (e.g., Heckman & Rubinstein, 2001; Heckman, Stixrud, & Urzua, 2006). Underlying this burgeoning interest in nonacademic outcomes is the fact that the solidification of noncognitive skill sets may play a significant role in both academic and economic success, both short term and long term. Therefore, this study developed a measure of school quality based on behavior, analogously to the process by which school quality was derived from reading, math, and attendance models.


Table 6 presents parameter estimates, robust standard errors, and approximate p values from fitting the model in Equation 6 for a school quality measure based on individual student behavior grades. Note that the behavior grade is teacher assigned and is on a scale of 1 (receiving a D [no Fs are in the data set]) through 4 (receiving an A). Hence, a higher school quality measure implies stronger patterns of behavior.


[Table 6]

The results in Table 6 are consistent with prior results. Music programs have a significant, positive relationship with school quality, measured in terms of behavior (β = 0.01, p < 0.05) with an effect size of 0.26σ. Hence, schools with music programs have higher school quality by approximately 26% of a standard deviation in quality. As in all other models, this result is controlling for all else, including budget.


Table 6 also highlights that several aspects of personnel resources are significantly related to school quality, when measured in terms of behavior. For instance, the number of special education teachers per special education student is positive and statistically significant (β = 0.15, p < 0.01). The effect size suggests that if the ratio of special education teachers per special education student increases by one standard deviation, then school quality increases by 0.22σ (i.e., 22% of a standard deviation in quality). Consistent with other analyses in this article, having greater disciplinary resources per behavior problem is significantly related to school quality (β = 0.20, p < 0.11), with an effect size of 0.11σ. A one standard deviation increase in the ratio of disciplinarians per behavior problem is related to school quality that is higher by 11% of a standard deviation in quality.


Additionally, schools that are K–8 (versus K–5) have a negative relationship with school quality and are marginally statistically significant (β = -0.06, p < 0.10). Recall that K–8 schools also had a negative relationship with school quality, when school quality had been previously constructed in terms of math achievement. Hence, there appears to be consistency across measures of school quality, thereby strengthening the model specified in this study.


DISCUSSION


This study has contributed new insight into the assessment of what directly contributes to school quality. Because research has suggested that increasing finances alone does not improve educational outcomes, perhaps it is a question of how those finances are spent to improve institutional quality. In other words, this study inquired into what in particular adds value, budget aside.


By first building on the empirical model of the educational production function to construct a quantifiable measure of school quality, this study then assessed which school-level inputs relate significantly to school quality, holding constant student, teacher, classroom, and neighborhood information. By conducting empirical analyses on an institutional-level data set of all elementary schools within the School District of Philadelphia over the years 1997 through 2000, this study has provided new evidence that a range of school-level resources—as broken out by programs, personnel, and environment—have significant relationships with a measured quality of the institution.


The results indicate that significant relationships exist across multiple measures of school quality—in both reading and math achievement as well as in nonacademic outcomes, including attendance and behavior. Although the specific predictions of school-level covariates portray some differential results depending on the outcome itself,6 there is also a degree of consistency across all models. Thus, the conclusions of this investigation indicate that all three categories of variables relating to school quality are represented significantly across the various analyses in this study. And even though there may be some distinctive results across quality measures (e.g., language programs in reading and music programs in math), in aggregate, each contributes to furthering the understanding of what it is in particular that relates to how a school adds value.


The analysis of schoolwide programs suggests a positive prediction of language skills and music programs on quality. Specifically, schools with language skills programs have higher quality, as derived from reading scores, than do schools without such programs. Analogously, schools with music programs have higher quality, as derived from math standardized scores, than do schools without music programs. Language and music programs were also significantly related to school quality when quality was alternatively measured in terms of attendance and behavior. These results are consistent with related literature, which has found evidence of significant relationships between school-level language skills programs and student-level reading test performance (Ball & Blachman, 1988), and school-level music programs and math performance (Gardiner et al., 1996).


The assessment of school-level personnel resources first indicates a positive relationship between the breadth of disciplinary resources and quality as measured in terms of reading, attendance, and behavior. No significant relationship emerged for disciplinary resources when quality was measured in terms of math achievement. The results thus suggest that in addition to the negative individual and classroom effects of behavior problems indicated in the appendix, there is also a third effect at play: a reduction in educational quality at the school level related to having a higher level of behavior problems. This effect may be mediated through the breadth (or lack thereof) of disciplinary resources.


The measure of school-level special education resources—that is, special education teachers per special education student—was significant (and positive) only when school quality was constructed via student behavior grades. In all other instances (reading, math, and attendance), this factor was not significant. There are multiple special education designations for students with special needs (National Dissemination Center for Children with Disabilities, 2009). Many students in elementary schools are classified as having special needs on the basis of serious emotional and behavioral disorders (EBD), and EBD students may exert spillover effects in schools (Fletcher, 2010). Therefore, the results found in this study suggest that greater school-level special education resources are positively related to school quality (when school quality is measured via behavior) because a greater level of specialized school resources may be available to address the particular behavioral needs of EBD students. It can also be hypothesized that greater availability of special education resources per special education student may enable general education teachers to manage the classroom conduct of both special education and non-special-education students more effectively, thereby raising behavioral outcomes for all students and hence improving measured school quality.


Additional analyses of school-level personnel indicate that for reading, math, and attendance measures of school quality, schools with nurses tend to have higher institutional quality. These results are consistent with much of the literature pertaining to health and education. Research has suggested that upwards of 30 percent of children experience injuries around schools (Peterson, 2002), and having a school nurse has been shown to mitigate health issues and injuries: When students can be treated on site, prior research has shown a subsequent decrease in health-related absences and an increase in classroom time and instruction (Allen, 2003; Guttu et al., 2004). The results in this study support these prior findings by documenting the extent to which nurses significantly relate to school absences.


Generally speaking, school environmental resources do not indicate any systematically significant relationships to school quality measures. There is one exception, however: Schools that span kindergarten through eighth grade have lower math school quality and lower behavioral school quality than do schools that are strictly elementary. This result may at first glance seem contradictory to much of the literature, which has found positive effects of K–8 schools compared with separate elementary and middle schools (Byrnes & Ruby, 2007; Coladarci & Hancock, 2002; Offenberg, 2001). However, those studies evaluated the educational and psychological effects of the institutional environment of K–8 schools solely on middle school outcomes, whereas the results here suggest a negative effect for those elementary school students housed in the same buildings as middle schoolers.


Because this study focuses on a set of urban elementary schools that serve high-poverty and minority populations, the contributions of this investigation extend beyond an empirical evaluation of the relationship between school-level inputs and measures of institutional quality. Rather, this study has unified previous research by bringing to the foreground a joint evaluation of the institutional-level factors that have both positive and negative relationships with measured quality in urban schools serving high-needs youth in the early years of education. Because the consequences of educational failure are exacerbated for children in large urban districts such as Philadelphia (Beaton et al., 1996; Byrnes & Ruby, 2007; Schmidt, McKnight, Jakwerth, Cogan, & Houang, 1999), having this new school-level insight, in addition to understanding previous student- and classroom-level analyses, offers a deeper perspective into which factors relate to improved educational quality and hence to a decline in the probability of failure.


What this research has shown, then, is that even after accounting for student, teacher, classroom, and neighborhood data, school-level factors persist as important contributors in a measured quality of urban schools. Thus, by identifying those school-level resources that directly relate school quality to its programs, personnel, and environment, this study has differentiated between the particular institutional resources of urban elementary schools that can potentially influence schooling experiences, above and beyond student or classroom factors. As such, this study can be used to more effectively identify those significant institutional challenges faced by urban schools, how these challenges are actualized, and, moreover, the types and levels of resources necessary to enhance school quality.


This study also highlights the value of having detailed school-level data in determining relationships between institutional arrangements and outcomes. By distinguishing among three particular categories of school-level resources for each elementary institution in the School District of Philadelphia, this research has been the first to provide methods by which researchers, policy makers, and practitioners can construct direct measures of school quality as a way to more efficiently evaluate the channels through which the institution adds or subtracts value. Further, by evaluating these measures of school quality against an array of school-level resources above and beyond budget information, this research has provided a more in-depth understanding of whether school quality matters and which factors play a more significant role than others. In doing so, this study has not only demonstrated a more refined implementation of an empirical model with school-level data but also quantified how specific resources can predict the quality of an urban educational institution across multiple measures.


Further research can build on the work in this article in several capacities. For instance, although the data have been comprehensive in their use of student, classroom, teacher, neighborhood, and school variables, there remains the opportunity to incorporate information relating to management and leadership. Specifically, the data do not contain information on the specific practices of school principals, and thus the model could not parse out indicators relating to principal managerial style. An extension of this research would link the data used in this study to additional administrator data regarding principal attributes, as well as to survey data containing principal and teacher reflections on concurrent school leadership. Although cross-sectional surveys on school leadership styles have been conducted in the School District of Philadelphia (e.g., Tighe et al., 2002), the longitudinal nature of this research poses an additional challenge for future work: linking measures of school management over time, given that the fundamental goal of this study has been to quantify year-specific school quality.


Second, and related to the first research extension, this study examined school quality as defined by test scores, attendance, and behavior. This certainly extends the literature on school quality by examining both academic and nonacademic measures. However, with the appropriate data set, it might also be possible to define school quality with additional nonacademic attributes. These may include motivation, school culture, and connection to the community, among others. Doing so would continue to expand the ways in which it is possible to define and assess school quality.


Third, the data used in this study pertain to the analysis of elementary school outcomes. However, a longitudinal data set that contains elementary, middle, and high school observations could provide insight on the relationship between early effects of schooling resources and future academic success. Recent increases in school accountability certainly provide prospects for future research that empirically unifies these different levels of education across many levels of data—from individual students to institutional leaders.


Fourth, this study assessed whether having particular programs at a school was related to school quality. Future research would entail a more detailed examination of the characteristics of particular programs that are significantly related to school quality in order to understand what precisely about the programs is making an impact. In addition to providing educators with information about which programs to implement, a finer grained level of analysis would also provide data on which aspects of each program can positively influence a range of outcomes, both academic and nonacademic. In other words, a next stage of research would assess the quality of the inputs to school quality.


Finally, although the findings of this study are derived from the analysis of data from the School District of Philadelphia, the NCES (1996) categorization of urban schools, as laid out in the introduction, applies to numerous districts in the United States. Hence, this study should extend to similar contexts. As such, the NCES categorization used in this study can define the sample space of districts for future analyses, and the results here should be compared with those using data from additional urban districts to arrive at multidistrict conclusions. In doing so, further research in this area will be able to determine the degree to which the findings and interpretations of this study generalize to urban schools across the United States. Nonetheless, within the urban context of this study and with the empirical methods employed here, this research has contributed new insight and laid the foundation for future investigation into how school-level inputs relate to institutional quality and into how a more complete understanding of these relationships can ultimately lead to improved student success.


Notes


1. All elementary schools in the School District of Philadelphia would be considered urban schools under this NCES characterization. In addition to having large student bodies and being in highly urbanized neighborhood settings, students are predominantly economically disadvantaged and members of racial minority groups. On average, 82.5% of the students in an elementary school in the district receive free or reduced-price lunch (with a standard deviation of 13.2). Further, 87.6% of the students in an elementary school in the district are of a minority racial group (with a standard deviation of 20). Even in those particular schools with a lower percentage of minority students, the percentage of economically disadvantaged students remains quite high—the lowest percentage of students receiving free or reduced-price lunch in the district is approximately 50%.

2. To address missing data, two subsidiary analyses were conducted on both the first-stage analyses (appendix regressions) and the school-level regressions found throughout the main tables of this study. First, missing values were replaced with the sample mean of the nonmissing observations for each variable, and a dummy variable flagging missingness was included for each such variable in the analyses (e.g., Fletcher, 2010). Second, multiple imputation was conducted with 10 sets of imputation, and model results were aggregated across the multiply imputed data sets using standard procedures (Schafer & Graham, 2002). In both approaches, the results from analyses with imputed missing observations were compared with the original estimates. The results were not significantly altered by the inclusion of missing data. Hence, the original specifications do not appear to be unduly biased by missing observations.
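
As a rough sketch of these two robustness checks (hypothetical Python, not the article's actual code), the dummy-variable adjustment and the pooling of estimates across imputed data sets might look like the following:

```python
import numpy as np
import pandas as pd

def mean_impute_with_flags(df, cols):
    """Check 1: replace missing values with sample means and add a
    missingness indicator (dummy) for each affected variable."""
    out = df.copy()
    for c in cols:
        out[c + "_missing"] = out[c].isna().astype(int)  # dummy for missing data
        out[c] = out[c].fillna(out[c].mean())            # mean imputation
    return out

def pool_mi_estimates(betas, std_errors):
    """Check 2: combine one coefficient across m imputed data sets using
    standard combining rules (as in Schafer & Graham, 2002)."""
    betas = np.asarray(betas, dtype=float)
    ses = np.asarray(std_errors, dtype=float)
    m = len(betas)
    within = np.mean(ses ** 2)    # average within-imputation variance
    between = betas.var(ddof=1)   # between-imputation variance
    pooled_se = np.sqrt(within + (1 + 1 / m) * between)
    return betas.mean(), pooled_se
```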

3. A language skills program serves students who require additional support with the English language (e.g., speech therapy).

4. Note that having an assistant principal was not a district requirement during the time span that this data set covers.

5. Although some of the literature implements the difference between current and lagged achievement as the dependent variable, this study places lagged achievement on the right-hand side of the equation to avoid restricting the parameter to a value of one (Todd & Wolpin, 2003).
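
In stylized form (notation assumed for illustration), the distinction is between

\[ A_{st} = \lambda A_{s,t-1} + \beta' X_{st} + \varepsilon_{st} \]

and the gain-score specification

\[ A_{st} - A_{s,t-1} = \beta' X_{st} + \varepsilon_{st}, \]

which is the special case \(\lambda = 1\). Keeping lagged achievement on the right-hand side lets the data determine the persistence parameter \(\lambda\).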

6. A secondary analysis tested for interactions between independent variables in all three categories and the school’s relative quality in the district, as determined by percentiles. However, no significant relationship existed for any interaction.
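
A minimal sketch of such an interaction test follows; the variable names and synthetic data are hypothetical, with the formula interface from the statsmodels library:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic school-level data, for illustration only.
rng = np.random.default_rng(0)
school_df = pd.DataFrame({
    "quality": rng.normal(size=200),                 # school quality measure
    "music_program": rng.integers(0, 2, size=200),   # program indicator
    "quality_pct": rng.uniform(0, 100, size=200),    # percentile of relative quality
})

# '*' expands to both main effects plus their interaction; the interaction
# coefficient tests whether a program's relationship with quality varies
# with the school's relative standing in the district.
model = smf.ols("quality ~ music_program * quality_pct", data=school_df).fit()
print(model.summary())
```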


References


Allen, G. (2003). The impact of elementary school nurses on student attendance. Journal of School Nursing, 19, 225–231.


Ball, E., & Blachman, B. A. (1988). Phoneme segmentation training: Effect on reading readiness. Annals of Dyslexia, 38, 208–225.


Beaton, A. E., Mullis, I. V. S., Martin, M. O., Gonzalez, E. J., Kelly, D. L., & Smith, T. A. (1996). Mathematics achievement in the middle school years. Boston: TIMSS Study Center.


Betts, J. (1995). Does school quality matter? Evidence from the National Longitudinal Survey of Youth. Review of Economics and Statistics, 77, 231–250.


Byrnes, V., & Ruby, A. (2007). Comparing achievement between K–8 and middle schools: A large scale empirical study. American Journal of Education, 114, 101–135.


Caldas, S. J. (1993). Reexamination of input and process factor effects in public school achievement. Journal of Educational Research, 86, 206–214.


Cohn, E. (1968). Economies of scale in Iowa high school operations. Journal of Human Resources, 3, 422–434.


Coladarci, T., & Hancock, J. (2002). The (limited) evidence regarding effects of grade-span configurations on academic achievement: What rural educators should know. Journal of Research in Rural Education, 17, 189–192.


Creemers, B. P. M., & Scheerens, J. (1994). Developments in the educational effectiveness research program. International Journal of Educational Research, 21, 125–140.


Ehrenberg, R. G., & Brewer, D. (1995). Did teachers’ verbal ability and race matter in the 1960s? Coleman revisited. Economics of Education Review, 14, 1–21.


Ehrenberg, R. G., & Brewer, D. (1994). Do school and teacher characteristics matter? Evidence from High School and Beyond. Economics of Education Review, 13, 1–17.


Firestone, W. A. (1991). Introduction: Chapter 1. In J. R. Bliss, W. A. Firestone, & C. E. Richards (Eds.), Rethinking effective schools: Research and practice. Englewood Cliffs, NJ: Prentice Hall.


Flessa, J. (2003). What’s urban in the urban school principalship: Case studies of four middle school principals in one city school district. Unpublished doctoral dissertation, University of California, Berkeley.


Fletcher, J. (2010). Spillover effects of inclusion of classmates with emotional problems on test scores in early elementary school. Journal of Policy Analysis and Management, 29, 69–83.


Gardiner, M. F., Fox, A., Knowles, F., & Jeffrey, D. (1996). Learning improved by arts training. Nature, 381, 254.


Goldhaber, D., & Brewer, D. J. (1997). Why don’t schools and teachers seem to matter? Assessing the impact of unobservables on educational productivity. Journal of Human Resources, 32, 505–523.


Good, T. L., & Brophy, J. E. (1986). School effects. In M. C. Wittrock (Ed.), Handbook of research on teaching (pp. 570–602). New York: Macmillan.


Gottfried, M. A. (2010). Evaluating the relationship between student attendance and achievement in urban elementary and middle schools: An instrumental variables approach. American Educational Research Journal, 47, 434–465.


Gottfried, M. A. (2009). Excused versus unexcused: How student absences in elementary school affect academic achievement. Educational Evaluation and Policy Analysis, 31, 392–419.


Graziano, A., Peterson, M., & Shaw, G. L. (1999). Enhanced learning of proportional math through music training and spatial-temporal training. Neurological Research, 21, 139–152.


Gray, J. (1989). Multilevel models: Issues and problems emerging from their recent application in British studies of school effectiveness. In D. R. Bock (Ed.), Multilevel analyses of educational data (pp. 127–145). Chicago: University of Chicago Press.


Grogger, J. (1995). School expenditures and post-schooling wages: Evidence from High School and Beyond. NSF Review of Economics and Statistics conference paper.


Grubb, W. N., & Flessa, J. J. (2006). “A job too big for one”: Multiple principals and other nontraditional approaches to school leadership. Educational Administration Quarterly, 42, 518–550.


Guttu, M., Engelke, M. K., & Swanson, M. (2004). Does the school nurse-to-student ratio make a difference? Journal of School Health, 74, 6–9.


Hanushek, E. A., Kain, J. F., Markman, J. M., & Rivkin, S. G. (2003). Does peer ability affect student achievement? Journal of Applied Econometrics, 18, 527–544.


Hanushek, E. A., Kain, J. F., & Rivkin, S. G. (2002). Inferring program effects for special populations: Does special education raise achievement for students with disabilities? Review of Economics and Statistics, 84, 584–599.


Hanushek, E. A. (1996). A more complete picture of school resource policies. Review of Educational Research, 66, 397–409.


Hanushek, E. A. (1986). The economics of schooling: Production and efficiency in public schools. Journal of Economic Literature, 24, 1141–1177.


Heckman, J. J., & Rubinstein, Y. (2001). The importance of noncognitive skills: Lessons from the GED testing program. American Economic Review, 91, 145–149.


Heckman, J. J., Stixrud, J., & Urzua, S. (2006). The effects of cognitive and noncognitive abilities on labor market outcomes and social behavior. Journal of Labor Economics, 24, 411–482.


Henderson, V., Mieszkowski, P., & Sauvageau, Y. (1978). Peer group effects and educational production functions. Journal of Public Economics, 10, 97–106.


Hoxby, C. (2000). The effects of class size on student achievement: New evidence from population variation. Quarterly Journal of Economics, 115, 1239–1285.


Lamdin, D. J. (1996). Evidence of student attendance as an independent variable in education production functions. Journal of Educational Research, 89, 155–162.


Lee, J. W., & Barro, R. J. (1997). Schooling quality in a cross section of countries (NBER Working Paper No. 6198). Cambridge, MA: National Bureau of Economic Research.


McEwan, P. (2003). Peer effects on student achievement: Evidence from Chile. Economics of Education Review, 60, 131–141.


McPherson, A. F. (1992). Measuring added value in schools (NCE Briefing No. 1). London: National Commission of Education.


Mortimore, P. (1991). The nature and findings of school effectiveness research in the primary sector. In S. Riddell & S. Brown (Eds.), School effectiveness research: Its messages for school improvement. London: HMSO.


Mortimore, P., Sammons, P., Stoll, L., Lewis, D., & Ecob, R. (1988). School matters: The junior years. Wells, England: Open Books.


Mundlak, Y. (1961). Empirical production function free of management bias. Journal of Farm Economics, 43, 44–56.


Murnane, R. J. (1975). The impact of school resources on the learning of children. Cambridge, MA: Ballinger.


National Center for Education Statistics. (1996). Urban schools: The challenge of location and poverty. Washington, DC: United States Department of Education, Office of Educational Research and Improvement.


National Dissemination Center for Children with Disabilities. (2009). Categories of disability under IDEA. Washington, DC: Author.


O’Connor, C., Hill, L. D., & Robinson, S. (2009). Who’s at risk at school and what’s race got to do with it? Review of Research in Education, 33, 1–34.


Offenberg, R. (2001). The efficacy of Philadelphia’s K-to-8 schools compared to middle grades schools. Middle School Journal, 37, 14–23.


Peterson, B. B. (2002). School injury trends. Journal of School Nursing, 18, 219–225.


Rauch, J. E. (1993). Productivity gains from geographic concentration of human capital: Evidence from the cities. Journal of Urban Economics, 34, 380–400.


Reynolds, D., & Creemers, B. (1990). School effectiveness and school improvement: A mission statement. School Effectiveness and School Improvement, 1, 1–3.


Riew, J. (1966). Economies of scale in high school operations. Review of Economics and Statistics, 43, 280–287.


Rolstad, K., Mahoney, K., & Glass, G. (2005). The big picture: A meta-analysis of program effectiveness research on English language learners. Educational Policy, 19, 572–594.


Rowan, B., Bossert, S. J., & Dwyer, D. C. (1983). Research on effective schools: A cautionary note. Educational Researcher, 12, 24–31.


Rumberger, R. W., & Palardy, G. J. (2005). Does segregation still matter? The impact of social composition on academic achievement in high school. Teachers College Record, 107, 1999–2045.


Rutter, M., Maughan, B., Mortimore, P., Ouston, J., & Smith, A. (1979). Fifteen thousand hours: Secondary schools and their effects on children. Cambridge, MA: Harvard University Press.


Sacerdote, B. (2001). Peer effects with random assignment: Results for Dartmouth roommates. Quarterly Journal of Economics, 116, 681–704.


Sammons, P., Hillman, J., & Mortimore, P. (1995). Key characteristics of effective schools: A review of school effectiveness research. London: Office for Standards in Education.


Scheerens, J., & Creemers, B. P. M. (1989). Conceptualising school effectiveness. International Journal of Educational Research, 13, 691–706.


Schmidt, W. H., McKnight, C. C., Jakwerth, P. M., Cogan, L. S., & Houang, R. T. (1999). Facing the consequences: Using TIMSS for a closer look at U.S. mathematics and science education. Dordrecht, The Netherlands: Kluwer.


Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147–177.


Summers, A. A., & Wolfe, B. L. (1977). Do schools make a difference? American Economic Review, 67, 639–652.


Tighe, E., Wang, A., & Foley, E. (2002). An analysis of the effect of Children Achieving on student achievement in Philadelphia elementary schools. Philadelphia: Consortium for Policy Research in Education.


Todd, P. E., & Wolpin, K. I. (2003). On the specification and estimation of the production function for cognitive achievement. Economic Journal, 113, F3–F33.


Zimmer, R. W., & Toma, E. F. (2000). Peer effects in private and public schools across countries. Journal of Policy Analysis and Management, 19, 75–92.


Cite This Article as: Teachers College Record, Volume 114, Number 12, 2012, p. 1–32. https://www.tcrecord.org ID Number: 16720


About the Author
  • Michael Gottfried
    Loyola Marymount University
    MICHAEL A. GOTTFRIED, PhD, is an assistant professor of urban education at Loyola Marymount University. He is also an adjunct policy researcher in the education division at RAND. His research interests pertain to issues in urban education, including school quality and effectiveness, classroom peer effects, and attendance and truancy. Recent articles include: “The Detrimental Effects of Missing School: Evidence From Urban Siblings” (American Journal of Education) and “Evaluating the Relationship Between Student Attendance and Achievement in Urban Elementary and Middle Schools: An Instrumental Variables Approach” (American Educational Research Journal).
 