Quantifying the Consequences of Missing School: Linking School Nurses to Student Absences to Standardized Achievement
by Michael A. Gottfried — 2013
Background/Context: Parents, policymakers, and researchers hold that missing school has negative implications for schooling success, particularly for students in urban schools. However, it has thus far been an empirical challenge within educational research to estimate the true effect that absences have on achievement outcomes. This study addresses this issue by applying multiple quasi-experimental methods and, in doing so, contributes a more accurate understanding of the pervasive, negative effects of missing school.
Purpose/Objective/Research Question/Focus of the Study: The purpose of this study is to determine the effects of individual-level absences on individual-level standardized testing achievement (reading and math) in an urban school district.
Population/Participants/Subjects: The dataset compiled for this study is multilevel and longitudinal and comprises five elementary school cohorts within the School District of Philadelphia, for a total of N=20,932 student observations over three academic years. Individual student records were linked to teacher, classroom, and school administrative data as well as to census residential-block neighborhood information.
Research Design: This study combines secondary data analyses and quasi-experimental methods. This study begins with a baseline, linear model of achievement, where the dependent variables are Stanford Achievement Test Ninth Edition (SAT9) reading and math scores. To address issues pertaining to omitted variable bias, this study employs three methods: fixed effects, value-added, and instrumental variable models.
Findings: Consistently across all methods employed in this study, the results indicate a potentially causal, detrimental effect of absences on both reading and math standardized achievement. The effects remain significant even after accounting for additional student, neighborhood, teacher, classroom, and school factors.
Conclusions/Recommendations: This study demonstrates that after accounting for omitted variable bias in multiple capacities, the negative relationship between absences and achievement is even more detrimental than reported in previous research. With a more detailed and realistic prediction of the negative ramifications of missing school, it now becomes possible to develop data-driven educational policy.
In recent years, renewed attention has been paid to the negative effects of school absences on student success. Part of this attention may have been driven by advances in educational scholarship. As a consequence of greater district record keeping as well as the establishment of federal longitudinal datasets, recent studies on missing school have had the luxury of exploring large-scale data and of utilizing more rigorous empirical techniques (e.g., Gottfried, 2009, 2011a; Neild & Balfanz, 2006; Ready, 2010). Part of this interest may also have been driven by national policy: In conjunction with the enactment of No Child Left Behind (NCLB), Adequate Yearly Progress (AYP) formulae for elementary schools oftentimes include absence rates to gauge school success (U.S. Department of Education, 2002). Hence, there appears to be an alignment between researchers and policymakers in the belief that absences are in some fashion linked to youth outcomes.
The message underlying these voiced concerns surrounding missing school, however, is undeniably grim: Increased patterns of school absences have been shown to have a pervasive, detrimental relationship to multiple areas of student development and can be connected specifically to academic, sociological, and economic ramifications. As such, determining the drivers and consequences of missing school has appeal to an interdisciplinary set of scholars and policy makers, including educational researchers, sociologists, psychologists, and economists, among others.
Academically, absent students miss in-school instructional time, thereby increasing the propensity to score more poorly on assessments (Chen & Stevenson, 1995; Connell, Spencer, & Aber, 1994; Finn, 1993; Nichols, 2003). In fact, a greater number of school absences has been linked to negative achievement outcomes on multiple measures of assessment (Dryfoos, 1990; Finn, 1993; Gottfried, 2009; Lehr, Hansen, Sinclair, & Christenson, 2003; Stouthamer-Loeber & Loeber, 1988). Moreover, student absences in one year have been shown to have consequences on future years of assessment (Dryfoos, 1990; Finn, 1993; Lehr et al., 2003; Stouthamer-Loeber & Loeber, 1988). In addition to concurrent and future negative correlations to achievement, having consistently high levels of absences has been linked to other indicators of academic decline, including grade retention (Neild & Balfanz, 2006) and eventual high school dropout (Rumberger, 1995; Rumberger & Thomas, 2000).
Sociologically, students who frequently miss school tend to exhibit greater behavioral issues when present at school, including disengagement and alienation (Ekstrom, Goertz, Pollack, & Rock, 1986; Finn, 1989; Johnson, 2005; Newmann, 1981). This heightened feeling of alienation from peers and teachers may not only inhibit student success for the individual student but may also detract from the outcomes of his or her classmates (Gottfried, 2011b; Lazear, 2001). Students who are frequently absent from school have also been shown to have higher tendencies to engage in both current and future health risk behaviors, such as smoking and alcohol and drug use (Halfors et al., 2002; Wang, Blomberg, & Li, 2005).
Students who are absent have a higher probability either to be grade retained or to drop out of school altogether. Hence, students who miss school more frequently also tend to face greater future economic hardships, such as unemployment (Alexander, Entwisle, & Horsey, 1997; Broadhurst, Patron, & May-Chahal, 2005; Kane, 2006). Indeed, these academic, sociological, and economic ramifications associated with missing school are exacerbated for students in urban districts (Balfanz & Legters, 2004; Fine, 1994; Orfield & Kornhaber, 2001).
It is evident that a significant, interdisciplinary body of research consistently upholds the fundamental premise that missing school negatively relates to multiple gauges of student educational and life success. This study contributes new research on the first of these gauges: academic achievement. Thus far, it has been an empirical challenge in educational research to estimate the true relationship between absences and achievement. One key issue in estimating with precision arises from omitted variable bias. Oftentimes, unobserved factors, such as school policies or student motivation, can simultaneously influence both the frequency of student absences and student achievement levels. If these unobserved relationships are not incorporated properly into an empirical model, then the estimates of absences remain biased and policy becomes difficult to design and implement.
This study addresses these issues by contributing unique analyses aimed at reducing both time-invariant and time-varying unobserved biases. It does so by employing fixed effects, value-added, and instrumental variable models for multiple cohorts of students in a large urban school district. Hence, with the sample and methods employed in this study, this present research contributes a more accurate understanding of the extent to which absences negatively affect schooling success. The findings are particularly crucial for policymakers and practitioners in schools serving urban minority youth; this population of students is especially vulnerable to the negative ramifications of missing school.
The literature relevant to this study is empirical, uses large-scale data, and assesses the relationship between absences as a predictor and educational success as an outcome. In an early yet seminal work, Summers and Wolfe (1977) employed an economic model of educational attainment (commonly known as the education production function) and estimated the prediction of absences on middle-school standardized testing performance. Using cross-sectional data analyses, they found a negative correlation between a student's number of absences and individual-level achievement. While their dataset contained observations from the 1970-1971 school year, the authors' findings underscored the importance of quantifying the effects of missing school. Moreover, the authors' motivation for assessing large-scale district data remains relevant in discussing policy implications in current research: With their data and methods, they were among the first to be able to draw conclusions that applied to the educational outcomes of entire student populations in large urban school districts.
More recently, several key empirical research studies have been conducted. Like Summers and Wolfe (1977), Balfanz and Byrnes (2006) evaluated the prediction of absences on educational success for middle school students. The context of their research study was based on examining the effects of comprehensive school reform models aimed at closing the math achievement gap in urban middle schools. Among the range of covariates predicting math improvement were patterns of missing school. The findings indicated that there was a 20% difference in the probability of successful math performance for a student who had a 40% absence rate compared to a student who did not miss a single day of school.
Neild and Balfanz (2006) focused on high school students in their analyses of missing school and academic success. With a cross-sectional dataset of students in the 1999-2000 academic year, the authors employed logistic regression modeling to predict what influences the probability of nonpromotion from 9th to 10th grade. Within their findings, they established that for each additional percentage point decline in attendance in eighth grade, the probability of having to repeat ninth grade increased by five percent. Though their study focused on high school, their research nonetheless supports the premise that higher patterns of missing school in one year can forecast diminished schooling outcomes in future years.
Several recent studies have examined the relationship between absences and achievement at the elementary school level. Gottfried (2009) distinguished the academic implications for students with varying rates of excused and unexcused absences in a school year. The findings indicated that for any given number of missed school days, students with increasingly higher rates of unexcused absences tended to have worse academic outcomes than their peers with increasingly higher rates of excused absences.1 These results are noteworthy in providing new evidence that there are significant relationships between missing school and school outcomes early in education.
Finally, Gottfried (2011a) utilized a quasi-experimental methodology and relied on a sample of elementary school siblings to examine the effect of absences on standardized achievement. By employing family fixed effects, the study controlled for unobserved, time-varying family factors that may have influenced prior estimates of the effect of absences on achievement. The results indicated negative statistical relationships between missing school and test scores; using more robust empirical techniques like fixed effects modeling revealed the relationship between absences and achievement to be even more detrimental. Hence, the case for utilizing quasi-experimental methodologies was made evident in the study, and the findings have facilitated the opportunity for further in-depth analyses into quantifying the nature of the absence-achievement relationship.
This current investigation follows in the direction of these recent studies in its effort to utilize rigorous empirical methods to control for unobserved factors that may be influencing the relationship between absences and achievement. The ultimate goal is to move toward a causal estimate of the effect of missing school on educational attainment. With this in mind, this study will advance the literature on absences in three significant ways.
First, this research evaluates the predicted effect of student absences on achievement while also employing a wider range of control variables than had been previously used. In particular, covariates are organized at the student level (i.e., student and student neighborhood characteristics), at the classroom level (i.e., teacher and peer covariates), and at the school level (i.e., school measures). In examining the effect of absences, the multilevel analyses to follow are not only relying on a commonly accepted set of predictors of achievement but are also considering how new variables may contribute to an explanation of the variance in standardized test performance.
Second, this study contributes to the field by utilizing a school fixed effects framework on a large-scale administrative dataset, a method highly supported in quantitative educational research by Schneider et al. (2007) as well as in absence research (e.g., Gottfried, 2009, 2011a). Employing this framework allows the empirical models in this study to account for unobservable school influences that may otherwise have been confounding the estimates of absences. Additionally, this study employs a value-added model of achievement in the analyses, as there are confounding individual-level influences that may bias the estimates of absences. There is a key benefit of including a lagged achievement score: Under the assumptions of the model, it is no longer necessary to incorporate a full historical panel of information on any particular student.
Third, this study employs a quasi-experimental methodology to study absences that extends beyond a sample of siblings (e.g., Gottfried, 2011a): one that can examine full cohorts of elementary school students. In doing so, however, a model other than family fixed effects is required to address the issue of confounding variables. That is, even after controlling for unobserved, concurrent school or individual-level factors through school fixed effects or value-added models, there may still remain unobserved factors affecting both absences and achievement. As one example of the many potential unobserved factors, it might be hypothesized that unobserved parental involvement in the current school year may simultaneously influence both a student's number of absences (i.e., the independent variable) and his or her test scores (i.e., the dependent variable). This study addresses this confounding empirical issue by implementing an instrumental variables strategy. That is, if it were possible to find a measure that embodies an exogenous source of variation affecting only student absences but not achievement, then this quasi-experimental approach would be appropriate (Schneider et al., 2007).
This study proposes such a variable: the number of nurses in a school in the current academic year. This variable is compelling because absences tend to decline as the number of nurses at a school increases. Previous research confirms this relationship between having a greater number of school nurses (i.e., in some cases, greater than zero) and reduced student absences (Allen, 2003; Guttu, Engelke, & Swanson, 2004; Weismuller, Grasska, Alexander, White, & Kramer, 2007). Indeed, research in school nursing and student health agrees that fewer children are absent when nurses are available at school.
On the other hand, nurses in a strictly health-related role are non-instructional school personnel. Hence, it is not evident that nurses should have a direct effect on standardized achievement. Additionally, it is not clear that the number of school nurses should have a relationship with any of the other independent variables to be described in the following section. Rather, the influence of nurses appears to be exclusive to student absences. With these relationships, the analysis presented in this study allows for the estimation of a potentially causal relationship between absences and achievement.
This article relies on a comprehensive dataset that was compiled uniquely for the purposes of this study. The dataset contains student, neighborhood, teacher, classroom, grade, and school observations for the entire elementary school system within the School District of Philadelphia over three complete academic years. Student, teacher, classroom, and school data were obtained from the School District of Philadelphia via the District's Student Records department and through the District's Personnel department. Student residential neighborhood data were obtained from the United States Census flat files at the census block level.
Table 1 presents descriptive statistics on the sample of students over the three years of observations. This analytical sample consists of a total of N = 20,932 observations within 175 public, neighborhood schools with elementary grades. The sample in this study is restricted to third- and fourth-grade observations because students were included in the analyses only if data exist on their current and lagged standardized achievement tests in reading or math or both. Students in this dataset only have standardized testing information for second, third, and fourth grades. Furthermore, to be included in the sample, data must have also existed for other measures, including gender, race, academic indicators (some include lagged information), classroom and teacher characteristics, assignment information (room, grade, and school identifiers), and neighborhood information.2
Each student is observed by classroom, grade, and school assignment for every academic year that the student is in the Philadelphia School District. Student observations are no longer available in the dataset once students exit the school district. That being said, students in the district keep their unique student identification number in the District's record system; hence, if students return to the Philadelphia School District, they can be matched back to their original record. Because of this intricate tracking mechanism of incoming, outgoing, and returning students, the dataset in its entirety includes entire cohorts of elementary students in the Philadelphia public school system.
There are two outcomes in this study: the normal curve equivalent (NCE) test scores from both the reading and math sections of the Stanford Achievement Test Ninth Edition (SAT9). The NCEs are the generally preferred measurement for methodological reasons; they have statistical properties that allow for evaluating achievement over time (Balfanz & Byrnes, 2006). Normal curve equivalents range in value from 1 to 99.
All independent variables are also presented in Table 1. To begin, absences are the key measures in this study. First, absence variables for each student observation include number of absent days per year. On average, a student was absent approximately 11 days in a given academic year. Yearly measures of absences are provided for each student and academic year: It is not possible from the data to determine when in the year students were absent. Moreover, details are not provided on the reasons for specific absences. Finally, suspension is always considered an absence in the district. From the dataset, however, it is not possible to distinguish suspension from other types of absences. Future research may entail examining a dataset in which this distinction is made.
The second key measure utilized in this study is the number of full-time school nurses, which is used as an instrument in a two-stage least squares analysis, described in the following section. The average number of full-time nurses in a school is 0.33, which implies that most schools do not have full-time nurses, per se, but rather have part-time nurses (accounting for one third of school-year time). In the dataset, some schools have 0 nurses, while others have 1.4 school nurses. However, the correlation between the number of nurses and school size (proxied by grade size, as total school size was not available) is only 0.03, implying that there is little correlation between school size and the number of nurses in the elementary schools in this district. There is also little correlation between the number of nurses and total school budget: a correlation value of -0.02. Hence, relatively richer schools in the district do not seem to be more likely to have a greater number of nurses, a result that otherwise would have biased the analysis between absences and achievement.
Table 2 presents these two correlations and their significance values, as well as correlations and significance values for all other variables included in this study. The results in this table indicate very little correlation between any of the variables and the number of nurses in the school; the correlation coefficients tend toward a value of 0 in each row. Hence, there is nothing systematic about the relationships between having a school nurse and the characteristics of the students and their neighborhoods, teachers, classrooms, grade size, or school budget, and none of the correlation coefficients suggest a significant systematic relationship that would bias the analysis.
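As an illustration of the kind of instrument screen that Table 2 summarizes, the following sketch computes pairwise correlations between a candidate instrument and other school-level covariates. The data here are simulated and every variable name is hypothetical; the actual Table 2 values come from the district records.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 175  # one row per school, mirroring the 175 schools in the sample

# Simulated school-level data (hypothetical names and values)
schools = pd.DataFrame({
    "nurses": rng.choice([0.0, 0.5, 1.0, 1.4], n),
    "grade_size": rng.normal(90, 20, n),
    "budget_per_student": rng.normal(8000, 1000, n),
})

# Pairwise correlations of the candidate instrument with the covariates;
# values near zero suggest no systematic relationship that would bias
# the instrumental variables analysis
corr = schools.corr()["nurses"].drop("nurses")
print(corr.round(2))
```

Because the simulated covariates are generated independently of the nurse count, the correlations hover near zero, which is the pattern the article reports for the actual district data.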
For every student in a given academic year, the dataset contains demographic information for student characteristics, such as gender and race. Additional student identifiers include indicators for special education status, English language learning status, and whether or not the student has a behavior problem, as determined by his or her behavior grade from the previous academic year.
Data at the student level of analysis also contain a vector of neighborhood information. The empirical model contains four attributes that describe the census block on which the student resides. They include the percentage of a student's census block that is White, the percentage of a student's block at or below poverty, the household vacancy rate for the block, and the block's average household income. While the former three attributes are calculated as percentages, the latter is evaluated as a natural logarithm. Note that in the absence of other direct measures of family data, neighborhood information can serve in empirical models as proxies for family (and economic) background (e.g., Hanushek, Markman, Rivkin, & Kain, 2003), as they are based on direct observation of family and neighborhood characteristics (e.g., household and census block incomes).
Data on teachers are sourced both from student records and from the District's Personnel department. The student record provides the name of the teacher assigned to a student's classroom in the given academic year. In addition, a detailed dataset was obtained from the District's Personnel department. For the purposes of this study, teacher academic variables are utilized. Teacher educational history variables include Bachelor's degree school code and name and Master's degree school code and name. Having this information allows for a dichotomous designation indicating whether a student's teacher had acquired a Master's degree.
Two classroom measures are included. First, to control for the possibility that high rates of absences may occur in larger rooms, class size is included in the set of independent variables in this study. The average class size is approximately 28 students. Additionally, Gottfried (2011b) finds that students whose peers are absent more frequently have lower achievement levels. As such, the average number of classroom peer absences is calculated for each student in the room. To construct the average number of peer absences for each student, the peer absence variables for student i do not include student is own measure of absences. In other words, the effect of absent peers does not contain a students own unique absence information, but instead only incorporates the absences measures of this students classmates. As such, every student with absences will have a slightly different value for classroom peer effects, depending on his or her unique individual record.
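The leave-one-out construction described above can be sketched as follows. This is an illustrative sketch, not the District's code: the column names and the single four-student classroom are hypothetical.

```python
import pandas as pd

def peer_absence_means(df):
    """Mean absences of each student's classmates, excluding the
    student's own absences (a leave-one-out classroom mean)."""
    g = df.groupby("classroom")["absences"]
    total = g.transform("sum")   # classroom total, repeated per student
    n = g.transform("count")     # classroom size, repeated per student
    return (total - df["absences"]) / (n - 1)

# Hypothetical classroom of four students
df = pd.DataFrame({
    "classroom": ["A", "A", "A", "A"],
    "absences":  [4.0, 8.0, 0.0, 12.0],
})
df["peer_abs"] = peer_absence_means(df)
print(df)
```

Because each student's own absences are subtracted from the classroom total before averaging, two classmates with different absence counts receive different peer-absence values, exactly as described above.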
Finally, two school-level measures are included. The first is the number of students per grade. Because aggregate school size information is not available for each school in a given year, this study relies on a proxy measure: the number of students in a given grade. It seems logical that a larger grade will imply a larger school in general. Second, a measure of school financial capacity is included in the model as total school budget dollars per student each year (included in the analysis as the natural logarithm). Having this covariate in the model allows this study to control for the fact that larger budgets may be related to having the ability to bring particular programs or personnel into the school or the capacity to make school improvements. Controlling for school budget thus holds school finances constant.
Starting with a Baseline Model
This study begins with a linear model of educational attainment based on the education production function (e.g., Summers & Wolfe, 1977). This model, with its roots in both the economics of education and the sociology of education literatures, is typically employed in empirical educational research as a way of evaluating schooling input factors and measured student outcomes (in this study, these outcomes are standardized testing achievement). The linear representation of the education production function in this study is presented as follows:
Aijgkt = β0 + β1Mit + β2Iit + β3Nit + β4Tjgkt + β5Cjgkt + β6Skt + εijgkt (1)
where the dependent variable is indicated by A, standardized test performance (either reading or math) for student i in classroom j in grade g in school k in year t. On the right-hand side of the equation, there are six sets of independent variables. M is the number of school days that a student missed in a given school year (i.e., the key measure of absences in this study). At the student level, two additional sets of independent variables are included in the model: I, a vector of individual student characteristics (i.e., gender, race, special education status, ELL status, and behavior issue status); and N, student residential neighborhood census block characteristics (i.e., the percentage of the block that is White, the percentage at or below the poverty level, the vacancy rate, and mean block income).
At the classroom level, the model contains two sets of variables: C are classroom characteristics (i.e., class size and classmate rate of absences) for classroom j in grade g in school k in year t, and T are teacher characteristics (i.e., gender and degree) for classroom j in grade g in school k in time period t. At the school level, the model contains one set of variables, S, for a given school in a given academic year (i.e., a proxy for school size and total budget in that year). Finally, the error term ε includes all unobserved determinants of achievement and accounts for the fact that students in a single classroom share similar unobserved experiences (i.e., classroom-level clustering).
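As a concrete illustration of estimating a production function like equation (1), the following sketch fits a heavily simplified version by ordinary least squares on simulated data. Only three stand-in covariates are included, and all names and parameter values are hypothetical; this is not the study's estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated stand-ins for the model's inputs (all hypothetical)
absences = rng.poisson(11, n)        # M: days missed (district mean ~11)
female = rng.integers(0, 2, n)       # I: one student characteristic
class_size = rng.normal(28, 3, n)    # C: one classroom characteristic

# Simulated achievement with a built-in negative absence effect of -0.4
score = (50 - 0.4 * absences + 2.0 * female
         - 0.1 * class_size + rng.normal(0, 5, n))

# Baseline linear model: regress A on an intercept plus the covariates
X = np.column_stack([np.ones(n), absences, female, class_size])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta[1])  # estimated absence coefficient
```

With no omitted confounders in the simulation, the ordinary least squares estimate of the absence coefficient recovers the simulated effect; the sections that follow address what happens when confounders are present.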
Accounting for Unobserved Heterogeneity: Part 1 of 3
One issue that arises with the empirical specification as laid out so far is that there may be unobserved school factors that are correlated with student absences (the key independent variable in this study) and with student achievement (the dependent variable in this study). As an example, schools that uphold rigorous standards for academic achievement might also be more likely to uphold more stringent student absence policies. If this were the case, the baseline model would underestimate the negative influence of missing school, as the unobserved school environment would be contributing simultaneously to boosting student achievement and to mitigating student absences.
As a result, a second linear specification is employed, one that incorporates school-level fixed effects into the equation:
Aijgkt = β0 + β1Mit + β2Iit + β3Nit + β4Tjgkt + β5Cjgkt + β6Skt + δk + εijgkt (2)
where δk are delineated as school fixed effects for school k. In terms of modeling, school fixed effects are a set of binary variables for each school that indicate if the student has attended a particular school (for each school variable in the dataset, 1 = yes; 0 = no); this set of school indicator variables leaves out one school as the reference group (this is similar to creating indicator variables for race, where one racial category is left out as the reference group).
It is important to employ school fixed effects, δk, because they account for unobserved influences at the school level by controlling for institutional differences across each school. In holding constant those unobserved, time-invariant, school-specific characteristics, such as a school's educational investments (as in the example preceding equation (2)), school-wide practices, and absence policies, the principal source of variation used to identify the effect of absences occurs between classrooms within each school. In other words, by controlling for unobserved, school-level factors and by utilizing classroom clustering in the error structure, school fixed effects allow for a focus on within-school variation.
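A sketch of the dummy-variable implementation on simulated data may help. Here three hypothetical schools have school-specific intercepts (standing in for δk) that are correlated with absences, so the pooled estimate is biased while the fixed-effects estimate is identified from within-school variation. All names and values are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"school": rng.choice(["S1", "S2", "S3"], n)})

# Unobserved school environment: higher-performing schools also enforce
# stricter absence policies, confounding the pooled estimate
effect = df["school"].map({"S1": 45.0, "S2": 50.0, "S3": 55.0})
shift = df["school"].map({"S1": 3.0, "S2": 0.0, "S3": -3.0})
df["absences"] = rng.poisson(11, n) + shift
df["score"] = effect - 0.4 * df["absences"] + rng.normal(0, 5, n)

# Pooled OLS: the school environment biases the absence coefficient
Xp = np.column_stack([np.ones(n), df["absences"]])
b_pooled, *_ = np.linalg.lstsq(Xp, df["score"], rcond=None)

# School fixed effects as indicator columns, one school as the reference
dummies = pd.get_dummies(df["school"], drop_first=True, dtype=float)
Xf = np.column_stack([np.ones(n), df["absences"], dummies])
b_fe, *_ = np.linalg.lstsq(Xf, df["score"], rcond=None)
print(b_pooled[1], b_fe[1])  # fixed effects recover the within-school effect
```

Dropping one school's indicator as the reference group mirrors the dummy-coding convention described above for race categories.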
Accounting for Unobserved Heterogeneity: Part 2 of 3
Even after accounting for the unobserved school environment, the estimate of β1 may still remain biased. The reason is that there are many potential current and past unobservable, individual-level factors that may be influencing both how frequently students miss school and, additionally, how students perform on achievement tests. As one example of a plausible unobserved, student-level factor, a high level of student motivation might be related simultaneously to a student missing fewer days of school and to higher testing performance. Hence, the effect of absences on achievement would be underestimated in this example. This next step in the analysis is an attempt to mitigate any potential under- or overestimation in the estimate of β1, using a value-added (or lagged model) strategy. This particular strategy operates under the assumption that unobservable, student-level influences on achievement are time-invariant. In this case, if the attributes of a student's unobserved environment (e.g., motivations, family life) are consistent over time, then this model would more accurately estimate the relationship between attendance and achievement.
The conceptualization behind this strategy lies in the construction of the value-added specification, which begins with a hypothetical historical model of academic achievement. Conceptually, the historical model assumes that current achievement (i.e., the dependent variable) is a function of all current-year and all past-year factors (i.e., independent variables) related to schooling. However, acquiring all required variables to estimate an equation that incorporates all aspects of a student's educational history remains a difficult, if not impossible, empirical task. A commonly accepted solution to this problem is to take the difference of the historical model of equation (1) with respect to year t, the current year of schooling, and the historical model with respect to year t-1, the previous year of schooling. The result is known as the value-added specification, where all input requirements reduce to current inputs plus achievement from the t-1 period:
Aijgkt = β0 + β1Mit + β2Iit + β3Nit + β4Tjgkt + β5Cjgkt + β6Skt + β7Aijgk(t-1) + δk + εijgkt (3)
The result of subtracting the historical model of equation (1) with respect to t-1 from a model with respect to t is the removal of the empirical requirement to directly estimate historical measured and (key to this method) unmeasured influences on student achievement. Rather, because equation (3) contains a measure of lagged achievement as an independent variable, this covariate is assumed to difference out the omitted, time-invariant influences for student i, thereby leaving only current variables to be estimated. Thus, biases that were created by omitted historical variables (e.g., unwavering motivation or family life) only bias the estimated coefficient of lagged achievement (Hanushek et al., 2003; Zimmer & Toma, 2000) and not any of the current independent variables. As such, the estimate of the effect of student absences on achievement will be more robust.
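A small simulation can illustrate the logic of the value-added specification: when a time-invariant unobservable (labeled "motivation" here, purely as a hypothetical) drives both absences and scores, omitting it biases the absence coefficient, while including the lagged score as a proxy moves the estimate back toward the simulated true effect. This is a sketch under strong assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Time-invariant unobservable: raises scores and lowers absences in
# both years (all parameter values are hypothetical)
motivation = rng.normal(0, 1, n)
absences_t = rng.poisson(11, n) - (2 * motivation).round()
lag_score = 50 + 5 * motivation + rng.normal(0, 2, n)
score_t = 50 + 5 * motivation - 0.4 * absences_t + rng.normal(0, 5, n)

# Naive model omits motivation: the absence coefficient absorbs its
# influence and is biased away from the simulated -0.4
Xn = np.column_stack([np.ones(n), absences_t])
b_naive, *_ = np.linalg.lstsq(Xn, score_t, rcond=None)

# Value-added model: lagged achievement proxies the omitted factor,
# pushing any remaining bias onto the lag coefficient
Xv = np.column_stack([np.ones(n), absences_t, lag_score])
b_va, *_ = np.linalg.lstsq(Xv, score_t, rcond=None)
print(b_naive[1], b_va[1])
```

The lagged score does not remove the bias entirely here because it is a noisy proxy for the unobservable, but the value-added estimate sits visibly closer to the simulated effect than the naive one.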
Accounting for Unobserved Heterogeneity: Part 3 of 3
Even when employing school fixed effects and lagged individual achievement scores, these models have been constructed under the assumption that unobserved variables are time-invariant. However, there can also be unobservable factors that are time-variant (i.e., occurring in the current school year), and the implementation of the prior three models would not necessarily remedy the problem of estimating the effect of absences. As an example, the prior models do not allow for a reduction in biases resulting from concurrent unobservable influences in year t, such as this year's unique effort level or motivation. Thus, despite the use of a lag and other fixed effects, the relationship between student absences and achievement may still reflect the impact of omitted factors. It is necessary to turn to an estimation strategy that is more robust with respect to the influences of time-variant, omitted-variable bias. In this study, in conjunction with the baseline, fixed effects, and value-added models, an instrumental variables strategy is employed.
An instrumental variables estimation strategy involves a two-stage process, in which there is a unique regression equation for each stage. Rather than an immediate evaluation of the relationship between absences and achievement in a regression model, there is an intermediate, first stage in the analysis. In the first stage, the number of days absent is now the dependent variable, and achievement scores are not part of the analysis quite yet.
The independent variables include all other variables designated in equations (2) and (3) (the analysis is run twice within each testing subject, once without a lagged achievement score and once with the lagged achievement score). In addition, a new independent variable is included that is unique to the first-stage analysis and has not been discussed up to this point: the instrument. This instrument is included as an independent variable in this first stage. The instrument, or exogenous independent variable, is the number of nurses at a school. According to the empirical literature, an instrument cannot be directly correlated with achievement (i.e., the outcome in the second stage) except through a direct relationship with the key independent variable (Greene, 2000). As presented above in Table 2, the correlation between the number of school nurses and reading achievement is 0.01, and the correlation between the number of school nurses and math achievement is 0.03. Hence, there is virtually no relationship between the number of school nurses and the final outcome of interest. Rather, as the school nursing literature suggests (Allen, 2003; Guttu, Engelke, & Swanson, 2004; Weismuller, Grasska, Alexander, White, & Kramer, 2007), the influence of nurses is driven through school absences.
The first stage regression model is presented as follows:
Mit = b0 + b1NURSEit + b2Z + δk + εigjkt (4)
where NURSEit is the instrumental variable, in this case the number of nurses per school per year. Z represents all additional independent variables that were used in previous equations in this study. The final two terms are the school fixed effects described in equation (2) and the error term.
In the second stage, the dependent variable is once again achievement and is regressed on fitted values of absences based on the first stage regression and all additional independent variables, either from equation (2) or equation (3) (depending on whether lagged achievement was or was not included in the first stage). This equation is known as the structural equation, in which Mit is the endogenous predictor of interest. One assumption made in this method is that the instrumental variable, NURSEit, is uncorrelated with any omitted variables. Consequently, the second-stage predicted value of absences is also uncorrelated with omitted variables as a result of implementing stage one (Greene, 2000). In other words, the bias in the estimation of the relationship between absences and standardized achievement that may have arisen previously from the exclusion of any omitted variables is potentially removed with the use of an instrument.
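The two-stage logic can be sketched with synthetic data. In the hypothetical data-generating process below, the instrument (nurses) shifts absences but affects achievement only through absences, while an unobserved concurrent factor moves both absences and achievement and attenuates plain OLS; every coefficient and variable here is an illustrative assumption, not one of the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8000

# Hypothetical setup: nurses shift absences but touch achievement only
# through absences; an unobserved concurrent factor u moves both absences
# and achievement, biasing plain OLS toward zero in this construction.
nurses = rng.integers(0, 3, n).astype(float)      # instrument
u = rng.normal(size=n)                            # concurrent unobservable
absences = 10.0 - 1.5 * nurses + 1.0 * u + rng.normal(0, 1, n)
beta_true = -0.5
score = beta_true * absences + 1.0 * u + rng.normal(0, 1, n)

def ols(y, x):
    X = np.column_stack([np.ones(n), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = ols(score, absences)[1]                   # biased by u

# Stage 1: regress the endogenous variable (absences) on the instrument.
a0, a1 = ols(absences, nurses)
absences_hat = a0 + a1 * nurses                   # fitted values
# Stage 2: regress achievement on the fitted values.
b_iv = ols(score, absences_hat)[1]
# b_iv recovers beta_true; b_ols is attenuated by the confound.
```

Because the direction of the confound here was chosen to attenuate OLS, the sketch also mirrors the paper's later finding that the instrumental variables estimates are larger in magnitude than the uninstrumented ones.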
To judge the quality of using NURSEit as an instrument in this analysis, it is important to examine the relationship between school nurses and school finances in more detail, as it is possible that the number of school nurses might be directly related to larger school budgets. However, examining Table 2, the correlation coefficients presented suggest almost zero correlation between number of nurses and school budget: a correlation value of -0.02. In fact, the coefficients indicate extremely low correlations between number of school nurses and all other variables as well. This suggests that there is nothing necessarily unique about being in a school with a greater number of school nurses.
Table 3 presents an alternative view of the relationship between the number of school nurses and the budget of a school. To arrive at the figures presented in the table, schools in the sample were ranked and assigned to budget quartiles based on the annual size of each school's budget.3 Once budget groups were assembled, intraclass correlation coefficients (ICCs) were calculated for the number of school nurses per school and all other independent variables in the analysis. Notably, the ICCs are extremely small throughout the table, highlighting a value of 0.02 for the number of school nurses. This implies, then, that there is a significant amount of heterogeneity in the number of school nurses across all schools in the district rather than a clustering of school nurses in specific schools based on budget size.
Hence, schools in any budget quartile are not similar to each other in terms of the number of school nurses. Rather, there is a diversity throughout the sample of schools that is not determined uniquely by budget.
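A minimal sketch of the intraclass-correlation diagnostic follows, under the assumption of a hypothetical district in which nurse counts are unrelated to budget. The one-way ANOVA ICC(1) formula is standard; the school counts, budget range, and nurse counts are simulated, not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical district: 200 schools whose nurse counts are unrelated to budget.
budget = rng.uniform(1.8e6, 2.6e6, 200)
nurses = rng.integers(0, 3, 200).astype(float)
cuts = np.quantile(budget, [0.25, 0.5, 0.75])
quartile = np.digitize(budget, cuts)              # budget group 0..3

def icc1(values, groups):
    """One-way ANOVA intraclass correlation: share of variance between groups."""
    labels = np.unique(groups)
    k = len(labels)
    n_bar = len(values) / k                       # groups are (roughly) balanced
    grand = values.mean()
    msb = sum(len(values[groups == g]) * (values[groups == g].mean() - grand) ** 2
              for g in labels) / (k - 1)
    msw = sum(((values[groups == g] - values[groups == g].mean()) ** 2).sum()
              for g in labels) / (len(values) - k)
    return (msb - msw) / (msb + (n_bar - 1) * msw)

icc = icc1(nurses, quartile)
# With no true clustering of nurses by budget, icc hovers near zero,
# analogous to the small ICCs reported in Table 3.
```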
BASELINE MODEL ANALYSIS
Table 4 presents all results pertaining to the baseline, fixed effects, and lagged-outcome/value-added models for both reading and math standardized achievement outcomes. The values in each table represent regression equation coefficients and the associated Huber-White/sandwich robust standard errors, adjusted for classroom clustering, in italicized parentheses underneath. Because students are nested in schools by classroom and hence share common but unobservable characteristics and experiences, clustering student data by classroom provides a corrected error term given this non-independence of individual-level observations.
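For readers unfamiliar with cluster-robust standard errors, the following sketch computes the Huber-White/sandwich estimator by hand on synthetic classroom-clustered data. The classroom counts, coefficients, and variance components are hypothetical and use no values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cls, per = 100, 20                         # 100 classrooms, 20 students each
cls = np.repeat(np.arange(n_cls), per)

# Both the regressor and the error share a classroom-level component, so
# observations within a classroom are not independent.
x = rng.normal(size=n_cls)[cls] + rng.normal(size=n_cls * per)
u = rng.normal(size=n_cls)[cls] + rng.normal(size=n_cls * per)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)

# Conventional OLS standard errors (assume i.i.d. errors).
se_ols = np.sqrt(np.diag(bread) * (resid @ resid) / (len(y) - X.shape[1]))

# Cluster-robust (Huber-White/sandwich) standard errors by classroom.
meat = np.zeros((2, 2))
for g in range(n_cls):
    s = X[cls == g].T @ resid[cls == g]      # per-cluster score vector
    meat += np.outer(s, s)
se_cluster = np.sqrt(np.diag(bread @ meat @ bread))
# se_cluster exceeds se_ols here, reflecting within-classroom dependence.
```

The inflation of the clustered standard errors relative to the conventional ones is exactly why the error term must be corrected when students are nested in classrooms.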
The first model under each testing outcome strictly incorporates one independent variable, namely the number of days absent in a given school year. In other words, there are no other covariates. The second model includes all covariates, as described by equation (1). The baseline results (i.e., models 1 and 2 under each testing subject) suggest several noteworthy findings. First, the relationship between absences and test performance is negative and statistically significant. Hence, a greater number of missed school days is correlated with lower testing performance in both reading and math.
Second, in moving from model 1 to model 2, the results show that statistically controlling for student, neighborhood, teacher, classroom, and school characteristics increases the explanatory power of the model, as evidenced in the increase in the value of R2. While the size of the absence coefficient becomes smaller in model 2, the effect remains unequivocally negative and statistically significant. Effect sizes in this study are defined as the standardized regression coefficient (β). Effect sizes allow regression coefficients to be interpreted such that a one standard deviation change in the number of absences corresponds to a given standard deviation change in testing performance. In reading, the effect size decreases from -0.14σ to -0.11σ between models 1 and 2 and from -0.17σ to -0.13σ in math. These effects indicate that the negative relationship between absences and achievement is slightly stronger in math. Regardless of the reduction in magnitude, the baseline results across both testing subjects indicate that student absences show initial signs of a detrimental relationship with achievement and hence merit further empirical exploration.
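The conversion from a raw coefficient to a standardized effect size is simple arithmetic; the following sketch uses made-up values (not the study's tabled estimates) to show it.

```python
# Converting a raw regression coefficient into a standardized effect size.
# All numbers here are hypothetical, not taken from the study's tables.
beta_raw = -0.9           # score points lost per additional day absent
sd_absences = 5.0         # sample SD of days absent
sd_score = 40.0           # sample SD of the test score

effect_size = beta_raw * sd_absences / sd_score
# A one-SD increase in absences predicts a 0.1125-SD drop in scores.
```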
FIXED EFFECTS AND VALUE-ADDED MODELS
To derive more robust estimates of the effect of student absences on achievement, this study builds upon the baseline models by employing both fixed effects and value-added (i.e., lagged-outcome) approaches, as described previously. Continuing left-to-right, Table 4 presents these results for both achievement outcomes. Each column builds upon the model presented directly before it. That is, the school fixed effects model incorporates all covariates in model 2 of the table. The final model incorporates lagged achievement as well as school fixed effects and all covariates. Hence, each new column accounts for an additional dimension of the analysis by accounting for a greater possibility of unobserved factors that might be influencing the empirical estimate.
There are several noteworthy points to make based on the results in the last two columns within each testing subject; these points are consistent across both reading and math tests. First, incorporating fixed effects and then a lagged achievement measure greatly increases the explained portion of the variance in achievement, as seen by the upward movement of the value for R2. The greatest improvement in the explained portion of the total variance can be seen between model 3 and model 4, in which a lagged measure of achievement is incorporated into the regression. This large increase in R2 is consistent with the derivation of the empirical model (see equation (3)). Recall that by including a one-year lagged measure of achievement as an independent predictor, this lag is assumed to capture historical information about a student (Hanushek et al., 2003; Zimmer & Toma, 2000).
Second, the coefficients on absences remain negative and statistically significant across all four models within each testing subject. Indeed, incorporating both school fixed effects in model 3 and then a lagged measure of achievement in model 4 has reduced the size of the coefficient as well as the effect sizes. This is logical, however: The more rigorous empirical models show that there are unobservable school and individual effects that were previously influencing the estimates of the absences. This led to an upward bias of the estimate in the baseline models.
Third, the implications of the estimates of student absences in any of the four columns within a testing subject provide a consistent and clear interpretation. That is, the inclusion of school fixed effects and a lagged measure of achievement does not alter the fundamental premise of this study: that the relationship between absences and achievement is negative and significant. What the inclusion of these more robust models does do, however, is more accurately quantify the extent to which school absences may harm schooling success.
INSTRUMENTAL VARIABLES ANALYSIS
Although incorporating school fixed effects and a lagged measure of achievement accounts for time-invariant, unobservable factors that may be simultaneously influencing absences and achievement, it does not control for possible omitted concurrent influences. To rectify this estimation issue, this study employs an instrumental variables strategy, as described previously. This approach is implemented twice: once for a model including school fixed effects (i.e., equation (2)) and once for a model also including a lagged measure of achievement (i.e., equation (3)).
Table 5 presents the first-stage results, as described in equation (4). As explained, the dependent variable in this first stage of the instrumental variables approach is the number of days absent. The independent variables include the instrument (i.e., the number of nurses at the school during that academic year) in addition to all other observable covariates. Model 3 in Table 5 is in the same format as model 3 in Table 4: The regressions include school fixed effects and Huber-White/sandwich robust standard errors adjusted for clustering at the classroom level. Model 4 in Table 5 is in the same format as model 4 in Table 4: This regression also includes a lagged measure of achievement. For the sake of presentation clarity, Table 5 provides only the estimates for the instrument and for student characteristics.
The key findings in Table 5 are found in the top row. Here, the negative and statistically significant coefficient values on the number of school nurses imply that students in schools with nurses tend to have fewer days absent. Alternatively, having a greater number of school nurses is related to a decrease in missed school.
Within each testing subject, these findings are consistent between models. Thus, controlling for student, neighborhood, teacher, classroom, and school characteristics as well as for unobserved factors via school fixed effects and lagged achievement, there remains evidence in Table 5 indicating that absences tend to decrease with an increase in the presence of school nurses. Turning to the student characteristics in the table, students with higher prior testing performance tend to have fewer absences in the current year. Other demographic characteristics are significantly related to absences; notably, Asian students have many fewer absences. Finally, students with behavioral issues tend to have a greater number of absences as compared to students who do not have behavioral issues.
Table 6 presents the results from the second stage of the instrumental variables analysis, in which the outcome is standardized testing achievement. Note that Table 6 compares the coefficients on absences from the instrumental variables regressions (i.e., from the second stage of the analysis) to the estimates of absences presented in Table 4.
Although all estimates of the effect of absences are statistically significant, the results from the instrumental variables regressions indicate that the negative effect of absences on student achievement is much larger in magnitude than is suggested by any of the prior models. Again, the pattern in the reduction of coefficient magnitude is analogous to the pattern between models 3 and 4, in which the inclusion of a lagged achievement value in model 4 tempers the size of the estimate in model 3. Hence, there is consistent evidence that the lag accounts for a significant portion of the historical, unobserved measures of current achievement. Additionally, although not presented in the table, the R2 values in model 4 are again larger than those in model 3 in the instrumental variables approach. Nonetheless, the negative effect of absences is revealed to be much more marked once an instrumental variables approach is taken into account.
In addition to the coefficient estimates, the effect sizes (again defined as standardized beta coefficients) are also much larger than those in Table 4: -0.82σ and -0.38σ in reading and -1.40σ and -0.74σ in math. This implies that a one standard deviation increase in the days a student is absent from school is associated with a statistically significant 0.38σ to 0.82σ decrease in reading performance and a 0.74σ to 1.40σ decrease in math performance. Hence, the results from the instrumental variables analysis depict an underestimation of the effect of absences on achievement when current omitted factors are not taken into account.
The findings of the instrumental variables strategy indicate that the number of nurses at school provided an exogenous indication of absences in this analysis, one that has rid the analysis of the potentially confounding influences of unobserved variables affecting both absences and achievement. For example, in the regressions from Table 4, an unobserved, highly academically involved family environment in the current school year could be simultaneously affecting the measure of absences (a more involved family may ensure that a child attends school more in year t) and achievement (a more involved family may help a child study for school exams in year t). As such, from the regressions reported in Table 4, it may not be possible to isolate the effect of absences on achievement, because concurrent factors, like parental involvement, can be related to both independent and dependent variables. In the instrumental variables regressions, however, these unobserved, confounding effects associated with both absences and achievement are excluded. Thus, when the number of school nurses is used as an instrument in the regression, it minimizes the effects of unobserved, concurrent influences that simultaneously affect both dependent and independent variables. Hence, the results are more robust and point toward causality (Schneider et al., 2007).
The findings of this study have brought forth new evidence that there is a quantifiable, detrimental, and plausibly causal effect of absences on school success. The relationship was evaluated based on a sample of entire elementary school cohorts as they progressed through three academic years in a large urban school district. The process of finding unbiased estimates of the effect of absences on achievement entailed three distinct approaches. The first analyzed a baseline model of student achievement, in which standardized reading and math outcomes were separately modeled on the basis of observable individual, neighborhood, teacher, classroom, and school measures (i.e., models 1 and 2 under each testing subject in Table 4). The coefficients on the number of days absent indicated consistent findings: Indeed, this provided initial evidence that significant, negative relationships existed between individual absences and student-level achievement.
A second approach built directly on the baseline model to account for unobserved influences on both absences and achievement. First, school fixed effects were employed to control for unobservable, time-invariant, school-level characteristics of the educational experience (model 3 in Table 4). Second, this study then introduced a lagged measure of achievement as a predictor of current achievement (model 4 in Table 4). Through the process by which the lagged measure was incorporated into the model, it was assumed that this covariate accounted for measured and unobservable historical factors that were associated with current achievement. The coefficients on absences, though generally consistent with those of the baseline models, were reduced in magnitude with the introduction of these two modifications to the baseline approach. The explained portion of the total variance significantly improved, however, particularly with the latter approach. This indicated that prior achievement was responsible for a large portion of the variation in student outcomes.
The inclusion of school fixed effects and a lagged achievement measure may have controlled for time-invariant, unobservable factors. However, these methods did not account for omitted concurrent influences. As such, this study proposed that the models thus far may not be able to separate the relationship of absences and achievement from unobservable, concurrent factors that might be influencing the estimates of both independent and dependent variables. Thus, this study employed a third approach.
In this final method, an instrumental variables strategy was implemented in two stages. In the first stage, the number of school nurses in the current academic year was utilized as an instrument and days absent was used as the dependent variable. The number of nurses in a school was assumed to be free from the effects of concurrent unobserved influences that otherwise might influence both absences and achievement. The fitted values from this regression were then utilized in a second stage in which a more robust estimate of the effect of absences on achievement was determined. The analyses, conducted on both school fixed effects and value-added models, suggest plausible causality, as defined in instrumental variables approaches (see Schneider et al., 2007). Indeed, the results suggest a significant underestimation of the detrimental effect of absences on achievement. The negative effect sizes from the instrumental variables approach are much more pronounced than those produced by the baseline, fixed effects, or value-added models individually.
What truly stands out from these analyses, then, is that consistently, across all three methods employed in this study, the results indicated a detrimental negative effect of absences on standardized achievement. This effect remained significant even after controlling for student, residential neighborhood, teacher, classroom, and school characteristics and after incorporating increasingly rigorous empirical approaches. The results also suggest that there were differential outcomes based on reading and math achievement. In general, the results indicated that the negative effects of having a higher number of absences were larger in math. This finding is relevant to this study's sample, as urban schools are falling drastically behind in math (Balfanz & Byrnes, 2006).
Prior research has shown that greater numbers of absences negatively correlate with school success (Dryfoos, 1990; Finn, 1993; Gottfried, 2009; Lehr et al., 2004; Stouthamer-Loeber & Loeber, 1988). The findings of this study are unequivocally supportive of these previous results. Notably, however, this study also demonstrates that after accounting for unobserved heterogeneity in a multitude of capacities, the negative relationship between absences and achievement is even more detrimental than much of previous research had found. Thus, given that the findings of this study support the premise that absences have a plausible, causal negative effect on achievement, unbiased, empirical estimates of absences can lead to more accurate descriptors of what contributes to academic risk and decline. With this more realistic prediction of the negative ramifications of missing school, it becomes possible to develop data-driven educational policy.
The first policy implication of this study is one that directly links educational scholarship to practice: The utilization of robust estimation strategies has provided evidence that the relationship between absences and achievement is more than simply a correlational one. Rather, through the use of more stringent approaches that build upon an initial baseline model of achievement, the analytical process in this study more finely documents the extent to which unobserved factors can bias the estimation of absenceshence, in the end, pointing in the direction of causality. Utilizing these approaches has revealed a verifiable and quantifiable test score decline for those students who miss more days of school. Thus, school leaders can rely on these more robust findings by drawing directly from research and directly quantifying test score decline as a result of a greater number of absences. In linking absence behavior with an expected deterioration of test performance, school practitioners can more efficiently underscore the urgency of curtailing school absence behavior and hence promote educational practices that would directly mitigate the negative educational outcomes arising from missing school.
The second policy implication is grounded in the fact that this study focused on a sample of elementary youth. This analysis demonstrates not only that absences are detrimental in terms of standardized testing performance, but also that this deleterious effect is persistent for students in their formative years of education. With more robust estimates of absences, both researchers and practitioners can more efficiently identify young students exhibiting patterns that are directly linked to greater educational failure. Hence, schools can use this information to instill practices and inhibit absence behavior (and its consequences) early in education, rather than delaying until middle or high school levels. Moreover, because the analyses in this study were based on a sample of students in urban schools, this studys examination of elementary youth has additional implications. Particularly for urban students, absences in early years of schooling have been found to be related to an even higher dropout rate, fewer employment opportunities, and a heightened chance of engaging in illegal activities (Alexander et al., 1997; Broadhurst et al., 2005; Kane, 2006). Hence, policies and practices in urban districts must underline the importance of attending school as early as elementary school in order to support current school success and improve the probability of better socioeconomic and academic outcomes later in life.
A third policy implication relates to the fact that this study documents a negative relationship between the number of school nurses and days absent (and subsequently between days absent and achievement, using instrumental variables). This finding corroborates prior studies that examined the effects of nursing personnel and student health in schools. For instance, Allen (2003) found that elementary schools with nurses had fewer absent children. Similarly, Guttu, Engelke, and Swanson (2004) determined that the presence of a school nurse increased medical screenings and follow-ups for students with health issues and further decreased the spread of sickness within schools. Consistent with these previous research findings, this current study supports these mechanisms as to why nurses are valuable school personnel and hence underscores the importance of the presence of nurses at school. If student absences cause a decline in testing performance, as this study upholds, then the role of school nurses appears to be quite critical, as the presence of nurses diminishes absences. In allocating resources available to hire non-instructional personnel, school administrators must consider the importance of the presence of nurses even if their school is not necessarily required to hire such a staff member at their site. This study supports the view that having school nursing personnel may lead to a chain of positive educational effects, beginning with declined student absences and leading to improved achievement outcomes.
A final implication is grounded in the fact that this study focused on a large, urban school district. Hence, the findings are germane to the improvement of high-needs schools. To make AYP each year under NCLB, schools must conform to state-established targets based on the number of students in each tested grade and subgroup who score at or above proficiency (as defined by the state) in both reading and mathematics. For elementary schools, states' AYP formulae oftentimes also include absence rates, though looking aggregately at the 50 states shows that incorporating this measure is optional (U.S. Department of Education, 2002). However, this study has highlighted the detrimental effect that absences have on school success, beginning at an early age. Therefore, policymakers may consider reexamining the merits of having absence measures be optional as opposed to mandatory in calculating school improvement, given that this study has found consistent evidence that missing school directly inhibits educational success. Perhaps by upholding a compulsory focus on absence metrics in the calculation of AYP, states would incentivize practitioners to solidify school absence policies and remediate practices, hence improving student achievement and long-term student and school success.
This study has advanced the research pertaining to quantifying the deleterious effects of student absences. Even after accounting for a range of observable student, neighborhood, teacher, classroom, and school factors and for unobserved heterogeneity, a negative relationship between absences and achievement persisted. Hence, the findings of this study support the view that missing school has negative effects on academic performance and that these effects take hold on students who are in their foundational years of schooling. By exploring these issues across multiple empirical approaches and multiple educational outcomes, this study contributes new insight into which factors in elementary school contribute to concurrent academic decline and potential future risk of failure. Moreover, by utilizing urban district data, this study provides a platform to discuss these issues for high-needs youth in high-needs schools.
There are several ways in which this study can be used to further the research on missing school: Additional educational research in this area can help guide changes in policy. First, this study focuses on academic outcomes, namely reading and math standardized achievement. Because there is correlational evidence that absences are related to non-testing outcomes such as disengagement (e.g., Bealing, 1990; Harte, 1996; Reid, 1983; Southworth, 1992), future research may entail evaluating whether a potentially causal relationship exists between absences and these additional non-achievement measures. As such, future research might evaluate a large-scale dataset that incorporates both academic and non-academic outcomes. Researchers could thus identify which outcomes are most strongly affected by missing school, and practitioners could more efficiently utilize those findings to adopt and develop policies and programs to target specific outcomes most influenced by absences.
Second, the findings of this study pertain to elementary school students. Future research may incorporate analysis at the middle or high school levels in two capacities. First, individual studies may be conducted separately on middle school or high school samples. This would provide researchers and practitioners with an estimated effect of absences on students at different schooling levels. In such a way, it would be possible for researchers to quantify whether the negative effect of absences is tempered or worsens as students age. These findings would allow practitioners to develop policies and practices to mitigate these negative effects when they are the strongest. Additionally, future research may entail examining a longitudinal dataset that contains elementary, middle, and high school observations in the same study, hence observing student absences and outcomes over a longer period of time. In this way, it would be possible to assess the early effects of missing school as they play out in future years of schooling.
Finally, while the data in this study were non-selective and comprehensive of entire cohorts of students, it still remains possible that alternative results and interpretations might surface in the assessment of other districts or of national-level data. Thus, the findings in this study could be compared to the results that would be generated by applying these methods to other districts or national datasets. Additional applications of the approaches in this study will further the generalizability of the findings from this study, thereby continuing to quantify the negative influence that absences have on student success across multiple districts and for the nation as a whole.
1. Gottfried (2009) found that after holding the rate of each type of absence constant, an increase in missing school is negatively associated with achievement. Because an absolute value increase in the number of days absent was found to be a detriment to achievement for all students, this study focuses entirely on a comprehensive measure of absences as a means to focus on the academic risk of missing in-school time, regardless of the characterization of the absence.
2. Though student observations were dropped from the analysis because of missing test score data, the correlation between missing data and test scores is extremely small. The correlation between missing test score data and number of days absent is .05. Hence, no systematic relationship appears to exist between students with missing test scores and absence rates.
3. The budget quartiles are grouped as follows: 25% and below have budgets less than $1,805,684; 26-50% have budgets between $1,805,685 and $2,182,919; 51-75% have budgets between $2,182,920 and $2,547,719; and 76% and higher have budgets greater than or equal to $2,547,720.
Alexander, K. L., Entwisle, D. R., & Horsey, C. S. (1997). From first grade forward: Early foundations of high school dropout. Sociology of Education, 70, 87-107.
Allen, G. (2003). The impact of elementary school nurses on student attendance. The Journal of School Nursing, 19, 225-231.
Balfanz, R., & Byrnes, V. (2006). Closing the mathematics achievement gap in high poverty middle schools: Enablers and constraints. Journal of Education for Students Placed at Risk, 11, 143-159.
Balfanz, R., & Legters, N. (2004). Locating the dropout crisis. Baltimore, MD: Center for Research on the Education of Students Placed at Risk, Center for Social Organization of Schools, Johns Hopkins University.
Broadhurst, K., Paton, H., & May-Chahal, C. (2005). Children missing from school systems: Exploring divergent patterns of disengagement in the narrative accounts of parents, carers, children, and young people. British Journal of Sociology of Education, 26, 105-119.
Chen, C., & Stevenson, H. W. (1995). Motivation and mathematics achievement: A comparative study of Asian-American, Caucasian, and East Asian high school students. Child Development, 66, 1215-1234.
Connell, J. P., Spencer, M. B., & Aber, J. L. (1994). Educational risk and resilience in African-American youth: Context, self, action, and outcomes in school. Child Development, 65, 493-506.
Dryfoos, J. (1990). Adolescents at risk: Prevalence and prevention. New York, NY: Oxford University Press.
Ekstrom, R. B., Goertz, M. E., Pollak, J. M., & Rock, D. A. (1986). Who drops out of high school and why? Findings from a national study. Teachers College Record, 87, 356-373.
Fine, M. (1994). Chartering urban school reform. In M. Fine (Ed.), Chartering urban school reform (pp. 5-30). New York, NY: Teachers College Press.
Finn, J. D. (1989). Withdrawing from school. Review of Educational Research, 59, 117-142.
Finn, J. D. (1993). School engagement and students at risk. Washington, DC: National Center for Education Statistics.
Gottfried, M. A. (2009). Excused versus unexcused: How student absences in elementary school affect academic achievement. Educational Evaluation and Policy Analysis, 31, 392-419.
Gottfried, M. A. (2011a). The detrimental effects of missing school: Evidence from urban siblings. American Journal of Education, 117, 147-182.
Gottfried, M. A. (2011b). Absent peers in elementary years: The negative classroom effects of unexcused absences on standardized testing outcomes. Teachers College Record, 113.
Greene, W. H. (2000). Econometric Analysis (4th ed.). New York, NY: Macmillan Publishing Company.
Guttu, M., Engelke, M., & Swanson, M. (2004). Does the school nurse-to-student ratio make a difference? Journal of School Health, 74, 6-9.
Hallfors, D., Vevea, J. L., Iritani, B., Cho, H., Khatapoush, S., & Saxe, L. (2002). Truancy, grade point average, and sexual activity: A meta-analysis of risk indicators for youth substance use. Journal of School Health, 72, 205-211.
Hanushek, E. A., Kain, J. F., Markman, J. M., & Rivkin, S. G. (2003). Does peer ability affect student achievement? Journal of Applied Econometrics, 18, 527-544.
Kane, J. (2006). School exclusions and masculine, working-class identities. Gender and Education, 18, 673-685.
Lazear, E. (2001). Educational production. The Quarterly Journal of Economics, 116, 777-803.
Lehr, C. A., Hansen, A., Sinclair, M. F., & Christenson, S. L. (2003). Moving beyond dropout prevention to school completion: An integrative review of data based interventions. School Psychology Review, 32, 342-364.
Neild, R. C., & Balfanz, R. (2006). An extreme degree of difficulty: The educational demographics of urban neighborhood schools. Journal of Education for Students Placed at Risk, 11, 123-141.
Newmann, F. (1981). Reducing student alienation in high schools: Implications of theory. Harvard Educational Review, 51, 546-564.
Nichols, J. D. (2003). Prediction indicators for students failing the state of Indiana high school graduation exam. Preventing School Failure, 47, 112-120.
Orfield, G., & Kornhaber, M. L. (Eds.). (2001). Raising standards or raising barriers? Inequality and high-stakes testing in public education. New York, NY: Century Foundation Press.
Paige, R. (2002). Dear colleague letter to education officials regarding implementation of No Child Left Behind. Retrieved from http://www2.ed.gov/policy/elsec/guid/secletter/020724.html
Ready, D. D. (2010). Socioeconomic disadvantage, school attendance, and early cognitive development: The differential effects of school exposure. Sociology of Education, 83(4), 271-286.
Rumberger, R. W. (1995). Dropping out of middle school: A multilevel analysis of students and schools. American Educational Research Journal, 32, 583-625.
Rumberger, R. W., & Thomas, S. L. (2000). The distribution of dropout and turnover rates among urban and suburban high schools. Sociology of Education, 73, 39-67.
Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W. H., & Shavelson, R. J. (2007). Estimating causal effects using experimental and observational designs. Washington, DC: American Educational Research Association.
Stouthamer-Loeber, M., & Loeber, R. (1988). The use of prediction data in understanding delinquency. Behavioral Sciences and the Law, 6, 333-354.
Summers, A. A., & Wolfe, B. L. (1977). Do schools make a difference? American Economic Review, 67, 639-652.
Wang, X., Blomberg, T. G., & Li, S. D. (2005). Comparison of the educational deficiencies of delinquent and nondelinquent students. Evaluation Review, 29, 291-312.
Weismuller, P. C., Grasska, M. A., Alexander, M., White, C. G., & Kramer, P. (2007). Elementary school nurse interventions: Attendance and health outcomes. The Journal of School Nursing, 23, 111-118.
Zimmer, R. W., & Toma, E. F. (2000). Peer effects in private and public schools across countries. Journal of Policy Analysis and Management, 19, 75-92.