
The Effects of Student Demographics and School Resources on California School Performance Gain: A Fixed Effects Panel Model


by Mei-Jiun Wu, 2013

Background/Context: With the implementation of California's Public Schools Accountability Act of 1999 and the NCLB Act of 2001 has come an increase in the number of education production function studies estimating the relationship between educational inputs and APIs. While the majority of past research on California school performance focuses on the impacts of different demographic measures and school resources on API scores at the interschool level, little has been done to study the effects of changes in similar factors on performance gain at the intraschool level. Given that school performance is measured against a school's own past record over time under California's current accountability system, there is a great need to understand how school performance gain is affected by changes in student demographics and school characteristics within the school.

Objective of Study: The primary objective of this study is to investigate how APIs change with student demographics and school resources within individual schools. It is hypothesized that changes in factors contributing to interschool variations in API may also affect school API gains. The impacts of these variables on the API gains of individual schools are then compared with results from prior cross-sectional studies to determine whether their effects on school performance differ between and within schools.

Research Design: Using fixed effects regression, this study estimates a hypothesized causal relationship between API gains and changes in nine student demographic variables (the seven racial/ethnic subgroups plus the free and reduced price meal and English language learner subgroups) and seven school resource variables.

Findings: School API gains appeared very sensitive to changes in all 16 variables. A 1% change in student demographics at the school level was significant enough to change the API by an average of -5.0077 to 1.2372 points, while a 1-unit change in school resources was found to affect the school API in the range of -0.0212 to 2.5013 points.

Conclusions: While California places great responsibility on individual schools for student growth, little policy consideration is given to the likely effects of demographic and resource changes on school performance within the school. Moreover, this study's confirmation of the positive impact of teachers' advanced degrees and full teaching credentials on performance gains suggests that teacher qualifications may hold the key to improving student achievement.

Production functions have long been used to estimate the relationship between inputs of capital, labor, and other factors and outputs of goods and services. In education, the majority of production function research has been dedicated to exploring relevant educational and non-educational determinants of student achievement. Ever since the 1966 Coleman Report linked school resources and student backgrounds to educational outcomes (Coleman et al., 1966), production function analysis has attracted growing interest among researchers and policymakers. Production function research holds considerable promise for optimizing educational outputs because it attempts to capture the most influential attributes that affect academic performance. With the implementation of California's Public Schools Accountability Act of 1999 and the No Child Left Behind (NCLB) Act of 2001 has come an increase in the number of education production function studies estimating the relationship between educational inputs and public schools' Academic Performance Index (API) scores in California. From student demographics to socioeconomic status to class size to teacher qualifications, factors both under and beyond the control of schools have been closely scrutinized for their impacts on APIs (Catron & Wassmer, 2006; Driscoll, Halcoussis, & Svorny, 2003; Goe, 2002; Powers, 2003; Powers, 2004a; Powers, 2004b; Trujillo, 2007). However, the majority of past research on California school performance has focused predominantly on the impacts of different demographic measures and school resources on API scores at the interschool level. Given that a school's performance is compared with its own API over time rather than with that of other schools under California's accountability system, the need to understand how school performance is affected by changes in student and school characteristics should not be overlooked.


Mandated by the NCLB Act, all states are required to have statewide systems of accountability to assure that all public schools and districts make adequate yearly progress (AYP) and all students are brought to grade-level proficiency in English language arts (ELA) and mathematics by 2014. Under the act, each state has the flexibility to design its own accountability system, path to proficiency, subgroup rules, assessment criteria, and AYP targets, as well as its rewards and sanctions plan. To work toward the NCLB's 100% proficiency goals in ELA and mathematics, many states have adopted one of two primary accountability approaches to monitor school progress: status and growth models. The two models differ in how school performance is measured under the accountability system. In the first model, a status index is used to measure all schools against a common target; in the second, schools are measured against individual targets using a change index. In line with the NCLB's requirement of a common AYP target for all, states like Texas and North Carolina initially evaluated schools and districts simply on the proportion of students passing the minimum proficiency scores required by each state every year (Gong, Blank, & Manise, 2002). Because cross-sectional samples are used to provide an annual snapshot of how proficient a school's students are, this type of accountability system is referred to as a status model (Hanushek & Raymond, 2002, p. 6; Illinois State Board of Education, 2007, p. 6; Kane & Staiger, 2003, p. 153). Under such a model, the determination of school performance is made at one point in time on successive groups of students, and whether a school's students are meeting the AYP proficiency targets every year is the primary concern of the accountability system.


Instead of using the proficiency status of students at a single point in time to evaluate school performance, many states are more concerned with school progress over time. This type of longitudinal accountability scheme, commonly termed the growth model, gives credit to public schools and districts that demonstrate academic improvement, even if they fall short of the state's proficiency line. In an attempt to increase the accuracy and equity of accountability decisions, states incorporating growth models into their accountability systems acknowledge that not every school starts at the same point, so schools should not all be measured against a common target. Each school should instead be measured against individual goals to encourage performance growth under the accountability system. Because the growth model emphasizes the academic growth of students and the educational improvement of schools, it continues to gain momentum and is being incorporated into a growing number of accountability systems across the nation. California, like many other states, including Kentucky, Louisiana, Massachusetts, Vermont, and Oregon, uses the growth model to measure the academic success of schools on the basis of how much they improve (Gong et al., 2002). The purpose of California's adopting such a model is to assure that each school is evaluated against individual targets and that the improvement efforts of schools, especially those at a competitive disadvantage, are not left unrecognized (California Department of Education, 2004, p. 2; Doran & Izumi, 2004, p. 2).


Aiming to improve public school performance, California's educational accountability system was initially instituted in 1999 under the Public Schools Accountability Act (PSAA). The system is designed to hold each public school in California accountable for the academic performance and progress of its students. Because its accountability system was established prior to the NCLB Act, California has been using the state-developed Academic Performance Index (API) as the key measurement tool of school performance since 1999. The Base API, more commonly referred to simply as the API, is a numeric index designed to describe how well pupils in Grades 2 through 11 perform on standards tests in aggregate. It is calculated annually for each public school on a scale of 200 to 1,000 (California Department of Education, 2004). APIs are determined from the results of a series of state tests comprising the California Standards Tests (CSTs), the California Modified Assessment (CMA), the California Alternate Performance Assessment (CAPA), and the California High School Exit Examination (CAHSEE). Unlike the federal emphasis on English language arts (ELA) and mathematics proficiency, the state accountability system requires performance in additional CST content areas to be taken into consideration in API calculations. Results from the CST in science must also be included in the API calculation for Grades 3 to 5, and outcomes from both the science and social science CSTs must be factored into the calculation of APIs for Grades 7 through 11. As a result of these variations in testing requirements, test weights as well as content area weights are fixed annually by the state for the separate calculation of APIs across three grade span sections: grades 2 through 6, 7 and 8, and 9 through 11. A school's API is then computed as the average of the APIs for its grade span sections, weighted by the sum of the products of the test weights and the number of valid scores across content areas for that school.
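One way to express this weighting scheme in formula form (a reading of the description above, not the state's published formula) is

$$API_{school} = \frac{\sum_{s} w_s \, API_s}{\sum_{s} w_s}, \qquad w_s = \sum_{c} (\text{test weight})_c \times (\text{valid scores})_{c,s},$$

where $s$ indexes the three grade span sections and $c$ indexes the tests and content areas within each section.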


In order for all schools to reach the state API goal of 800 or higher, California monitors schools on a 2-year cycle and evaluates the success of school progress against growth targets. A school's base API score from the first year is used to project its API growth target for the second year, which is estimated as 5% of the distance between the base API and the statewide performance target of 800, with a minimum of 1-point growth (Izumi & Cox, 2003, p. 14). For a school to meet the state API growth target requirement, the base API achieved in the second year must exceed the sum of the first year's base API and the growth target projected for the second year. The larger the distance between a school's base API score and the state goal of 800, the bigger the growth target. The growth targets should, however, progressively shrink if a school continues to improve. The more progress a school makes across the years, the easier it becomes for the school to meet its subsequent individual growth targets. In other words, the state affords opportunities for schools, especially underperforming ones, to set their own paces toward the 800 threshold as long as the minimum growth requirements are met annually. To some extent, schools are allowed to take the driver's seat when it comes to achieving their individual growth targets and setting their own improvement paces under California's growth accountability model. Neither individual growth targets nor self-set improvement paces apply toward meeting the federal AYP requirements under NCLB, since the proficiency bars and improvement timetables set by the state are universal across schools with the same grade span. Because the API remains the keystone of California's accountability model and school-level performance gain is a central element of the state's evaluation of school success, understanding what affects API gains at the individual school level becomes increasingly important. Schools that successfully meet their annual API growth targets may be eligible for awards, but schools that repeatedly fail to meet their individual growth targets may face serious consequences ranging from a state takeover to the wholesale replacement of staff (California Department of Education, 2004). As performance outcomes have substantial consequences for students, teachers, parents, schools, districts, and communities, a wide range of research efforts has been made toward understanding what factors affect school performance.
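Expressed as a formula (a restatement of the rule just described, with the 1-point floor reflecting the minimum growth requirement):

$$\text{Growth Target}_{t+1} = \max\{1,\ 0.05 \times (800 - \text{Base API}_t)\}$$

For example, a school with a base API of 600 must gain $0.05 \times (800 - 600) = 10$ points, while a school at 780 needs only $0.05 \times 20 = 1$ point.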



PRODUCTION FUNCTION ANALYSIS OF CALIFORNIA SCHOOL PERFORMANCE


Following in the footsteps of Coleman and others, the production function remains a popular approach to the causal analysis of factors affecting the academic performance of schools. The production function, developed by economists to describe the level of output that can be generated with a given combination of inputs, has been applied in education to estimate the relationship between various educational inputs and school performance outcomes. Many researchers have opted for this approach because production functions afford the advantage of isolating the effects of each parameter while controlling for a multitude of factors that affect outcomes (Catron & Wassmer, 2006, p. 6). Simply put, in an organization as complex as the school, an educational production function conveniently provides a relative efficiency measure of performance for every single input. Echoing the Coleman Report, most production function analyses of California schools have been devoted to assessing the impacts of various student characteristics and school resources on API scores. Because Coleman and others demonstrated that students' ability to achieve may be hindered by parameters such as ethnic background, socioeconomic status, parental education level, and English fluency (Coleman et al., 1966), the causal relationship between these student characteristics and APIs has come under close scrutiny.


When assessing whether student characteristics have an effect on a school's APIs, three variables have been under frequent review in the production function literature: the racial/ethnic composition of schools, the percentage of students enrolled in free and reduced price meal programs, and the proportion of English language learners (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004a; Powers, 2004b; Trujillo, 2007). In addition to their relevance to the Coleman Report, these variables were most likely selected because of California's subgroup rules. Besides meeting their school-wide API growth targets, California's PSAA also requires schools to “demonstrate comparable improvement in academic achievement by all numerically significant ethnic and socioeconomically disadvantaged subgroups within schools” (California Legislative Information, n.d.). By definition, when an ethnic or socioeconomically disadvantaged subgroup accounts for at least 15% of a school's total pupil population and has 30 or more students with valid test scores, or when it consists of a minimum of 100 students with valid test scores, the subgroup is considered numerically significant (Goertz, Duffy, & Le Floch, 2001, p. CA-5). There were initially seven ethnic subgroups (namely African American or Black, American Indian or Alaska Native, Asian, Filipino, Hispanic or Latino, Pacific Islander, and White) and two socioeconomically disadvantaged subgroups (students who receive free and reduced price meals, and students with neither parent having graduated from high school) that schools had to take into consideration in the evaluation of subgroup performance (California Department of Education, 1999). Two additional subgroups, English language learners and students with disabilities, were added to the definitions of subgroups used in the API in the 2005-2006 school year. Because data on parental education and students with disabilities are far from complete, the majority of production function studies on the API have instead focused on the effects of the remaining subgroups.
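As a minimal sketch of this numerical-significance rule in Stata (the variable names subgroup_share and valid_scores are hypothetical, not drawn from the state's data files):

    * Flag a numerically significant subgroup under the PSAA rule:
    * at least 15% of enrollment with 30+ valid test scores, or
    * at least 100 valid test scores.
    gen byte numsig = (subgroup_share >= 0.15 & valid_scores >= 30) | ///
        (valid_scores >= 100)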


Similar to the 1966 Coleman Report's findings on student characteristics and academic achievement, a few production studies showed a crucial interplay between the racial/ethnic composition and API scores of schools. Despite their sampling differences, three studies commonly suggested statistically significant positive correlations between API scores and the proportions of White and Asian students (Catron & Wassmer, 2006; Goe, 2002; Trujillo, 2007). No statistically significant difference in API was noted between schools with large and small American Indian student populations. However, the effects of the other three ethnic subgroups, Hispanic, Black, and Filipino, were less consistent among previous investigations. The associations between API scores and the proportions of Hispanic and Black students were found to be negative by Goe (2002) and positive by Trujillo (2007), but insignificant by Catron and Wassmer (2006). Opposite trends were also reported by Trujillo (2007) and Catron and Wassmer (2006) when the causal relationship between the share of Filipino students and API scores was tested: the former found it positive while the latter found it negative. Interestingly, all three studies were in strong agreement on the statistically significant negative impacts on APIs of English language learners and students receiving free and reduced price meals at school. Several other studies also confirmed that schools' API scores were likely to decrease when the proportions of English language learners and free and reduced price meal recipients at school increased (Driscoll et al., 2003; Powers, 2003; Powers, 2004a; Powers, 2004b).


School resource patterns and their impacts on school performance, by contrast, appeared more consistent across production function studies. Contemporary investigations focus on input variables that are controllable at the institutional level, such as school size, class size, availability of instructional technology, and teachers' qualifications, instead of those used by Coleman and his colleagues, such as physical facilities and the length of the school year or day. There was a general consensus among previous production researchers that school performance was significantly and positively correlated with teachers' education level and teaching experience (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004b). In other words, the higher the average years of teacher experience or the greater the proportion of teachers with master's or higher degrees, the better the school performance. Notwithstanding their sampling differences, these studies likewise demonstrated statistically significant negative API reactions to average class size and the proportion of first-year teachers at school. However, past research results seemed to disagree on the effects of school size on school API scores. According to Driscoll et al. (2003) and Goe (2002), there was a statistically significant negative association between API scores and enrollment size. Catron and Wassmer (2006), on the other hand, found that APIs were unrelated not only to school size but also to the proportion of fully credentialed teachers and the number of students per computer. Altogether, these studies seemed to imply that, compared with school size, teacher certification, or instructional technology, teachers' experience and education level, along with class size, had greater impacts on schools' API scores.


ADVANTAGES OF PANEL REGRESSION MODELS


Even though the types of contextual characteristics and school resources explored in the education production function literature continue to diversify, a consensus over the most dominant determining factor has yet to be reached. The reason it is relatively difficult to arrive at a consensus in education production studies may lie in either the estimation method or the data type. Researchers have often adopted the ordinary least squares (OLS) approach to measure the causal effect of input parameters on API scores. By estimating the sensitivity of API scores to a matrix of independent variables, such as student contexts and school inputs, OLS regression models allow the impacts of different explanatory factors to be analyzed and compared among schools. However, as summarized by Levačić and Vignoles, estimation biases can arise primarily in three ways when using OLS regressions: (1) the unaccounted-for effects resulting from the omission of variables, so-called unobserved heterogeneity; (2) the correlation of an equation's error term with one or more independent variables; and (3) simultaneous causality between independent and dependent variables (2002, pp. 318-319). Therefore, any dissimilarity in the selection of variables or in the specification of the estimation equation could result in different causal relationships among variables. Moreover, the reliance on single-year, cross-sectional data presents another problem for most education production studies. As the racial/ethnic composition and socioeconomic status of students, as well as the level of school resources, are subject to change over time, it is unclear whether a one-time snapshot of these variables is sufficient to accurately portray their causal relationships with school performance (Odden & Picus, 2000, p. 291). This may help explain, at least in part, why an estimator found relevant to the API in one production study is sometimes found otherwise in another.
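The first of these biases can be made concrete in the simple one-regressor case (a standard textbook result, not a formula from the studies cited). If the true model is $y_i = \beta x_i + \gamma a_i + \varepsilon_i$ but the school attribute $a_i$ is omitted, the OLS estimate converges to

$$\hat{\beta}_{OLS} \rightarrow \beta + \gamma \, \frac{\mathrm{Cov}(x_i, a_i)}{\mathrm{Var}(x_i)},$$

so any unobserved school attribute that is correlated with an included input contaminates that input's estimated effect.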


While the problem associated with single-year data can easily be avoided if longitudinal data are used (Odden & Picus, 2000, p. 291), one of the most popular forms of longitudinal data analysis, panel data analysis, also offers a feasible workaround to reduce the estimation biases of OLS (Wooldridge, 2002). Panel data analysis, also known as cross-sectional time series analysis, deals with data collected from the same individuals over time. In addition to permitting the study of the dynamics of individual change over time, panel analysis offers unique advantages in controlling for omitted variables that may bias observed relationships (Yaffee, 2003). There are two principal types of panel analysis that researchers use to control for the effects of unobserved variables: the fixed and random effects panel models. The fixed effects model, also termed the “within” estimator, examines within-individual variation in time-varying attributes and measures how an outcome variable changes as each explanatory variable changes within each individual unit. In a fixed effects model, the unobserved heterogeneity is assumed individual-specific and time-constant, even if correlated with the explanatory variables, so this heterogeneity can be eliminated from the data by differencing within each cross-sectional unit between time periods. Unlike the fixed effects model, in which time-invariant attributes cannot be included, the random effects model is appropriate when time-constant explanatory variables are of research interest. By combining both the within and between dimensions of the data, the random effects model allows both group and individual differences over time to be analyzed (Van Cleave, n.d.). As described by Kennedy (2003), it is actually “a matrix-weighted average of the fixed effects estimator and the ‘between’ estimator” (p. 316). Estimation bias can be reduced in a random effects model because individual differences are considered random and the unobserved unit effect is assumed to be uncorrelated with the explanatory variables. The random effects model is therefore applicable when the cross-sectional units are randomly selected from a large population (Miller & Yang, 2007, p. 588).
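In notation (a schematic restatement of the distinction above, using the same symbols as Equation 1 in the Method section below), both estimators start from

$$y_{it} = \beta x_{it} + a_i + u_{it},$$

where $a_i$ is the unobserved unit effect. The fixed effects estimator leaves $\mathrm{Cov}(a_i, x_{it})$ unrestricted and removes $a_i$ by within-unit demeaning, whereas the random effects estimator requires $\mathrm{Cov}(a_i, x_{it}) = 0$ and treats $a_i$ as a random draw from a population.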


As the accountability system of California places great emphasis on each school's annual growth in API, understanding how the performance of schools changes with student demographics and resource levels is of great importance. Since the fixed effects approach estimates the “within” effect and the random effects model measures both “within” and “between” effects, both models have an advantage over OLS regression models for understanding within-school effects. However, after a comparison of the underlying assumptions of these two panel approaches, the fixed effects panel model was deemed the more appropriate of the two for this study, for several reasons. First, the primary objective of this study was to investigate only the within effects of student demographics and school resources on API scores, so the fixed effects approach, the “within” estimator, sufficiently serves this purpose. Next, the random effects model allows the inclusion of both time-invariant and time-varying attributes of individuals, whereas the variables included in a fixed effects model are limited exclusively to attributes that change over time. Because the API scores, student demographics, and resource levels of schools do indeed vary from one year to the next, and no time-invariant variables are included in this study, the fixed effects model is the more favorable choice of the two. Moreover, this study was not obliged to take the random effects approach, given that the random effects model would be more appropriate if time-constant explanatory variables were of interest. A similar argument can be made in support of the fixed effects model from the sampling design: the cross-sectional data included were obtained not from a random sample of schools but from all schools in California. Owing to the use of non-random sample data, neither individual differences nor unobserved school heterogeneity could be assumed uncorrelated with the explanatory variables. This further supports the argument in favor of the fixed effects approach and suggests that the fixed effects model serves the purpose of this study better than the random effects model.


HYPOTHESIS, METHOD AND DATA


HYPOTHESIS


To analyze the within-school effects of student demographics and school resources on the API under California's accountability system, this study hypothesized that changes in factors contributing to interschool variations in API scores might also affect school performance gains. In previous production studies on California school performance, API scores were correlated with various student demographic and school resource factors (Catron & Wassmer, 2006; Goe, 2002; Trujillo, 2007; Driscoll et al., 2003; Powers, 2003, 2004a, 2004b). As demonstrated by Catron and Wassmer (2006), Goe (2002), and Trujillo (2007), the size of the English language learning and socioeconomically disadvantaged subgroups was noted to play a key role in API scores. The greater the proportion of students with limited English proficiency and of free and reduced price meal recipients at a school, the less likely the school was to outscore others. School racial/ethnic composition was also found to be an important determinant of API scores. According to Catron and Wassmer (2006), Goe (2002), and Trujillo (2007), the proportion of students from each racial/ethnic subgroup had a statistically significant correlation with school APIs. Although the direction of the effects was inconsistent across subgroups, a broad consensus among these studies was that schools with one or more large racial/ethnic subgroups were more likely to find themselves at either of two extremes of the API: high performers or low performers. As the level of school performance appeared sensitive to the sizes of the socioeconomically disadvantaged, English language learning, and racial/ethnic subgroups, it was assumed that school performance gains may be affected by changes in the sizes of these same subgroups.


There is an equally large empirical literature on the relationship between APIs and school resource factors. In general, the majority of the education production function literature concurs that schools perform better as class size is reduced (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004b). However, past research results seemed to disagree on the effects of school size on API scores. According to Driscoll et al. (2003) and Goe (2002), the larger the enrollment size, the lower the API. In contrast, Catron and Wassmer (2006) found school size to be an insignificant determinant of API scores. The relationships of API scores with the number of students per computer and the proportion of fully credentialed teachers were also found insignificant in Catron and Wassmer's (2006) study. Comparatively speaking, the impacts of teachers' experience and education level on the API were more consistent across production function studies: schools generally perform better with more experienced and post-baccalaureate teachers (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004b). Because these school resource variables were shown to affect the API at the interschool level, they were expected to have similar impacts on API gains at the intraschool level. So, in addition to changes in student demographics, it was assumed that a school's gain in API score may be affected, favorably or adversely, by changes in teacher qualifications such as education, experience, and credential type, as well as in school size, class size, and the availability of instructional technology.


METHOD


Because the primary objective of this study is to establish the within-school effects of the proposed student demographic and school resource variables on API gains, the fixed effects regression model is used to test the proposed hypothesis. Based on the fixed effects assumptions summarized by Wooldridge (2002) and Greene (2003), the following empirical model is adopted in this study to estimate the hypothesized relationship:

$$API_{it} = \beta X_{it} + a_i + u_{it} \qquad (1)$$

where $API_{it}$ represents the value of the dependent variable, measured by the API score for school $i$ at time $t$; $X_{it}$ is a vector of the student demographic and school resource variables to be estimated; and $a_i$ is a latent variable that measures the unobserved individual school effects, which are constant over all time periods. As is standard in a fixed effects model, the inclusion of the school-specific fixed effect, $a_i$, precludes the inclusion of time-invariant school characteristics, such as the geographic location, managerial quality, or organizational structure of schools, in the control vector $X_{it}$. The mean-zero error, $u_{it}$, is assumed random, with neither serial correlation nor correlation across cross-sectional units. The fixed effects transformed equation used in this study to estimate API gains is obtained by first averaging Equation 1 to get the mean API score for school $i$ over $t = 1, \ldots, T$:

$$\overline{API}_i = \beta \overline{X}_i + a_i + \overline{u}_i \qquad (2)$$

where $\overline{API}_i = T^{-1} \sum_{t=1}^{T} API_{it}$, $\overline{X}_i = T^{-1} \sum_{t=1}^{T} X_{it}$, and $\overline{u}_i = T^{-1} \sum_{t=1}^{T} u_{it}$. API gains, defined as the deviations of API scores ($API_{it}$) from individual school means ($\overline{API}_i$), are then derived by subtracting Equation 2 from Equation 1, so that their regression on the student demographic and school resource variables takes the form:

$$API_{it} - \overline{API}_i = \beta\,(X_{it} - \overline{X}_i) + (u_{it} - \overline{u}_i) \qquad (3)$$

Since panel data contain a large number of repeated measures, and inferences about school effects are prone to bias if the additional variation due to unobserved attributes of schools is ignored, the fixed effects regression extends control to all such characteristics of schools, explanatory and omitted alike, as long as these characteristics are time-constant. Assuming that the unobserved school heterogeneity may be correlated with the explanatory variables but is fixed over time, the fixed effects technique provides a simple means of removing time-constant omitted and observed effects from the analysis by differencing away the unobserved heterogeneity, $a_i$. This assumption, however, limits its application to analyses in which time-varying variables are of interest. To assess the hypothesized causal relationship between API gains and changes in the student demographic and school resource variables, all estimates in this study were computed using the Stata command for fixed effects regression, xtreg, fe.
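A minimal sketch of this estimation in Stata follows; the dataset and variable names here are hypothetical stand-ins (the study's actual variable definitions appear in Table 1):

    * Declare the school-by-year panel structure, 1999-2008
    use api_panel.dta, clear
    xtset schoolid year

    * Fixed effects (within) regression of base API on the nine
    * student demographic and seven school resource variables
    xtreg api pct_amind pct_asian pct_black pct_filipino        ///
        pct_hispanic pct_pacisl pct_white pct_frpm pct_ell      ///
        enrollment class_size pct_fullcred pct_firstyr          ///
        avg_teach_exp pct_advdegree students_per_computer, fe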


DATA


Detailed school-level API reports and staff data reports from DataQuest of the California Department of Education, as well as student and school data reports from Ed-Data, for the school years between 1998-1999 and 2007-2008 were used as the primary sources of information in this analysis. Because this study aimed to examine how changes in student demographic composition and school resource levels affected API scores within the school over a 10-year period, the base API was the chosen dependent variable. The independent variables were divided into two categories, (1) student demographics and (2) school resources, and all were selected mainly on the basis of their relationships with APIs as documented in the education production literature. Among all the student demographic data available in the two databases, the variables chosen for the first category were limited to those previously observed to correlate with the API (Catron & Wassmer, 2006; Goe, 2002; Trujillo, 2007): the percentage distributions of the seven racial/ethnic subgroups, the percentage of English language learners, and the percentage of students enrolled in the free or reduced price meal programs at school.


By the same token, the school resource variables selected for this analysis were resource factors demonstrated to have impacts on the API in the past. The term “school resources,” similar to the definition of Betts, Zau, and Rice (2003, p. vi), is used here to refer to school size, class size, teacher training, and school technology. The variables chosen for this category consisted of enrollment size, class size, the percentage of fully credentialed teachers, the percentage of first-year teachers, the average number of years of teaching experience, the percentage of teachers with advanced degrees, and the number of students per computer (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004b). A few variables found relevant to the API in previous literature were excluded from this category because of either limitations of the model or redundancy. For instance, school calendar and school location were excluded because they are time-constant and therefore barred by the fixed effects model's exclusion of time-invariant variables. Another variable not used in this study was the percentage of teachers with emergency credentials. This variable is complementary to the percentage of fully credentialed teachers, as the two add up to 100%, so only the percentage of fully credentialed teachers was included.


A balanced panel data set of 5,750 California schools over the 10-year period was used in this study. Based on data obtained from the school-level API reports, complete API information was available for 6,223 schools in California between the school years of 1998-1999 and 2007-2008. Given that a balanced panel of data was required for this analysis, only schools with complete data on the base API score and all the chosen independent variables were selected. Reports from DataQuest and Ed-Data revealed that complete data on the base API and all 16 explanatory variables of interest were available for 5,750 schools, representing 92.4% of all schools with API information from 1999 to 2008. A total of 473 schools with missing data on any of the selected independent variables from 1999 to 2008 were excluded from the study. Detailed definitions of all variables used in this study are provided in Table 1. Means and standard deviations for all variables across the 5,750 sample schools are presented in Table 2.
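A hedged sketch of how such a balanced panel might be assembled in Stata (variable names hypothetical, continuing the earlier sketch):

    * Drop school-years missing the API or any of the 16 explanatory
    * variables, then keep only schools observed in all 10 years.
    * The varlist range assumes the variables are stored contiguously.
    egen nmiss = rowmiss(api pct_amind-students_per_computer)
    drop if nmiss > 0
    bysort schoolid: keep if _N == 10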


RESULTS & DISCUSSION


Table 3 shows the fixed effects regression results for the effects of student demographics and school resources on school performance from 1999 to 2008. As predicted, all the student demographic and school resource variables hypothesized to influence school API gains were found statistically significant at the .001 level, except for the percentage of Pacific Islander students, which was significant at the .01 level. This study confirmed that the student demographic and school resource variables found important to school performance at the interschool level also had significant impacts on API gain at the intraschool level. Simply put, not only did demographic composition and resource availability matter to individual school scores, they were also important to school API gains. API gains appeared quite sensitive to annual changes in student demographics and resource levels within a school. Among all the explanatory variables tested in the model, Table 3 reveals that changes in the percentage distributions of racial/ethnic subgroups generally had a greater impact on school API gains than changes in school resources. As displayed in Table 3, a 1% increase in the distribution of a racial/ethnic subgroup within a school could result in an average API change that ranged from -5.0077 to 1.2372 points. Increases in two racial/ethnic subgroup populations, Filipino and Hispanic, had a positive relationship with API gain: annual API gain averaged 1.2372 and .8547 points, respectively, when the percentage of a school's Filipino or Hispanic students grew by 1%. In contrast, a 1% expansion of the American Indian, Asian, Black, Pacific Islander, and White populations at the school level was estimated to decrease the API by an average of 1.1825, 1.3524, 5.0077, 1.0051, and 2.7185 points, respectively. These largely negative relationships between racial/ethnic subgroup size and API gain appeared quite different from what was observed in previous school effects studies (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004a; Powers, 2004b; Trujillo, 2007). What is important about these divergent findings, however, is not whether they arise from the estimation method or the model specification; it is that they reveal potential subgroup effects on school performance under different accountability practices.


Because the majority of past empirical studies focused on how school APIs differed on the basis of specific student characteristics at a defined time, results from previous cross-sectional studies can be read as if school performance were being evaluated under a status accountability model. Were all schools required to meet the same achievement target under a status model, previous literature consistently attests that schools with higher proportions of Asian and White students would be more likely to score higher on the API. The positive impact of having a larger Asian or White student population is, however, unseen where school API gain is concerned. As the existing knowledge from previous cross-sectional studies overwhelmingly supports a positive correlation between API scores and the percentages of White and Asian students, it is uncertain why the API is found here to move inversely with the expansion of the White or Asian population within a school. The observed negative association between these two subgroups and API gains might, however, be more easily understood if previous cross-sectional findings are also taken into consideration. Assuming that better performing schools are likely to have more White and Asian students than their counterparts, as suggested in previous research, at least some of these schools, if not the majority, should have reached the 800 threshold. Because schools whose API is 800 or higher are no longer required to show growth, only to maintain their performance above the state goal line, the long-term average loss could well have arisen from fluctuations in these top performers' APIs across the years, regardless of changes in subgroup populations.


Given that many schools with large Asian and White subgroups are already better performing schools, the positive impact of an additional percentage of Asian and White students on API gain may also have tapered off because of a possible diminishing returns effect (Driscoll, Halcoussis, & Svorny, 2008). The concept of diminishing returns refers to the decrease in per-unit returns, or school achievement in this case, when the amount of a single factor is increased while all others are held constant. In education, as proposed by Driscoll, Halcoussis, and Svorny (2008), diminishing returns predominantly affect the achievement gains of high performing schools. High scoring schools would show the smallest gains over time, as this diminishing returns effect in achievement could overshadow any superior ability of high scoring schools to maneuver their scores by familiarizing students with the tests (pp. 215-216). It is therefore speculated that the potential advantage of having more Asian or White students for API gains could have been masked, at least in part, by the diminishing returns associated with the small gains of high performing schools over time. The absence of a positive association between API gain and the expansion of the Asian and White student populations may alternatively suggest a ceiling on the effect of these two top-performing subgroups' sizes. That is, if there is a limit beyond which the size of these two subgroups no longer advantages API gain, the positive effect of having more Asian or White students in a school would not be measured or estimated above that limit. As dominant as diminishing returns and ceiling effects may be on achievement gain, the presence of either could have overshadowed the potentially positive impact exerted by the expansion of these top-performing subgroups on API gain and resulted in a negative relationship between the two. Whether the atypical relationship found in this study between API gain and the expansion of either the Asian or the White subgroup is a consequence of diminishing returns or of a ceiling effect, however, remains in question until confirmed by further study. In the meantime, the observed downward API trends associated with these two supposedly advantaged subgroups raise concerns over whether the racial/ethnic composition of students affects school performance differently between and within schools. The speculated diminishing returns and ceiling effects must be explored and differentiated should the downward association of these two subgroups with performance gain prove to be not a model-specific but a general phenomenon at the intraschool level. While the federal and state governments are working strenuously toward narrowing the performance gaps between subgroups, understanding how performance varies with subgroup changes at both the interschool and intraschool levels would be of great value.


Among the remaining five racial/ethnic subgroups whose correlations with APIs were previously reported as insignificant or inconsistent, three were also noted to have negative associations with API gains: the American Indian, Black, and Pacific Islander subgroups. Because the mean population shares of the American Indian and Pacific Islander subgroups across schools were as small as .90% and .66%, respectively (Table 2), a 1% increase in either of these two subgroup populations at the school level is quite substantial. Although, of the two, only the proportion of Pacific Islander students had previously been shown to have a negative association with the API (Catron & Wassmer, 2006), the results of this study showed that a school's API declined from one year to the next as either subgroup's population increased. This suggests that the impact exerted on the API by changes in the distribution of these small racial/ethnic subgroups is more substantial within schools than between schools.


A 1% population growth of African American students was similarly observed to be associated with an API loss of 5.0077 points on average. Given the inconsistent trends across the literature, this effect is relatively hard to explain in light of previous studies of this subgroup's effect on the API. An increase in African American students within a school was found to affect the API not only significantly but also unfavorably. Given that the API scores of all sample schools averaged approximately 716 (Table 2), the annual 5% growth target required by the state for a school with an average API score would be around 4.2 points [(800 - 716) × 5%] under the current policy. For an average-performing school that also happens to experience an increase in its share of African American students, reaching the annual growth target could require twice the effort: in addition to meeting the growth target of 4.2 points, this school may need to combat the possible loss of 5.0077 points associated with the change in its racial/ethnic composition. Since changes in student demographics are beyond school control, the efforts of students and schools may be overlooked if annual changes in student demographics at the school level are not taken into account.


Another discrepancy found between this study and the majority of other school effects research concerns the impact of socioeconomically disadvantaged subgroup size on the API. Although conventional wisdom holds that the bigger the socioeconomically disadvantaged and English language learning subgroups, the lower the school API, this held true only for English language learners in the current study. On average, the annual API gain was estimated at -.669 points when the proportion of students with limited English proficiency increased by 1%. However, a school's API score was likely to grow by .4145 points when the share of its free and reduced price meal recipients became 1% larger from one year to the next. A positive correlation between the proportion of subsidized meal recipients and API gain was similarly reported in Driscoll et al.'s (2008) study. Although it is puzzling that the meal plan variable's practical effect was positive on API gain but negative on API score, combining the two trends may help explain the cause. The average percentage of subsidized meal recipients across schools shown in Table 2 was as high as 50.4%, suggesting that a good number of schools already had large shares of students enrolled in subsidized meal plans. While schools with higher percentages of students on subsidized meal plans were less likely to be API overachievers, Driscoll et al. (2008) explained the larger API gains associated with lower scoring schools as a result of the greater room for improvement afforded to underachieving schools under the current growth accountability model. In other words, schools with large shares of students participating in subsidized meal programs may appear less competitive under a status accountability model, but the growth model allows their improvement efforts to become much more visible.


Turning to the results related to school resources, the regression coefficients for the school resource variables were generally smaller than the coefficients for the student demographic variables, except for the percentage of fully credentialed teachers and the average number of years of teaching experience (Table 3). The effects of school resources on API gains, in most cases, were not as strong as those of the student demographic factors. A similar conclusion concerning the small effect of school resources on the API was reached by several researchers in previous literature (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004b). Betts, Rueben, and Danenberg (2000) speculated that the limited effect of school resources on school performance was an outcome of the state's equalization of school resources, while Powers (2004a) suggested that California's budget constraints had kept the impacts of resource factors to a minimum. Nevertheless, school resource levels appeared very important to school performance gains, even if the impacts of resource factors were generally slighter than those of student demographics. As displayed in Table 3, changes in all seven resource variables investigated in this study had significant effects on school performance gains, albeit small ones. Changes in five resource variables had negative impacts on API gains: school enrollment, class size, the percentage of first-year teachers, the average number of years of teacher experience, and the number of students per computer. The largest API loss, -1.5447 points, was associated with each additional year of average teaching experience. While previous school effects analyses overwhelmingly noted a positive correlation between teacher experience and the API (Powers, 2003; Powers, 2004b), the negative effect of the length of teaching experience on API gain found in this study may be the result of a ceiling effect. As speculated by Powers (2004b), the effect of teacher experience may have a ceiling; she too noticed a negative relationship between teacher experience and the API at the high school level (p. 783). The benefit of additional years of teaching experience could have been capped, as the average teaching experience across schools in this study was already as high as 12.85 years (Table 2). So despite the fact that the length of teaching experience mattered considerably to school APIs, adding more experienced teachers to California's highly experienced teacher workforce did not appear to have any stimulating effect on the score improvements of individual schools.


Class size had the second largest negative impact on API gain (-0.8958), followed by the proportion of first-year teachers (-0.8109), the number of students per computer (-0.7689), and school enrollment size (-0.0212). Except for the number of students per computer, whose association with APIs was previously found statistically insignificant, the other three variables have been shown to have similar negative correlations with APIs in the past (Catron & Wassmer, 2006; Driscoll et al., 2003; Goe, 2002; Powers, 2003; Powers, 2004b). So not only could smaller classes, fewer new teachers, and smaller enrollments help schools get better API scores, they could also help individual schools improve their APIs across years. The API was found to go down by an average of .7689 points as the number of students per computer increased by 1. This is quite encouraging, because if a school can afford to purchase more computers, the reduction in its student-to-computer ratio will likely lead to an increase in API gains. Understandably, having more computers within a school gives every child a better chance to use a computer as needed, since the group of students sharing one computer becomes smaller. Compared with class size, the proportion of new teachers, or the number of students per computer, the impact of school size was not as prominent: the API was estimated to go down by an average of only .0212 points with the addition of every hundred students to enrollment. A plausible explanation for the fairly slight impact of school size on API gain may be the large enrollment size of schools in California, averaging 835 students per school (Table 2). As Kane and Staiger (2002, pp. 256-257) explained, the marginal impact of performance improvement is small for large schools, so the chance of achieving drastic changes in test scores is much smaller for them.


The benefits of having more fully credentialed and highly educated teachers were quite noticeable in this study too. Both the percentage of teachers holding a full credential and the percentage of teachers with a master's or higher degree had positive effects on API gains. Each additional percent of fully certified teachers and of post-baccalaureate teachers was estimated to facilitate an average gain of 2.5013 and 0.3936 points, respectively, every year (Table 3). This seems to echo the NCLB's call for more “highly qualified” teachers in core subject areas and California's recruitment and hiring focus on highly qualified teachers in recent years (Powers, 2004a, p. 784). School API gains appeared far more sensitive to changes in the proportion of teachers holding full credentials than to changes in the proportion holding advanced degrees. While the positive impact of teachers with advanced degrees on APIs had been demonstrated previously (Powers, 2004b), the relationship between the proportion of fully certified teachers and the API score had been found statistically insignificant (Catron & Wassmer, 2006). Not only was the annual API found to grow positively and significantly with the percentage of fully credentialed teachers, but this effect was also the most sizeable among all the school resource variables estimated. The substantial API gain resulting from an additional percent of fully credentialed teachers is rather encouraging, because it is not only a feasible but also a controllable approach that the state and schools can consider and adopt. Staffing every school entirely with fully credentialed and highly educated teachers would be the ideal ultimate goal, but top policy priority should be given to underachieving schools, especially those repeatedly in jeopardy of missing their annual API growth targets. In addition to teacher qualifications, legislating a maximum number of students per computer and grade- or subject-specific class sizes would be worthy of consideration by policymakers. Collecting data on more indicators of resource use and incorporating them into the state databases would be a great start, as such indicators may have significant bearing on the betterment of school performance.


CONCLUSION


Given that the performance of an individual school is compared not against other schools but against its own record over time under California's accountability system, this study provides an empirical analysis of school performance gain in relation to school-level changes in student demographics and school resources. By identifying the relative contributions of different influences to API gains, I hope that the potential causes underlying school performance gains, strengthening and undermining alike, can be better understood. Using fixed effects regression, the hypothesized relationship between API gain and changes in nine student demographic and seven school resource variables is first proposed and then estimated. The use of the fixed effects model, as well as the selection of independent variables, naturally brings many limitations as far as the model specificity of the results is concerned. Based on the empirical setting and the particular data gathered, a hypothetical model of schools' performance-changing behavior is developed and estimated to provide a description of the factors that trigger performance changes and of the size of the changes when triggered. The fixed effects regression method primarily limits the choice of independent variables to time-varying ones. The impacts on performance gains of several important time-invariant factors, such as the geographic location and calendar type of schools, are therefore not captured alongside those of the time-varying variables in this study. In addition, because only a balanced school-level data set from DataQuest and Ed-Data between 1999 and 2008 is considered for estimation, a few relevant time-varying variables, including parents' education, the students with disabilities subgroup, student mobility, and teacher attendance rates, are excluded from the model because of either sampling or availability problems. The empirical analysis of school performance conducted in this study therefore compares the effects of selected time-varying variables on the API gains of individual schools within California over the period 1999-2008. Some of these limitations can, however, be seen as productive avenues for future research addressing a similar theme. Assessing the impacts of additional time-variant and time-invariant factors on individual or subgroup performance growth over time, outside California, and with different statistical models are sample challenges awaiting future research in this topic area. For instance, the variable limitation imposed by fixed effects models may be resolved by using a different statistical model, such as random effects, so that the impacts of both time-varying and time-constant variables can be estimated side by side for comparison. A similar research design with different school subsamples should be fruitful for differentiating the effects of educational variables by subgroup or performance rank, while different data availability in another state would allow the impacts of more or different educational factors on school performance gain to be unearthed. As the causal analysis of school and subgroup performance has been tied to emerging policies and practices in response to school accountability, this study hopes to expand the scope of analysis to include within-school factors and eventually bring this new perspective to the study of school performance gain.


Despite these limitations, the fixed effects regression results obtained in the present study offer a clearer picture of the impacts of changes in student and school factors on API gains at the intraschool level. Bringing together this many demographic and resource variables makes evident how significant the influence of school-level changes can be on school performance outcomes. The results showed that school API gain was very sensitive to school-level changes in both student demographics and school resources. A 1% change in each of the nine subgroup population shares was significant enough to affect API scores by an average of -5.0077 to 1.2372 points from one year to the next. API was found to grow with the expansion of the Filipino and Hispanic student populations, as well as with the proportion of free and reduced price meal recipients within individual schools. Annual school-level increases in the shares of the American Indian, Asian, Black, Pacific Islander, and White subgroups, plus the English language learning subgroup, were found to correlate inversely with API gains. In comparison, annual changes in school resources at the school level can produce a significant change in API, ranging from -1.5447 to 2.5013 points per unit. School performance improved with the proportion of teachers holding full credentials and advanced degrees but deteriorated with school enrollment, class size, teaching experience, the percent of first-year teachers, and the number of students per computer. Given how strong the relationships are between API gains and annual changes in subgroup size and resource level, it seems reasonable to assume that a school’s improvement or deterioration in performance from one year to the next could result, at least in part, from changes in its student demographics or school resources. The real efforts of students and schools may be disregarded if school performance and growth are tracked without taking school-level changes in student composition and school resources into consideration. The results from this study instead suggest the possibility of using annual school-level changes in subgroup size and school resources to predict which schools might be at greater risk of not meeting their annual performance goals. Moreover, tracking performance growth in parallel with school-level changes in student demographics and school resources would yield important information about possible causes of deterioration, e.g., whether a decline in test performance results from a continuous loss of quality teachers or a drastic change in student composition. Such information may be of great importance for policymakers and officials to consider before rendering sanction or reward decisions.
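One simple way to operationalize this kind of early-warning use of the estimates is sketched below: multiply each observed year-over-year change by its Table 3 coefficient and sum. The coefficients are the study's point estimates; the school's input changes are invented for illustration.

```python
# Hedged illustration of flagging at-risk schools: project a school's
# expected API change from its year-over-year input changes using the
# Table 3 point estimates. The deltas below are hypothetical.
coef = {
    "hispanic": 0.8547, "white": -2.7185,            # share coefficients
    "full_credential": 2.5013, "class_size": -0.8958,
}
delta = {"hispanic": 2.0, "white": -2.0, "full_credential": -3.0}

expected = sum(coef[k] * d for k, d in delta.items())
print(f"Expected API change: {expected:+.2f} points")
# 2(0.8547) + (-2)(-2.7185) + (-3)(2.5013) = -0.36: a school to watch
```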


While some outcomes of this study are similar to results obtained by earlier cross-sectional regression studies of performance variation between schools, others are surprisingly the opposite of what would have been predicted. The discrepancies, as well as the similarities, between this study’s findings and those of prior studies have implications not only for accountability policies but also for school practice in general. One reservation is that while California places great responsibility on individual schools for student growth, little policy consideration is given to the likely effects of demographic and resource changes on school performance within the school. California public schools are required to meet their school-wide and subgroup growth targets every year, and a school’s targeted goal is considered accomplished only when its students outperform themselves on standardized tests from one year to the next. So whenever a school’s students fail to improve on standardized tests, the system views the deterioration of API scores as an indication of the school’s ineptitude. Underlying causes of students’ decline in academic performance within a school are neither sought nor brought to light under the current outcome-based accountability system. Given that the ultimate goal of the accountability system is to have all schools bring their students to proficiency, it may be equally important, if not more so, to understand why schools fail to meet their performance targets as well as by how much they fail. As significant as the impacts of changes in student demographics and school resources are, tracking these school-level changes along with performance growth should help identify potential underlying causes of school performance. If a school that repeatedly fails to meet its growth targets is also found to have an inadequate provision of resources in one way or another, filling the shortage of school resources may be one strategy worth attempting to raise its performance. But if a school’s continual failure appears more strongly tied to drastic changes in its student composition, sanctioning the school in that case would hardly be a fair assessment of its management team’s effectiveness.


The inverse relationship of API with the size of several racial/ethnic subgroups is surely another issue that deserves close attention. If all schools were measured against a common target, as under a status accountability model, previous analyses agree strongly on the positive relationship of API with the percentage of Asian and White students. Conflicting findings in the literature on the effect of other racial/ethnic subgroups’ sizes on API scores, on the other hand, make it relatively hard to predict which schools are more vulnerable in the system simply from the size of those other subgroups. But under California’s growth accountability system, the expansion of five racial/ethnic subgroups, including two presumably in an advantageous position under the status model, appears to be a strong determinant of negative API gains at the intraschool level. The assumption that the bigger the White and Asian subgroup shares, the better the performance is not confirmed in this study, which raises concerns over whether the subgroups in an advantageous position differ between the two accountability models. The closing of achievement gaps in California principally focuses on bringing up the performance level of four subgroups considered disadvantaged in the state: African American, Hispanic, socioeconomically disadvantaged, and English language-learning. In order for schools to be eligible for consideration for Distinguished School honors, all numerically significant disadvantaged subgroups within the school must improve more than the average statewide API growth required for their respective subgroups and outgrow their White and Asian peers if those peers are also numerically significant (California Department of Education, n.d.). In other words, not realizing the negative impact changes in subgroup size may have on performance gain, the state’s approach to narrowing the performance gaps within the school still relies solely on comparisons of achievement gains among subgroups. Although the growth model was implemented by California mainly to guard schools against unfair comparisons, it seems to defeat the purpose of adopting the growth model if schools are rewarded on the basis of how their numerically significant subgroups’ improvements measure against one another. Because the appropriate policies for narrowing the achievement gaps within a school may depend on that school’s particular demographic situation, it may be more feasible for individual schools to set a performance growth target for each subgroup to pursue annually. The more subgroup targets met, or the bigger the subgroup gains achieved, the more successful a school should be considered in closing the achievement gaps.


This study’s confirmation of the positive impact of teachers’ advanced degrees and full teaching credentials on performance gains suggests that teacher qualifications may hold the key to improving student achievement. APIs are found to grow with the percent of fully credentialed and highly educated teachers from one year to the next. Although more teaching experience is not found to have a positive impact on API gains, perhaps due to a ceiling effect as speculated by Powers (2004b, p. 783), knowing that both teaching credentials and advanced degrees are of great help to school improvement offers hope for schools in danger of missing their performance targets. This is especially important for schools whose poor performance may relate to factors beyond their control, such as changes in student demographics; staffing these schools with more fully credentialed and highly trained teachers could yield promising results. Furthermore, in the hope of finding more effective ways to close the achievement gaps between subgroups, it would be even more useful if the impact of these school resource variables could be differentiated by subgroup. With a thorough understanding of which resource factors work best for enhancing the academic performance of each subgroup at the intraschool level, ensuring that no disadvantaged subgroup is in short supply of stimulating resources may be a good start. The differentiation of school resource factors’ impacts by subgroup, however, awaits further study.


In sum, although California has chosen the growth model to eliminate the potential between-school performance variance associated with the status bar model, the findings from this study suggest that California’s current accountability system is not without flaws. Non-consideration of the impacts of school-level changes in student demographics and school resources on school performance makes it extremely difficult to evaluate whether factors other than student effort play a role in the success or failure of schools. Given how significantly school API gains are affected by annual school-level changes in resource levels and especially in racial/ethnic composition, it is of great importance to consider how these within-school changes should be embedded into the performance evaluation framework. Because a drop in one racial/ethnic subgroup’s population share necessarily enlarges one or more complementary subgroups, the problem of negative API growth would only be compounded if a school happens to have more than one enlarged subgroup associated with API loss. Rather than pinpointing which schools or subgroups outperform their peers or putting any particular group under the spotlight, the main objective of this study is to identify causes underlying school performance gain under the current accountability system. Because both the federal and state accountability systems aim to bring all students to proficiency, helping schools and subgroups overcome the obstacles standing in the way of their school-wide, statewide, and nationwide targets remains a priority task.
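To make the compounding argument concrete, the short sketch below works through one hypothetical compositional shift using the Table 3 point estimates; only the coefficients come from the study.

```python
# Illustrative arithmetic for the compounding effect described above.
# Coefficients are the study's Table 3 estimates; the 2-point drop in the
# Filipino share, absorbed equally by two other subgroups, is hypothetical.
coef = {"filipino": 1.2372, "black": -5.0077, "white": -2.7185}
delta = {"filipino": -2.0, "black": 1.0, "white": 1.0}  # shares sum to zero

net = sum(coef[k] * d for k, d in delta.items())
print(f"Net projected API change: {net:+.2f} points")  # about -10.20
```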


Table 1. Variable Labels and Definitions

Dependent
API: Academic Performance Index

Independent: Student Demographics
American Indian: The percent of students who are American Indian
Asian: The percent of students who are Asian
Black: The percent of students who are Black
Filipino: The percent of students who are Filipino
Hispanic: The percent of students who are Hispanic
Pacific Islander: The percent of students who are Pacific Islander
White: The percent of students who are White
English Language Learners: The percent of students who are not yet proficient in English
Free Meal: The percent of students enrolled in the program for free or reduced price meals

Independent: School Resources
School Enrollment: Number of students in the school
Class Size: Average class size, calculated by dividing enrollment by the number of classes with 1-50 students, excluding special education and a few other minor categories
Full Credential Teachers: The percent of teachers who hold a full credential
First-Year Teachers: The percent of first-year teachers in the school
Teaching Experience: Average number of years that all teachers have been instructing students
Graduate Degree Teachers: The percent of teachers with a master's degree or higher
Students per Computer: Number of students per computer


Table 2. Descriptive Statistics for Study Variables

Variable                              Mean       SD
Dependent
API                                 715.88    53.41
Independent: Student Demographics
American Indian                       0.90     0.56
Asian                                 8.65     2.06
Black                                 7.61     1.89
Filipino                              2.48     0.96
Hispanic                             43.08     4.56
Pacific Islander                      0.66     0.47
White                                35.15     5.17
English Language Learners            25.96    13.07
Free Meal                            50.41    17.62
Independent: School Resources
School Enrollment                   834.53    98.49
Class Size                           24.00     1.41
Full Credential Teachers             90.95     7.63
First-Year Teachers                   6.37     5.60
Teaching Experience                  12.85     1.78
Graduate Degree Teachers             31.30     8.13
Students per Computer                 7.36     7.40

Note: Standard deviations are within-school estimates.



Table 3. Fixed Effects Regression Estimation

Variable                        Coefficient       SE
Student Demographics
American Indian                     -1.1825    0.3093***
Asian                               -1.3524    0.1089***
Black                               -5.0077    0.1156***
Filipino                             1.2372    0.1929***
Hispanic                             0.8547    0.0841***
Pacific Islander                    -1.0051    0.3638**
White                               -2.7185    0.0716***
English Language Learners           -0.6690    0.0290***
Free Meal                            0.4145    0.0214***
School Resources
School Enrollment                   -0.0212    0.0018***
Class Size                          -0.8958    0.1244***
Full Credential Teachers             2.5013    0.0259***
First-Year Teachers                 -0.8109    0.0337***
Teaching Experience                 -1.5447    0.1063***
Graduate Degree Teachers             0.3936    0.0227***
Students per Computer               -0.7689    0.0236***
Constant                           649.6092    8.3933***
R2 (within)                          0.4758
F value                             17.54
Prob. < F                            0.000
Number of Schools                 5750
Number of Observations           57500

Note: Dependent variable is API. *p < .05. **p < .01. ***p < .001.


References


Betts, J. R., Rueben, K. S., & Danenberg, A. (2000). Equal resources, equal outcomes? The distribution of school resources and student achievement in California. San Francisco: Public Policy Institute of California.


Betts, J. R., Zau, A. C., & Rice, L. A. (2003). Determinants of student achievement: New evidence from San Diego. Sacramento, CA: Public Policy Institute of California.


California Department of Education. (1999). Explanatory notes for the 1999-2000 Academic Performance Index (API) growth report. Retrieved from http://www.cde.ca.gov/ta/ac/ap/expnotes99g.asp


California Department of Education. (2004). 2004 Accountability progress report. Retrieved from http://www.cde.ca.gov/ta/ac/ay/documents/infoguide04.pdf


California Department of Education. (n.d.). California school recognition program 2010 eligibility criteria. Retrieved from http://www.cde.ca.gov/nr/el/le/documents/yr09encl1029.pdf


California Legislative Information. (n.d.). Education code section 52051-52052.6. Retrieved from http://www.leginfo.ca.gov/cgi-bin/displaycode?section=edc&group=52001-53000&file=52051-52052.6


Catron, S., & Wassmer, R. W. (2006). Beyond the basics: The effects of non-core curricular enrichments on standardized test scores at high schools. Michigan Journal of Public Affairs, 3, 1-41. Retrieved from http://www.mjpa.umich.edu/uploads/2/9/3/2/2932559/catron_wassmer_mjpa_final.pdf


Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity. Washington, DC: U.S. Government Printing Office.


Doran, H. C., & Izumi, L. T. (2004). Putting education to the test: A value-added model for California. San Francisco: Pacific Research Institute.


Driscoll, D., Halcoussis, D., & Svorny, S. (2003). School district size and student performance. Economics of Education Review, 22, 193-201. doi:10.1016/S0272-7757(02)00002-X


Driscoll, D., Halcoussis, D., & Svorny, S. (2008). Gains in standardized test scores: Evidence of diminishing returns to achievement. Economics of Education Review, 27, 211-220. doi:10.1016/j.econedurev.2006.10.002


Goe, L. (2002). Legislating equity: The distribution of emergency permit teachers in California. Education Policy Analysis Archives, 10(42). Retrieved from http://epaa.asu.edu/epaa/v10n42/


Goertz, M. E., Duffy, M. C., & Le Floch, K. C. (2001). Assessment and accountability in the fifty states: 1999-2000 California. Retrieved from University of Pennsylvania, Consortium for Policy Research in Education website: http://www.cpre.org/images/stories/cpre_pdfs/ca.pdf


Gong, B., Blank, R. K., & Manise, J. G. (2002). Designing school accountability systems: Towards a framework and process. Washington, DC: Council of Chief State School Officers. Retrieved from http://www.ccsso.org/content/pdfs/designing_school_acct_syst.pdf


Greene, W. H. (2003). Econometric analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.


Hanushek, E. A., & Raymond, M. E. (2002). Lessons about the design of state accountability systems. Paper prepared for “Taking Account of Accountability: Assessing Policy and Politics,” Cambridge, MA: Harvard University. Retrieved from http://edpro.stanford.edu/hanushek/admin/pages/files/uploads/accountability.Harvard.publication%20version.pdf


Illinois State Board of Education. (2007). Illinois task force on growth model. Retrieved from http://www.isbe.net/assessment/pdfs/growth_model_rpt2007.pdf


Izumi, L. T., & Cox, M. (2003). California education report card: Index of leading education indicators (3rd ed.). San Francisco: Pacific Research Institute.


Kane, T. J., & Staiger, D. O. (2002). Volatility in school test scores: Implications for test-based accountability systems. In D. Ravitch (Ed.), Brookings papers on education policy, 2002 (pp. 235-283). Washington, DC: Brookings Institution Press.


Kane, T. J., & Staiger, D. O. (2003). Unintended consequences of racial subgroup rules. In P. Peterson & M. West (Eds.), No child left behind? The politics and practice of school accountability (pp. 152-176). Washington, DC: Brookings Institution Press.


Kennedy, P. (2003). A guide to econometrics (5th ed.). Cambridge, MA: The MIT Press.


Levačić, R., & Vignoles, A. (2002). Researching the links between school resources and student outcomes in the UK: A review of issues and evidence. Education Economics, 10, 313-331.


Miller, G. J., & Yang, K. (2007). Handbook of research methods in public administration (2nd ed.). New York, NY: CRC Press.


Odden, A., & Picus, L. O. (2000). School finance: A policy perspective (2nd ed.). New York: McGraw-Hill.


Powers, J. M. (2003). An analysis of performance-based accountability: Factors shaping school performance in two urban school districts. Educational Policy, 17(5), 558–586.


Powers, J. M. (2004a). Increasing equity and increasing school performance—Conflicting or compatible goals? Addressing the issues in Williams v. State of California. Education Policy Analysis Archives, 12(10). Retrieved from http://epaa.asu.edu/epaa/v12n10/


Powers, J. M. (2004b). High stakes accountability and equity: Using evidence from California's public schools to address the issues in Williams v. State of California. American Educational Research Journal, 41(4), 763-795.


Trujillo, M. (2007). Bilingual education in California: Is it working? Penn McNair Research Journal, 1(1). Retrieved from http://repository.upenn.edu/mcnair_scholars/vol1/iss1/3


Van Cleave, J. H. (n.d.). Panel models in sociological research [PowerPoint slides]. Retrieved from http://www.yale.edu/ciqle/POWERPOINT/panel1_files/panel1.ppt


Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge, MA: The MIT Press.


Yaffee, R. (2003). A primer for panel data analysis. Retrieved from New York University, Social Sciences, Statistics and Mapping Group of ITS’ Academic Computing Services website: http://www.nyu.edu/its/pubs/connect/fall03/yaffee_primer.html





Cite This Article as: Teachers College Record, Volume 115, Number 4, 2013, pp. 1-28.


About the Author
  • Mei-Jiun Wu, University of Macau
    MEI-JIUN WU is an assistant professor of educational administration in the Faculty of Education at the University of Macau. His research interests include educational administration and policy, principal preparation programs, and quantitative methods. A recent publication is: Wu, M. J. (2007). Assessment of the internal efficiency of Macau's basic education: An alternative application of the reconstructed cohort method. Education Journal, 35(1), 63-91.