
State Strategies to Improve Low-Performing Schools: California’s High Priority Schools Grant Program


by Thomas B. Timar & Kris Kim Chyu - 2010

Background: School accountability policies and high-stakes testing have created new demands on state policy makers to provide assistance to low-performing schools. California’s response was the Immediate Intervention/Underperforming Schools Program (II/USP) and the High Priority School Grants Program (HPSGP).

Objective/Research Question/Focus of Study: This study explores the effects of the HPSGP on improving academic performance of the lowest performing schools in California. The study focuses on the organizational factors that influenced resource allocation decisions. The discussion addresses what might be done to ameliorate some of the key problems implicated in nonperforming schools.

Participants: Data for this study came from site visits to 15 schools that received HPSGP funding. Of the 15 schools we studied, 10 were high schools, and the remainder elementary schools. Eleven of the schools were urban, and four were rural.

Program Description: Schools in the bottom 10th percentile are eligible to apply for HPSGP funds. The State of California provided 655 schools with $400 per pupil each year for three years, with an optional fourth year. Participating schools also could apply for an optional planning grant of $50,000 in the initial year.

Research Design: Using qualitative case studies of 15 schools in California, the study compares HPSGP recipient schools that made significant academic improvement with HPSGP schools that remained stagnant. The site visits, which took place between February and May 2006, comprised structured interviews with principals, teachers, HPSGP and special program coordinators, and school site council members, as well as classroom observations and focus groups. During a two-day visit, at least five people at each school were interviewed.

Conclusion: This study found that organizational characteristics, such as leadership of principals, member participation in decision-making, and existence of coherent goals and plans, have a significant influence on the ability of schools to make effective use of grant funding and to achieve higher student performance. The study’s main finding was that improving schools were deliberative and purposive in their use of program funds. Nonimproving schools, on the other hand, were opportunistic, lacking a plan or vision for using funding to build effective regimes of teaching and learning.

THE POLICY CONTEXT OF RAISED EXPECTATIONS FOR STUDENT ACADEMIC PERFORMANCE


In the days when there still was such a thing as a California Assessment Program (CAP), newspapers around the state routinely published school test scores. For those who kept track and cared, it was a disheartening ritual, mostly for its predictability. It was easy to predict a school’s scores from year to year, knowing nothing more than the previous year’s scores. And while there may have been some marginal changes here and there, yearly rankings were quite consistent. Schools at the bottom tended to stay there. What made bad news worse was the inexorable connection between the socioeconomic status (SES) of a school’s student population and test scores. California has long since stopped administering the CAP, but low achievement among large numbers of students—particularly those who are poor, who do not speak English, and whose parents lack formal education—persists.


What has changed is how policy makers have structured the problem of persistently low levels of student achievement. The present study assesses state efforts to improve instruction in the lowest-performing schools by providing those schools with additional resources to engage in improvement efforts. The impetus for that change has been the enactment of state accountability laws (in combination with No Child Left Behind) and the development of curriculum and performance standards. As a result, it is now much more difficult, if not impossible, for policy makers, teachers, administrators, school boards, and the public to simply accept persistently low student performance as an unpleasant but inevitable fact. Enactment of California’s Public School Accountability Act (PSAA) in 1999 created a massive and complex regulatory structure that holds schools responsible for student achievement. A critical feature of the accountability system is a variety of state interventions—a combination of technical support and sanctions—that are meant to force schools to address the problem of low student performance.


The year PSAA was enacted, roughly 1.2 million students were in Decile 1–3 schools. The average Academic Performance Index (API) of those schools was 473, with a low of 297 and a high of 561.1 Of those students, 79% were eligible for free or reduced meals, 45% were English learners, 67% were Hispanic, and 12% were African American; 32% of students’ parents had only a high school education, and 38% of students’ parents did not have a high school education.2 The characteristics of Decile 1 schools in 1999 were similar, though more pronounced. The average API for the 685 Decile 1 schools was 417. Of those students, 75% were Hispanic and 13% were African American, 56% were English learners, and 86% were eligible for free meals. About one-half of the parents of children in Decile 1 schools did not have a high school education.


In stark contrast to the state’s Decile 1 schools, in 1999, slightly more than 1 million students were in Decile 8–10 schools. The average score in these schools was 787. Among students in those deciles, 14% were Hispanic, 4% were African American, 8% were English language learners, and 18% were eligible for free meals. Among the parents in this group, 35% were college graduates, and 21% had graduate degrees.


School accountability and its associated sanctions raise the question of what to do to help low-performing schools improve. The Immediate Intervention/Underperforming Schools Program (II/USP) was the legislature’s initial response for state intervention. Policy makers believed that a combination of discretionary funds, schoolwide planning, and external technical assistance by consultants would coalesce into solid gains in teaching and learning in these schools. The program was voluntary and enrolled only a fraction of eligible schools. In response to concerns that II/USP funding was too diffuse to be of much benefit, the legislature created the High Priority Schools Grant Program (HPSGP) to target Decile 1 schools.3


The measure was built on the assumption that low-performing schools needed to “tighten up”; they needed to align teaching with state standards. That meant adopting textbooks that followed state curriculum standards, engaging principals and teachers in professional development so they could successfully implement state standards, and hiring math and literacy coaches to show teachers how to teach to the state’s curriculum standards. The additional resources that the program provided were meant to give schools the capacity to substantially improve teaching and learning. The present study assesses state efforts to improve instruction in the lowest performing schools—those receiving HPSGP funds—by providing those schools with additional resources to engage in a three-year improvement effort.


We first examine past strategies to address the problems of persistent low achievement in schools, a problem that is most acute among schools that serve large numbers of non-English-speaking minority students from disadvantaged socioeconomic backgrounds. Second, we discuss the research on the relationship between resources and school improvement. Third, we present findings from our study of 15 HPSGP schools to address how some low-performing schools improved over the funding period. The fourth section discusses policy, conclusions, and recommendations.


COMPENSATORY PROGRAMS AND SCHOOL IMPROVEMENT


California’s Public School Accountability Act (PSAA) signaled an important shift in how policy makers and practitioners think about the problem of low student performance. In theory, the change is away from a compensatory, regulatory model based on categorical program support—the pervasive model for the past 35 years—to a capacity-building and accountability model. It is a major change in how the problem of low student achievement is defined and how solutions to it are structured. The most significant dimension of this change is that the focus has shifted from low-achieving children to low-performing schools. That by itself represents a sea change in education policy because it redefines roles, responsibilities, and professional relations in education.


The history of compensatory education is synonymous with the history of Title I of the Elementary and Secondary Education Act. A primary purpose of the law was to provide financial assistance to school districts that suffered from the adverse “impact that concentration of low-income families has on [their] ability . . . to support adequate educational programs.”4 Its other purpose was to provide direct support to children by funding programs to meet their “special needs.” In an effort to secure local compliance—to guarantee that federal funds were flowing only to eligible students—the U.S. Office of Education cast an ever-widening regulatory net. While these efforts are well documented, it is important to note that regulations implementing Title I focused on changing the legal and political organization of schools.5 The elaboration of substantive and procedural rights, the requirement for clear audit trails for local expenditure of federal dollars, federal and state sanctions for misuse of funds, the growth of a vast state and local bureaucracy to monitor local compliance, and the empowerment of local community groups as a countervailing force to local school authorities eclipsed the pedagogical dimensions of federal compensatory aid.


The policy framework of Title I shaped behavior in schools in several unintended ways that in the long term inhibited instructional effectiveness. The preoccupation of policy with regulatory compliance denigrated instructional practice by undercutting professional judgment and authority and fragmenting both schools and students. Instead of focusing on the whole child, policy dissected children into disparate program targets. Although it seems naïve in retrospect, federal policy makers believed that stretching a regulatory net over schools could overcome the incapacity, ineptitude, or indifference of local schools serving poor, low-achieving students. While such strategies did force some schools to improve, they undermined those educators who were making good-faith efforts to serve those children.


State policies directed at schools serving disadvantaged students mirrored federal policy. California’s counterpart to Title I was Economic Impact Aid, funding targeted to low-income minority students. At the end of the 1970s, the largest administrative unit within the California Department of Education was the Field Services Unit, the unit responsible for monitoring and reviewing local compliance with federal and state compensatory programs. The state regulatory framework for education was rooted in distrust of the motives and capacity of local school officials. At the state level, officials came to share Washington’s belief in stressing compliance, as distinguished from assistance.6


The major difference between the compensatory, regulatory model and the accountability model is that under the previous model, schools could be sanctioned for failing to follow rules, but they could not be sanctioned for not teaching students. Implicit in both federal and state policies was the belief that schools could develop effective programs for disadvantaged children without paying attention to the overall quality of the school. Simply put, they believed that good programs could trump bad schools.


The shift toward accountability and student outcomes began with the Hawkins-Stafford Amendments to Chapter 1 (which replaced Title I during the Reagan years) enacted in 1988. Among the many changes initiated by the legislation, the most important were those concerning program coordination, schoolwide projects, school performance accountability, and parental involvement. The amendments marked a significant shift in Chapter 1 policy by emphasizing program effectiveness and accountability. Chapter 1 schools were required to develop student outcome goals, and schools that failed to meet those goals were required to develop school improvement plans. Congress also urged districts to adopt local standards and measures of student progress based on proficiency.7


California moved in a similar direction. In part, this was due to federal requirements contained in the reauthorization of Title I. The law required states to develop performance standards and assessments as a condition of receiving federal funds. California was first among states to develop curriculum frameworks, academic content standards, and assessments. Enactment of the PSAA in 1999 completed the shift to an outcomes-based accountability system in which schools, theoretically at least, were responsible for the academic progress of all students, and instructional improvement would supersede regulatory compliance.


THE HIGH PRIORITY SCHOOL GRANTS PROGRAM


Among the principal features of California’s school accountability system are programs to engage low-performing schools in improvement efforts. One of these was the Immediate Intervention/Underperforming Schools Program. The other, the High Priority Schools Program, is similar but targets Decile 1, rather than Decile 1–5, schools. While the two programs are structurally similar, the HPSGP places some additional requirements on schools.


The HPSGP was created by Assembly Bill 961 (Chapter 747, Statutes of 2001) to provide additional funds to the state’s lowest-performing schools. To be eligible for funding, schools had to rank in the bottom decile of the state’s API. Participating schools receive $400 per pupil per year for a period of three years. Districts are required to match state funding with $200 per pupil annually. Over the life of the program, this amounts to $1,800 per pupil, or $1.4 million (including the local match) for a school of 900 students—the average school size in the HPSGP. A few schools received over $5 million in program funding. Over the three-year funding cycle—2002–2003 through 2004–2005—HPSGP allocations to districts were slightly over $754.9 million. In return, schools had to meet state benchmarks for improved student academic performance. Schools failing to improve face various sanctions and interventions, including state takeover and dissolution. HPSGP schools must also meet a few requirements, such as participating in legislatively mandated professional development programs for teachers and principals and purchasing state-adopted textbooks in reading/language arts and mathematics.


The structure of the HPSGP is such that schools receive funding for three years, with the possibility of an additional year if they are making “adequate progress.” The implied rationale is that three years of funding will yield investments that result in significant capacity-building and sustainable organizational improvement. Whether schools are successful in sustained improvement is conditioned by several factors. First, it requires schools to take a long-term view. Rather than focusing on “quick fix” solutions, schools must be willing to focus on building organizational competence. Second, it assumes that schools will invest new resources to maximize collective, organizational benefits rather than individual benefits.


In addition to annual HPSGP funds, schools may apply for a $50,000 planning grant to develop their action plans. They must engage external consultants to assist in development of a school action plan. The action plan must be based on an initial needs assessment, it must be “research based” and “data driven,” and it must encompass a strategic plan for helping low-performing students.8 The legislation lists a number of options that may be included in the strategic plan. They include common planning time for teachers, support staff, and administrators; mentoring for site administrators and peer assistance for teachers, particularly new teachers; professional development activities, particularly in mathematics and reading and literacy; and incentives to attract credentialed teachers and quality administrators. External evaluators are required to engage parents throughout the planning process, and each school’s site council is required to sign off on the school improvement plan.


While legislation creating the HPSGP provides a lengthy list of school improvement actions that schools may take, the legislation gives schools discretion in structuring their programs. Legislation provides various examples of how schools may address improvement strategies but leaves it to districts to adjust the details to the specific needs of each school. More than anything, the legislation embodies a set of expectations for schools about how they might address instructional improvement. The bill’s language places considerable emphasis on comprehensiveness, collaboration, planning, assessment, reading across the curriculum, community engagement, mentoring, professional development, and beginning teacher training. The measure delineates the essential components of a school improvement plan but leaves schools room to develop a plan that meets local conditions and needs. To assess their progress in meeting academic growth in core curriculum areas and to monitor the efficacy of their school improvement plan, schools are strongly encouraged to revisit their action plans and to modify them as necessary.


After three years of participating in the program, a school that has not met its growth targets or has failed to show “significant growth,” as determined by the State Board of Education, is required to enter into a contract with a School Assistance and Intervention Team (SAIT). Members of the SAIT are individuals who “possess a high degree of knowledge and skills in the areas of school leadership, curriculum and instruction aligned to state academic performance content and performance standards, classroom discipline, academic assessment, [and] parent-school relations . . . and have proven . . . expertise specific to the challenges inherent in low-performing schools.”9 Finally, schools that fail to improve are subject to various sanctions. These include reassigning students to other schools, reassigning teachers, renegotiating the collective bargaining agreement, reorganizing the school, and closing down the school.


THE HPSGP AND SCHOOL IMPROVEMENT


The shift from a school accountability system driven by inputs, regulation, and compliance to a system based on outcomes necessitates a major shift in the process of schooling and a new conception of the organization of schooling. One early study on the organization of schooling noted the prevalence of teacher autonomy, which is “reflected in the structure of the school system, resulting in what may be called their structural looseness.”10 The literature on the organization of schools generally regards schools as a collection of classrooms. Teachers are responsible only for their classrooms—for what goes on behind the classroom door. Organizational theories describe schools as “loosely coupled organizations” whose commonalities are anchored in “myth” and “ritual” that have little to do with the underlying technology of teaching and learning.11 Organizational and instructional coherence was assumed to be imposed by textbooks, the professional norms of teachers (in theory, inculcated by teacher preparation and professional development programs), and some level of supervisory oversight.12 Consistent with theories of “loose coupling” is the “garbage can” model of school decision-making. Instead of a coherently articulated model of decision-making based on organizational goals and strategies to attain them, decision-making in schools was best described as individuals reaching for readily available solutions to satisfy immediate organizational needs. Both the traditional school organizational model and the decision-making model that characterize schools are the antithesis of coherent, long-term organizational planning.


Consequently, a central hypothesis of this study is that improving HP schools are ones that were able to transform themselves from collections of classrooms into coherent, purposive organizations. We assumed that this would occur because accountability shifts the focus directly on schools, holding them accountable for the performance of their students. It does not matter if the first-grade teacher is doing a wonderful job with her students if the other teachers in the school are not.


The need for organizational coherence and collaboration is all the more important in low-performing schools. As Table 1 shows, Decile 1 schools serve large numbers of poor children, many of whom are English learners and come from families lacking formal education. Organizational factors are likely to matter far more in schools that serve educationally disadvantaged students than in high-performing schools that serve high-SES students. As organizations, high-performing schools (largely because of their students) can continue to do what they were doing. Low-performing schools, on the other hand, need to learn how to do things very differently. Doing more of the same is unlikely to raise levels of student achievement.13 The importance of this point cannot be overstated. Decile 1 schools not only have the most challenging students to teach, but also must reinvent themselves in order to raise student achievement levels. Doing more of the same, or doing what one high school principal labeled “math louder,” is not likely to have positive results.


The policy underpinnings of the HPSGP are that an infusion of money, external technical support, a comprehensive school plan, and the threat of sanctions for failure to perform will catalyze the kind of organizational transformation that turns low-performing schools into high-performing (or at least higher-performing) schools. The policy assumes, moreover, that three years is sufficient time to build the necessary capacity in schools to effect those changes.14 The question is whether teachers and administrators are willing to invest in a complex undertaking that has high transaction and opportunity costs and a highly uncertain probability of success. In an organizational environment (such as education) that is characterized by a weak technical base and considerable uncertainty of goal attainment, the incentives to individuals to invest time and energy are meager. Improving teaching and learning, building trust, creating organizational coherence, and similar capacity-building activities are high risk. However, some schools were willing to make the effort and succeeded; others were not. As we argue in this article, the difference between those schools that were willing to make the investment and succeeded and those that did not is attributable largely to organizational factors—a school’s capacity to forge a coherent set of instructional goals and the necessary strategies to achieve those goals.


CONCEPTUAL FRAMEWORK AND STUDY METHODOLOGY


The magnitude of the state’s investment in low-performing schools through the HPSGP—over $4 billion—raises the question, What difference has that investment made? This study does not attempt to answer that question in a comprehensive way.15 That is, the study does not address the question of whether schools receiving HPSGP funds did better as a group than those schools that did not. At the outset, the study sought to answer the question of how participating schools used program funding to meet student achievement goals. Were there differences in how schools allocated new resources, and were some allocation strategies more successful than others? Did schools look for quick fixes or long-term improvement strategies? Did schools that met their API growth targets each year and by all subgroups share common strategies? As noted earlier, the legislation that created the HPSGP gives schools some flexibility in designing improvement strategies. While such flexibility allows schools to tailor school improvement strategies to their particular needs and circumstances, it also creates the possibility that program funds will be mismanaged and wasted. The schools we studied exemplify both. Specifically, our study set out to answer the following research questions:


How were HPSGP monies spent by schools?

On what basis were allocation decisions made, and who made them?

How much flexibility and autonomy did schools have in developing action plans and allocating resources for their implementation?

Given that the HPSGP funding stream is for three years, what is the long-term impact on school improvement?

Do schools invest with an eye to sustainable improvement over the long term, or quick fixes for short-term improvements in test scores?

What changes in teaching and learning can be attributed to the use of HPSGP funds?


The conceptual framework that guides our analysis is what may be called the “improved” school finance. This approach to school resource allocation—as distinct from the “new” school finance—proposes a new framework for understanding the relationship between resources and learning.16 Where the “old” school finance “assumed an unmediated relationship between resources and learning,”17 the new model regards resources as either the facilitators of or inhibitors to teaching and learning. Resources do not cause learning. Instead, they provide the potential for creating systems of teaching and learning. This study builds on that model and proposes that improving schools were those that successfully converted resources into effective instructional regimes.18


Data for this study came from site visits to 15 schools that received HPSGP funding. Of the 15 schools we studied, 10 were high schools and the remainder elementary schools. Eleven of the schools were urban, and four were rural. A detailed description of the sampling design is provided in the next section. The site visits, which took place between February and May 2006, comprised structured interviews with principals, teachers, HPSGP and special program coordinators, and school site council members,19 as well as classroom observations and focus groups. During a two-day visit, at least five people at each school were interviewed in order to elicit participants’ perceptions of the organizational characteristics of the schools and of program resource allocation decisions.


Each interview lasted 45 minutes to 1 hour. The interviews with principals focused on the role of HPSGP funding and the principals’ perspectives on funding priorities, budget approval processes, school improvement strategies, and resource allocation. Interviews with teachers provided information on the coherence of the organization of the schools, including collegiality, collaboration, professional development, and shared goals and objectives of the HPSGP. The interview protocol was developed in a standardized open-ended format to reduce possible influence by the interviewer.20 Most interviews were one-on-one, with the exception of parent groups, the school site council group, and focus groups. The research team interviewed at least five people at each school: one principal, two to three teachers, one parent, and one or two school site council members. Questions (see Appendix A) were asked to determine participants’ perceptions of the organizational characteristics of the schools and of school budget decisions. The interviewees were asked to describe their personal histories, their roles in the school, and the influence that they had over decisions at the school in such areas as resources, curriculum, and evaluation, as well as their interactions with colleagues.


Some of the teachers who participated in the interviews were randomly selected from a roster; other teachers were selected because of their active involvement in school resource decision-making. Interviews with school site council members and parents focused on gaining a better understanding of the organizational structure of the schools and other local influences on decision-making. The purpose of the interviews was to elicit specifics about the process for determining how program funds would be used and how schools allocated those funds over the three years of the program. Each interview was recorded and transcribed verbatim immediately after the interview, and a set of codes was developed to categorize and organize interview responses and observation results.


The data coding and reduction process was supported by the qualitative analysis program ATLAS.ti. The program is known primarily as a conceptual or “theory building” tool because it permits the user to explore answers to specific research questions through its hierarchical, code-based index system; it supports text searching, retrieval, and coding. As a check on the reliability of the data analysis, the interview results were compared with the summary notes that were created at the end of each school visit.


Final analysis of the interviews and observations followed a simple logic. We created a cross-case matrix organized by the primary constructs and, for each school, summed individual, schoolwide, and job-type responses to the items representing a single construct to arrive at a measure of that construct. This allowed us to identify patterns and to examine commonalities and differences across the sample of schools. With respect to theory predictions, the study followed Yin’s approach of “pattern matching” in case study analysis.21 The idea was to review the overall pattern of responses to determine whether it was consistent with theory predictions. Patterns were first compared within each data source—interviews and observations—and then across data sources to strengthen internal validity. For example, coded and categorized interview response tables were compared to assess whether there was consistency across all teacher interviewees and between teachers and the principal. Patterns emerging from interviews, observations, and archival data were then compared with one another to assess consistency. For instance, we had the action plans and budgets that schools submitted to the state as part of their program applications, and we compared those documents against our interview data by respondent type (e.g., teachers, principals, and school site councils).
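To make the cross-case matrix concrete, the following short Python sketch aggregates hypothetical coded interview responses by school and construct. The schools, constructs, roles, and ratings here are invented for illustration only; they are not the study’s instrument, coding scheme, or data.

    # Hypothetical sketch of a cross-case matrix; all values are invented examples.
    from collections import defaultdict

    # Each coded response: (school, respondent role, construct, rating on a 1-5 scale)
    coded_responses = [
        ("School A", "principal", "leadership", 5),
        ("School A", "teacher", "leadership", 4),
        ("School A", "teacher", "coherent goals", 4),
        ("School B", "principal", "leadership", 3),
        ("School B", "teacher", "leadership", 2),
        ("School B", "teacher", "coherent goals", 1),
    ]

    # Group ratings by (school, construct) and average them to fill the matrix cells.
    cells = defaultdict(list)
    for school, role, construct, rating in coded_responses:
        cells[(school, construct)].append(rating)

    matrix = {key: sum(r) / len(r) for key, r in cells.items()}

    # Print the matrix so patterns can be compared across schools ("pattern matching").
    for (school, construct), mean_rating in sorted(matrix.items()):
        print(f"{school}  {construct}: {mean_rating:.1f}")

Comparing the rows of such a matrix across schools, and against observation and archival data, is the kind of within- and across-source pattern comparison the analysis describes.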


The most secure findings are those for which there is a consistent pattern across the majority of respondents and data sources. To reduce threats to internal validity, several techniques were used to establish the trustworthiness and credibility of the results. First, sources were triangulated across interviews, observations, and documents. In addition, peer debriefing was used: the different interviewers were asked to summarize their analyses and interpretations independently at the end of each school visit,22 and their summaries were then compared and discussed until agreement was reached about the interpretation of each school. This triangulation across individuals, methods, and documents served as a further check on the reliability of the data analysis.


The inferences drawn from the results for each construct were pulled together to form meta-inferences at the end of the study; interpretive rigor, especially conceptual consistency and interpretive agreement, is therefore critical. Findings from each school visit were summarized by school, and a cross-case matrix was then created, organized by the primary constructs across schools, to identify patterns and to examine commonalities and differences across the sample of schools (see Appendix B).


SAMPLING DESIGN


In the sampling process, out of the 658 schools that received HPSGP funding, this study selected 285 “pure” HPSGP schools—those that received only HPSGP funding—and excluded schools that received both HPSGP and II/USP funding. After excluding schools without three years of API scores and school characteristics information, our final sample consisted of 211 schools. From those, we selected one group of HPSGP schools that met their designated API growth targets and a second group of schools that did not.23 The California Department of Education assigned traffic lights to identify a school’s AYP status. A green light indicates that the school met its annual measurable objectives (AMOs) and the participation requirements; a yellow light indicates that the school met its AMOs but the percentage of students tested fell short of the requirement; and a red light means that the school did not meet one or more AMOs. Through the AMOs, the state sets annual targets for the percentage of students who must test proficient or above in order for a school to make AYP.


We defined improving schools as those that made AYP goals (green lights) and API growth targets for two or more years during the funding period. We prioritized schools that made growth during the last two years of the funding period, since it takes time to reap the benefits of HPSGP funding. Schools that did not make AYP goals and API growth targets for three or more years we defined as nonimproving. Out of the 285 schools (255 elementary and 30 high schools), 116 schools (104 elementary and 12 high schools) were defined as improving, and 169 schools (151 elementary and 18 high schools) as nonimproving. For the case studies, we selected 15 schools—roughly 5% of the 285 participating schools. Ten improving and five nonimproving schools were selected by school size. While the size of high schools in our sample varies between 800 and 4,000 students, the sizes of elementary schools across our sample are very similar, with total enrollments between 400 and 500 students. Our nonimproving high school sample has one small school (under 1,000 students), one medium school (between 1,000 and 1,500 students), and one large school (over 4,000 students). There are two small, two medium, and two large improving high schools in the sample. Our designated “improving” high schools made, on average, 134 API point gains during the funding period, while nonimproving schools made 43 API point gains. Elementary schools in our sample exhibited similar patterns: improving schools gained, on average, 117 API points, while nonimproving schools lost, on average, 1 API point.
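The classification rule described above can be summarized in a short Python sketch. The record format and field names are assumptions made for illustration, the handling of schools that fit neither stated rule is not specified in the text, and the sketch does not capture the prioritization of growth in the last two funded years.

    # Hypothetical sketch of the improving/nonimproving classification rule.
    # "Improving": met AYP (green light) and the API growth target in 2+ funded years.
    # "Nonimproving": missed both in 3+ years. Record format is an assumption.

    def classify_school(yearly_results):
        # yearly_results: one dict per funded year, e.g.
        # {"met_ayp": True, "met_api_growth": False}
        met_both = sum(1 for y in yearly_results
                       if y["met_ayp"] and y["met_api_growth"])
        missed_both = sum(1 for y in yearly_results
                          if not y["met_ayp"] and not y["met_api_growth"])
        if met_both >= 2:
            return "improving"
        if missed_both >= 3:
            return "nonimproving"
        return "unclassified"  # boundary case; handling is an assumption, not stated in the text

    # Example: a school that met both targets in its last two funded years.
    example = [
        {"met_ayp": False, "met_api_growth": False},  # 2002-03
        {"met_ayp": True, "met_api_growth": True},    # 2003-04
        {"met_ayp": True, "met_api_growth": True},    # 2004-05
    ]
    print(classify_school(example))  # -> improving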


As Table 1 shows, both improving and nonimproving schools are comparable, based on student, teacher, and school characteristics. In most instances, improving schools have a slightly higher percentage of minority students and a higher percentage of students eligible for free or reduced meals.


The data in Table 1 show that the study sample of schools mirrors the general population of Decile 1 schools fairly closely. They also show that Decile 1 schools generally—and it is certainly true for the schools in our sample—have a higher percentage of poor, non-English-speaking students than the average school in the state. Parents of students in Decile 1 schools and in our sample have lower levels of formal education than the state average. Roughly half of the parents in our sample have not completed high school. The average parent education level for all students in the state, on the other hand, is some college. Parent education is particularly important in relation to student achievement because of the high correlation between parent education and student achievement as measured by the API. Nearly 50% of the difference in average school API scores is explained by differences in the average education levels of parents.


Table 1. Comparison of Selected Mean Characteristics of Case Study Schools With All California Schools (2004–2005)

                                      All Decile 1   Improving    Nonimproving   All CA
                                      Schools        HP Schools   HP Schools     Schools
% English learners                         46             40            40           25
% Free meals (1)                           80             72            70           50
% Minority (2)                             90             92            91           64
% Full teacher credential                  88             94            83           94
API base scores 04–05                      --            650           564           --
Average API gain (01–02 to 04–05)          --            166            94           --
Avg. parent education (3)                 2.0             --            --         2.56

1. “Free meals” represents students who are eligible for the free and reduced lunch program; it is a proxy for poverty.

2. The minority category comprises Hispanic and African American.

3. Average parent education is represented by values ranging from 1 to 5, where 1 represents not a high school graduate and 5 represents graduate school.

Source: California Department of Education.


Initially, we thought that it might be possible to divide all HP schools into improving and nonimproving schools by API scores alone. While we classified schools as improving or nonimproving in selecting schools for study, in reality, the classification was not clear-cut. The complicating factor was significant variation among both improving and nonimproving schools. Some nonimproving schools may have missed their targets by only a couple of points or may not have tested enough students. Therefore, to achieve a fair comparison, we incorporated the federal AYP target measures into our definition of improving schools. To establish a clearer distinction between improving and nonimproving schools, we selected those improving schools that showed the highest rates of improvement in the time that they participated in the program and designated nonimproving schools as those that improved the least.24 As a result, our school site visits enabled us to identify some key factors that affected implementation of the HPSGP and some of the key factors determining a school’s improving or nonimproving status.


An important observation from our study is that relying only on changes in API base scores may not be a reliable basis for evaluating a school’s success in meeting sustained improvement goals. As noted, the definition of a “failing” school is simply too elastic to be useful in measuring a school’s progress toward improvement. On the flip side, schools are regarded as “successful” if they make their API target goals. Current definitions of success are also too broad to provide schools, policy makers, or parents with meaningful information. How should one interpret a 10-point gain by a school on the API? What does that mean in terms of student subject matter mastery—not to even speak of the more elusive and difficult-to-measure goals of education? More robust indicators are necessary to assess school quality. Consequently, while not abandoning the API as a measure of improvement, we examined school improvement through a wider lens—the relationship between resource allocation and organizational improvement.


STUDY FINDINGS


THE HPSGP AND SCHOOL RESOURCE ALLOCATION


While just about everyone whom we interviewed in HP schools agreed that program funding “had made a difference,” what the difference was and what it meant varied widely. In some schools, it meant being able to “backfill existing needs.” In these schools, funding was regarded as a windfall to the school to pay for a long list of items that the school could not afford to purchase out of its regular budget. In others, it meant funding new administrative positions; hiring teaching coaches or other supplemental personnel; purchasing computers, software, and instructional materials; supporting a variety of professional development activities, including paying teachers’ costs to attend conferences; buying time for teacher collaboration; contracting for technical assistance; purchasing assessment instruments; and purchasing supplies.


One way in which schools differed in their use of HPSGP resources was whether those resources were integrated into a coherent program of school improvement or simply funded schools’ “wish lists” without any prior plan for improvement. (It is worth remembering that in the initial years of HPSGP funding, the state had a serious budget deficit.) Schools that had already committed to a school improvement strategy generally regarded HP funding as an opportunity to enhance their existing school improvement efforts. Schools whose improvement strategy focused on improving literacy hired literacy coaches to work with each grade level in elementary schools or each department in high schools. Moreover, spending decisions were made collaboratively by all staff. “Wish list” schools, on the other hand, had no coherent strategy for spending HP funds. Money was regarded as an opportunity for these schools to spend on their immediate needs without much regard to the larger goal of school improvement. Some schools had little or no idea of how much money they had from the HPSGP. In some cases, they were given a budget by the district and told to spend until the money ran out. In these schools, decision-making regarding where to spend was highly centralized, and teachers voiced their lack of participation in resource allocation decision-making.


How HP funds were used in a particular school said a great deal about the school’s organizational culture. Schools that were collections of classrooms and teachers with minimal interaction, planning, or collaboration—in short, schools that were organizationally fragmented—used HP monies in fragmented, opportunistic ways. On the other hand, schools with a vision and a coherent plan tended to use funds in a purposeful manner. Another way to understand differences among schools in how they use resources is to locate schools along a continuum, with program spending that is “need-driven” at one end and “goal-driven” at the other.25


Exemplifying the former, one school allocated its HP monies by categories: 25% to technology, 35% to professional development and supplemental instruction, 15% to materials, 5% to improving the school environment, and 20% to administrative services. According to those involved in developing the HP budget, the allocation ratios reflect the need to “give a little to everyone.” The principal was pessimistic about the benefits of additional resources, as he saw that it was more money for “just doing more of the same.” Exemplifying goal-driven schools were those with highly focused strategies for changing teaching and learning. HP funding was allocated to support improvement goals. In the best of circumstances, this meant establishing schoolwide instructional improvement goals based on needs assessments—the difference between current student achievement levels and target levels; developing program- and school-specific strategies for addressing the gap between current and desired achievement levels; determining the resources needed to implement strategies; and evaluating the effects of strategies in meeting desired student achievement goals. The allocation of resources, whether to hire teaching coaches, provide professional development activities, or buy time for planning and collaboration, was school specific and goal focused.


From the schools’ perspectives, there were several recurring issues regarding HP funding. One concerns the timing of funding distribution and the inability of schools to carry over unexpended funds. When funding was not distributed on time before each school year, the recipient schools could not purchase resources in a timely manner; as a result, resources were wasted since they only had a few months in which to spend program funds. Everyone whom we interviewed was concerned about the termination of funding at the end of three years. Several interviewees noted that making decisions about the use of resources for school improvement was not something that teachers, parents, and administrators had much experience doing. Implementing the HPSGP requires schools to develop new decision-making skills about the use of marginal resources to achieve specific education outcomes. In fact, there is very little research on the topic of how well-equipped or qualified school site councils, teachers, and administrators are to make good decisions about the most effective and efficient use of resources.


Schools generally have not had large amounts of discretionary money. The usual practice is for principals to be given their annual budgets by the district office. The amounts are usually not significant and are mostly for supplies and instructional materials. The practice of providing schools with significant discretionary funding for instructional improvement is, for most schools, unknown. In high schools, department heads may have a budget for books or supplies, but those budgets are generally fixed by the district or principal.


Using HP monies for school improvement places entirely new demands on schools. School site councils, administrators, and teachers must be able to conduct needs assessments, develop multiyear improvement goals and strategies, evaluate progress in meeting those goals, and revise strategies as necessary. As noted, to implement HP successfully, schools have to learn new skills and experiment with different strategies until they find those that work. This all takes time. No one person among those whom we interviewed thought that three years was enough time to develop those skills, much less be successful in applying them. Improving schools seemed to recognize this problem and solved it by hiring someone to coordinate and manage the improvement process. In nonimproving schools, there was no organized process for managing improvement; it was part of the principal’s overall responsibility, with minimal oversight from the school site council.


Another question that guided our study concerned the process for deciding how HP monies would be spent. In some instances, spending decisions were made by the principal with the approval of the site-based committee. This tended to occur in schools in which the principal and a handful of people developed the school action plan and budget for HP. The approach was pervasive in those schools that regarded the HP program as a funding opportunity—just another categorical program. In other schools, planning and budget development were the products of ad hoc groups of teachers and administrators—whoever happened to be available and willing to work on the plan. In a large multitrack school, the HPSGP plan was developed by teachers who were not teaching. Consequently, the plan was developed by those teachers who happened to be available and who were interested in taking on additional work during their vacation time. There was no broad-based teacher involvement in the school improvement plan. As the principal noted, it was a document for getting money from the state, not a document for improving student learning.


In the best situations, schools had leadership teams that conducted assessments and worked with a competent external evaluator. In all schools, the planning process was guided by an external evaluator. Consequently, the process for determining how monies are allocated is an important variable. But, like other variables affecting HPSGP implementation, it was important in combination with other variables, not by itself. Schools that appeared to us to be making progress toward developing the capacity for substantial and worthwhile learning were the ones that used leadership teams to develop HP budgets and action plans. They also tended to engage teachers and parents and had ongoing relationships with universities or other providers of technical assistance. The planning process was not just about allocating resources, but also about how to most effectively use resources for school improvement.


As noted, one issue raised by some school-level administrators and teachers concerns the lack of oversight and accountability for program expenditures. As long as schools are meeting their API growth targets, the state assumes that all is well. When they do not meet their growth targets and, say, a SAIT is assigned to the school, there is no review of how HP monies were used. In one school, computers that were purchased with HP funds disappeared, while in others, no one really knew how much money the school had, how it had been spent, or how HP monies were budgeted for the current year. Lack of budget records is particularly evident in schools that had administrative turnover. Some schools simply did not have the budgets for HP for prior years. Others show expenditures that had been charged against HP funds, but no budget to show how those monies had been allocated or how they fit into an overall program of school improvement.


ORGANIZATIONAL FACTORS, RESOURCES, AND SCHOOL IMPROVEMENT AND THEIR RELATIONSHIP TO FUNDING


In this section, we discuss the factors that either facilitated or impeded implementation of the HPSGP and its relevance to resource use. Table 2 contrasts factors that facilitate school improvement with those that impede school improvement. As already noted, these do not map perfectly onto all improving and nonimproving schools, but they do identify general organizational characteristics of improving and nonimproving schools. This section focuses on what we consider to be the most important differentiating features of improving and nonimproving schools.


TRUST


A school is a cultural organization in which relationships, trust, and mutual dependence form an organization’s vital interior structure. Therefore, the organizational characteristics of a school—collegiality, leadership, and the relationships among staff and administrators—determine the culture of the school, which in turn is regarded by some researchers as a critical dimension of organizational capacity.26 A school’s organizational characteristics influence its academic culture by causing it to function and react in particular ways. Some schools may engender a nurturing environment in which children are recognized and treated as individuals; elsewhere, one finds authoritarian structures in which rules are strictly enforced and hierarchical control is strong. Thus, the school’s cultural features construct social capital and condition a school’s capacity to improve. There were clear distinctions in the level of social capital between improving and nonimproving schools among the schools we visited.


Table 2. Factors Facilitating Improvement versus Factors Impeding Improvement

Factors Facilitating Improvement

High degree of social capital and trust
Stable teaching staff
Stable and competent leaders
Focus on developing leadership among teachers
Focus on school as an organization rather than a collection of classrooms
Leadership and vision
Action plan that is a working document
Organizational coherence
Collaboration and professional development
Coherent program funding tied to strategic plan
External support
Ongoing assessment and evaluation

Factors Impeding Improvement

Organizational instability and constant change
Organizational fragmentation and individual isolation
Classroom-centric rather than school-centric focus
High turnover among teachers and administrators
Compromised leadership (e.g., lack of district support, high staff turnover, lack of leadership skills)
Action plan developed for funding purposes and ignored once funding is approved
No coherent or consistent improvement strategy
No commitment to change
Little or no technical assistance or support
Opportunistic and ad hoc program budgeting


James Coleman addresses the manner in which social capital develops around sustained social interactions.27 In schools, social capital creates a community where there is a network built on trust. Social trust in school communities turned out to be a key element to a school’s progress. In one of the improving schools in this study, teachers trust each other enough to invite one another into classrooms to videotape their teaching in order to help improve pedagogical practice. Trust denotes not only interpersonal comfort and respect, but also the feeling of confidence in others: other members will carry out their roles successfully and responsibly in the organization. As was evident in many improving schools, teachers had confidence in their principal’s ability to lead, and, in turn, the principal had confidence in his teachers’ ability to provide substantial and worthwhile instruction. In other words, they trusted that each person would take his job seriously and perform well. According to Bryk and Schneider,28 the presence of trust creates strong social bonds among members and a strong sense of identity with the institution. The findings of this study also show that improving schools create a culture in which teachers think of themselves as a team and a school in which the classroom is an integral part of a larger organizational unit.


Several improving schools in our sample worked hard to build trust among the entire school staff, parents, students, and the community. The school leadership team at one high school, for example, created a student mentorship program. Students in 11th and 12th grade were trained during a two-week summer program to mentor ninth graders who were coming into the school from junior high school. At the beginning of the school year, each mentor worked with a group of ninth graders to ease the transition to high school. Based on the evidence collected by the leadership team, that transition was a critical point for many students, and how it was managed made a difference in how a student navigated the next four years of school. In one district, the superintendent created a leadership program for teachers to provide teachers with the necessary skills for school-level decision-making. In the final analysis, the schools that were on the road to improvement all had a strong sense of community. One high school not only would routinely turn out 1,200 parents for open house, but also got parents actively engaged in planning the event. This was particularly noteworthy since the parents were mostly Hispanic immigrants, many of whom spoke little or no English. It is important to note that the HPSGP did not cause parent and community engagement in the school, nor was it the spark for the student mentoring program. Both existed prior to the school’s participation in the program.


Schools in which there was a high level of trust and social capital evidenced a high level of commitment among teachers, administrators, and parents (and most likely students as well, given that they are the ones who make or break improvement goals). One of the consistent features of improving schools was teachers’ commitment and willingness to spend time working with students before and after school, on Saturdays, or during vacations. In one nonimproving school, the principal could only recruit a handful of teachers—out of a faculty of over 200—to provide instruction to students outside the regular school day even when it meant earning about $1,000 more per month.


ORGANIZATIONAL STABILITY AND CONTINUITY


Without doubt, among the most significant factors facilitating school improvement are organizational stability and continuity. It is especially important in the context of trust. It is impossible to build organizational coherence, community, and trust when there is constant turnover of faculty, administrators, and students. In interviews, teachers, program specialists, and administrators talked about the importance of working together as a team. In one high school in particular—coincidentally, the one with the greatest increase in API scores—teachers talked about how they not only enjoyed working together, but also socializing together. In addition to being colleagues, they considered many of their colleagues to be their friends. It should be noted that this was a fairly young faculty, with five or fewer years of teaching experience on average. Many had gone through the same master’s program in teaching at a nearby university and so knew one another from that program. Even though the school did not have a principal (two individuals served as acting principals), individuals whom we interviewed had a great deal of respect for their competence, dedication, and leadership. They trusted their professional judgments, regarded them as knowledgeable about school improvement, and looked to them for professional support.


The loss of a sense of community and lack of program continuity were quite apparent in a nonimproving high school that had become a SAIT school. The school attendance area had changed within the past five years, and students were bused in from another area because the high school there had been converted to a junior high. According to staff at the receiving school, students did not feel connected to the school. There was a large turnover among students. According to teachers and the principal, high numbers of disciplinary referrals, frequent altercations among students, and a lack of respect for others became routine. When asked what seemed to be the major problems confronting the school, a parent, who chaired the school site council, noted the high turnover among staff and students. She did not think that there was an incentive for teachers to stay in an underperforming school. As she saw it, there was little continuity from one year to the next. She noted rather ruefully (and at times tearfully),


We moved from years of having money—money that was well spent and helped kids—to SAIT, and now it’s as though it [the improvements] never happened. Teachers were excited about the professional development that they got. And now the state is here with its scripted learning. It’s really demoralizing for teachers.


A consistent theme among improving schools was the importance of teachers taking responsibility and leadership for school improvement. This took several forms. One was a sense among teachers that school improvement is a collective responsibility and a cooperative effort. Another was common planning time and professional development activities that were planned by teachers and specific to the school or, in the case of high schools, to the department. Teachers, administrators, and staff working together toward a common goal was a theme that ran through the interviews in improving schools.


LEADERSHIP


The importance of leadership is closely connected to the importance of organizational stability and continuity and social capital in schools. In both improving and nonimproving schools, leadership or lack of it played a central role. Leadership played out in various ways: stability and longevity, expertise, collegiality, and authority. In improving schools, principals had been at the school for a number of years, and generally their tenure at the school preceded the school’s participation in the HPSGP. Some had taught at the school before becoming principal, while others had held various administrative positions.


The relative longevity of principals in improving schools contrasted dramatically with the rapid turnover of principals and other administrative staff in nonimproving schools. The most egregious case of leadership instability was in a school that, over a 30-year period, had had only one principal who remained in the position for more than two years. That principal lasted three years and was demoted by the board in the middle of her fourth year. The constant turnover at the school, moreover, mirrored the turnover of superintendents at the district. The school board hired and fired principals at will, just as it did superintendents. In another district, a school had had six principals in eight years. That it was a huge school with year-round attendance tracks made the need for stability and continuity even more acute.


Teachers and administrators in schools and districts with high administrative turnover and dysfunctional district leadership were generally operating in survival mode. As a result, there was little interest in focusing on school improvement and little opportunity to do so. In such schools, most teachers soldiered on as best they could, but were demoralized and seeking other jobs. In another school, teachers complained about the high level of fragmentation. Programs would be created but never implemented. Money for programs was nonexistent, and teachers had no idea what funds might be available for school improvement. According to teachers at one school, most school improvement programs like Advancement Via Individual Determination (AVID) existed only on paper, not in reality. In this particular district, HPSGP funds were controlled by the district and, until our interview, the principal had no idea that there even was such a program at the school.


Leadership is about more than just continuity and stability. The principal’s ability to help shape a school’s vision for reform, guide development of a strategic plan, and elicit cooperation and support from the school community is another significant factor. The extent to which the school remained faithful to the goals guiding the action plan was largely attributable to the principal’s leadership. While the principal’s role in managing improvement within the school is important, so is the principal’s role in connecting the school to the community. In a rural elementary school that serves largely Latino children, the principal stressed the importance of providing leadership to the community. As a Latina, she emphasized the importance of being “a role model for girls so that they can see that they can have professional careers.” In a school where 63% of students are English learners and 100% are eligible for the free lunch program, the principal believed that an important aspect of her job was to make the school a “community place.” She herself knew the names of “99%” of the students. She made it a point to know students’ families and to have dinner with them.


Leadership was the glue that held school improvement efforts together. It was the principal who helped shape a vision for school improvement, kept the school on track and focused, mobilized the necessary resources, and generally helped shape the school’s culture. Teachers in improving schools consistently stressed the importance of leadership, praising their principal for his or her dedication, hard work, and commitment to improvement. Among the schools with the greatest improvement, teachers readily acknowledged the critical role played by the principal. “It would not have happened without her” and “her leadership and dedication were what has made the school successful” were typical responses to questions about the principal’s role.


THE ACTION PLAN


All schools had to develop action plans that detailed their reform strategies over the course of the HPSGP. The plan had to be approved by the school site council. Regulations related to the HPSGP required schools to contract with an “external evaluator,” who helped the school develop its plan. Schools were required to report annually to the California Department of Education on their progress in meeting improvement goals that they had established in their plans. In addition, schools had to develop a budget that showed how program funds were connected to specific improvement strategies. On this dimension of the HPSGP as well, there were significant differences between improving and nonimproving schools. The main difference was in what the document represented for the school.


In improving schools, the action plan tended to be a “living” document, one that mapped a strategy for school improvement. Action plans and the program budgets that supported those plans were reviewed on a regular basis. Plans in these schools stated measurable school improvement goals and benchmarks—based on state content and performance standards—that could be used to measure a school’s progress toward meeting its goals. If needed, strategies were changed and resources reallocated in order to meet improvement targets. The overall vision did not change, however; what changed were the specific activities. In schools in which improving student literacy was central to improving achievement, for instance, the school might change specific professional development activities or focus on different kinds of supplemental support if initial improvement goals had not been met. The focus on literacy as a school improvement goal, however, remained constant.


In nonimproving schools, improvement goals tended to be fragmented and expressed as disparate programs or activities. One school, for instance, spent most of its HPSGP funds in the first year on hand-held computers that students could use to help them with homework. For various reasons (according to some teachers, students simply got bored with them), the computers were not used, and the program was abandoned after the initial year. In nonimproving schools, action plans not only lacked a coherent, articulated strategy for improvement, but also tended to be ignored after their initial submission for funding. A principal in one school knew nothing about the school plan or about the HPSGP; the principal learned of the program only two days before our site visit, having been in the position for six months after taking over from the previous principal, who was demoted to vice principal. In other schools, the action plan had nothing to do with the school’s improvement efforts or its budget for the HPSGP. It was submitted with the application and then shelved. The major difference between improving and nonimproving schools regarding their action plans was that in improving schools, the action plan was exactly what it was meant to be—a strategic plan for charting the course of school improvement. In one school in particular, faculty were surveyed each year to evaluate the school’s success in meeting the goals in the action plan. If there were deficiencies in the action plan, teachers were asked how those might be remedied. In nonimproving schools, the action plan existed principally to qualify for state funds and to satisfy compliance with state requirements.


How the action plan served as a document guiding improvement efforts was illustrated by several of the improving schools. In one school, all teachers and administrators were surveyed each year to assess the school’s progress toward meeting the goals specified in the action plan. Teachers and administrators were asked to rate how well the school had met its various instructional objectives. Teachers were also asked to comment on the objectives that had not been achieved and what could be done to achieve them. The responses were discussed within each of the departments and schoolwide. The next step was to revise strategies. According to the interviewees, the object of this exercise was not to change school improvement goals, but to revise the strategies to attain them. In another improving school, there was also a structured annual process for reviewing the action plan. It was not as elaborate and detailed as the survey, but the school was much smaller—800 students as opposed to 2,400—and for that reason, the school could get by with a less formal planning process. In the smaller, improving school, faculty, administrators, and the school advisory committee, which included parents from the site council, met on a regular basis to review the plan. The centrality of the action plan was strengthened in both of these schools by the fact that the principals had been at the schools for over eight years (in one school, the principal had been in the position for only four years but had been at the school as a curriculum specialist for more than 10). In the nonimproving schools, there was no plan to guide school decision-making. In one of the worst cases, there was no operational plan at all; the HPSGP was invisible. Because of the change in principals, prior plans that had existed had been abandoned. Teachers whom we interviewed noted with some frustration the lack of program continuity. In another nonimproving school, there was a plan, and the school leadership team was quite focused on it, but the plan lacked support among teachers and certainly among students. In one of the most promising improving schools, on the other hand, everyone, including students, was aware of the school’s improvement goals. In a biology class that we visited, the principal asked students what “the goal” was, and they all responded, in unison and without prompting, “proficiency.”


COLLABORATION AND PROFESSIONAL DEVELOPMENT


One of the most striking features of improving schools is their attention to collaboration among teachers. In improving schools, it was a singular focus among teachers at each grade level or department level. To facilitate collaboration, school schedules were changed to leave a portion of one day each week for various activities, such as program planning or peer coaching. In other instances, teachers would be paid to participate in various school-organized workshops held either on Saturdays or during summer break—often both. In improving schools, planning, strategizing, and evaluating activities were fixed features of a school’s regular schedule. They were ongoing and focused on the school’s improvement goals. Similarly, professional development in improving schools was not generic—that is, provided by the district, county, or some other provider—but school specific. Professional development activities were an integral component of a school’s improvement goals.


Professional development activities in improving schools took various forms because they were tailored to school needs. Most common were peer and literacy coaching. It is worth noting that all improving schools placed considerable emphasis on literacy and writing (this is unsurprising, of course, since this is what is tested). Most often, schools would hire literacy coaches to work on a regular, full-time basis with teachers across all subject areas, with the goal of improving students’ reading and writing skills. One high school had instituted a program of reading and writing across the curriculum so that students in all courses had regular reading and writing assignments. This included physical education courses: in dance classes, students read works of literature, discussed themes, movement, tone, and the like in those works, and considered how they might relate to choreography. In biology, students were taught to use the Cornell method of note-taking. In another school, peer coaching took the form of teams of four teachers engaged in yearlong activities. They would meet regularly to discuss teaching and learning strategies, videotape one another’s classes to observe instructional strategies, and debrief on what they had learned from observing one another’s teaching.


In nonimproving schools, professional development was quite different. The most pronounced difference was the generic nature of those activities. Generally, they tended to be district sponsored, focusing on broad issues related to state content standards and assessments. Both principals and teachers participated in state-specified and funded professional development that focused on aligning instruction with state standards. While standards alignment is important, it does not necessarily connect to higher student achievement. In nonimproving schools, it was most often teachers, on an individual basis, who decided what professional development activities to attend. One nonimproving school, for instance, allocated part of its HPSGP budget for teachers to attend conferences on gifted and talented education programs.


EXTERNAL SUPPORT


One of the chief factors facilitating school improvement among schools in our study was an ongoing relationship between the school and an external agency. The strongest and most enduring relationships were between the school and a university. One school had a seven-year collaborative relationship with a school of education on one of the California State University campuses. The school’s participation in the HPSGP was initiated by its university mentor, and university personnel assisted in the proposal’s development and implementation. The university conducted needs assessments and helped the school identify the resources necessary to meet its improvement goals, develop program priorities, and write an action plan. The collaboration preceded the HPSGP and extends beyond it. The university partners with the school in its teacher training program. Students in the teacher credentialing program are placed in the school for their practice teaching experience. In turn, new faculty are hired from this pool.


Another school had a close relationship with one of the University of California campuses. At the time of our study, the school had been open for about six years. Most of the teachers were young, and most had recently completed the master’s program in teaching at the university. The university, in turn, relied on the school for its teacher training program. Teachers whom we interviewed all regarded the ongoing relationship with the university as a key feature of school improvement. The relationship also provided teachers with a professional anchor—a way to stay in touch with educational issues and problems beyond the immediate school setting.


In some instances, the external evaluator provided mentorship and technical assistance to the school, but the role of the external evaluator as a source of support was quite uneven among our study schools. Some external evaluators helped schools conduct needs assessments, assisted them with data analysis, guided development of school action plans, and continued to work with the school over the course of the HPSGP. Others would provide what seemed like an off-the-shelf action plan (in one instance, the external evaluator had not bothered to change the name of the school) and had little or no ongoing engagement or rapport with the school.


The specific source of technical assistance (whether from the district, a university, a nonprofit organization such as WestEd, or a private consultant) did not appear to be significant. What did matter was the level of engagement and the nature of the relationship with the school and its faculty. As discussed earlier, low-performing schools have tremendous challenges to overcome in order to improve. They need assistance from an outside source that is willing to take the time to understand a school’s problems and to invest the energy needed to develop strategies for overcoming them.


It is evident from the case studies that factors like organizational stability and continuity, leadership, a strategic school improvement plan, professional development, and external support are closely related to the use of HPSGP resources. Schools in which there was little stability or program continuity (high turnover) had little chance of effecting a long-term plan for change.


CONCLUSIONS AND RECOMMENDATIONS


Based on our study schools, the HPSGP was not the predictable catalyst for school improvement that policy makers had intended it to be. While some schools were able to benefit from the program and regarded it as an opportunity to transform the school into an effective organization that serves the educational needs of its students, others regarded the program as a financial windfall and a source of discretionary money. The difference in how the program was regarded is largely attributable to the commitment that teachers and administrators in the school made to school improvement. Such commitment in turn was influenced by a variety of organizational factors, including leadership, stability, trust, collaboration and planning, and instructional program coherence. The fact that these features of schools turned out to be significant in determining the effects of reform efforts may not be news to those who are familiar with the research on effective schools. What is new from this study, however, is the finding that effective instructional regimes cannot be created simply by an infusion of new money and mandated school improvement plans. Money and a school improvement plan alone do not cause school improvement, nor is school improvement predictably linked to how money is spent. Investing in literacy or math coaches, professional development, or extended learning time, for instance, does not lead predictably to better outcomes than spending on smaller classes, small schools, or computers.


A key feature distinguishing schools in our study was their reason for participating in the HPSGP. As noted, some participated because they simply saw it as an opportunity for more money. Some interviewees frankly admitted that the HP funds were the only source of “new” money available to schools, and they applied just for that reason. For these schools, program participation was opportunistic. They tended to be the schools that exercised little accountability over HPSGP funds, shelved the action plan after it was written, and used funds as spending needs arose, without any systematic plan. Other schools applied because they were told to do so by district administrators.


At the other end of the continuum were schools that viewed HP funds as an integral part of their vision for school improvement. These schools were thoughtful, purposive, and strategic in their use of HPSGP funds. They worked at developing a coherent, integrated school improvement plan, which they revisited on a regular basis. For them, the HPSGP was not just a new revenue source, but another way to engage teachers, administrators, students, and parents in a process of school improvement.


What, then, are the lessons to be drawn from the HPSGP? What kinds of policy strategies might policy makers consider to improve teaching and learning in the state’s low-performing schools? In the short term, there are several modifications they may want to consider.


OVERSIGHT AND ACCOUNTABILITY


Nearly all individuals whom we interviewed argued for greater external oversight and accountability for schools’ and districts’ use of HPSGP funds. While HPSGP funds are intended to flow to schools, some of those we interviewed complained that the funds were controlled by their district. At the school level, there was little accountability for how schools spent funds once they received them. Schools that were committed to a reform agenda used the funds as they had proposed in their school action plans and reviewed those plans at least annually to see what modifications were needed. In those schools, program expenditures were guided by an improvement plan. In principle, the school site council was regarded as an effective means of oversight; in practice, however, site councils often had little or no knowledge of the HPSGP beyond the fact that it provided a resource stream to the school.


What is missing from the state’s reform arsenal is a more robust system of school oversight, accountability, and technical assistance. While the state tracks improvement in HPSGP schools by means of their API scores, it does not monitor how the HPSGP is implemented. Did schools stick with the proposed programs? There was no ongoing monitoring or oversight by the state. Once schools received the program funds, state officials assumed that they would faithfully implement the program as proposed in their funding request. The assumption turned out to be based more on faith than experience: as our data showed, there was, in a number of schools, considerable slippage between what schools proposed and what they did.


DURATION OF PROGRAM FUNDING


Three years is not sufficient time for most schools, especially Decile 1 (lowest performing) schools, to develop the skill and capacity to successfully implement the HPSGP. Some of our interviewees suggested that the additional funding per pupil that schools received over three years should be spread over a five- or seven-year period. As noted, some schools needed three years just to develop the organizational skills to conduct needs assessments, identify student learning needs, determine which resources would be most effective in addressing those learning needs, establish goals and objectives for the use of those resources, measure progress toward meeting those goals, and make changes as needed if goals are not met. Individuals in most schools simply lack the skills to engage in these activities with predictable rates of success. In James March’s terms, most schools lack the organizational intelligence to undertake those tasks. This does not mean that they cannot develop the necessary intelligence, but as March would argue, organizations need time to learn.29 The current policy of funding Decile 1 schools for three years assumes that whatever problems Decile 1 schools face will go away after three years. However, as long as Decile 1 schools continue to serve predominantly poor, non-English-speaking minority students, half of whose parents do not have a high school education, the problem will persist. In many of these schools, HPSGP funds are used to purchase supplemental services like tutoring, time for collaboration and planning, teacher support, and the like. These are ongoing needs that persist beyond the three years of funding that schools are given.


REDEFINE THE PROBLEM


Policy makers need to rethink their current approach to fixing low-performing schools. Some argue that the current structure of the HPSGP simply rewards failing schools. Critics argue that many schools and districts in the state serve student populations similar to those of HPSGP schools but have higher levels of student achievement, and therefore receive no additional funds. Program proponents argue that Decile 1 schools have been consistently underfunded and therefore need the additional resources.


Our primary recommendation is to shift attention from Decile 1 or low-performing schools to schools that serve student populations that mirror those of Decile 1 schools—a high percentage of low-SES students. This group of schools represents a unique set of policy problems: the schools face greater challenges and need more assistance than the average school. In addition to financial and human resources, they need technical assistance and mentorship. The demographics of Decile 1 schools are pronouncedly different from those of the average school in the state. These differences will not go away after three years. To serve those students well, schools need more resources and more support than the average school. Consequently, state strategies to improve low-performing schools need to go beyond fixing the HPSGP and address how schools like the Decile 1 schools should be funded and supported. The important policy question is not how to fix low-performing schools, but what state policy can do for schools that serve large numbers of educationally and economically disadvantaged students. It is a more complex and politically difficult problem than making adjustments around the edges of the current program. Its solution touches on the structure of the school finance system and the system of governance. But in the long run, it is a problem that needs to be addressed.


Our study also suggests that school reformers need to think differently about the relationship between resources and outcomes. To be sure, money buys inputs—teachers, instructional materials, professional development—but inputs only provide the potential for improving educational outcomes. It is how those inputs are transformed into instructional regimes that matters.30 One way of facilitating that transformation is to monetize the things that matter to school improvement—stability and continuity, and leadership, for instance.31 Current salary structures for teachers reward years of teaching and academic credits and degrees earned—neither of which can be shown to have predictable effects on student achievement. They do not, however, reward teachers who are willing to teach in schools that serve predominantly at-risk children. While leadership, staff motivation, effective planning, and the use of time are resources that matter for education, they are not connected to the dollars that are spent on education. The salaries of principals with good leadership skills are generally no different from those of principals with poor leadership skills. Similarly, teachers in schools where staff motivation is high, use of time is appropriate, and high-quality education is the norm are not compensated for those things. While teacher and staff stability, collegial interactions among faculty, and collective commitment to vision and strategy have all been identified as relevant inputs for effective schools, our finance systems are not structured to support these kinds of organizational characteristics. When they do occur, they occur by chance, not by design. Finance is rarely used purposively to reinforce desirable organizational behaviors.


In addition to a finance system that incentivizes desirable behaviors, teachers and administrators also need authority over hiring and firing. Schools need the flexibility and autonomy to build an instructional team that supports school goals and objectives. One of the clear features of our improving schools was the broad commitment by all staff to common, agreed-on goals. Under the current system, a majority of teachers may commit to specific goals and objectives and to specific strategies to attain them. However, teachers who do not agree with those goals and strategies cannot be compelled to adopt them. They can continue to pursue their own vision of education using whatever means they believe to be most appropriate. It is difficult to imagine another profession that tolerates that kind of individual autonomy within an organization. While individual teachers may have considerable autonomy, school administrators do not. In California, they have little control over general resource allocation because of the state’s funding system. They must also live with collective bargaining agreements that are made at the district level.


This study of state efforts to improve teaching and learning in low-performing schools suggests that neither marginal resources, nor outside consultants, nor school improvement plans are sufficient to produce meaningful change. Simply buying more equipment, hiring a few math or language arts coaches, and providing before- or after-school programs will not, by themselves, have any sustained impact on school improvement. Policy makers, however, continue to focus on these kinds of improvement strategies (class size reduction, for instance) because they are easy to design, easy to implement, and easily understood by the public.


The purpose of our study was to find out how low-performing schools used targeted marginal resources for school improvement. Would additional dollars be just “noise in the system,” or would they be used to effect ongoing improvement in teaching and learning in participating schools? Is it reasonable to assume that marginal resources could have a lasting effect on the quality of teaching and learning in schools? What we found was that it was not the things that new money could buy that mattered; what seemed to matter instead were qualitative differences in organizational behavior—the organizational conditions that allowed schools to deploy resources effectively. Bringing that about will not happen with new programs; it will require a fundamental reformulation of state and local responsibilities and the creation of an incentive system that leads to high-quality schools.


Acknowledgments


The study of California’s High Priority Schools Grant Program (HPSGP) was supported by a grant from the Bill and Melinda Gates Foundation. The study was conducted in collaboration with the American Institutes for Research (AIR). The collaboration benefited greatly from AIR’s generous sharing of their interview protocols and study findings and from their assistance in arranging the school site visits. We are especially indebted to Jenifer Harr, Tom Parrish, and Paul Gubbins for their collaboration and support. The authors assume full responsibility for the observations and conclusions in this article.


Notes


1. The scale is from 200 to 1,000.

2. The actual number of students is understated. While 94% of students in Decile 1–3 schools were tested, in some schools, mostly high schools, just over 50% of students were tested. The percent of students eligible for free lunch is probably also understated since some students, mostly in high schools, either do not eat in the school cafeteria or do not admit that they are eligible.

3. J. O’Day and C. Bitter, Evaluation Study of the Immediate Intervention/Underperforming Schools Program and the High Achieving/Improving Schools Program of the Public Schools Accountability Act of 1999 (Palo Alto, CA: American Institutes for Research, 2003).

4. M. Yudof, D. Kirp, B. Levin, and R. Moran, Educational Policy and the Law (Belmont, CA: Wadsworth Group/Thomson Learning, 1992), 699.

5. F. Wirt and M. Kirst, The Political Dynamics of American Education (Richmond, CA: McCutchan, 2001).

6. D. Kirp, “Introduction: The Fourth R,” in School Days, Rule Days, ed. D. Kirp and D. Jensen (Philadelphia: Falmer Press, 1986), 1–17.

7. T. Timar, “Federal Education Policy and Practice: Building Educational Capacity through Title I,” Educational Evaluation and Policy Analysis 16, no. 1 (1994): 51–66.

8. See http://www.cde.ca.gov/ta/lp/hp/resources.asp.

9. See California Education Code Section 52055.650(1)(A).

10. C. Bidwell, “The School as a Formal Organization,” in The Handbook of Organizations, ed. James G. March (Chicago: Rand McNally, 1965), 972–1022.

11. R. Scott, Organizations: Rational, Natural, and Open Systems, 3rd ed. (Englewood Cliffs, NJ: Simon & Schuster, 1992). Also J. Meyer and R. Scott, Organizational Environments: Ritual and Rationality (Beverly Hills, CA: Sage, 1983).

12. T. Sergiovanni and F. D. Carver, The New School Executive: A Theory of Administration (New York: Harper and Row, 1980).

13. M. Cohen and L. Sproull, eds., Organizational Learning (Thousand Oaks, CA: Sage, 1996). March differentiates, for instance, between organizational “exploitation” and organizational “exploration.” Exploitation essentially means doing what you have been doing, particularly if you are doing it successfully. Exploration, on the other hand, means finding new directions and new ways of doing things in order to be successful. See James G. March, The Pursuit of Organizational Intelligence (Malden, MA: Blackwell Business, 1999).

14. Several evaluations by the American Institutes for Research of the II/USP and HPSGP have shown that the programs have either no impact or insignificant impact on student achievement. See C. Bitter et al., Evaluation Study of the Immediate Intervention/Underperforming Schools Program of Public Schools Accountability Act of 1999 (Palo Alto, CA: American Institutes for Research, 2005). Also C. Bitter and J. O’Day, “California’s Accountability System,” in Crucial Issues in California Education 2006: Rekindling Reform, ed. H. Hatami (Berkeley: Policy Analysis for California Education, School of Education, University of California, 2006), 51–74.

15. See J. Harr, T. Parrish, M. Socias, P. Gubbins, and A. Spain, Evaluation Study of California’s High Priority Schools Grant Program: Year 1 (Palo Alto, CA: American Institutes for Research, 2006).

16. See D. Cohen, S. Raudenbush, and D. Loewenberg Ball, “Resources, Instruction, and Research,” Educational Evaluation and Policy Analysis 25, no. 2 (2003): 119–42. See also W. N. Grubb, “Multiple Resources, Multiple Outcomes: Testing the ‘Improved’ School Finance with NELS88” (Unpublished manuscript, School of Education, University of California, Berkeley, 2006); W. N. Grubb, “What Should Be Equalized? Litigation, Equity, and the ‘Improved School Finance’” (Paper prepared for the Earl Warren Institute on Race, Ethnicity, and Diversity Project on “Rethinking Rodriguez: Education as a Fundamental Right,” University of California, Berkeley, 2006); W. N. Grubb and L. Huerta, Straw into Gold, Resources into Results: Spinning Out the Implications of the “New” School Finance (Unpublished manuscript, University of California, Berkeley, 2000).

17. Grubb, “Multiple Resources,” 13.

18. Cohen et al., “Resources, Instruction, and Research.”

19. Each school in California is required to have a school site council. The council comprises parents of children in the school, teachers, and administrators. School site councils had to sign off on the school’s HPSGP plan and its budget. Site councils also had to approve any changes in plans or budgets.

20. E. Patton and S. H. Appelbaum, “The Case for Case Studies in Management Research,” Management Research News 26, no. 5 (2003): 60–71.

21. R. Yin, Case Study Research: Design and Methods, 2nd ed. (Beverly Hills, CA: Sage, 1994).

22. A. Tashakkori and C. Teddlie, Mixed Methods in Social Science and Behavioral Research (Thousand Oaks, CA: Sage, 2003).

23. Adequate progress for state accountability purposes is 5% growth schoolwide and by subgroups toward the state-designated API goal of 800 (a school with an API of 600, for example, would need to gain 5% of the 200-point gap, or 10 points). Selected schools were also those that met AYP growth targets under No Child Left Behind.

24. In selecting sample schools, we did not use cutoff points for “improving” and “nonimproving.” Among improving schools, we looked for those that had made the largest API gains among all schools that we had classified as improving. Among nonimproving schools, we selected those that had made the smallest (or, in some instances, negative) gains in API scores. Our ability to select on this basis alone was somewhat limited by the need to achieve variation in location and type of school. We did not want a sample of study schools that were all urban, or all rural with mostly children of migrant farm workers.

25. For a discussion of the theoretical framework for ad hoc/need-driven funding on the one hand and goal-driven funding on the other, see Cohen et al., “Resources, Instruction, and Research.”

26. A. S. Bryk and B. Schneider, Trust in Schools: A Core Resource for Improvement (New York: Russell Sage Foundation, 2002).

27. J. S. Coleman, “Social Capital and the Creation of Human Capital,” American Journal of Sociology 94, suppl. (1988): S95–S120.

28. Bryk and Schneider, Trust in Schools.

29. March, The Pursuit of Organizational Intelligence.

30. Cohen et al., “Resources, Instruction, and Research.”

31. Grubb, “Multiple Resources, Multiple Outcomes.” Also T. Timar and M. Roza, “A False Dilemma: Should Decisions about Education Resource Use Be Made at the State or Local Level?” Connecting the Dots and Closing the Gap (Papers prepared for CA State Superintendent Jack O’Connell’s P-16 Council Symposium on Closing the Achievement Gap, Center for Applied Policy in Education, University of California, Davis, May 18, 2008). Paper available at http://cap-ed.ucdavis.edu.


APPENDIX A: SCHOOL AND DISTRICT INTERVIEW PROTOCOLS


Since three-fourths of the case studies were done in collaboration with the American Institutes for Research, we used the protocols that they had developed for the overall evaluation of the HPSGP. Because our interests focused on resource use, we augmented their protocols with our own. We especially wish to acknowledge the generous assistance of Jenifer Harr and Tom Parrish, who directed the AIR evaluation.


SELECTED TEACHER PROTOCOLS


1.

Could you describe to me what you would consider funding priorities in your school? Did HP help in achieving those priorities?


3.

I would like to call your attention to the HP budget that the school submitted with its action plan.

Probes: make sure you ask these. They are very important.


How were funding priorities determined?

How would you describe funding priorities: professional development, instructional materials, longer school day, etc.?

How much debate was there over how monies should be allocated?

On balance, would you say the program’s focus was on improving test scores in the short run, or on improving teaching and learning (capacity building) over the long run?

Did you feel a tension between test score improvement and school improvement in determining resource allocation?

What was the budget approval process?

How often over the course of three years did you revise the initial budget?

If you revised it, why did you?

How would you assess the impact of the funding from HP on short- and long-term school improvement?

4.

What role is the action plan playing in your school? Have your goals and strategies changed since you developed your action plan?

How did those changes affect the budget for HP?


5.

What happens to school improvement when the funding for HP comes to an end?


6.

In retrospect, if it had been entirely up to you how to allocate HP funds, would you have allocated those monies differently?



7.

What has been the role of the district in efforts to improve this school? More specifically, what has been the role of the district in the HPSGP?

Probes:

What resources has the district provided to improve student performance at this school?


1.

Was the action plan implemented the way it was intended? If not, why not?


2.

What do you think the strengths and weaknesses of the HPSGP are as they relate to funding? Policy makers in California will read this evaluation, so this interview is a chance for you to share your thoughts (anonymously, of course). What advice would you give to either state policy makers or district administrators about the program, and particularly its funding?


SELECTED PRINCIPAL PROTOCOLS


3.

We understand that you received $50,000 as a planning grant in your first year of participation in the HPSGP. How was that used?


4.

The HPSGP provides $400 per pupil in state money and requires $200 in local matching money. Did your school actually receive $600 per pupil?

Probes:

If not the full $600, how much per pupil did the school actually receive?


1.

Do you have other funds besides HPSG funds for school improvement?


2.

What is the source of matching ($200/pupil) funds?


3.

Do the requirements for use of other discretionary funds support or conflict with the HPSGP?


4.

Could you describe to me what you would consider funding priorities in your school?


5.

Do HPSG funds allow you to fund those priorities?


6.

I would like to call your attention to the HP budget that the school submitted with its action plan.

Probes: make sure you ask these. They are very important.


How were funding priorities determined?

How would you describe funding priorities: professional development, instructional materials, longer school day, etc.?

How much debate was there over how monies should be allocated?

On balance, would you say the program’s focus was on improving test scores in the short run, or on improving teaching and learning (capacity building) over the long run?

Did you feel a tension between test score improvement and school improvement in determining resource allocation?

What was the budget approval process?

How often over the course of three years did you revise the initial budget?

If you revised it, why did you?

How would you assess the impact of the funding from HP on short- and long-term school improvement?


7.

What did you expect that HP funding would allow you to do for improving teaching and learning that you could not do before?


APPENDIX B: TRIANGULATION METHODOLOGY


The following is an example of software triangulation and coding of “collaboration.” For each case study, transcribed interviews were searched for use of the word “collaboration.” From that, it is possible to identify differences among schools in how collaboration was perceived by interviewees; a brief sketch of how such a keyword search might be carried out in software follows the excerpts below.


One I can really see in this group [of teachers] from being a teacher to now as associate principal [is that] teachers are highly collaborative with each other. I’ve never seen organized collaboration this much. They voluntarily call meetings. (Lincoln, Improved School 1, AP 1, 2006)


Collaboration is departmental collaboration, school-wide, data analyses, and interdisciplinary collaboration. Teachers bring student work and students to present about what they got out of it, just to encourage communication across departments. (Lincoln, Improved School 1, AP 2, 2006)


We have district collaboration by department every 2 months. A whole day, we work together, and it’s extremely helpful to bounce off ideas and learn from each other. (Lincoln, Improved School 1, Teacher 2, 2006)


In our school, teachers collaborate so much—they do this on their own. They work as a whole group. It’s amazing—new teachers buy into it right away. (Lincoln, Improved School 1, Teacher 3, 2006)
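For readers who want to see the mechanics of this kind of keyword-based coding, the sketch below illustrates the general approach of searching transcribed interviews for a term such as “collaboration” and grouping the matching passages by school and interviewee. It is an illustrative sketch only, not the software used in the study; the directory layout and file-naming convention (one plain-text transcript per interviewee, named by school and role) are assumptions made for the example.

```python
import re
from collections import defaultdict
from pathlib import Path

# Assumed layout (illustrative only): transcripts/<School>__<Interviewee>.txt
TRANSCRIPT_DIR = Path("transcripts")
KEYWORD = "collaboration"


def find_keyword_passages(text, keyword):
    """Return the sentences in `text` that mention `keyword` (case-insensitive)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if keyword.lower() in s.lower()]


def code_transcripts(directory, keyword):
    """Group keyword passages by school, keyed by interviewee."""
    by_school = defaultdict(dict)
    for path in sorted(directory.glob("*.txt")):
        school, _, interviewee = path.stem.partition("__")
        passages = find_keyword_passages(path.read_text(encoding="utf-8"), keyword)
        if passages:
            by_school[school][interviewee] = passages
    return by_school


if __name__ == "__main__":
    results = code_transcripts(TRANSCRIPT_DIR, KEYWORD)
    for school, interviews in sorted(results.items()):
        total = sum(len(p) for p in interviews.values())
        print(f"\n{school}: {total} passages mentioning '{KEYWORD}'")
        for interviewee, passages in interviews.items():
            for passage in passages:
                print(f"  ({interviewee}) {passage}")
```

The passages printed for each school can then be compared side by side, as in the excerpts above, to see how interviewees at improving and nonimproving schools talk about collaboration.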




Cite This Article as: Teachers College Record Volume 112 Number 7, 2010, p. 1897-1936
https://www.tcrecord.org ID Number: 15920

About the Author
  • Thomas Timar
    University of California, Davis
    THOMAS TIMAR is professor of education policy and director of the Center for Applied Policy in Education at the University of California at Davis. His areas of specialization are politics and policy, finance, and governance. Recent articles include “A False Dilemma: Should Decisions About Resource Allocation Be Made at the State or Local Level?” in The American Journal of Education (May 2010); “Exploring the Limits of Entitlement: Williams v. the State of California,” in Peabody Journal of Education, 80(3); and “School Governance in California: Shaping the Landscape of Equity and Adequacy” in Teachers College Record, 106(11).
  • Kris Chyu
    University of California, Los Angeles
    KRIS KIM CHYU recently completed her PhD in education policy at the Graduate School of Education and Information Studies at the University of California, Los Angeles. Her research interests focus on policy analysis and implementation, finance, school organization, and school leadership.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS