Popular but Unstable: Explaining Why State Performance Funding Systems in the United States Often Do Not Persist


by Kevin J. Dougherty, Rebecca S. Natow & Blanca E. Vega - 2012

Background/Context: Performance funding in higher education ties government funding to institutional performance on indicators such as retention, graduation, and job placement. Performance funding can also be found in state K-12 funding policies and higher education quality assurance programs abroad. One of the puzzles about higher education performance funding is that half of the states establishing it later abandoned it.

Purpose/Objective/Research Question/Focus of Study: This study examines the factors that have led many states to drop performance funding for higher education.

Research Design: This qualitative case study contrasts the experiences of three states that dropped performance funding in whole or in part (Missouri, Washington, and Florida) with that of a fourth state (Tennessee) that has retained it for more than 30 years.

Discussion: Our analysis is based on documentary records and extensive interviews with higher education officials, legislators and staff, governors and advisors, business leaders, minority group leaders, researchers, and outside consultants. Our findings concur with but also go beyond prior analyses of the demise of state performance funding systems. We concur that higher education opposition played a key role in this demise, stimulated by a perception of inadequate consultation with higher education institutions, use of performance indicators that institutions found invalid, high implementation costs to institutions, and erosion of campus autonomy. At the same time, our analysis turned up other causes of higher education opposition to performance funding that were not discovered by previous studies. A major cause of higher education opposition was the downturn in state finances in the early 2000s, which led institutions to concentrate on preserving their core state funding and to let performance funding go. Higher education opposition was also provoked when performance funding took the form not of adding to existing state funding but of holding back a portion of the state appropriation and requiring institutions to earn it back through improved performance. These findings both converge with and diverge from theories and findings in the research literatures on policy termination and program sustainability.

Conclusions/Recommendations: If its advocates are to create a sustainable basis for state performance funding, they must find ways to insulate its funding from the ups and downs of the state revenue cycle, better secure the support of public institutions, and expand its breadth of political support by reaching out, for example, to business and to social groups driven primarily by the values of educational equality rather than educational efficiency.

Over the past three decades, policymakers have become concerned about finding ways to secure better performance from higher education institutions, whether in the form of greater access and success for less advantaged students, lower operating costs, or improved responsiveness to the needs of state and local economies. As a result, great effort has gone into designing incentives for improved college performance. One of the key incentives that U.S. state governments have tried is performance funding, which ties state funding directly to institutional performance on specific indicators, such as rates of retention, graduation, and job placement (Alexander, 2000; Burke, 2002c, 2005; Dougherty & Hong, 2006; Ewell & Jones, 2006; Gaither, Nedwek, & Neal, 1994; Layzell, 1999; McLendon, Hearn, & Deaton, 2006; Ruppert, 1994; Shulock & Moore, 2002, 2005; Zumeta, 2001).1


We are now entering a period of renewed interest in performance funding in state policymaking in the United States. In 2010, the National Governors Association and the National Conference of State Legislatures held conferences that addressed performance funding. Moreover, several states have recently enacted or readopted performance funding, including Washington and Texas in 2007 (Southern Regional Education Board, 2008; Washington State Board for Community and Technical Colleges, 2007).


Still, despite its apparent popularity, postsecondary performance funding has experienced only limited and unstable institutionalization at the state level in the years since it was first introduced. While half of U.S. states established a performance funding system for higher education between 1979 and 2010, 12 of those states later dropped or suspended it (with 3 reestablishing it) (Burke & Minassians, 2003; Dougherty & Reid, 2007; Dougherty & Natow, 2009; McLendon et al., 2006; Southern Regional Education Board, 2008, p. 22).


The purpose of this paper is to analyze the causes of the instability of state performance funding systems for higher education. We begin by tracing which states have dropped performance funding over the last 30 years. We then review the main existing explanations of the causes of the demise of performance funding systems. We follow with a new explanation, based on an analysis of the experiences of four states: two states that dropped performance funding (Missouri and Washington), one that still retains its performance funding program after 30 years (Tennessee), and one that ended one program but kept another (Florida). This new explanation concurs with but also goes beyond existing explanations of the demise of state performance funding systems. In the summary and conclusions, we note how our new explanation converges with and diverges from findings in the literature on policy termination and program sustainability.


Our findings have significance beyond just higher education in the United States. Performance funding systems are also found in state funding of elementary and secondary education in the United States, as in the case of Kentucky and Texas (Carnoy, Elmore, & Siskin, 2003, chap. 1). Moreover, performance funding is a feature of the quality assurance systems for higher education in countries such as Germany and the United Kingdom (Dill, 2007; Rhoades & Sporn, 2002).


THE UNSTABLE ADOPTION OF PERFORMANCE FUNDING


Between 1979 (when Tennessee first adopted performance funding) and 2010, half of U.S. states enacted performance funding policies for higher education (Burke & Minassians, 2003; Dougherty & Reid, 2007; McLendon et al., 2006; Southern Regional Education Board, 2008, p. 22). However, during those years, half of the states that adopted performance funding later dropped or suspended their performance funding systems (though three later created new ones).


To determine which states had abandoned performance funding, we took several steps. First, we compiled reports of the demise of performance funding in the scholarly literature (Burke, 2002a, 2002b; Burke & Minassians, 2003; Burke & Modarresi, 2000; Dougherty & Reid, 2007) and journalistic sources such as the Chronicle of Higher Education. Second, in cases where there was some doubt about whether a state had indeed stopped its performance funding (or even whether the state had established performance funding to begin with), we contacted higher education officials and academic experts familiar with policymaking in those states.2


Table 1. States Where Initial Performance Funding System Was Terminated: Year of Occurrence

Year   #   States
1996   1   Colorado
1997   2   Arkansas, Kentucky
1998   1   Minnesota
1999   1   Washington
2000   0
2001   0
2002   5   Florida (WDEF), Missouri, Illinois, New Jersey, Oregon
2003   1   South Carolina
2004   0
2005   0
2006   0
2007   0
2008   2   Georgia, Kansas
2009   0
2010   0


Sources: Burke (2002); Burke & Minassians (2003); Burke & Modarresi (2000); Dougherty & Reid (2007); authors' interviews.


As Table 1 indicates, five cases of demise occurred before 2000, while eight fell in the years 2002 and later (with five cases in 2002 alone). Florida suspended funding for its Performance-Based Budgeting program after 2008, but we are not counting it as a case of demise since it has not eliminated the program.  


EXPLAINING THE DEMISE OF STATE PERFORMANCE FUNDING SYSTEMS


The literature analyzing the demise of state performance funding programs for higher education is very small, consisting of a handful of pieces by Joseph Burke and his colleague Shahpar Modarresi (Burke, 2002a, 2002b; Burke & Modarresi, 2000).3 Burke and Modarresi compared the experiences of four states that abandoned performance funding in the 1990s (Arkansas, Colorado, Kentucky, and Minnesota) and two states whose systems continued at least through 2000 (Tennessee and Missouri).4 Utilizing qualitative and quantitative data, the authors lay out a variety of factors that distinguish the two sets of states. The qualitative analysis consisted of documentary analysis and interviews in the six states (Burke, 2002a). The quantitative analysis involved a discriminant analysis of the responses of state and campus policymakers surveyed in December 1996 in the six states.


Though Burke and Modarresi do not do so, their explanatory factors can be grouped into immediate precursors and more distant causes. In terms of immediate precursors, Burke noted that the states that abandoned performance funding were characterized by strong opposition to performance funding by higher education institutions and a turnover in governors who were supportive of performance funding (Burke, 2002a, pp. 223–226, 235, 238, 241). A changeover in governor from a supporter to one who had a different policy agenda played a role in undermining performance funding in Arkansas and Kentucky (Burke, 2002a, pp. 223–224, 238, 241). In all four states, opposition by higher education institutions contributed to the demise of performance funding (Burke, 2002a, p. 241). Fueling this opposition was a perception by higher education institutions that performance funding levels fell below expectations and did not seem fairly distributed (a major factor in Arkansas), a perceived lack of consultation with colleges in developing and implementing the performance funding system (Colorado and Minnesota), and unhappiness with the amount of authority being exercised by the Department or Commission of Higher Education in Arkansas and Kentucky (Burke, 2002a, pp. 224, 226, 235, 241).


Lying behind these immediate precursors are more distant causes rooted in perceptions of how performance funding was established, how it was designed, and what its impacts were (Burke, 2002b, pp. 250–252, 260; Burke & Modarresi, 2000, pp. 442–443, 446, 450). Burke and Modarresi conducted a discriminant analysis of a 1996–1997 survey of state and campus officials in the four states that gave up performance funding (Arkansas, Colorado, Kentucky, and Minnesota) and the two states that kept it through 2000 (Missouri and Tennessee). The state officials included governors and their aides, chairs of legislative fiscal and education committees, and board chairs and senior officials of higher education coordinating boards and of university systems. The campus officials included presidents, vice presidents, academic deans, and faculty senate chairs at all public colleges and universities. The total number of survey respondents across the six states was 530, or 50.1% of those surveyed (Burke & Modarresi, 2000, pp. 439, 449). Burke and Modarresi found that respondents in the states that gave up performance funding more often identified business, legislators, community leaders, and governors as playing an important role than did respondents in the states that kept performance funding. Conversely, respondents in the states giving up performance funding assigned a weaker role to state coordinating boards and their officials than did respondents in the states keeping performance funding (Burke, 2002b, p. 250; Burke & Modarresi, 2000, pp. 442–443, 450). In terms of program design, respondents in the demise states were less likely than those in the continuation states to see quality as a current goal of their performance funding system and more likely to see efficiency as a current goal (Burke, 2002b, p. 252; Burke & Modarresi, 2000, pp. 442–443, 450). Finally, Burke and Modarresi also found that respondents in the four states that gave up performance funding differed significantly from their counterparts in states that kept performance funding in their assessment of the impacts of performance funding. Respondents in the demise states less often perceived performance funding as having long-term favorable prospects and as resulting in improved higher education, increased accountability, increased state funding, or improved public perception of higher education (Burke, 2002a, p. 230; Burke, 2002b, pp. 251–252, 260; Burke & Modarresi, 2000, pp. 442–443, 446, 450). Respondents in the demise states were also more likely to see performance funding as eroding campus autonomy, causing budget instability for institutions, and carrying significant implementation costs (Burke, 2002b, p. 251; Burke & Modarresi, 2000, pp. 442–443, 450).
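For readers unfamiliar with the technique, the sketch below illustrates the general form of such a discriminant analysis using synthetic data. The variables, numbers, and scikit-learn implementation are our own illustrative assumptions, not Burke and Modarresi's data or code.

```python
# A minimal sketch of the kind of linear discriminant analysis Burke and Modarresi
# describe, using synthetic survey responses (illustrative values only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Each row is one hypothetical survey respondent; columns are 1-5 style ratings of
# perceived impacts (e.g., "improved higher education", "eroded campus autonomy").
demise_states = rng.normal(loc=[2.5, 3.8], scale=0.6, size=(60, 2))
kept_states = rng.normal(loc=[3.6, 2.9], scale=0.6, size=(60, 2))

X = np.vstack([demise_states, kept_states])
y = np.array([0] * 60 + [1] * 60)  # 0 = state dropped performance funding, 1 = state kept it

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The discriminant coefficients indicate which perceptions best separate the two
# groups of states, analogous to the distinguishing factors reported in the study.
print("coefficients:", lda.coef_)
print("classification accuracy on the same data:", lda.score(X, y))
```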


CRITIQUE OF BURKE AND MODARRESI'S STUDY


The Burke and Modarresi study makes a major contribution to our understanding of the causes of the demise of performance funding systems. As we will show below, our own research backs up their findings on several particulars.


However, the study has two notable limitations. First, its four cases of demise all occurred before 2000, whereas, as Table 1 indicates, the majority of the cases of demise came after 2000. Second, Burke and Modarresi treated Missouri as a case of continuation of performance funding, but Missouri abandoned performance funding in 2002.


As we show below, our own analysis rectifies these limitations by including cases where performance funding lapsed after 2000. In fact, one of the cases of demise that we include is Missouri. As a consequence of this difference in sampling, we discovered important causes of the demise of performance funding that are not discussed by Burke and Modarresi and that constitute important additions to the factors they did discover. In particular, we found that state fiscal troubles play a very important role in the demise of performance funding. We also found that a key cause of higher education opposition is whether performance funding takes the form of reserving a portion of the state appropriation for colleges and requiring them to earn this money back, rather than of new money given over and above the usual state funding for higher education. Finally, loss of support for performance funding involves not just turnover in the governorship but also the loss of key legislative champions (whether through term limits or a change of party control of the legislature) and the departure of supportive state higher education officials.


RESEARCH METHODS


To shed new light on the causes of the demise of performance funding, we examined four states with different experiences of the cessation of performance funding and different contextual factors. Two states relinquished performance funding but did so at different times. Washington gave up performance funding in 1999, before the recession of the early 2000s.5 Missouri gave up performance funding in 2002, in the midst of the recession. Hence, in contrast to the Burke and Modarresi study (Burke, 2002a, 2002b; Burke & Modarresi, 2000), our study includes cases from the 1996–1999 period and the post-2001 period. In addition, one of our cases (Missouri) had originally looked to be a case of long-term continuation but turned out to be a case of program demise. Hence, examining the causes of demise in that state is particularly instructive.


We contrast the experiences of these two cases of program cessation with two other states. Tennessee has never given up performance funding, holding onto it since 1979.6 Florida, meanwhile, created two performance funding systems, giving up one in 2002 but retaining the other to this day (although it has not funded it since 2008). Hence, by contrasting the experiences of these two programs in Florida we can add to the analytic leverage we get by comparing Tennessee to Washington and Missouri.  


In addition to different experiences with performance funding cessation, the four states differed in other ways that give us different windows into the politics of program cessation (see Table 2). They differed in the sectors of higher education covered, with performance funding in Florida applying only to community colleges while in the other states it covered 2-year and 4-year public institutions. In addition, the states differed considerably in the proportion of state higher education funding taken up by performance funding, with Florida and Tennessee considerably higher than Missouri and Washington. Finally, the states differed considerably in their higher education governance structures as of 2003 (McGuinness, 2003).7 Two states, Missouri and Washington, were fairly decentralized, with neither having a governing board or coordinating board for the 4-year colleges and universities beyond a weak coordinating board for all higher education. The other two states, Florida and Tennessee, were considerably more centralized, with consolidated governing boards for their public 4-year colleges and (in the case of Tennessee) a consolidated governing board covering all public 2-year colleges (McGuinness, 2003).



Table 2. Policy Characteristics of Case Study States

Demise of initial performance funding system
  Washington: Yes.  Missouri: Yes.  Florida: Yes (WDEF); No (PBB)*.  Tennessee: No.

Timing of demise (before or during recession of 2001–2002)
  Washington: Before the recession (1999).  Missouri: During the recession (2002).  Florida: During the recession (2002; WDEF system).  Tennessee: not applicable.

Duration of performance funding system that was given up
  Washington: 2 years.  Missouri: 7 years.  Florida: 4 years.  Tennessee: not applicable.

Legal basis for performance funding
  Washington: Budget proviso.  Missouri: Budget proviso.  Florida: Statute.  Tennessee: Budget proviso.

Sectors of higher education covered by performance funding system
  Washington: Public 2-year and 4-year.  Missouri: Public 2-year and 4-year.  Florida: Public 2-year only.  Tennessee: Public 2-year and 4-year.

Peak proportion of state funding for public higher education taking the form of performance funding (see text)
  Washington: 1.2% (FY 1999).  Missouri: 1.6% (FY 1999).  Florida: 6.6% (FY 2001).  Tennessee: 4.4% (FY 2005).

Governance for higher education (McGuinness, 2003)
  * State-level coordinating board for all public higher education: Washington, Missouri, Florida, Tennessee.
  * Public universities, consolidated governing board for all public universities: Florida; Tennessee (University of Tennessee, 5 campuses).
  * Public universities, individual governing boards for each public university or university system: Washington; Missouri.
  * Public 2-year colleges, consolidated governing board for all public 2-year colleges: Tennessee (all public 2-year colleges & other universities).
  * Public 2-year colleges, coordinating board for all public 2-year colleges: Washington; Florida.
  * Public 2-year colleges, individual governing boards for each public 2-year college: Missouri.

* Florida suspended funding for the Performance-Based Budgeting program in 2008 but has not eliminated it.


DATA SOURCES


Our analysis is based on extensive interviews and examination of documentary records in the form of public agency reports, academic books and journal articles, doctoral dissertations, and newspaper articles. Table 3 indicates the number and types of interviews that we conducted with various kinds of potential political actors. Our interviews were with state and local higher education officials, legislators and staff, governors and their advisors, business leaders, leaders of minority groups, researchers and academic experts, and organizational consultants (see Table 3). To maintain confidentiality, we do not identify our interviewees by name but rather identify them by approximate position.


Table 3. People Interviewed

                                                            FL   MO   TN   WA
State higher education officials                            10    4    6    6
Higher education institution officials                       8    4    5    6
Legislators and staff                                        3    5    2    8
Governors and advisors                                       3    3    1    2
Other state government                                       1    0    0    0
Business leaders                                             2    2    1    1
Other (consultants, researchers, minority group leaders)     0    2    5    3
Total                                                       27   20   20   26


All of our interviews were transcribed, coded, and entered into the NVivo qualitative software. (To the degree possible, we also coded and entered into NVivo our documentary materials.) The coding began with a preestablished list of codes focusing on the timing and form of demise, the actors and motives involved, and contextual events such as state budget problems or changes in control of the government that were hypothesized to affect the likelihood and form of demise. However, as we proceeded with our interviews and documentary analysis, we added new codes and changed existing ones.


To analyze the data, we ran coding inquiries in NVivo to find all references in the interviews and documentary materials to particular actors, motives, or contextual events. With these references in hand, we constructed analytic tables to compare different references to the same actor, motive, or event to determine to what degree our evidence was converging on certain findings and whether divergences were due to the fact that different respondents occupied different social locations that would variably shape their perceptions. In some cases, when we found a high degree of divergence of perception, we conducted additional interviews that might help us resolve these discrepant findings.
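The sketch below illustrates, with hypothetical coded excerpts, the kind of cross-tabulation we built from these coding queries. The code labels, example rows, and pandas implementation are illustrative assumptions only and do not reproduce our actual NVivo project or data.

```python
# A rough sketch of the analytic tables described above: coded interview references
# cross-tabulated by state, code, and respondent position (hypothetical rows).
import pandas as pd

coded_references = pd.DataFrame([
    # state, code (actor/motive/event), respondent position
    ("WA", "holdback formula opposed", "institution official"),
    ("WA", "holdback formula opposed", "state higher ed official"),
    ("WA", "loss of legislative champion", "legislator/staff"),
    ("MO", "state budget shortfall", "state higher ed official"),
    ("MO", "state budget shortfall", "institution official"),
    ("MO", "protecting base funding", "institution official"),
], columns=["state", "code", "position"])

# Count how many interview references support each code in each state, broken out
# by respondent position, to check whether the evidence converges across positions.
analytic_table = (
    coded_references
    .groupby(["state", "code", "position"])
    .size()
    .unstack(fill_value=0)
)
print(analytic_table)
```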


In the remainder of this paper, we report our findings for each state separately and then at the end draw conclusions about the general causes of the demise of performance funding. As we will show, our findings support several of the arguments made by Burke and Modarresi (Burke, 2002a, 2002b; Burke & Modarresi, 2000). However, we also arrived at several findings that go beyond theirs and shed new light on why performance funding programs do not persist.


FINDINGS


We begin our case reports with Washington and then Missouri, which gave up their performance funding systems in 1999 and 2002, respectively. Florida then provides a partial contrast, for this state gave up one performance funding system but kept another (although it has suspended funding since 2008). And Tennessee provides a full contrast, for this state has maintained performance funding ever since 1979.


WASHINGTON: DEMISE OF PERFORMANCE FUNDING IN THE 1990S


Washington is an example of a state that gave up performance funding in the 1990s, before the recession of the early 2000s.8 In 1997, Washington adopted performance funding for the state's public institutions as a provision in the state's higher education appropriation for fiscal years 1998 and 1999 (Washington State General Assembly, 1997; see also Nisson, 2003; Washington State Higher Education Coordinating Board, 1998). Under this program, the state held back a small portion of state appropriations for higher education and required institutions to achieve specified performance levels to recover the full amount of withheld funding. The withheld amount consisted of $10.6 million for 4-year colleges and $2.05 million for 2-year colleges, amounting to 1.2% of the state's total appropriations for higher education in fiscal year 1999 (Washington State Board for Community and Technical Colleges, 1999a, 1999b, p. 1; Washington State Higher Education Coordinating Board, 2000, p. 3; Washington State Higher Education Coordinating Board, 2001, p. 75; Washington State Higher Education Coordinating Board, 2006, App. 1, p. 1).


Which performance indicators drove this funding varied by whether an institution was a 4-year or 2-year college. Four-year colleges were required to meet standards relating to persistence, completion, faculty productivity, graduation efficiency (proportion of credits taken to credits needed to graduate), and one measure that would be unique for each college (Washington State General Assembly, 1997; see also Sanchez, 1998; Washington State Higher Education Coordinating Board, 1998, 2000). Two-year colleges were required to meet standards relating to transfer rates, course completions, earnings of occupational program graduates, and graduation efficiency (Washington State General Assembly, 1997; see also Nisson, 2003; Washington State Community and Technical College Board, 1999a).
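To make the holdback mechanics concrete, the sketch below shows one simplified way such an earn-back rule could operate. It assumes, purely for illustration, that an institution's withheld amount is divided evenly across its indicators and that each target met returns that indicator's share; the actual Washington rules tied recovery to achieving specified performance levels and may have differed in detail.

```python
# Illustrative sketch (not the actual Washington formula): assume the withheld
# appropriation is split evenly across performance indicators and that each
# target met returns that indicator's share of the holdback.

def recovered_funding(holdback: float, targets_met: dict[str, bool]) -> float:
    """Return the portion of the withheld appropriation an institution earns back.

    holdback    -- dollars withheld from the institution's appropriation
    targets_met -- mapping of indicator name to whether its target was achieved
    """
    share_per_indicator = holdback / len(targets_met)
    return share_per_indicator * sum(targets_met.values())


# Hypothetical 4-year institution with a $1.5 million holdback and the five
# indicator areas named in the 1997 proviso (dollar figure is an assumption).
example = {
    "persistence": True,
    "completion": True,
    "faculty productivity": False,
    "graduation efficiency": True,
    "institution-specific measure": True,
}
print(f"Earned back: ${recovered_funding(1_500_000, example):,.0f} of $1,500,000")
```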


In 1999, when the Washington state legislature adopted a new budget for the following biennium, it removed the performance funding component, leaving only a performance reporting system for the 1999–2001 biennium (authors' interviews; see also Washington State General Assembly, 1999).


Our findings suggest that a number of factors contributed to the demise of performance funding in Washington State in 1999. These include the loss of key supporters in the legislature and dislike, even hostility, on the part of the higher education community toward the particular form of the performance funding system adopted in 1997. Interestingly, budget problems were not an issue, as they would be for the states that gave up performance funding in the early 2000s.


Loss of Key Political Supporters


The Republicans' loss of party control in the state legislature played a role in the discontinuance of performance funding in Washington State. Republican legislators had played a key role in the enactment of performance funding in 1997 (Dougherty, Natow, Hare, & Vega, 2010). But after the 1998 election, Democrats were once again the dominant political party in the Washington State Senate, and Democrats and Republicans held equal representation (49 seats each) in the State House of Representatives (Ammons, 1998; Nisson, 2003). This change in party control helped to bring about the demise of the 1997–1999 performance funding system (authors' interviews WA #2, 9, 10, 11, 14, 15, 16, 21, 22, 23). A well-placed observer noted that the Democrats "took control of the legislature and they didn't have any investment in the performance funding proviso." Democrats in the state legislature were more sympathetic to the preferences of institutions than were Republicans, according to a leading former legislator:


Democrats were more willing to agree with their institutional representatives that it would be a penalty to the least able and first-time college students, that the institutions were already doing the best they could, and [that] in the long run there were relatively few students who like to stay on in higher education and be professional students.


Higher Education's Lack of Support: The Impact of Policy Design


Key to the elimination of the 1997–1999 performance funding system was a lack of support by higher education institutions, the Washington State Board for Community and Technical Colleges, and the Washington State Higher Education Coordinating Board. The state's colleges and universities strongly disliked the 1997–1999 performance funding system and did not keep their aversion a secret (authors' interviews WA #2, 9, 14, 15, 16, 18, 22, 23; also see Sanchez, 1998). A former state higher education official remarked:


[T]he institutions were never particularly, I think, comfortable with the whole idea of performance measurements. They were very good in the subsequent years, [at] lobbying the members of the legislature, and administration, about their resistance to this, and why it wasn't really major to what was important in education . . . I think the institutions did a pretty good job of making a case.


The opposition of the public higher education institutions to the Washington performance funding system was sparked by several features of its policy design. Below we explore each of these in turn.  


A central reason behind the higher education community's opposition to performance funding was the system's holdback funding formula (authors' interviews WA #1, 2, 8, 9, 14, 18, 23; also see Sanchez, 1998). This is a policy design feature not mentioned by Burke and Modarresi (Burke, 2002a, 2002b; Burke & Modarresi, 2000), but it played an important role in catalyzing opposition to performance funding in Washington and, as we shall see, Florida. Washington's 1997–1999 performance funding system held back a fraction of higher education appropriations; institutions would receive these withheld monies only by doing well on performance funding indicators (Sanchez, 1998; Washington State General Assembly, 1997; Washington State Higher Education Coordinating Board, 1998). Many of the public institutions viewed the holdback provision as punitive, providing only negative reinforcement (authors' interviews WA #1, 2, 9, 14; also see Sanchez, 1998). A state higher education official observed: "[O]ne of the things that made it a difficult sell for the higher education system . . . was that there . . . was no additional money put on the table as an incentive to improve performance. There was only the prospect of punishment there."


The holdback formula was particularly troublesome to some institutions because they had difficulty meeting the performance criteria (authors' interviews WA #2, 15, 16). As a former legislator observed, institutions "had to show improvement to get their full allocation. That proved to be fairly challenging for the institutions." A staffer at the State Board for Community and Technical Colleges agreed: "[B]y the end of the year . . . several colleges didn't get their money back. And some of them were counterintuitive: institutions which everybody perceived as always doing the right thing, and they didn't get the points."


Another reason that higher education institutions opposed the 1997–1999 system was a perception that the system did not sufficiently account for institutional diversity; that is, the indicators painted a picture of accountability with a relatively broad brush that did not capture well the unique institutional missions of different types of public higher education in the state (authors' interviews WA #7, 10, 14).


Along similar lines, higher education institutions perceived an incongruity between the performance indicators adopted by the legislature and the performance goals that institutions believed to be important. There was concern that the legislature's performance indicators would cause colleges to focus their energy and resources on programs that were more likely to enhance institutional performance on the indicators, while neglecting or even abandoning programs that the colleges felt were valuable (authors' interviews WA #2, 14; see also Dougherty & Hong, 2006). For example, a state higher education official decried the fact that the emphasis put on technical programs to place their graduates in jobs with a median wage of $12 an hour pressured colleges to drop programs such as early childhood education or secretarial training that typically do not lead to jobs paying this much (authors' interviews WA #2).


Another reason that some higher education institutions in Washington opposed the 1997–1999 system was that they felt the system duplicated other mandates to which colleges and universities in the state were already subject (authors' interviews WA #12, 14). As a state executive branch staffer told us, institutions' principal argument against performance funding was, "We go through an accreditation process. What more do you need?" A former state higher education official reiterated that institutions "measure performance internally, and that's something that they ought to be able to continue to do, rather than have oversight from a separate board, like the Higher Education Coordinating Board or any other organization."


Connected with the opposition to performance funding on the part of higher education institutions was the frustration of the state higher education boards (the Higher Education Coordinating Board [HEC Board] and the State Board for Community and Technical Colleges [SBCTC]) with the way the system had been adopted (authors' interviews WA #2, 14). The legislature had taken the lead in adopting performance funding in Washington State, and the Higher Education Coordinating Board and State Board for Community and Technical Colleges were given little time to propose performance funding indicators and measures (authors' interviews WA #2, 14; see also Nisson, 2003). On the community college side, the State Board for Community and Technical Colleges was given only a few days to develop indicators and measures for 2-year colleges (authors' interview WA #2; see also Nisson, 2003). One State Board for Community and Technical Colleges staff member told us: "[T]hey were sprung upon us. The legislature said you're going to have . . . to have indicators, and you're going to have to have them in 3 days."


The state Higher Education Coordinating Board had more time to devise performance indicators and measures for the 4-year institutions (in conjunction with the legislature and the institutions themselves) than the State Board for Community and Technical Colleges had (authors' interview WA #14). However, according to our respondent, even the HEC Board had not been given very much time: "Once we found out that the legislature was serious in doing it, the legislative session at that time was probably about up, you know, about 3 or 4 months total." In the end, the State Board for Community and Technical Colleges and the HEC Board did not endorse the 1997–1999 performance funding system.

 

Lack of Business Support


While performance funding was encountering opposition from higher education institutions, the system was not garnering support from the outside community. Business had supported performance accountability efforts in Washington (Dougherty et al., 2010). However, by 1999, there was little evidence of business support (authors' interview WA #2). A state higher education official noted: "I think often we do have support from labor, or business, or our Workforce Board . . . but on this issue, I don't think any of them spoke on the issue at all."


Enactment Through Budget Proviso Rather Than Statute


A final factor contributing to the demise of Washington's 1997–1999 performance funding system was the fact that performance funding was enacted through a budget proviso rather than a statute. This made the system easier to eliminate. Provisos can be eliminated simply by not including them in the new budget; there is no need to go through a legislative repeal process, as would be the case with a statute (authors' interviews WA #9, 19, 20, 23). The fact that the 1997–1999 system was enacted by proviso rather than statute contributed to the ease with which the legislature was able to discontinue the system after one biennium (authors' interviews WA #9, 19). As one higher education official told us: "[T]here wasn't a law passed in 1997, and so it was just part of the budget. Those provisions were only there, and so the legislature didn't have to actually eliminate it. They just didn't; they chose not to renew it" (authors' interview WA #9).


Summary


Several factors played a role in the rapid demise of the 1997–1999 Washington performance funding system. First, the state Senate's dominant political party switched from Republican to Democratic, and the Democrats were not as supportive of tying funding to institutional performance. Second, higher education institutions were strongly displeased with the 1997–1999 performance funding system for several reasons rooted in policy design: the use of a holdback funding system, the difficulty some institutions had in meeting performance criteria, differences between institutions and the legislature regarding the goals for higher education, institutions' belief that the 1997–1999 system did not take sufficient account of institutional diversity, their belief that performance funding duplicated existing accountability mandates, and the perceived lack of sufficient consultation with the State Board for Community and Technical Colleges and the Higher Education Coordinating Board. Even as performance funding faced strong opposition from higher education institutions and the state higher education boards, the system failed to get support from organizations outside higher education, including the business community. Finally, the fact that the 1997–1999 performance funding system was enacted by budget proviso rather than by statute made eliminating the system in the following biennium relatively easy.



MISSOURI: DEMISE OF PERFORMANCE FUNDING IN THE 2000s


Missouri is one of several states that gave up performance funding during the recession of the early 2000s. One of the first states to establish performance funding, Missouri enacted its Funding for Results (FFR) program through a successful request in 1993 by the Coordinating Board for Higher Education for a supplementary appropriation for performance funding for the 1994 fiscal year. Performance funding was first applied to 4-year colleges that year, and the community colleges were added the next fiscal year (Dougherty et al., 2010; Naughton, 2004, p. 68; Stein, 2002, pp. 113–114, 127–128). At the peak in fiscal year 1999, Funding for Results accounted for 1.6% of state appropriations for higher education (National Center for Education Statistics, 2007, Table 339; Stein, 2002, pp. 127–129).


The Funding for Results program began with three indicators, but these eventually flowered into six for the community colleges and eight for the universities. Four indicators were common to both types of institutions: Freshman Success Rates, Success of Underrepresented Groups, Performance of Graduates, and Successful Transfer. The community colleges had two additional, sector-specific indicators: Degree/Certificate Productivity and Successful Job Placement. The 4-year colleges and universities, meanwhile, had four additional sector-specific indicators: Quality of New Undergraduate Students, Quality of New Graduate Students, Quality of Prospective Teachers, and Attainment of Graduation Goals. Two additional indicators were dropped over the years: Assessment of Graduates and Degrees in Critical Disciplines (Naughton, 2004; Stein, 2002).


In 2002, the legislature did not fund performance funding, and since then it has been in abeyance in Missouri (authors' interviews; Schmidt, 2002b). The demise of performance funding in Missouri came as a great surprise to higher education officials in the state and to outside observers (Schmidt, 2002a, 2002b). In fact, Burke and Modarresi had classified Missouri as one of their two cases of long-lasting continuation of performance funding (Burke, 2002b; Burke & Modarresi, 2000).


The main cause of the surprising demise of the Funding for Results program was the state's sharp drop in revenues. However, other factors also played a role, including a lack of political support for performance funding from the governor and the legislature, higher education institutions, and the business community.


State Budget Problems


The cessation of the Funding for Results program is largely attributable to the state's budget shortfall in the early 2000s (authors' interviews MO #1, 3, 6, 7, 12, 14, 15, 16, 17, 20; Schmidt, 2002b). The state of Missouri suffered a 6.0% drop in general revenues between fiscal years 2000 and 2002 (U.S. Census Bureau, 2002, Table 429; U.S. Census Bureau, 2006, Table 439). Consequently, state appropriations for higher education were cut by 9.7% between fiscal year 2001 and fiscal year 2003 (National Center for Education Statistics, 2007, Table 339).9 But the fact that the cut in higher education funding was in excess of the drop in state revenues indicates that more than just budgetary factors were at work.


Loss of Support from the Governor and the Legislature


Governor Mel Carnahan, who had strongly supported more spending for higher education and performance funding, died in 2000. His successor, Bob Holden, was not as committed to funding performance funding (authors' interviews MO #1, 2, 4, 6, 10, 12, 15; Schmidt, 2002b). As a state higher education official noted:


When the money went south, we were not successful with the Holden administration in the end to keep it alive, even with crumbs . . . You know, when you are working with the magnitude of money they're working with and somebody's talking to you about pennies in the basket, it's hard to get focused attention.


The Holden administration focused on preventing cuts in elementary and secondary education. In fact, even as higher education appropriations were cut sharply, state appropriations for public elementary and secondary schools actually rose by 3.1% between fiscal year 2001 and fiscal year 2003 (National Center for Education Statistics, 2005, Tables 154, 155; National Center for Education Statistics, 2006, Table 153).


Holden blamed Republicans in the legislature for blocking his effort to use the states rainy-day fund to cover the budget shortfall the state was facing. He also complained that the Coordinating Board for Higher Education and legislative representatives of the public colleges had not supported strongly enough his effort to draw on the rainy-day fund (Schmidt, 2002b; Sloca, 2002).


The lack of support for performance funding in the legislature reflected the fact that in 2002 the Republican Party had taken control of the legislature. In contrast with Republican legislators in other states, the new Missouri legislators were not much interested in performance funding (authors' interviews MO #6, 10).


Lack of Support from Higher Education Institutions


In the face of the higher education budget cuts, higher education institutions did not push to preserve Funding for Results. Their first priority was preserving their base institutional funding, and this priority made performance funding expendable (authors' interviews MO #1, 3, 5, 6, 7, 12, 15, 17; Schmidt, 2002b). As a university official noted:


All institutions were focused on protecting their core . . . If I was in a performance-based system, I may get $50 million plus $500,000 based on my performance measures last year. All right? Well, when the conversation . . . in the following year [is] that you're [not] going to get $50 million, but you're going to get cut 10%, so now you're going to get $45 [million], the performance-based funding just comes off the table. Because performance was always an add-on. It wasn't part of the core.


FFR appropriations that institutions received in one year were built into the base of their budgets for the following year. However, the budget cuts facing higher education in 2002 may have made this seem immaterial.


The lack of support for Funding for Results on the part of higher education had more longstanding roots than just a desire to protect base funding during the 2002 budget crisis. The lack of support also reflected reservations about Funding for Results on the part of some public higher education institutions, particularly the more prestigious ones (authors' interviews MO #1, 4, 5, 9, 11; Naughton, 2004; Schmidt, 2002b; Stein, 2002, p. 115).


The more prestigious public universities did not see Funding for Results (FFR) as an attractive way to secure extra state funds. FFR was seen as not generating a lot of money compared to a university's total budget. In fact, university administrators tended to view FFR money not as new money but rather as money that had been taken away from expected budget increases to meet inflationary increases in costs (authors' interviews MO #1, 16; see also Naughton, 2004, pp. 77–78, 83). In the face of this, some institutions found it more attractive to strike special deals with the governor or the legislature for mission enhancement funds to strengthen academic programs and build new facilities rather than to focus on improving their institutional performance in order to secure more FFR funding (authors' interviews MO #5, 7, 10, 15, 16, 19; also see Naughton, 2004, p. 78).


In addition to these fiscal complaints, some Missouri public higher education institutions also had other criticisms of the Funding for Results program (authors' interviews MO #7, 9). For example, administrators at a high-prestige university felt that FFR was inappropriate for their institution because, as the state's only public doctoral institution, it was committed to improving its research capacity, but FFR, which was primarily driven by measures associated with undergraduate education, had no elements related to research (Naughton, 2004, p. 78). An administrator at the University of Missouri noted:


Where are the aspects of performance-based funding that reflect one of the major roles of the university, and that's research creativity, and not to mention our extension mission? . . . I would say to this day in large part the [FFR] approach was basically suited for nonresearch institutions and very poorly suited to the four-campus system of the University of Missouri. We had to be a bit skeptical and lukewarm about the whole thing.


In addition, administrators at more prestigious institutions tended to view FFR as an intrusion on institutional autonomy (authors' interviews MO #2, 7, 9; also see Naughton, 2004, pp. 77–80; Stein, 2002, p. 116). This was particularly the case with the campus-level FFR, which encouraged campuses to apply performance funding to their internal budgeting systems (Naughton, 2004, pp. 79–80; Stein & Fajen, 1995, pp. 85–88). An official at the University of Missouri noted:


I think the attitude was basically here's something that the CBHE is trying to push down our throats. It's not well baked. There was a lot of emphasis on test scores again, and our own faculty, you know for the most legitimate academic reasons, had deep concerns about the heavy weight given to test scores.


At the same time, Funding for Results enjoyed a more favorable reception at medium-prestige public institutions (Naughton, 2004, pp. 83–90). In fact, one of them, Truman State University, had been a very early exponent of campus assessment, and its former president, Charles McClain, had gone on to be the state commissioner of higher education who led the effort to establish Funding for Results.


Lack of Mobilized Support from Business


Though the business associations did not lobby for performance funding in the early 1990s, some prominent business people had strongly supported it. In fact, a businessman headed the Missouri Business and Education Partnership Commission that in 1991 had called for performance funding (Dougherty et al., 2010). However, a decade later, business was silent when Funding for Results was no longer supported by key politicians. Key business people, legislators, and higher education officials do not recall any reaction on the part of the business community when FFR was eliminated (authors' interviews MO #5, 6, 12, 13, 19). A state official who was involved in budget deliberations at the time noted: "I don't remember any business involvement at all in the process in 2002, when it [Funding for Results] kind of atrophied away. I remember just complete silence in the business community on that issue."


The Political Weakness of the Coordinating Board for Higher Education


The one body that actively defended Funding for Results was its prime progenitor, the Coordinating Board for Higher Education. However, the board was not very strong politically (authors' interviews MO #2, 9, 15, 20). A top official at the University of Missouri noted the historically weak role of the Coordinating Board for Higher Education:


We [the University of Missouri] have our own governing board, and historically you never paid a lot of attention to the Coordinating Board . . . It had recommending powers on the budget for the University of Missouri and the other public institutions. It had some control of computing equipment for the other institutions, but not the University of Missouri. The board derived its power fundamentally from the relationship between the commissioner and the governor.


Unfortunately, the governor in power was no longer Mel Carnahan, a strong supporter of performance funding.


Summary


The main cause of the surprising demise of the Funding for Results program was the state's sharp drop in revenues in the early 2000s. The FFR program perhaps could have survived the state's budget crisis if the Coordinating Board for Higher Education had been joined by other strong advocates. However, this support was missing. The governor in office was no longer the governor who had strongly championed higher education spending and performance funding. The legislature was now led by Republicans who were not interested in performance funding. The higher education institutions were not calling for saving performance funding because their focus was on protecting their base funding, and some had considerable reservations about performance funding to begin with. The business community, a potentially powerful supporter of performance funding, was silent. Finally, the Coordinating Board for Higher Education depended on its relationship with the governor, and the incumbent in that role was no longer Mel Carnahan, who had been a strong supporter of performance funding.10


FLORIDA: DEMISE AND SUSPENSION OF PERFORMANCE FUNDING


Florida provides a case both of the demise of performance funding and of its continuation but suspension. In the mid-1990s, Florida created two performance funding systems that applied only to its community colleges.11 One (the Workforce Development Education Fund) ended in 2002. The other (Performance Based Budgeting) survives to this day, although funding has been suspended since 2008.


Performance Based Budgeting began with an appropriation of $12 million for fiscal year 1996–1997.12 These funds would be distributed to community colleges at the end of the fiscal year, depending on their individual performances on three sets of indicators: completion of certificates and associate of arts and associate of science degrees; completion of the same by students who are economically disadvantaged, disabled, non-English speakers or in English as a Second Language (ESL) programs, passed state job licensure exams, or were placed in jobs in targeted occupations; and associate of arts completers who graduated with fewer than 72 attempted credit hours.13 Over the years, PBB funding has accounted for 1% to 2% of total state appropriations for the community colleges (Dougherty & Natow, 2010).


The Workforce Development Education Fund (WDEF) was enacted in 1997 and took effect the following year.14 The fund applied to community colleges and area vocational-technical centers run by K-12 districts. At the fund's peak, the WDEF comprised nearly 6% of state funding for community colleges.15 The WDEF withheld 15% of an institution's state appropriation from the previous year for vocational and technical education. Institutions could then win this money back based on their performance on the following measures: (a) number of adult basic education completions, vocational certificates, and vocational associate of science degrees for students with certain characteristics (economically disadvantaged students, welfare recipients, students with disabilities, dislocated students, and ESL students) and (b) job placement of students (with institutions getting more points for placement in higher paying jobs) (Bell, 2005, pp. 47, 59–60, 175–176; Florida State Board for Community Colleges, 1998, 2000; Pfeiffer, 1998; Wright, Dallet, & Copa, 2002, p. 163; Yancey, 2002, pp. 59–61).


Budget Problems


Funding problems were a key cause of the demise of the Workforce Development Education Fund in Florida and of the suspension of funding for the Performance Based Budgeting program. In the early part of this millennium, Florida government revenues dropped from $51.6 billion in fiscal year 1999–2000 to $47.9 billion in fiscal year 2001–2002, a decrease of 7.2% (U.S. Census Bureau, 2002, Table 429; U.S. Census Bureau, 2006, Table 439). Moreover, soon after his election, Governor Jeb Bush (who held office from 1999 to 2007) moved to cut spending or hold down increases in many areas of the state budget in order to meet increasing Medicaid costs, fund new initiatives of particular interest to him, and allow large cuts in taxes (Dyckman, 2001; Pendleton & Saunders, 2001). Consequently, Gov. Bush kept down state spending on higher education. While state higher education spending rose by 5.0% between fiscal years 2001 and 2004, it badly lagged rising college enrollments, with the result that state spending per full-time equivalent (FTE) student at the community colleges dropped by 13.7% during those years (Florida State Department of Education, 2009, Table 19; Florida State University System, 2008, Tables 10, 40; National Center for Education Statistics, 2007, Table 339).16


Faced with these budget constraints, the community colleges wanted to protect their main enrollment-based funding and deemphasize performance funding (authors' interviews FL #20, 21). A leading state community college official noted:


They [community colleges] had not gotten any additional money in a long time, yet they had an open door policy, and so they were taking more and more enrollments. So they wanted to go back on more of an enrollment basis and de-emphasize performance . . . They wanted the focus to be on enrollment, because they had been pulling in more and more students every year, and particularly as the budget got tight and universities were capping [enrollments], they were getting the spillover on it. So all of a sudden, enrollment became a more salable argument for funding than did performance.


Similar economic considerations played a role in the suspension of state funding for the Performance Based Budgeting program after fiscal year 2008.  State funding for community colleges (general revenues and lottery funds) dropped 7.3% between fiscal year 2008 and fiscal year 2010 (Florida Department of Education, 2011, Table 19).  A leading community college official noted how this led to the suspension of PBB:


It was suspended because of the economy and the cuts to state support.  The Council of Presidents recommended that the funds just be rolled into the CCPF (Community College Program Fund) to try to get the most equitable distribution during times of rapid enrollment growth coupled with declining state support.

   


But why was the WDEF program terminated while PBB has only been suspended? Other factors besides fiscal strain help to explain this.


Lack of Community College Support: Criticisms of the Policy Design of the WDEF


The community colleges wanted to be out from under the Workforce Development Education Fund because they had become quite unhappy with several features of the program: its use of a holdback feature to reward community colleges, the lack of increases in funding for the WDEF as time passed, the measurement of institutional performance against the average for other colleges rather than against a college's own past history, the opaqueness and perceived political nature of how the WDEF funding formula was applied, and the lack of sufficient consultation with the colleges in designing the WDEF to begin with. We discuss each of these criticisms in turn.


Unlike the Performance Based Budgeting program, the Workforce Development Education Fund program involved a holdback feature. Community colleges and school district area vocational-technical centers received 85% of their prior year's state workforce-related appropriation up front. The remaining 15% was held back, to be returned to the colleges and vocational-technical centers according to their performance in the subsequent year on a variety of workforce preparation measures, such as vocational graduation rates and placements in high-wage/high-demand jobs. The baseline allocation was first made in 1998–1999, and the formula was first applied in 1999–2000 (Pfeiffer, 1998, p. 24; Wright et al., 2002, p. 153; Yancey, 2002, pp. 59–61). From the very beginning, the community colleges and vocational-technical centers were unhappy with the prospect that they might not fully recapture the funds held back (authors' interviews FL #2a, 2b, 3b, 4b, 6a, 27).


The colleges' uncertainty was further exacerbated by the fact that the state legislature did not increase funding for the WDEF, even as the colleges improved their performance. The result was that colleges could increase their performance but still not receive any additional money (authors' interviews FL #2a, 3a, 3b, 4b, 6a, 10, 21). As a leading state workforce training official noted:


Because [the colleges] were recruiting primarily poor folks and target groups much more aggressively, their point production went up significantly in the beginning, and with that, nobody should have lost money. But when you don't have any additional money in the pot, somebody has to lose.


In gauging how well colleges were performing, the WDEF system measured colleges not against their past performance but against that of other colleges. Thus, a college could increase its workforce training output and still lose a portion of the held-back funds if other colleges increased their output even more (authors' interviews FL #4b, 6a, 21, 25; see also Dougherty & Hong, 2006). A vocational education dean at a community college noted:


If you improve more and there's not any new money in that pot, guess where your more improvement comes from. From my pot of money. Because if . . . every one of us improved, but these two here improved even more, part of my money is gone that I operated on last year.


As analysts of the demise of performance funding have noted, requiring colleges to compete against one another tends to provoke opposition to performance funding (Burke, 2002a, p. 225).


The formula connecting college performance to funding outcomes was very unclear to the colleges, and this opaqueness aroused institutional distrust of and opposition to the system. The WDEF formula was opaque in part because, as we have noted, funding outcomes depended not just on a college's own performance but also on that of other colleges and on how much money was allocated to the WDEF that year. The problem was compounded by the fact that funding allocations were made at the end of the year by a very small number of state legislative staff members who were responding to legislative pressures (authors' interviews FL #2a, 6a, 22, 23, 24). As a state community college official noted, while the PBB formula for determining colleges' funding shares was viewed as straightforward, that was not the case with the WDEF:


The other problem we had with [the WDEF], to be honest with you, [was that] it was a black box. In other words, two people ran the model. Nobody in the world knew what they were doing. They finagled the numbers. No one knew how they came up with the points . . . So a lot of mistrust was created by a black box approach . . . No one really trusted the data.


When the Workforce Development Education Fund was enacted in 1997 (Laws of Florida, SB 1688, Chap. 97-307), it was very much a product of the state Senate. In contrast with the development of the Performance Based Budgeting system, which involved broad and deep participation by the community colleges, the development of the WDEF was a much more closely held initiative. A handful of state senators and their staff designed the program, with little consultation with the community colleges. The community colleges were consulted after the fact about how the law would be implemented, but they had little to do with working out the basic framework, particularly the holdback provision, which they roundly disliked (Dougherty et al., 2010).


K-12 Criticisms of the WDEF


The vocational-technical centers run by local school districts were also subject to the WDEF, and they, too, were critical of it. They found themselves competing against the community colleges for funding and often losing that competition (authors' interviews FL #3b, 6a, 22). This was particularly galling because there had been a history of conflict between the community colleges and the K-12 system over who should offer postsecondary vocational education (authors' interviews FL #3b, 6a, 22). In fact, at one point the community college system had tried to take over all postsecondary vocational education by absorbing the vocational-technical centers. This move was bitterly fought by the K-12 districts and became another reason for them to repudiate the WDEF.


All these objections by the community colleges and the K-12 districts might not have been enough to kill the WDEF had they been counterbalanced by strong support from the program's original legislative advocates and from the wider community, particularly business. However, neither source of support was operative.


Loss of Legislative Champions


The main supporters of the WDEF in 1997 had been members of the state Senate, particularly George Kirkpatrick (D-Gainesville), chair of the state Senate Appropriations Committee, and Jim Horne (R-Jacksonville). However, by the new millennium these key supporters were gone. Senator Kirkpatrick left the Senate after 2000 (having run into a term limit) and died suddenly in 2003 (Associated Press, 2003). Meanwhile, Senator Horne, facing a term limit in 2002, accepted the position of commissioner of education in 2001 (Saunders, 2001).


These state Senate advocates of the WDEF program were replaced by new members who had less allegiance to it (authors' interviews FL #2a, 6a). Many had been in the state House at the time the WDEF was enacted, but this bred little allegiance, because the WDEF had been incubated in the state Senate with very little involvement by the House (Dougherty et al., 2010). As a state community college official noted, these new senators did not feel bound by the past decision to enact performance funding and wanted to use the funds involved for projects of their own:


Because we have term limits here in Florida, probably some of the champions of that [performance funding] got term-limited out. And other people said, "This doesn't make sense; we're going to use those monies in different ways." Because different people are always looking for different pots of money. [Because of term limits], you only have eight years, so you have to do something.


In addition, the new legislators were hearing many complaints from community colleges and K-12 districts about the WDEF (authors' interview FL #6a).


Lack of Business Support


The Workforce Development Education Fund might have survived the dissatisfaction of the community colleges and K-12 districts if it had enjoyed strong community support. However, such support was not forthcoming. Particularly important was the lack of support from business, given its importance in state politics generally (Thomas & Hrebenar, 2004, p. 118) and in Florida politics specifically. One might expect the Florida business community to have stepped in to save the WDEF, given the resonance of performance funding with business notions of efficiency and the primacy of market forces, and given that business had played a significant role in the origins of Performance Based Budgeting (Dougherty et al., 2010). However, business did not display much concern about performance funding in the early years of this millennium (authors' interviews FL #3b, 21). As a leading state workforce development official noted:


I think performance-based funding was just so much academic jargon to them. If the programs improved, they were happy with that, but they might not do a cause-and-effect with performance-based funding . . . They were more interested in there being funding, particularly in K-12 because . . . the quality of the schools becomes a big factor in whether businesses want to be there, where they want to locate.


In summary, the demise of the Workforce Development Education Fund is attributable to the joint effect of several forces. State appropriations for higher education were being held down to free up monies to pay for rising Medicaid costs, fund new initiatives by the governor, and allow tax cuts. Faced with declining state spending per FTE student, the community colleges preferred to see the WDEF eliminated in order to protect their main enrollment-based state funding. Moreover, unlike the case with the Performance Based Budgeting program, the community colleges had very substantial criticisms of how the WDEF worked, particularly the way it left colleges very uncertain about their funding because of its holdback feature, the lack of increases in state funding despite improvements in community college performance, and the fact that the WDEF measured a college's performance improvement against that of other colleges rather than against the college's own past performance. The community colleges were joined in their lack of enthusiasm for the WDEF by the K-12 districts, which were also subject to the WDEF and had their own criticisms of it. This dissatisfaction on the part of the community colleges and K-12 districts was not counterbalanced by strong enthusiasm on the part of the business community or by strong efforts by the legislative champions of performance funding, for the senators who had championed the WDEF were no longer in office to defend it.


Why Performance Based Budgeting Survived in Florida When the WDEF Did Not


While the WDEF disappeared after 2002, the PBB program survived, even if its funding has been suspended since 2008. This survival is attributable to key differences in their policy design and, ultimately, in how the programs arose.


Unlike the WDEF, the PBB did not hold back a portion of a college's state appropriation and require colleges to win it back. PBB funding was new money, over and above the regular enrollment-based appropriation that colleges received (authors' interviews FL #2a, 6a). This made community college presidents much more comfortable with the PBB than with the WDEF, according to a leading state community college official: "With our PBB, it's split the pot . . . It's just an add-on, and you will get something . . . In [Performance Based Budgeting's] current configuration, I think people are comfortable with it, and they understand it. That's why it's able to stay."


Moreover, PBB funds were distributed based on a formula that the colleges understood (authors' interview FL #6a). A state official familiar with both programs noted:


[In the case of PBB], it's all in the open . . . We meet in the open, and it's all decided. In other words, it's a collaborative effort as opposed to a top-down approach. For PBB to be successful, people have to understand it. They have to be able to replicate the results. And they couldn't do that with the Workforce Development Funding.


In fact, the colleges had a major hand in creating the PBB formula, and they continue to this day to shape it as they see fit. This reflects the fact that the community colleges were much more involved in the initial framing of PBB than they were in the case of the WDEF (Dougherty et al., 2010; Dougherty & Natow, 2010).


We can deepen our analysis of the factors causing some performance funding programs to disappear while others persist by looking at the case of Tennessee.


TENNESSEE: CONTINUATION OF PERFORMANCE FUNDING


Tennessee has the longest running performance funding system in higher education, one that has operated continuously for more than 30 years. The system began as a pilot program in 1974 and went statewide in 1979 (Banta, 1986; Bogue, 2002; Ewell, 1994). The performance funding system began with five equally weighted indicators: the proportion of eligible programs in an institution's inventory that are accredited; student performance in major fields as assessed by national, regional, or state examinations; student performance in general education as assessed by a nationally normed exam; evaluation of instructional programs or services by current students, recent alumni, community members, or employers; and evaluation of academic programs by outside peer review teams (Banta, 1986, pp. 123–128; Bogue & Dandridge-Johnson, 2009). In the years following, some performance indicators were added, others were dropped, and some were measured in new ways (Dougherty & Natow, 2010). At its peak in fiscal year 2005, performance funding accounted for 4.4% of total state appropriations for higher education (Dougherty & Natow, 2010, Table 2).17


Why has Tennessee's performance funding system survived while those in the other three states failed? Key has been the fact that the system has enjoyed much stronger support from higher education institutions than was the case in the states where performance funding was dropped. Moreover, Tennessee higher education institutions faced a very different budgetary situation in the 2000s than did institutions in states such as Florida and Missouri.


The Tennessee state higher education system did not experience budget problems of the size that occurred in Florida and Missouri in the early 2000s. Tennessee did experience a 5.4% drop in total state revenues between fiscal years 2000 and 2002 (U.S. Census Bureau, 2002, Table 429; U.S. Census Bureau, 2006, Table 439). However, state appropriations for Tennessee public higher education institutions actually rose by 3.9% between fiscal years 2001 and 2003, faster than the increase in enrollments,18 unlike Missouri, where state appropriations dropped, and Florida, where the rise in appropriations badly lagged the rise in enrollments (U.S. Census Bureau, 2002, Table 429; U.S. Census Bureau, 2006, Table 439). Thus, there was no push by Tennessee colleges to eliminate performance funding in order to protect base funding. Further protection for performance funding in Tennessee has come from the fact that it is embedded in the appropriation for higher education. The system is not a separate program but is built into the regular budget. Thus, performance funding does not stand out as an element to cut (authors' interviews TN #8, 10).


In addition to a more favorable budgetary situation, performance funding in Tennessee enjoyed the support of the public higher education institutions, in good part because of that budgetary situation. However, performance funding also enjoyed support because its design features were ones that higher education institutions favored. Unlike performance funding in Washington and the Workforce Development Education Fund in Florida, performance funding in Tennessee did not take the form of a holdback that institutions had to earn back through improved performance. Instead, the institutions have viewed performance funding as a source of additional income over and above their base funding (authors' interviews TN #1, 4, 8). A former university administrator explained that many institutions were bringing in a substantial chunk of money based on performance funding, and that these institutions feared that, should the performance funding program be eliminated, the money would go away too.19


Institutional support for performance funding has also been bolstered by the fact that this funding system was, and continues to be, developed with considerable input from the institutions, unlike the situation in Washington and Florida (in the case of the WDEF) (authors' interviews TN #2, 3, 8). The Tennessee Higher Education Commission took 5 years to pilot test the system and develop institutional support for it. As a former state higher education official explained, "This policy was not shoved down our throats by a legislature. It was not imposed in any way. It was something that we developed from within." Since the performance funding program was created, it has been reevaluated every 5 years, with input from advisory committees that include institutional representatives (authors' interviews TN #1, 2, 3). A former Tennessee institutional administrator noted:


[T]hey [the Tennessee Higher Education Commission] did have the advisory committees that met annually to talk it over, and then every 5 years they could actually make changes . . . they'd be responsive to the things that drew the most complaints, or that seemed to be a real improvement over what had been done before.


The long-term persistence of Tennessee's performance funding system is attributable to the fact that the system has enjoyed much stronger support from higher education institutions, in good part because of how the system was designed and implemented, than was the case in the states where performance funding was dropped.20 In addition, Tennessee higher education institutions did not encounter the budget pressures faced by institutions in Missouri and Florida.


SUMMARY AND CONCLUSIONS


One of the great puzzles about performance funding in the United States is that such funding has been popular but unstable, with many states that enacted performance funding later dropping it. To shed light on the causes of this unstable institutionalization of performance funding, we examined the differing experiences of four states: Washington and Missouri, where performance funding was given up (one before the 2001–2002 recession and one during it); Florida, which gave up one performance funding program in 2002 but retained another; and Tennessee, which has retained performance funding for more than 30 years.


COMPARISON TO EARLIER STUDIES OF PERFORMANCE FUNDING DEMISE


Our analysis arrived at findings that converge with but also diverge from Burke and Modarresi's findings on the causes of the demise of performance funding programs (Burke, 2002a, 2002b; Burke & Modarresi, 2000). We concur that higher education opposition played a key role in the demise of performance funding. Such opposition was present in Missouri, Washington, and Florida (in the case of the Workforce Development Education Fund), where performance funding ceased, but not in Tennessee or Florida (in the case of the Performance Based Budgeting program), where it survived. Stimulating this opposition were many of the same factors identified by Burke and Modarresi: a perceived lack of adequate consultation with higher education institutions (Washington and Florida, in the case of the WDEF); the use of performance indicators that higher education institutions did not find valid (Washington and Missouri); a perception of high implementation costs to institutions (Florida's WDEF); and a perception of erosion of campus autonomy (Washington and Missouri). In Tennessee, by contrast, we find little campus unhappiness with performance funding and little perception that performance funding was developed without much input from higher education institutions, uses invalid measures, or badly erodes campus autonomy.


At the same time, our analysis turned up other causes of higher education opposition to performance funding that were not discovered by Burke and Modarresi. One of the most potent was the use of an appropriation holdback, in which a portion of the state appropriation to higher education institutions was held back and the institutions had to earn it back through improved performance. This caused great anger in Washington and Florida (in the case of the WDEF). It is instructive that Florida's surviving PBB program does not have this feature and has had much greater institutional support than did the defunct WDEF. Moreover, we also find that a major cause of higher education opposition to performance funding was a desire to preserve base funding at a time when the economic recession of the early 2000s was devastating state budgets.21 This leads us to our second main break with Burke and Modarresi's findings.


Our inclusion of two cases from the 2000s brings to the surface another factor that went unmentioned by Burke and Modarresi: the crucial impact of downturns in state finances. As we noted, a key feature of our analysis is the inclusion of cases (Missouri and Florida) in which performance funding was dropped or suspended in the 2000s, whereas Burke and Modarresi's cases were restricted to earlier years. The recession of the early 2000s played a major role in the demise of performance funding in Missouri and Florida (in the case of the Workforce Development Education Fund).22 Similarly, the recession of the late 2000s led to the suspension of funding for Florida's Performance Based Budgeting program. As state appropriations for higher education faced cuts or failed to keep pace with enrollments, higher education institutions moved to protect their core state funding and turned against performance funding. Meanwhile, Tennessee's system survived the fiscal challenges of the early 2000s in good part because the system was insulated from the ups and downs of the state revenue cycle.


Finally, a third area in which our findings go beyond those of Burke and Modarresi (Burke, 2002a, 2002b; Burke & Modarresi, 2000) concerns which champions of performance funding were lost. Burke and Modarresi highlighted the loss of gubernatorial support, as governors who championed performance funding were succeeded by governors who were not as interested. We found evidence of the impact of a loss of gubernatorial champions (in the case of Missouri), but we also found evidence of the loss of legislative champions who left office or lost leadership positions (Florida and Washington) and of waning interest on the part of business, which had played an important role in encouraging the establishment of performance funding (Florida and Washington). Moreover, based on the research we conducted in Illinois, we would add the loss of support from the heads of state coordinating boards. A major factor in the demise of performance funding in Illinois was the fact that the state community college officials who spearheaded it were no longer in office when the program expired in 2002–2003 (Dougherty & Natow, 2009).


WIDER IMPLICATIONS: ADDRESSING POLICY TERMINATION AND PROGRAM SUSTAINABILITY THEORY


Our findings have resonance beyond higher education policy. They converge with and diverge from key findings in two important policy research literatures: policy termination theory in the policy sciences literature and program sustainability theory in the public health and social welfare literature.


Our findings concur that policy termination is more likely to occur when a policy is operating in a period of budget cuts; there is a change of administrations, with new office holders who are not wedded to existing policy; the initial champions of a policy are no longer around; and the resistance to policy termination lacks capable leadership or effective defensive tactics (Bardach, 1976; DeLeon, 1978; Kirkpatrick, Lester, & Peterson, 1999). However, we have seen no evidence in our study for two other cited predictors: that the ideological matrix in which the policy is embedded has been delegitimated, and that the policy is new and has had less opportunity to accumulate allies (Bardach, 1976; DeLeon, 1978; Kirkpatrick et al., 1999).


Our findings also agree and disagree with those in the extensive research literature on sustainability of public health and social welfare programs. We also found that program sustainability is enhanced if the program design conforms to traditional practices and organizational forms, the design process allows for input from program constituents, and the implementing institutions champion the policy and have the resources to effectively implement it (Racine, 2006; Scheirer, 2005; Shediac-Rizkallah & Bone, 1998). However, contrary to program sustainability theory, we found no evidence that the demise of performance funding was due to a perceived lack of program impact.


POLICY IMPLICATIONS


The factors causing the demise of performance funding that are discussed above point to three key tasks that advocates of performance funding must undertake if they are to create a sustainable basis for such a program. First, a way of financing performance funding must be found that insulates it from the ups and downs of the state revenue cycle and that provides funding that colleges regard as new money, rather than money that is being held back or coming at the expense of their enrollment-based funding. If these finance issues are not resolved, performance funding is highly vulnerable to being jettisoned when state funding for higher education drops or fails to keep pace with enrollment increases.


Second, performance funding advocates need to find ways of better securing the support of public colleges and universities themselves. Their support might save performance funding in a time of fiscal trial, while their opposition will very likely doom it. Higher education institutions are more likely to support performance funding if it involves new money and if the institutions are given a role in designing the performance funding system. This involvement makes it more likely that the funding structure will be one they find comfortable and that the performance indicators used in the system will reflect missions the institutions value. Moreover, the support of higher education institutions also will be enhanced by finding ways of reducing the administrative burden and financial costs of data collection and analysis imposed on colleges and universities by state performance funding systems.


Third, if advocates wish to enhance the sustainability of performance funding, they need to expand its degree of social support. One key potential supporter is business. Another is social groups that are moved primarily by the values not of efficiency but of educational equality and effectiveness, particularly for underserved students. These equity-oriented actors may be attracted by performance funding that rewards colleges for enrolling, educating, and graduating students from underserved populations (for more, see Dougherty & Hong, 2006; Dougherty, Hare, & Natow, 2009).


Acknowledgments


We thank Lumina Foundation for its financial support of this research. The views expressed here are solely those of the authors. We also wish to thank the Alfred P. Sloan Foundation; some of the interviews cited in this study were conducted as part of research the foundation funded at the Community College Research Center at Teachers College, Columbia University. We wish to thank E. Grady Bogue, Patrick Callan, Edward Cisek, Robert B. Stein, Beth Stevens, Patricia Windham, Jan Yoshiwara, and William Zumeta for their comments on portions of this paper.


Notes


1. Given the widespread interest in performance funding, it is curious that there is a dearth of systematic analyses of its impact on institutional performance. Careful studies are rare, and they do not find strong evidence that performance funding causes striking improvements in institutional outcomes (see Dougherty & Hong, 2006; Dougherty & Reddy, 2011).

2. These interviews and documentary sources led us to discount reports that South Dakota, Indiana, and Idaho had suspended their systems. Our interviews with state higher education officials and academic experts led us to conclude that South Dakota had not relinquished performance funding but had simply changed its form (Richardson & Martinez, 2009, p. 135; Toman, 2008, pp. 70–71). Meanwhile, Indiana did not establish performance funding until 2007, a system that is still in force (M. Baumgartner, formerly Associate Commissioner, Indiana Commission for Higher Education, personal communication, 2010; Indiana Commission for Higher Education, 2008). Idaho has had, since the mid-1990s, a performance funding process for the Technical Colleges that applies only if the legislature appropriates new money for capacity building. For the last couple of years the process has not been funded because of the budget crisis, but it remains in place (M. Rush, Executive Director, Idaho State Board of Education, personal communication, 2011).

3. To draw the wider implications of our findings, we will address in the summary and conclusions to this article how our findings converge and diverge with policy termination theory in the policy sciences literature (Bardach, 1976; DeLeon, 1978; Kirkpatrick et al., 1999) and program sustainability theory in the public health and social welfare literature (Racine, 2006; Scheirer, 2005; Shediac-Rizkallah & Bone, 1998).

4. Ironically, Missouri subsequently abandoned its performance funding system after 2002.  

5. We chose Washington over the other cases of demise during the years 1996–1999 (Arkansas, Colorado, Kentucky, and Minnesota) because the others had already been studied by Burke and Modarresi (Burke, 2002a, 2002b; Burke & Modarresi, 2000).

6. Ohio and South Dakota have also had fairly long-running performance funding systems (Burke & Minassians, 2003; Moden & Williford, 2002; Richardson & Martinez, 2009). However, Tennessee's system is distinctive in that it has lasted far longer (more than 30 years, versus 15 years for Ohio and 13 for South Dakota).

7. We picked this year because it divides the cases of performance funding demise into roughly equal halves and because we have excellent data on higher education governance arrangements (McGuinness, 2003).

8. Performance funding reappeared in Washington with the establishment in 2007 of the Student Achievement Initiative for community colleges (Dougherty et al., 2010; Washington State Board for Community and Technical Colleges, 2007).  

9. Missouri state figures put the decline as even larger: 13.8% (Missouri Department of Higher Education, 2009).

10. Although the Funding for Results program did end, performance funding may reappear in Missouri, as it did in Washington State. A new higher education funding model has been proposed that includes performance funding. This model was given unanimous support by presidents and chancellors of the public colleges and universities (authors interview MO #1; Missouri Department of Higher Education, 2008).

11. The state universities did get some performance funding, but it consisted of only three one-time yearly payments over the past 14 years, with each of those payments amounting to only $3–4 million. The payments were not made as part of the PBB system (authors' interviews).

12. The program was enacted in 1996 under the name Performance Incentive Funding, but was folded into the Performance Based Budgeting program that had been enacted in 1994 (Dougherty et al., 2010).

13. While the performance indicators have changed over time, they have continued to focus on degree completion, transfer to the state university system, successful passage of licensure exams, and securing jobs paying more than $10 an hour (Dougherty & Natow, 2010).

14. The Workforce Development Education Fund (WDEF) was preceded by an experimental Performance Based Incentive Fund, which was established in 1994 and phased out 2 years later. Unlike the WDEF, the PBIF was voluntary and involved less funding (Dougherty et al., 2010; Wright, Dallet, & Copa, 2002).

15. In fiscal year 2001, Florida's performance funding for community colleges through the WDEF reached $46.9 million (Wright et al., 2002, p. 163; Yancey, 2002, pp. 57–62). This figure is based on the 15% withheld from community college workforce funding. In that same year, state appropriations for community colleges (based on general revenues and lottery proceeds) were $842.3 million. Revenues for community colleges from all sources, including state appropriations, student fees, sales and services, other receipts, and federal funding, totaled $1.2 billion (Florida Community College System, 2002, pp. 77, 80).

16. Fall enrollments in Florida public institutions rose by 10.9% between fall 2000 and fall 2002 (National Center for Education Statistics, 2007, Table 194).

17. This was the actual share attained. However, institutions had the potential to earn as much as 5.45%.

18. In Tennessee, enrollments in public higher education institutions actually dropped by 4.1% between fall 2000 and fall 2002. If we exclude the anomalously high enrollments in fall 2000 and compare those for fall 1999 and fall 2002, enrollments rose by only 0.3% (National Center for Education Statistics, 2007, Table 194).

19. In actuality, performance funding in Tennessee might not really be new money but might instead come at the expense of greater regular funding for higher education. However, the perception that the performance funding allocation is new money is widespread and is an important source of support for the performance funding system.

20. Ohio is another state that has maintained performance funding over a good number of years (since 1995), for reasons similar to Tennessee's. Judging from interviews we have conducted there, performance funding has enjoyed strong support from Ohio institutions, in large part because the universities see such funding as a way of getting state funds the institutions would otherwise not get.

21. We also found in Florida additional causes of institutional opposition that were not mentioned by Burke and Modarresi: the perceived opaqueness of the Workforce Development Education Fund formula and the fact that institutional performance improvement was measured relative to the improvement of other institutions, both of which left community colleges quite uncertain about how much of the held-back WDEF funding they would be able to win back.

22. State budget troubles were also a key factor in the demise of performance funding in Illinois in 2002–2003 (Dougherty & Natow, 2009) and Oregon in 2002 (Oregon University System, 2003, p. 13).


References


Alexander, F. K. (2000). The changing face of accountability: Monitoring and assessing institutional performance in higher education. Journal of Higher Education, 71(4), 411–431.


Ammons, D. (1998, November 22). Washington will do quick construction on its split house. The Sunday Oregonian, p. C06. Retrieved from Lexis-Nexis Academic database.


Associated Press. (2003, February 6). Former state lawmaker George Kirkpatrick dies.


Banta, T. W. (Ed.). (1986). Performance funding in higher education: A critical analysis of Tennessee's experience. Boulder, CO: National Center for Higher Education Management Systems (ERIC Document Reproduction Service No. ED310655). Retrieved from ERIC Database.


Bardach, E. (1976). Policy termination as a political process. Policy Sciences, 7, 123–151.


Bell, D. (2005). Changing organizational stories: The effects of performance-based funding on three community colleges in Florida (Unpublished doctoral dissertation, University of California, Berkeley).


Bogue, E. G. (2002). Twenty years of performance funding in Tennessee: A case study of policy intent and effectiveness. In J. C. Burke (Ed.), Funding public colleges and universities: Popularity, problems, and prospects (pp. 85–105). Albany: SUNY Press.


Bogue, E. G., & Dandridge-Johnson, B. (2009, November). Performance incentives and public college accountability: A quarter century policy audit. Presentation at the annual meeting of the Association for the Study of Higher Education, Vancouver, Canada.


Burke, J. C. (2002a). Performance funding: Easier to start than sustain. In J. C. Burke (Ed.), Funding public colleges and universities: Popularity, problems, and prospects (pp. 219–242). Albany: SUNY Press.


Burke, J. C. (2002b). Performance funding: Assessing program stability. In J. C. Burke (Ed.), Funding public colleges and universities: Popularity, problems, and prospects (pp. 243–264). Albany: SUNY Press.


Burke, J. C. (Ed.). (2002c). Funding public colleges and universities: Popularity, problems, and prospects. Albany: SUNY Press.


Burke, J. C. (2005). Reinventing accountability: From bureaucratic rules to performance results. In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 217–245). San Francisco, CA: Jossey-Bass.


Burke, J. C., & Minassians, H. (2003). Performance reporting: Real accountability or accountability lite? Seventh annual survey. Albany: State University of New York, Rockefeller Institute of Government, Higher Education Program.


Burke, J. C., & Modarresi, S. (2000). To keep or not to keep performance funding. Journal of Higher Education, 71(4), 432–453.


Carnoy, M., Elmore, R., & Siskin, L. S. (2003). The new accountability: High schools and high-stakes testing. New York, NY: RoutledgeFalmer.


DeLeon, P. (1978). Public policy termination: An end and a beginning. Policy Analysis, 4(3), 369–392.


Dill, D. (2007). Will market competition assure academic quality? An analysis of the UK and US experiences. In D. F. Westerheijden, B. Stensaker, & M. J. Rosa (Eds.), Quality assurance in higher education (pp. 47–92). Dordrecht, the Netherlands: Springer.


Dougherty, K. J., Hare, R., & Natow, R. (2009). Performance accountability systems for community colleges: Lessons for the Voluntary Framework of Accountability for Community Colleges. A report to the College Board. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/Publication.asp?UID=728


Dougherty, K. J., & Hong, E. (2006). Performance accountability as imperfect panacea: The community college experience. In T. Bailey & V. S. Morest (Eds.), Defending the community college equity agenda (pp. 51–86). Baltimore, MD: Johns Hopkins University Press.


Dougherty, K. J., & Natow, R. S. (2009). The demise of higher education performance funding systems in three states. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/Publication.asp?UID=693


Dougherty, K. J., & Natow, R. S. (2010). Continuity and change in long-lasting state performance funding systems for higher education: The cases of Tennessee and Florida. CCRC Working Paper No. 18. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/Publication.asp?UID=743


Dougherty, K. J., Natow, R. S., Hare, R., & Vega, B. E. (2010). The political origins of state-level performance funding: The cases of Florida, Illinois, Missouri, South Carolina, Tennessee, and Washington State. Working paper #22. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/Publication.asp?UID=819


Dougherty, K. J., & Reddy, V. (2011). The impacts of state performance funding systems on higher education institutions: Review of research literature and policy recommendations. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/Publication.asp?UID=1004


Dougherty, K. J., & Reid, M. (2007). Fifty states of Achieving the Dream: State policies to enhance access to and success in community colleges across the United States. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/Publication.asp?uid=504


Dyckman, M. (2001, February 25). Little spared by Bushs proposed budget cuts. St. Petersburg Times, p. 1D.


Ewell, P.T. (1994). Tennessee. In S. S. Ruppert (Ed.), Charting higher education accountability (pp. 8393). Denver, CO: Education Commission of the States (ERIC Document Reproduction Service No. ED375789). Retrieved from ERIC Database.


Ewell, P. T., & Jones, D. P. (2006). State-level accountability for higher education: On the edge of a transformation. New Directions for Higher Education, 135(Fall), 9–16.


Florida State Board for Community Colleges. (1998). The Florida community college accountability plan at year four: A report for South Florida Community College. Tallahassee, FL: Author.


Florida State Board for Community Colleges. (2000). Accountability in the year 2000. Tallahassee, FL: Author.


Florida Community College System. (2002). The fact book: Report for the Florida college system. Tallahassee, FL: Author. Retrieved from www.fldoehub.org/CCTCMIS/c/
Documents/Fact%20Books/webversion.pdf


Florida State Department of Education. (2009). The fact book: Report for the Florida college system. Tallahassee, FL: Author. Retrieved from http://www.fldoe.org/arm/cctcmis/pubs/
factbook/fb2009/fb2009.pdf


Florida State Department of Education. (2011). The fact book: Report for the Florida college system. Tallahassee, FL: Author. Retrieved from http://www.fldoe.org/arm/cctcmis/pubs/
factbook/fb2011/fb2011.pdf


Florida State University System. (2008). 20072008 fact book. Tallahassee, FL: Author. Retrieved from http://www.flbog.org/resources/factbooks/2007-2008/xls/t40_00_0708_F.xls


Gaither, G., Nedwek, B. P., & Neal, J. E. (1994). Measuring up: The promises and pitfalls of performance indicators in higher education (ASHE/ERIC Higher Education Research Report, No. 5). San Francisco, CA: Jossey-Bass.


Indiana Commission for Higher Education. (2008). Reaching higher for accountability: Embracing accountability for results. Indianapolis, IN: Author. Retrieved from http://www.in.gov/che/files/1-_Accountability-7-7.pdf


Kirkpatrick, S. E., Lester, J. P., & Peterson, M. R. (1999). The policy termination process: A conceptual framework and application to revenue sharing. Policy Studies Review, 16(1), 209–236.


Laws of Florida. (1997). Ch. 97-307, SB 1688.


Layzell, D. (1999). Linking performance to funding outcomes at the state level for public institutions of higher education: Past, present, and future. Research in Higher Education, 40(2), 233–246.


McGuinness, A. (2003). Models of postsecondary education coordination and governance in the states. Denver, CO: Education Commission of the States.


McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis, 28(1), 1–24.


Missouri Department of Higher Education. (2008). Higher Education Funding Task Force final report. Jefferson City, MO: Author. Retrieved from http://www.dhe.mo.gov/files/
heftaskforcereport.pdf


Missouri Department of Higher Education. (2009). Missouri public higher education institutions appropriations. Jefferson City, MO: Author.


Moden, G. O., & Williford, A. M. (2002). Ohio's challenge: A clash of performance funding and base budgeting. In J. C. Burke (Ed.), Funding public colleges and universities: Popularity, problems, and prospects (pp. 169–193). Albany: SUNY Press.


National Center for Education Statistics. (2005). Digest of education statistics, 2004. Washington, DC: Author.


National Center for Education Statistics. (2006). Digest of education statistics, 2005. Washington, DC: Author.


National Center for Education Statistics. (2007). Digest of education statistics, 2006. Washington, DC: Author.


Naughton, B. A. (2004). The efficacy of state higher education accountability programs (Unpublished doctoral dissertation, Stanford University). Digital Dissertations AAT 3145567.


Nisson, B. D. (2003). Performance measures funding: The journey of one Washington community college (Doctoral dissertation, Oregon State University) (UMI No. 3080155).


Oregon University System. (2003). History of OUS performance indicators. Salem, OR: Author. Retrieved from http://www.ous.edu/factreport/mp/files/History_of_PI_in_Oregon.pdf


Pendleton, R., & Saunders, J. (2001, January 17). Bush to unveil tighter budget: Soaring costs eat revenue. Florida Times-Union (Jacksonville), p. B1.


Pfeiffer, J. (1998). From performance reporting to performance-based funding: Florida's experience in workforce development performance measurement. New Directions for Community Colleges, 104(Winter), 17–28.


Racine, D. P. (2006). Reliable effectiveness: A theory of sustaining and replicating worthwhile innovation. Administration and Policy in Mental Health and Mental Health Services Research, 33, 356–387.


Rhoades, G., & Sporn, B. (2002). Quality assurance in Europe and the U.S.: Professional and political economic framing of higher education policy. Higher Education, 43(3), 355–390.


Richardson, R. C., & Martinez, M. (2009). Policy and performance in American higher education. Baltimore, MD: Johns Hopkins University Press.


Ruppert, S. (Ed.). (1994). Charting higher education accountability. Denver, CO: Education Commission of the States.


Sanchez, R. (1998, November 28). Colleges to lose $1 million over state report card. The Seattle Times, p. A1.


Saunders, J. (2001, June 7). As expected, Bush puts Horne in education post; Orange Park senator to leave legislature. Florida Times-Union (Jacksonville), p. A1.


Scheirer, M. A. (2005). Is sustainability possible? A review and commentary on empirical studies of program sustainability. American Journal of Evaluation, 26(3), 320–347.


Schmidt, P. (2002a). Missouri's financing system is praised, but more for longevity than for results. The Chronicle of Higher Education, 48(24), A21.


Schmidt, P. (2002b). Dashed hopes in Missouri. The Chronicle of Higher Education, 49(14), A18.


Shediac-Rizkallah, M. C., & Bone, L. R. (1998). Planning for the sustainability of community-based health programs: Conceptual frameworks and future directions for research, practice, and policy. Health Education Research, 13(1), 87–108.


Shulock, N., & Moore, C. (2002). An accountability framework for California higher education. Sacramento, CA: California State University at Sacramento, Center for California Studies.


Shulock, N., & Moore, C. (2005). A framework for incorporating public trust issues in states' higher education accountability plans. Sacramento, CA: California State University at Sacramento, Institute for Higher Education Leadership and Policy.


Sloca, P. (2002, May 10). Colleges, universities take hit in budget cuts. Associated Press State and Local Wire.


Southern Regional Education Board. (2008). Legislative report: 2008 final report. Atlanta, GA: Author.


Stein, R. (2002). Integrating budget, assessment and accountability policies: Missouri's experiment with performance funding. In J. C. Burke (Ed.), Funding public colleges and universities for performance: Popularity, problems, and prospects (pp. 107–132). Albany: State University of New York Press.


Stein, R. B., & Fajen, A. L. (1995). Missouri's Funding for Results initiative. New Directions for Higher Education, 91, 77–90.


Thomas, C. S., & Hrebenar, R. J. (2004). Interest groups in the states. In V. Gray & R. L. Hanson (Eds.), Politics in the American states (8th ed., pp. 100–128). Washington, DC: CQ Press.


Toman, J. (2008). Partners or adversaries: A comparative case study of the accountability relationship between higher education and state governments in three states (Doctoral dissertation, University of South Dakota).


United States Census Bureau. (2002). Statistical abstract of the United States, 2002. Washington, DC: Government Printing Office.


United States Census Bureau. (2006). Statistical abstract of the United States, 2006. Washington, DC: Government Printing Office.


Washington State Board for Community and Technical Colleges. (1999a). Performance funding for improvement in Washington community colleges. Olympia, WA: Author.


Washington State Board for Community and Technical Colleges. (1999b). Washington community and technical college 1998-99 performance funding allocation $2.05 million assigned to demonstrated performance. Olympia, WA: Author.


Washington State Board for Community and Technical Colleges. (2007). Student achievement initiative. Olympia, WA: Author. Retrieved from http://www.sbctc.ctc.edu/
college/education/student_achieve_briefing_system_announcement_sept07_000.pdf


Washington State General Assembly. (1997). Wa. ALS 454, sections 601-611, 1997 Wa. Ch. 454; 1997 Wa. HB 2259.


Washington State General Assembly. (1999). Wa. ALS 309, sections 601-611, 1999 Wa. Ch. 309; 1999 Wa. SB 5180.


Washington State Higher Education Coordinating Board. (1998). Performance funding and accountability: Progress report and recommendations for the future. Olympia, WA: Author. Retrieved from http://www.hecb.wa.gov/Docs/reports/Account12-1998.pdf


Washington State Higher Education Coordinating Board. (2000). Performance accountability: 1999–2000 academic year review and recommendations for 2001–03. Olympia, WA: Author.


Washington State Higher Education Coordinating Board. (2001). Higher education statistics. Olympia, WA: Author.


Washington State Higher Education Coordinating Board. (2006). Accountability for student success in Washington higher education. Olympia, WA: Author. Retrieved from www.hecb.wa.gov/news/newsreports/documents/AccountabilityFullReport.pdf


Wright, D., Dallet, P. H., & Copa, J. C. (2002). Ready, fire, aim: Performance funding policies for postsecondary education in Florida. In J. C. Burke (Ed.), Funding public colleges and universities: Popularity, problems, and prospects (pp. 137–168). Albany: SUNY Press.


Yancey, G. (2002). Fiscal equity change in the Florida community college system during the first five years after the implementation of performance funding (Doctoral dissertation, University of Florida).


Zumeta, W. (2001). Public policy and accountability in higher education: Lessons from the past and present for the new millennium. In D. E. Heller (Ed.), The states and public higher education policy: Affordability, access, and accountability (pp. 155–197). Baltimore, MD: Johns Hopkins University Press.




Cite This Article as: Teachers College Record, Volume 114, Number 3, 2012, pp. 1–41
https://www.tcrecord.org ID Number: 16313

About the Author
  • Kevin Dougherty
    Teachers College, Columbia University
    KEVIN DOUGHERTY is associate professor and senior research associate, Community College Research Center, Teachers College, Columbia University. His research interests include performance accountability in education, immigration and educational opportunity, the role of higher education in economic development, and the impact of information disparities on student opportunity in education. His recent publications include “Undocumented Immigrants and State Higher Education Policy: The Contrasting Politics of In-State Tuition Eligibility in Texas and Arizona,” Review of Higher Education, 34(1) (Fall 2010) (with H. Kenny Nienhusser and Blanca E. Vega) and “U.S. Community Colleges and Lessons for British Further Education Colleges,” in Tony Dolphin and Jonathan Clifton (Eds.), Colleges 2020 (London: Institute for Public Policy Research, 2010).
  • Rebecca Natow
    Teachers College, Columbia University
    REBECCA S. NATOW is currently a doctoral candidate, graduate assistant, and instructor in the Higher & Postsecondary Education Program at Teachers College, Columbia University. She is also a research associate at the Community College Research Center at Teachers College. Research interests include the politics surrounding the development of higher education public policy at the state and federal levels.
  • Blanca E. Vega
    Teachers College, Columbia University
    BLANCA E. VEGA is currently a doctoral candidate in the Higher & Postsecondary Education Program at Teachers College, Columbia University. Research interests include race and immigration and their impact on higher education achievement.
 