
State Higher Education Performance Funding for Community Colleges: Diverse Effects and Policy Implications


by David A. Tandberg, Nicholas Hillman & Mohamed Barakat - 2014

Background/Context: Community colleges are central to the United States’ college completion goals. A popular strategy, pushed by a number of influential policy organizations and foundations, is tying state funding to community college completions, otherwise known simply as performance funding. This push is happening despite little to no evidence that such a strategy actually increases institutional performance.

Purpose: This study asks, To what extent does the introduction of performance funding programs impact two-year degree completion among participating states?

Population/Participants: We created a unique panel data set for the period 1990 through 2010, using states as our primary unit of analysis. The data set contains 1,050 total observations, drawing from a range of postsecondary data sources.

Research Design: We used a quasi-experimental technique, difference-in-differences regression. Our model also included multiple control variables and state and year fixed effects.

Findings/Results: We find that performance funding had no effect on average, and results for individual states were mixed: performance funding was associated with lower completions in six states, greater completions in four states, and inconclusive patterns in nine states.

Conclusions: We conclude that performance funding is no “silver bullet” for improving community college completions; rather, in some cases, it may interfere with national completion goals. We recommend that state policy makers seek out evidence-based alternatives for improving community college completions.



Community colleges have been a staple in higher education in post-World War II America and have served as the primary point of access to postsecondary learning and credentials for individuals traditionally underserved by four-year colleges and universities (Lucas, 2006). These institutions have experienced a more than three-fold increase in enrollment since 1970 and now enroll about 40% of all postsecondary students (Snyder & Dillow, 2012). Much of this growth can be attributed to their multiple missions, particularly their roles in academic transfer, workforce development, and developmental education (Mullin, 2010). In recent years, state and federal policy makers have encouraged community colleges to do more than enroll students; they must also increase degree attainment rates. As a result, community colleges have become a focal point in national efforts to increase postsecondary educational attainment rates (Complete College America, 2012).


However, critics have not had to look very hard to find data points to support their calls for change within the community college sector. For example, community colleges nationally only graduate 15% of their students on time (two years or less), and only 26% of first-time beginning community college students attained a degree or certificate within five years (ACT, 2011). With community colleges serving as a critical open access point for a large population of Americans, and yet failing to graduate a significant number of those students in a timely way, policy makers have begun rethinking their traditional policy mechanisms in hopes of producing better outcomes, namely, college completions (McKeown-Moak et al., 2013).


What has resulted is a reconceptualization of the traditional means of funding postsecondary education and, in particular, community colleges. Traditionally, community colleges have been funded based on enrollments or a percentage increase/decrease from the prior year's funding; however, states have been moving toward funding institutions based on outputs. States measure these outputs in various ways, including, among other measures, student retention, graduation rates, student scores on licensure exams, job placement rates, faculty productivity, and campus diversity. Ultimately, many of these measures are designed to increase the number of students completing postsecondary credentials, thus serving as proxies for performance. Performance funding has not been isolated to the community college sector; it has existed for some time within the four-year sector too (Tennessee has used performance funding since 1979), with mixed results. Given this context, this study asks, To what extent does the introduction of performance funding programs impact two-year degree completion among participating states?


THE NEED FOR GREATER COLLEGE COMPLETIONS


Community colleges serve a unique role in the United States' system of higher education. Central to this role is their open-access mission, which means they are the point of access for millions of students, including many from diverse backgrounds in terms of race, age, and socioeconomic status. As a sector, community colleges therefore enroll a larger percentage of minority and low-income students than the four-year sector (U.S. Department of Education, 2013). Additionally, community colleges are located within commuting distance of 90% of the U.S. population, so citizens have greater access to these institutions simply because of their proximity to highly populated areas (Boggs, 2011). In addition to being geographically convenient, community colleges also offer convenience with regard to time. Most of their degree programs can be completed in a shorter period than at four-year institutions; many associate's degrees are designed to be completed within two years.


In theory, these conveniences should help students persist in college and earn their degrees. However, because many of these students work full time, are the first in their families to attend college, frequently must take remedial coursework, and come from lower income households, these conveniences do not necessarily translate into degree completion (Tinto, 2012). Similarly, not all students enroll in community colleges with the goal of earning a degree, and state and federal efforts to encourage completion can actually work against degree attainment for some individuals (Goldrick-Rab, 2010). Still others have critiqued community colleges in terms of their “cooling out” function (Clark, 1960, p. 1), in which students enroll but are not supported in ways that ultimately result in degree attainment. For example, students who enter community colleges with the intention of transferring and completing a four-year degree are significantly less likely to achieve their goal than their peers who immediately enter a four-year institution (Clark, 1960, 1980; Dougherty, 1992; Hills, 1965; Laanan, 1996, 2004; Levin, 2000; Melguizo, Kienzl, & Alfonso, 2011). Furthermore, only a quarter of entering community college students attain a two-year degree within five years (ACT, 2011). The long-term effects of not completing an associate's degree are evident. For example, not completing reduces one's earned income by approximately $354,000 over a lifetime (Baum & Ma, 2007). Taken together, it is not enough to simply get students in the door of these colleges; rather, they must also have support structures in place to aid in degree completion.


Current national education goals may help address these concerns: an effort to increase community college completions may reverse some of the educational challenges just described if policy makers and institutional leaders adjust their practices and policies accordingly. In that pursuit, community colleges can play a more central role in lifting the country's educational attainment rates. In fact, in addition to the Obama administration's goal of 8.2 million postsecondary degrees, the administration has called for an additional five million community college graduates by 2020. Likewise, the Bill and Melinda Gates Foundation has made community colleges the centerpiece of its postsecondary grant making. In explaining this decision, the foundation argued that community colleges are flexible, affordable, and accessible institutions and that, as a sector, they enroll the largest number of low-income students (Bill and Melinda Gates Foundation, 2013). The recent attention paid to community colleges, and the desire to improve their outcomes, has led a number of organizations to specifically advocate for performance funding for the two-year sector (Altstadt, 2012; Complete College America, 2012). This push is happening despite no empirical evidence, beyond anecdotal and basic descriptive data, that performance funding positively impacts community college outcomes (Altstadt, 2012).


RECENT TRENDS IN PERFORMANCE-BASED FUNDING


To address the need for greater college completions, particularly among community colleges, many states have turned to performance funding. State performance funding for higher education grew out of a movement that took hold in the 1990s and shifted the way states approached their governmental oversight and accountability responsibilities. During this time, states moved away from an approach that focused on regulatory compliance and rudimentary reporting of inputs to one focused on measuring performance and accounting for outcomes (Burke, 2005; McGuinness, 1995; McLendon, 2003a; Volkwein, 2007; Volkwein & Tandberg, 2008). This new governmental approach to oversight has commonly been referred to as the “reinventing government” movement (Fryar, 2011; Gore, 2003; Osborne & Gaebler, 1992; Rabovsky, 2012). The application of the reinventing government movement to state higher education governance and finance has been termed the “new accountability” movement in public higher education, for which performance funding was a central policy solution (Gorbunov, 2004; McLendon, Hearn, & Deaton, 2006; Toutkoushian & Danielson, 2002; Zumeta, 1998).


Although Tennessee has had a performance funding program in place since 1979, these programs did not take hold nationally until the 1990s. McLendon et al. (2006) noted that in 1985, only two states had such a policy, but by 2001, nearly half of all states had adopted (though did not necessarily maintain) performance funding systems. In the early 2000s, the United States experienced a decline in the number of these programs. Nevertheless, a new wave of states recently began adopting performance funding programs. Since 2007, eight states have adopted or significantly revised their performance funding programs, and several other states are actively working toward the development of their own programs. This new wave has been commonly referred to as performance funding 2.0 (Dougherty & Reddy, 2011; Rabovsky, 2012). The performance funding 2.0 movement has been spurred by a number of national foundations, policy organizations, and nonprofits, including the Bill and Melinda Gates Foundation, the Lumina Foundation, the National Center for Higher Education Management Systems, and Complete College America, which have been advocating for the implementation of state performance funding programs as a way of increasing postsecondary attainment rates (Rabovsky, 2012).


Although there is as yet no empirical evidence tying performance funding to increased performance, advocates argue that previous attempts have suffered from poor design, inadequate funding, and lack of commitment, among other factors (Dougherty & Reddy, 2011). Performance funding 2.0 aims to produce better outcomes by distinguishing itself from the earlier efforts in at least two notable ways: (1) a number of states have included not only ultimate performance indicators (graduation rates, completions, and so on) in their funding programs but also intermediate achievement indicators such as course completions and completion of developmental courses; and (2) several are no longer using performance funding as a bonus on top of an institution's regular appropriation and are instead embedding it into the regular base funding that institutions receive (Dougherty & Reddy, 2011). Advocates argue that these innovations have the potential to significantly improve the outcomes associated with performance funding (Complete College America, 2012; Dougherty & Reddy, 2011; Jones, 2012a, 2012b).


Currently, nine states operate performance funding models for their community college systems; although each state implements its program differently, one commonality is that they all use performance funding as a policy instrument for aligning institutional behaviors with state priorities. In total, 20 states have had some version of performance-based funding affecting community colleges. In each case, a primary performance indicator is increased completions (McKeown-Moak et al., 2013). Not all these programs were sustained over time; some states adopted programs that were eventually discontinued. Nevertheless, the underlying theory of action is the same for participating states: By linking a portion of state funding directly and tightly to the performance of public campuses on individual indicators, states should begin to experience greater educational attainment levels (Burke & Minassians, 2003, p. 3). The amount of performance funding an institution receives is determined by the institution's performance and the weights assigned to the various performance indicators. Notably, not all states have adopted performance funding programs, and not all programs are designed exactly the same; performance funding models are diverse, and each state's program is unique in regard to the amount of money devoted to the program and the number and type of performance indicators. Despite these different policy design strategies, there is a common feature across all states: They all use completions as a key policy outcome (McKeown-Moak et al., 2013).


States measure performance in various ways, including, among other measures, student retention, graduation rates, student scores on licensure exams, job placement rates, faculty productivity, and campus diversity. Ultimately, many of these performance measures are designed to increase the number of students completing postsecondary credentials. However, the performance-based funding metrics established for traditional four-year institutions have conflicted with the mission and realities of the community college model. Students enter community colleges with a variety of aspirations, including degree seeking, personal enrichment, professional development, and taking individual courses to transfer to another institution (Tinto, 2012). Additionally, because of the institutions' open-access mission and low cost, community college students on average tend to be less academically prepared for college, be older, have fewer financial resources, come from less educated homes, and be more likely to stop and start than their peers at four-year institutions; community colleges also tend to enroll larger percentages of minority students. Therefore, the community college student population tends to be far more transient, come with a wider variety of educational goals, and face significantly greater challenges in accomplishing those goals than the traditional four-year college and university student population.


CONCEPTUAL FRAMEWORK AND RELEVANT LITERATURE


State higher education performance funding programs are best understood from the perspective of principal-agent theory. Under principal-agent theory, the principal (in this case, the state) uses rewards and sanctions to ensure that the agents (public community colleges) meet the principal's goals. From the principal-agent theoretical perspective, both parties are assumed to be self-interested actors who seek to maximize their own, often divergent, preferences. These conditions compel principals to invest resources in monitoring the behavior of agents in an effort to control their behavior. How the various actors manage their relationships and individual interests is the primary concern of principal-agent theorists and researchers (Moe, 1987). The relationship is complicated when principals and agents disagree on the policy goals, when agents face multiple principals, or when the incentives impose significant transaction costs on the agent. Some institutions may not have the will or the resources to increase college completions, even when financial incentives are in place. Others may be unclear about the state's priorities if there are several competing performance metrics that must be attended to. As a result, some colleges may not work toward the policy goals if incentives are insufficiently strong, clear, or appropriate (Brewer, Gates, & Goldman, 2004; Lane & Kivisto, 2008).


McLendon (2003b) suggested that principal-agent theory provides a useful conceptual lens through which facets of political control of state higher education institutions can be examined. Furthermore, Lane and Kivisto (2008) argued that principal-agent theory provides common assumptions for investigating the role of individual and organizational interests, information flows, and incentives in higher education administration and governance (p. 142). Principal-agent theory provides a useful frame for our study because the underlying theory of higher education performance funding programs is the idea that states can incentivize institutions to alter their business practices to produce outcomes better aligned with state priorities. The theory also helps us anticipate some of the challenges and transaction costs that state policy makers may face when implementing performance funding models and the challenges faced by institutions attempting to respond to the states' incentives. From this perspective, performance funding programs fall under McDonnell and Elmore's (1987) inducements category of policy instruments, in which the principal attempts to induce specific behavior from the agent in order to produce something of value (more graduates, educated citizens, and so on) through monetary rewards.1


When turning to the current research on higher education performance funding, the evidence is mixed with regard to how institutions respond to these state policies. We were unable to identify any study focusing specifically on the impact of performance funding programs on two-year college outcomes. Nevertheless, several studies have focused on the impact of these programs on public higher education generally, specifically as they relate to four-year (rather than two-year) colleges and universities. Both Dougherty and Reddy (2011) and Rabovsky (2012) found that performance funding is associated with changes in campus decision making. Dougherty and Reddy (2011) found that these programs impact campus planning efforts and administrative strategies, and Rabovsky (2012) found that four-year colleges and universities respond to performance funding by investing more money into instruction. These findings suggest that performance funding programs do impact the behavior of institutions, which are reallocating resources in response to the new policies. However, the findings do not address whether completion levels or other outcomes have changed as a result of the policy.


Two studies have examined the relationship between performance funding and student or institutional outcomes. Sanford and Hunter (2011) estimated how changes in Tennessee's existing performance funding program impacted retention and graduation rates and concluded that the changes had no systematic effect. Shin and Milton (2004) examined performance funding programs nationally and also concluded that the programs had no systematic impact on graduation rates.


The nonfindings may result from a number of factors, including poor policy design and/or poor policy implementation. Dougherty and Reddy (2011) identified a number of obstacles to effective performance funding, including, among other things, inappropriate measures; instability in funding indicators and measures; the brief duration of some programs; inadequate funding; institutional resistance; and gaming of the system. Other factors may be at work too. Tracking the development of performance funding is a challenging task for any researcher because these policy developments change over time and are not always codified into state statute. As a result, much of the research on the subject relies on self-reported survey results, case studies, or reviews of internal governmental documents. For example, McLendon et al. (2006) summarized several national surveys on performance funding and concluded that 25 states had operated these funding models by 2002. However, a recent study by Rabovsky (2012) indicated that far fewer states had (or currently have) performance funding models. Therefore, these identification, or data quality, issues could be another reason that the current research on performance funding is so inconclusive.


DATA SOURCES AND RESEARCH DESIGN


As states adopted performance funding policies over the years, several researchers documented the evolution of these policy developments. The earliest efforts to document these developments came from Joseph Burke's annual surveys of state chief financial officers. From 1998 through 2003, these surveys revealed that 26 states self-reported operating performance funding at some point in time. Not all these states actually funded their programs, but states have been experimenting with performance funding since at least the 1990s. These surveys are no longer in use, but Dougherty and colleagues have continued to document performance funding's evolution through a series of case studies, national reports, and policy audits. Burke's and Dougherty's work is complementary; the Burke surveys provide a broad overview of state adoption of performance funding, and Dougherty and colleagues provide in-depth details and a history of state-specific policy developments, particularly as they relate to community colleges.


One of the challenges of identifying performance funding states from these two sources is that they sometimes conflict with regard to the origins (and demise) of the funding policies. For example, the Burke surveys do not indicate the years when states codified and first funded their performance funding programs, whereas Dougherty and colleagues captured this information in their case studies. However, Dougherty and colleagues have not yet conducted case studies of every performance funding state, so we turned to several additional sources to ensure that we have documented the correct start and end dates for community college performance funding. These resources include the Education Trust's examination of state performance funding programs (Aldeman & Carey, 2009), the American Federation of Teachers' What Should Count? database (2012), the American Association of State Colleges and Universities' report on performance funding (Harnisch, 2011), Jobs for the Future's examination of state community college performance funding programs (Altstadt, 2012), and the data collection efforts of Alexander Gorbunov at Vanderbilt University. Finally, we shared our list with associates at the National Center for Higher Education Management Systems and with Kevin Dougherty as a peer check to ensure that these states and dates are correct. In the absence of a central database or other resource that has systematically tracked the origins, demise, and enabling legislation over time, our list may provide a useful tool for other researchers and policy analysts interested in studying this topic in the future.


Interestingly, some states adopted performance funding policies but did not immediately fund their efforts. Missouri stands out as an example; in 1991, the state legislature passed the Economic Survival Act, which introduced performance funding for higher education, but the state did not actually fund the program until 1993. That year, only public four-year colleges and universities were subject to the new policy, and it wasn't until 1994 that community colleges were subject to (and funded by) performance funding (Burke, 2002; Dougherty, Natow, Bork, Jones, & Vega, 2011). Accordingly, our list indicates that Missouri's program began in 1994. The state suspended performance funding indefinitely in 2002 because of fiscal challenges and is now actively engaged in developing a new performance funding 2.0 model (National Conference of State Legislatures, 2013). As seen with Missouri, states sometimes adopt and implement, but later discontinue, their performance funding efforts. Colorado is another example; the state adopted performance funding in 1994 but phased out the program by 1996. Four years later, the state legislature readopted performance funding, but only until 2004.


These on and off periods introduce a degree of financial uncertainty for campuses, so we suspect that states with more stable performance funding programs may experience different outcomes than those states that only temporarily adopted (or even readopted) performance funding efforts (see Table 1). But these periods can also be a plausible source of exogenous variation that can simulate a natural experiment through which we can examine how performance funding impacts educational outcomes. The following section discusses data sources and variable selection, followed by an overview of our analytical techniques.


Table 1. State Performance Funding for Public Community Colleges

State              Years in operation
Arkansas           1995-1997
Colorado           1994-1996, 2000-2004
Florida            1996-2008
Idaho              2000-2005
Illinois           1998-2002
Indiana            2007-current
Kansas             2002, 2005-current
Kentucky           1995-1997
Minnesota          1995-1998
Missouri           1994-2002
New Jersey         1999-2002
New Mexico         2007-current
North Carolina     2001-2008
Ohio               2009-current
Oklahoma           1998-2000, 2002-current
South Carolina     1996-2004
Tennessee          1979-current
Texas              2009-current
Washington         2007-current
Virginia           2007-current



DATA SOURCES AND VARIABLE SELECTION


We created a unique panel data set for the period 1990 through 2010, using states as our primary unit of analysis. The data set contains 1,050 total observations, drawing from a range of postsecondary data sources, including the U.S. Department of Education's Digest of Education Statistics and Integrated Postsecondary Education Data System (IPEDS); State Higher Education Executive Officers (SHEEO) and Grapevine reports; the National Association of State Scholarship and Grant Aid Programs (NASSGAP); the U.S. Census Bureau; and the Bureau of Labor Statistics. Because Tennessee adopted performance funding before 1990, it is excluded from our analysis, as discussed next.


The outcome of interest is total associate's degrees completed within each state during each year. Although some states link performance funding to multiple measures, such as retention rates, transfer rates, or even job placement rates, every state ultimately aims to increase degree completion. Notably, our outcome variable represents one of many potential policy outcomes. Given the scarce amount of research on the impacts of performance funding (particularly in community colleges), we believe this outcome serves as a useful starting point for building a research agenda around alternative outcomes that could be further explored in later work. Our focus is on academic degrees rather than vocational certificates and awards; accordingly, we exclude certificates and awards from this analysis. Our initial analyses did examine performance funding's impact on certificates and awards, but those models did not yield substantial or significant results. Additionally, focusing solely on associate's degrees provides parsimony for interpreting the results and keeps the focus on policy issues that relate to academic rather than vocational programs.


Drawing from the conceptual framework and reviewed literature, we posit that associate's degree production within states is a function of each state's higher education financial investment (i.e., appropriations and grant aid per student), as well as published tuition and fees, inflation-adjusted to 2010 dollars. States with high degrees of community college participation will likely experience greater gains in associate's degree production, so we control for the percentage of all public postsecondary students (four-year and two-year) enrolled in the community college sector. Degree production is also a function of the stock of human capital in the state, so we include the total number of high school graduates and state unemployment and poverty rates to control for these factors that change over time. In addition, we expect to find increases in degree production the longer a state has operated performance funding, so we control for the number of years each state's program has been in operation. To account for unobservable factors that do not change over time, we also include state fixed effects, and we include year fixed effects to account for annual shocks that affect all states in similar ways (discussed in more detail next).
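To make the variable construction concrete, the following sketch (our illustration, not the authors' code) assembles a hypothetical state-year panel; the file name and columns such as pf_state and pf_start_year are assumptions, and the logged variables mirror the model described in the next section.

```python
import numpy as np
import pandas as pd

# Hypothetical state-year panel, 1990-2010 (file name and columns are assumptions).
df = pd.read_csv("state_panel_1990_2010.csv")

# Treatment indicator: 1 for performance funding states in years at or after adoption.
df["treat_post"] = ((df["pf_state"] == 1) &
                    (df["year"] >= df["pf_start_year"])).astype(int)

# Years the program has been in operation (0 before adoption and for non-adopters).
df["opyears"] = np.where(df["treat_post"] == 1,
                         df["year"] - df["pf_start_year"] + 1, 0)

# Log the outcome and the continuous controls used in the model below.
for col in ["degrees", "ccshare", "pop", "tuition4", "tuition2",
            "grantaid", "approps", "unemploy", "poverty"]:
    df["ln_" + col] = np.log(df[col])
```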


ANALYTICAL TECHNIQUE


Our research question asks, To what extent does the introduction of performance funding programs impact two-year degree completion among participating states? One of the challenges in answering this question (and in other policy research) is that it is impossible to observe what would have happened in the absence of a performance funding policy. In the absence of such a counterfactual, we turn to quasi-experimental research designs to simulate plausible natural experiments. In particular, we implement a difference-in-differences regression technique in which we observe changes in degree completions before and after the introduction of performance funding. We compare these differences against a set of similar states that never adopted community college performance funding in order to simulate a plausible natural experiment. As a result, we can approximate what could have happened in the absence of performance funding policies. The central aim of this quasi-experimental technique is to improve the internal validity of our models to get closer to true causal relationships between the policy and our measurable outcomes. Of course, we never observe the true causal relationship, so results should be interpreted with caution.
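In its simplest two-group, two-period form (a stylized illustration of the logic, not the authors' notation), the difference-in-differences estimate subtracts the pre/post change among never-adopting states from the pre/post change among adopting states:

$$\hat{\delta}_{DiD} = \left(\bar{y}_{\text{PF, post}} - \bar{y}_{\text{PF, pre}}\right) - \left(\bar{y}_{\text{non-PF, post}} - \bar{y}_{\text{non-PF, pre}}\right)$$

The second difference nets out trends common to both groups, so, under the assumption that adopting and non-adopting states would otherwise have followed parallel trends, what remains is attributable to the policy.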


An additional step toward improving our models' internal validity is adding year and state fixed effects. The year fixed effects account for unobserved factors that affect all states in a given year, such as changes in federal aid policies or recessions that impact states in similar ways. The state fixed effects account for unobserved state characteristics that are relatively stable over time. The final model is expressed as:


$$y_{it} = \alpha + \beta_1(\text{treat} \times \text{post})_{it} + \beta_2(\text{opyears})_{it} + \beta_3\ln(\text{ccshare})_{it} + \beta_4\ln(\text{pop})_{it} + \beta_5\ln(\text{tuition4})_{it} + \beta_6\ln(\text{tuition2})_{it} + \beta_7\ln(\text{grantaid})_{it} + \beta_8\ln(\text{approps})_{it} + \beta_9\ln(\text{unemploy})_{it} + \beta_{10}\ln(\text{poverty})_{it} + \gamma_i + \eta_t + \varepsilon_{it}$$


where y is the logged value of total associate’s degrees awarded in each state (i) for each year (t), α is the intercept, and, because states introduced performance funding at different times, the interaction (treat × post) is set to unity for all performance funding states in the years after adopting the policy. Additionally, the variable opyears measures the number of years performance funding has been in operation within each state.


To account for other observable factors associated with degree completion, we include the logged values of eight additional control variables, including the share of total public postsecondary students enrolled in community colleges (ccshare) and total state population (pop). The variables tuition4 and tuition2 measure the average sticker price of tuition and fees for public four-year and public two-year colleges within the state, respectively. State subsidies are measured via total state grant aid per full-time equivalent (FTE) student (grantaid) and higher education appropriations per FTE (approps). To account for socioeconomic characteristics of states, we include the annual average share of the labor force that is unemployed (unemploy) and each state's share of total population living below the federal poverty line (poverty). Finally, γi represents the state (i) fixed effects, and ηt represents the year (t) fixed effects. We implemented a Wooldridge test summarized by Drukker (2003) and found serial correlation in our error terms, so we cluster the errors at the state level. This approach makes our models robust to autocorrelation and to within-panel heteroskedasticity (Wooldridge, 2002).
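As a minimal estimation sketch of the model above (assuming the hypothetical panel built earlier and using statsmodels' formula interface), the state and year fixed effects enter as categorical dummies and the standard errors are clustered by state:

```python
import statsmodels.formula.api as smf

# Variable names are the hypothetical stand-ins defined in the earlier sketch.
formula = (
    "ln_degrees ~ treat_post + opyears + ln_ccshare + ln_pop + ln_tuition4 + "
    "ln_tuition2 + ln_grantaid + ln_approps + ln_unemploy + ln_poverty + "
    "C(state) + C(year)"  # state and year fixed effects
)

# Cluster the standard errors at the state level to address serial correlation.
results = smf.ols(formula, data=df).fit(cov_type="cluster",
                                        cov_kwds={"groups": df["state"]})
print(results.summary())
```

In this sketch, the coefficient on treat_post plays the role of β1, the average post-adoption difference between performance funding states and the comparison states.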


ROBUSTNESS CHECKS


To assess the robustness of our difference-in-differences results, we implement a series of robustness checks to examine how well the findings hold up under alternative comparison groups. First, the national analysis that compares all performance funding states against all nonperformance funding states could rest on too broad a comparison group if regional factors are associated with policy adoption. For example, if all performance funding states were clustered in the New England region, it would be difficult to argue for including states from the intermountain west in the comparison group. Accordingly, our first robustness check restricts the sample to neighboring states, comparing performance funding states only against states that share a border with them. Regional comparisons are a common strategy in difference-in-differences designs, in which the researcher's aim is to rule out alternative explanations that could plausibly drive the results (Meyer, 1995; Shadish, Cook, & Campbell, 2002). As a second robustness check to falsify our original findings, we compare states according to their public higher education governance structures. Most performance funding programs exist in states with moderately centralized planning and coordination efforts, so we suspect that performance funding states could be more similar to states that also share a common governance structure. Literature on higher education governance has found that states with similar public higher education governance structures may not only design similar policies but also influence the direction of higher education in more tightly controlled ways (McLendon, 2003a). Accordingly, it is plausible that states with similar governance oversight (in this case, coordinating/planning boards) could be a compelling comparison group, so we offer this as our second robustness test.


As a third robustness check, we examine state-specific impacts of performance funding on degree completions. It is possible that some states experience positive (or negative) completion effects after the introduction of the new policy, but this heterogeneity would go unnoticed in our original difference-in-differences design. The original design only estimates the average treatment effect across all states, so our third robustness check interacts the performance funding dummy variable with each performance funding state in order to identify evidence of heterogeneous effects in each performance funding state. To illustrate the importance of providing this robustness check, consider the implications of not extending the analysis beyond average treatment effects. We may find that the average treatment effect of performance funding is negative; however, this pattern may not hold true for all performance funding states because each state designs its programs differently. It is possible that some states experience positive impacts, whereas others experience negative (or even null) impacts on college completion; in an effort to investigate this possibility, we rerun the analysis with these state-specific interactions.
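A sketch of this third check, continuing the hypothetical panel above: replacing the single treatment dummy with a set of state-specific post-adoption indicators yields one coefficient per performance funding state.

```python
import numpy as np
import statsmodels.formula.api as smf

# Label post-adoption observations with their state name; all other observations
# fall into a single "none" reference category, so each adopter gets its own effect.
df["state_treat"] = np.where(df["treat_post"] == 1, df["state"], "none")

formula_het = (
    "ln_degrees ~ C(state_treat, Treatment(reference='none')) + opyears + "
    "ln_ccshare + ln_pop + ln_tuition4 + ln_tuition2 + ln_grantaid + "
    "ln_approps + ln_unemploy + ln_poverty + C(state) + C(year)"
)
het_results = smf.ols(formula_het, data=df).fit(cov_type="cluster",
                                                cov_kwds={"groups": df["state"]})
```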


LIMITATIONS


The study is limited in the following ways. First, states design performance funding policies according to their own statewide educational goals and relative to their financial capacity. It is beyond the scope of our analysis to identify which specific policy designs make the greatest impact on degree completions, yet we believe our results (particularly the state-specific analysis) can inform further research examining the adoption and implementation of performance funding policies. Second, although we control for the number of years each state operated performance funding and we include lags, the lagged time horizon may not be sufficient, and there may still be a timing element that goes underexamined in this analysis. Further research could explore this possibility through comparative case studies and additional research into the origins, implementation, and impacts of performance funding. In addition, because Tennessee began its performance funding model before the years investigated in our analysis, the state has no pre-treatment observations and is excluded from our analysis. Finally, despite our efforts to improve the study's internal validity, we acknowledge that our parameter estimates could still be biased because of omitted variables. Because of these factors, we are careful not to overstate causal claims when interpreting our results.


KEY FINDINGS


Table 2 provides descriptive statistics of the variables included in the analysis. Here, we see that between 1990 and 2010, the average state produced approximately 9,600 public associate's degrees per year, and performance funding states produced more degrees than the national average. This is likely because performance funding states tend to be larger than other states, as measured by total public four-year enrollment levels and total public high school graduates. Additionally, performance funding states tend to have more generous student financial aid packages than other states and slightly lower tuition and fees. Community college performance funding states also differ from other states in the share of students enrolled in the community college sector. In performance funding states, approximately 43% of public college students enroll in community colleges, whereas the national average is slightly lower at 39%.


Table 2. Mean (Standard Deviation) of State Higher Education Characteristics, 1990-2010

                                              All states     Nonperformance      Performance
                                                             funding states      funding states
Associate's degrees conferred                 9,618.0        8,051.5             12,091.3
                                              (12,242.7)     (13,126.5)          (10,237.6)
Duration of performance funding (in years)    2.3            0.0                 5.9
                                              (3.5)          (0.0)               (3.2)
Share of total public enrollment in c.c.      38.9           36.6                42.5
                                              (15.0)         (16.2)              (12.2)
State population (in 1000s)                   4,248.1        3,639.8             5,208.6
                                              (4,679.8)      (5,075.5)           (3,787.3)
Tuition (public four-year)                    4,915.5        4,996.7             4,787.1
                                              (2,019.3)      (2,032.0)           (1,994.9)
Tuition (public two-year)                     2,297.5        2,432.6             2,084.1
                                              (963.5)        (1,033.2)           (797.8)
State aid per FTE                             505.5          434.3               617.9
                                              (469.7)        (482.4)             (425.8)
State appropriations per FTE                  7,198.4        7,313.7             7,016.4
                                              (2,029.7)      (2,351.2)           (1,360.1)
State unemployment                            5.4            5.4                 5.5
                                              (1.8)          (1.9)               (1.6)
State poverty rate                            12.5           12.1                13.0
                                              (3.6)          (3.6)               (3.4)
Observations                                  1,029          630                 399
Number of states                              49             30                  19


Eleven states discontinued their performance funding programs, and eight (nine including Tennessee) currently operate this funding model. When states implemented performance funding, the average duration of the program was approximately 5.9 years. In our regression analysis, we control for these differences in our attempt to isolate the relationship between performance funding and degree completions. As discussed earlier, the regression model converts these values to their natural logs to help the model conform to regression assumptions. Table 3 provides our regression estimates of the average treatment effects meant to help us answer our research question. As discussed earlier, we compare performance funding states against multiple comparison groups as a robustness check (neighboring states and planning/coordinating board states). We also report lagged results to test the assumption that it may take at least two years for states to experience the program's effects.




Table 3. Effects of Performance Funding on Public Associate's Degree Completions, 1990-2010

                                              No lag                                        Two-year lag
                                              All        Neighboring  Plan./Coord.          All        Neighboring  Plan./Coord.
                                              states     states       board states          states     states       board states
Treat x post                                  -0.009     -0.009       -0.012                -0.011     -0.011       -0.017
                                              (0.024)    (0.024)      (0.028)               (0.027)    (0.026)      (0.027)
Duration of PBF (in years)                    -0.004     -0.005*      -0.004                -0.002     -0.004       -0.002
                                              (0.003)    (0.003)      (0.004)               (0.003)    (0.003)      (0.004)
Share of total public enrollment in c.c.      0.011      0.005        0.078                 -0.003     -0.014       0.178
                                              (0.026)    (0.025)      (0.105)               (0.031)    (0.030)      (0.152)
State population (in 1000s)                   1.156***   1.027***     0.688**               1.115***   1.001***     0.903**
                                              (0.168)    (0.161)      (0.329)               (0.166)    (0.156)      (0.386)
Tuition (public four-year)                    0.130*     0.111        0.102                 0.123      0.162**      0.163*
                                              (0.068)    (0.074)      (0.071)               (0.077)    (0.080)      (0.091)
Tuition (public two-year)                     -0.003     0.028        -0.028                -0.032     -0.052       -0.070
                                              (0.076)    (0.099)      (0.122)               (0.070)    (0.077)      (0.108)
State aid per FTE                             0.018      0.015        0.060                 0.024*     0.020        0.026
                                              (0.015)    (0.016)      (0.036)               (0.014)    (0.014)      (0.036)
State appropriations per FTE                  0.085      0.076        0.066                 0.007      -0.006       0.062
                                              (0.087)    (0.089)      (0.116)               (0.078)    (0.075)      (0.106)
State unemployment                            0.125**    0.114        0.093                 0.117**    0.061        0.081
                                              (0.061)    (0.068)      (0.101)               (0.056)    (0.059)      (0.091)
State poverty rate                            -0.054     -0.037       -0.035                -0.054     -0.034       -0.039
                                              (0.041)    (0.043)      (0.054)               (0.041)    (0.038)      (0.051)
Intercept                                     -10.597*** -8.671***    -3.561                -8.899***  -7.152***    -6.949
                                              (2.487)    (2.433)      (5.043)               (2.430)    (2.334)      (5.737)
Observations                                  1029       861          542                   931        779          492
R2                                            0.740      0.787        0.771                 0.707      0.762        0.733
State fixed effects                           yes        yes          yes                   yes        yes          yes
Year fixed effects                            yes        yes          yes                   yes        yes          yes

Note: Robust standard errors in parentheses; PBF = performance-based funding.
* p < .10. ** p < .05. *** p < .01.



The results for the average treatment effects, displayed in Table 3, do not reveal any significant effect on two-year college completions. In fact, at no point do any of the treatment coefficients even approach statistical significance. Compared with findings for the four-year sector, in which performance funding has been associated with positive effects on bachelor's degree completions, this finding is particularly startling. The coefficient for treat x post is negative, suggesting, if anything, that the introduction of performance funding decreased completions, but the estimates are statistically insignificant. We cannot infer that performance funding has reduced completions, but it is noteworthy that this funding effort has not improved community college completions.
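Because the outcome is measured in logs, the coefficients can be read approximately as proportional changes. As a back-of-the-envelope reading of Table 3 (our illustration, not a calculation reported in the tables):

$$e^{-0.009} - 1 \approx -0.9\%$$

so even taken at face value, the treat x post point estimate implies less than a 1% change in associate's degree completions, and with a standard error of 0.024 the confidence interval comfortably includes zero.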


On average, performance funding has not yielded the sort of impact that states may have hoped or expected to see (i.e., positive and significant). Likewise, a significant amount of attention and policy action has already been devoted to the development of performance funding programs affecting the two-year sector, with no evidence of positive results. As we discuss later, these results suggest that performance funding for community colleges is not only ineffective in terms of raising educational attainment but may simply be misaligned with the mission and markets of two-year institutions.


STATE-SPECIFIC RESULTS


As noted earlier, we suspect that individual states experienced different patterns (i.e., heterogeneous treatment effects) that may break away from the average effects discussed previously. This seems plausible because each state designs and/or implements performance funding differently, and this heterogeneity may be overlooked if one simply looks at these average treatment effects. For example, some states organize stronger performance incentives (e.g., South Carolina) and therefore may be better able to impact degree productivity via their performance funding policies. Similarly, some states may have approached performance funding in a collaborative manner that yields greater buy-in from institutional stakeholders. Our purpose is not to explain why some states were more (or less) successful in their educational outcomes; rather, we attempt to identify the extent to which our results from Table 3 hold when examining each performance funding state. The results displayed in Table 4 paint an interesting picture of the state-level performance funding landscape for the two-year sector.



Table 4. State-Specific Effects of Performance Funding on Public Associate's Degree Completions, 1990-2010

                                              No lag                                        Two-year lag
                                              All        Neighboring  Plan./Coord.          All        Neighboring  Plan./Coord.
                                              states     states       board states          states     states       board states
Arkansas x post                               -0.206***  -0.183***    -0.211***             0.238***   0.221***     0.199***
                                              (0.020)    (0.021)      (0.028)               (0.023)    (0.023)      (0.040)
Colorado x post                               -0.033*    -0.044**     -0.042*               -0.064***  -0.077***    -0.060**
                                              (0.017)    (0.020)      (0.024)               (0.014)    (0.015)      (0.026)
Florida x post                                0.050*     0.034        0.060                 0.024      0.011        0.084
                                              (0.026)    (0.028)      (0.054)               (0.026)    (0.027)      (0.064)
Idaho x post                                  -0.038*    -0.050**     -0.047                -0.054**   -0.067***    -0.052
                                              (0.019)    (0.019)      (0.031)               (0.021)    (0.022)      (0.040)
Illinois x post                               0.012      0.001        0.005                 0.017      0.035        0.019
                                              (0.022)    (0.025)      (0.032)               (0.022)    (0.023)      (0.030)
Indiana x post                                -0.007     0.014        0.069                 0.002      0.013        0.080
                                              (0.034)    (0.038)      (0.065)               (0.032)    (0.034)      (0.054)
Kansas x post                                 0.048**    0.069***     0.065                 0.073***   -0.090***    0.091***
                                              (0.023)    (0.025)      (0.039)               (0.022)    (0.023)      (0.032)
Kentucky x post                               0.021      0.040        0.055                 -0.009     -0.022       -0.049
                                              (0.023)    (0.024)      (0.033)               (0.035)    (0.036)      (0.042)
Minnesota x post                              0.058***   0.037*       0.077**               0.067***   0.051**      0.088**
                                              (0.021)    (0.020)      (0.030)               (0.022)    (0.022)      (0.031)
Missouri x post                               0.054*     0.020        0.022                 0.049      0.018        0.042
                                              (0.028)    (0.027)      (0.040)               (0.032)    (0.032)      (0.042)
New Jersey x post                             0.124***   0.116***     0.104***              0.131***   0.119***     0.113***
                                              (0.016)    (0.017)      (0.022)               (0.017)    (0.018)      (0.027)
New Mexico x post                             -0.084***  -0.075**     -0.062                -0.062***  -0.039*      -0.026
                                              (0.029)    (0.031)      (0.054)               (0.023)    (0.023)      (0.045)
North Carolina x post                         0.022      0.031        0.002                 0.015      0.008        0.009
                                              (0.025)    (0.030)      (0.038)               (0.023)    (0.027)      (0.038)
Ohio x post                                   -0.034     -0.007       -0.006                0.019      0.041        0.055
                                              (0.032)    (0.035)      (0.060)               (0.027)    (0.029)      (0.043)
Oklahoma x post                               -0.047**   -0.036       -0.034                0.059***   0.041*       0.052*
                                              (0.021)    (0.024)      (0.031)               (0.021)    (0.023)      (0.027)
South Carolina x post                         -0.072***  -0.088***    -0.080**              -0.091**   0.098**      -0.098**
                                              (0.024)    (0.026)      (0.037)               (0.035)    (0.040)      (0.047)
Texas x post                                  -0.138***  -0.143***    -0.121**              -0.162***  -0.140***    -0.138**
                                              (0.036)    (0.042)      (0.054)               (0.035)    (0.037)      (0.056)
Virginia x post                               -0.049**   -0.028       -0.040                -0.064***  -0.054***    -0.052
                                              (0.022)    (0.024)      (0.043)               (0.017)    (0.018)      (0.034)
Washington x post                             0.138***   0.154***     0.128***              0.160***   0.168***     0.154***
                                              (0.020)    (0.020)      (0.040)               (0.017)    (0.017)      (0.035)
Observations                                  1029       861          542                   931        779          492
R2                                            0.745      0.792        0.781                 0.719      0.775        0.754
Controls included                             yes        yes          yes                   yes        yes          yes
State fixed effects                           yes        yes          yes                   yes        yes          yes
Year fixed effects                            yes        yes          yes                   yes        yes          yes

Note: Robust standard errors in parentheses.
* p < .10. ** p < .05. *** p < .01.




Of the 19 states included in our analysis that implemented performance funding programs, only 4 experienced positive and statistically significant effects in our models (Minnesota, Missouri, New Jersey, and Washington). Missouri stands out as having the least consistent pattern within this positive group, given that performance funding's impact is statistically significant in only one model. The other three states show more consistent patterns: Minnesota, New Jersey, and Washington experienced positive and statistically significant results across all comparison groups and lagged models. Alternatively, performance funding in six states is systematically associated with negative impacts on college completions (Colorado, Idaho, New Mexico, South Carolina, Texas, and Virginia). Three of these states (Colorado, South Carolina, and Texas) yield similar results across all models, whereas the other three yield statistically insignificant results in at least one model. All other states yield no statistically significant results across the models, or their parameter estimates were mixed (e.g., sometimes positive, sometimes negative) between the lagged and nonlagged models. This mixed group is the most difficult to interpret, particularly in the case of Oklahoma, where the positive effects appear to have been delayed. Here, institutions may have reacted initially in a way that reduced completions but then produced longer term positive results (e.g., adjustments to their enrollment profile). Future research could examine these cases in greater detail.


Table 5 summarizes these results by organizing states into one of four categories: positive, negative, mixed, and insignificant. States with statistically significant coefficients are arrayed into one of these four categories, though (as discussed earlier) some states display more systematic patterns across all models than others. These state-specific results (Table 4) add a layer of nuance to the original analysis (Table 3) by identifying heterogeneous treatment effects across states, indicating that performance funding is met with varying degrees of success within adopting states. Table 5 also demonstrates that most of the performance funding models are not generating systematic effects in the states. The mixed group includes three states where performance funding has confusing effects on college completions. In Arkansas and Oklahoma, for example, the nonlagged models indicate a negative effect, whereas the lagged models show positive effects. For the most part, Kansas demonstrates positive results across the models, but the effects become negative when lagged. All other states experience no statistically significant impacts on college completions.


Table 5. Summary of Key Findings: Performance Funding's Impact on Associate's Degree Completions

Positive        Negative           Mixed         Not significant
Minnesota       Colorado           Arkansas      Florida
Missouri        Idaho              Kansas*       Illinois
New Jersey      New Mexico*        Oklahoma*     Indiana*
Washington*     South Carolina                   Kentucky
                Texas*                           North Carolina
                Virginia*                        Ohio*

Note: Asterisk denotes states currently operating performance funding.


These results are not a strong endorsement of performance funding in the community college sector. As to why we do not observe a positive and significant relationship between performance funding and associate's degree completions, our hypothesis revolves around institutional capacity to respond to the policy environment. For example, the latest data from the Delta Cost Project show that in 2009, the average education and related spending per FTE at public research universities was $16,000. At community colleges, it was $10,000. We argue that this lack of institutional resources might limit the two-year sector's capacity to alter institutional practices in response to state policy changes. Rabovsky's (2012) findings support this general hypothesis; he observed that public four-year institutions altered their expenditure patterns in response to the implementation of performance funding programs, and the effect was largest among those institutions with the greatest resources, specifically the public research universities. The lack of resources accentuates existing challenges because community colleges tend to have higher percentages of students with social and academic disadvantages, who often require additional institutional resources to help them be successful (Dougherty & Hong, 2006).


Furthermore, part-time faculty and part-time students are overrepresented in the community college sector. A total of 64% of faculty at public four-year institutions are full time, whereas only 31% of faculty at community colleges are full time. Likewise, 78% of students at public four-year institutions are full time, and only 40% of students attending community colleges do so full time (U.S. Department of Education, 2013). A large part-time workforce and student population may limit the effect of any institutional policy changes, given that those who are tasked with carrying out the policy changes and those who are supposed to be affected by those changes are less attached to the institution.


Finally, community colleges serve multiple missions, only one of which is the delivery of associate's degree programs. They also deliver certificate programs2; community interest and lifelong learning education; vocational and technical education; adult, basic, and literacy programs; and other nondegree programs. This diversity of missions may make it difficult for institutions to alter their practices to respond to changes in the policy environment that affect a single portion of their broader set of missions. Likewise, there is significant variance in the degree to which individual community colleges emphasize the delivery of associate's degrees. Some institutions make associate's degree education their central mission, whereas others emphasize programs such as certificates and lifelong learning (Cohen & Brawer, 2008). Often, institutions that emphasize associate's degrees enroll a significant number of students who intend to earn a bachelor's degree and therefore often transfer to a four-year institution before earning their associate's degree (Dougherty & Reddy, 2011). This variance in institutional programming may limit institutions' ability to impact completions and our ability to observe effects on associate's degree completions. One way for future research to address this question might be to separate vocational and technical associate's degrees from academic/transfer degrees.


Returning to Table 3, the findings in regard to a few of our control variables warrant some consideration. First, we find that increases in population are associated with increases in the number of completions. Second, we find that unemployment is positively associated with completions when the performance funding states are compared with all other states. This finding is consistent with prior research, which shows that both enrollments and completions are positively impacted by market downturns (Betts & McFarland, 1995; Hillman & Orians, 2013). Third, we find weak evidence that state financial aid positively impacts two-year completions and that the duration of the performance funding program has the opposite effect. However, these effects are only significant in one model each.


DISCUSSION AND CONCLUSION


In recent years, performance funding has experienced a resurgence in popularity. Despite very little evidence of the effectiveness of performance funding models, the movement toward the implementation of new and significantly revised performance funding programs is being driven by national advocacy groups and political entrepreneurs who are seeking to improve public higher education productivity (Rabovsky, 2012). Additionally, recent research has shown that these types of performance accountability programs are in part motivated by political factors such as partisanship (e.g., McLendon et al., 2006). Nevertheless, an important policy question behind these trends is whether performance funding is impacting college completions. The results of this study ought to serve as a cautionary note for policy makers. Similar to previous studies (i.e., Sanford & Hunter, 2011; Shin & Milton, 2004), our results show that, in the aggregate, performance funding does not have a significant effect on completions in the two-year sector. At the individual state level, our results show slightly more examples of performance funding having a significant and negative effect on two-year completions than a positive one, and far more examples of it having no effect at all. We conclude that the policy is not a silver bullet for improving community college completions; rather, it may interfere with national completion goals.


As indicated earlier, principal-agent theory highlights a number of factors that can complicate performance funding programs and possibly render them ineffective; these may be worth further investigation in future research: In states where no positive effect was observed, did the principals and agents disagree on the policy goals? Were the transaction costs too significant for the agent? Did some institutions lack the will or the resources to increase college completions, even when financial incentives were in place? Or were the institutions unclear on the state's priorities because there were too many competing performance metrics (Brewer et al., 2004; Dougherty & Reddy, 2011; Lane & Kivisto, 2008)? Beyond these general questions, our findings lead us to a series of discussion items, which we address next.


Discussion Item 1: Negative impacts and program design. As summarized in Table 5, performance funding is associated with declines in college completions in six states; notably, three of these states are currently operating performance funding programs. At a minimum, these negative relationships suggest that New Mexico, Texas, and Virginia are operating programs that are ineffective in terms of improving community college degree completions. Although we cannot identify specific design features associated with these negative results, it is important to note that all three of these states are implementing performance funding 2.0 models, which are designed to be more ambitious than previous performance funding models. Performance funding 2.0 often ties funding to base appropriations (rather than supplemental funding), and because these models are heavily promoted by Complete College America, they also tend to focus on degree completion (Dougherty & Reddy, 2011). Additionally, in each state, the community colleges have fewer financial resources than the national average: education and related spending per FTE for community colleges averages $10,041 in New Mexico, $9,161 in Texas, and $8,308 in Virginia, while the national average is $10,242 (Delta Cost Project, 2012). The lack of resources, combined with the imposition of performance funding, may be driving down completions. Certainly, further research is necessary to investigate why these states are experiencing negative results, but it is hoped that our analysis contributes to ongoing policy and research discussions to help these states improve the effectiveness of performance funding.


Observers often point to South Carolina as an example of an overly aggressive and poorly designed performance funding model, given that the state attempted to allocate 100% of state appropriations via 37 performance metrics. The state experimented with this reform for eight years (1996 to 2004) and finally discontinued it because of budgetary challenges and a lack of political support. It was therefore not surprising to find South Carolina on the list of states where performance funding had negative impacts on degree completions. Interestingly, South Carolina was a precursor to today's performance funding 2.0 model, in which the ultimate goal is often to tie all state appropriations to performance metrics (e.g., the revised Tennessee performance funding program). Although our evidence cannot isolate the specific design features behind these negative results, we suspect that they may be associated with features common to performance funding 2.0 models. Further research could examine this in more detail by investigating the extent to which similar patterns are found among current (e.g., Texas) and emerging (e.g., Wisconsin) performance funding 2.0 states. Of course, these programs have only been in effect for a few years, so it is also possible that these states will begin to reverse these trends in time.


Discussion Item 2: Why discontinue a policy that appears to be working? To the extent that performance funding is associated with reductions in degree completions, it seems appropriate for states to abandon performance funding efforts. Colorado, Idaho, and South Carolina did exactly that, and some observers may view the discontinuation of performance funding in these three states as a successful policy outcome. Alternatively, when states experience gains in completions, the policy would appear to be generating desired educational outcomes. However, three of the four states that experienced such gains have discontinued their performance funding efforts. Only one state (Washington) currently operates a performance funding program that yields positive effects in all our models.


Although the previous discussion noted that performance funding 2.0 is associated with negative effects on college completions, it is important to note that Washington is also a performance funding 2.0 state. It is possible that even within performance funding 2.0 designs, some practices and contexts are more or less effective than others in terms of degree attainment. Further research could investigate these design features in more detail to identify whether Washington has unique features or contexts that facilitate greater community college completions.


It is noteworthy that the three states that appear to have made positive gains in community college completions (Minnesota, Missouri, and New Jersey) did not operate performance funding for many years. Each state discontinued performance funding by 2002, yet despite the relatively brief duration of their programs, they still appear to have made positive gains via performance funding. Again, we are curious to explore the design features or other contextual factors behind this short-lived success; further research could continue to examine these efforts to assess best practices. Perhaps more pertinent to this discussion is the question of why these states abandoned a program that appears (at least in hindsight) to have been producing positive educational outcomes. After all, isn't performance funding designed to improve educational outcomes in these exact ways? While we can only speculate about why these programs were discontinued, we wonder if the funding incentives were unsustainable, if political champions left office, or if governance changes resulted in their demise. These are empirical questions that require further case study research similar to that currently being conducted by Dougherty and associates (e.g., Dougherty et al., 2011).


Discussion Item 3: States pursue these funding fads without adequate evidence of their efficacy. Table 5 outlines the remaining states that experienced either mixed results or no significant patterns in degree completions. Similar to the average treatment effects found in Table 3, our primary conclusion is that most performance funding states experience little to no gains in college completions. Many of these states have disbanded their performance funding programs (e.g., Florida, Illinois, Kentucky, North Carolina), which could be viewed as a smart policy decision because the funding efforts were not yielding educational gains. However, Indiana, Ohio, and Oklahoma are currently operating these programs without any evidence that they are experiencing positive completion gains as a result of performance funding. Kansas shows some evidence of positive impacts, but it appears to be the exception to the rule.


Considering how many states are now pursuing performance funding for their community colleges, it is important for advocates and policy makers to weigh this evidence in their policy deliberations. Perhaps performance funding efforts are designed to achieve other policy goals beyond college completions; to the extent that this is the case, we suspect our results will have little policy relevance. However, if one of the policy goals is to increase educational attainment, we hope our results can contribute to evidence-based policy decisions. Our study suggests that performance funding is not an effective tool for increasing community college completions, and in some cases, it may actually be working against broader completion goals.


CONCLUSIONS


The results of this study show that state higher education policy interventions can have diverse effects. Our results revealed inconsistent effects of performance funding for two-year colleges across the states; however, in the majority of cases, the effects were either not significant or negative and significant. Therefore, it may be advisable for policy makers who are interested in implementing performance funding to design programs that take into account differences between sectors, institutional missions, and specific state contexts. Jones (2012a, 2012b) encouraged states to design performance funding programs that promote mission differentiation by using different metrics/drivers for different kinds of institutions. Under the Jones plan, community colleges would be rewarded for producing associate's degrees and certificates, for transferring students, and for students reaching specified momentum points (e.g., remedial success, dual enrollment, and job placement). Part-time students would be valued and proportionally represented in an FTE count in the performance equation. Finally, the performance funding program would include provisions that reward success with underserved populations. These ideas sound appealing; however, future research will need to determine how effective these design characteristics are at producing positive outcomes within community colleges.
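To make the shape of such a mission-differentiated formula concrete, the sketch below is a purely illustrative Python example. The outcome categories follow the description above, but the weights, dollar rate, and example figures are hypothetical assumptions, not Jones's published model.

```python
# Purely illustrative sketch of a mission-differentiated performance funding
# formula for a community college; weights, rate, and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class CollegeOutcomes:
    associate_degrees: float      # counts could be FTE-weighted so part-time students count proportionally
    certificates: float
    transfers: float
    momentum_points: float        # e.g., remedial success, dual enrollment, job placement
    underserved_completions: float

# Hypothetical weights; the last entry adds a premium for success with
# underserved populations.
WEIGHTS = {
    "associate_degrees": 1.00,
    "certificates": 0.50,
    "transfers": 0.75,
    "momentum_points": 0.25,
    "underserved_completions": 0.40,
}

def performance_allocation(outcomes: CollegeOutcomes, dollars_per_point: float = 1000.0) -> float:
    """Return an illustrative performance-based allocation in dollars."""
    points = sum(weight * getattr(outcomes, field) for field, weight in WEIGHTS.items())
    return dollars_per_point * points

# Example: 800 associate degrees, 300 certificates, 400 transfers,
# 1,200 momentum points, and 250 completions by underserved students.
college = CollegeOutcomes(800, 300, 400, 1200, 250)
print(f"Illustrative allocation: ${performance_allocation(college):,.0f}")
```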


Our results are not encouraging, especially given that many of the design features advocated by Jones (2012a, 2012b) are key elements of performance funding 2.0, and our state-level analysis revealed multiple performance funding 2.0 states with negative or insignificant results. However, future researchers and performance funding advocates should compare the characteristics of programs that experienced positive and significant results with those that did not in order to identify factors associated with positive results. They must also be open to the very real possibility that performance funding, in whatever form, may not be a suitable finance policy for the two-year sector. Additionally, as argued earlier, we fear that without adequate financial resources, the imposition of performance funding may be harmful to community colleges. Incentives without adequate resources may only result in wasted resources.


Although it is possible that, with careful attention to program design and implementation and with adequate resources, performance funding may positively impact completions in the two-year sector, it is imperative that we empirically analyze the policy's effects and evaluate the claims of its proponents. As Birnbaum (2000) argued, as management fads make their way through higher education (and are, more often than not, found to be either ineffective or harmful), advocates are known to argue that the observed failures are a consequence of improper implementation or deviation from the original philosophy. Rarely would advocates of a failed policy argue that their underlying theory of action is wrong; that is, rarely would they concede that their policy solution (i.e., performance funding) is fundamentally mismatched with the desired policy goals (i.e., greater college completion). The rationalization of the failed policy or management fad therefore sets the stage for “. . . reinventing the innovation and recycling it with minor modifications” (Birnbaum, 2000, p. 8).


This may already be the case with performance funding: advocates continue to believe that paying community colleges for completions is a worthwhile policy strategy for improving college completion. Given that community colleges face significant budget constraints and capacity issues while simultaneously serving an open-access mission that draws a diverse range of students, we believe the underlying policy solution may be misaligned with college completion goals. Increasing college completion is seen as an important state and national priority, but there are likely better and potentially more impactful ways to achieve these educational goals. Although we acknowledge that evidence is often not the key driver of policy debates, the negative results found in this study should encourage state policy makers to reevaluate performance funding and seek out evidence-based alternatives for improving community college completions.


Notes


1. McDonnell and Elmore's (1987) other policy instrument categories include mandates (environmental regulation), capacity building (investments in basic research, preservation, and so on), and system changing (vouchers, new providers, and so on).

2. We tested the same models predicting certificate completions, and the results were the same, with no significant performance funding effects observed. These results are available on request.


References


ACT. (2011). College student retention and graduation rates from 2000 through 2011. Retrieved from http://www.act.org/research/policymakers/reports/graduation.html


Aldeman, C., & Carey, K. (2009, June 30). Ready to assemble: Grading state higher education accountability systems. Education Sector. Retrieved from http://www.educationsector.org/publications/ready-assemble-grading-state-higher-education-accountability-systems


Altstadt, D. (2012). Tying funding to community college outcomes: Models, tools, and recommendations for states. Indianapolis, IN: Achieving the Dream.


American Federation of Teachers. (2012). What should count? Retrieved from http://www.whatshouldcount.org/


Baum, S., & Ma, J. (2007). Education pays: The benefits of higher education for individuals and society. New York, NY: College Board.


Betts, J. R., & McFarland, L. L. (1995). Safe port in a storm: The impact of labor market conditions on community college enrollments. Journal of Human Resources, 30, 741–765.


Bill & Melinda Gates Foundation. (2013). Postsecondary education. Retrieved from http://www.gatesfoundation.org/What-We-Do/US-Program/Postsecondary-Success


Birnbaum, R. (2000). The life cycle of academic management fads. Journal of Higher Education, 71, 1–16.


Boggs, G. R. (2011). Community colleges in the spotlight and under the microscope. New Directions for Community Colleges, 2011(156), 3–22.


Brewer, D., Gates, S., & Goldman, C. (2004). In pursuit of prestige: Strategy and competition in U.S. higher education. New Brunswick, NJ: Transaction.


Burke, J. C. (2002). Funding public colleges and universities for performance. Albany, NY: Rockefeller Institute Press.


Burke, J. C. (2005). The many faces of accountability. In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 1–24). San Francisco, CA: Jossey-Bass.


Burke, J. C., & Minassians, H. P. (2003). Performance reporting: Real accountability or accountability lite. The seventh annual survey 2003. Albany, NY: Rockefeller Institute.


Clark, B. R. (1960). The cooling-out function in higher education. American Journal of Sociology, 65, 569–576.


Clark, B. R. (1980). The cooling out function revisited. New Directions for Community Colleges, 32, 15–31.


Cohen, A. M., & Brawer, F. B. (2008). The American community college (5th ed.). San Francisco, CA: Jossey-Bass.


Complete College America. (2012). Retrieved from http://www.completecollege.org/


Delta Cost Project. (2012). Spending: Where does the money go? Washington, DC: Author.


Dougherty, K. J. (1992). Community colleges and baccalaureate attainment. Journal of Higher Education, 63(2), 188–214.


Dougherty, K. J., & Hong, E. (2006). Performance accountability as imperfect panacea: The community college experience. In T. Bailey & V. S. Morest (Eds.), Defending the community college equity agenda (pp. 51–86). Baltimore, MD: Johns Hopkins University Press.


Dougherty, K. J., Natow, R. S., Bork, R. J. H., Jones, S. M., & Vega, B. E. (2011). The politics of performance funding in eight states: Origins, demise, and change. Community College Research Center, Columbia University, New York, NY.


Dougherty, K. J., Natow, R. S., Bork, R. H., Jones, S. M., & Vega, B. E. (2013). Accounting for higher education accountability: Political origins of state performance funding for higher education. Teachers College Record, 115(1), 1–50.


Dougherty, K. J., & Reddy, V. (2011). The impacts of state performance funding systems on higher education institutions: Research literature review and policy recommendations (CCRC Working Paper No. 37). Community College Research Center, Columbia University, New York, NY.


Drukker, D. (2003). Testing for serial correlation in linear panel-data models. The Stata Journal, 3(2), 168–177.


Fryar, A. H. (2011, June). The disparate impacts of accountability – searching for causal mechanisms. Paper presented at the 11th Public Management Research Conference, Syracuse, NY.


Goldrick-Rab, S. (2010). Challenges and opportunities for improving community college student success. Review of Educational Research, 80(3), 437–469.


Gorbunov, A. (2004). Performance funding: Policy innovations in the era of accountability. Unpublished manuscript, Department of Leadership, Policy, and Organizations, Peabody College, Vanderbilt University, Nashville, Tennessee.


Gore, A. (2003). Creating a government that works better and costs less: A report of the National Performance Review. Washington, DC: U.S. Government Printing Office.


Harnisch, T. L. (2011). Performance-based funding: A re-emerging strategy in public higher education financing. Washington, DC: AASCU.


Hillman, N. W., & Orians, E. L. (2013). Community colleges and labor market conditions: How does enrollment demand change relative to local unemployment rates? Research in Higher Education, 54, 1–16.


Hills, J. R. (1965). Transfer shock: The academic performance of the junior college transfer. Journal of Experimental Education, 33(3), 201–215.


Jones, D. (2012a). Performance funding: From idea to action. Washington, DC: Complete College America.


Jones, D. (2012b). Value-added funding: A simple, easy-to-understand model to reward performance. Washington, DC: Complete College America.


Laanan, F. S. (1996). Making the transition: Understanding the adjustment process of community college transfer students. Community College Review, 23(4), 69–84.


Laanan, F. S. (2004). Studying transfer students: Part I: Instrument design and implications. Community College Journal of Research and Practice, 28(4), 331–351.


Lane, J. E., & Kivisto, J. A. (2008). Interests, information, and incentives in higher education: Principal-agent theory and its potential applications to the study of higher education governance. In J. C. Smart (Ed.), Higher education (pp. 141–179). Dordrecht, Netherlands: Springer Netherlands.


Levin, J. S. (2000). The revised institution: The community college mission at the end of the twentieth century. Community College Review, 28(2), 1–25.


Lucas, C. J. (2006). American higher education: A history. New York, NY: Palgrave Macmillan.

McDonnell, L. M., & Elmore, R. F. (1987). Getting the job done: Alternative policy instruments. Educational Evaluation and Policy Analysis, 9(2), 133–152.


McGuinness, A. C. (1995). Prospects for state–higher education relations: A decade of grinding tensions. New Directions for Institutional Research, 85, 33–45.


McKeown-Moak, M. P., Zaken, O., Olson, J., Vesely, R. S., Jimenez-Castellanos, O., Okhremtchouk, I., & Della Sala, M. R. (2013). The new performance funding in higher education. Educational Considerations, 40(2), 3–12.


McLendon, M. K. (2003a). State governance reform of higher education: Patterns, trends, and theories of the public policy process. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (pp. 57–143). Dordrecht, Netherlands: Springer Netherlands.


McLendon, M. K. (2003b). The politics of higher education: Toward an expanded research agenda. Educational Policy, 17(1), 165–191.


McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis, 28(1), 1–24.


Melguizo, T., Kienzl, G. S., & Alfonso, M. (2011). Comparing the educational attainment of community college transfer students and four-year college rising juniors using propensity score matching methods. Journal of Higher Education, 82(3), 265–291.


Meyer, B. D. (1995). Natural and quasi-experiments in economics. Journal of Business and Economic Statistics, 13(2), 151–161.


Moe, T. M. (1987). An assessment of the positive theory of congressional dominance. Legislative Studies Quarterly, 12, 475–520.


Mullin, C. M. (2010). Rebalancing the mission: The community college completion challenge (AACC Policy Brief 2010-02PBL). American Association of Community Colleges.


National Conference of State Legislatures. (2013). Performance funding for higher education. Retrieved from http://www.ncsl.org/issues-research/educ/performance-funding.aspx


Osborne, D., & Gaebler, T. (1992). Reinventing government: How the entrepreneurial spirit is transforming the public sector. Reading, MA: Addison-Wesley.


Rabovsky, T. M. (2012). Accountability in higher education: Exploring impacts on state budgets and institutional spending patterns. Journal of Public Administration Research and Theory, 22(4), 675–700.

Sanford, T., & Hunter, J. (2011). Impact of performance funding on retention and graduation rates. Education Policy Analysis Archives, 19(33).


Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.


Shin, J. C., & Milton, S. (2004). The effects of performance budgeting and funding programs on graduation rate in public four-year colleges and universities. Education Policy Analysis Archives, 12(22), 1–26.


Snyder, T. D., & Dillow, S. A. (2012). Digest of education statistics, 2011 (NCES 2012-001). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.


Tinto, V. (2012). Completing college: Rethinking institutional action. Chicago, IL: University of Chicago Press.


Toutkoushian, R. K., & Danielson, C. (2002). Using performance indicators to evaluate decentralized budgeting systems and institutional performance. In D. Priest, W. Becker, D. Hossler, & E. St. John (Eds.), Incentive-based budgeting systems in public universities (pp. 205–226). Northampton, MA: Edward Elgar.


U.S. Department of Education, National Center for Education Statistics. (2013). IPEDS enrollment survey [Data file]. Retrieved from http://nces.ed.gov/forum/datamodel/eiebrowser/datasets.aspx


Volkwein, J. F. (2007). Assessing institutional effectiveness and connecting the pieces of a fragmented university. In J. Burke (Ed.), Fixing the fragmented university (pp. 145–180). Bolton, MA: Anker.


Volkwein, J. F., & Tandberg, D. A. (2008). Measuring up: Examining the connections among state structural characteristics, regulatory practices, and performance. Research in Higher Education, 49(2), 180–197.


Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge, MA: MIT Press.


Zumeta, W. (1998). Public university accountability to the state in the late twentieth century: Time for a re-thinking? Policy Studies Review, 15(4), 5–22.




Cite This Article as: Teachers College Record, Volume 116, Number 12, 2014, p. 1-31, https://www.tcrecord.org, ID Number: 17691

About the Author
  • David Tandberg
    Florida State University
    DAVID A. TANDBERG is an assistant professor of higher education and associate director of the Center for Postsecondary Success at Florida State University. His research interests focus on state policy and politics for higher education. His most recent publications include “State Higher Education Performance Funding: Data, Outcomes and Policy Implications” in the Journal of Education Finance and “The Conditioning Role of State Higher Education Governance Structures” in The Journal of Higher Education.
  • Nicholas Hillman
    University of Wisconsin-Madison
    NICHOLAS HILLMAN is an assistant professor of educational leadership and policy analysis at the University of Wisconsin-Madison. He studies the intersections of higher education finance and policy as they relate to educational access and equity. His most recent publications include “College on Credit: a Multi-level Analysis of Student Loan Default” in Review of Higher Education and “Market-based Education: Does Colorado’s Voucher Model Improve Higher Education Access and Efficiency?” in Research in Higher Education.
  • Mohamed Barakat
    Fordham University
    MOHAMED BARAKAT is a resident director at Fordham University and a graduate of the Florida State University higher education program. He is interested in evaluating the impact of institutional and state policies on student success.
 