
Using Integrative Implementation Theory to Understand New Jersey’s 2015 Opt-Out Movement

by Jonathan A. Supovitz - 2021

Background/Context: In the spring of 2015, about 135,000 New Jersey students—almost 20% of the test-eligible children—did not take the state’s test. Opposition of this magnitude directly contradicted a central stipulation of the federal No Child Left Behind Act of 2001 (NCLB), which required states to test 95% of their eligible students to receive federal education funding. This article examines what happened in New Jersey in 2015 to produce such a large change in the implementation of state testing policy.

Purpose/Objective/Research Question/Focus of Study: In this article, I use Richard Matland’s (1995) integrative policy implementation theory to explain the circumstances that produced such a large change in the enactment of New Jersey testing policy in 2015. More specifically, I focus on two research questions: (1) What were the major national, state, and local factors that contributed to parent and student opt-out decisions in New Jersey in 2015? (2) How does integrative implementation theory help us to understand the different circumstances contributing to the opt-out movement in New Jersey in 2015?

Population/Participants/Subjects: The article is based upon extensive document review and 33 interviews with New Jersey state policymakers, professional education association representatives, advocacy group leaders, school administrators, teachers, parents, and students.

Intervention/Program/Practice: Since the 1990s, testing has become an increasingly important function of state education policy (Fuhrman & Elmore, 2004). High-stakes state testing policies were further expanded under NCLB, which required states to test 95% of their eligible students to receive streams of federal education funding. The opt-out movement across the nation in 2015 was a major departure from this well-established policy.

Research Design: This study employs qualitative interpretive research to examine how multiple actors at different levels of the New Jersey education system understood and interpreted the 2015 opt-out movement. Using Matland’s (1995) ambiguity-conflict model of policy implementation, I interpret the interactions between the policy design and local implementers’ beliefs and goals.

Findings/Results: The findings illuminate the shifting national, state, and local factors that contributed to district opt-out rates, as well as variation across school levels and districts with different socioeconomic conditions. The combination of increased federal pressure on states, New Jersey’s rapid timelines for adopting new standards and assessments even as it ratcheted up accountability, and inconsistent state policy signals all contributed to backlash from teachers, community members, and anti-testing advocates. These factors illuminate many of the changing circumstances in New Jersey that fueled the dramatic opt-out movement in 2015.

Conclusions/Recommendations: While Matland’s integrative implementation model helps to explain the dynamics of the 2015 New Jersey opt-out movement, it does not account for additional contributing factors including changes over time, dynamics at different system levels, and consideration of a broader range of actors beyond policymakers and policy targets. Incorporating these factors can help make integrative implementation theories even more robust.


In the spring of 2015, the annual rite of testing in America was disrupted by a wave of widespread protest, as parents across the nation rose up to opt their kids out of high-stakes testing. FairTest, an advocacy organization opposed to high-stakes standardized testing, estimated that about 670,000 students nationwide opted out of their state test in the spring of 2015 (FairTest, 2015). In New Jersey alone, about 135,000 students (almost one out of every five test-eligible children in the state) did not take the state test that year (Supovitz et al., 2016). Opposition of this magnitude directly contradicted a central stipulation of the federal No Child Left Behind Act of 2001 (NCLB), which required states to test 95% of their eligible students to receive streams of federal education funding. The purpose of the 95% participation requirement was to discourage schools and districts from inflating their performance by omitting students in lower performing subgroups from their test results.

Since the 1990s, testing has become an increasingly important function of state education policy (Fuhrman & Elmore, 2004). While there has been much debate about the merits and consequences of high-stakes testing (see, for example, Hamilton, 2003; Koretz, 2008; Supovitz, 2009), it can also be seen as an example of a successful state policy in that all 50 states have some form of uniform assessment of public school students. Yet 2015 was a year of unparalleled resistance to high-stakes testing across the nation.

In this article, I use integrative policy implementation theory to better understand the circumstances that led 135,000 students to eschew the New Jersey state test in 2015. Theories of policy implementation have generally emphasized either the perspective of policymakers (Mazmanian & Sabatier, 1989; O’Toole, 1989) or the perspective of the targets of policy (Hjern, 1982; Spillane et al., 2002). Some scholars have tried to integrate these two views into a more unified theory (Berman & McLaughlin, 1978; Pressman & Wildavsky, 1973). One particular effort to integrate these two viewpoints is Matland’s (1995) ambiguity-conflict model of policy implementation, which seeks to explain the effects of policy implementation through the interaction of the policy design and local implementers’ beliefs and goals. Through document analysis and interviews with a wide range of New Jersey stakeholders, I use Matland’s model as a lens to illuminate the dynamics that contributed to the unprecedented 2015 opt-out movement in New Jersey. I conclude with a discussion of how three additional factors (changes over time, dynamics at different system levels, and consideration of a broader range of actors beyond policymakers and policy targets) can make integrative implementation theories even more robust.


Policy implementation has long been a puzzle to education policymakers and researchers. What makes the introduction of a policy more likely to be enacted by a target group? What factors contribute to relatively smooth implementation, or to policies that blow up in their architects’ faces? What are the relative roles of the policy itself, the policy environment, the implementation process, and the dynamics of the policy targets? Implementation theory in the 1970s and 1980s was largely divided into two veins. One group of scholars focused on top-down models that emphasized the administrative nature of the implementation process (Baier et al., 1986; O’Toole, 1989). This work emphasized the clarity of policy goals, the prescriptiveness of ensuing rules, the comprehensibility of statutory language, and the exactness of enactment steps. The concept of implementation fidelity grew out of the assumption that enactment should be faithful to the designers’ intent. From the top-down perspective, unsuccessful implementation was often perceived as resistance by implementers seeking to sabotage reforms that were largely inconsistent with their own agendas (Firestone, 1989; Hjern, 1982; Lipsky, 1978).

A second group of scholars focused more closely on implementation from the perspective of the reform targets. This led to a greater emphasis on the bottom-up dimensions of implementation (Berman & McLaughlin, 1978; Hjern, 1982; Weatherly & Lipsky, 1977). The bottom-up perspective emphasized the dynamics of local conditions, the way policy targets understood reform, and how they incorporated new ideas into their own frames of reference, often changing the meaning of the policy in the process (Cohen & Barnes, 1993; Spillane et al., 2002).

Several scholars have tried to bridge the top-down and bottom-up conceptions. The seminal work of Pressman and Wildavsky (1973) examined the ways in which local government enacted federal economic development policy and concluded that implementation may be viewed as “a process of interaction between the setting of goals and actions geared towards achieving them” (p. xxi). In another prominent example, the RAND Change Agent study described the enactment of 293 federally funded education projects. The RAND researchers introduced the notion of mutual adaptation to describe the ways in which reforms were shaped by both their designers and local sites throughout the implementation process (Berman & McLaughlin, 1978). In a study of how low-performing high schools in districts receiving federal support for reform understood the source of their problems and sought external improvement strategies, Supovitz (2008) depicted the complex interplay of reform design and local response as an ongoing process of iterative refraction: conceptions of the external reforms changed repeatedly as they filtered through the multiple layers of the education system and were interpreted by actors at the different layers, each with their own incentives, constraints, and perspectives.

Matland (1995) proposed an ambiguity-conflict model of policy implementation as another way of integrating top-down and bottom-up models of implementation. He argued that the mix of relative ambiguity in the policy design and enactment procedures, combined with the policy’s relative conflict with the beliefs and goals of local implementers, helped to explain a policy’s fate.

Matland (1995) represented his theory in a simple two-by-two matrix, which is reproduced in Figure 1. In cases where policy ambiguity is low and policy conflict is low, which Matland called administrative implementation, successful implementation is basically a product of sufficient resources to meet the goal. In such cases, he argued, information flows from the top down: “Implementation is ordered in a hierarchical manner with each underlying link receiving orders from the level above. The policy is spelled out explicitly at each level, and at each link in the chain actors have a clear idea of their responsibilities and tasks” (Matland, 1995, p. 161). In these cases, policy is likely implemented as part of regular organizational routines.


Figure 1. Ambiguity-conflict model of policy implementation (Matland, 1995)

A second policy implementation situation is one of low policy ambiguity but high policy conflict, which for Matland (1995) results in political implementation. In these cases, the policy actors have a clearly defined design and goals, but there is dissension between the goals of the policymakers and those of the policy targets. In these cases, according to Matland, the implementation outcomes are decided by power, which often results in either a test of wills between policymakers and implementers, or in a negotiated resolution.

A third scenario is a policy implementation situation with high policy ambiguity and low policy conflict. In these situations, local contextual conditions dominate the process. The ambiguity of the policy design results in different local interpretations, which often produce local variation in implementation due to “the constellation of actors participating, the pressures on these actors, their perceptions of what the policy is, and the available resources” (Matland, 1995, p. 166). The variation of responses in these situations is what Matland referred to as experimental implementation, because there is little prior local experience to guide implementation, which results in tinkering and uncertainty from local context to context.

Finally, there is a fourth situation of policy implementation, which consists of both high policy ambiguity and high policy conflict. This results in what Matland (1995) called symbolic implementation. Acknowledging that ambiguity often diminishes policy conflict, Matland recognized situations in which “policies that invoke highly salient symbols often produce high levels of conflict even when the policy is vague” (Matland, 1995, p. 168). In these cases, the symbolism of the policy, even when implementation procedures are vague, produces high levels of conflict among implementers, and local coalition strength determines the degree of implementation. The difference between symbolic implementation and political implementation is the locus of conflict: in the former, conflict occurs at the micro level, while in political implementation it occurs at the macro level.
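The four quadrants described above amount to a simple classification scheme. As an illustration added here (not part of Matland's own presentation), the matrix can be encoded as a lookup table pairing each combination of ambiguity and conflict with its implementation type and the factor Matland identified as deciding outcomes:

```python
# Illustrative sketch: Matland's (1995) ambiguity-conflict matrix
# as a lookup table. Keys are (ambiguity, conflict) levels; values
# pair the implementation type with its deciding factor.
MATLAND_MATRIX = {
    ("low", "low"): ("administrative implementation", "resources"),
    ("low", "high"): ("political implementation", "power"),
    ("high", "low"): ("experimental implementation", "local contextual conditions"),
    ("high", "high"): ("symbolic implementation", "local coalition strength"),
}

def classify(ambiguity: str, conflict: str) -> tuple:
    """Return (implementation type, deciding factor) for a policy."""
    return MATLAND_MATRIX[(ambiguity, conflict)]

# A clearly specified but contested policy falls in the political quadrant.
print(classify("low", "high"))  # ('political implementation', 'power')
```

The table form makes plain that the model's explanatory leverage comes from the deciding factor attached to each quadrant, not from the quadrant labels themselves.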

The advantage of Matland’s (1995) model, and the reason I have chosen to use it to understand the opt-out movement in New Jersey in 2015, is that it provides specific criteria for determining different conditions at the nexus of top-down and bottom-up perspectives on policy implementation. This allows for a clearer description of the conditions arising from different combinations of relative policy ambiguity and degrees of conflict under different circumstances. As we shall see, several aspects of Matland’s theory are useful in explaining how the policy conditions in New Jersey changed at both the policy design level (i.e., top) and the local enactment level (bottom) to produce an array of responses. More specifically, in this paper I examine the following research questions:


What were the major national, state, and local factors that contributed to parent and student opt-out decisions in New Jersey in 2015?


How does integrative implementation theory help us understand the different circumstances contributing to the opt-out movement in New Jersey in 2015?

The first of these questions is descriptive and allows for the identification of factors at both the macro and micro levels of the multitiered American education system. The second question allows for the use of integrative implementation theory to provide insight into the changing dynamics and underlying conditions that produced the 2015 opt-out phenomenon in New Jersey. Based on the responses to these questions, I explore a third question.


How can we further develop integrative implementation theory to better understand policy implementation?


This paper is based upon extensive document review and 33 interviews with state policymakers, professional education association representatives, advocacy group leaders, school administrators, teachers, parents, and students. The interview sample included five district leaders (superintendents and administrators), five parents, four members of different state governmental bodies (the New Jersey Department of Education [NJDOE], the New Jersey Board of Education [NJBOE], the New Jersey School Boards Association, and the New Jersey Education Association [NJEA]), four principals, four teachers, four leaders of statewide advocacy groups, three high school students, two Parent Teacher Association (PTA) leaders, and two teachers’ union representatives. Document review focused on two major sources. The first was policy statements and press releases from the NJDOE that were posted on the state’s website in 2014 and 2015. The second was state and local news accounts related to the state test, the opt-out movement, and test administration issues.

Our sampling strategy for interviews at the district and school levels might be called a stratified sample of convenience. First, we organized the districts in the state using the state’s socioeconomic status indicator, called a district factor group (DFG), breaking the districts into high-, average-, and low-DFG groups. Within these three buckets we identified district-level administrators with whom we had prior relationships and reached out to them for interviews. In cases where administrators declined to be interviewed, we selected replacements. As part of the district interviews, we asked administrators to connect us to a high school principal or assistant principal for the school-level interview. We used the same procedure with the principals to obtain teacher interviews. We also asked principals to nominate a senior who would be willing to talk with us. We focused on high schools because our previous research showed they had the highest opt-out levels (Supovitz et al., 2016).
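As a rough illustration of the stratification step (district names and DFG bands below are invented, not the study's actual sample), the procedure amounts to grouping districts by DFG band and drawing interview contacts from each band:

```python
from collections import defaultdict

# Invented example districts tagged with a DFG (socioeconomic) band.
districts = [
    ("Alpha", "high"), ("Beta", "low"), ("Gamma", "average"),
    ("Delta", "high"), ("Epsilon", "low"),
]

# Stratify: group districts by DFG band.
by_dfg = defaultdict(list)
for name, dfg in districts:
    by_dfg[dfg].append(name)

# Convenience step: take the first available contact per band;
# in the study, administrators who declined were replaced from
# the same band.
targets = {band: names[0] for band, names in by_dfg.items()}
print(targets)  # {'high': 'Alpha', 'low': 'Beta', 'average': 'Gamma'}
```

The design choice here is that stratification guarantees coverage across socioeconomic conditions, while selection within each stratum remains opportunistic rather than random.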

The interview protocols were standard, with some customization depending on the respondent’s position. The questions asked interviewees to describe the opt-out movement in the state from their perspective, their experiences with state testing in 2015, who the major supporters and opponents were and the arguments they used, why opting out was an issue at this point in time, their organization’s position on opting out, how the issue was communicated to them, and what they saw as the consequences for their district or state of having large numbers of students not taking the state test. Immediately following each interview, the interviewer crafted a short memo summarizing the interview (Maxwell, 2008).

All interviews were taped and transcribed. Interview transcripts were coded using Dedoose, a qualitative data analysis software package. The data were initially coded according to an early framework we developed about the role of federalism in the layers of the opt-out story. The coded data were then used to develop the outline of events in the national, state, and local contexts surrounding the opt-out movement, which was used in our initial report (Supovitz et al., 2016). For this article, the data were recoded according to the four quadrants of Matland’s (1995) ambiguity-conflict model. These coded data were used to organize the thematic analysis in the section “Using Integrative Implementation Analysis to Explain the Opt-Out Surge” below.


Before examining the policy environment, it is important to understand the scope and patterns of nontested students in New Jersey in 2015. Using NJDOE population data on 539 districts, 85 charter schools, and 20 vocational schools, the research team assessed the magnitude of opt-out in New Jersey (Supovitz et al., 2016). Overall, we found that the average opt-out rate in the state was 19%, with a standard deviation of 11%. The opt-out rates were similar in English language arts (ELA; 19.3%) and mathematics (19.7%).

We also determined that opt-out rates exceeded 65% in a few districts, and 20 districts had 40% or more of their students opting out. About 15% of districts had more than 25% of their enrolled students opting out, about 20% of districts had 15% to 25% of students opting out, and about a quarter of districts had between 6% and 15% opting out.
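The arithmetic behind these figures is straightforward: a district's opt-out rate is the share of test-registered students who did not sit for the exam. The sketch below uses invented district counts (not actual New Jersey data) to illustrate the computation:

```python
def opt_out_rate(registered: int, tested: int) -> float:
    """Percentage of registered students who did not take the test."""
    return 100 * (registered - tested) / registered

# Invented counts: (students registered to test, students tested).
districts = {"A": (1000, 780), "B": (500, 460), "C": (800, 430)}
rates = {d: opt_out_rate(reg, t) for d, (reg, t) in districts.items()}
print(rates)  # {'A': 22.0, 'B': 8.0, 'C': 46.25}

# Share of districts whose opt-out rate exceeds 15%.
share_over_15 = sum(1 for r in rates.values() if r > 15) / len(rates)
print(round(share_over_15, 2))  # 0.67
```

Note that the denominator is registered students, not total enrollment, which is why districts that failed to report registration counts (discussed below) could not be assigned a rate at all.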

The analyses further showed that opt-out rates increased from elementary to middle to high school. For example, only 10% of districts had opt-out rates exceeding 15% in elementary school. In middle school, that number jumped to 24%. In high school, more than half of the districts had opt-out rates exceeding 15%. More specifically, in ELA, 57% of districts had higher than 15% opt-out rates. In Geometry, which is largely a 10th-grade subject, 38% of districts had more than 15% of their students opting out. In Algebra II, often taken in 11th grade, 59% of districts had greater than 15% opt-out rates.

Finally, the overall rates of opting out were not correlated with DFGs, but they were correlated with DFGs in high school, with lower poverty districts having significantly higher opt-out rates in ELA and mathematics. Opt-out rates could not be calculated for about 40% of districts because they did not report the number of students who were registered to test. Districts with missing data included a substantial number of charter schools (which were reported as districts); only 10% of charter schools reported the data needed to calculate their opt-out rates.


In this section, I use the document and interview data to describe the national, state, and local factors contributing to student and parental decisions not to sit for testing in the spring of 2015 in New Jersey.


There were two major national movements that contributed to the environment in New Jersey. The first was the widespread adoption of the Common Core State Standards (CCSS) and the increasing controversy surrounding them and their associated tests. The second was the economic recession of 2009, which brought about stimulus funding in the form of the Race to the Top (RTTT) competition, encouraging states to adopt standards, aligned testing, and associated educator accountability systems.

The Common Core and backlash against Common Standards. The CCSS were the latest incarnation of efforts to provide a common conception of what students should know and be able to do at each grade level in major subject areas (McDonnell & Weatherford, 2013; Pizmony-Levy & Green Saraisky, 2016; Supovitz et al., n.d.). Initially adopted in 2010 by 46 states, the CCSS became increasingly controversial, with several states backing out and replacing or revising the CCSS. Opponents of the CCSS made a range of arguments that critiqued the Standards themselves (e.g., not developmentally appropriate, attended to academic priorities at the expense of social and emotional needs), but primarily attacked the CCSS on cultural and ideological grounds (e.g., federal overreach, data privacy, corporate profiting from a public good; Supovitz & McGuinn, 2017).

Federal efforts to push state implementation of standards, tests, and accountability. Federal education policy produced a series of critical decisions that influenced perspectives on state testing. Amid the severe national economic downturn, two federal policies particularly shaped the context. First, the Obama administration used funding from the American Recovery and Reinvestment Act, an $800 billion fiscal stimulus bill passed in 2009 that earmarked $97.4 billion for the U.S. Department of Education (USDOE), to push states to adopt the standards. The USDOE used $4.35 billion to create the RTTT grant competition. The 19 state awardees were required to adopt a range of policies, including adopting new standards and assessments, building data systems to measure student growth, and developing teacher and principal evaluation systems.

The USDOE also used stimulus funding to award two comprehensive assessment grants in 2010. The Partnership for Assessment of Readiness for College and Careers (PARCC) received $170 million and Smarter Balanced Assessment Consortium (SBAC) received $160 million. The grants tasked the testing consortia with developing next-generation assessments.

The second important policy decision was to allow states to apply for waivers from NCLB requirements. The waivers offered states relief from a dozen requirements of NCLB in exchange for the state’s commitment to do the following: (1) adopt college- and career-ready standards; (2) develop new accountability systems based on reading and math assessments, graduation rates, and student growth over time; (3) implement teacher and principal evaluation systems based on multiple factors with student growth being a significant factor; and (4) reduce administrative and reporting requirements that are burdensome to states.

The CCSS movement, propelled by RTTT dollars and NCLB waivers, led critics to view these acts as coercion to implement standards, assessments, and accountability without adequate time to phase them in.


The national context played into a series of events and decisions in New Jersey that contributed to the environment that produced the opt-out dynamic. First, New Jersey adopted the CCSS and won RTTT funds. Second, New Jersey adopted the PARCC test. Third, in a political twist, Governor Chris Christie, who was running for president and seeking to shore up his Republican candidacy, dropped the CCSS while maintaining the CCSS-aligned PARCC test.

Adopting the CCSS and obtaining an NCLB waiver. Joining 46 other states, NJBOE adopted the CCSS in 2010. Along with adopting CCSS, New Jersey joined the PARCC testing consortium in the spring of 2010 and became a PARCC governing state in the spring of 2011, allowing it to have a voice in the development of the assessment system.

In the midst of the recession of 2009, New Jersey competed in all three rounds of RTTT, finally receiving $38 million in the third and final cohort in December 2011. As part of its funding, New Jersey agreed to support the transition to higher standards and improved assessments and to implement both teacher and principal evaluation systems. In 2011, the NJDOE also submitted a waiver application to the USDOE for relief from certain provisions of NCLB.

Following the approval of New Jersey’s 2011 waiver application, the New Jersey legislature passed a teacher and principal evaluation/tenure bill, the Teacher Effectiveness and Accountability for the Children of New Jersey Act (TEACHNJ), which was supported by the NJEA and signed into law by Governor Christie in August 2012. The law, which went into effect in the 2014–15 school year, put into place a yearly evaluation system for teachers and principals that required students’ performance on annual statewide assessments to be considered as a predominant factor in an educator’s annual performance rating. According to the law, elementary and middle school ELA and mathematics teachers would have 30% of their ratings based on student progress. In July 2014, due to widespread concern from educators about the proportion of performance connected to a single measure, Christie signed an executive order lowering that amount to 10%.

New Jersey administered PARCC statewide for the first time in the spring of 2015. Joining the PARCC consortium brought several changes to the state’s testing system. First, the PARCC tests were computer based, in contrast to previous statewide assessments. Second, the PARCC tests were designed to measure students’ higher-order thinking and problem-solving skills and included more short- and extended-response questions. To accomplish this, the test was administered in two time periods separated by six weeks. The first, in March, focused on performance tasks and short-answer questions to capture more authentic representations of student capabilities, while the second, in May, had more multiple-choice questions to ensure domain coverage. Consequently, the total testing time for the PARCC test was longer than previous state tests, clocking in at approximately 8.25 hours for Grades 3 to 7 and 9.7 hours for Grade 11 (New Jersey Department of Education [NJDOE], 2015).

The New Jersey teachers’ union opposed PARCC testing, primarily due to its link to teacher evaluation. Although the state’s teachers’ union, the NJEA, supported the CCSS and initially supported the TEACHNJ educator accountability legislation, the union staunchly opposed the new PARCC test (Pizmony-Levy & Woolsey, 2017). The NJEA’s opposition to the PARCC was largely motivated by the state’s policy of using the test results as a factor in teacher evaluation. The tight and immediate link between standards and accountability was particularly problematic because teachers were just beginning to adjust to the new way of teaching with the CCSS. The union also believed the test had several additional shortcomings. The expanded time required for test administration significantly reduced instructional time. The NJEA also argued that the tests were inequitable for both teachers and students, as not all districts were sufficiently equipped with the technology for online assessment, and not all students had equal access to technology as a regular part of their educational experiences.

State graduation requirements made the PARCC superfluous for many 11th graders. Another important factor that contributed to the rise in opt-out rates was state policy during the transition to the PARCC assessment. In 2015, the first year of PARCC, the state announced that although future PARCC results would be tied to graduation, other competency tests would be allowed to substitute in the transition. Achieving a certain score on other tests, including the Preliminary Scholastic Aptitude Test (PSAT), the Scholastic Aptitude Test (SAT), or the American College Test (ACT), would demonstrate the necessary proficiency for students to graduate.

Advocacy groups mobilized against the PARCC. Several groups were particularly active in the opt-out movement. The three advocacy groups most frequently mentioned in our review were Save Our Schools New Jersey, United Opt Out New Jersey, and Cares About Schools. Many participants identified Save Our Schools as the most involved in leading the opt-out charge. Save Our Schools was founded in Princeton, one of the most affluent districts in the state, by parents who were concerned with charter school expansion and more recently had become involved in advocacy around school funding and high-stakes testing issues. United Opt Out New Jersey was a state chapter of the United Opt Out National group that was established in response to NCLB. The three parent-led groups collaborated with each other to spread information about the PARCC test and opting out in a transpartisan partnership in which issue trumped ideology.

State communications to districts sent mixed messages, fueling uncertainty. As the state and districts prepared for test administration in 2015, the NJDOE issued a number of statements about testing. At the beginning of the 2014–15 school year, New Jersey Commissioner of Education David Hespe sent a memo to school leaders stating that participating in testing was required for compliance with NCLB (Hespe, 2014). As the test administration period approached, however, the NJDOE announced that district administrators had the latitude to determine how to address test refusals.


The flurry of opt-outs and the rapid rise in the profile of the issue surprised principals and district leaders, who were caught between parents’ right to recuse their children from testing and state policy requirements that all students participate in testing. The new procedures for the PARCC, the online administration, and the two testing windows were all added challenges. The overall picture that emerged from our data was one of leaders trying to adjust to an evolving situation within a system in flux, beset by a host of implementation challenges.

Test administration challenges resulted in implementation problems. Several district administrators we interviewed talked about test administration challenges. They noted that preparing for the PARCC was more time consuming than anticipated. Several administrators felt that the NJDOE had not resolved all the test administration issues before launching statewide. Districts spent much time and effort preparing to administer the exam, including ensuring that there was adequate technology, determining a testing schedule that maximized instruction time but adhered to PARCC guidelines, and providing teacher professional development on test preparation and administration guidelines. Even so, there were many reports of administration problems, including inadequate availability of computers, testing technology glitches, weak wireless connections, and impractical testing procedures.

Test anxiety. Parents, and the PTA leaders representing them, expressed a range of concerns in media coverage and interviews, including the burdens of over-testing, the anxiety produced by testing, and concerns about the developmental appropriateness of the tests. Several interviewees felt that the amount of testing in schools was excessive, in particular for students who were taking AP courses and college admissions tests. Similar opinions were expressed by teachers as well as parents. Parents were also concerned with the developmental appropriateness of the PARCC test, and media coverage widely reported parent views that the difficulty of the test questions was inappropriate for the grade level of their children.


By examining the national, state, and local factors that contributed to the changing testing climate in New Jersey in 2015 through the lens of Matland's (1995) ambiguity-conflict implementation model, we can see how the changing circumstances contributed to a wave of both confusion about and animus toward testing. While the previous decade of regular high-stakes testing in the state may have created a sense of test implementation as a largely administrative activity (or low ambiguity/low conflict in Matland's model), a complex set of interdependent factors changed the implementation calculus in the spring of 2015. Matland's framework helps to illuminate several interconnected factors that contributed to the overall level of opt-outs in 2015.


Several aspects of the shifting situation in 2015 can be seen through the lens of Matland's (1995) political implementation frame, which views conflict over a clear policy during the implementation process as a struggle for power. We see this most prominently in teacher opposition to coupling the first year of PARCC testing to teacher accountability. Many interviewees pointed out the connection between the opt-out movement and the state's teacher evaluation policy. According to one school administrator, "The fact that teachers were suddenly being held more accountable for test results became a huge political piece. … They [the state] should have studied the test more before tying it to evaluation."

Interviews with other state and local actors also directly attributed teacher union opposition to the link between the new test and teacher accountability. As a union representative stated, "We believe that parents should know what the test was about, and we made no secret of the fact that we think the test is deeply flawed and is being used for purposes that it shouldn't be used for." To further press their point, the NJEA conducted and publicized focus groups and polls of parents and voters about their attitudes toward testing in November and December of 2014, right before the testing season. According to the union representative, "The results revealed that parents, even to a greater extent than we thought, [and the] public … was really frustrated and upset about what was happening with this testing." Through these efforts, the teachers' union was able to reasonably oppose the rapid imposition of teacher accountability without opposing the concept of accountability, which the public has traditionally supported. This created an environment of conflict between the goals of state policy and the goals of teachers.

There was a general consensus among those we talked to that the NJEA's messages influenced parents' decisions to opt their children out of state testing. A PTA member in a high-poverty district explained, "All the negative press that the test was getting from the NJEA really impacted people. I was getting calls and text messages in response to the ads." The NJEA also developed collaborative relationships with parent opt-out groups and anti-standardized-testing groups to share information about the misuse of standardized testing with parents and other community members. They jointly sponsored PARCC information events across the state where, according to the union representative, members of the NJEA's local associations would work with parent groups and other education groups in their communities "to show a film and have a discussion about standardized testing and what the effects of it are." On social media, the NJEA also shared messages and information with opt-out advocacy groups. Additionally, these groups worked together to collect data and publicize the number of students who opted out of the PARCC test in districts across the state.

In Matland's (1995) model, this shifted implementation from a strictly administrative process (quadrant 1 in Figure 1) to a case of political implementation (quadrant 2). In this circumstance, the ambiguity of state policy was low (it was clear that the test was to be used for teacher accountability), but conflict was high (teachers opposed this action), and policy implementation became a political battle between state policymakers and teachers. The strategies of teacher union opposition, including a multimillion-dollar campaign of television, radio, billboard, print, and social media advertising to turn public opinion against the PARCC, had all the hallmarks of a political campaign against state policy. From this perspective, we can interpret the increased opt-out rates as the consequence of a power struggle between the state's imposition of a teacher accountability policy and the teachers who were the targets of what they perceived as an unfair and premature policy.


Matland's (1995) conception of experimental implementation applies when a policy is unclear and policy targets, while amenable to implementation, struggle to understand how to carry it out. Three examples of just this kind of situation contributed to parent and student test-taking choices in 2015. First, state policy on high school graduation requirements was unclear. Policymakers' decision to allow other tests to qualify for state graduation requirements allowed students who had scored adequately on these exams to bypass the PARCC as a graduation requirement. As one district administrator noted, "When parents, especially of students who were academically doing well, saw that their child had already met a graduation requirement through the PSAT, there was less motivation to take the PARCC test because of the number of hours students were going to miss from instruction." A state official felt that the range of alternatives led some school counselors to interpret the PARCC as optional and for kids to think, "Oh, I don't need to take the test because I've already got the SAT score I need." This decision eliminated any incentive for students who performed well on these tests to take the PARCC and undermined the goal of gaining a 95% participation rate on the state test.

The lack of clarity around the state's graduation requirements may also have contributed to the differences in opt-out rates across different communities. According to a parent and PTA member from a high-DFG district whom we interviewed, parents in high DFGs had the time and resources to find, understand, and communicate about the state policies that dictated graduation requirements. "The state didn't make it readily available for parents to know that if students take the SAT or ACT they can also use that for the testing option to graduate from high school," the parent said. Thus, information about the options for students to meet the state's graduation requirement may have been unevenly available across communities in the state, contributing to variations in opt-out rates across districts.

The second factor that contributed to policy ambiguity was local test administration problems, largely due to the online nature of the PARCC assessment. Administrators in both urban and suburban schools discussed technology challenges, including gaining access to adequate computers as well as training teachers and students to navigate the online test. Students were not used to testing on computers, and the interface was not user friendly, according to both school administrators and students. We heard of cases where computers crashed, forcing students to restart the test, and cases where proctors had to constantly log in students who had been logged out for spending too much time on one page. In some schools, weak wireless connections led to the test taking much longer than intended. District administrators also found the state's communication of testing directions problematic. One principal said, "We were halfway through the testing, and [the state was] changing the protocol. We were thinking at the time: we're supposed to be starting in three weeks, and still don't know how this is going to work." Administrators reported that these kinds of problems likely tamped down test-taking rates. Like the confusion over the graduation requirements, these more technical implementation issues are another example of low implementation conflict and high implementation ambiguity. The problematic nature of the test administration process, rather than any opposition to the policy itself, contributed to the opt-out levels in the state.

The third factor leading to experimental implementation was unclear state communication about testing participation. The NJDOE originally stated that districts had to comply with the NCLB law; subsequently, it communicated that districts could make their own requirement determinations. The NCLB policy in effect in the spring of 2015 clearly required states to have 95% of students participate in state tests. This was a central equity tenet of the law, so that schools and districts could not manipulate test performance by having low-performing students stay away during testing periods. The opt-out movement challenged this premise, but New Jersey's position regarding the consequences for districts with high rates of opting out was unclear. A member of a special interest group remarked that the memo released by the commissioner of education "restated the fact that the federal [law] required the state to administer the tests" but said that the policy for dealing with test refusals was up to the districts. Interviewees from districts and schools characterized the state's communication of opt-out policies as unclear. A principal from a high-DFG district explained: "The state was very wishy-washy throughout the ordeal. They said that the principals would be the ones to determine what to do. They did a very poor job handling this." Caught between federal directives and a bottom-up movement, the state sent wavering messages that contributed to the unevenness of the local response.

Viewed through the lens of Matland's (1995) ambiguity-conflict model of policy implementation, these factors shifted policy implementation from an administrative process (quadrant 1 in Figure 1) to one of experimental implementation (quadrant 3). These circumstances reflected high policy ambiguity and low policy conflict. Students' and parents' decisions to opt out of testing under these circumstances did not reflect opposition to the PARCC or to state testing policy per se, but rather were based on the opportunity to sit out the PARCC exam. This fits well with Matland's notion of experimental implementation, whereby local contextual conditions dominate the process, producing broad variation in outcomes (pp. 165–166).


Matland's (1995) fourth implementation situation comprises cases of both high ambiguity and high conflict, which he viewed as rare instances in which the symbolism of a policy causes conflict even when the policy itself is ambiguous. There is some evidence for this interpretation.

Amid teacher and union concerns about premature accountability and confusion around the testing conditions and requirements, there was also a set of active opponents to high-stakes testing in the state. A representative from a special interest group, for example, described the coalition of opt-out advocacy groups as "bipartisan; it was parents united for local school education." A school administrator from a lower-DFG district described the messaging from Save Our Schools:

When it first started, Save Our Schools was really talking about the concern with testing, the amount of testing. That started maybe two to three years ago. When PARCC came, their message moved from this push from all of this assessment for students, to PARCC being a bad assessment.

Opt-out advocacy groups relied substantially on social media to disseminate information about the PARCC test and opting out, and as a tool to organize their members. A special interest group representative mentioned that the Facebook sites of opt-out groups contained robust discussion and "lots of sharing of information, lots of coordination about how to present this issue to local school boards and form resolutions or different kind of policy decisions." Many study participants agreed that social media was important in spurring opting out. A principal from a high-poverty district claimed, "The community has a Facebook page and that's where opting out gained traction." Several interviewees also mentioned that leaders of the opt-out groups posted opt-out form letters on social media sites that parents could submit to their schools, thereby facilitating the process. A few of these groups were also able to raise funds to put up billboards with anti-PARCC messages. Lastly, these groups were active in public meetings and forums. Members of these groups attended school board meetings to raise their concerns. A special interest group representative said, "We also went around from town to town and met with parents to answer questions [about opting out] and share our experiences." With these strategies, opponents raised the profile of the opt-out movement across the state.

These long-standing anti-testing parent groups raised the profile of testing concerns and utilized social media-fueled grassroots information campaigns to legitimize opting out of the state test as a protest against the rise of test-driven education. By circulating forms to help parents opt their children out of the test and sharing information on the parental prerogative to do so, these groups were exploiting the ambiguity around compliance in state testing policy. For these groups, state testing to fulfill federal accountability requirements became a symbol of changes in the education system that moved education away from its local roots. From the perspective of Matland's (1995) model, we can see how, for this constituency, policy implementation became one of high ambiguity and high conflict (quadrant 4 in Figure 1). Thus, the testing policy became a symbol for these groups, and their strong coalition of opposition was an important contributing factor to the levels of opting out in the state.


This study illuminates many of the changing circumstances in New Jersey that fueled the dramatic opt-out movement in the state in 2015. The factors described in the National, State, and Local Factors Surrounding the 2015 Opt-Out Movement section help to explain the confluence of events and conditions that contributed to average district opt-out rates approaching 20%, as well as the variation across school levels and socioeconomically different districts. The combination of increased federal pressure on states, the state's fast timelines for standards and assessment adoption even as it ratcheted up accountability, and inconsistent state policy signals all contributed to a political backlash from teachers, community members, and anti-testing advocates.

Integrative implementation analysis helps meaningfully frame these events. The strength of integrative models that attend to the perspectives of both policy designers and policy targets is that they illuminate the interactions between these two sets of actors. Thus, we can say that implementation is embedded within the interactions between a policy and those it influences. As an integrative model, Matland's (1995) ambiguity-conflict framework of policy implementation is a generative way to organize the many underlying factors that contributed to the exploding opt-out movement in New Jersey in 2015. However, while Matland conceived of his model as helping to identify which of four possible implementation scenarios "best [emphasis added] describes the implementation process" (1995, p. 156), this study reveals that implementation is more multifaceted.

From this analysis, we can see that there is evidence of implementation challenges in all three of the quadrants of Matland's (1995) matrix that reflect policy instability. This study does not identify a single best model for understanding implementation; rather, it shows that multiple phenomena were playing out at different levels of the system and among different groups of actors. By using an integrative approach that emphasizes the interactions between the policy design and the varied interests of policy targets at different levels of the system, we can more readily see how different aspects of the testing policy (e.g., the use of the PARCC for the first time, administration procedures, associated teacher accountability) and the responses of different constituencies (e.g., teachers, the teachers' union, parents, school administrators) produced different conditions, and therefore different reactions, at different levels of the education system. In this way, a more robust integrative implementation framework should be able to illuminate (a) the interactions between policy design and policy targets, (b) differing interactions at different system levels, (c) changes over time, and (d) the influence of actors beyond the policymakers and policy targets who historically tend to be the focus of implementation theory.

Beyond the interactions between policy intent and policy targets that are a feature of integrative implementation frameworks, this study further shows that these interactions often happen at multiple levels of the education system, with different consequences. This is how we can see different perceptions of policy ambiguity between the NJEA teachers' union, which viewed the testing policy as quite clear, and local implementers, who experienced high levels of ambiguity and confusion around the state testing policy. Integrative frameworks thereby allow us to examine the power dynamics at play at different levels of the implementation process and their consequences. While we might assume that policymakers hold the power and local groups are less powerful, these analyses suggest that power dynamics are always in flux and that there are circumstances in which power relationships are reversed. Thus, a second important dimension is for integrative implementation theories to account for a more dynamic array of interactions, ones occurring simultaneously at different levels of the system and among different actors.

A third additional dimension is change over time. Integrative models have the potential to highlight that neither a policy itself nor the terrain within which it is carried out is inherently stable. Rather, there is a temporal aspect to the policy implementation process that is rarely accounted for in implementation theory. While many policies, once well established, are perceived as stable, they often change and degrade over time (Pierson, 2004; Supovitz, 2013). In this study, testing policy was generally viewed as well established, having been enacted and carried out over the course of 15 years (since the 2001 passage of NCLB) and across multiple political administrations and modifications of the New Jersey state test. Given this track record, it is easy to see why testing policy in 2015 might have been perceived as a stable part of the state educational system. Thus, integrative frameworks provide an opportunity to examine the changing nature of the policy implementation dynamics as they play out over time.

A fourth potential contribution of integrative frameworks is that they can point out the contributions of a wider array of actors. In this case, groups like anti-testing advocates and (arguably) parents, who were not the targets of testing policy in the state, nevertheless played a role in the implementation story. Integrative frames can help us look beyond policymakers and local implementers to a range of interest groups across the policy continuum.

Finally, the conclusions drawn in this paper must be tempered by several study limitations. First, because the district and school interview subjects were a convenience sample, it is possible that the experiences of these educators were not representative of the state as a whole, which may have biased our interpretations. Second, the entire study is based on the premise that opt-out rates were dramatically different from those in previous years, yet we have no comparable state data from prior years with which to confirm this assumption. Relatedly, this is not a comparative study, so claims cannot be made about how New Jersey differs from other states. Third, the analytic frame of integrative implementation was imposed on the data post hoc and therefore was not incorporated into the interview protocol design, which may have resulted in missed opportunities to gain a more complete picture of events that contributed to the opt-out movement at that time.

A major challenge for understanding a complex phenomenon like policy implementation is to incorporate its key elements into a framework that can help us locate important factors. Integrative implementation models like mutual adaptation (Berman & McLaughlin, 1978), ambiguity-conflict (Matland, 1995), and iterative refraction (Supovitz, 2008) are powerful in their emphasis on the interplay between policy designs and policy targets. This study suggests further considerations that may play important roles in the dynamics of implementation: interactions with different consequences happening relatively simultaneously up and down the system, temporal changes that may influence the dynamics of implementation interactions, and a broader array of actors beyond policymakers and local implementers. Incorporating these factors into implementation models can make implementation theory more robust.


Baier, V. E., March, J. G., & Saetren, H. (1986). Implementation and ambiguity. Scandinavian Journal of Management Studies, 2(3–4), 197–212. https://doi.org/10.1016/0281-7527(86)90016-2


Berman, P., & McLaughlin, M. (1978). Federal programs supporting educational change: Vol. 8. Implementing and sustaining innovations. RAND Corporation. https://www.rand.org/pubs/reports/R1589z8.html


Cohen, D. K., & Barnes, C. A. (1993). Pedagogy and policy. In D. K. Cohen, M. W. McLaughlin, & J. E. Talbert (Eds.), Teaching for understanding: Challenges for policy and practice (pp. 207239). Jossey-Bass.


FairTest. (2015, December 12). More than 670,000 refused tests in 2015 [Press release]. https://www.fairtest.org/more-500000-refused-tests-2015


Firestone, W. A. (1989). Using reform: Conceptualizing district initiative. Educational Evaluation and Policy Analysis, 11(2), 151–164. https://doi.org/10.3102/01623737011002151


Fuhrman, S., & Elmore, R. F. (Eds.). (2004). Redesigning accountability systems for education. Teachers College Press.


Hamilton, L. (2003). Assessment as a policy tool. Review of Research in Education, 27, 25–68. https://doi.org/10.3102/0091732X027001025


Hespe, D. (2014, October 30). Student participation in the statewide assessment program [Memorandum]. New Jersey Department of Education. https://www.nj.gov/education/broadcasts/2014/OCT/30/12404/Students%20Participation%20in%20the%20Statewide%20Assessment%20Program.pdf


Hjern, B. (1982). Implementation research—the link gone missing. Journal of Public Policy, 2(3), 301–308. https://doi.org/10.1017/s0143814x00001975


Koretz, D. M. (2008). Measuring up: What educational testing really tells us. Harvard University Press. https://doi.org/10.2307/j.ctv1503gxj


Lipsky, M. (1978). Standing the study of public policy implementation on its head. In W. D. Burnham & M. Weinberg (Eds.), American politics and public policy (pp. 391–402). MIT Press.


Matland, R. E. (1995). Synthesizing the implementation literature: The ambiguity-conflict model of policy implementation. Journal of Public Administration Research and Theory, 5(2), 145–174. https://doi.org/10.1093/oxfordjournals.jpart.a037242


Mazmanian, D. A., & Sabatier, P. A. (1989). Framework for implementation analysis. In Implementation and public policy (pp. 97–128). University Press of America.


Maxwell, J. A. (2008). Designing a qualitative study. In L. Bickman & D. J. Rog (Eds.), The SAGE handbook of applied social research methods (2nd ed., pp. 214253). Sage. https://dx.doi.org/10.4135/9781483348858.n7


McDonnell, L. M., & Weatherford, M. S. (2013). Evidence use and the Common Core State Standards movement: From problem definition to policy adoption. American Journal of Education, 120(1), 1–25. https://doi.org/10.1086/673163


New Jersey Department of Education. (2015). Year two of PARCC: Parent PARCC questions answered. https://pdf4pro.com/view/parent-parcc-questions-answered-3368d.html


O'Toole, L. J., Jr. (1989). Goal multiplicity in the implementation setting: Subtle impacts and the case of wastewater treatment privatization. Policy Studies Journal, 18(1), 1–20. https://doi.org/10.1111/j.1541-0072.1989.tb00596.x


Pierson, P. (2004). Politics in time: History, institutions, and social analysis. Princeton University Press. https://doi.org/10.1515/9781400841080


Pizmony-Levy, O., & Green Saraisky, N. (2016, August). Who opts out and why? Results from a national survey on opting out of standardized tests. Teachers College, Columbia University. https://doi.org/10.7916/D8K074GW


Pizmony-Levy, O., & Woolsey, A. (2017). Politics of education and teachers' support for high-stakes teacher accountability policies. Education Policy Analysis Archives, 25(87). https://doi.org/10.14507/epaa.25.2892


Pressman, J. L., & Wildavsky, A. (1973). Implementation: How great expectations in Washington are dashed in Oakland; or, why it's amazing that federal programs work at all, this being a saga of the Economic Development Administration as told by two sympathetic observers who seek to build morals on a foundation of ruined hopes. University of California Press.


Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research, 72(3), 387–431. https://doi.org/10.3102/00346543072003387


Supovitz, J. A. (2008). Implementation as iterative refraction. In J. A. Supovitz & E. H. Weinbaum (Eds.), The implementation gap: Understanding reform in high schools (pp. 151172). Teachers College Press.


Supovitz, J. A. (2009). Can high stakes testing leverage educational improvement? Prospects from the last decade of testing and accountability reform. Journal of Educational Change, 10(2–3), 211–227. https://doi.org/10.1007/s10833-009-9105-2


Supovitz, J. A. (2013). Slowing entropy: Instructional policy design in New York City, 2011–12. Consortium for Policy Research in Education. https://doi.org/10.12698/cpre.2013.Geeval


Supovitz, J. A., Daly, A., del Fresno, M., & Kolouch, C. (n.d.). #commoncore Project. https://www.hashtagcommoncore.com


Supovitz, J. A., & McGuinn, P. (2017). Interest group activity in the context of Common Core implementation. Educational Policy. Advance online publication. https://doi.org/10.1177/0895904817719516


Supovitz, J. A., Stephens, F., Kubelka, J., McGuinn, P., & Ingersoll, H. (2016). The bubble bursts: The 2015 opt-out movement in New Jersey [Working Paper No. 2016-8]. Consortium for Policy Research in Education. https://repository.upenn.edu/cpre_workingpapers/13/


Weatherley, R., & Lipsky, M. (1977). Street-level bureaucrats and institutional innovation: Implementing special-education reform. Harvard Educational Review, 47(2), 171–197. https://doi.org/10.17763/haer.47.2.v870r1v16786270x


Cite This Article as: Teachers College Record, Volume 123, Number 5, 2021, pp. 1–24. https://www.tcrecord.org ID Number: 23665

About the Author

JONATHAN A. SUPOVITZ, Ph.D., is a professor of education policy and leadership at the University of Pennsylvania's Graduate School of Education and executive director of the Consortium for Policy Research in Education (CPRE). He is an accomplished mixed-methods researcher and evaluator and has published findings from numerous educational studies and evaluations of state, district, and school reform efforts. Much of his research focuses on critically analyzing policies and systems that seek to improve the quality of teaching and learning.