Large-Scale High School Reform Through School Improvement Networks: Exploring Possibilities for “Developmental Evaluation”


by Donald J. Peurach, Sarah Winchell Lenhoff & Joshua L. Glazer - 2016

Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis explores developmental evaluation as an approach to examining school improvement networks as “learning systems” able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one leading school improvement network (the New Tech Network), the analysis provides evidence of the potential power of developmental evaluation for generating formative feedback for network stakeholders regarding the strengths and weaknesses of their networks as distributed, collaborative learning systems. At the same time, it raises issues and questions to be addressed in advancing the practice of developmental evaluation, chief among them being constraints on stakeholders in leveraging feedback in productive ways.

The national education reform agenda includes a keen focus on large-scale high school improvement. In contrast to targeted interventions, one promising strategy is the school improvement network, in which a central “hub” organization collaborates with schools to enact common designs for school-wide improvement (Glazer & Peurach, 2013; Peurach & Glazer, 2012). Some networks support the improvement of existing schools, such as the New Tech Network, Talent Development Secondary, and the International Baccalaureate. Others support the creation of new schools, such as the Knowledge is Power Program, Green Dot Public Schools, and Aspire Public Schools.


As with many U.S. educational innovations, these networks are increasingly subject to rigorous impact evaluations. Researchers have also begun to advance complementary, improvement-focused evaluation strategies with the potential for positive program impact. The purpose of the following analysis is to explore “developmental evaluation” as an improvement-focused evaluation strategy aimed at understanding and advancing networks as “learning systems” able to improve practice and outcomes at a large scale.


The first section of the analysis advances a rationale and framework for developmental evaluation. To explore the use of this framework, the second section details a case study design of the New Tech Network, a high school-level improvement network that currently supports interdisciplinary project-based learning in over 150 U.S. high schools and middle schools. The third section reviews the findings from the case study. The fourth section considers possibilities and challenges in advancing the practice of developmental evaluation.


A RATIONALE AND FRAMEWORK FOR DEVELOPMENTAL EVALUATION


We begin by constructing a rationale and framework for developmental evaluation, on the argument that the complexity and uncertainty endemic to school improvement networks both complicates impact evaluation and suggests a need for alternatives.


CHALLENGES IN EVALUATING SCHOOL IMPROVEMENT NETWORKS


Over the past 20 years, school improvement networks have benefitted from billions of dollars in public and philanthropic investment, largely based on their perceived potential to support rapid, large-scale improvement in leadership practice, instructional practice, and student achievement. Moreover, funders increasingly require rigorous impact evaluations that hold networks accountable for exactly that.1

Analogizing to clinical field trials in medicine, successful impact evaluations would provide evidence of a positive, statistically significant “treatment effect” that can be replicated beyond initial, early-adopting schools and in a broader, more representative pool of schools. However, school improvement networks emerge and operate under conditions that greatly complicate doing so (Peurach, Glazer, & Lenhoff, 2016). For example:


The “treatment” is not a unitary, stable entity but, instead, a complex, multi-component school improvement model that is under constant development and improvement, and that adapts and evolves in ways responsive to school-specific needs and exigencies.

The “subject” is not an individual person but, instead, a complex social organization that is under reconstitution from year to year (and even within years), owing to (among other things) the transiency of students and staff.

The “administrator” of the treatment is not a trained expert but, instead, an emerging and evolving hub organization that is, itself, developing the capabilities needed to design, improve, and adapt the treatment to support success with complex, turbulent subjects.

The “environment” is not something that can be dispensed with via randomized assignment but, instead, a complex, ever-evolving combination of political, policy, social, economic, and community influences that penetrate deeply and differently into both treatment and “control” schools, and in ways likely to interact profoundly with the treatment.


Increasing recognition of such uncertainty and complexity has been instrumental in motivating the development and use of new types of improvement-focused evaluation strategies to complement impact evaluations. Whereas impact evaluations often treat schools (and ways in which interventions operate in schools) as black boxes, these improvement-focused evaluation strategies seek to understand and improve dynamics among uncertain, complex treatments, subjects, administrators, and environments. Examples include design-based implementation research focused on the continuous improvement of programs and interventions (e.g., Anderson & Shattuck, 2012; Penuel, Fishman, Cheng, & Sabelli, 2011) and “networked improvement communities” focused on addressing specific problems of practice (Bryk, 2009; Bryk, Gomez, & Grunow, 2010; Mehta, Gomez, & Bryk, 2012).


While the targeted improvement of specific tools, processes, and practices is key, research on school improvement networks suggests that school-wide, practice-focused improvement depends on more broad-based approaches to continuous reflection and learning (Datnow & Park, 2009; Peurach & Glazer, 2012). Specifically, this research suggests that success depends on understanding and improving networks themselves, and the ways in which they function as new types of “learning systems” that produce, use, and refine the practical knowledge needed to realize intended outcomes.


Two learning processes are especially key: exploitation and exploration (Hatch, 2000; Peurach & Glazer, 2012). Exploitation is a type of convergent learning that involves leveraging established knowledge and then refining it through repeated use. Exploration is a type of divergent learning that involves reconsidering premises and identifying new possibilities through search, experimentation, discovery, and invention.2


In U.S. educational reform, exploitation and exploration have long been considered as antithetical, mutually exclusive improvement strategies. Exploitation has long been associated with “fidelity of implementation,” and with initiatives interpreted and experienced as “top-down” and “bureaucratic.” Exploration has long been associated with “local adaptation,” and with initiatives interpreted and experienced as “bottom-up” and “professional.”


Yet, in cases where networks operate as high-functioning learning systems, researchers report that fidelity of implementation and local adaptation actually function as synergistic, interdependent learning strategies, with iterative exploitation and exploration interacting to yield an ever-evolving, ever-expanding formal knowledge base supporting the recreation of practical capabilities in new schools (Peurach & Glazer, 2012; Peurach, Glazer, & Lenhoff, 2016).3 Summarized briefly:


Implementation begins with exploitation: the faithful implementation of established practices as captured in formal routines and guidance, both to establish base levels of performance and to create social resources that support professional collaboration (e.g., common instructional values, methods, language, artifacts, assessments, and experiences).

Schools can then leverage these social resources as the basis for collaborative reflection and learning among teachers and leaders, thus creating potential for productive exploration: that is, local adaptation, problem solving, and invention.

Operating at the center, the hub monitors activity in the network and in broader environments; incorporates favorable adaptations; and feeds improvements back through the network via formal routines, guidance, and professional learning opportunities.

The process begins again, with initial, faithful implementation and subsequent adaptation driving continuous, program-wide evolution and improvement.


With this sort of synergistic, coordinated exploitation and exploration, schools do not function as downstream recipients of a centrally diffused model, nor do they use high-level principles and guidelines as a basis for incubating school-specific solutions. Rather, schools function as active collaborators in a widely distributed, knowledge-producing enterprise.
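

To make this cycle concrete, the following sketch simulates it in miniature. This is our illustration, not a system or model from the article: the data structures, the quality scores, and the rule by which the hub vets adaptations are all hypothetical assumptions layered on the four steps above.

```python
# A minimal, illustrative simulation of the exploitation/exploration cycle
# described above. All names, scores, and the vetting rule are hypothetical.
import random

random.seed(0)

# The hub's codified knowledge base: routine name -> current quality (0-1).
knowledge_base = {"routine_a": 0.50, "routine_b": 0.55}
schools = [0.6, 0.8, 1.0]  # heterogeneous initial school capabilities

def enact(capability: float, quality: float) -> float:
    """Exploitation: faithful enactment yields performance near the
    routine's codified quality, moderated by school capability."""
    return quality * capability

def adapt(quality: float) -> float:
    """Exploration: local adaptation perturbs a routine; it may improve
    or degrade it."""
    return max(0.0, min(1.0, quality + random.uniform(-0.10, 0.15)))

for cycle in range(3):
    proposals = {}
    performance = []
    for name, quality in knowledge_base.items():
        for capability in schools:
            performance.append(enact(capability, quality))  # base-level practice
            candidate = adapt(quality)                      # local variation
            if candidate > proposals.get(name, quality):    # hub monitors network
                proposals[name] = candidate
    knowledge_base.update(proposals)  # favorable adaptations fed back as routines
    snapshot = {k: round(v, 2) for k, v in knowledge_base.items()}
    print(f"cycle {cycle}: mean performance {sum(performance)/len(performance):.2f}, "
          f"routines {snapshot}")
```

In this toy run, exploitation holds performance near the codified quality of each routine, while vetted explorations ratchet that quality upward from cycle to cycle, approximating the evolutionary dynamic described above.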


THE POTENTIAL OF DEVELOPMENTAL EVALUATION


While researchers have identified instances of networks functioning as collaborative learning systems, many networks struggle to develop the strategies and infrastructure needed to produce, use, and refine practical knowledge in practice, through distributed, iterative exploitation and exploration. Indeed, cross-sector research on the innovation process suggests that the capabilities to create and manage such learning systems are scarce, with few innovators able to develop such capabilities without extensive support (Van de Ven, Polley, Garud, & Venkataraman, 1999).


Developmental evaluation is an approach to evaluator/innovator collaboration with the potential to provide exactly such support (Dozois, Langlois, & Blanchet-Cohen, 2010; Gamble, 2008; Patton, 2006, 2010). Developmental evaluation is grounded in assumptions that large-scale social innovations emerge and operate under conditions of complexity and uncertainty that challenge impact evaluation and that require continuous learning and improvement. A chief aim of developmental evaluation, thus, is to support the development of large-scale social innovations through learning-centered, improvement-focused evaluation.


An especially critical focus of developmental evaluation is the practice-based coaching and support of decision makers as they guide the enterprise itself: i.e., as they assess prevailing conditions; adapt and reconcile mission and strategy; align operations with mission and strategy; and build the understandings, commitment, and motivation of others (Dozois et al., 2010; Gamble, 2008).


Because the complexity and uncertainty of prevailing conditions are likely to reduce the quality of available information, a primary role of the evaluator is to support decision makers both in interpreting partial and equivocal information and in communicating interpretations to others. As such, key responsibilities of evaluators include framing and conceptualizing important issues and problems; identifying a parsimonious set of key indicators to guide rapid data collection; introducing frameworks and questions to guide interpretation and meaning-making; and building consensus among decision makers and others.


Grounding principles of developmental evaluation in the preceding analysis of school improvement networks, a first-order matter is for evaluators to support hub executives, funders, and other network stakeholders in critically examining (and improving) their networks as “learning systems” that produce, use, and refine practical knowledge. The conjecture is that strengthening the network as a learning system increases the potential for replicable effectiveness, whereas neglecting to do so complicates demonstrating replicable effectiveness.


FRAMEWORK FOR DEVELOPMENTAL EVALUATION


In a prior analysis, we drew on research in education and in broader organizational studies to develop a framework for developmental evaluation to support stakeholders in critically examining school improvement networks as learning systems (Peurach, Glazer, & Lenhoff, 2016). The framework focuses keenly on the logical antecedents to effective implementation and outcomes: specifically, matters related to developing, using, and refining a formal knowledge base supporting the large-scale replication of capabilities. The framework is predicated on three assumptions:


(1) The success of school improvement networks depends not only on the recreation of new structures and cultures in schools but, more importantly, on the recreation of new capabilities for practice: that is, for teachers and school leaders to work differently, more effectively, and in more coordinated ways.

(2) These capabilities would be difficult for schools (especially schools with weak initial capabilities) to recreate independently, absent participation in the network.

(3) Recreating capabilities in a large-scale, geographically distributed, and rapidly growing network challenges the use of social resources for recreating capabilities (e.g., apprenticeship, communities of practice, and professional learning communities) and places a premium on formally codifying practical knowledge in routines and guidance that can be moved, used, and refined throughout the network.


The framework begins with an interpretive scheme used to identify networks as operating in accord with one of four primary approaches to school-wide improvement: a shell enterprise, a diffusion enterprise, an incubation enterprise, or an evolutionary enterprise. Each has advantages and risks in recreating practical capabilities in large numbers of schools.


A shell enterprise is the American default: an enterprise that places a primary emphasis on schools adopting common structures and cultures, with little (or no) emphasis on developing capabilities to work within new structures and cultures to enact and adapt new practices. One chief benefit is that a shell enterprise does not require a complex hub organization with the ability to support large numbers of schools in developing new capabilities. A chief risk, though, is of loose coupling: the creation of new structures and cultures in schools, absent any connection to (or influence on) practice.


A diffusion enterprise attends closely to practice, though primarily through exploitation: the faithful enactment of established, formalized practices. A diffusion enterprise often lacks supports for (and even discourages) school-based adaptation, and it often lacks mechanisms for “feeding back” and circulating new knowledge as it emerges throughout the enterprise (as there is little emphasis on supporting schools in generating new knowledge). While a diffusion enterprise can support quickly establishing new, base-level operations, it also risks capping implementation and outcomes at base levels due to weak support for exploration: that is, for adaptation, problem solving, and invention that address school-specific problems and needs.


An incubation enterprise also attends closely to practice, though primarily through exploration: that is, by guiding schools in operationalizing principles of practice in adaptive, locally responsive ways. An incubation enterprise often lacks supports for the faithful enactment of established practices, as “effective practice” is viewed as context-specific. An incubation enterprise can succeed in schools with sufficient capabilities to act on “in principle” guidance (e.g., as evidenced by higher levels of past performance). Otherwise, the risk is that, with weak support for exploitation, schools will struggle to design and coordinate new base levels of performance that deviate in significant ways (if at all) from established practices.4


An evolutionary enterprise is one that combines the strengths of diffusion and incubation enterprises in ways that support collaborative, evolutionary learning over time: exploitation (/fidelity), in order to establish conventional, base-level operations; and exploration (/adaptation), in order to adapt in response to local problems and needs. While an evolutionary enterprise has the potential to generate a formal, codified knowledge base supporting the recreation of capabilities in large numbers of schools, it also requires a complex hub organization able to structure and manage distributed, inter-organizational learning, and to capture practical knowledge thus generated in formal routines and guidance.


The framework for developmental evaluation continues with five guiding questions intended to structure the rapid collection and analysis of a parsimonious yet powerful body of evidence (through interviews, document analysis, and participant observation) useful for identifying a given network as operating in accord with one of the four primary types.


(1) Does the enterprise have a formal, codified design for practice? Beyond describing structures and culture, such a design would provide details about who is supposed to do what. It would be evidenced by formal descriptions of essential roles; qualifications for essential roles; principles detailing responsibilities and coordination among roles; and standards and rubrics for assessing the enactment of those roles.

(2) Does the enterprise have an explicit strategy for developing capabilities that combines exploitation and exploration? This would be evidenced by two categories of value statements: those emphasizing fidelity (i.e., faithful implementation of established, formalized practices); and those emphasizing adaptation (i.e., extending and revising implementation to address local needs).

(3) Does the enterprise provide formal, codified guidance and support for recreating capabilities for base-level operations in schools? This would be evidenced by a strong message of “fidelity of implementation” running throughout guidance and training, along with formal routines, guidance, and training that support the faithful enactment of specific, practical tasks associated with particular roles.

(4) Does the enterprise provide formal, codified guidance and support for recreating capabilities for adaptive, locally responsive use? This would be evidenced by strong messages of school-level ownership of (and agency in) the change process, along with formal routines, guidance, and training that support school-based analysis, problem solving, decision-making, design, and other discretionary activity.

(5) Does the hub organization have the infrastructure and capabilities to support the continuous, enterprise-wide production and refinement of a formal, codified knowledge base? This would be evidenced by a communication infrastructure for exchanging knowledge and information among hubs and schools; capabilities for analyzing school performance and outcomes; capabilities for rapid prototyping and small-scale evaluation; and capabilities to codify new knowledge as materials, digital media, or other tools and resources.


A shell enterprise will lack an explicit design for practice (while instead emphasizing structures and culture). A diffusion enterprise will feature a strong design for practice and strong supports for base-level practices, while featuring comparatively weaker supports for adaptive use. An incubation enterprise will feature a strong design for practice and strong supports for adaptive use, while featuring comparatively weaker supports for base-level practice. An evolutionary enterprise is likely to be strong on each of the preceding dimensions.
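

Read as a decision rule, the interpretive scheme and the five guiding questions can be combined into a simple classification sketch. The sketch below is ours, not an instrument from the study; the binary ratings, field names, and thresholds are illustrative assumptions.

```python
# A sketch (ours, not the authors') of the interpretive scheme: mapping
# qualitative ratings on the five guiding questions to one of the four
# primary enterprise types. Binary "weak"/"strong" ratings are an
# illustrative simplification.
from dataclasses import dataclass

@dataclass
class NetworkEvidence:
    design_for_practice: str         # Q1: formal, codified design for practice
    dual_learning_strategy: str      # Q2: explicit exploitation + exploration strategy
    base_level_supports: str         # Q3: supports for faithful, base-level enactment
    adaptive_use_supports: str       # Q4: supports for adaptive, locally responsive use
    hub_knowledge_capabilities: str  # Q5: hub infrastructure for codifying knowledge

def classify(e: NetworkEvidence) -> str:
    """Map evidence to shell, diffusion, incubation, or evolutionary."""
    if e.design_for_practice == "weak":
        return "shell"         # structures and culture only; no design for practice
    strong_base = e.base_level_supports == "strong"
    strong_adapt = e.adaptive_use_supports == "strong"
    if (strong_base and strong_adapt
            and e.dual_learning_strategy == "strong"
            and e.hub_knowledge_capabilities == "strong"):
        return "evolutionary"  # strong on every dimension
    if strong_base and not strong_adapt:
        return "diffusion"     # exploitation-centered
    if strong_adapt and not strong_base:
        return "incubation"    # exploration-centered
    return "indeterminate"     # mixed evidence; warrants further inquiry

# Example: the article's reading of NTN at the time of the study.
ntn = NetworkEvidence(
    design_for_practice="strong",
    dual_learning_strategy="weak",
    base_level_supports="weak",
    adaptive_use_supports="strong",
    hub_knowledge_capabilities="weak",
)
print(classify(ntn))  # -> incubation
```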


AN EXPLORATORY CASE: THE NEW TECH NETWORK


We continue by exploring our framework for developmental evaluation as a tool for analyzing school improvement networks as learning systems. We do so in the context of a broader study examining efforts within one leading school improvement network—the New Tech Network—to improve instructional practice while building the educational infrastructure needed to do so (Lenhoff, 2013). The broader study ran from May 2010 to May 2012.


OVERVIEW: THE NEW TECH NETWORK


The New Tech Network (NTN) is a high school-level improvement network that pursues college and career readiness through “deeper learning” and the development of “21st century skills” (e.g., critical thinking, oral communication, collaboration, and creativity). As detailed in Table 1, NTN functions in accord with our conceptualization of school improvement networks. At the close of this study, NTN featured a central hub organization, a nationwide network of more than 120 high schools and middle schools, and a school-wide model for implementing interdisciplinary, project-based learning. NTN supports new schools, whole-school conversions, and schools-within-a-school. At the time of this writing, NTN reports operating in over 150 high schools and middle schools.


Table 1. New Tech Network: Overview as of 2012/2013

Hub Organization

- 1996: Established in a single school in Napa, California, as part of a four-year effort by education, business, and community leaders to reimagine high school education.
- 2003: Secured a $6 million grant from the Gates Foundation to establish the New Tech Foundation.
- 2009: Acquired by the KnowledgeWorks Foundation of Cincinnati, Ohio.
- Staff as of 2012/2013: 15 staff members in the central office in Napa, California; 30 field-based development and training staff.

Schools

- From 1996 to 2013, grew from one school in Napa, California, to 125 schools in 19 states (119 high schools and six middle schools).
- 37% of schools were in urban districts, 25% in suburban districts, and 38% in rural districts.
- 50% of students were female, 57% were students of color, 50% were eligible for free or reduced-price lunch, and 5% were English language learners.
- Projected growth was in weaker schools (owing to available funding for school improvement) and in middle schools (to establish a feeder pattern into high schools).

Core Design Elements

- A common design for interdisciplinary, project-based learning intended to transform schools’ core instructional capabilities in all academic content areas.
- Extensive use of information technology, including one-to-one student computing.
- A focus on establishing a culture of trust, respect, responsibility, and accountability.
- A focus on establishing external partners to support implementation and effectiveness, including local businesses, colleges, universities, and government agencies.

Costs/Support to Schools

- Fees for the initial, 4.5-year contract are between $450,000 and $500,000, with continuation fees estimated at $20,000 per year.
- Support includes:
  - Access to Echo, the New Tech Network’s online learning management system.
  - Five days of initial training in the summer preceding Year 1 implementation.
  - A minimum of seven days of site-based support from a New Tech coach.
  - Approximately two weeks per school year of facilitated collaboration among groups of geographically proximal schools.
  - Two two-day leadership summits.
  - An annual three-day conference.

With its school-wide improvement model, NTN focuses on reforming the structure and culture of schools, both as a condition for program adoption and in the context of implementation.5 The language of “fidelity of implementation” is used to underscore the expectation of structural consistency among NTN schools. Yet NTN also goes further, in that it seeks to develop the capabilities needed to effect coordinated changes in practice.


NTN is a strong candidate for developmental evaluation. For example, dependence on public and private investment is likely soon to draw pressure to demonstrate replicable effectiveness on rigorous impact evaluations.6 Further, NTN is advancing a complex, evolving model for school-wide improvement in increasingly weak schools, in turbulent state and national policy environments, and with support from a growing and evolving hub organization.7 Finally, NTN faces circumstances that align with the core assumptions underlying our framework for developmental evaluation:


NTN’s success depends on developing new practices for (and understandings of) interdisciplinary project-based learning in newly recruited schools—practices unlikely to be strongly represented in the prior professional preparation and experiences of teachers and school leaders.

It would be difficult for schools to create such knowledge and capabilities independently, absent external support.

The scale, geographic distribution, and rapid growth of NTN challenge social mechanisms for producing, using, and refining practical knowledge and, thus, warrant the development and use of formal resources.8


STUDY DESIGN


We designed our analysis as an exploratory, longitudinal, embedded case study (Scholz & Tietje, 2002; Stebbins, 2001; Yin, 2009). NTN functions as the case. Within NTN, we examined three distinct subunits and the relationships among them: the NTN hub organization, the NTN school-wide improvement model, and three diverse New Tech schools in one state that began implementation in school year 2010/2011.


School 1, located in a midsize suburb, was a school-within-a-school: specifically, a second small school within a larger comprehensive high school. More than 95% of its students were White, and 16% qualified for free or reduced-price lunch.

School 2, located in a small city, was a diverse new school that recruited students from the nearby comprehensive school. About half of the students were White, and 60% qualified for free or reduced-price lunch.

School 3, located in a large industrial suburb near an urban center, was a whole-school conversion as part of a state-mandated turnaround effort. More than 80% of its students were Black or Hispanic, and 78% qualified for free or reduced-price lunch.


We further examined this case as situated in environments consisting of four key components: policy, regulatory, and other institutional supports; resource endowments; market functions; and proprietary activity (Glazer & Peurach, 2013; Lenhoff, 2013).


DATA COLLECTION


Our data collection and analysis were intentionally structured to be consistent with the notion of developmental evaluation: that is, as a context within which we could function as “critical friends” who provided feedback and analysis over the course of the study.


Consistent with methods of organizational ethnography (Brewer, 2004; Fine, Morrill, & Surianarain, 2008; Lee, 1999), data collection included the accumulation of documents and artifacts, participant observation, and interviews. It also included the regular review of Echo, NTN’s online learning management system: a repository of hundreds of school-created projects, hub-created projects, background materials, and guidance for implementation.


Our participant observation included: six day-long site visits in each of the three schools; two statewide professional development sessions; four national conferences; and nine formal and informal meetings among NTN leaders, staff members, and district coordinators. We also conducted two sessions at NTN’s annual conference in the summer of 2011. These sessions focused on fostering conversation about synergies between fidelity and adaptation. Additionally, we collaborated with a regional education service agency to co-facilitate a standing “professional learning community” for directors of New Tech schools in one state.


Finally, we conducted 20 semi-structured interviews with 17 participants involved in the implementation of the New Tech program in the three schools participating in our study (two superintendents, two school directors, 10 teachers, two regional district coordinators, and one NTN school development coach). Further, we conducted eight semi-structured interviews with executives, developers, and trainers in the NTN hub (including staff members who had been with the network since its inception). We complemented our interviews with ongoing, informal conversations with staff from participating schools and districts, the NTN hub, and the regional education service agency that supported the professional learning community for New Tech leaders.


ANALYSIS


We used iterative memo writing as our primary analytical method (Miles & Huberman, 1994), concurrent with (and in interaction with) our data collection. In addition, we gave explicit attention to leveraging principles of positive organizational scholarship in maintaining an empathetic yet critical stance in seeking to identify and report strengths and vulnerabilities within the network (Cameron & Spreitzer, 2011; Dutton, Quinn, & Cameron, 2003). For our broader study, this involved categorizing and reporting evidence about schools, the program, the hub organization, and broader environments. For the developmental evaluation, this involved categorizing and reporting evidence using the framework described above.


VALIDATION


Longitudinal, iterative data collection and analysis created opportunities to validate our emerging interpretations through extended observation, triangulating among categories of evidence, formal and informal member checking, resolving “negative cases,” and mining the academic literature. Particularly important were interviews with staff members in the NTN hub, which provided opportunities to present, discuss, and refine the findings reported below.


FINDINGS


Our primary finding is that, at the time of our study, NTN was operating as an incubation enterprise, with a weak emphasis on exploitation (i.e., faithful enactment of established procedures and methods) and a strong emphasis on exploration (i.e., adaptive, locally responsive use). With that, our framework predicts variability in practice as a function of schools’ prior knowledge of (and capabilities for) project-based learning. Indeed, our analysis of implementation in three schools supports this prediction.9


These findings should not be interpreted as an assessment of NTN as it currently operates. As reported below, our study closed with NTN initiating a set of improvement efforts suggesting movement toward an evolutionary enterprise. Rather, these findings demonstrate the usefulness of our framework for developmental evaluation in critically analyzing school improvement networks as learning systems.


THE FIVE GUIDING QUESTIONS


We began by using our five guiding questions to generate and analyze evidence of key components of NTN as a learning system.


Design for Practice


At the time of our study, NTN had a formal design for practice featuring four key roles, the responsibilities of which were formalized in online materials and evaluation rubrics as guiding principles and standards of performance.10

Teachers were to collaborate in pairs to design, enact, and assess interdisciplinary project-based learning opportunities that incorporated technology in novel ways, responded to district and state standards, and leveraged community partners.

Students were to collaborate in small groups to engage academic content in the context of co-enacted projects, using information technology (rather than textbooks) as a primary resource and producing artifacts (rather than conventional assessments) as evidence of content mastery and skill development.

Akin to principals, directors functioned as the primary onsite change agents, with responsibility for instructional organization and management, program administration, recruiting students and teachers, serving as the primary liaison with the hub, and maintaining relationships with community partners.

Advocates functioned as supplemental, school-level leaders responsible for serving as liaisons to the hub and to the New Tech coach, organizing agendas for coaching days, and serving as teacher–leaders on project-based learning.


Strategy for Developing Capabilities


The NTN hub organization recognized the essential challenge of recreating the above-described capabilities in new NTN schools. As one NTN staff member explained in a spring 2012 interview:


It’s really hard for people to take all this great technical expertise and know how to place it into a school and then use it as a tool. . . . We have coaches and people in our organization, the “why” and the purpose is in their bones. It is now tacit for them. It is a part of who they are. And, so, how do we stop and make sure that it becomes a part of who these new school leaders are so they can build that in teachers?


NTN’s strategy for doing so emphasized exploration rather than exploitation, with each school charged with using NTN-provided resources and methods to operationalize interdisciplinary, project-based learning in locally responsive ways. Put plainly by one NTN staff member in a spring 2012 interview, the goal is “to use our design to support their vision.” Another NTN staff member explained further that “as a school development organization, we’re trying not to talk about the single model but, rather, design principles, because we think that that’s a better way of being more inclusive about what we do in co-designing the schools.” Explanations for pursuing an exploration-centered strategy focused on:


The high costs of developing formal resources to support project-based learning in all content areas and grade levels.

Perceived variability in schools, districts, and states that complicated formalizing widely useful materials and resources.

Beliefs in both the value of and possibilities for school-based invention.

A general aversion among NTN staff members to formal routines and guidance for practice, which they referred to skeptically as “cookie cutter” and “plug and play” approaches.


Support for Recreating Base-level Operations


In NTN, designs for base-level operations were codified most directly in performance rubrics that detailed, in principle, the essential practices associated with key roles. However, NTN did not provide formal routines and guidance for use in actually establishing base-level operations faithful to those principles.


For example, NTN’s teacher evaluation rubric described “proficient,” base-level teachers as (among other things) effectively managing groups, differentiating instruction, remediating within the context of a project, building skills for student collaboration, and regularly assessing learning outcomes. However, NTN did not provide new teachers with a set of detailed and tested instructional projects that provided specific routines and guidance for enacting and coordinating these practices immediately, at an acceptable performance level.


Similarly, NTN’s leadership evaluation rubric described “proficient” leaders as (among other things) establishing a clear vision and mission, modeling cultural expectations (including open communication and constructive feedback), supporting teachers’ professional development, developing and nurturing external relationships, and meeting district and legal requirements. But, again, NTN did not provide routines, protocols, and guidance for enacting and coordinating these practices immediately, at an acceptable performance level.


The closest approximation to a source of readily useable routines and guidance was Echo, NTN’s online learning management system. However, teachers and school leaders uniformly expressed frustration with Echo as a source of readily useable projects and guidance. Indeed, NTN staff reported that Echo was intended to provide models of well-designed projects that could be emulated or adapted. Even then, they reported that weak vetting procedures resulted in the incorporation of many low-quality projects. Further, while Echo contained an array of articles, reports, and other information, this guidance was simply available as a sort of “knowledge sharing,” and not synthesized into a coherent, practice-aligned knowledge base.


Support for Adaptive, Locally Responsive Use


The essential capabilities to be recreated across NTN schools are those for designing and implementing interdisciplinary instructional projects that are adapted to local circumstances and responsive to local needs. As one New Tech staff member explained in a spring 2012 interview, “For schools to own it, they need to design it.” This work begins immediately, school-wide, during summer training prior to Day 1 of Year 1 as a New Tech school, and it continues thereafter. In our analysis, formal supports for this work were variably developed, with strong support for school-based design and weak support for school-based evaluation, reflection, and learning.


Much as the language of “fidelity of implementation” was used in NTN to underscore the expectation of structural consistency among schools, it was also used to underscore the expectation that schools throughout the network would use a common process for designing school-specific resources that adhere to common quality standards. Formal resources supporting this work included a “Project Planning Form” to guide the process of designing a project from scratch; features embedded within Echo for organizing, using, and sharing newly created projects in conventional ways; and rubrics that provided “in principle” standards for evaluating the comprehensiveness, rigor, and relevance of projects.


However, NTN lacked formal supports for teachers in actually using school-designed projects to support project-based learning that met NTN’s standards for proficiency. Yet implementing project-based learning in the context of daily instruction requires capabilities that are analytically and practically distinct from designing instructional projects.


Further, while NTN’s performance rubrics advocated for teachers and school leaders to reflect on (and improve) school-designed projects, NTN did not provide a formal process for reviewing and revising teacher-designed projects in response to implementation or outcomes, nor did it structure a process for the repetitive enactment-and-refinement of projects within or between schools. Unless drawn from Echo and used without modification, iterative use of a project was limited to teachers reusing their own, self-designed projects, such that a second iteration would not occur until a year after the first (if at all).


The closest approximation of a formal process for improving self-designed projects was a “Critical Friends Protocol” structuring collaborative assessment, evaluation, and reflection. However, beyond the protocol itself, such work was weakly supported by coordinated routines and guidance for assessing, evaluating, and reflecting on the implementation and outcomes of projects. Moreover, the use of the protocol was not formally structured into the work of New Tech schools. Rather, the protocol was used primarily in the context of school visits by NTN coaches (roughly seven days per school year) or at the initiative of individual school leaders.


Weaknesses in formal supports were addressed through extensive reliance on social supports for school-based design and improvement. These included: an NTN coach who served as the primary liaison between schools and the hub organization; initial visits to experienced schools to establish a vision for success; two days of “shadowing” opportunities for new leaders and staff prior to Year 1 implementation; and regularly scheduled “Meetings of the Minds” that brought together teachers from geographically proximal schools.


Yet the ratio of social supports to the scope and depth of change in New Tech schools (i.e., school-wide, interdisciplinary, technology-mediated project-based learning) was small. Further, as would be predicted by our initial review, the rapid growth of the network led to a set of conditions that complicated the use of social supports, including:

the geographic distance separating hubs and schools;

cultural and logistical obstacles to moving staff among classrooms and schools;

small ratios of expert and proficient teachers, school leaders, and coaches as compared to novices;

personnel transiency; and

variability in local environments.


Hub Infrastructure and Capabilities


NTN staff members reported a culture of learning, innovation, and healthy competition among staff members that supported an open exchange and debate of information, and that drove a press for continuous improvement. Even so, at the time of our study, the infrastructure and capabilities in NTN to produce, refine, and formalize practical knowledge were, again, variably developed.


For example, NTN featured a highly developed communication infrastructure for moving information through the network. Formal, structural resources included regularly scheduled meetings and conferences (local and national), advisory groups of high-performing teachers and school leaders, and demonstration sites that served as key thought partners. Social resources included relationships between NTN coaches, leaders, and teachers; relationships with key thought partners (including the KnowledgeWorks Foundation and the Hewlett Foundation’s “Deeper Learning” initiative); and relationships with several universities.


Even so, NTN lacked infrastructure and capabilities for leveraging this information as a chief input to (and resource for) producing, refining, and formalizing practical knowledge. For example, the NTN hub did not have tools, systems, or capabilities for measuring and analyzing implementation and outcomes to identify promising practices. Further, though Echo served as a potential platform for formalizing routines and guidance, NTN had no procedures in place for identifying and vetting high-quality projects, and no established tradition of actually using them (and, again, assessing outcomes). Finally, NTN had no means or methods to assess the effect of program modifications on network-wide implementation and outcomes.11


The result, thus, was that practical knowledge emerging throughout NTN was retained primarily socially, not formally, in individual people and small groups. Further, it was exchanged socially, not formally, person-to-person and group-to-group. Finally, absent rigorous evaluation and review, it existed more as craft knowledge than as evidence-based knowledge.


But, again, this heavy reliance on the social production, use, and refinement of knowledge was challenged by (a) the breadth and depth of knowledge needed to recreate capabilities for project-based learning in a large, varied network of schools, and by (b) conditions that complicated the use of social mechanisms (e.g., geographic distance; practical constraints on person-to-person, group-to-group exchange; an increasingly small ratio of experts to novices; and personnel transiency).


INTERPRETATION: NTN AS AN INCUBATION ENTERPRISE


Based on the preceding analysis, our interpretation is that, at the time of our study, NTN was operating as an incubation enterprise. (In spring 2012 interviews, two staff members actually used the term “incubator” in describing NTN’s strategy for supporting schools.) NTN featured a formal design for school-wide project-based learning, along with formal guidance and professional learning opportunities to support schools in developing interdisciplinary, project-based learning activities sensitive to local context. At the same time, NTN featured:


a learning strategy that privileged exploration over exploitation;

weak supports for establishing base-level operations in new schools;

weak supports for school-based assessment, evaluation, and reflection;

weak capabilities in the NTN hub to support the production, use, and formalization of practical knowledge; and

a reliance on social supports to compensate for the preceding weaknesses in formal supports.


Operating as an incubation enterprise can have advantages, including stemming school-level resistance to external intervention, motivating ownership and agency, and leveraging local knowledge and capabilities. However, realizing this potential would require carefully managing the network both (a) to reduce the demands on the network to produce, use, and refine a formalized practical knowledge base, and (b) to construct conditions that support the social production, use, and refinement of knowledge. Potential means of doing so include:


operating at a scale and geographic distribution that would support the social development of knowledge and capabilities;

limiting the scope of capabilities to be created within schools (for example, by focusing on a small number of content areas); and

recruiting schools with a critical mass of teachers and school leaders with prior capabilities for both project-based learning and iterative assessment, evaluation, and reflection.


Yet, at the time of our study, NTN was not managing the network either (a) to mitigate demands on producing, using, and refining a formalized practical knowledge base, or (b) to construct conditions that favored the social production, use, and refinement of knowledge. Rather, NTN was aggressively expanding the scope of the network to over 120 high schools distributed around the country while, at the same time, beginning implementation in middle schools in order to establish “feeder patterns” into high schools. Further, NTN was seeking to develop capabilities for project-based learning in all academic content areas and at all grade levels. Finally, rather than recruiting schools with prior capabilities for project-based learning and self-managed improvement, NTN was expanding into schools that varied widely in their initial capabilities and in their histories of self-managed improvement, including schools undergoing mandated turnaround efforts for low student performance.12


RISKS FOR IMPLEMENTATION AND OUTCOMES


The interaction between (a) operating as an incubation enterprise and (b) variability in initial capabilities among newly recruited schools risks a “Matthew effect.” The stronger the initial capabilities of a given school, the greater the potential to leverage supports for school-based exploration and adaptation, and the greater the potential to compensate for weaknesses in supports for base-level operations and for continuous improvement. The weaker the initial capabilities, the lower the potential to leverage the strengths of NTN-provided supports and to compensate for weaknesses.


That is precisely the performance pattern that we observed in our implementation analysis in three case study schools (Lenhoff, 2013).


Many teachers at School 1 had practiced project-based learning in previous contexts. Further, the district was so supportive of the effort that it was considering shifting its middle schools to the NTN model. These initial capabilities and appreciations for NTN buoyed the school’s efforts to transform practice. While School 1 still experienced implementation challenges (including within-school variation in the design and implementation of project-based learning), School 1 represented the closest approximation to the NTN model.

While School 2 featured veteran teachers experienced in traditional instruction, it did not have a critical mass of teachers with prior experience with project-based learning, nor widespread commitment among teachers to the shift to project-based learning. Further, it featured a brand-new school leader in the director role who had difficulty gaining the trust, respect, and commitment of teachers. With weak initial capabilities, weak commitment, and weak leadership, School 2 experienced half-hearted, tension-filled, and generally weak implementation of project-based learning.

As part of a state-required turnaround effort, the teachers and leaders at School 3 were largely skeptical of NTN from the start. Many of them demonstrated weak traditional instructional practice, with no prior capabilities for project-based learning. When they attempted to make the shift to project-based learning, they often resorted to truncated interpretations of the NTN model, such as interpreting “student-directed learning” as “let the students do whatever they want.”


In spring 2012 interviews, NTN staff members reported recognizing many of the issues identified in our developmental and implementation evaluations. For example, they reported weaknesses in Echo as a resource for teachers and school leaders, especially in relation to the accumulation of weak and untested projects. Further, they reported a weakening of social support mechanisms, especially with respect to recruiting and developing new coaches to support new schools. Finally, they reported observing teachers with weak initial capabilities adhering tightly to NTN-supplied guidance for designing projects and feeling successful as a result, despite weaknesses in implementing these projects at base levels of proficiency.


At the close of our study, these developing understandings among NTN staff members were contributing to an agenda for improvement that had potential to move NTN toward an evolutionary enterprise, to reduce variation in implementation and outcomes among NTN schools, and to increase the prospects of successful impact evaluation. Key initiatives included:


eliminating low-quality projects from Echo;

experimenting with a set of conventional “starter projects” for use in new schools;

developing capabilities in the NTN hub to create new types of material and digital supports for new teachers, school leaders, and coaches; and

developing systems and capabilities to support more rigorous evaluations of program effectiveness.


A productive beginning point for further research on NTN would be to focus on the design, implementation, and effects of these initiatives.


DISCUSSION AND CONCLUSION


With high school reform central to the national reform agenda, and with school improvement networks a leading strategy for large-scale reform, this analysis was motivated by recognition that the legitimacy and viability of school improvement networks is increasingly tied to success in rigorous impact evaluations. In contrast to the conventional evaluation focus on program implementation and outcomes, the purpose of this analysis was to explore a framework for developmental evaluation focused keenly on the logical antecedents to effective implementation and outcomes: specifically, matters related to developing, using, and refining a formal knowledge base supporting the large-scale replication of capabilities.


Our case study of the New Tech Network provides evidence of the possibility of using this framework to critically examine school improvement networks as “learning systems” capable of producing a formal knowledge base supporting effective, large-scale operations in a diverse pool of schools. With that, this analysis represents a first step toward formalizing methods of developmental evaluation as a form of improvement-focused evaluation. Further advancing our framework and its use will require attention to three key issues:


(1) Developing formal routines and guidance that could be used reliably and effectively to generate evidence using our five guiding questions; to analyze that evidence within our interpretive scheme of shell, diffusion, incubation, and evolutionary enterprises; and to structure conversations with network stakeholders about findings and possible next steps.

(2) Developing means of doing the preceding within the flow of the day-to-day operations of the network, such that feedback is both timely and presented in a form that is understandable and useful to network stakeholders.

(3) Examining conditions that enable and constrain network stakeholders in using this feedback to make strategic decisions about the management of practical knowledge in their networks.


This third point is especially pressing. Indeed, evaluation use has received formidable attention from researchers, policymakers, funders, and others for decades, with a key theme being the difficulty of making effective use of evaluation processes and results.


Conducting and using the type of developmental evaluation proposed here promises to be especially tricky. Some of the challenges lie within networks. For example, the current form of a given network (i.e., shell, diffusion, incubation, or evolutionary) likely has complex roots that would need to be understood, untangled, and reconsidered. These include ideological orientations toward school improvement, resource limitations, public identity, and endogenous environmental conditions that encourage specific approaches to improvement in specific types of schools. Some of the challenges lie beyond networks. These include recruiting researchers willing to take up the cause, persuading funders to support it, and creating political cover for networks willing to open their enterprises to a new and uncertain form of evaluation.


One of the most formidable challenges within and beyond networks lies in understanding the dynamics between exploitation and exploration as complementary learning processes. Again, in the United States, “fidelity of implementation” and “adaptive use” have long been viewed as antithetical, and bound up with deeply held beliefs and ideologies about power, control, authority, and knowledge in schools. Yet there is much to suggest that generating the practical knowledge needed to support large-scale high school improvement depends on understanding and managing exploitation and exploration as interdependent and synergistic.


These are challenges worth identifying and confronting. Indeed, the payoff could be formidable: improved returns on billions of dollars in public and private investment, certainly; but, more importantly, improved educational experiences and outcomes for millions of students otherwise underserved, both by their public high schools and by the reformers charged with improving or replacing those schools.


The alternative is to not act on the argued and demonstrated value of developmental evaluation: that is, to stay the course; generate predictably equivocal (and likely weak) impact evaluations; and, in the absence of clear evidence of positive program impact, withdraw support. Doing so fails to recognize the complexity and uncertainty that appear to be endemic to large-scale educational reform, and the learning required to confront that complexity and uncertainty. It also risks loss of intellectual capital that accumulates in school improvement networks over time through collaborative, experiential learning, as retained both socially (e.g., in communities of practice) and formally (e.g., in routines, guidance, tools, and artifacts).


These failures and risks are avoidable—but only if policymakers, philanthropists, and other patrons of large-scale educational reform efforts recognize uncertainty and complexity as legitimate, and only if they continue to support the development and use of improvement-focused evaluation as a complement to impact evaluation.


Notes


1. The federal Investing in Innovation (i3) program is a case in point: a multi-year, multi-billion dollar funding stream that has provided support for leading school improvement networks, though on the condition that funded initiatives include rigorous impact evaluations. The i3 program is one component of the Obama administration’s efforts to advance the use of evidence in support of social policy and innovation—efforts described by Haskins and Baron (2011) as the most expansive in the history of the U.S. government.

2. Exploitation and exploration have roots in March (1996), and they parallel “single loop learning” and “double loop learning” as advanced by Argyris and Schön (1978).

3. See, also, Adler and Borys (1996) on formalization and fidelity as “enabling” (rather than “coercive”) bureaucratic forms, as well as Shedd and Bacharach (1991) on synergies between “control” and “commitment” in organizing and managing professional work in organizations that also have complex relationships with their environments.

4. For a classic case study of this phenomenon, see Cohen (1990) on the efforts of an elementary school teacher to reinvent her instruction in response to calls for more ambitious teaching and learning. Also, see Baden-Fuller and Winter (2012) for broader organizational scholarship analyzing the comprehensive preconditions that support favorable responses to in-principle guidance for practice. Indeed, in education, two enduring risks follow from the use of in-principle guidance and site-based incubation in schools with weak initial capabilities. One risk is captured in the adage that teachers and leaders will change programs more than programs change teachers and leaders (and in ways that resemble extant practice). The other risk is rapid abandonment and quick regression to past practice.

5. For example, prior to program adoption, NTN uses a “Conditions for Success” rubric to evaluate school design and culture, instructional organization, technology, facilities, external partnerships, and staffing. During follow-up implementation visits, it uses a “School Success Rubric” to evaluate the further establishment and maintenance of key organizational arrangements.

6. For example, in 2012, NTN was awarded an i3 development grant, which brings with it extensive evaluation requirements. As one NTN executive explained in a spring 2012 interview, “We need to be able to demonstrate that the work we do can be replicated—that we can maintain quality and reproduce the same impact, the same results, in a myriad of communities, types of schools, and types of students.” Yet, at the time of our writing, we could not identify any rigorous internal or external peer-reviewed evaluations showing statistically significant, replicable program effects on student outcomes relative to non-NTN schools.

7. For example, regarding environmental turbulence, consider the uncertainty created for any national network by state-to-state NCLB waivers, the Common Core State Standards movement, and the reauthorization of the federal Elementary and Secondary Education Act.

8. The initial conference paper in which we reported this analysis included a comprehensive review of available resources supporting project-based learning (Peurach, Lenhoff, & Glazer, 2012). We identified traditions of academic theory and research on project-based learning (Hmelo-Silver, 2004; Thomas, 2000), organizations promoting project-based learning (the Buck Institute for Education), and alternative educational systems organized around project-based learning (Montessori and Waldorf). Even so, we did not identify an integrated body of professional knowledge on which schools could draw to support the school-wide enactment of interdisciplinary, project-based learning, nor any major, university-based preparation programs organized around project-based learning from which schools could recruit teachers and school leaders. Among the most highly developed programs of research and development on project-based learning was a collaboration between the University of Michigan and Northwestern University focused on the enactment of technology-mediated, project-based science in 26 high-poverty middle schools and high schools (Blumenfeld, Fishman, Krajcik, Marx, & Soloway, 2000; Krajcik & Blumenfeld, 2006). Over seven years, these researchers succeeded in developing five project-based science units that, with coaching, could be enacted successfully at a large scale (Krajcik & Blumenfeld, 2006). Moreover, they linked their success to iterative, collaborative design research featuring both exploitation and exploration (Blumenfeld et al., 2000; Krajcik & Blumenfeld, 2006). While concerned that heavier-than-anticipated formalization made the program “somewhat closed” when compared to their original vision, they acknowledged this “closed-ness” as a necessary tradeoff for effective implementation and outcomes at scale (Krajcik & Blumenfeld, 2006, p. 673).

9. What follows is a brief summary of our primary findings. For more extensive elaboration of these findings, see Peurach et al. (2012) and Lenhoff (2013).

10. These include the “Teacher Rubric,” “Principal Evaluation Rubric,” “School Success Rubric,” and “NTN Site and Advocate Duties” document, all available to schools through Echo, NTN’s online learning management system.

11. The NTN hub had procedures in place to collect common performance indicators across schools (e.g., performance on state accountability assessments, AYP status, graduation rates, ACT and SAT scores, college course credits earned, college application and acceptance rates, and post-secondary enrollment). However, NTN staff reported problems in both collecting and analyzing these data: for example, schools not reporting their data, school-within-a-school organizational arrangements that complicated disaggregating data about NTN students, and variation among state accountability assessments. Further complicating matters was the lack of a standard, valid metric across schools. Rather, NTN reported that “currently, a single standard of measurement does not exist that can assess our vision of achievement. Most standardized tests simply do not measure critical thinking, collaboration, creativity, and communication skills” (New Tech Network, 2012).

12. For example, in our analysis, NTN’s “Conditions for Success” rubric (the primary resource for evaluating schools’ readiness to begin implementation) focused exclusively on structural, cultural, and administrative conditions, absent any assessment of existing capabilities for project-based learning or for school-driven, continuous improvement. Moreover, NTN staff members varied in their anecdotal assessments of the existing capabilities of newly recruited schools. For example, of eight NTN staff members interviewed in spring 2012, six recognized increasing variation in the initial capabilities of newly recruited schools, owing primarily to increasing numbers of low-performing schools seeking membership in the network. By contrast, two others did not recognize such variation.

13. Regarding evidence summarized in this table: On the founding of NTN, see Borja (2002). Beyond the Gates Foundation, other supporters of NTN include the Carnegie Corporation of New York, the Hewlett Foundation, the Steelcase Corporation, the Toshiba Corporation, and the Buck Institute for Education. Trajectories for projected growth derive from spring 2012 interviews with NTN staff members and from NTN being awarded a federal i3 grant to support expansion in South Carolina, with a focus on what NTN described as “two of the nation’s persistently lowest achieving, lowest income, most economically under-resourced rural communities” (New Tech Network, 2015). Lastly, note that in 2014, the KnowledgeWorks Foundation spun off NTN as an independent, nonprofit organization.


References


Adler, P. S., & Borys, B. (1996). Two types of bureaucracy: Enabling and coercive. Administrative Science Quarterly, 41(1), 61–89.

Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25.

Argyris, C., & Schön, D. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison Wesley.

Baden-Fuller, C., & Winter, S. G. (2012). Replicating organizational knowledge: Principles or templates? Philadelphia, PA: University of Pennsylvania, Wharton School.

Blumenfeld, P., Fishman, B. J., Krajcik, J. S., Marx, R. W., & Soloway, E. (2000). Creating usable innovations in systemic reform: Scaling up technology-embedded project-based science in urban schools. Educational Psychologist, 35(3), 149–164.

Borja, R. R. (2002, May 29). Improving the practice of large-scale school reform. Education Week, 21(38), 26–31.

Brewer, J. D. (2004). Ethnography. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 312–322). Thousand Oaks, CA: Sage.

Bryk, A. S. (2009). Support a science of performance improvement. Phi Delta Kappan, 90(8), 597–600.

Bryk, A. S., Gomez, L. M., & Grunow, A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.

Cameron, K. S., & Spreitzer, G. M. (Eds.). (2011). The Oxford handbook of positive organizational scholarship. New York, NY: Oxford University Press.

Cohen, D. K. (1990). A revolution in one classroom: The case of Mrs. Oublier. Educational Evaluation and Policy Analysis, 12(3), 311–329.

Datnow, A., & Park, V. (2009). Towards the co-construction of educational policy: Large-scale reform in an era of complexity. In D. Plank, B. Schneider, & G. Sykes (Eds.), Handbook of education policy research (pp. 348–361). New York, NY: Routledge.

Dozois, E., Langlois, M., & Blanchet-Cohen, N. (2010). A practitioner’s guide to developmental evaluation. Montreal, Quebec, Canada: The J.W. McConnell Family Foundation.

Dutton, J. E., Quinn, R. E., & Cameron, K. S. (Eds.). (2003). Positive organizational scholarship: Foundations of a new discipline. San Francisco, CA: Berrett-Koehler.

Fine, G. A., Morrill, C., & Surianarain, S. (2008). Ethnography in organizational settings. In D. Buchanan & A. Bryman (Eds.), The Sage handbook of organizational research methods. Thousand Oaks, CA: Sage.

Gamble, J. A. A. (2008). A developmental evaluation primer. Montreal, Quebec, Canada: The J.W. McConnell Family Foundation.

Glazer, J. L., & Peurach, D. J. (2013). School improvement networks as a strategy for large-scale education reform: The role of environments. Educational Policy, 27(4), 676–710.

Haskins, R., & Baron, J. (2011). Part 6: The Obama Administration’s evidence-based social policy initiatives: An overview. In R. Puttick (Ed.), Evidence for social policy and practice: Perspectives on how research and evidence can influence decision making in public services (pp. 28–35). London, England: Nesta.

Hatch, T. (2000). What does it take to break the mold? Rhetoric and reality in New American Schools. Teachers College Record, 102(3), 561–589.

Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235–266.

Krajcik, J. S., & Blumenfeld, P. (2006). Project-based learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 646–684). New York, NY: Cambridge University Press.

Lee, T. W. (1999). Using qualitative methods in organizational research. Thousand Oaks, CA: Sage.

Lenhoff, S. W. (2013). Co-construction, infrastructure, and purpose: Influences on implementation of hub-outlet school reform (Unpublished doctoral dissertation). Michigan State University, East Lansing, MI.

March, J. G. (1996). Exploration and exploitation in organizational learning. In M. D. Cohen & L. S. Sproull (Eds.), Organizational learning (pp. 101–123). Thousand Oaks, CA: Sage.

Mehta, J., Gomez, L. M., & Bryk, A. S. (2012). Building on practical knowledge: The key to a stronger profession is learning in the field. In J. Mehta, R. B. Schwartz, & F. M. Hess (Eds.), The futures of school reform (pp. 35–64). Cambridge, MA: Harvard Education Press.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis. Thousand Oaks, CA: Sage.

New Tech Network. (2012). NTN outcomes—why we gather what we gather. Retrieved May 20, 2012, from http://www.newtechnetwork.org/content/ntn-outcomes-why-we-gather-what-we-do

New Tech Network. (2015). New Tech Network: Funding summary. Retrieved January 2, 2015, from http://www.newtechnetwork.org/sites/default/files/resources/fundingsummary.pdf

Patton, M. Q. (2006). Evaluation for the way we work. Nonprofit Quarterly (Spring), 28–33.

Patton, M. Q. (2010). Developmental evaluation: Applying concepts to enhance innovation and use. New York, NY: Guilford Press.

Penuel, W., Fishman, B., Cheng, B. H., & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational Researcher, 40(7), 331–337.

Peurach, D. J., & Glazer, J. L. (2012). Reconsidering replication: New perspectives on large-scale school improvement. Journal of Educational Change, 13(2), 155–190.

Peurach, D. J., Glazer, J. L., & Lenhoff, S. W. (2016). The developmental evaluation of school improvement networks. Educational Policy, 30(4), 606–648.

Peurach, D. J., Lenhoff, S. W., & Glazer, J. L. (2012, June). Large scale high school reform through school improvement networks: Examining possibilities for “developmental evaluation.” Paper presented at the 2012 Conference of the National Center on Scaling Up Effective Schools, Nashville, TN.

Scholz, R. W., & Tietje, O. (2002). Embedded case study methods: Integrating quantitative and qualitative knowledge. Thousand Oaks, CA: Sage.

Shedd, J. B., & Bacharach, S. B. (1991). Tangled hierarchies: Teachers as professionals and the management of schools. San Francisco, CA: Jossey-Bass.

Stebbins, R. A. (2001). Exploratory research in the social sciences. Thousand Oaks, CA: Sage.

Thomas, J. W. (2000). A review of research on project-based learning. Retrieved May 6, 2012, from http://www.bie.org

Van de Ven, A. H., Polley, D. E., Garud, R., & Venkataraman, S. (1999). The innovation journey. Oxford, England: Oxford University Press.

Yin, R. K. (2009). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: Sage.





Cite This Article as: Teachers College Record, Volume 118, Number 13, 2016, pp. 1–28. https://www.tcrecord.org ID Number: 20622

About the Authors
  • Donald Peurach
    University of Michigan
    DONALD J. PEURACH is an Associate Professor in the School of Education at the University of Michigan. His research focuses on large-scale, network-based improvement initiatives. He is the author of Seeing Complexity in Public Education: Problems, Possibilities, and Success for All (2011, Oxford University Press) and a co-author of Improvement by Design: The Promise of Better Schools (2014, University of Chicago Press). With Joshua L. Glazer and Sarah Winchell Lenhoff, his most recent publication is “The Developmental Evaluation of School Improvement Networks” (Educational Policy).
  • Sarah Lenhoff
    Wayne State University
    SARAH WINCHELL LENHOFF is an Assistant Professor of educational leadership and policy studies in the College of Education at Wayne State University. Her research focuses on instructional improvement, externally supported school reform, and the intersections between accountability policy and practice. She has published numerous policy reports with The Education Trust-Midwest, including Accountability for All: The Need for Real Charter School Authorizer Accountability in Michigan and Michigan Achieves: Becoming a Top Ten Education State. With Donald J. Peurach and Joshua L. Glazer, she has also co-authored a series of articles on the developmental evaluation of school improvement networks. Lenhoff started her career as a middle school teacher in New York City Public Schools.
  • Joshua Glazer
    George Washington University
JOSHUA L. GLAZER is an Associate Professor in the Graduate School of Education and Human Development at George Washington University. His research focuses on the design and implementation of school improvement networks, education reform initiatives for schools serving high-poverty communities, and the structure and function of the education profession. He is currently directing a four-year study of the Tennessee Achievement School District and a three-year study of research-practice partnerships in New York City and Baltimore. With Donald J. Peurach, his most recent article, published in the Harvard Educational Review, uses the concept of “epistemic communities” to examine occupational control in education.
 