
A Model of Continuous Improvement in High Schools: A Process for Research, Innovation Design, Implementation, and Scale


by Lora Cohen-Vogel, Marisa Cannata, Stacey A. Rutledge & Allison Rose Socol - 2016

This article describes a model for continuous improvement that guides the work of the National Center on Scaling Up Effective Schools, or NCSU. NCSU is a research and development center funded by the Institute for Education Sciences, the research arm of the United States Department of Education. At the core of the Center’s work is an innovative process to bring to scale practices that have been shown to improve student achievement in high schools in Broward County, Florida, and Fort Worth, Texas. To do so, the Center’s model of improvement relies on three core principles. First, a prototype is built to reflect the core elements of programs or practices that have been shown to be effective locally. Second, rapid-cycle testing is used to allow the prototype to be revised in ways that adapt it to a particular school context. Third, a researcher–practitioner partnership is employed that strives to both take advantage of and build local ownership and expertise. In so doing, the continuous improvement model addresses well-known challenges faced by those attempting to scale up educational innovations, challenges such as building teacher buy-in and attending to the organizational context in which innovations are to be enacted.

The past several decades have seen a substantial rise in accountability pressures on schools and districts with a concomitant focus on improving large numbers of underperforming schools (Cohen-Vogel, 2011; Cohen-Vogel & Harrison, 2013; MidKiff & Cohen-Vogel, 2015; Osborne-Lampkin & Cohen-Vogel, 2014). These reform efforts, and research about them, have shed light on the challenges of effectively scaling promising educational interventions. Where new practices take hold, they are typically short-lived, limited to a small number of model schools or classrooms, and sit at a distance from the instructional core of schooling (Elmore, 1996). There is growing agreement that the primary problem of scale is a failure to understand the conditions under which teaching and learning take place and to adapt educational interventions to them (e.g., Coburn, 2003; McDonald, Klein, & Riordan, 2009; Means & Harris, 2013).


New approaches such as design-based implementation research (DBIR) and improvement science (IS) have emerged to address this failure, focusing attention not only on the identification and development of effective practices, but also on the processes of implementing and adapting practices to particular contexts so they can be scaled within a system. This chapter describes a model for continuous improvement in education inspired by the DBIR and IS approaches. The model guides the work of the National Center on Scaling Up Effective Schools (NCSU).


NCSU is a five-year research and development center funded by the Institute for Education Sciences, the research arm of the United States Department of Education. At the core of the Center's work is an innovative, collaborative process to bring to scale practices that have been shown to improve student achievement in high schools in Broward County, Florida, and Fort Worth, Texas. To do so, the Center's model of improvement relies on three core principles. First, a prototype is built to reflect the core elements of programs or practices that have been shown to be effective locally. Second, rapid-cycle testing is used to allow the prototype to be revised in ways that adapt it to a school or grade-level context. Third, a researcher–practitioner partnership is employed that strives to both take advantage of and build local ownership and expertise so that changes in practice can be brought to scale with depth and sustainability. In so doing, NCSU's continuous improvement model addresses well-known challenges faced by those attempting to scale up educational innovations, challenges such as building teacher buy-in, attending to the organizational context in which innovations are to be enacted, and aligning new practices with the broader array of programs that exist in schools and districts.


In this chapter, we will first describe each of these principles in turn. We will then describe the ways they are manifest in our four-phase model for continuous improvement in urban high schools. As illustrated in Figure 1, these include the research phase, the innovation design and development phase, the implementation phase, and the scaling-up phase.


Figure 1. The National Center on Scaling Up Effective Schools' four-phase model of continuous improvement




PROTOTYPE BUILT ON CORE ELEMENTS EMBEDDED IN LOCAL PRACTICE


The first principle of NCSU's model of continuous improvement is that any prototype to be tested must reflect the core elements of programs or practices that have been shown to be effective in the district in which the improvement work is occurring. An intervention must not, therefore, be dropped in by researchers or reformers (as well-intentioned as they may be), nor borrowed from a neighboring district.


Importantly, part of identifying these core elements involves collecting practitioners' perspectives about what is working in their own schools and contexts. In The Improvement Guide, authors Langley, Nolan, Norman, and Provost (2009) advise that any work that aims to improve organizational performance should begin with helping participants in the system (in education, teachers and administrators in a school or school district) organize their theories and ideas about potential drivers of improvement.


This principle downplays the assumption embedded in experimental science (and its reliance on randomized control trials) that knowledge and outcomes are embedded in the program and, as such, that an organization (e.g., a school) that aims to benefit from that same knowledge and outcomes needs merely to implement the program with fidelity. Instead, by relying on core elements of practices found in local, high-performing organizations, NCSU's model of continuous improvement acknowledges, as Catherine Lewis (2015) does in Educational Researcher, that knowledge is also "in people and systems" (p. 56).


RAPID-CYCLE TESTING


NCSU's continuous improvement model relies on rapid-cycle testing as its second core principle. The purpose of rapid-cycle testing is to generate knowledge about what changes, in which contexts, produce the desired behaviors and, ultimately, outcomes among students and their teachers. By running multiple tests of small changes (Morris & Hiebert, 2011), the prototype design is revised as the work progresses (Barab & Squire, 2004).


Rapid-cycle testing is critical to NCSU's continuous improvement model because it helps to answer, "What works where, when, and for whom?" (Means & Penuel, 2005). As such, it recognizes a key lesson from the third wave of implementation research in education: that program effectiveness is the product of interactions between policies, people, and places, in short, the local context in which the program is tried (Cohen-Vogel et al., 2015; Honig, 2006).1 Rapid-cycle testing moves beyond research methods that isolate the average effect of a particular intervention across contexts, being sensitive instead to the contextual conditions that are necessary for success.


Through rapid-cycle testing, improvement teams learn from each test, refine the prototype being tested, and test it again with a somewhat larger number of implementers. By encouraging improvement teams both to start small and to test ideas iteratively in the specific environment of interest, the improvement model limits the risks associated with early failure and allows the innovation to be gradually modified, or adapted, to the uniqueness of the system in which it is being implemented (Langley et al., 2009; Tichnor-Wagner, Wachen, & Cohen-Vogel, 2015).


Finally, rapid-cycle testing in context builds buy-in among practitioners in the sites where the innovation is being adopted. Because, as we discuss in detail in the next section, practitioners themselves are involved in innovation testing and adaptation, rapid-cycle testing helps develop buy-in and also provides practitioners with opportunities to learn routines, such as using data to improve their own practice.


AUTHENTIC RESEARCHER–PRACTITIONER PARTNERSHIPS


With authentic researcher–practitioner partnerships, the third principle of its continuous improvement model, NCSU works to disrupt traditional, bounded roles, taking advantage of native knowledge and expertise, ensuring that practices are aligned with local goals and policy initiatives, and boosting the rate at which change can occur. Although pains are often taken in traditional quantitative research to keep researchers outside the intervention being tested, our continuous improvement model purposely involves researchers in innovation design and revision. Through this collaboration, researchers are expected to become smarter "about how to target issues that matter to educators and about how to conduct solid research within the constraints of practicing education systems" (Means & Harris, 2013, p. 360). The role of participants in the research is different too. Whereas traditional research often treats participants (e.g., teachers) as subjects, NCSU's model for continuous improvement includes participants in the design process, involving them as equals in the work.


Bringing partners together in a way that disrupts traditional roles is intended to mobilize collective knowledge-building around complex problems and potential solutions. According to Bryk, Gomez, and Grunow (2010) of the Carnegie Foundation for the Advancement of Teaching, social problems have a complex set of causes that require multiple types of knowledge. Effectively utilizing that knowledge requires a diverse "colleagueship of expertise" that is organized in ways that enhance the efficacy of individual efforts, align those efforts, and increase the likelihood that a collection of such actions might accumulate towards efficacious solutions.


In addition to drawing on a colleagueship of expertise to target the multifaceted causes of social problems, the inclusion of local educators in partnerships with researchers can help ensure that innovations designed to improve processes and outcomes align to the normative cultures and policy contexts of the districts in which the work occurs.


Apart from drawing on the collective expertise of practitioners and researchers, partners can work in communities to boost the rate at which improvement can occur (Engelbart, 1992). Improvement communities in the two districts with which NCSU works represent between five and seven high schools in each district, providing the opportunity for the collaboratively designed prototypes to be tested at the same time in multiple settings. Information about how the prototype works in each setting comprises what Engelbart (1992) called a "knowledge infrastructure" upon which the improvement communities rely as they set plans for taking their innovation to scale.


Finally, by enacting our third principle of continuous improvement and involving partners from the participating districts and schools with researchers in the process of improvement, NCSU strives to leave behind capacity in the district for future efforts to design, implement, and take to scale innovations aimed at solving local problems and enhancing performance. It is not enough, that is to say, for NCSU to leave behind a proven program or practice. Rather, we strive to leave behind a "system of profound knowledge" about how to enact change in an organization (Deming, cited in Langley et al., 2009, p. 75). That system of profound knowledge involves how to build shared ownership, to detect and learn from variations in practice, to build and share knowledge among practitioners, to motivate frontline innovators, and so forth (Lewis, 2015, p. 55).


THE FOUR PHASES OF NCSU'S CONTINUOUS IMPROVEMENT MODEL


The remainder of this chapter describes the four phases of the continuous improvement model being used by NCSU in its work with Broward County Public Schools and the Fort Worth Independent School District: the research phase, the innovation design and development phase, the implementation phase, and the scaling-up phase. In the sections that follow, we describe the phases of work, highlighting how the principles described above are enacted in each.


PHASE I: RESEARCH


In the first phase of the continuous improvement model, researchers with NCSU study higher and lower value-added schools2 in our partner districts in order to identify the programs and practices that likely explain differences in schools' performance. Specifically, using a set of surveys and week-long visits to high schools in each district, we collect data on eight essential elements that research has shown to be related to school effectiveness. The elements are drawn from the work of Goldring, Porter, Murphy, Elliott, and Cravens (2009) and Bryk, Sebring, Allensworth, Luppescu, and Easton (2010) and include, for example, a rigorous and aligned curriculum, quality instruction, and the systemic use of data. A list of the elements, along with their definitions, is available in Table 1.


Table 1. Framework of Essential Components of Effective Schools

Learning-centered leadership: Principals in effective schools set a vision with specific priorities around student learning and facilitate continued school improvement and support for improving instruction through collaborative, shared leadership. They engage both school-level and classroom-level factors to focus staff, resources, and improvement strategies squarely on students' academic and social learning.

Rigorous and aligned curriculum: Effective schools make deliberate efforts to align curricula with state and district standards, across grade levels and with feeder middle schools, and within and across subject areas. They place a high value on and put supports in place to ensure curricular rigor for all students and tracks.

Quality instruction: Teachers in effective schools foster the development of higher order thinking skills, promote creative thinking, embrace rigorous content, meet the individualized needs of their students, and incorporate real-life applications in their classrooms.

Personalized learning connections: In effective schools, actors report strong connections between the students and the school, as well as widely distributed meaningful relationships among students and adults in the school.

Culture of learning and professional behavior: This culture is defined by a shared focus on high expectations for students, where students internalize the cultural values to promote their own academic success. Effective cultures of learning are collaborative, with actors across organizational levels working together to meet the school mission.

Systemic use of data: Effective high schools are data-driven and have information-rich environments, where actors operate in a culture of data use targeted toward improving the learning experiences of students.

Connections to external communities: Effective high schools actively work to build deep, sustained connections between the school, parents, and the larger school community.

Systemic performance accountability: Actors in effective schools feel an individual and collective responsibility for achieving rigorous student learning goals, recognizing and internalizing their individual roles in promoting student learning. Participants respond to external accountability structures in ways that signal that they believe them to be valid measures of school success.

Once collected, data are analyzed to reveal the key differentiating features between higher and lower value-added schools. In Broward County, findings revealed that, in contrast to the lower value-added schools, higher value-added schools personalized student experiences for academic and social-emotional learning (referred to, in short, as PASL). Examples of programs and practices supporting PASL included: (1) looping, wherein students were assigned to the same administrator and guidance team (and, in some cases, teachers) for more than one year; (2) instructional coaching teams and/or small learning communities that were using data in their daily practice to identify students who were struggling and providing them with additional services; and (3) a pervasive "do the right thing" culture (Rutledge, Cohen-Vogel, Osborne-Lampkin, & Roberts, 2015). In Fort Worth, higher value-added schools displayed cultures in which students took ownership of and responsibility for their own learning (or SOAR), and these cultures were facilitated by intentional and systematic school practices. While some of these practices were present on a limited scope in the lower value-added schools, they were not systematic or integrated with broader school goals (see Table 2 for more about PASL and SOAR).


Findings from the research phase become what NCSU calls in its improvement process the design challenge. In Broward, the design challenge focused on implementing a systemic and deliberate school-wide approach to meeting the academic and social-emotional needs of individual high school students. In the area of personalization for academic learning specifically, adults were challenged to set high expectations for student learning, model successful academic behaviors, and employ activities that respond to students' experiences, interests, and learning needs. For personalization for social-emotional learning, adults were challenged to promote a culture of care by providing students with opportunities to share their emotional concerns, problem solve, and establish positive relationships with others. They were also challenged with developing a behavior management system that promoted critical thinking and positive personal and interpersonal skills, standard approaches for managing behaviors, and administrative actions that supported teachers in implementing the system in fair and consistent ways. Finally, the improvement teams were asked to think about the interdependency of these PASL elements and develop practices, programs, and routines that united them.


In Fort Worth, the design challenge focused on creating a set of norms and school-wide practices to foster a culture of learning and engagement among students. It emphasized the need to change students' beliefs and mindsets to increase self-efficacy and to develop practices that teach students how to engage in challenging academic work. The design challenge also specified a set of indicators for what it means when students have ownership and responsibility and the core elements of school practices that foster SOAR (Cannata, Taylor Haynes, & Smith, 2014). As discussed in the next section, these design challenges guided the work in the innovation design phase.


The fact that the prototype was to be built on core elements embedded in programs and practices found within higher value-added schools in NCSU's partner districts was critical for the research phase in two ways. First, it opened up research sites that would likely have been closed to researchers otherwise. Lower-performing schools in both Florida and Texas, which have been operating within high-stakes accountability contexts since the 1990s, are increasingly difficult to access, as administrators work to protect their teachers and are suspicious of outsiders who may be there to play "gotcha." Second, the fact that researchers conducted in-depth case study work that brought them into four of each district's high schools for three weeks imbued researchers with credibility and legitimacy as they collaborated with practitioners in later phases of the work.


The rapid-cycle testing principle was not enacted during the research phase. This phase of NCSU's continuous improvement model aimed to identify programs and practices already operating in the district's high-performing high schools rather than to test a prototype in context.


Within NCSU, partnerships bring together researchers from three universities with school and district practitioners in Broward County, Florida, and Fort Worth, Texas, in what we referred to as District Innovation Design Teams (DIDTs). The research phase of the continuous improvement model did not rely on these DIDTs (indeed, they had not yet been built) or other researcher–practitioner partnerships, but it did have an important role to play in the construction of those partnerships later on. First, the innovation design teams were arguably more easily built because researchers were able to speak knowingly and with depth about programs and policies in high schools throughout the district. Moreover, the research findings, on which the design challenge was built, influenced decisions about who would be asked onto the DIDTs. In Broward, where findings revealed practices that emphasized social-emotional learning, guidance counselors were recruited onto the teams. In Fort Worth, DIDT members were recruited from programs aligned with the design challenge, such as Advancement Via Individual Determination (AVID).


Table 2. Overview of Findings from the Research Phase

Main findings

Broward County Public Schools: The key differentiating characteristic of higher and lower value-added schools was the presence of systemic and deliberate approaches for meeting the academic and social-emotional needs of individual students.

Fort Worth Independent School District: The key differentiating characteristic of higher and lower value-added schools was the presence of systemic practices that integrated academic press and support in ways that intentionally fostered a student culture of learning and engagement.

Design challenge

Broward County Public Schools: Personalization for academic and social-emotional learning

Fort Worth Independent School District: Student ownership and responsibility

Definition of design challenge

Broward County Public Schools: Creating a systemic, deliberate approach that promotes a culture of personalization in which students not only feel safe, but also exhibit a sense of belonging toward the school that, in turn, leads to higher motivation, engagement, and sense of self-efficacy.

Fort Worth Independent School District: Creating a set of norms and school-wide practices that foster a culture of learning and engagement among students by (1) changing student beliefs and mindsets to increase self-efficacy and (2) teaching students techniques to engage in challenging academic work.

Core elements of design challenge

Broward County Public Schools: High academic expectations for students; instruction that responds to students' experiences, interests, and needs; practices that encourage student engagement; model academic behavior; foster anger management and problem solving; cultivate an ethic of care; enact a behavior management system that promotes critical reflection and is positive, fair, and consistently applied.

Fort Worth Independent School District: High academic expectations for students; instructional supports to help students meet high expectations; organizational supports to help students meet high expectations; teach techniques to deeply engage students in academic work.

Key indicators

Broward County Public Schools: Teachers are aware of students' current academic, behavioral, and social-emotional status and goal progression; students are aware of and motivated to progress toward their goals; students have positive relationships with PASL teachers and Educator Team members; students have a sense of connectedness to school; student attendance increases and rates of significant behavioral issues decrease; student achievement and graduation rates increase.

Fort Worth Independent School District: Students believe they can achieve challenging academic tasks; students are personally invested in academic success; students come to class prepared, complete assignments in a timely manner, and seek additional help when they are struggling; students are engaged in class, ask questions when they are confused, monitor their own learning, and attempt to master material with which they struggle; students demonstrate life skills such as initiative and self-direction.

PHASE II: INNOVATION DESIGN AND DEVELOPMENT


Guided by the design challenge, the DIDTs, including researchers as full members of the teams, work during the second phase to formulate ideas for an innovation prototype to be tested in the innovation schools (see Harrison, Wachen, Brown, & Cohen-Vogel, 2015, for a full description of the design and development work). After the prototype is generated, it is shared with design teams at each of the innovation schools that will test it. Members of these school teams offer advice for further developing and refining the prototype.


In Broward County, where the design challenge was to create norms and practices that personalized academic and social-emotional learning, the innovation prototype consisted of six components as follows:


(1) Every ninth-grade student shall be assigned to an Educator Team comprised of a PASL Teacher, Guidance Counselor, and Assistant Principal.

(2) PASL teachers shall conduct regular Rapid Check-Ins (RCIs) with their assigned students and use a tracking form to log these check-ins. Logs will be used to help teachers ensure that they regularly reach out to each of their assigned students.

(3) Problem Solving Meetings shall be conducted by the Educator Team with a student when RCIs suggest the student is in need of academic or social-emotional support.

(4) Educator Teams shall incorporate data on students' academic and social-emotional well-being as they meet to discuss student needs.

(5) School-level implementation teams shall build a PASL culture by communicating PASL goals, promoting student engagement in school activities, and aligning the school mission to PASL.

(6) A Curriculum Tool Box shall be created with lessons on goal setting and anger management. Schools shall have the flexibility to decide when and by whom these lessons will be delivered.


The Fort Worth innovation prototype included five components aimed at facilitating increased levels of student ownership and responsibility for academic learning. They were:


(1) Schools shall teach all students about growth mindsets and strategies for building them through a 45-minute lesson that may be broken up into shorter sessions or expanded through discussion activities.

(2) Schools shall apply growth mindsets. Teachers shall use growth mindset strategies during classroom instruction and encourage students to reflect on their performance through goal-setting forms, reflection exercises, and self-assessments. Schools shall ask students to complete a behavioral reflection sheet when a disciplinary intervention is necessary and discuss their reflection with an adult, and shall expose students to growth mindset culture through posters and other displays.

(3) Schools shall teach all students about the core steps of problem solving through a 45-minute lesson that may be broken up into shorter sessions or expanded through discussion activities.

(4) Schools shall apply problem-solving strategies. Teachers shall reinforce problem-solving steps during classroom instruction and assign related activities every four to six weeks. Students shall have opportunities to make an effort to refine or improve their work. Schools shall expose students to a common culture of problem solving and display problem-solving steps.

(5) Schools shall engage their staffs in ongoing professional learning tied to growth mindsets and problem solving.


The fact that the prototypes were, according to NCSU's first principle for continuous improvement, built on the core elements embedded in local practice played out in the innovation design and development phase in one key way. In both districts, there was a feeling among design team members that the innovation made sense in the context of local practices and fit with a logic model that was part of the district culture. In Broward, for example, practitioner partners spoke easily about how they expected RCIs by PASL teachers to build relationships among adults and students in the school, relationships that would lead to a stronger sense of belonging, increased attendance and engagement, and, ultimately, improved achievement and graduation rates.


While the innovation design work focused primarily on concept and materials development, there was some informal testing of prototype ideas with stakeholders back in the schools. Although there was no formal rapid-cycle testing during this phase, DIDT members used these conversations to introduce both the concept and logic model behind PASL and SOAR, as well as ideas for the emerging prototypes, to their colleagues back at school. This exchange was considered by the DIDTs (and members of the DIDTs from the schools in which the innovations were going to be tried, in particular) to be crucial for setting the stage for future buy-in and implementation.


Indeed, these conversations led the DIDTs in both districts to adopt a complementary design team structure in each school. The School Innovation Design Teams (SIDTs), as they became known, were populated by members of individual school faculties and staffs with input from both DIDT members from the schools and the school principals. The decision to construct SIDTs re-emphasized the third principle of the NCSU continuous improvement model: the importance of authentic researcher–practitioner partnerships. As described below, members of these SIDTs would become key actors in the implementation phase.


During this phase, the value of authentic researcher–practitioner partnerships was revealed in another sense as well. Having borne witness to effective practices enacted in the original study schools, researchers on the DIDTs during this phase (1) helped to ensure appropriate interpretation of the research findings, (2) reminded members about the core elements of the design challenge (especially when ideas went far afield), and (3) provided findings from other studies when the teams were narrowing in on the innovation prototype.


PHASE III: IMPLEMENTATION


Armed with the innovation prototype, the NCSU process moves into its third phase: implementation. The Center's implementation phase is motivated by longitudinal studies of state and federal programs in education that have repeatedly found that mutual adaptation, or opportunities for educators to tailor programs to meet their local needs and circumstances, leads to "support for new program initiatives, the local capacity to run them, and, ultimately, the provision of services to targets of the innovations" (Cohen-Vogel et al., 2015, p. 259; see also Berman & McLaughlin, 1975; Birman, Orland, Jung, Anson, & Garcia, 1987; Jung & Kirst, 1986; Sarason, 1982). The implementation phase allows for mutual adaptation by structuring the work around cycles of improvement in multiple sites at once. In this sense, the implementation phase does not reflect the first principle of building on local practice so much as the second and third principles of rapid-cycle testing and practitioner involvement, respectively.


To encourage implementation through mutual adaptation, NCSU's approach employs rapid-cycle testing of the innovation prototypes in the form of Plan-Do-Study-Act (PDSA) cycles. According to Langley et al. (2009), the PDSA cycle is shorthand for "testing a change in the real work setting by planning it, trying it, observing the results, and acting on what is learned" (IHI, Science of Improvement). Specifically, the PDSA cycle consists of four parts. First, the implementation team plans the test, asking what change (e.g., the prototype or parts thereof) will be tested, with whom and with what measures it will be tested, and what changes are expected as the result of trying out the prototype. Next, the team carries out the test, gathering information on what happened during the test and as a result of it. The team then studies the information gathered during the test, comparing it with predictions made about the prototype's effects. Having studied the information, the team acts, making a decision about whether to abandon the prototype, revise it, or test it out with a larger number of users.
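To make the mechanics of a single cycle concrete, the sketch below is a minimal Python illustration, not NCSU tooling: the plan names the change, the implementers, and predicted values for each measure; the do step records observations; study compares observations with predictions; and act returns a decision to abandon, revise, or scale. All field names, measures, and thresholds are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PDSACycle:
    """One Plan-Do-Study-Act cycle for a single change idea (illustrative only)."""
    change: str                      # the change being tested, e.g., "Rapid Check-Ins"
    implementers: List[str]          # who tries the change in this cycle
    predictions: Dict[str, float]    # Plan: predicted value for each measure
    observations: Dict[str, float] = field(default_factory=dict)  # Do: what was measured

    def study(self) -> Dict[str, float]:
        """Study: compare observed results with the predictions made in the plan."""
        return {m: self.observations.get(m, 0.0) - p for m, p in self.predictions.items()}

    def act(self) -> str:
        """Act: decide whether to abandon, revise, or scale the change (hypothetical rule)."""
        gaps = self.study()
        if all(gap >= 0 for gap in gaps.values()):
            return "scale: test with a larger number of implementers"
        if any(gap < -0.5 for gap in gaps.values()):
            return "abandon or rethink the change"
        return "revise the change and run another cycle"


# Hypothetical first test with four teachers (all values are made up).
cycle = PDSACycle(
    change="Rapid Check-Ins logged on a tracking form",
    implementers=["teacher_a", "teacher_b", "teacher_c", "teacher_d"],
    predictions={"students_reached_per_week": 25.0},
)
cycle.observations = {"students_reached_per_week": 28.0}
print(cycle.study())   # {'students_reached_per_week': 3.0}
print(cycle.act())     # scale: test with a larger number of implementers
```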


In Broward County, the innovation design teams decided to test out RCIs in each of the three innovation schools. They identified the changes in behaviors and outcomes they expected to see as teachers conducted brief check-ins with their assigned students; teachers were to log these interactions on an RCI form. At first, design team members expected that, as a result, every ninth-grade student would be reached by a PASL teacher and known to her by name.3 Four teachers in each school used the RCI forms for a three-week period beginning in early April. Studying the forms and talking with the implementing teachers, SIDT members concluded that the RCIs allowed teachers to reach out to every student and increased student/teacher interaction. Talking with students, they further found that students were excited that someone cared about them and that a teacher monitored them on a regular basis. Doing the test also revealed a need for a revision to the innovation; though the RCI form allowed teachers to mark a box to indicate a need for a Problem Solving Meeting between a student and his or her Educator Team, there was no process in place for communicating that need to members of the Educator Team. The decision was made to add a referral step to the innovation and test the RCI process again with a larger number of teachers.


In Fort Worth, PDSA began with a structured approach in which all schools tested the same set of two lessons (one on growth mindset, one on problem solving) developed by the facilitators and used data collection instruments developed by the researchers. After testing and revising those introductory lessons based on the data they collected, attention turned to embedded extension activities that would allow teachers to reinforce growth mindset and problem-solving practices in their daily practice. This time, each school team was given flexibility to decide what practice they would test using PDSA. One school tested the revised growth mindset and problem-solving lessons with a larger group of teachers, as well as a peer editing process. Another school tested growth mindset-oriented praise language. The third school tested a behavioral intervention form intended to help students shift their mindset and think through a problem-solving scenario in the behavior discipline context. At the next meeting of the DIDT, the results were shared with the larger group; the two schools that had not tested the behavioral intervention form acted on the information provided to them and decided to test out the form in their own schools during the next PDSA cycle. Through successive cycles, each school designed its own tests, sometimes building on the prior test and sometimes choosing a different practice to test. For example, after introducing a goal-setting and monitoring process at the end of each three-week grading cycle, one school collected data on how teachers were implementing the practice with their students. Through a PDSA cycle, the data indicated that teachers did not always understand what the process was trying to accomplish and thus struggled to support students who were unsure of what to do. As a result, the school developed additional guidance for teachers, including guidance targeted for different student populations.


Because Plan-Do-Study-Act cycles were conducted simultaneously in the three innovation schools in each district, the SIDTs made recommendations for revisions to the innovation based on their own tests. Though these revisions were shared among sites during DIDT meetings, not all revisions were incorporated into the innovations in all three schools, thus allowing for small variations (adaptations) among them.


We turn now to the third principle of our continuous improvement model, authentic researcher–practitioner partnerships, and how they were manifest in the implementation phase. During the implementation phase, the nexus of the researcher–practitioner partnership shifted somewhat away from the DIDT and toward the SIDTs. During the innovation design and development phase, decisions were made primarily by members of the DIDT. As the Center transitioned from the work of creating an innovation prototype to enacting and testing the prototype in schools, additional members of individual school faculties and staffs became directly involved in the work of continuous improvement. By involving SIDT members in this work, NCSU not only worked to build ownership and buy-in of the innovation itself, but also to build practitioners' capacity for future improvement work.


Educators in each of the schools engaged in regular PDSA cycles to adapt innovations to their individual contexts. This helped practitioners develop habits of mind and establish formal (e.g., PDSA) and informal organizational routines (e.g., a culture of feedback) that will enable them to address future problems of practice. In short, the active involvement of teachers and school leaders in the implementation phase helped make improvement part of the official work of the school (Peurach & Glazer, 2012; Resnick & Spillane, 2006).  


While this shift occurred, researchers were far from absent. They provided training on PDSA for both DIDT and SIDT members, and worked individually with each SIDT to plan their testing cycles. Researchers with NCSU also worked with SIDTs to design instruments by which data would be collected during the PDSA cycle and that would help them answer whether a change was in fact an improvement. Working in sustained partnership with practitioners encouraged researchers to stay focused on the behaviors and outcomes that practitioners expected to change and felt they could impact as the result of enacting the designed innovation. In this way, the continuous improvement model places a high value on practitioner knowledge and expertise.


PHASE IV: SCALING UP


Having tested the change with a small set of implementers simultaneously in a limited number of schools, NCSU moves into the fourth and final phase of its continuous improvement process in an effort to bring the innovation to scale. By scaling up, we mean not only that the components of the innovation are brought into new school settings (what we term scale out), but also that they deeply penetrate the original innovation schools and are able to be sustained (Coburn, 2003).


Here, as in Phase III, rapid-cycle testing is central. With its recurring cycles that bring the innovation to ever more users, PDSA helps not only to implement the innovations but also to take them to scale. After testing the change on a small scale, with a few teachers or classrooms, PDSA cycles repeat (see Figure 2). As members of the improvement teams learn from each test and refine the change, they prepare to implement the change on a broader scale, for example, with an entire grade level. After successful implementation within a unit, the team can continue to use PDSA to spread the change to other parts of the school or to other schools entirely, effectively using it to adapt the change to new contexts, resulting in system change.
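A minimal sketch of this expanding schedule, again illustrative rather than a description of the Center's actual tooling, might track the unit of implementation and widen it only after a cycle ends in a decision to scale; the stage names and decision strings below are assumptions.

```python
# Illustrative only: how repeated PDSA cycles might widen the unit of implementation.
# Stage names and the "scale only after a successful cycle" rule are assumptions.

stages = ["a few teachers", "one grade level", "whole school", "scale-out schools"]

def run_cycles(decisions):
    """Advance to a wider stage only when a cycle ends in a decision to scale."""
    stage = 0
    for i, decision in enumerate(decisions, start=1):
        print(f"cycle {i} ({stages[stage]}): decision = {decision}")
        if decision == "scale" and stage < len(stages) - 1:
            stage += 1                     # e.g., from a few teachers to a grade level
        elif decision == "abandon":
            break                          # rethink the change before testing further
        # "revise" repeats testing at the same stage with an adapted change
    return stages[stage]

# Hypothetical sequence of cycle outcomes for one innovation component.
print(run_cycles(["revise", "scale", "scale", "revise", "scale"]))
```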


Figure 2. Repeated use of PDSA cycle for system change


Note. PDSA = Plan-Do-Study-Act. Adapted from Deming, 2000; Langley et al., 2009.


In Broward County, rapid-cycle testing continued in the three original innovation schools with both the RCIs and other components of the innovation (e.g., goal-setting lessons), but the use of rapid-cycle testing for scale out began with at-large DIDT members themselves. At-large members consisted of assistant principals and teachers from high schools in the district who were members of the DIDT but not employed at any of the three original innovation schools. Several of these members spoke of using some of the PASL innovation components in their own practice, and were enthusiastic about bringing the innovation to their own schools. These site nominations were discussed, along with other possibilities, with members of the district administrator team and principal supervisors, in particular. Conversations about which schools to target for scale out were complemented with discussions with district leaders about the results that SIDT members in the original schools attributed to the innovation and process of improvement. Members of the faculties and staffs from three selected scale-out schools in the district were identified by their principals and asked to sit on newly created SIDTs. These SIDT members attended a two-day Summer Institute during which the prototype was introduced and modeled by the DIDT and members of the SIDTs in the original innovation schools. Plans were made for PDSA cycles to be carried out in each of the scale-out schools beginning in the fall.


In Fort Worth, efforts to scale in to a larger number of units within the three innovation schools were, to some extent, limited by the nature of the innovation itself; DIDT members decided to expose the entire student body in each high school to the mindset lessons. As such, the original schools were already implementing the practices school-wide. Scale in, then, became a process of deepening implementation among teachers and other school staff. For example, discussion shifted to how to reinforce the material presented in the lessons for teachers and students without making it seem repetitive. There were also plans for specifying how materials would be adapted for particular subgroups of students, such as advanced students who could benefit from more challenging reading materials as part of the growth mindset lessons. As in Broward, decisions about scale-out proceeded by first ensuring that district leaders understood the results that were being achieved and the key components of the innovation that members of the improvement communities considered essential to those results. Recognizing that NCSU's continuous improvement model is based on adaptation to school context and the development of local capacity, the Center organized a series of discussions for principals of potential scale-out schools to learn about the innovation and the role of SIDTs in adapting the innovation for their school. Similar to Broward, there was already at-large representation from additional high schools on the DIDT, and these schools were considered candidates for scale-out sites, given the deep knowledge of some of their staff members.


Like rapid-cycle testing, authentic researcher–practitioner partnerships continued to be emblematic of the work in Phase IV. Alongside the PDSA cycles, researchers administered surveys of teachers and students and conducted field visits to the innovation schools. They produced easily digestible school reports about high-leverage findings from the surveys and fieldwork, and presented them to the SIDTs at the innovation and scale-out schools alike by phone and during the Summer Institute. In roles that emphasized research translation and use for practice, researchers also prepared and ran workshops in which school and district members of the teams worked together to identify what they saw as actionable patterns in the data and used them in their planning for further innovation refinement and scale.


The authentic researcher–practitioner partnerships provided other opportunities as well. By training members of the SIDTs in the scale-out schools, the practitioner partners who sat on the DIDTs in each district continued to build their capacity to support the innovation and, as such, aided the process of transferring responsibility for and ownership over the innovation to the district.


For transfer, the partnership principle is essential. But equally essential is its potential for developing capacity for sustaining change (improvements) in systems beyond a single innovation (e.g., PASL). In fact, an organization that engages in a one-time improvement project would not be said to be engaging in continuous improvement (Park, Hironaka, Carver, & Nordstrum, 2013). Continuous improvement requires system change that relies not on one magic program or silver bullet, but instead on: (1) a set of organizational routines that help innovations travel through a system, (2) habits of mind that conceive of teachers and other practitioners as co-creators in the design process, (3) regular PDSA cycles to adapt innovations to their implementation contexts, (4) improvement teams within school districts organized around persistent problems of practice, and (5) participation in networked improvement communities with others engaged in improvement initiatives (e.g., Bryk & Gomez, 2008; Cohen-Vogel, 2014; Fishman, Penuel, Allen, & Cheng, 2013; Peurach & Glazer, 2012; Resnick & Spillane, 2006). In short, the NCSU process of improvement relies on authentic partnerships to help to bring to scale not only the programs that work to enhance student performance but also a self-sustaining learning process within an educational system.


DISCUSSION


NCSU's continuous improvement model addresses the challenges of bringing effective educational interventions to scale by drawing on three core principles. The first principle is that an intervention must reflect core elements of programs and practices found in schools in the district in which the improvement project occurs. The Center acknowledges that interventions cannot be dropped in from other contexts, because knowledge and outcomes are not in the program but in the people and systems who will implement it. Second, NCSU relies on rapid, repeated cycles of testing and revision to gradually improve an innovation. This process of "empirical tinkering" (Morris & Hiebert, 2011) generates knowledge about what works where, when, and for whom (Means & Penuel, 2005). Employing authentic researcher–practitioner partnerships is the third principle of NCSU's continuous improvement model. Rather than assuming that teachers should wait for the district or state to mandate the implementation of a proven program, NCSU's approach recognizes the agency of practitioners to engage in "disciplined inquiry" (Lewis, 2015, p. 59) in their practice settings to make improvements to their classrooms, schools, and district. These partnerships also encourage researchers to conduct research on issues that matter to educators (Means & Harris, 2013).

These three principles are manifest in each of the four phases of NCSU's model for continuous improvement. In the first phase, the research phase, researchers at NCSU conduct surveys, interviews, and observations to identify effective programs, procedures, and practices that distinguish higher value-added schools from their lower value-added counterparts serving similar student populations. The decision to build from effective practices found in the district in which the improvement work will take place is highly aligned with NCSU's first principle. Additionally, the in-depth case study work conducted during this phase provides researchers with deep knowledge about local schools and the district, knowledge that establishes researchers' credibility as they work to develop relationships and partnerships with practitioners for the later phases of the work.

During the innovation design and development phase, members of district innovation design teams, comprised of researchers and practitioners alike, develop a prototype to be tested in the innovation schools. These prototypes continue to build on the core elements embedded in local practice. The fact that the design work occurs within the DIDT highlights NCSU's commitment to authentic practitioner involvement. This phase also provides opportunity for some informal rapid-cycle testing, as members of the DIDT share ideas with and elicit feedback from their colleagues back in the schools.

The implementation phase of NCSU's process reflects the second and third core principles. To encourage gradual implementation based on local adaptation, NCSU uses rapid-cycle testing in the form of PDSA cycles. In both Broward and Fort Worth, the PDSA cycles are conducted simultaneously in the three innovation schools, giving members of the DIDTs the opportunity to share problems, propose solutions, and learn from one another while allowing members of the school-level teams (SIDTs) to make adaptations based on their local tests of change. By involving SIDT members in the PDSA process, the NCSU model builds practitioners' capacity for future improvement efforts.

In the fourth and final phase of the process, NCSU moves to bring the tested innovations to scale. Rapid-cycle testing becomes a tool not just for implementation, but also for scaling up the innovation. This occurs by taking advantage of knowledge already generated, putting it to use in new school settings, and adapting and extending this knowledge in response to the demands and constraints of implementation on a broader scale. NCSU relies on its partnership with practitioners as it prepares to test and adapt the innovation in additional schools in the districts.

A number of school districts and educational organizations (e.g., the Carnegie Foundation for the Advancement of Teaching, the Strategic Education Research Partnership) are taking up continuous improvement. Educational researchers too have begun to acknowledge that improvement science is well-suited for addressing major educational problems (Cobb, Jackson, Smith, Sorum, & Henrick, 2013; Lewis, 2015). Nevertheless, the approach is still relatively new in the field. Indeed, education has been much slower to adopt a science of improvement than manufacturing, business, and health care, all of which can point to deep and sustained reform as a result of continuous improvement efforts (e.g., Benedetti, Flock, Pedersen, & Ahern, 2004; Power et al., 2014).

Given how new continuous improvement is in the field of education, the NCSU model provides a valuable illustration of this work. Specifically, it highlights the potential costs and benefits of shifting away from controlled studies to a research and design enterprise focused on improvement. Although a controlled study of an intervention can provide critical evidence about what works, practitioners may fail to respond, either because they are unaware of the research or because the intervention addresses a problem they do not face or is irrelevant to their own challenges and unique contexts (Coburn & Turner, 2011; Cohen-Vogel et al., 2015; Fishman et al., 2013). Teachers appear more likely to enact innovations built upon practices that have been shown to work in their districts and that they are actively involved in developing, testing, and refining. Moreover, by starting small and allowing for innovation prototypes to fail early, the rapid-cycle testing at the core of NCSU's model actually reduces both the costs and risks associated with full-scale trials of an intervention.

This is not to suggest that costs of continuous improvement are low. They are not. But investments in models like the one described here have pay-offs beyond the benefits of a single intervention. That is because, unlike traditional research studies, continuous improvement brings a systems approach to education, leveraging local expertise that already exists within the system and building school and district capacity to solve problems of practice well into the future (Park et al., 2013). Further research on the costs and benefits of investing in continuous improvement processes in education is warranted. As NCSU continues its work in Broward and Fort Worth and as continuous improvement projects emerge in various other educational settings, we expect to learn more about the promises and challenges of using continuous improvement as a tool for implementing and bringing to scale programs that improve outcomes for all students.

Acknowledgment

This research was conducted with funding from the Institute of Education Sciences (R305C10023). The opinions expressed in this report are those of the authors and do not necessarily represent the views of the sponsor.

Notes

1. The first wave of implementation research suggested that local governments had neither the will nor the capacity to implement programs initiated by higher level governments (Murphy, 1971; Pressman & Wildavsky, 1973). However, the second wave of research, focusing on implementation after the initial start-up years, challenged the notion that programming initiated by higher levels of government would never be implemented (Odden, 1991). Instead, longitudinal studies of state and federal categorical aid programs in education repeatedly found that time, coupled with mutual adaptation, led to support for new program initiatives, the local capacity to run them, and, ultimately, the provision of services to targeted student populations (Berman & McLaughlin, 1975; Birman, Orland, Jung, Anson, & Garcia, 1987; Jung & Kirst, 1986; Sarason, 1982).

2. The estimated fixed effect for each high school was put in rank order and classified by deciles of value-added for the following groups: English language learners, low-income students, and minority students (see Sass, 2012). From the list, we eliminated alternative, charter, and magnet high schools and then selected two higher and two lower value-added schools for case study. With a strategy that balanced performance with demographic characteristics, we selected the two lowest-ranked schools on math and reading gains with low-income, minority, and ELL students. We then selected the two highest-ranked schools that had comparable student demographics.
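A compressed sketch of this selection logic, simplified to a single value-added score per school and one demographic field (the data, field names, and comparability threshold below are hypothetical), might look like the following.

```python
# Illustrative sketch of the school-selection logic described in note 2.
# Field names, the single value-added score, and the filters are hypothetical.

schools = [
    {"name": "School A", "value_added": 0.12, "type": "traditional", "pct_low_income": 0.81},
    {"name": "School B", "value_added": -0.09, "type": "traditional", "pct_low_income": 0.78},
    {"name": "School C", "value_added": 0.15, "type": "magnet", "pct_low_income": 0.35},
    {"name": "School D", "value_added": -0.14, "type": "traditional", "pct_low_income": 0.80},
    {"name": "School E", "value_added": 0.10, "type": "traditional", "pct_low_income": 0.79},
    {"name": "School F", "value_added": 0.02, "type": "charter", "pct_low_income": 0.60},
]

# Drop alternative, charter, and magnet schools, then rank by the value-added estimate.
eligible = [s for s in schools if s["type"] == "traditional"]
ranked = sorted(eligible, key=lambda s: s["value_added"])

lowest_two = ranked[:2]    # two lowest value-added schools
# Keep only higher value-added schools with demographics comparable to the lowest two.
target = sum(s["pct_low_income"] for s in lowest_two) / len(lowest_two)
comparable = [s for s in ranked[2:] if abs(s["pct_low_income"] - target) <= 0.05]
highest_two = sorted(comparable, key=lambda s: s["value_added"], reverse=True)[:2]

print([s["name"] for s in lowest_two])    # ['School D', 'School B']
print([s["name"] for s in highest_two])   # ['School A', 'School E']
```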

3. In later cycles, design team members expected the innovation to improve students' sense of connection to a teacher and the school as a whole and, as a result, reduce the proportion of ninth-grade students who were on track to receive a failing grade in any course.

References

Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13, 1–14.

Berends, M., Bodilly, S. J., & Kirby, S. N. (2002). Facing the challenges of whole-school reform: New American Schools after a decade. Santa Monica, CA: RAND.

Berman, P., & McLaughlin, M. W. (1975). Federal programs supporting educational change (Vols. 1–3). Santa Monica, CA: RAND.

Birman, B., Orland, M., Jung, R., Anson, R., & Garcia, G. (1987). The current operations of Chapter 1 programs: Final report from the National Assessment of Chapter 1 (Report No. OR-87-504). Washington, DC: U.S. Department of Education. Retrieved from http://files.eric.ed.gov/fulltext/ED289935.pdf

Bodilly, S., Glennan, T., Galegher, J., & Kerr, I. (2004). Expanding the reach of education reforms: Perspectives from leaders in the scale-up of educational interventions. Santa Monica, CA: RAND.

Bryk, A. S., & Gomez, L. (2008). Ruminations on reinventing an R&D capacity. In F. Hess (Ed.), The future of educational entrepreneurship: Possibilities of school reform (pp. 181–206). Cambridge, MA: Harvard Education Press.

Bryk, A. S., Gomez, L., & Grunow, A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching.

Bryk, A. S., Sebring, P., Allensworth, E., Luppescu, S., & Easton, J. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: University of Chicago Press.

Cannata, M., Taylor Haynes, K., & Smith, T. M. (2014). Reaching for rigor: Identifying practices of effective high schools. Nashville, TN: National Center on Scaling Up Effective Schools.

Coburn, C. (2003). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32(6), 3–12.

Cohen-Vogel, L. (2011). Staffing to the test: Are today's school personnel practices evidence based? Educational Evaluation and Policy Analysis, 33(4), 483–505.

Cohen-Vogel, L. (2014, April). Presidential invited session: Climbing out of the ivory tower: New forms of research–practice partnerships. Presentation at the annual meeting of the American Educational Research Association, Philadelphia, PA.

Cohen-Vogel, L., & Harrison, C. (2013). Leading with data: Evidence from the National Center on Scaling Up Effective Schools. Leadership and Policy in Schools, 12(2), 122–145.

Cohen-Vogel, L., Tichnor-Wagner, A., Allen, D., Harrison, C., Kainz, K., Socol, A. R., & Wang, Q. (2015). Implementing educational innovations at scale: Transforming researchers into continuous improvement scientists. Educational Policy, 29(1), 257–277.

Datnow, A., Hubbard, L., & Mehan, H. (2002). Extending educational reform: From one school to many. New York, NY: RoutledgeFalmer.

Deming, W. E. (2000). The new economics: For industry, government, education (2nd ed.). Cambridge, MA: MIT Press.

Elmore, R. F. (1996). Getting to scale with good educational practice. Harvard Educational Review, 66(1), 1–27.

Engelbart, D. C. (1992). Toward high-performance organizations: A strategic role for groupware. San Jose, CA: Morgan Kaufmann.

Fishman, B., Penuel, W. R., Allen, A., & Cheng, B. H. (Eds.). (2013). Design-based implementation research: Theories, methods, and exemplars. National Society for the Study of Education Yearbook (Vol. 112, Issue 2). New York, NY: Teachers College Record.

Glennan, T. K., Bodilly, S. J., Galegher, J., & Kerr, K. (2004). Expanding the reach of education reforms: Collected essays by leaders in the scale-up of educational interventions. Santa Monica, CA: RAND.

Goldring, E. B., Porter, A. C., Murphy, J., Elliott, S., & Cravens, X. (2009). Assessing learning-centered leadership: Connections to research, professional standards, and current practices. Leadership and Policy in Schools, 8, 1–36.

Harrison, C., Wachen, J., Brown, S., & Cohen-Vogel, L. (2015, October). A view from within: Lessons learned from doing continuous improvement research. Paper presented at the conference of the National Center on Scaling Up Effective Schools, Nashville, TN.

Honig, M. (2006). New directions in education policy implementation: Confronting complexity. Albany, NY: State University of New York Press.

Institute for Healthcare Improvement. (2014). Model for improvement. Cambridge, MA. Retrieved from http://www.ihi.org/resources/Pages/HowtoImprove.

Jung, R., & Kirst, M. (1986). Beyond mutual adaptation, into the bully pulpit: Recent research on the federal role in education. Educational Administration Quarterly, 22, 80–109.

Langley, G. L., Nolan, K. M., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance (2nd ed.). San Francisco, CA: Jossey-Bass.

Lewis, C. (2015). What is improvement science? Do we need it in education? Educational Researcher, 44(1), 54–61.

McDonald, J. P., Klein, E. J., & Riordan, M. (2009). Going to scale with new school designs: Reinventing high school. New York, NY: Teachers College Press.

Means, B., & Penuel, W. R. (2005). Research to support scaling up technology-based innovations. In C. Dede, J. Honan, & L. Peters (Eds.), Scaling up success: Lessons from technology-based educational improvement (pp. 176–197). New York, NY: Jossey-Bass.

Means, B., & Harris, C. (2013). Towards an evidence framework for design-based implementation research. In B. J. Fishman, W. R. Penuel, A. R. Allen, & B. H. Cheng (Eds.), Design-based implementation research: Theories, methods, and exemplars. National Society for the Study of Education Yearbook (Vol. 112, pp. 320–349). New York, NY: Teachers College Press.

MidKiff, B., & Cohen-Vogel, L. (2015). Understanding local instructional responses to federal accountability mandates: A typology of extended learning time. Peabody Journal of Education, 90(1), 9–26.

Morris, A. K., & Hiebert, J. (2011). Creating shared instructional products: An alternative approach to improving teaching. Educational Researcher, 40, 5–14.


Murphy, J. T. (1971). Title I of ESEA: The politics of implementing federal education reform. Harvard
Educational Reform
, 41(1), 3563.


Odden, A. (Ed.). (1991). Education policy implementation. Albany, NY: State University of New York Press.


Osborne-Lampkin, L., & Cohen-Vogel, L. (2014). Spreading the wealth:  How principals use

performance data to populate classrooms. Leadership and Policy in Schools, 13, 188208.


Park, S., Hironaka, S., Carver, P., & Nordstrum, L. (2013). Continuous improvement in education. Palo

Alto, CA: Carnegie Foundation for the Advancement of Teaching.


Penuel, W. R., Fishman, B. J., Cheng, B. H., & Sabelli, N. (2011). Organizing research and development

at the intersection of learning, implementation, and design. Educational Researcher40(7), 331337.


Peurach, D. J., & Glazer, J. L. (2012). Reconsidering replication: New perspectives on large scale school

improvement. Journal of Educational Change, 13, 155190.


Pressman, J. L. & Wildavsky, A. (1973). Implementation. Berkeley, CA: University of California Press.


Resnick, L., & Spillane, J. (2006). From individual learning to organizational designs for learning. In L.

Verschafeel, F. Dochy, M. Boekaerts, & S. Vosniadou (Eds.), Instructional psychology: Past, present, and

future trends (pp. 259279). Oxford, England: Pergamon.


Rutledge, S., Cohen-Vogel, L., Osborne-Lampkin, L., & Roberts, R. (2015). Understanding effective high

schools: Evidence for personalization for academic and social emotional learning. American Educational

Research Journal, 52(6), 10601082.


Sarason, S. B. (1982). The culture of the school and the problem of change (2nd ed.). Boston, MA: Allyn

& Bacon.


Sass, T. (2012). Selecting high- and low-performing high schools in Broward County, Florida for analysis

and treatment (Technical Report, National Center on Scaling Up Effective Schools). Available from

http://scalingupcenter.org


Stringfield, S., & Datnow, A. (1998). Scaling up school restructuring designs in urban schools. Education

and Urban Society30(3), 269276.


Tichnor-Wagner, A., Wachen, J., & Cohen-Vogel, L. (2015). Continuous improvement in education:

Understanding plan-do-study-act cycles in practice. Paper presented at the annual meeting of the

Association for Education Finance and Policy, Washington, DC.






Cite This Article as: Teachers College Record, Volume 118, Number 13, 2016, p. 1-26, https://www.tcrecord.org, ID Number: 20656


About the Author
  • Lora Cohen-Vogel
    University of North Carolina at Chapel Hill
    LORA COHEN-VOGEL is Robena and Walter E. Hussman, Jr. Distinguished Professor of Policy and Education Reform in the School of Education at the University of North Carolina at Chapel Hill and, since 2010, the Co-Principal Investigator of the National Center on Scaling Up Effective Schools. Cohen-Vogel’s research focuses on teacher quality and the politics of education. She also works on continuous improvement research and other approaches for developing and bringing to scale processes for school system improvement.
  • Marisa Cannata
    Vanderbilt University
    MARISA CANNATA is Research Assistant Professor in the Department of Leadership, Policy, and Organizations at Peabody College of Education and Human Development, Vanderbilt University, and Director of the National Center on Scaling Up Effective Schools. Her research interests include continuous improvement research, high school reform, charter schools, and teacher hiring and career decisions. Cannata is co-editor of School Choice and School Improvement (Harvard Education Press).
  • Stacey Rutledge
    Florida State University
    STACEY A. RUTLEDGE is Associate Professor in the Department of Educational Leadership and Policy Studies at Florida State University. Her research explores policies aimed at improving teaching and learning and how these shape the work of district and school administrators and teachers, and, ultimately, students’ learning opportunities. For the last five years, she has been a project investigator in the National Center on Scaling Up Effective Schools. She is co-editor of The Infrastructure of Accountability: Data-use and the Transformation of American Education (Harvard Education Press).
  • Allison Socol
    University of North Carolina at Chapel Hill
    ALLISON ROSE SOCOL is a doctoral student in Policy, Leadership, and School Improvement at the University of North Carolina at Chapel Hill. Her research interests include policies and practices that improve outcomes for students of color and students from low-income families.
 