What Does it Take to Break the Mold? Rhetoric and Reality in New American Schools


by Thomas Hatch - 2000

In 1991, the New American Schools Development Corporation (NASDC) was established to create “break the mold” models for a “new generation of American schools.” NASDC sought to achieve this goal by funding eleven different design teams, including the ATLAS Communities Project - a collaboration of the Coalition of Essential Schools, the School Development Program, Harvard Project Zero, and the Education Development Center. While schools associated with ATLAS and the other design teams showed some signs of progress in the first few years of their work, there has been little evidence that these positive outcomes have been achieved by “breaking the mold.” In fact, the NASDC strategy and the ATLAS collaboration may have exacerbated basic conditions that make it difficult for schools and organizations to explore new ideas and develop innovative practices. By drawing on studies of innovation in business organizations, this paper argues that rather than trying to create “break the mold” school designs, reformers should aim to create the conditions that allow for a better balance between efforts to explore new ideas that may be successful in the future and the further expansion of practices that have been successful in the past.


In the wake of growing concerns over the "rising tide of mediocrity" decried in A Nation at Risk and mounting frustrations over the lack of effectiveness of previous reform efforts, the Bush Administration established the New American Schools Development Corporation (NASDC) in 1991. A key part of the America 2000 initiative, NASDC sought to jumpstart learning in America by raising private funds to underwrite the design of a "new generation of American schools" (NASDC, 1991, p. 7). NASDC endeavored to support these new schools by launching a design competition in which it invited educators, parents, researchers, community leaders, business people, and anyone else with an interest in education to submit proposals describing how they would develop schools that would enable their students to reach world class standards. Key to the entire initiative was the idea that this competition should lead to substantial innovations in schooling. As NASDC put it in their request for proposals: ". . . The design teams should understand that NASDC's intentions go far beyond the expectations of prior education reform efforts. . . . This is a request for break-the-mold designs, not for fixing up the design already in place" (NASDC, 1991, pp. 20-21).


In addition, NASDC argued that isolated innovations in a few aspects of a school's functioning were not sufficient. They pointed out that many reform efforts were already at work improving selected aspects of existing schools, but they wanted to promote comprehensive designs for very different schools or sets of schools. As they put it in the RFP: "This is an effort to integrate all elements of a school's life. Proposals to explore teaching and learning in limited settings will not satisfy the requirements of this RFP. . . . The designs must be for whole schools, not for a single grade or program within a school; they should integrate all facets of the school's life. . ." (NASDC, 1991, p. 20).


In response to the call by NASDC, over 600 groups submitted proposals that described how they would break the mold (Glennan, 1998). Those groups included a partnership between four of the best-known organizations involved in education and reform at the time: the Coalition of Essential Schools, chaired by Theodore Sizer; Education Development Center, led by Janet Whitla; the Development Group of Harvard Project Zero, directed by Howard Gardner; and the School Development Program, headed by James Comer. Each of these organizations had developed specific approaches that they believed had already shown considerable promise in changing the nature of some schools. But they felt that large-scale innovations in many schools were more likely to be achieved if they pooled their resources and expertise in a joint project. They named the endeavor the ATLAS Communities Project (with ATLAS standing for Authentic Teaching and Learning for All Students). Envisioning more than a compilation of existing ideas and resources, the ATLAS partners argued in their proposal to NASDC that "what legitimizes and energizes our collaboration is the prospect of creating a new approach to education that is greater than, and different from, anything that we can accomplish alone" (ATLAS, 1992, p. 4).


In 1992, NASDC gave one year of funding to ATLAS along with ten other design teams to create their break-the-mold designs. ATLAS began by establishing a staff composed of members of each of the partner organizations and by selecting four sites with which they would work - Gorham, Maine; Lancaster, Pennsylvania; Norfolk, Virginia; and Prince George's County, Maryland. In each site, they worked with a pathway of schools consisting of at least one elementary, middle, and high school in a feeder pattern.1 At the end of the first year, NASDC provided two years of additional funding for ATLAS and eight of the other original design teams to continue to develop and implement their design. While Lancaster chose not to remain with ATLAS, the ATLAS work continued and expanded in the other three sites. At the end of the third year, NASDC granted two more years of funding to ATLAS and six other design teams to disseminate their designs to other sites. At that time, ATLAS started to work in several schools in Memphis, Tennessee, and began to discuss affiliations with schools in several other districts.2


The work during these first four years was not easy for ATLAS or for many of the other design teams (Bodilly, 1996; Hatch, 1998; Mirel, 1994; White, Muncey, & Fanning, 1999). But even though many of the designs did not develop as quickly as NASDC hoped (and some design teams did not receive continued funding as a result), there were also a number of signs that schools associated with each one of the designs - including ATLAS - were making progress. In particular, site members in Gorham reduced teacher-student ratios at the high school through block scheduling and established a comprehensive system of assessment that includes the use of portfolios, parent-teacher-student conferences, and exhibitions throughout the pathway. Even in Prince George's County, where the work was generally viewed as moving more slowly, some schools redesigned their use of specialists so that grade-level teachers could have common planning time and a number of teachers developed classroom projects that built on the ideas of the ATLAS partners (http://www.edc.org/FSC/ATLAS/).


In addition, some ATLAS schools reported some improvements on measures of student performance such as test scores. In 1995, fourth graders at the ATLAS elementary school in Gorham, Maine, received the highest scores in the district's history on the Maine Educational Assessment; the percentage of middle school students in the ATLAS schools in Prince George's County scoring "satisfactory" or "excellent" in reading on the Maryland School Performance Assessment Program rose from 9 to 28 percent from 1992 to 1995; and, at the ATLAS High School in Norfolk, the number of students scoring above 1000 (combined) on the Scholastic Assessment Test has grown by over 300 percent since the school began to work with the design in 1992 (http://www.edc.org/FSC/ATLAS/; Fashola & Slavin, 1998; Squires & Kranyik, 1997).


Ironically, however, there is little evidence that directly links the changes these schools have made - or the extent to which they have broken the mold - to any positive outcomes. In fact, the NASDC-sponsored evaluations of the design teams suggested that teams like ATLAS that made more ambitious attempts to break the mold made slower progress than others over the first four years of the project. Teams that tried to create a new organization, that sought to work with larger numbers of schools - in high schools as well as elementary schools - and that worked on many aspects of schooling, not just the core academic subjects, tended to be less successful. Instead, those teams that had an already existing and established team (before NASDC funding) and that focused most directly on core academic subjects like reading and math (and often worked in a narrow range of grade levels) tended to be those that were able to support improvements in schools most quickly.


When improvements were made in the ATLAS schools and those associated with the other design teams, those improvements were likely to build on - rather than to depart from - already established practices. As a report on the first year of the dissemination of the NASDC designs in one district pointed out: "Choosing restructuring models matched to the schools' current beliefs and ongoing activities expedited startup and progress" (Smith et al., 1997, p. 127). A report on the first four years of the ATLAS project similarly suggested that the ATLAS sites often focused on initiatives that were viewed as compatible with the traditional modes of operation in their local schools (White, Muncey, & Fanning, 1999). Even Gorham was viewed by many as having built on previous changes in school and classroom practices rather than as having broken the mold.


Similarly, the ATLAS collaboration helped the partner organizations to expand work already underway, not to develop new ideas. For example, some publications of the Coalition of Essential Schools (CES) following the development of ATLAS have included specific discussions of the work of Project Zero on Teaching for Understanding and on student assessment (Cushman, 1996; Sizer, 1996) - two approaches that fit neatly into longstanding concerns at CES with pursuing depth over breadth in curriculum and in using student-generated performances as the basis for evaluation. But despite such synergies, none of the partner organizations has substantially changed its approach as a result of the ATLAS collaboration. In fact, by the end of the fourth year of funding from NASDC, rather than creating a new whole greater than the sum of its parts, ATLAS had essentially become a fifth organization that expanded on some of the ideas and modes of operation of the four original partners (McDonald et al., 1999). In short, the improvements associated with ATLAS and the other design teams appear to have resulted from stretching the mold in a few places rather than breaking it. Likewise, while NASDC and many of the design teams continue working to improve and scale up their designs to schools around the country, for the most part the rhetoric around breaking the mold seems to have been abandoned (Hatch, 1999).


What happened? Why was it so difficult for ATLAS? Why didn't the ATLAS work result in more substantial and widespread innovations in schooling or even innovations in the approaches and operation of the ATLAS partners themselves? In this paper, I argue that despite the stated desire to break the mold, the NASDC strategy and the ATLAS collaboration exacerbated basic conditions that make it difficult for schools and organizations to explore new ideas and develop innovative practices. In order to do so, I build on analyses of the exploitation-exploration dilemma in business organizations to show that many of the factors that constrain the development of innovations in businesses are at work in schools and school reform efforts.3 In addition, I draw on the reports of a team of ethnographers documenting the ATLAS collaboration and the reports and records produced by the members of ATLAS to show how the demands of NASDC and the challenges of the ATLAS collaboration during the first four years may have made it even harder for the members of ATLAS to explore and develop the kinds of break-the-mold innovations that were envisioned.4 In particular, these studies illustrate the pervasive conditions that made it difficult to make significant and widespread improvements in conventional practices in sites like Norfolk and Prince George's County.


In the end, these analyses raise fundamental questions about the extent to which breaking the mold is a productive or appropriate goal for school improvement. Developing innovative practices involves high risks and uncertain outcomes - costs that, in some cases at least, may be too high for schools, their communities, and their students to bear. At the same time, few are satisfied with the status quo. In short, we have to find a middle ground between breaking the mold and simply tinkering with the current system. Therefore, in conclusion, I draw from experiences in Gorham - the ATLAS site generally considered to have been the most innovative - in order to identify some ways schools and districts may be able to find a better balance between the further expansion of practices judged to have been successful in the past and the exploration of practices that could be successful in the future.

DILEMMAS OF EXPLORATION AND EXPLOITATION IN SCHOOLS


Experts in organizations and organizational change have shown that creating an innovative organization involves managing a fundamental dilemma: balancing the competing goals of developing new knowledge (exploring) and taking advantage of established practices (exploiting) (Levinthal & March, 1993; March, 1991).5 Put simply, if organizations focus their resources largely or exclusively on maintaining those practices that they have employed in the past, then they run the risk that they may not be able to keep up with their competitors or adapt to future conditions. On the other hand, if a company departs from its established practices and invests too heavily in exploration - in developing unproven new ideas - then it runs the risk of going bankrupt before the new ideas can become practical and profitable. The challenge is to find the right balance between exploiting current practices and exploring new ideas that may lead to success in the future. A failure to find such a balance and an overemphasis on establishing or maintaining success in the short term can contribute to the demise of even successful organizations and to the stagnation of entire industries.
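To make the tradeoff concrete, the following is a minimal, purely illustrative simulation in Python; it is not drawn from the article or from March's own models, and every payoff value and probability in it is a hypothetical stand-in. An organization can keep exploiting an established practice with a steady payoff, or it can spend its first few periods exploring an unproven idea that is usually worse but occasionally much better, and then commit to whichever option looked stronger. Over a short evaluation window the exploratory strategy tends to lose to pure exploitation; over a long window it tends to come out ahead.

import random

def run_trial(horizon, explore_first_n, rng):
    """Average payoff over `horizon` periods for an organization that spends the
    first `explore_first_n` periods trying an unproven idea, then commits to
    whichever option looked better. All payoff values are hypothetical."""
    known_payoff = 1.0                             # established practice: steady return
    true_new_payoff = rng.choice([0.2, 0.2, 1.8])  # unproven idea: usually worse, sometimes much better
    total, samples = 0.0, []
    for t in range(horizon):
        if t < explore_first_n:
            payoff = rng.gauss(true_new_payoff, 0.5)  # noisy returns while exploring
            samples.append(payoff)
        else:
            # Commit to the new idea only if the exploration phase looked promising.
            estimate = sum(samples) / len(samples) if samples else 0.0
            payoff = rng.gauss(true_new_payoff, 0.5) if estimate > known_payoff else known_payoff
        total += payoff
    return total / horizon

def average_payoff(horizon, explore_first_n, trials=5000, seed=0):
    rng = random.Random(seed)
    return sum(run_trial(horizon, explore_first_n, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for horizon in (5, 20, 100):  # short vs. long evaluation windows
        print(f"horizon={horizon:>3}: exploit only={average_payoff(horizon, 0):.2f}, "
              f"explore first={average_payoff(horizon, 4):.2f}")

The horizon parameter here is simply a stand-in for how long an organization can go before its performance is judged; shortening it plays roughly the same role as the frequent, high-stakes feedback discussed below.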


Many factors - like the nature of feedback, level of competition, extent of turnover, and turbulence of the environment - play a role in determining whether or not organizations can balance exploitation and exploration (March, 1981). These factors affect the extent to which organizations can free up resources to focus on the research and development of new practices as well as the extent to which organizations can collect and distribute the knowledge and expertise needed to develop and sustain innovations. Unfortunately, in schools many of these factors conspire to favor the exploitation of established practices. The constant demand to demonstrate quick improvements, the nature of the competition among schools and improvement efforts, the often high rates of turnover in schools and reform organizations, and the highly turbulent environment in which schools and reform efforts operate make it difficult to commit time and resources to the development of ideas that depart significantly from conventional practice. These same factors may also make it difficult for groups and individuals in schools and reform organizations to share with their colleagues any knowledge about an innovation that they may develop.


As a consequence, reform efforts not only need to change schools, they also need to deal with the conditions that discourage the exploration and development of new ideas in schools. Addressing these conditions is particularly important for organizations like ATLAS that are predicated on the idea that school improvement will be most successful if schools have an opportunity to identify and explore those ideas and practices that meet the needs and interests of their communities.6 Unfortunately, however, improvement efforts rarely address these conditions. In fact, for the most part, the efforts of ATLAS to break the mold were carried out in conditions that favored the exploitation of ideas that were most closely associated with already established practices, that were least controversial and disruptive, and that could be implemented most easily and most quickly. Ironically, the constraints established by NASDC and the challenges of the collaboration among the ATLAS partner organizations may have compounded rather than alleviated these pressures that discouraged exploration and the development of innovative practices.

THE FREQUENCY OF FEEDBACK


Feedback about the status or progress of new ideas and practices can be a critical part of the development of innovations. At the same time, the frequency of feedback and the implications derived from that feedback can discourage the members of an organization from pursuing those innovations in the first place. In fact, frequent feedback often increases the tendency of organizations to exploit practices that are already successful. Frequent feedback can have this effect because the exploration of ideas and approaches that depart significantly from current practices often involves substantial risks. In many cases, exploration never leads to productive developments; and even when exploration does lead to a new development it usually takes a long period of time before measurable success can be demonstrated. As a consequence, if the performance of an organization or the individuals within it is constantly monitored (either formally through evaluations or informally through interactions with customers, peers, or supervisors), then organizations and their members are wise to continue to do what they have been doing and not take any chances that may reduce their effectiveness. However, if organizations and their members receive less immediate feedback or if the feedback is not linked to formal or informal performance evaluations, then they have more latitude to try new procedures, take some risks, and experience fluctuations in performance. While such exploration may not produce immediate benefits, it may prove advantageous in the long run (Herriott et al., 1985; Levitt & March, 1988; Levinthal & March, 1981).


In order to achieve the benefits of such explorations, however, some autonomy or some protection from immediate feedback is required. Thus innovative organizations are likely to be those that can afford to wait and can endure losses while a new product or practice is being developed or tested (Garud, Nayyar, & Shapira, 1997). The ability to wait may be particularly important because the success of an innovation often hinges on changes or developments outside the organization itself: it may take some time for customers to overcome previous buying patterns or long-established preferences. Unfortunately, the pressures on organizations to produce quick improvements in performance may be particularly strong when the company is not doing well. As a result, immediate negative feedback may lead organizations to pull back from exploring new practices at the very time when innovations are most needed.


In schools, feedback is constantly provided not only on the performance of students and teachers, but also on how smoothly the schools are operating. In the face of such constant feedback, schools and school staff have little latitude to try out new and unproven approaches. Instead, they are likely to be rewarded for pursuing practices that are known - or are generally believed - to be effective in quickly raising test scores or making improvements in other widely recognized measures of school performance (e.g., drop-out rates, college attendance, grades, suspensions). Schools have little freedom to wait and pursue innovations that do not offer immediate positive outcomes.


Consistent with this pattern, the schools in the ATLAS sites had little or no chance to explore and develop break-the-mold approaches before they began to receive feedback on how well their efforts were proceeding. In the very first years of the project, the site members got regular feedback from conventional measures of student and teacher performance, formal evaluations commissioned by NASDC, and informal assessments of daily operations by anyone who worked in or visited the schools. The sites were subjected to such immediate feedback even though it was well known that it might take several years before clear improvements could be demonstrated (ATLAS, 1992). For the members of the ATLAS sites, this regular feedback was particularly problematic because of the unconventional nature of some of the ATLAS ideas and because of the complex and exploratory nature of the ATLAS approach.


Feedback on the performances of students, teachers, and schools. The pressure on the ATLAS schools to produce quick improvements in student performance on standardized tests was reinforced by both implicit and explicit messages in many of the sites. For example, Maryland had embarked on an ambitious effort to assess effectiveness through the Maryland School Performance Assessment Program. In this program, the results of tests and other data on the performance of schools and school systems were publicized in widely read report cards. Those schools and systems that were particularly low-performing were often subjected to threats of takeover. These were not seen as idle threats in Prince George's County, whose scores often placed it in the bottom third of Maryland's school systems and, in some cases, next to last (White, Muncey, & Fanning, 1999).


Concerned about the inhibiting effects that such tests could have on efforts to develop new forms of curriculum and assessment, the ATLAS partners raised in their proposal the possibility that ATLAS schools would apply for waivers to excuse their students from participating. However, the waivers were themselves one of the contentious new ideas that had to be explored by those in the sites. There was no evidence that such waivers would directly contribute to improvements in performance, and everyone involved knew that asking for such waivers could create huge disagreements in the community and take up considerable time and effort. For example, in Norfolk, members of the ATLAS elementary school sought a waiver for one of the standardized tests students were required to take, but the principal insisted on keeping the test to maintain credibility with the community (White, Muncey, & Fanning, 1999). As a result, for the most part students in the ATLAS sites continued to take many of the same tests as their peers in other schools. Thus conventional measures of student performance continued to be used by many people - including administrators in the districts, community members, and local media - as important vehicles for feedback on the performance of the schools even in the earliest stages of the project.


Conventional evaluations of teachers provided another source of feedback that could discourage ATLAS members from experimenting with the classroom practices supported by ATLAS. In one ATLAS school - even though job descriptions and hiring criteria had been changed to highlight the importance of participating in ATLAS - teachers who were attempting to teach for understanding and were asking students to work on projects in groups were given poor evaluations because the evaluation form identified good classes as classes in which students stayed in their own seats and did not speak out of turn. At the same time that high stakes were attached to such feedback to ATLAS teachers from conventional sources, there were few clear incentives attached to the feedback from the members of ATLAS. While those who were interested in exploring teaching for understanding were offered the opportunity to meet regularly with members of the ATLAS partners to get their feedback, no explicit rewards existed to encourage teachers to take advantage of such opportunities. Similarly, there were no explicit sanctions for those who chose neither to seek nor respond to feedback from the ATLAS members.


The fact that the members of the ATLAS sites were subjected to regular feedback on the operation of the schools - and not just on the performance of students and teachers - also may have inhibited efforts to experiment with new approaches. Feedback on daily operations came in the form of formal and informal conversations among school staff and between school staff and parents, district officials, ATLAS staff, NASDC evaluators, and other visitors. Unfortunately, trying to put the ideas supported by the ATLAS partners into place created numerous challenges and disruptions in the normal activities of the school. Yet those challenges and disruptions - seen as necessary and inevitable by reformers - could also be interpreted by school staff, students, and parents as evidence that the operation of the school was getting worse, not better. Such concerns naturally led to more negative feedback from some quarters.


Concerns about embracing ATLAS ideas may have been compounded because many of the approaches the site members were being asked to explore were more likely to generate controversy and resistance than immediate gains in student performance. For example, practices like portfolio assessment and project-based activities developed at Project Zero (Seidel et al., 1997) and exhibitions of student work developed by members of the Coalition of Essential Schools (McDonald et al., 1992) were expected to lead to improvements in student learning in the future; however, no one expected that those improvements would be captured quickly or easily on conventional measures. At the same time, there was ample evidence that developing such practices was extremely difficult, often resulting in conflict that was not always satisfactorily resolved (Muncey & McQuillan, 1996; Seidel, Hatch, & Blythe, 1997).


Making things more difficult, the members of the ATLAS sites were truly being asked to explore the ATLAS ideas. Rather than being given already existing plans or materials, in many cases site members worked closely with ATLAS consultants to develop practices unique to their own schools. This approach is common among many school reform efforts and is expected to lead to increased success and to the sustainability of the innovations down the line. However, such a break-the-mold approach may also have contributed to even more initial disruptions and may have made it even harder to produce quick, measurable improvements in performance (Bodilly, 1996). For example, in Prince George's County, the efforts of ATLAS to encourage the schools and teachers to participate in the selection and development of the reform initiatives represented an entirely different way of working from the usual reform projects in the district. The ATLAS site members were "used to being dictated a program of instruction and otherwise being compliant to a bureaucracy that purported to know best" (White, Muncey, & Fanning, 1999, chap. 8, p. 6). Not surprisingly, there was considerable resistance to the ATLAS approach, and even those who embraced it found it extremely difficult to implement.


Finally, the publicity and attention that came with the schools' association with NASDC and ATLAS may have exacerbated fears of negative feedback and allowed little time for the schools to explore new ideas. Because ATLAS and NASDC were relatively high-profile, controversial initiatives, there was little chance for the ATLAS schools to try to work things out before their progress was examined and assessed; as Nunnery et al. (1997) put it in a report on the NASDC schools in Memphis, because "restructuring was accompanied by much local publicity and external scrutiny, teachers may have felt pressures to achieve immediate, noticeable changes in teaching and learning" (p. 92). The lack of wait time before being subjected to high-stakes feedback was particularly problematic in Prince George's County. In the very first year of implementation, NASDC commissioned evaluations to help determine whether each of the design teams should receive continued funding, and Prince George's County was selected to serve as a case study for ATLAS. Following the first visit of the evaluators to Prince George's County in 1993, NASDC and ATLAS were given feedback on the ATLAS schools that the site members in Prince George's County viewed as at the very least critical and more often regarded as "devastating" (White, Muncey, & Fanning, 1999, p. 7). Such negative feedback on the operation of the schools and the progress of implementation helped to strain relationships among supporters and detractors in the sites and between site members and members of ATLAS.


All in all, the ATLAS schools and site members had to deal with a fundamental mismatch between the expectations and operations of ATLAS and the feedback that they constantly received. At the same time that their performance was closely monitored and relatively quick improvements were expected, they were being asked to develop break-the-mold initiatives where outcomes were uncertain and where significant disruptions to their daily lives, negative performance evaluations, and criticism were almost inevitable.


Feedback on the performance of ATLAS as a whole. While site members could not escape immediate and frequent feedback about the status of implementation and the success of the ATLAS endeavor, the members of the ATLAS staff and the partner organizations also received regular feedback on their efforts to develop a break-the-mold approach. As in the sites, neither the regularity nor the nature of that feedback provided much encouragement for ATLAS to explore or develop an approach that went beyond the established practices of the partner organizations or that would break the mold of American schooling.


For the ATLAS partners, simply receiving the initial funding from NASDC and the publicity associated with the award provided some encouragement for participating in an effort to break the mold. However, the constraints on the use of those funds and the structures for reporting and feedback that NASDC established provided little support for the exploration and development of new ideas. For example, in order to encourage the design teams to be productive quickly, NASDC required ATLAS and the other design teams to sign a contract in which each team specified deliverables that they would submit to NASDC on a quarterly basis. The contract that ATLAS signed with NASDC required ATLAS to produce 44 deliverables over the course of the first year, including such things as plans for the ATLAS curriculum, standards and benchmarks of student and school performance, and reports on work in each of the sites. These deliverables were required so that NASDC could monitor the operation of the team and provide regular feedback on the progress it was making. Furthermore, the deliverables were a key source of information that NASDC used to decide whether or not ATLAS should get funding for the next phase of the project. As a result of the tight schedule, numerous deliverables, and the high stakes attached to NASDC's reviews, ATLAS was given no wait time to develop the relationships or establish the initial knowledge about the partner organizations that could serve as a foundation for the exploration and development of a new and previously untested approach.


The exploration of break-the-mold ideas was also constrained by the fact that ATLAS was likely to receive negative feedback if their design did not conform to the demands of NASDC. At least in some cases, these demands favored the development of far more conventional ideas and practices than those that the ATLAS partners endorsed. Thus NASDC expected ATLAS schools to establish and pursue world class standards in the traditional domains of schooling - English, History, Geography, Science, and Mathematics (NASDC, 1991). Yet the Coalition of Essential Schools, Project Zero, and Education Development Center had all been heavily involved in developing curricula that did not follow those traditional disciplinary divisions. From a CES perspective in particular, efforts to break down these traditional divisions constituted a key step in establishing successful and innovative schools (Sizer, 1984). In other words, ATLAS was being asked to break the mold while still conforming to some of the conventional structures of schooling favored by NASDC.


In addition to the reviews of the deliverables, the evaluation commissioned by NASDC and carried out by RAND also provided regular feedback on the operation and performance of the design teams and affiliated sites. Although the evaluators from RAND sought to minimize pressures to raise test scores by acknowledging that quick improvements in student outcomes were not expected, their evaluation still had inhibiting effects on ATLAS and the sites. For example, in their yearly analyses of the first two phases of NASDC, RAND assessed the extent to which the design teams were prepared to support the implementation of all the different aspects of their designs and the extent to which the sites were ready or able to implement those designs (Bodilly, 1996; Bodilly, Purnell, Ramsey, & Smith, 1995). This evaluation strategy reinforced the demands of NASDC for the creation of designs that could be implemented in a relatively short period of time. This was particularly problematic for ATLAS, both because their approaches involved practices known to conflict with conventional, established practices and to require significant time for implementation, and because the partners needed time to learn about one another's philosophies and theories before they could create a design that reflected all of those approaches (Hatch, 1998).


Not surprisingly, ATLAS did not fare particularly well in the initial RAND evaluations, some of which raised questions about the ability of ATLAS to move on to the next phase of the NASDC work. For example, in a summary of the evaluation sent to ATLAS, RAND concluded that "design team progress has been slower compared to NASDC teams categorized as using similar design approaches," that "the team enters the next phase still shaking out the design and implementation strategy," and that "we question whether ATLAS has developed effective strategies that will push the implementation" (RAND, 1995, pp. 1, 8). In the end, while many members of ATLAS shared some of the concerns of NASDC and themselves wished that the work was proceeding more quickly and with more concrete results, the pressures of regular feedback and the negative nature of much of the feedback received created difficult conditions for the exploration of break-the-mold approaches that might be more likely to pay off over a longer period of time.

THE NATURE OF COMPETITION


Competition can create rewards and sanctions that promote or penalize organizations for exploring new approaches and can exacerbate the pressures created by frequent feedback. A competition for primacy - in which there is a single winner or in which winners are rewarded much more handsomely than others - favors those organizations that are willing to take some risks and explore and develop new ideas (March, 1991). In such competitions for primacy, rewards are provided for the organization that can establish a distinctive or unique approach that enables them to outperform all their competitors; the fact that the remaining organizations get few rewards provides a strong incentive for all involved to make a significant effort to explore new ideas. However, because the organizations are exploring new ideas - and therefore taking more risks - the average performance of all the organizations is likely to be lower. One or two organizations might be successful, but the rest are not likely to perform as well.


Competitions that penalize organizations for finishing last or reward them simply for doing as well as others produce the opposite effects. For example, in such competitions - which are basically competitions for relative standing - organizations that are doing passably well only have to keep up with some of their competitors, and, therefore, they are likely to be rewarded for exploiting already established practices. On the other hand, those organizations that are not doing well only have to catch up to their competitors. Therefore, they are likely to be rewarded simply for adopting and exploiting the practices of their competitors, not for exploring new approaches. Similarly, the negative attention they get as a result of comparisons with other organizations may increase the pressures on them to improve quickly; and, again, exploiting already established practices is likely to be a more profitable strategy than taking on the risks and uncertain rewards of exploration. Of course, if organizations lag far behind their competitors, they may be forced to take drastic steps to improve. While such steps may include attempts to develop radical new ideas, the high stakes and urgency of the problems do not necessarily create an ideal context for the deep and thoughtful exploration of new approaches. In fact, such organizations may get caught in a "failure trap" (March, 1995): poor performance may lead to efforts to improve which, when they do not pay off quickly, may be abandoned. That failure may, in turn, lead to other change efforts that also fail to produce quick positive results.
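The difference between the two kinds of competition can be illustrated with a second small sketch, again purely hypothetical and not taken from the article: a single "explorer" whose payoff has a slightly lower mean but much higher variance competes against several low-variance "exploiters." In a competition for primacy the explorer finishes first far more often than its one-in-ten share would suggest; in a competition for relative standing the same variance means it also finishes last far more often.

import random

def simulate(n_rivals=9, trials=20000, seed=1):
    """Monte Carlo estimate of how often a high-variance 'explorer' finishes first
    or last against low-variance 'exploiter' rivals. All distributions are invented
    for illustration."""
    rng = random.Random(seed)
    wins_first = 0     # trials where the explorer beats every rival (competition for primacy)
    finishes_last = 0  # trials where the explorer trails every rival (competition for relative standing)
    for _ in range(trials):
        explorer = rng.gauss(0.9, 1.0)                           # lower mean, high variance
        rivals = [rng.gauss(1.0, 0.1) for _ in range(n_rivals)]  # higher mean, low variance
        if explorer > max(rivals):
            wins_first += 1
        if explorer < min(rivals):
            finishes_last += 1
    return wins_first / trials, finishes_last / trials

if __name__ == "__main__":
    p_first, p_last = simulate()
    print(f"P(explorer finishes first) = {p_first:.2f}")  # variance pays when only first place is rewarded
    print(f"P(explorer finishes last)  = {p_last:.2f}")   # variance hurts when last place is penalized

Under these invented numbers the explorer takes first place far more often than a fair share would predict, but it also finishes last far more often, which is why winner-take-all competitions encourage exploration while keep-up or avoid-last competitions discourage it.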


For the most part, schools are not involved in competitions for primacy. Administrators, teachers, students, and community members often use traditional measures of school performance as a gauge of which schools are doing well and which are not. But the rewards for doing the best on those measures may not be that much more substantial than for doing relatively well in comparison to other schools. In this arrangement it is often sufficient simply to keep up with those other schools. Why should schools that are doing fairly well take a chance on exploring a new and risky approach that may jeopardize their relative standing? Why shouldn't those schools keep pace by exploiting established practices? At the same time, there are often significant penalties for poor performance, and schools that are not doing well are rewarded simply for catching up. In such situations, it makes more sense to adapt the practices of competitors rather than to develop new and innovative approaches.


This form of competition for relative standing was clearly in evidence in many of the districts where ATLAS worked. For example, in the ATLAS schools in Norfolk, when the superintendent publicly declared that all teachers in the district were expected to improve the test scores of their students within a three-year period, teachers were understandably concerned about their ability to keep up with their peers in non-ATLAS schools. Knowing that some schools and teachers might be teaching to the test while they attempted to teach for understanding generated doubts and concerns about job security even among those who believed that the ATLAS efforts would eventually result in better test scores. Even though the very same superintendent also told the members of the ATLAS schools not to worry about their test scores, such concerns could not be eliminated, because anyone could still look at those test scores to see how ATLAS schools measured up to others.7


Implicit and explicit competition among the design teams also may have made it more difficult to pursue particularly innovative and not yet established practices. This did not have to be the case, however. In fact, NASDC originally planned a competition that would reward only the most original and plausible designs with continued funding for the implementation phase. With sufficient time for design teams to develop unique approaches (which probably would have required a much longer time frame than the one NASDC provided), such a competition might actually have created a much more supportive context for the exploration of new ideas. Ironically, due in large part to concerns of the design teams about a competition that would lead to one or two winners, NASDC decided to reward all promising designs with continued funding.8 As a consequence, at the end of the first year NASDC rewarded teams as long as there was some evidence that those teams had as much potential as the others to support improvements in a wide range of schools. In this arrangement, instead of having to develop unique ways to outperform their peers the design teams only had to keep pace with them.


The fact that schools associated with some designs were successful in producing quick improvements on conventional measures and in implementing their designs smoothly put more pressure on the other teams to produce equally positive results. Not surprisingly, it was designs like ATLAS - which were more comprehensive and required more exploration on the part of the sites - that were described as "slow starters." Designs that had a primary emphasis on increasing student achievement and a structured curriculum that sites could implement were described as "fast starters" (Smith et al., 1997). The constant threat that NASDC could cut off funding at almost any time if ATLAS did not seem to be as promising as the other designs only increased the risks to ATLAS of trying to bring their ideas together in new and previously untested ways.


The inhibiting effects of such implicit competition to keep pace with the other designs increased as NASDC moved into the dissemination phase of the project. At that point, NASDC chose to link continued funding to the ability of the design teams to attract schools and districts willing to pay the design teams for their services. To this end, NASDC held a series of design fairs in a few districts around the country, and the amount of funding that the teams received was related to the number of schools participating in these fairs who chose to implement that team's design. These schools and districts - even those that had shown considerable interest in the slow-starting designs - expressed overwhelming concerns over the extent to which these designs could help to produce improvements in students' performances on standardized tests (Bodilly, 1998).9 Thus evidence that ATLAS or another design was not as successful in improving test scores quickly could be extremely problematic.


As a consequence, the competition among the design teams may have contributed to cyclical pressures that could make it harder and harder for design teams to sustain any ambitious attempts to explore new and untested approaches. Given widespread concerns over improving test scores quickly, the fast starters were likely to be attractive to a much wider group of schools and districts. Those same designs were likely to be the most profitable and productive for NASDC; and, in turn, those designs were likely to be in a better position to receive the added funding that would enable them to sustain themselves and continue to develop their initiatives. The slow starters - who needed even more time to explore and experiment - had to deal with the fact that they were less likely to get continued or additional funding through this arrangement. Comparisons to the performance of the fast starters could lead to greater criticism and increased demands to produce similar results. Such demands, in turn, could make it more difficult for the slow starters to achieve positive results, garner further funding, and sustain their explorations.10


Beyond the pressures that can come from competition with other organizations, collaborating organizations or units of an organization can also be in competition for time, resources, and attention, a competition that can make it difficult to pursue the long-term development of new ideas. Thus, while ATLAS was originally conceived of as a supplement to the work of the partner organizations, it created demands for experienced staff members who were already in short supply. In fact, the initial ATLAS staff, which included about thirty different individuals working at least part-time, was almost as large as three of the four partner organizations. Since most of the staff members of the partner organizations - particularly senior staff - were already committed to other projects, involving them in ATLAS could be very disruptive to the work of the partners.


Making things more complicated, even by the end of the first year ATLAS could be seen as a distinct organization in competition with the partners (McDonald et al., 1999). In fact, ATLAS sought additional funding from some of the very same foundations that often funded the work of the partner organizations. Furthermore, if ATLAS successfully developed a new, more effective approach, the work of the partner organizations could become obsolete. If ATLAS brought together the approaches of all four organizations, why would any school choose to work with an organization that only focused on one? In other words, by sharing the knowledge and resources needed to develop an approach that was greater than the sum of the parts, the partner organizations could have undercut their own efforts to support their work in the future. Such conflicting pressures did little to support the exploration and development of new, more integrated approaches in the ATLAS partnership.

THE ABSORPTION OF KNOWLEDGE AND THE EXTENT OF TURNOVER


The extent of prior knowledge about innovative practices also affects an organization's capacity to absorb knowledge - to collect and develop the knowledge needed to pursue an innovation - and to explore new ideas effectively and efficiently (Cohen & Levinthal, 1990; Levinthal & March, 1993). Specifically, the more unconventional and unfamiliar an innovation, the more difficult it is going to be to explore it and develop and share new knowledge about it. Ideas and practices that are entirely new and unfamiliar pose a particular challenge. Lacking any knowledge of new ideas or practices, the members of an organization may have difficulty recognizing the potential value of the innovation in the first place, and they may not know how to begin or carry out an investigation to learn more about it (Garud, Nayyar, & Shapira, 1997; Jelinek, 1997). As a consequence of this "absorption of knowledge" problem, more initial investment is required in order to explore new and unfamiliar ideas, and the costs associated with pursuing them are particularly high. Conversely, it is much easier to exploit or adapt current practices that are familiar and well known.


One way to deal with this problem is to bring in new staff who have gained knowledge and expertise about ideas and practices that may be unfamiliar to existing staff. In fact, moderate turnover of staff over time creates ideal conditions for the development and pursuit of innovations (Levinthal & March, 1993; March, 1991). Moderate turnover allows for the introduction of individuals who can help establish the initial knowledge needed to explore and develop innovations effectively. Without any turnover, staff may either become stagnant - wedded to old ways of doing business and unable to recognize new opportunities - or become so proficient in a tried-and-true approach that they fail to explore or develop the new capacities necessary to succeed in the future (Arthur, 1988; Leonard-Barton, 1992; Levinthal, 1997). With excessive turnover, the regular influx of new people may bring a constant stream of new ideas, but the influx of newcomers also may tax an organization's ability to sustain previously successful practices or constrain its ability to establish and share the initial knowledge needed to explore new ideas in depth. Excessive turnover among those in leadership positions can be particularly problematic because each new leader has to be brought up to speed on innovations underway, and the more unconventional or unfamiliar the innovation, the longer such a process may take.


By the same logic, schools and school systems that experience moderate turnover should have a better chance to develop the initial knowledge needed to carry out in-depth explorations of ideas that depart from conventional practices. In schools with moderate turnover, newcomers may bring some new ideas and knowledge that help to inform and motivate existing staff. Furthermore, newcomers may stay long enough to learn about and benefit from any innovations that are already being pursued.


Unfortunately, excessive turnover can often be a problem in schools - particularly those experiencing serious difficulties - as well as in reform organizations themselves. In the ATLAS project, turnover was a particular problem in the leadership positions in the schools as well as in virtually every other aspect of the work. For example, in Norfolk, principals of two out of the three original ATLAS schools left or were replaced during the first four years. Assistant principals changed with even greater frequency, and retirements and transfers meant that large numbers of new teachers joined the ATLAS schools as well (White, Muncey, & Fanning, 1999). During the first four years in Prince George's County, the high school had four different principals. Furthermore, there were almost constant changes in the administrators to whom the principals and the members of the ATLAS site team reported. In the first year, they reported to the superintendent. In the second year, they reported to an area superintendent. The following summer, that area superintendent resigned and much of her staff also retired or transferred. As a consequence, in the third year the ATLAS site members in Prince George's County spent much of their time introducing the new area superintendent and her staff to the project and negotiating to accommodate changes she wanted to make. When the new superintendent took over at the beginning of the fourth year, all of the area offices were abolished and by the middle of that year, another administrator had been assigned to oversee the ATLAS project in the county (White, Muncey, & Fanning, 1999).


The turnover among the leaders and staff members of ATLAS was just as problematic. Initially, ATLAS was led by a team of codirectors drawn from each organization, and the ATLAS staff consisted of individuals who worked for, and were housed in, one of the partner organizations. Due to numerous organizational problems with this arrangement, by the middle of the first year this collaborative approach was abolished and a more centralized structure was created. In the new arrangement, ATLAS was led by a management team including a single project director and two managers who worked with him. While the four original directors were longtime employees of one of the partners, and knew the ideas and practices of the organizations well, none of the members of the new leadership team had been on the staff of any of the partner organizations for a significant period of time before ATLAS was created. In addition, at the end of the first year the central staff was substantially reorganized and only five of the original ATLAS members - out of a staff of over twenty-five - remained throughout the second year of the project; at the end of the second year, a significant turnover occurred again, as nearly half of the staff departed. At the end of the third year, the project director left and was replaced by the third leader in three years. In addition to the turnover in the ATLAS schools and the ATLAS design team, the leaders of three of the four ATLAS partner organizations also announced that they would be stepping down at some point during the life of the project.


Beyond the personal and professional disruptions such constant turnover brings, it also made it extremely difficult for members of the sites and the ATLAS staff to develop the initial knowledge needed to create and nurture a new approach (Hatch & White, 1997). Substantial amounts of time had to be devoted to introducing new members of the sites and new members of the ATLAS staff to the work of the partner organizations and to any work done or progress made previously. As a result, there was less time, more limited resources, and fewer people available to develop the ATLAS design or deepen the implementation. With such a constant stream of new staff and site members, it was as if ATLAS was condemned to constantly repeat the first year of the innovation - a year always likely to be difficult and disruptive (Huberman & Miles, 1984).11


Furthermore, the new ATLAS staff and site members had to develop their understanding of new and unconventional practices at the same time that the regular feedback and escalating demands of competition continued to put pressure on ATLAS and the ATLAS schools to produce results right away. Such conflicting demands may have created a vicious cycle or circular trap (Levinthal & March, 1993): a failure to produce quick results may have contributed to turnover in staff and leadership, which, in turn, may have made it even more difficult to produce results.12

THE TURBULENCE OF THE ENVIRONMENT


In a turbulent, unstable environment, conditions change frequently and quickly. Under such circumstances, organizations and individuals are rewarded for being able to withstand or adapt to the short-term changes in their environment. They have little latitude to focus on developing new approaches that might be successful in the long term. Even if they had the time or resources for the necessary exploration, in a turbulent environment it is particularly difficult to anticipate, predict, or prepare for the future (March, 1991).


Although practices inside the classroom may have changed relatively little over the years (Cuban, 1984; Elmore, 1996; Tyack & Cuban, 1995), constant changes in leadership and policies, uncertain funding, and the rapid growth of technologies all help to make the environment outside schools quite turbulent (Fullan, 1999). Both NASDC and ATLAS sought to deal directly with these conditions by encouraging the establishment of a policy environment and structures at the district level that would support the development of innovative designs. In the fourth year, as NASDC sought to scale up the designs to new sites, it even went on to adopt a strategy in which it worked explicitly to partner with jurisdictions (districts and regions, or states) that were willing to agree to a series of demands that NASDC felt were needed to increase the chances of successful implementation (Glennan, 1998). Yet in the first four years, neither ATLAS nor NASDC was able to reduce the considerable environmental turbulence in which the schools and design teams had to operate. In particular, the turnover in the superintendency in two of the sites - in Norfolk and Prince George's County - brought substantial shifts in policies and practices. In addition to the shifts associated with these changes in leadership, the schools in all the sites had to deal with considerable uncertainty over their budget allocations each year, and they had to respond to changes that were being made in curriculum frameworks, standards, and assessment requirements at the county or state levels.


Many of these changes in the district and state contexts were beyond the control of ATLAS and NASDC, but some aspects of the activities of NASDC also may have contributed to the turbulence of the environment in which ATLAS had to work. In particular, NASDC constantly kept ATLAS and the other design teams guessing about whether or not they would get continued funding. These funding questions were particularly acute because it was not always clear how much money NASDC would be able to raise or how much money they would choose to allocate to the design teams.


As a result, ATLAS faced the prospect that the entire project could be shut down either because NASDC was not pleased with the performance of ATLAS or because NASDC itself could not afford to continue. The minutes of a meeting between members of NASDC and ATLAS in the spring of the first year made these dual threats abundantly clear. After representatives of NASDC repeatedly questioned the members of ATLAS about the progress of their design and their ability to move into the implementation phase of the project, the NASDC representatives reported on their own financial health. They revealed that they did not yet have the money to support the work of the design teams in the second year; they reported that they would not know whether or not they would be able to provide continued funding until a board meeting in June; and they urged ATLAS to lobby funders and the government on behalf of NASDC.13


Ultimately, funding for the second year was secured only after President Clinton endorsed the NASDC effort and after Walter Annenberg granted NASDC 50 million dollars as part of his gift of 500 million dollars to support improvements in public schools. But even after the future of NASDC was assured (at least for another year or two), it was not until the end of the first year that NASDC confirmed that ATLAS would receive continued funding. As a result, ATLAS had to plan a major summer institute, the central component of their design and implementation strategy in the second year, without knowing whether they would have money to pay for it, or how much. Similarly, teachers and staff in the ATLAS sites had to make plans for their summers and for the coming year without knowing whether or not ATLAS would continue to exist. Since the ATLAS partners depended so heavily on soft money and other grants for their operations, and since the ATLAS endeavor was so large and costly, it was not possible for them to pick up the slack and guarantee funding in the face of the uncertainties of NASDC. While communications between NASDC and ATLAS improved, questions about the extent of funding that ATLAS would receive from NASDC lingered as each new phase of the project loomed. Such substantial uncertainties made it extremely difficult for the members of the ATLAS design team and the sites to plan or prepare for a future beyond the life of each grant.

IS IT BETTER TO BREAK THE MOLD OR TINKER WITH IT?


Although the original goals of NASDC and ATLAS included the development of innovative, break-the-mold schools, the conditions in which the work was carried out were far from ideal for doing so. While one can imagine other factors that may help tip the balance between exploitation and exploration in schools in one direction or another, many of the factors that are known to affect the balance in business organizations seem to push schools and those inside them to exploit conventional practices rather than to explore new ones. These kinds of pressures discourage schools from devoting the necessary time and resources to the development of innovations that could lead to significant improvements in long-term performance and, instead, may lead schools to implement new practices in little more than superficial ways.


In the ATLAS work, the regularity and nature of the feedback that ATLAS site members and staff received, the competition created by the inevitable comparisons with the work of other schools and design teams, the extent of turnover at all levels of the project, and the uncertainties of policies, leadership, and funding all served to encourage the exploitation of established practices. Furthermore, in cases like that of ATLAS, in which disruptions, resistance, and other problems accompany implementation, it may be necessary to make many compromises about how much work has to be done and how many things actually have to change in order to get sufficient numbers of people to buy into and participate in the development of innovative practices. These compromises may water down the innovation and prevent individuals and schools from developing the in-depth knowledge required for a full-fledged effort to develop new and more effective practices and programs.


Of course, these constraints on the development of new ideas cannot in and of themselves explain why the ATLAS schools developed the way they did. For one thing, these were not the only factors that affected the success of ATLAS. In the early years, ATLAS experienced numerous problems with organization, management, and staffing that leaders and staff members had considerable difficulty addressing. In addition, deep-seated, and initially unrecognized, differences in philosophy made it hard to develop working relationships or come to consensus on key issues of approach and strategy; and the sheer size and scope of the project and the distances among the collaborating organizations created challenges for communication and administration that could have affected the work whether or not the conditions in schools constrained the development of innovations.14


Given the unfavorable conditions for breaking the mold and the other problems that ATLAS and other school reform efforts face, it may seem hard to explain why any innovative practices develop at all. But the ATLAS schools did make some changes, and there are examples of schools that are widely recognized as innovative. However, the fact that some innovations do develop does not negate the impact of those factors that constrain the exploration of new ideas in schools. In fact, the occasional development of innovative practices, or even of a few wholly transformed schools, is a natural part of a system that favors the exploitation of conventional practices. Rather than being exceptions to the rule that schools cannot change, the development of a small number of innovative practices and schools may instead reflect the rule that schools can only change through the monumental effort, unusual resourcefulness, and strong leadership of key individuals or groups.


In this view, the emphasis on developing, recognizing, and rewarding strong leaders (which is often cited as a key ingredient in successful reforms) is itself a natural consequence of the exploitation/exploration dilemma. Since exploration is so risky, and since it is so unlikely to lead to improved performance, there have to be substantial incentives to encourage leaders to try new things. These incentives include considerable rewards and public recognition for being unique and for creating new and distinct pedagogical approaches. In essence, leaders who support innovations are in a competition for primacy in which their success and continued support often rest on their ability to stand out from the rest of the pack. Once a new approach becomes recognized or accepted, however, the rewards for those who follow or attempt to emulate that approach may not be nearly as great. In fact, being seen as a follower, or as derivative or unoriginal, can itself be a liability. Thus, others who want to establish themselves as strong leaders and make a name for themselves are better off trying to create their own approach rather than trying to adopt, sustain, or improve the innovations initiated by someone else. In this way, the occasional innovations of particular schools or reform efforts are to be expected even in conditions that favor the exploitation of conventional practices, but they do little to break the mold that constrains the widespread development or effective dissemination of innovations in the first place.


This analysis suggests that in order to break the mold of conventional schooling we have to shift the emphasis from developing and implementing particular innovations to directly addressing the conditions that inhibit the exploration of new ideas.15 But for all the rhetoric about the need for drastic changes in schools, a fundamental question remains: Do people want an educational system that supports and encourages the exploration and development of new ideas and practices? Although many people are frustrated by the slow pace and incremental nature of change in schools, there is a basic trade-off at work. Exploring new ideas may lead to the discovery and development of programs and practices that were previously unimaginable. But at the same time, such exploration is not likely to lead to improvements in a short period of time, and many of the new ideas explored may never turn out to be productive advances. By comparison, it is worth noting that no more than two out of ten efforts to produce innovations in business are likely to be successful (Mansfield, 1981; Rosenberg, 1994).


In education, those odds may not be good enough. Given the inevitable disruptions and challenges of exploring new practices and the substantial chances that those practices will never be successful, there may be some advantages to a school system that rewards those who tinker with current practices and discourages those who wish to explore more unconventional alternatives. It is one thing to take risks in order to create a better car or computer chip, but it is a whole different matter to take those kinds of risks when the lives and livelihoods of children are at stake.


At the same time, the existence of a trade-off between exploration and exploitation does not erase the fact that many people believe that schools today can, and have to, be better. For many, there is as much risk in simply tinkering with the current system as there is in trying to change it radically. In order to deal with this dilemma, even if it is too difficult and too risky to break the mold with one swift kick, it might be possible to find a better balance between exploitation and exploration. This analysis suggests that in order to create such a balance it may be necessary to rethink some basic assumptions about what it takes to change schools. First, we have to recast the buy-in problem. Typically, school reform efforts begin by introducing their approach to a school or community and seeking a vote that assures that some percentage (often 80%) of the staff buy in to that approach. This strategy often reflects an underlying assumption that a lack of motivation is a key problem for reform efforts and that if people can be motivated to pursue a reform it is more likely to be successful. Seen in terms of the exploration/exploitation dilemma, however, the capacity to absorb and develop new knowledge, not motivation, is the key problem. We cannot expect people to commit to an innovative new practice until they understand it well enough to see both its strengths and weaknesses and to begin to imagine how their work could be done differently and more effectively. Therefore, schools have to spend considerable time developing the requisite initial knowledge, time that far exceeds the amount that is usually allotted for the introduction and discussion of reform efforts during design fairs, weekend retreats, or even summer institutes (Hatch, 1999; Stringfield et al., 1998); and the more unusual or innovative the approach, the more time is likely to be required to acquire the necessary initial knowledge to begin an effective improvement effort.


Second, the current efforts to improve teacher preparation and professional development cannot be as successful as they need to be unless such efforts go hand in hand with efforts to manage turnover effectively. Designing professional development workshops and preparation programs for people who are constantly moving from school to school may support individual development, but it does little to create the kind of organizational capacity required to undertake in-depth explorations of innovative practices. With a more stable staff, schools can build on and share the initial knowledge developed by veteran staff members and still bring in new employees with fresh perspectives and other kinds of expertise.


Third, while demands for accountability are growing, the extent to which schools and teachers can collect and respond to a wide variety of useful information on their performance may be a better indicator of how well they will do in the future than the extent of improvement in their test scores over a short period of time. Thus, in order to support the long-term development of schools, mechanisms have to be created that make it possible for schools to get useful feedback without establishing rewards for quick but superficial improvements or providing sanctions and punishments for predictable disruptions and setbacks.


Fourth, while reformers increasingly emphasize the need to garner commitments from districts to establish policies that support particular reforms, it may be more important to help schools secure the long-term funding that can sustain careful and deep exploration of new ideas. For example, reform efforts like ATLAS might be more successful if they can develop plans and mechanisms for generating the local income that can underwrite the work of the schools over the long term; in turn, funders like NASDC might be more successful if they can make commitments to sustain the work of promising reform efforts for five or ten years. Without such long-term support, schools and reform efforts have little protection from the constantly changing pressures of turbulent environments.


In their own way, the schools in Gorham may have been able to address many of these issues, which may help explain why they were generally regarded as having been able to move more quickly in developing innovative practices than the ATLAS schools in Norfolk and Prince George's County. For example, some members of the ATLAS partner organizations had worked with some schools and district administrators in Norfolk and Prince George's County before the ATLAS Project began, but most of the schools selected to work with ATLAS had not previously worked with any of the partner organizations. In contrast, many members of the ATLAS schools in Gorham had already worked with members of one or more of the partner organizations. As a result, they had some of the initial knowledge that they needed in order to begin a successful exploration of the ATLAS approach. Furthermore, the superintendent and site members in Gorham were particularly excited about joining ATLAS because they viewed it as a way to deepen innovations already being explored rather than as another reform effort that they had to buy into.


In addition, turnover may not have been as much of a problem in Gorham as it was in the other sites. Because of the small size of the district, and the fact that all Gorham schools were ATLAS schools, when staff members did change schools they were often replaced by others who had a similar understanding of the new practices being explored. In this situation, site members who did change schools or positions could bring with them both a basic understanding of ATLAS and relevant new ideas and perspectives.


In terms of feedback, while Gorham was subjected to many of the same external pressures as the other sites, within the district the superintendent created an Extended Leadership Team, which consisted of himself, the ATLAS site liaisons, the school principals, and a number of teachers. Their specific function was to share information about the progress of the work. They met regularly in four-hour seminars to hear about and reflect on the work being done in the schools. They then used that information to provide informal feedback to other district and school staff and to help them set priorities and make decisions about plans for the following year. As a consequence, that feedback informed the development of policies, public statements, and the work in the schools. At the same time, because all the schools in Gorham were involved in the ATLAS endeavor, that feedback could be received and acted on without fear of comparisons with other schools in the district that might be using more conventional approaches. Finally, the superintendent viewed building public support for ATLAS and the long-term development of the Gorham schools as synonymous, and he actively worked to preserve the budget for ATLAS-related activities and sustain those activities as funds from ATLAS and NASDC began to dry up.


In many ways, the initiatives of ATLAS and NASDC have begun to move in some of these directions. In recent years, ATLAS has developed a process called "charting the course," in which school districts and schools spend an entire year exploring their assets and needs, learning about ATLAS-related ideas, and determining whether or not ATLAS may be a productive approach for them. In addition, ATLAS has supported the development of teacher study groups, which are designed to provide school staff with a safe environment in which they can gather information and share feedback on how their classroom improvement efforts are faring. During the latter years of funding, NASDC has also provided resources and consultation to help design teams and sites obtain the funding and the commitments needed to support long-term improvement efforts. Whether these efforts will be sufficient to create conditions that can support some exploration of innovations in more than a handful of schools over a sustained period of time remains to be seen.


Regardless, this analysis of the first four years of ATLAS and NASDC suggests that the question that began this exploration (What does it take to break the mold?) may not be the right question at all. Instead, we may need to search for the proper balance between exploiting practices that have been effective in the past and exploring new ideas that could lead to even greater success in the future. In education, and in reform, the key question may be how to support innovations and at the same time safeguard the rights and opportunities of every student.


The development of the ideas in this paper was supported by the generous contributions of the Spencer Foundation, the John D. and Catherine T. MacArthur Foundation, and the Rockefeller Foundation. I would like to thank James March, Ellen Lagemann, Karen Hammerness, Howard Gardner, Donna Muncey, Noel White, and an anonymous reviewer for their comments on an earlier draft of this article. The author is solely responsible for the content presented here.

REFERENCES


Arthur, B. (1988). Self-reinforcing mechanisms in economics. In P. Anderson, K. Arrow, & D. Pines (Eds.), The economy as an evolving complex system. Reading, MA: Addison-Wesley.


ATLAS Communities (1992, February). ATLAS Communities (Proposal to New American Schools Development Corporation). Providence, RI: Author.


Bodilly, S. (1996). Lessons from New American Schools Development Corporation's demonstration phase. Santa Monica, CA: RAND.


Bodilly, S. (1998). Lessons from New American Schools' scale-up phase: Prospects for bringing designs to multiple schools. Santa Monica, CA: RAND.


Bodilly, S., Purnell, S., Ramsey, K., & Smith, C. (1995). Designing New American Schools: Baseline observations on nine design teams. Santa Monica, CA: RAND.


Cohen, W., & Leventhal, D. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35(1), 128–152.


Cuban, L. (1984). How teachers taught: Constancy and change in American classrooms, 1890–1990. New York: Teachers College Press.


Cushman, K. (1996). Looking collaboratively at student work: An essential toolkit. Horace, 13(2).


Elmore, R. (1996). Getting to scale with good educational practice. Harvard Educational Review, 66(1), 1–26.


Fashola, O. S., & Slavin, R. (1998). Schoolwide reform models: What works? Phi Delta Kappan, 79(5), 370–379.


Fullan, M. (1999). Change forces: The sequel. London: Falmer Press.


Garud, R., Nayyar, P., & Shapira, Z. (1997). Technological innovation: Oversights and foresights. New York: Cambridge University Press.


Glennan, T. K. (1998). New American Schools after six years. Santa Monica, CA: RAND.


Hatch, T. (1998). The differences in theory that matter in the practice of school improvement. American Educational Research Journal, 35(1), 3–32.


Hatch, T. (1999). What does it take to go to scale? Reflections on the promise and the perils of large-scale school reform. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.


Hatch, T., & White, N. (1997). The raw materials of educational reform: Rethinking the knowledge of school improvement. Unpublished paper, ATLAS Seminar, Providence, RI.


Herriott, S., Leventhal, D., & March, J. (1985). Learning from experience in organizations. American Economic Review, 75, 298–302.


Huberman, M., & Miles, M. (1984). Innovation up close. New York: Plenum Press.


Jelinek, M. (1997). Organizational entrepreneurship in mature-industry firms: Foresight, oversight, and invisibility. In R. Garud, P. Nayyar, & Z. Shapira (Eds.), Technological innovation: Oversights and foresights (pp. 181–213). New York: Cambridge University Press.


Kearns, D., & Anderson, J. (1996). Sharing the vision: Creating new American schools. In S. Stringfield, S. Ross, & L. Smith (Eds.), The New American Schools Development Corporation designs (pp. 9–23). Mahwah, NJ: Erlbaum.


Leavitt, B., & March, J. (1988). Organizational learning. Annual Review of Sociology, 14, 319–340.


Leonard-Barton, D. (1992). Core capabilities and core rigidities: A paradox in managing new product development. Strategic Management Journal, 13, 111–125.


Leventhal, D. (1997). Three faces of organizational learning: Wisdom, inertia, and discovery. In R. Garud, P. Nayyar, & Z. Shapira (Eds.), Technological innovation: Oversights and foresights (pp. 167–180). New York: Cambridge University Press.


Leventhal, D., & March, J. (1981). A model of adaptive organizational search. Journal of Economic Behavior and Organization, 2, 307–333.


Leventhal, D., & March, J. (1993). The myopia of learning. Strategic Management Journal, 14, 95–112.


Mansfield, E. (1981). How economists see R&D. Harvard Business Review, 59, 98–106.


March, J. (1981). Footnotes to organizational change. Administrative Science Quarterly, 26, 563–577.


March, J. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.


March, J. (1995). The future, disposable organizations and the rigidities of imagination. Organization, 2, 427–440.


McDonald, J. (1992). Steps in planning backwards: Early lessons from the schools. Providence, RI: Coalition of Essential Schools.


McDonald, J., Hatch, T., Kirby, E., Ames, N., Haynes, N., & Joyner, E. (1999). School reform behind the scenes. New York: Teachers College Press.


Mirel, J. (1994). School reform unplugged: The Bensenville New American School project, 1991–1993. American Educational Research Journal, 31, 481–518.


Muncey, D., & McQuillan, P. (1996). Reform and resistance in schools and classrooms: An ethnographic view of the Coalition of Essential Schools. New Haven: Yale University Press.


New American Schools Development Corporation (NASDC). (1991). Request for proposals. Washington, DC: Author.


Nunnery, J., Bol, L., Dietrich, A., Rich, L., Kelly, S., Hacker, D., & Sterbin, A. (1997). Teachers' initial reactions to their pre-implementation preparation and early restructuring experiences. School Effectiveness and School Improvement, 8(1), 72–94.


RAND. (1995). Unpublished report.


Rosenberg, N. (1994). Exploring the black box: Technology, economics, and history. New York: Cambridge University Press.


Seidel, S., Hatch, T., & Blythe, T. (1997). How much is too little? Rethinking the minimal conditions for changing classroom practice. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.


Seidel, S., Walters, J., Kirby, E., Olff, N., Powell, K., Scripp, L., & Veenema, S. (1997). Portfolio practices: Thinking through the assessment of children's work. Washington, DC: NEA.


Sizer, T. (1984). Horace's compromise: The dilemma of the American high school. Boston: Houghton Mifflin.


Sizer, T. (1996). Horace's hope. New York: Houghton Mifflin.


Smith, L., Maxwell, S., Lowther, D., Hacker, D., Bol, L., & Nunnery, J. (1997). Activities in schools and programs experiencing the most, and least, early implementation successes. School Effectiveness and School Improvement, 8, 125–150.


Squires, D., & Kranyik, R. (1997). Linking school-based governance and instructional change: A case study of two ATLAS schools. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.


Stringfield, S., Datnow, A., Ross, S., & Snively, F. (1998). Scaling up school restructuring in multicultural, multilingual contexts: Early observations from Sunland County. Education and Urban Society, 30(3), 326–357.


Stringfield, S., & Ross, S. (1997). A reflection at mile three of a marathon: The Memphis restructuring initiative in mid-stride. School Effectiveness and School Improvement, 8(1), 151–161.


Tyack, D., & Cuban, L. (1995). Tinkering toward Utopia. Cambridge, MA: Harvard University Press.


White, N., Muncey, D., & Fanning, K. (1999). Final report of the ATLAS Ethnography Team. Paper prepared for the ATLAS Seminar, Cambridge, MA.


THOMAS HATCH is a senior scholar at the Carnegie Foundation for the Advancement of Teaching. His research focuses on normal and exceptional development and the contexts and conditions that support them. He currently serves as a codirector of the program for K-12 teachers and teacher educators at the Carnegie Academy for the Scholarship of Teaching and Learning and leads projects exploring the opportunities and obstacles of school reform.




Cite This Article as: Teachers College Record, Volume 102, Number 3, 2000, pp. 561-589.
https://www.tcrecord.org ID Number: 10498