Evidence, Interpretation, and Persuasion: Instructional Decision Making at the District Central Office


by Cynthia E. Coburn, Judith Toure & Mika Yamashita - 2009

Background/Context: Calls for evidence-based decision making have become increasingly prominent on the educational landscape. School district central offices increasingly experience these demands. Yet there are few empirical studies of evidence use at the district level. Furthermore, research on evidence use among policy makers in noneducation settings raises questions about the models of decision making promoted by evidence-use policies, suggesting that they do not take into account key features of the interpretive process or the organizational conditions that shape how decision making unfolds.

Purpose/Objective/Research Questions/Focus of Study: The central premise of this article is that only by understanding the patterns by which personnel in school district central offices actually use information, and the factors that affect this use, can we begin to understand the promise and possibilities of evidence use. We ask: What is the role of evidence in instructional decision making at the central office level? What factors shape how decision processes unfold?

Research Design: We draw on data from a longitudinal case study of one midsize urban district, which we followed from 2002 to 2005. We relied on in-depth interviewing, sustained observation, and document analysis. We identified 23 decisions related to instruction that were captured in our data over the 3 years and for which we had at least three independent sources of information. We analyzed each decision using a coding scheme that was developed from prior research and theory and elaborated through iterative coding. We then used matrices to compare across decisions and to surface and investigate emerging patterns.

Conclusions/Recommendations: We argue that decision making in complex organizations like school districts is centrally about interpretation, argumentation, and persuasion. These processes are shaped in crucial ways by preexisting working knowledge and practices that guide how people come to understand the nature of problems and possible avenues for solutions. They are also influenced by organizational and political factors, including the organizational structure of the central office, resource constraints, and leadership turnover. We close by suggesting implications for efforts to foster substantive and productive use of evidence at the central office level.



Calls for evidence-based decision making have become increasingly prominent on the educational landscape. In a high-profile example of this trend, No Child Left Behind requires individuals at multiple levels of the system to use outcome data to guide decisions about programs and policies and select instructional approaches that are rooted in “scientifically based” research. Calls for evidence use are rooted in the common sense idea—borrowed from the business world—that people will make better, more efficient, more successful decisions if they carefully weigh the available evidence on competing options and select the one that shows the best likelihood of maximizing a valued outcome. Evidence use is also seen as a way to make policy making more rational, thus taking educational decision making out of the realm of politics (Massell, 2001; Slavin, 1989) and avoiding fads and cycles of rapid reform (Coalition for Evidence-Based Policy, 2002; Slavin, 1989, 1998).


School district central offices sit at the nexus of these demands for evidence use. They are asked to use evidence to decide how to participate in state and federal programs, how to manage their own organization, and what to require of and provide to schools and teachers. Yet, to date, there are few empirical studies of evidence use at the district level (Coburn, Honig, & Stein, in press). Furthermore, research on evidence use among policy makers in noneducation settings raises questions about the models of decision making promoted by evidence-use policies (Feldman & March, 1988; Kennedy, 1982; March, 1988; Weiss & Bucuvalas, 1980). More specifically, this research suggests that the idealized portrait of decision making upon which the policies rest does not take into account key features of the nature of interpretation that sit at the foundation of decision making. Nor does it address the kind of organizational conditions present in public bureaucracies, like school districts, that shape how decision making unfolds.


The central premise of this article is that only by understanding the patterns by which personnel in school district central offices actually use information, and the factors that affect this use, can we begin to understand the promise and possibilities of evidence use. To that end, we draw on a conceptual framework rooted in sensemaking theory and frame analysis to analyze the role of evidence in instructional decision making in one urban district over 3 years. Sensemaking theory and frame analysis are useful for analyzing evidence use in district decision making because they illuminate the underlying processes of interpretation that animate evidence use and provide insight into the organizational factors that shape it. We focus on decision making around instruction because curriculum and instruction have long been at the center of calls for evidence-based practice (see, for example, Slavin 1989, and, more recently, U.S. Department of Education, 2002), and there is an emerging consensus that school district central offices can and should play a central role in instructional improvement (Elmore & Burney, 1997; Hightower, Knapp, Marsh, & McLaughlin, 2002; Knapp, Copland, & Talbert, 2003).


Over the course of 3 years, we observed planning meetings and professional development, interviewed district personnel at multiple levels, and collected extensive internal and external documents. We identified 45 decisions related to instruction made by the district during this time and conducted in-depth analysis of 23 of them. On the basis of this analysis, we argue that decision making in complex organizations like school districts is centrally about interpretation, argumentation, and persuasion. Evidence and other forms of information do not provide answers, but rather provide the grist for meaning making and interpretation. These processes are shaped in crucial ways by preexisting beliefs and practices that guide how people come to understand the nature of a problem and possible avenues for solutions. The ways in which district decision makers negotiate differences in interpretations in turn are influenced by the organizational structure of the district, content knowledge of key district leaders, resource constraints, and leadership turnover. We close by suggesting implications from this analysis for efforts to foster substantive and productive use of evidence at the central office level.


EVIDENCE AND INSTRUCTIONAL DECISION MAKING


When policy makers exhort districts to use evidence in their decision making, they often envision that evidence will play an instrumental role. That is, they imagine that district administrators will use evidence directly and centrally to make decisions related to policy or practice (Johnson, 1999; Weiss, 1980). Weiss described the image of instrumental use in the following way: “A problem exists; information or understanding is lacking either to generate a solution to the problem or to select among alternative solutions; research [or other forms of evidence] provides the missing knowledge; a solution is reached” (pp. 11–12). This is the image of decision making embedded in recent legislation, including No Child Left Behind. And it is the predominant image promoted by many handbooks and guides for district-level decision making (Celio & Harvey, 2005; Cooley, 1983; Ligon & Jackson, 1986; Vrabel, 1999).


Yet those who study decision making in public policy settings suggest that this vision of information use bears little resemblance to what can or does happen in practice. First, the instrumental model assumes that there is a shared definition of the problem to solve. However, scholars of public policy suggest that, given the complexity and the multidimensionality of the social world, there are always multiple potential causes for problematic situations (Stone, 1988). Any representation of the cause—any way of defining the problem—inevitably highlights some aspects of the situation and deemphasizes or ignores others (Weiss, 1989). How a particular policy problem is defined or “framed” is crucial, for it assigns responsibility and creates rationales that authorize some policy solutions and not others (Benford & Snow, 2000; Schneider & Ingram, 1993; Stone; Weiss).


Second, the instrumental model of decision making assumes that evidence is neutral and that its meaning and implications are self-evident. Yet existing research shows that evidence does not independently inform decision making because it is always mediated by interpretation (Kennedy, 1982). There are often multiple legitimate interpretations of the meaning and implications of a given piece of information (Johnson, 1999), which can lead to conflict and controversy. Furthermore, available evidence often does not point directly to an appropriate solution. Rather, there is a space between a given finding and appropriate action, and again, interpretive processes and competing agendas can play a central role in moving from information to proposals for solutions. Thus, evidence does not provide the answer; it provides the grist for argumentation and meaning making (Johnson). All of this suggests the central role of interpretation and negotiation in making meaning of evidence. And it suggests that persuasion plays a key role in defining the problem and using evidence to deliberate about policy solutions.


To understand the ways in which district-level decision makers draw on and interpret evidence in their deliberation about instructional issues, we turn to two distinct yet complementary theoretical traditions: frame analysis and sensemaking theory. Frame analysis is an approach to studying the process by which problem definitions emerge and change in social interaction and negotiation. Frame analysts see problem frames as interpretive devices that condense complex social situations, creating a way to understand the “raw data of experience” (Babb, 1996, p. 1034). Social situations are complex and multifaceted. The act of framing something as a problem involves noticing, punctuating, and organizing a vast array of information into an explanation that renders the complexity meaningful.


Frame analysts identify two kinds of framing activities: diagnostic and prognostic (Snow & Benford, 1992). Diagnostic framing involves defining problems and attributing blame. It is important because it identifies some individuals or groups as responsible for the problem, and thus identifies targets for change (Cress & Snow, 2000; Stone, 1988). Prognostic framing involves articulating a proposed solution to the problem, thus putting forth particular goals and suggesting tactics for achieving those goals (Benford & Snow, 2000; Cress & Snow).


Diagnostic and prognostic framing are not only interpretive, they are often strategic. People define problems strategically to shape others’ meaning-making processes in an effort to mobilize them to take action (Fiss & Hirsch, 2005; Fligstein, 2001). Individuals and groups attempt to construct ways of framing the problem that create a “deep responsive chord” (Binder, 2002, p. 220) with others. This resonance motivates others to act—or, in the case of decision making, to join in support of a policy solution. However, framing can be contested. Ways of framing the problem are challenged as others put forth alternative portrayals of the situation or alternative paths to pursue (called counterframes), often with alternative implications for roles, responsibilities, and resources (Benford & Snow, 2000; Stone, 1988).


Evidence and other forms of information play a central role in both diagnostic and prognostic framing, providing the “raw material used by policy actors to frame, support, oppose, and justify policy arguments in various decision-making arenas” (Johnson, 1999, p. 24). Policy makers use evidence as a way to generate resonance for a particular diagnostic or prognostic frame. Yet, although frame analysts note its importance, they have yet to investigate the role that data, research, and other forms of evidence play in the problem-framing process.


For this, we turn to sensemaking theory. Sensemaking theory provides an account of how people come to understand and enact external cues—evidence in this case—that are available to them in their environment. Sensemaking theory is rooted in the insight that the meaning of information or events is not given, but is inherently problematic; individuals and groups must actively construct understandings and interpretations. People construct these understandings by placing new information into their preexisting cognitive frameworks, also called “working knowledge” by some theorists (Kennedy, 1982; Porac, Thomas, & Baden-Fuller, 1989; Vaughan, 1996; Weick, 1995).1 Thus, individuals and groups come to understand new information through the lens of their working knowledge, often significantly reconstructing it along the way.


Prior research suggests that working knowledge shapes how individuals interpret evidence as part of decision processes at the district level. First, we know that central office administrators tend to search for and pay greater attention to evidence that resembles what they already know and expect to find, and to overlook information that challenges their beliefs (David, 1981; Hannaway, 1989; Kennedy, 1982; Spillane, 2000). Working knowledge also influences how people interpret evidence. Not only can people with different beliefs interpret the same evidence in contrasting ways (Coburn, 2001), but there is also a strong tendency for district administrators (and others) to discount evidence when it does not support their preexisting beliefs or actions (Birkeland, Murphy-Graham, & Weiss, 2005; Coburn & Talbert, 2006; David, 1981; Kennedy).


Sensemaking theorists suggest that these interpretive processes are fundamentally social. People make meaning of new information in ways that are shaped by interaction, signaling, and negotiation (Porac et al., 1989; Vaughan, 1996; Weick, 1995). Shared understandings that guide interpretation emerge over time as subgroups in an organization work together. Both powerful and, at times, quite subtle, shared understandings become part of the working knowledge that can shape how individuals and groups interpret the meaning and implications of various forms of evidence (Coburn & Talbert, 2006; Kennedy, 1982; Spillane, 1998).


Finally, there is preliminary evidence that both sensemaking and framing processes are shaped by the organizational and political context within which they unfold. Organizational structure can shape working knowledge and shared understandings by influencing the patterns of social interaction through which they develop (Coburn & Talbert, 2006). Organizational politics can play a role as differences in interpretation become the subject of ongoing power struggles when people in different parts of the organization seek to promote a particular interpretation by strategically employing their resources in support of their position (Starbuck & Milliken, 1988; Weick, 1995).


By bringing sensemaking theory and frame analysis to bear on evidence use in school districts, we provide a way to understand the role of evidence in deliberation about policy. By moving away from normative descriptions of how district leaders should use evidence, we provide one of the few empirically based analyses of what actually happens as district leaders engage with evidence in the course of their everyday work. In so doing, we provide insight into the processes of interpretation and persuasion that sit at the heart of the endeavor, and the organizational and political conditions that shape how these interpretive processes unfold.


METHODS


To understand the role of evidence in instructional decision making at the central office level, we draw on data from a longitudinal case study of one midsize urban school district. We chose case study methodology because it is particularly useful for investigating processes as they unfold over time (Merriam, 1998) and for uncovering the interaction among significant factors that influence the phenomenon in complex social settings (Yin, 1984). Focusing on a single case allows for the depth of observation necessary to track decision making as it proceeded through multiple levels of the district. The in-depth observation made possible by the single case provides the opportunity to generate new hypotheses or build theory about sets of relationships that would otherwise have remained invisible (Hartley, 1994; Merriam, 1998).


At the time of the study, the district served approximately 50,000 students, the majority of whom were low-income students of color and one fourth of whom were classified as English language learners (ELLs). Data were collected across 3 years, from fall 2002 until spring 2005. Collecting data over time provided the ability to trace trajectories of decision making across multiple years. Existing research suggests that decision making is not an event, but rather a process that stretches out over time (Weiss, 1980). Yet, nearly all prior studies of decision making in school districts are cross-sectional and thus provide only a snapshot of the state of decision making at a given moment, failing to provide insight into the dynamics of decision making over time.


We relied on in-depth interviewing (Spradley, 1979), sustained observation (Barley, 1990), and document analysis. Over the course of 3 years, we conducted 71 interviews with 38 members of the central office and 3 union officials. We also conducted 31 interviews with 17 external consultants who were working with the district in different capacities during the time of the study. All interviews were audiotaped and transcribed. We supplemented the interviews with observations of 33 planning meetings. These meetings were at multiple levels of the central office, from executive leadership meetings, to planning meetings at the department level, to design meetings between district staff and external consultants. We recorded observations with detailed field notes, but on some occasions we also taped and transcribed key meetings. We also observed 31 days of professional development for teachers and school leaders. This gave us insight into both the fruits of the instructional decision making and the ways in which experience doing the professional development fed back into ongoing decision making. Finally, we collected and analyzed numerous documents related to topics that were the focus of instructional decision making during the time of our study. These documents include minutes and agendas of meetings, draft and final copies of policy and planning documents, as well as written feedback provided on draft documents, and public communication regarding decisions (newsletters, press releases, informational sheets, and so on).


All data were entered into NUD*IST, a software program for qualitative data analysis. We read through all the data, identifying decisions related to issues of instruction that occurred during the time period of the study. Our identification of decisions was guided by prior research by Carol Weiss on decision making in public bureaucracies (Weiss, 1980). Weiss argued that many decisions are not made through formal deliberations in which key decision makers are at the table, considering alternatives and setting policy. Rather, policy often emerges through a series of conversations and actions in which ways of thinking about problems emerge, and small steps set the organization on a particular course, closing off some potential avenues for action and narrowing the range of potential solutions that it is possible to envision or pursue. In this way, Weiss argued, decisions “accrete” (p. 188) over time as these small steps add up to a policy decision regardless of whether formal action is taken. Thus, in identifying key instructional decisions in the district, we not only looked at the policy issues that were explicitly addressed and voted on in cabinet-level meetings in the district, but we also sought to identify instructional issues that were discussed and debated outside of these formal meetings that nonetheless shaped district action and instructional guidance to schools. We limited our analysis to decisions that had organizational implications (i.e., the way that the district decided to organize professional development) rather than individual ones (i.e., how an individual staff person is going to work with a particular school). We also excluded decisions about staffing and hiring.


Ultimately, we identified 45 decisions related to instruction that were captured in our data over the 3 years. These included decisions about curriculum adoptions, the design of coaching, the focus of professional development, the structure of compensation for professional development, homework policy, and the development of curriculum frameworks in math and literacy, among others. However, we were concerned about potential bias introduced if the majority of our information about a decision came from one or two people. Thus, we limited the second stage of data analysis to 23 decisions for which we had at least three independent sources of data.


We created longitudinal records of all data related to each of the 23 decisions. We analyzed the dynamics of diagnostic and prognostic framing in each decision by identifying specific frames that individuals invoked, analyzing the process of negotiation among and between different frames and analyzing the relationship between this process and decision outcomes. Based on prior theory, we were particularly interested in two aspects of this process: (1) the role of individual worldviews and shared understandings, and (2) the role of evidence. To investigate the former, we analyzed interview data to identify beliefs and shared understandings related to issues under debate. For example, we analyzed individuals’ beliefs about high-quality reading instruction for those involved in decision trajectories related to literacy policy. We then analyzed the relationship between these beliefs and how individuals interpreted evidence, framed the problem or solution, and responded to others’ framing activities. To investigate the role of evidence, we analyzed each decision according to what evidence was invoked, where it was invoked in the process, and how the evidence was used. During the course of this analysis, we noticed that many of the ways that evidence was used corresponded to categories of evidence use highlighted in previous scholarship, including instrumental use, conceptual use, and symbolic use (Feldman & March, 1988; Weiss, 1980; Weiss & Bucuvalas, 1980). We created operational definitions for each of these categories of evidence use and recoded each decision using these definitions. Through this process, we uncovered another use of evidence—evidence as signal. We created an operational definition and recoded our data using this expanded typology (definitions for each kind of evidence use are included in the findings).


Finally, we investigated the factors that influenced how the decision process unfolded. We used an emergent process to identify and code for factors. During initial analysis, several factors emerged as potentially influential, including organizational structure, content knowledge, resource constraints, and the role of the superintendent. We used iterative coding to more sharply define the factors (Miles & Huberman, 1994; Strauss & Corbin, 1990). For each factor, we created matrix displays (Miles & Huberman) to compare all 23 decisions in order to investigate patterns and further refine our understanding.


Several methodological features of the study ensure that the patterns reported here represent patterns present in the research site. These strategies include intensive immersion at the research site (Eisenhart & Howe, 1992), systematic sampling of decisions (Miles & Huberman, 1994), efforts to explore countervailing evidence (Miles & Huberman), systematic coding of data (Miles & Huberman; Strauss & Corbin, 1990), and sharing findings with key informants (both district personnel and external consultants involved in decision making) and incorporating their insights into the final analysis (Eisenhart & Howe; Miles & Huberman).


EVIDENCE, INTERPRETATION, AND PERSUASION


In many ways, the district in this study possessed the key elements that advocates for evidence use tell us are necessary for evidence-based decision making. It had a strong research office and a well-developed data system that allowed decision makers to track a range of school-level and student-level indicators longitudinally. It also had a series of superintendents who were committed to evidence-based decision making. And indeed, staff used evidence of various sorts in all 23 of the decisions we analyzed. However, an up-close look at the process of decision making reveals that evidence played a much less straightforward role than policy makers typically think. To investigate this role, we begin by describing the process of decision making in the district, discussing first the dynamics by which problems were defined and then moving on to debates about solutions. We then analyze the conditions that shaped when and how evidence was used in the framing process.


FRAMING THE PROBLEM


As suggested by frame theorists, diagnostic framing was a crucial part of the decision-making process. How people in the district framed the relevant problem to be solved created parameters within which information was sought and solutions were deliberated. Yet although problem framing was often triggered by evidence, problem definitions were mediated by interpretation and negotiation. Research and other forms of evidence also played a powerful, if indirect, role by shaping the working knowledge and shared understandings that district personnel brought with them as they interpreted evidence and made attributions about the cause of the problem. We illustrate these points by discussing one decision trajectory: the decision to adopt a supplementary mathematics curriculum in the elementary grades. We then analyze this example and situate it in our broader sample of decisions.


Vignette 1: Adopting a Supplementary Mathematics Program for Elementary Schools


Early in our study, the district was grappling with persistently low test scores in mathematics at the middle school level, and community pressure to do something about them. Key decision makers interpreted the implications of the test scores in very different ways as they framed the problem of low mathematics achievement. Some district decision makers raised questions about the appropriateness of the adopted elementary mathematics textbook, suggesting that the reform mathematics text did not provide enough attention to basic skills, a weakness that subsequently showed up in test scores at the upper elementary and middle school levels. (Elementary mathematics scores were above the national average.) For example, one district leader explained,


When I was analyzing the test score data of middle school students, they were not doing well. [The director of research] said they weren’t doing well because they didn’t have the basic facts and so they didn’t have the computation skills. So, I thought, well elementary needs to go back. We need to do some more computation.


Others—particularly members of the mathematics leadership in the district—did not see the problem as a function of the elementary curriculum, but rather argued that teachers were not adequately implementing the curriculum because of a lack of appropriate professional development: “You cannot blame the curriculum until teachers are teaching it! And you cannot say it’s the curriculum when you haven’t given teachers enough professional development to teach well.”


District leaders discussed the nature of the problem in a series of meetings with the superintendent. Individuals made their case, using both rhetorical strategies and at times offering various forms of evidence to support their view. For example, one strong proponent of the view that low test scores were due to lack of basic skills in the elementary textbook pointed to the fact that elementary schools in a neighboring suburban school district supplemented the same textbook series with a curriculum focused on basic skills, and their scores were much higher than in this district: “I had teachers [from a neighboring district] who were actually using [a supplementary curriculum] come out . . . so we could ask questions about how it helped them with their students.” Others pointed to limited professional development offered by the district on the curriculum, and anecdotal evidence that teachers were not using the sections of the curriculum meant to foster computational fluency, to support the view that the problem was poor-quality implementation of the curriculum. Ultimately, after conferring with colleagues in another district about how to handle the controversy, the superintendent defined the problem as lack of computation in the elementary curriculum but also affirmed the value of the curriculum for its emphasis on conceptual approaches to mathematics. In an open letter to the community and the district, she wrote,


Students need to memorize basic facts, but they also need to be able to use math in context. What all students need and what we all must provide is a balance between conceptual understanding, problem solving ability, and basic computational skills or procedures…In the struggle to provide [more attention to basic skills], the district mathematics department will work with principals and teachers to identify cost-effective materials and time-effective strategies to provide a balanced approach.


This definition of the problem subsequently guided future decision making. Talk about issues related to middle school curriculum or instruction was off the table. Instead, discussion turned to the adoption of a supplementary curriculum at the elementary level. This involved a spirited debate about the advantages of specific supplementary curricula over others, with people in different parts of the district invoking evidence to advocate for a particular curriculum and others critiquing the nature and quality of the research studies. Finally, the assistant superintendent in charge of elementary schools adopted one of the supplementary curricula for elementary schools and provided professional development to fourth- and fifth-grade teachers for using it in concert with the textbook series.


Analysis of problem framing. Evidence played an important but indirect role in diagnostic framing. In nearly all the decisions, evidence in some form—student outcomes, feedback from teachers or the community, or data gathered through personal observation—served as the jumping-off point for problem framing. In the case of the supplementary mathematics adoption, low math scores in the middle school served as a signal that something was amiss. Yet the move from evidence to problem definition was not a direct one. Rather, it was mediated by interpretation and negotiation. (See Appendix A for the dynamics of diagnostic framing.)


Individuals often interpreted evidence in contrasting ways. For example, in the decision about supplementary mathematics, the low math scores in middle school meant different things to different people in the district. Some interpreted them as an indication of the lack of basic skills in the elementary curriculum, others as poor implementation of the existing curriculum. Interestingly, no one publicly identified the problem of low test scores in middle school as an indication of a problem in middle school instruction or the middle school curriculum. There is quite a large interpretive space between low test scores in the middle grades and diagnosing the problem in any of these ways. People in the district did not, by and large, see it as problematic to make this interpretive leap. Nor did they have systematic data on teachers’ classroom practice and levels of implementation that would help them gain a finer grained understanding of why the scores were so low. As shown in Appendix A, interpretive space was a common feature of the problem-framing process. In other instructional decisions, for example, individuals in the district interpreted these same middle school math scores as an indication of the lack of differentiated instruction and the lack of formative assessment (both during decision making about summer professional development), and lack of congruence in district policy (in the decision to pursue a mathematics framework in the district).


As suggested by sensemaking theory, how individuals interpreted evidence was often rooted in their preexisting worldviews and shared understandings. We determined that preexisting worldviews played a role if there was a link between an individual’s beliefs about instruction, which we obtained during interviews, and the problem frames that individuals put forth in conversations with their colleagues. In the supplementary mathematics curriculum decision, for example, individuals’ beliefs about how children learn mathematics and appropriate instructional approaches shaped their interpretation of the low middle school scores. There were many in the district who favored direct, sequential approaches to instruction and a focus on basic skills. One district employee involved in the controversy described her vision of the nature of mathematics and high-quality mathematics instruction in the following way: “Math is math. It’s hierarchical and one thing after another. The [curriculum department] is into explanation. I’m not saying it’s not good, but [I’m saying] that direct instruction is the way math [should be] taught.” Many of the individuals in the district with these beliefs had long criticized the textbook series—a National Science Foundation-sponsored curriculum designed to embody the National Council of Teachers of Mathematics (NCTM) standards—as not paying enough attention to basic skills. Thus, it is perhaps not surprising that this and other individuals explained the low test scores at the middle school level by blaming the elementary curriculum.


Others in the district saw high-quality mathematics instruction as rooted in higher order understanding supported by computational fluency. For example, one district leader described her stance on mathematics instruction:


I don’t believe you can do just computation, you know? You’ve got to do that, but you’ve also got to do it in a larger concept of math concepts so that kids move past just the basic manipulation of numbers into a better understanding of math itself. And you’re not going to get that [with a supplementary program].


Those who saw the central purpose of mathematics instruction as developing students’ reasoning and problem-solving abilities were generally quite supportive of the existing mathematics curriculum. It is perhaps also not surprising that these individuals saw the low test scores in middle school not as a problem with the curriculum but as teachers’ poor implementation of it.


When there were conflicting ways of framing the problem (12 out of 23 decisions), district personnel put forth and modified problem definitions, using different arguments to persuade one another of the wisdom of a particular problem frame. In the supplementary mathematics decision, administrators made their case individually to the superintendent, deploying rhetorical strategies and offering various forms of evidence to support their view. Not all decisions involved conflicting ways of framing the problem, however. In fact, in 10 of 23 decisions, shared understandings enabled district leaders to move quickly from evidence to attribution without discussion. Although there was no conflict in these cases, there was also no impetus to probe whether the diagnosis of the problem was supported by evidence.


Evidence did play one final role in the problem-framing process. District personnel’s engagement with research literature at times influenced how they framed the problem by influencing the working knowledge through which they noticed and interpreted information. Here, rather than using research directly to provide evidence for a solution, engagement with research instead influenced the working knowledge that individuals in the district brought to bear on understanding a situation and interpreting it as problematic. Weiss and her colleagues (Weiss, 1980; Weiss & Bucuvalas, 1980) call this phenomenon conceptual use of research. We determined that engagement with research informed problem definition when we observed district personnel reading or talking in depth about research literature and then saw the language, conceptual categories, or ideas from the research play a role in how problems were defined.2


Although we did not see evidence of conceptual use of research in the district’s decision to adopt the supplementary mathematics curriculum, we did see conceptual use of evidence in other decisions in our sample. For example, the district was working with an external consultant who was helping them redesign their system of professional development. On many occasions, this consultant spoke to the district about the qualities of effective professional development and brought relevant research literature to their attention. This research emphasized the importance of professional development that was situated at the school site and in the issues, questions, and curriculum that teachers were grappling with. By the second year of working with this consultant, which was also the second year of our study, the district leadership became concerned that their mechanism for providing ongoing professional development in mathematics and literacy (a series of workshops throughout the year intended to provide follow-up for intensive institutes in the summer) was not situated at the school site. This represented an important change for the district. A year earlier, when we interviewed district personnel about what they saw as high-quality professional development, situated professional development did not emerge as a key criterion. But by the second year of our study, there was consensus that it was problematic that professional development was not situated at the school site. In response to this perceived problem, leadership reconfigured the district calendar so that students arrived at school late four times during the year (which they called “late-start days”) to allow for time during the school day for ongoing, situated professional development. Conceptual use of research played a role in problem framing in 7 out of 23 decisions.


Ultimately, how the problem was framed was crucial for how the decision process unfolded. It pointed toward and legitimized some responses and not others, thus shaping the direction of future action. For example, in the case of the supplementary mathematics curriculum, attributing the low test scores in middle school to problems with the elementary curriculum precluded further conversation about the middle school teaching or curriculum. Instead, it narrowed the discussion to issues of computation at the elementary level in general, and the adoption of a supplementary curriculum that emphasized basic skills in particular. Thus, framing the problem in this way set the parameters within which the deliberation about solutions unfolded. (See Appendix A for more details on the relationship between how a problem was framed and the continued discussion of solutions.)


FRAMING SOLUTIONS


Just as interpretive processes were central to how problems were framed (diagnostic framing), they also were front and center to discussions about appropriate solutions (prognostic framing). District personnel often had quite different interpretations of appropriate solutions, which led to considerable negotiation and debate. Evidence played an important role in district administrators’ attempts to convince others of appropriate solutions. However, decision makers only rarely used evidence in an instrumental fashion. Furthermore, invoking evidence, even instrumentally, did little to shift decision makers’ preexisting views. Instead, when there were frame conflicts, power and politics came into the equation as the district adopted compromise solutions, orchestrated consensus, or exercised authority to select solutions. We provide evidence for these claims first by providing an example that illustrates the phenomenon and then by situating the example in patterns among our broader sample of decisions.


Vignette 2: Framework for K–5 Literacy


Like many districts, this district experienced a great deal of controversy about appropriate ways to teach children to read. Individuals in different areas of the district central office had different fundamental assumptions about how children learn and about effective approaches to instruction. There was also a great deal of diversity among teachers and schools in instructional approaches. Many in the district saw this state of affairs as problematic. They attributed low reading scores to the lack of a uniform approach to instruction that was effectively supported by the district. In consultation with an external consultant, upper-level decision makers decided that they needed to address this problem by developing a K–12 framework for literacy that would guide future decision making and professional development.


The district had an existing document that provided guidance on reading instruction (referred to internally as “the green book”) alongside the adopted elementary textbook. However, some felt that the document was limited because it focused on specific instructional approaches rather than putting forth research-based information on how children learn to read and then discussing implications for effective instructional approaches. Others criticized the document because it was only K–5 and not K–12. Still others criticized the approach to instruction promoted by the document because it did not provide enough attention to skills development.


To develop the new framework, the curriculum office convened a large group of stakeholders from the district—representatives of special education, the English language learner division, multicultural education, and the literacy team—with university researchers with expertise on literacy. The group read research articles together and discussed them as a way to build shared and research-based understandings of how children learn to read. Although the initial meetings were productive, relations became strained as the group moved toward writing the framework. Different visions of what the framework should be emerged. Some argued that teachers needed a framework that was practical, offering concrete advice about how to teach reading: “We wanted them to have something they could take back to their classroom and use with their kids that would have an immediate benefit. I’m not saying knowing the research isn’t beneficial . . . but it would just have been too, too dry.” Others argued that the framework should be a statement of principles, rooted in research-based understandings of how students learn to read: “[The framework] should be about principles and about why we make [instructional] decisions. . . . It needs to have more frontloading of [research] about proficient readers and what proficient readers are able to do and what teachers need to know to be able to help support the development of proficient readers.”


There was also considerable conflict between the early-grades and secondary literacy groups in the district office about what constitutes high-quality reading instruction. Members of the secondary group felt that the framework should have a greater emphasis on phonics and phonemic awareness: “Teachers need to know and understand that phonemic awareness and phonics builds on into vocabulary and then moves into fluency and comprehension. . . . There are people out there teaching reading and can’t even tell you what phonemic awareness is or what the sounds are.” Members of the elementary group felt that the framework should put forth a strong message that reading is centrally about the construction of meaning. As one explained, “The heart of all of this is and should be around comprehension and how well kids comprehend what they are reading. . . . And, so, if you use that as your guiding role for everything you do in your classroom, it helps focus on what’s important.” These conflicting solutions were exacerbated because district personnel felt torn between the press to develop the framework and other projects that needed their attention. Progress on the framework stalled, started up, and stalled again.


In early spring of the second year of the study—over a year after work on the framework was initiated—the associate superintendent made it clear that the framework needed to be completed by the end of spring. In response, leadership in the curriculum department abandoned the extensive and inclusive process and hired a consultant, a strong reading teacher in the district, and a retired principal to write the framework. They were asked to complete a framework that covered literacy K–5 only and do so in a tight timeframe.


Because of the limited time and because of the desire to build on the work that had been done in the district, the consultant and the curriculum leadership decided to base the new framework on the green book (the district’s existing framework) but add information from a recent document put out by a state task force and the influential National Reading Panel (NRP) report, “Teaching Children to Read: An Evidence-Based Assessment of the Scientific Research Literature on Reading and Its Implications for Reading Instruction” (National Institute of Child Health and Human Development [NICHD], 2000).3 Inclusion of the NRP report was motivated in part by political concerns. The consultant explained, “Like it or not, NCLB is here with us and is with us here big in [the district], so I felt there had to be some explicit attention paid to . . . the phonemic awareness [and] phonics.” Once the consultants drafted the document, it was circulated to representatives of different divisions in the central office (special education, ELL, and so on), a panel of university experts who had been gathered to advise the district on reading instruction, and a focus group of three teachers. In addition, several staff members representing different divisions were asked to write additional sections—for example, one on different learning styles.


Different stakeholders had conflicting recommendations for changes that they wanted to see in the draft framework. There were disagreements about the relative weight given in the document to comprehension (which had a heavy focus) versus phonics and phonemic awareness (which was put at the end of the document and accorded less text); about the research base for the particular approach to reading comprehension that was promoted; about the relative merits of intrinsic versus extrinsic motivation; and about whether ELLs should be instructed in their primary language or English. Participants varied in their strategies for arguing their position. Some cited specific studies or the NRP report to support their arguments, whereas others made broad statements about the research base or cited practitioner resources or programs, which may or may not have strong research undergirding them. For example, one outside expert stated,


The research shows that the biggest predictor of whether ELLs learn to read well in English is whether they can read in their first language. Although not all schools can provide first-language literacy instruction for all students, teachers at those schools that do should be reassured that such instruction is effective.


The district curriculum leaders then made decisions about how to adjudicate the conflicting recommendations. Ultimately, they included nearly all the recommendations about specific content in the document, even when the recommendations conflicted with each other or with the existing text. The curriculum leaders did this by structuring the framework to have a main section with information about a given topic, followed by a section entitled “Differentiate!” which included information about reading instruction for particular groups of students. At times, these sections on differentiation put forth quite different messages about instruction than that in the main text, arguing that a given approach advocated for in the main section did not work for a particular group of students.4 For example, in the section on motivation, the main text stated that teachers should “avoid the use of extrinsic rewards (such as candy) and competition for grades.” However, the section on differentiation that immediately followed directly contradicted this statement, stating that “students may not be intrinsically motivated to read,” and “extrinsic rewards, used appropriately, may be necessary for some students.”


The curriculum department released a draft version of the framework to teachers at a summer professional development workshop. However, the superintendent subsequently had the curriculum department pull the framework back because it was not entirely complete and, some reported, she was not in favor of the approach to instruction that it advocated. By the end of our study and 3 years into the process, the curriculum department was just finishing the revision, long delayed because of the department’s heavy workload and multiple responsibilities.


Analysis of solution framing. As with diagnostic framing, interpretation and negotiation played a central role in how individuals in the district framed potential solutions. There was often an interpretive space between a shared definition of a problem and potential solutions. In the case of the literacy framework, for example, although there was shared agreement among district personnel that the district lacked a uniform approach to instruction, there were many potential ways that it could address this problem. And even though there was agreement among key decision makers on the need for a framework to guide policy making, there were very different interpretations of what a framework should look like and the specific approaches to reading instruction that it should promote. In this and other decisions, how the district moved from problem definitions to the solutions they pursued involved prognostic framing and counterframing as individuals put forth, argued for, and modified potential solutions in interaction and negotiation with their colleagues. (See Appendix B for details on the dynamics of prognostic framing debates.)


Evidence played a role in solution framing in 17 out of 23 decisions that we analyzed, including all 14 of the decisions in which there were conflicting frames, and three out of the nine decisions in which solution frames were shared.5 However, district personnel only occasionally used evidence in an instrumental fashion. We determined that district personnel used evidence in an instrumental fashion if they used research or data centrally to select or generate solutions, or weigh the costs and benefits of multiple, competing options. Key to this designation is that attention to data or research evidence came before advocacy for a particular solution. We saw evidence of instrumental use in only 6 out of 23 decisions.


Even when district personnel used evidence in an instrumental manner, interpretation still played a central role. For example, in decision making about summer literacy professional development during the first year of the study, district decision makers pointed to the increase in test scores in schools that had participated in a multiyear grant to improve reading instruction to argue that the district should draw on this approach in its new professional development initiative. However, different individuals interpreted the implication of the increased test scores in different ways. Some decision makers saw them as demonstrating the need to promote sustained professional development because the schools in question received a lot of professional development during the initiative. For example, the superintendent said, “We have about eight schools who have participated in [this program] and what we need is to have that kind of consistent professional development across the district, not just these few sites with intentional work going on.” Others saw it as demonstrating the need to emphasize particular instructional approaches to reading instruction, although different individuals emphasized different aspects of the comprehensive model that the schools were involved with when making this argument: “We should really look at how teachers use their time because [the project] has been very focused on how teachers use their time and effective instruction.” Still others saw the increase in scores as evidence for extending the involvement of the reform organization to more schools in the district. All these solutions grew centrally and directly out of the data from this reform effort, as those advocating instrumental decision making might expect. Yet, all involved different interpretations of the meaning and implications of the data for future directions of the district.


Furthermore, even when data were used instrumentally in one stage of the decision trajectory, they were not always used in this manner at other stages. In this example, after upper-level decision makers drew on evidence from the reading reform effort to set priorities for professional development, those who were ultimately responsible for working out the details of the professional development eschewed the model in question, opting for another approach without considering evidence of its effectiveness. This dynamic occurred in four of the six decisions that used evidence in an instrumental manner. Overall, then, instrumental use of evidence played a comparatively small role in the 23 decisions we analyzed.


It was actually more common for district personnel to use research studies, data, or general claims that “research says” to justify, persuade, and bring legitimacy to potential solutions that were already favored or even enacted. Weiss and others (Feldman & March, 1988; Weiss, 1980) call this “symbolic use” of evidence. We determined that evidence was used in a symbolic manner if attention to research or data came after the emergence of a solution, if there was evidence of selective use of research or data, or if participants evoked evidence in very general terms (for example, “research says”) to generate legitimacy for a particular solution. In the development of the literacy framework, for example, the writing team included information from the NRP report explicitly as a way to bring legitimacy to their document. Similarly, different framework reviewers invoked research as they sought to persuade others of the merits of solutions they were promoting or to critique and delegitimize alternative solutions; this was the case when one of the reviewers invoked the phrase “research says” to argue for more attention to native language literacy for ELLs. In this instance, as in others, no further elaboration about the nature of the research was offered or requested. As is demonstrated in Appendix B, 14 out of 23 decisions involved symbolic use of evidence during solution framing. District decision makers were much more likely to use evidence symbolically when there were conflicting solution frames: 13 out of 14 decisions with conflicting frames, but only 1 of the 9 decisions in which solution frames were shared from the outset.


Finally, as was the case with problem framing, district personnel also used research in a conceptual manner with solution framing. That is, engagement with research influenced working knowledge, which in turn influenced the way district personnel saw particular solutions as appropriate. As with problem framing, we determined that engagement with research informed solution framing in a conceptual manner when we observed district personnel reading or talking in depth about research literature and then saw the language, conceptual categories, or ideas from the research shape their interpretation of evidence or the solutions that individuals found to be appropriate. We found only six decisions with evidence of conceptual use of research.


But, although evidence was invoked during solution framing in 17 decisions, it was not particularly effective in persuading others of the wisdom of a particular solution, especially when there were conflicting frames. To use the language of frame analysts, invoking evidence did not necessarily generate resonance. Although referring to evidence, either instrumentally or symbolically, appeared to strengthen commitment to solutions when there was shared agreement about the value and appropriateness of a solution (three decisions), there were only two decisions in which drawing on research, evaluation evidence, or data in the course of frame disputes caused decision makers to question their assumptions or consider alternative solutions. (In Appendix B, these are the decisions for which there were conflicting frames, but debate was resolved through the development of shared understandings.)


When decision makers were unable to persuade one another of the merits of particular solutions using evidence or other means (12 out of 23 decisions), they did one or more of the following three things. First, they addressed conflicting frames by narrowing the range of participants. This happened with the literacy framework as decision making moved from broad participation of individuals representing multiple divisions and interests in the district, to hiring a few external consultants who made key decisions in consultation with leaders of the curriculum division. Although other constituents were given the opportunity to comment on the draft, the decisions about responding to the feedback and adjudicating differences remained with a few people. As shown in Appendix B, district decision makers addressed conflicting solution frames by narrowing participation in seven decisions.


The second way that district decision makers addressed conflicting ideas about appropriate solutions was by using what we call structural elaboration. That is, they built the differences into the structure of the program or policy, often creating greater complexity. For example, this happened with the literacy framework when district staff built conflicting recommendations into the framework itself. It also happened when the leadership created two frameworks—one that was K–5 and the other that was 6–12—rather than one K–12 framework as a way to manage dissension between elementary and secondary literacy specialists. Structural elaboration played a central role in resolving frame disputes in four decisions.


The final way that disagreements over solutions were resolved was through exercises of authority. In these instances, individuals with positional authority6 selected the ultimate solution. This happened with the literacy framework when the superintendent ordered that the framework not be distributed. There were also multiple decisions (including summer professional development, supplementary mathematics curriculum, underperforming schools initiative, and the elimination of late start) in which the superintendent or an assistant superintendent stepped in and chose an approach after extended debate did not yield a shared solution. We saw exercises of authority to resolve conflicting solutions in seven decisions, all of which had conflicting frames.


It is important to note that all three of these strategies—narrowing participation, structural elaboration, and exercising authority—are essentially political. Solutions were reached not because key stakeholders had been persuaded, by evidence or other means, that a given solution was the most appropriate route. Rather, the outcome involved political compromise (structural elaboration), the use of power to orchestrate consensus (narrowing participation), or the exercise of power by those in structural positions of authority. Thus, in this district, political strategies, not the use of evidence, shaped the decision outcome in 12 out of the 14 decisions in which there were conflicting ideas about solutions.


FACTORS THAT SHAPED THE DECISION PROCESS


The process of problem framing and debates about solutions were shaped by several factors. Here we discuss four: content knowledge among district administrators, organizational structure, resource constraints, and leadership transition. We discuss each of these factors in turn.


Content Knowledge


As suggested by sensemaking theorists, individuals in the district made meaning of information and events by drawing on their working knowledge. District leaders’ content knowledge7—or the understandings that individuals held about the nature of subject matter, what constitutes “good” instruction, and how students learn in a given subject—played a particularly important role in instructional decisions. We determined that content knowledge played a role if there was a relationship between an individual’s beliefs about instruction and the frames that he or she put forth.


Content knowledge was important to how framing debates unfolded because it shaped how district decision makers interpreted the meaning and implication of evidence and what they saw to be appropriate solutions. For example, in the decision to adopt a supplementary mathematics curriculum discussed in Vignette 1, individuals’ conceptions of how students learn mathematics and what constitutes high-quality mathematics instruction led to different interpretations of the same low middle school test scores. Those who saw high-quality mathematics instruction as involving direct instruction in basic skills argued that the low test scores signaled limitations with the standards-based elementary curriculum that emphasized active, conceptually based instruction. In contrast, those who conceived of high-quality mathematics instruction as instruction that promoted students’ construction of meaning alongside the development of skills interpreted the test scores not as something having to do with the curriculum itself, but as providing evidence that teachers were not implementing the standards-based curriculum appropriately. Content knowledge also influenced district leaders’ sense of appropriate solutions. Once the district decided to move ahead with the adoption of a supplementary curriculum, the basic skills proponents promoted the adoption of a computer-based curriculum that tutors basic skills. The reform mathematics proponents advocated a supplementary curriculum that focused on building both skills and conceptual understanding.


The central role of content knowledge may also account for the limited influence of evidence in resolving frame disputes. When district personnel did proffer evidence in support of problem or solution frames during frame disputes, the evidence rarely addressed the assumptions about high-quality instruction or how children learn, which were at the root of the dispute. For example, during the first years of our study, the district was in the midst of debating the use and expansion of uniform kindergarten assessments. The curriculum department and the research department had very different visions of the appropriate assessment for young children that were based in different ideas about reading development. The research department, whose leadership favored direct instruction in reading skills and early acquisition of phonics and phonemic awareness, developed assessments that measured discrete reading skills. Many in the curriculum department worried that the assessment tested skills out of context. As one member of the department explained,


There’s this kindergarten test that the district gives . . . that’s a timed test, where they go “da da da” . . . . Some of us don’t believe that that’s best practice. . . . Rather than looking at literacy as a whole, [the assessment] is looking at decoding skills and how well kids read words.


The curriculum department advocated instead for a kindergarten assessment that measured students’ literacy in the context of authentic texts.


Members of the research department argued for their approach to kindergarten assessment not by offering evidence for the efficacy of skills-based approaches to instruction, but by citing evidence for the validity and reliability of the assessment itself: “Our assessment has been highly touted by many. We have gotten extremely high reliability and validity. It predicts the [district assessment] in second grade.” This evidence was not persuasive to those in the curriculum department; they were not disputing the kindergarten assessment on its technical qualities, but rather because of their strong beliefs about appropriate reading instruction. The curriculum leaders sought an assessment that measured fundamentally different things. In this district, individuals’ content knowledge was particularly influential in decisions related to curriculum adoption, professional development design, assessment policy, and the development of subject matter frameworks. Overall, content knowledge played an important role in framing activities in 16 out of 23 decisions.


Organizational Structure of the Central Office


Most school districts have highly complex and departmentalized organizational structures (Hannaway, 1989; Meyer & Scott, 1983; Spillane, 1998). In this district, as in others, responsibility for instruction was divided among multiple organizational subunits, each of which had distinct, yet often overlapping, sets of responsibilities. As a result, decision making related to instruction was often stretched across multiple units and layers of the central office. This had two major consequences for the way that decision making unfolded. First, people in different divisions tended to have different content knowledge. For example, members of the special education division had quite different ideas about high-quality reading instruction than those in the curriculum office on the one hand, and the ELL division on the other. Similarly, members of the research office had quite different views than those in the curriculum office about what constituted high-quality assessment and which outcomes were worth measuring. Although these differences may have originated in different disciplinary backgrounds and training (Spillane) and differences in the nature of their work (Coburn & Talbert, 2006), they appeared to be sustained by a lack of ongoing interaction across divisions. In this district, there were few structured opportunities for anyone other than the directors of these divisions to meet and work with one another, even when they were working in the same schools.


As a result, decision trajectories that implicated multiple divisions were more likely to involve multiple ways of framing problems and conflicting ideas about appropriate solutions than those that were primarily located in a single division. A total of 10 out of 15 decisions that involved more than one division involved conflicting problem frames, compared with only 1 out of 7 decisions that unfolded within a single division. (We did not have enough information to make this designation for one decision.) Similarly, 12 out of 15 cross-division decisions involved conflicts over appropriate solutions, compared with only 2 out of 8 decisions that unfolded within a single division. The “silo” structure of the district also enabled and perhaps even encouraged the tendency to respond to differences by narrowing participation back into particular divisions in which the shared understandings enabled what appeared to district administrators to be more efficient decision making.


Resource Constraints


This district, like many urban districts, was experiencing an acute contraction of resources during the time of our study. As a result of shrinking state funding and declining district enrollments, the district was forced to cut $20 million from its budget 3 years in a row. Over the course of the 3 years of our study, the central office became leaner while the responsibilities remained the same and in some respects increased. As a result, the remaining central office personnel picked up more and more responsibilities, had less time to meet those responsibilities, and had less funding to bring in consultants or other external sources of support. The vast decline in resources created what Cohen and his colleagues called “decision making under high load” (Cohen, March, & Olsen, 1988). They predicted that under these conditions, there would be an increase in decision activity but a decrease in productive decision making as decision makers shift from one problem to another; choices would take longer; and decision processes would be less likely to actually solve the problem. And indeed, we found that the contraction of resources had three important consequences for decision making. First, over the course of the 3 years, as predicted by Cohen and his colleagues, decision trajectories became increasingly interrupted and drawn out. Conversations about a particular issue or concern would surface, be discussed, and then recede from view in the press of competing priorities, only to surface later in the face of a new crisis or an impending deadline. This meant that it became more common for decisions to be made at the last minute, as was the case with the literacy framework, which was essentially prepared in 2 months after 1 1/2 years of delay. It also became more common for decisions to remain unresolved altogether. Thus, in Year 1 of our study, 42% of decisions were either drawn out and made at the last minute or unresolved, compared with 45% in Year 2 and just over 60% in Year 3.


Second, resource constraints appeared to lead to more conservative decision making. We determined that a decision took a conservative path if the solution borrowed heavily from preexisting district practices. In the crush of impending deadlines, district personnel tended to reach for the familiar as the basis for solutions. Thus, in the case of the literacy framework, district staff ultimately based their new framework on the existing text of their old framework (the green book) even though the perceived limitations of the document had caused district personnel to want to develop a new framework in the first place. Conservative decision making increased over the 3 years as resources to search for novel solutions became increasingly constrained. A total of 20% of decisions in the first year of our study involved conservative decision making, compared with 57% in the second year and 63% in the third.


Finally, there was a shift from more substantive to less substantive use of evidence. More specifically, constrained resources and time made it less likely that decision making involved conceptual use of research, less likely that conceptual use of research enabled shared understandings, and more likely that decision makers used evidence symbolically.8 In the early years of the study, district personnel spent time with external consultants reading and discussing research literature on key topics under debate. These conversations resulted in conceptual use of research in multiple decisions, as was the case with decisions about late-start days discussed earlier. However, as resources became more constrained, district staff members had less time for these extended discussions informed by research and data. They had to make decisions at a pace that precluded the in-depth conversation necessary to surface and examine underlying assumptions. And they were less able to parlay the discussions that did occur into shared understandings about the nature of the problem or appropriate solutions. Thus, although sustained engagement with research during Year 1 led to conceptual use in five decisions with shared understanding in four of the five, sustained engagement with research in Year 2 led to conceptual use in four decisions, but shared understandings in only one. Finally, there were no instances of sustained engagement with research at all in Year 3.


At the same time that there was a decrease in conceptual use of research as resources declined, there was an increase in symbolic use. During the first year, decision makers invoked evidence symbolically in 50% of the six decisions made that year. But that percentage increased to 57% of seven decisions in Year 2 of the study, and 70% of 10 decisions in Year 3. Thus, as substantive use of evidence declined, political uses of evidence increased.


Leadership Turnover


During the course of our study, the district experienced a change in its superintendent, which came with a shift in priorities and significant turnover at the upper levels of the district hierarchy. These changes had a number of consequences for the nature of the decision-making process.


First, the new superintendent employed a more unitary style of decision making than the previous one had. She drastically scaled back the level of participation in instructional decision making (narrowing participation) for many of the key issues that had been under debate for some time, often inviting only those individuals into the decision process who had beliefs about instruction similar to her own. One potential benefit of this approach was that it streamlined decision making and, at least on the surface, created the appearance of shared understandings of problems and solutions. However, the alternative conceptions of problems and solutions did not go away; they simply went underground, and many individuals who were responsible for implementing the decisions had little understanding of or agreement with the logic of the decisions made by the more exclusive group of people. For example, the superintendent, in consultation with several key advisors, instituted a new homework policy for underperforming schools meant to address low achievement. All children in these schools were required to complete homework packets every weekend and over holiday breaks in reading and mathematics. Members of the curriculum department were expected to produce the homework packets but were not consulted as part of this decision and did not understand or agree with the approach. As one member of the curriculum department explained, “We have an ethical issue with it. . . . Doing homework this way is not best practice.” As a result, members of the curriculum department constructed homework packets that were more in line with their approach to teaching and learning than that put forth by the superintendent.


The superintendent turnover and the resulting shake-up of upper-level leadership also shifted the individuals in positions of authority. Thus, those who had the ability to use their authority in frame disputes under the first superintendent were not always able to use their authority in this way under the new superintendent, and some district managers who had limited authority under the first superintendent experienced greater authority under the new superintendent. This had implications for how ongoing frame disputes were resolved. For example, in the first 2 years of the study, controversy about reading instruction resulted in contrasting ideas about priorities for professional development for elementary teachers. These differences were largely resolved through structural elaboration as members of the curriculum department led concurrent professional development that varied in focus depending on who was leading the institutes. However, when the new superintendent arrived, she brought with her a different understanding of what constitutes good reading instruction. As part of her initiative for underperforming schools, the superintendent designed and delivered professional development in literacy and involved a few key advisors in the process who shared her views, none of whom were members of the district literacy staff. This professional development varied sharply in form and focus from that provided by district literacy staff in previous years. Thus, although the district made quite different decisions after the arrival of the new superintendent, it was not because she created new ways to negotiate the differences about matters of instruction that complicated the decision process in the past, nor was it because district personnel engaged with research and data in a way that helped them understand their problems in a new way. Rather, the new superintendent brought with her alternative frames for diagnosing the problems and different potential solutions, and she involved different people in the decision-making process.


DISCUSSION AND IMPLICATIONS


Calls for evidence-based decision making have become increasingly prominent in the educational landscape, yet few studies have looked beyond schools and classrooms to examine the dynamics of evidence use in district-level policy making. Here, we draw on sensemaking theory and frame analysis to provide an in-depth portrait of the interactive processes of decision making, with particular attention to the role of evidence therein. Our study challenges conventional images of evidence use in policy settings by highlighting the central role of interpretation. We provide evidence that the meaning and implications of evidence are not self-evident. There is always an interpretive space between a given piece of evidence and the diagnosis of a problem, and between an understanding of the problem and prescriptions for a solution. Our study shows that district decision makers draw on their preexisting working knowledge—especially their content knowledge—as they operate in this interpretive space. It is by connecting evidence to working knowledge that individuals render it meaningful. Consequently, evidence is always mediated by working knowledge in the decision-making process.


This study also highlights how evidence use is situated in broader processes of deliberation that characterize policy making in public settings. As suggested by frame theory, decision makers can and do interpret the nature of problems and appropriate solutions in contrasting ways. When faced with these circumstances, decision makers attempt to persuade others of the merits of their problem or solution frame. This study extends frame analysis by showing when and under what conditions district administrators invoke evidence as part of their framing processes to justify, support, and legitimize their proposed solutions. Further, it shows that invoking evidence in this manner is not always an effective strategy for generating resonance. Rather, faced with conflicting interpretations, district leaders turn to political means—structural elaboration, narrowing participation, and use of authority—to make decisions about instructional policy.


Finally, this study extends sensemaking theory by illuminating how evidence use is shaped in consequential ways by organizational conditions within a district. The multilevel and multidivisional structure of most school districts encourages the development of diverse, and at times conflicting, ideas about what constitutes good instruction by shaping the patterns of interaction through which ideas are constructed and shared. The study also demonstrates how resource constraints make it difficult to use evidence in substantive ways because district personnel have less time to search for new or novel solutions and less time to engage with evidence and each other in ways that encourage and enable them to rethink their assumptions and develop shared understandings. Executive leadership also plays a role by bringing new ways of framing problems and solutions and determining levels of inclusiveness of the process. Although changes in upper-level leadership shape the substance of framing debates and some of the strategies used to resolve them, they do not appear to affect underlying interpretive processes.


This account highlights the prominent role of politics in evidence use. That decision making in school districts is political is not a surprise. Political scientists and organizational theorists have long highlighted the fact that decision making is a venue through which resources—real and symbolic—are generated and distributed (Bacharach, 1988; Malen, 2006; March & Olsen, 1989; Pfeffer, 1981; Scribner, Aleman, & Maxcy, 2003). Furthermore, frame analysts have focused attention on the way in which framing processes can be strategic sites for power struggles (Starbuck & Milliken, 1988; Stone, 1988). By focusing on the ways in which evidence is used in framing and decision making, this study demonstrates the limits of evidence use as a strategy for circumventing the politics of decision making. Advocates for evidence-based decision making often position evidence use as an antidote to overly politicized decision making on the part of district leaders. Yet we show that research and evidence often become further tools in the very political processes that they are meant to circumvent. Decision makers use evidence symbolically to garner legitimacy for their position, and when decision makers do invoke evidence, the debate can quickly shift to conflicts about the quality of the evidence.


At the same time, by focusing on the cognitive aspects of framing processes and the organizational conditions that shape them, this study provides insight into strategic levers for facilitating more productive use of evidence within the context of the highly politicized environment of district decision making. First, it highlights the need for adequate resources to support the level and complexity of decision making in a given district at a particular historical moment. In this study, as resources to support decision making decreased (including time, adequate personnel, and external support), symbolic use of evidence increased, and conceptual use of research decreased. The decline in the conceptual use of research is particularly striking. If, as we have suggested, evidence is always mediated by individuals’ working knowledge, then a key way to influence decision making is to influence the working knowledge that decision makers draw upon to interpret evidence, frame problems, and select and argue for solutions. In this district, joint engagement with research enabled district personnel to forge shared understandings of problems and solutions, avoiding the kinds of conflicts that were ultimately resolved using political strategies like narrowing participation, structural elaboration, and using authority. Conceptual use of research also played an important role when district decision makers were able to develop shared solutions in the face of conflicting frames. This suggests that substantive evidence use requires adequate time and staffing resources to analyze data, to research solutions, and to engage deeply with the evidence and each other in the process of deliberation and debate.


Second, this study highlights the need to develop greater opportunities for individuals in different divisions within school districts to interact in substantive ways with research and data. People in organizations create shared understandings through social interaction and negotiation (Porac et al., 1989; Vaughan, 1996). In district central offices, however, this interaction most frequently occurs within a given division. In the absence of opportunities to interact across divisions in substantive ways, beliefs and practices are likely to become increasingly diverse between divisions over time (Coburn & Talbert, 2006), exacerbating the kind of cross-division conflict over matters of instruction that we saw in this district. Cross-divisional engagement is likely necessary to develop sets of understandings about instructional issues that are truly shared and that, in turn, can facilitate joint definitions of problems and shared visions of solutions.


Third, this study points to the importance of content knowledge in instructional decision making. Assumptions about what constitutes best practice, how students learn, and the nature of mathematics or literacy played a central role in over two thirds of the decisions we analyzed. Yet in many districts, as in this one, positions related to instruction (e.g., directors of professional development or subject matter specialists) are marginal to the main lines of authority in the district (e.g., positions with supervisory authority over principals and schools). As a consequence, those who are most likely to have content expertise tend to be peripheral to central decision-making authority, and those with decision-making authority do not necessarily have content expertise. When there are differences in views of appropriate ways to frame problems or appropriate solutions, it is the people with central decision-making authority who have the power to limit participation and select the solution. This means that it is possible, and perhaps common, that those with less robust understandings of high-quality instruction and the nature of student learning of subject matter are in positions in which they are adjudicating differences in interpretation of the nature of evidence and making final judgments about appropriate policy directions to pursue.


This in-depth portrait of the process of evidence use in instructional decision making differs markedly from the normative models put forth by advocates of evidence-based decision making. Yet understanding how these processes unfold in complex social settings is critical if we want to develop structures to enable district leaders to use evidence in more productive and generative ways. Ultimately, evidence use in public policy settings is not a simple endeavor. It requires a great deal of knowledge on the part of decision makers about the substantive matters that are the target of decision making. It requires structures for enabling people in different parts of the organization to engage in deliberation and debate. And it requires organizational conditions that encourage and enable people to engage with evidence in substantive ways.


Acknowledgements


We would like to thank Joan Talbert, Kristin Crosland, and Angie Eilers for help with data collection, and Michael O’Neill for administrative assistance. We also thank Nathan MacBrien, Warren Simmons, James Spillane, Sam Stringfield, Joan Talbert, several members of the district, and two anonymous reviewers for comments on earlier versions of the article.


Notes


1. Emerging from the field of the sociology of knowledge, the concepts of working knowledge and worldview emphasize the integrated, situated, and embedded nature of the knowledge that individuals draw upon in the course of their work. Kennedy (1982) defined working knowledge in the following way: “Working knowledge is the organized body of knowledge that [people] use spontaneously and routinely in the context of their work. It includes the entire array of beliefs, assumptions, interests and experiences that influence the behavior of individuals at work. It also includes social science knowledge. The term working, as used here, has two meanings. First, it means that this is a special domain of knowledge that is relevant to one’s job. Second, it means that the knowledge itself is tentative, subject to change as the worker encounters new situations or new evidence” (p. 2). Working knowledge encompasses but also goes beyond such constructs as conceptions about the nature of subject matter (Thompson, 1992), conceptions of how students learn (Thompson), subject matter knowledge (Fennema & Franke, 1992), pedagogical content knowledge (Shulman, 1986), and leadership content knowledge (Stein & Nelson, 2003). Thus, the concept of working knowledge acknowledges the degree to which beliefs and knowledge are intertwined and the degree to which aspects of knowledge do not exist as isolated or discrete categories.

2. This approach, which required that we actually observe engagement with research, was necessary to distinguish this phenomenon from symbolic use of research, in which research is used to justify decisions that are already made. However, it is a rather stringent criterion for determining conceptual use of research and likely underreports its prevalence. Indeed, other studies that have discussed conceptual use of evidence have reported higher levels of conceptual use of research than we report here (see Coburn et al., in press, for a review).

3. An earlier draft of the new framework put together by another consultant was based heavily on a framework developed by the Ministry of Education in New Zealand, a direction that was supported by several assistant superintendents who had recently made a trip to New Zealand to learn about their literacy work. However, the external consultant felt that the framework was too ambitious for teachers in the district and too progressive for the current political climate. She explained, “I felt that the teachers, the amount of staff development that would have to be done for the teachers to succeed or buy into or be able to be successful in that New Zealand kind of framework is so not really within reach at this point. So I was advocating what I saw was a more middle ground.” She shared her concerns with the leadership of the curriculum department, and that initial draft was abandoned.

4. The framework justifies this approach by stating early in the document that there is no single approach to reading instruction that is effective with all children: “This framework is based on the following beliefs: . . .There is no single method or single combination of methods that can successfully teach all children to read and write. Teachers must determine the appropriate combination and sequence of methods needed for the students they teach.” One member of the curriculum leadership explained, “I was brought into [the curriculum department] to make sure that . . . special education and ELL felt they were in the conversation. That is what I think I was put here to do and what I’ve tried to do. I can’t make them like each other. I can’t make these polarized views [get along] . . . but in my mind, different kids get to reading in different ways. Neither one of them is wrong. We have to provide the range so that children get there. . . . So you need to have both views in [the framework].”

5. Five decisions did not employ evidence at all as part of prognostic framing. In these instances, solutions were either accepted without argumentation because they “made sense” to those involved in decision making, or argumentation for solutions unfolded without any explicit reference to data, research, or evaluation reports. There was one additional decision for which we did not have adequate information to determine the use of evidence in solution framing.

6. We define positional authority as power that is “coded into structural design” (McAdam & Scott, 2005, p. 10) of organizations and legitimized by shared norms that authorize particular uses of power for particular roles (Dornbusch & Scott, 1975; Gerth & Mills, 1946).

7. Drawing our inspiration from Shulman’s work on teachers’ content knowledge, we focus on three aspects of district leaders’ content knowledge: subject matter knowledge, pedagogical content knowledge, and curriculum knowledge in a given subject area. Subject matter knowledge includes one’s knowledge of the discipline (Shulman, 1986). Pedagogical content knowledge includes knowledge of how to represent key aspects of a subject matter to the learner, including “an understanding of what makes the learning of specific topics easy or difficult, the conceptions and preconceptions that students of different ages and backgrounds bring with them to the learning . . . knowledge of strategies most likely to be fruitful in reorganizing the understanding of learners” (pp. 9–10). Curricular knowledge includes understanding of the range of curricular resources and approaches that are available in a given subject matter and when it is appropriate to use which approach (Shulman, 1986, 1987). Consistent with conceptions of working knowledge, we view these forms of knowledge as encompassing both formal knowledge that is rooted in the profession’s collective and accumulated wisdom, and practical knowledge that is situated in particular contexts and rooted in personal inquiry and experience.

8. The pattern is less clear with instrumental use of evidence. During this same period, instrumental decision making initially decreased substantially from Year 1 to Year 2, but then increased again slightly in Year 3.


References


Babb, S. (1996). “A true American system of finance”: Frame resonance in the U.S. labor movement, 1866 to 1886. American Sociological Review, 61, 1033–1052.


Bacharach, S. (1988). Notes on a political theory of educational organizations. In A. Westoby (Ed.), Culture and power in educational organizations (pp. 277–288). Milton Keynes, England: Open University Press.


Barley, S. R. (1990). Images of imaging: Notes on doing longitudinal field work. Organization Science, 1, 220–247.


Benford, R. D., & Snow, D. A. (2000). Framing processes and social movements: An overview and assessment. Annual Review of Sociology, 26, 611–639.


Binder, A. J. (2002). Contentious curricula: Afrocentrism and creationism in American public schools. Princeton, NJ: Princeton University Press.


Birkeland, S., Murphy-Graham, E., & Weiss, C. (2005). Good reasons for ignoring good evaluation: The case of the drug abuse resistance education (D.A.R.E.) program. Evaluation and Program Planning, 28, 247–256.


Celio, M. B., & Harvey, J. (2005). Buried treasure: Developing a management guide from mountains of school data. Seattle: University of Washington, Center on Reinventing Public Education (CRPE), Daniel J. Evans School of Public Affairs.


Coalition for Evidence-Based Policy. (2002). Bringing evidence-driven progress to education: A recommended strategy for the U.S. Department of Education. Washington, DC: Author.


Coburn, C. E. (2001). Collective sensemaking about reading: How teachers mediate reading policy in their professional communities. Educational Evaluation and Policy Analysis, 23, 145–170.


Coburn, C. E., Honig, M. I., & Stein, M. K. (in press). What is the evidence on districts’ use of evidence? In J. Bransford, L. Gomez, D. Lam, & N. Vye (Eds.), Research and practice: Towards a reconciliation. Cambridge, MA: Harvard Education Press.


Coburn, C. E., & Talbert, J. E. (2006). Conceptions of evidence-based practice in school districts: Mapping the terrain. American Journal of Education, 112, 469–495.


Cohen, M. D., March, J. G., & Olsen, J. P. (1988). The garbage can model of organizational choice. In J. G. March (Ed.), Decisions and organizations (pp. 294–334). Oxford, England: Basil Blackwell.


Cooley, W. W. (1983). Improving the performance of an educational system. Educational Researcher, 12(6), 4–12.


Cress, D. M., & Snow, D. A. (2000). The outcomes of homeless mobilization: The influence of organization, disruption, political mediation, and framing. American Journal of Sociology, 105, 1063–1104.


David, J. L. (1981). Local uses of Title I evaluations. Educational Evaluation and Policy Analysis, 3, 27–39.


Dornbusch, S. M., & Scott, W. R. (1975). Evaluation and the exercise of authority. San Francisco: Jossey-Bass.


Eisenhart, M. A., & Howe, K. R. (1992). Validity in educational research. In M. D. LeCompte, W. L. Millroy, & J. Preissle (Eds.), The handbook of qualitative research in education (pp. 643–680). San Diego, CA: Academic Press.


Elmore, R., & Burney, D. (1997). School variation and systemic instructional improvement in Community School District #2, New York City. Pittsburgh, PA: High Performance Learning Communities Project, Learning Research and Development Center, University of Pittsburgh.


Feldman, M. S., & March, J. G. (1988). Information in organizations as signal and symbol. In J. G. March (Ed.), Decisions and organizations (pp. 409–428). Oxford, England: Basil Blackwell.


Fennema, E., & Franke, M. L. (1992). Teachers' knowledge and its impact. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 147–164). New York: Macmillan.


Fiss, P. C., & Hirsch, P. M. (2005). The discourse of globalization: Framing and sensemaking of an emerging concept. American Sociological Review, 70, 29–52.


Fligstein, N. (2001). Social skill and the theory of fields. Sociological Theory, 19, 105–125.


Gerth, H., & Mills, C. W. (1946). From Max Weber: Essays in sociology. New York: Oxford University Press.


Hannaway, J. (1989). Managers managing: The workings of an administrative system. New York: Oxford University Press.


Hartley, J. F. (1994). Case studies in organizational research. In C. Cassell & G. Symon (Eds.), Qualitative methods in organizational research: A practical guide (pp. 208–229). Thousand Oaks, CA: Sage.


Hightower, A., Knapp, M. S., Marsh, J. A., & McLaughlin, M. W. (2002). School districts and institutional renewal. New York: Teachers College Press.


Johnson, B. L., Jr. (1999). The politics of research-information use in the education policy arena. Educational Policy, 13(1), 23–36.


Kennedy, M. M. (1982). Working knowledge and other essays. Cambridge, MA: The Huron Institute.


Knapp, M., Copland, M., & Talbert, J. (2003). Leading for learning: Reflective tools for school and district leaders. Seattle: University of Washington, Center for the Study of Teaching Policy.


Ligon, G., & Jackson, E. (1986, April). Linking outcomes to organizational planning. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.


Malen, B. (2006). Revisiting policy implementation as a political phenomenon: The case of reconstitution policies. In M. Honig (Ed.), New directions in education policy implementation: Confronting complexity (pp. 83–104). Albany: State University of New York Press.


March, J. G. (1988). Decisions and organizations. Cambridge, England: Basil Blackwell.


March, J. G., & Olsen, J. P. (1989). Rediscovering institutions: The organizational basis of politics. New York: Free Press.


Massell, D. (2001). The theory and practice of using data to build capacity: State and local strategies and their effects. In S. H. Fuhrman (Ed.), From the capitol to the classroom: Standards-based reform in the states. One hundredth yearbook of the National Society for the Study of Education (pp. 148–169). Chicago: National Society for the Study of Education (NSSE) and University of Chicago Press.


McAdam, D., & Scott, W. R. (2005). Organizations and movements. In G. F. Davis, D. McAdam, W. R. Scott, & M. N. Zald (Eds.), Social movements and organizational theory (pp. 4–40). Cambridge, England: Cambridge University Press.


Merriam, S. B. (1998). Qualitative research and case study applications in education (Rev. ed.). San Francisco: Jossey-Bass.


Meyer, J. W., & Scott, W. R. (1983). Organizational environments: Ritual and rationality. Beverly Hills, CA: Sage.


Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.


National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups. Washington, DC: U.S. Government Printing Office.


Pfeffer, J. (1981). Power in organizations. Marshfield, MA: Pitman.


Porac, J. F., Thomas, H., & Baden-Fuller, C. (1989). Competitive groups as cognitive communities: The case of Scottish knitwear manufacturers. Journal of Management Studies, 26, 397–416.


Schneider, A., & Ingram, H. (1993). Social construction of target populations: Implications for politics and policy. American Political Science Review, 87, 334–347.


Scribner, J. D., Aleman, E., & Maxcy, B. (2003). Emergence of the politics of education field: Making sense of the messy center. Educational Administration Quarterly, 39, 10–40.


Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(1), 1–22.


Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57, 1–22.


Slavin, R. E. (1989). PET and the pendulum: Faddism in education and how to stop it. Phi Delta Kappan, 70, 752–758.


Slavin, R. E. (1998). Show me the evidence! Proven and promising programs for America's schools. Thousand Oaks, CA: Corwin Press.


Snow, D. A., & Benford, R. D. (1992). Master frames and cycles of protest. In A. Morris & C. Mueller (Eds.), Frontiers in social movement theory (pp. 135–155). New Haven, CT: Yale University Press.


Spillane, J. P. (1998). State policy and the non-monolithic nature of the local school district: Organizational and professional considerations. American Educational Research Journal, 35, 33–63.


Spillane, J. P. (2000). Cognition and policy implementation: District policymakers and the reform of mathematics education. Cognition and Instruction, 18, 141–179.


Spradley, J. (1979). The ethnographic interview. New York: Holt, Rinehart and Winston.


Starbuck, W. H., & Milliken, F. J. (1988). Executives’ perceptual filters: What they notice and how they make sense. In D. C. Hambrick (Ed.), The executive effect: Concepts and methods for studying top managers (pp. 35–65). Greenwich, CT: JAI Press.


Stein, M. K., & Nelson, B. S. (2003). Leadership content knowledge. Educational Evaluation and Policy Analysis, 25, 423–448.


Stone, D. A. (1988). Policy paradox and political reason. Glenview, IL: Scott Foresman.


Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.


Thompson, A. G. (1992). Teachers’ beliefs and conceptions: A synthesis of the research. In D. A. Grouws (Ed.), Handbook on research on mathematics teaching and learning (pp. 127–146). New York: Macmillan.


U.S. Department of Education. (2002, April). Guidance for the Reading First program. Retrieved October 4, 2007, from http://www.ed.gov/programs/readingfirst/guidance.pdf


Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. Chicago: University of Chicago Press.


Vrabel, D. (1999). Reference guide to continuous improvement planning for Ohio school districts. Columbus: Ohio State Department of Education.


Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage.


Weiss, C. H. (1980). Knowledge creep and decision accretion. Knowledge: Creation, Diffusion, Utilization, 1, 381–404.


Weiss, C. H., & Bucuvalas, M. J. (1980). Social science research and decision making. New York: Columbia University Press.


Weiss, J. A. (1989). The powers of problem definition: The case of government paperwork. Policy Sciences, 22, 97–121.


Yin, R. (1984). Case study research: Design and methods. Thousand Oaks, CA: Sage.





APPENDIX A


Uses of Evidence in Diagnostic Framing



Decision: Supplementary mathematics curriculum
Evidence as signal:
*Low test scores in middle school mathematics
*Complaints from the community about existing mathematics curriculum
Conceptual use of research? No
Problem frames:
*Lack of basic skills in elementary mathematics curriculum
*Lack of districtwide focus
*Lack of implementation of elementary mathematics curriculum
Consequence of problem framing: Debate shifted to focus on which supplementary mathematics curriculum to adopt

Decision: Focus of summer professional development in mathematics, 2003
Evidence as signal:
*Low test scores in middle school mathematics
Conceptual use of research? No
Problem frames:
*Lack of sustained professional development
*Lack of implementation of elementary mathematics curriculum
Consequence of problem framing: Conversation narrowed to focus on details of professional development rooted in elementary mathematics curriculum

Decision: Focus of summer professional development in literacy, 2003
Evidence as signal:
*Observations of literacy instruction in course of ongoing work in schools
*Research on early literacy instruction
*Experience teaching students to read
Conceptual use of research? Yes
Problem frames:
*Lack of sustained professional development
*Lack of focus on basic skills (phonics and phonemic awareness) in reading instruction
*Lack of higher-order comprehension
Consequence of problem framing: Debate shifted to focus on effective approaches to professional development and comprehension strategies

Decision: Kindergarten assessment
Evidence as signal: None
Conceptual use of research? No
Problem frames:
*Existing kindergarten assessment is not reliable enough to guide policy decisions
*Existing assessment is developmentally appropriate
Consequence of problem framing: Decision to develop new kindergarten assessment, with debate about what this assessment should entail

Decision: Coordinating professional development with compensation plan
Evidence as signal:
*List of district professional development offerings
Conceptual use of research? No
Problem frames:
*Lack of coordination between district offerings and union professional compensation plan
Consequence of problem framing: Began conversations to forge connections between professional development offerings and compensation plan

Decision: School-site coaching initiative
Evidence as signal:
*Evaluation report from another district
*Anecdotal evidence from other districts
Conceptual use of research? Yes
Problem frames:
*School-site coaches in new initiative do not have adequate training
Consequence of problem framing: Discussed ways to provide support to on-site coaches

Decision: Professional development for district personnel
Evidence as signal:
*Anecdotal evidence of lack of effectiveness of district professional development
Conceptual use of research? No
Problem frames:
*Professional development staff lack capacity
Consequence of problem framing: Discussed ways to develop district personnel capacity

Decision: Focus of follow-up professional development in mathematics, 2003–2004
Evidence as signal:
*Surveys of teacher participants
*Anecdotal information about teacher needs
Conceptual use of research? No
Problem frames:
*Summer professional development did not adequately address assessment
*Summer professional development did not adequately address professional learning communities
*Summer professional development did not adequately address differentiated instruction
Consequence of problem framing: Picked up on suggestion from external consultant to use an approach that appeared to address professional community; further conversation about details of implementation

Decision: Focus of follow-up professional development in reading, 2003–2004
Evidence as signal:
*Surveys of teacher participants
Conceptual use of research? No
Problem frames:
*Summer institute did not spend enough time on differentiated instruction
Consequence of problem framing: Conversation shifted to ways to address differentiation

Decision: Schedule change to create time for professional development
Evidence as signal:
*Limited participation in follow-up professional development
*Anecdotal evidence of limited spread of approaches at school site
*Limited resources to support follow-up sessions
Conceptual use of research? Yes
Problem frames:
*School-year professional development is not situated at the school site
Consequence of problem framing: Discussed alternatives for structuring follow-up sessions that could bring them into the schools

Decision: Algebra adoption
Evidence as signal:
*Low test scores
*Anecdotal evidence of challenges facing students who transfer schools
Conceptual use of research? No
Problem frames:
*Having two algebra series in the district is a problem for students who move from school to school
*It’s difficult to provide support for two different series
*It’s not the curriculum, it’s the instructional strategies
Consequence of problem framing: Decision to adopt a single curriculum for the district; debate turned to which curriculum

Decision: Literacy framework
Evidence as signal:
*Anecdotal evidence of diversity of classroom practice
*Disagreement in district about what constitutes high-quality reading instruction
Conceptual use of research? No
Problem frames:
*Lack of congruence in district policy
*Prior framework is too focused on activities, with not enough focus on principles for literacy instruction
*Not enough attention to reading skills in prior framework
*Prior framework is not rooted in research
Consequence of problem framing: Embarked on an inclusive process bringing together people in different parts of the district and external experts to craft new K–12 literacy framework

Decision: Mathematics framework
Evidence as signal:
*Low mathematics scores
*Anecdotal evidence of diversity of classroom practice
*Complaints from the community about existing mathematics curriculum
*Disagreement in district about what constitutes high-quality mathematics instruction
Conceptual use of research? No
Problem frames:
*Lack of congruence in district policy
Consequence of problem framing: Hired like-minded consultants to write framework

Decision: Secondary literacy framework
Evidence as signal:
*Anecdotal evidence of diversity of classroom practice
Conceptual use of research? No
Problem frames:
*Lack of congruence in district policy
*Prior framework did not address secondary
*Current writing team for framework is not sensitive to needs of students of color
Consequence of problem framing: Secondary group split off from elementary group and pursued a framework focused on cultural competence

Decision: Focus of summer professional development in mathematics, 2004
Evidence as signal:
*Continued low achievement in mathematics
*Feedback from participants the previous summer
Conceptual use of research? No
Problem frames:
*Lack of differentiated instruction
*Lack of formative assessment
Consequence of problem framing: Discussion focused on how to integrate differentiation into summer professional development

Decision: Focus of summer professional development in K–5 literacy, 2004
Evidence as signal:
*Feedback from participants the previous summer
*Anecdotal evidence from working in schools
Conceptual use of research? Yes
Problem frames:
*Lack of differentiated instruction
*Lack of cultural competence
*Teachers don’t have ways to organize classrooms to enact reading strategies
*Not enough practical activities in professional development
Consequence of problem framing: Met in small groups to plan professional development centered on differentiated instruction

Decision: Focus of summer professional development in secondary literacy, 2004
Evidence as signal:
*Anecdotal evidence from working in schools
Conceptual use of research? Yes
Problem frames:
*Lack of cultural competence
*Lack of direct instruction in reading strategies
Consequence of problem framing: Discussion focused on differentiated instruction and cultural competence

Decision: Underperforming schools initiative
Evidence as signal:
*Persistent low test scores in subset of schools
Conceptual use of research? No information
Problem frames: No information
Consequence of problem framing: No information

Decision: Homework policy
Evidence as signal:
*Persistent low test scores in subset of schools
Conceptual use of research? No
Problem frames:
*Students do not do enough homework
*Homework is not the issue
Consequence of problem framing: Mandated homework for weekends and holidays; discussion of implementation details

Decision: Focus of follow-up professional development in mathematics, 2004–2005
Evidence as signal:
*Anecdotal evidence of limited spread of approaches at school site
Conceptual use of research? Yes
Problem frames:
*Schools do not have ownership over the professional development work
Consequence of problem framing: Decision to leave follow-up planning to the school site

Decision: Focus of follow-up professional development in literacy, 2004–2005
Evidence as signal: None
Conceptual use of research? No
Problem frames:
*Schools do not have resources to provide professional development on their own
Consequence of problem framing: Decision to provide prepackaged professional development modules to schools; discussion of what to include in those modules

Decision: Redesign of coaching initiative
Evidence as signal:
*Negative teacher feedback on coach professional development
Conceptual use of research? No
Problem frames:
*The district does not have a clear vision of coaching
*The professional development provider was not responsive to district needs
Consequence of problem framing: Discussion turned to new ways to design on-site coaching

Decision: Elimination of late-start days
Evidence as signal:
*Parent feedback
*Evidence from surveys of effectiveness of late-start days
*Anecdotal evidence that late-start days are of poor quality
Conceptual use of research? Yes
Problem frames:
*Late-start days are difficult for parents
*Late-start days foster high-quality professional development
Consequence of problem framing: Late-start days were eliminated

Note: Decisions are arrayed in chronological order; frame(s) in bold are the ones that prevailed, shaping decision outcomes.


APPENDIX B

Uses of Evidence in Prognostic Framing



Decision: Supplementary mathematics curriculum
Dominant solution frames:
*District should adopt computer-based basic skills curricula
*District should adopt supplemental curriculum that teaches skills in context of real-life problems
Use of evidence in framing debates:
*Instrumental: Considered evidence of effectiveness of different curricula; others critiqued the methodology of the studies
*Symbolic: Used “research says” to critique proposed curricula
How debate was resolved: Use of authority
Decision outcome: Assistant superintendent picked one curriculum

Decision: Focus of summer professional development in mathematics, 2003
Dominant solution frames:
*Professional development should focus on number sense, assessment strategies, differentiation, and community of learners
*Professional development should focus on rigorous mathematical content
*Professional development should be situated in district-adopted curriculum
Use of evidence in framing debates:
*Instrumental: Used achievement data that documented dip in math scores in sixth grade to add sixth-grade teachers to what had been a K–5 institute
How debate was resolved: N/A
Decision outcome: Professional development focused on number sense, using the district-adopted curriculum

Decision: Focus of summer professional development in literacy, 2003
Dominant solution frames:
*Professional development should be practical
*Professional development needs to be rooted in research on how students learn to read
*Professional development should focus on differentiated instruction to meet the needs of students from different racial and language groups
*Professional development should be modeled on successful program in the district
*Professional development should focus on comprehension strategies
Use of evidence in framing debates:
*Conceptual: Research articles influenced how some participants thought about the nature of literacy
*Instrumental: Upper-level decision makers used data from schools involved in innovative approach to advocate expansion of the approach to all schools in the district
*Symbolic: Used “research says” to justify already-favored focus; selective use of research
How debate was resolved: Narrowing participation
Decision outcome: Focused on practical approaches to comprehension strategies

Decision: Kindergarten assessment
Dominant solution frames:
*Kindergarten assessment should be psychometrically valid
*Kindergarten assessment should assess key early reading skills
*Kindergarten assessment should be developmentally appropriate
*Kindergarten assessment should be formative, providing information that helps teachers make instructional decisions
*Assessments are not valid for English language learners
Use of evidence in framing debates:
*Symbolic: District personnel provided evidence of psychometric validity to support use of assessments; others critiqued the methodology of the studies used to determine validity
How debate was resolved: Use of authority
Decision outcome: Kindergarten assessment was expanded to include other early reading skills

Decision: Coordinating professional development with compensation plan
Dominant solution frames:
*New professional development initiative should be coordinated with professional compensation plan
Use of evidence in framing debates: None
How debate was resolved: N/A
Decision outcome: Infrastructure put in place for teachers to use professional development as part of compensation plan

Decision: School-site coaching initiative
Dominant solution frames:
*District should hire consultant to provide training to school-site coaches
*Any new coaching effort should build on existing union-led effort
Use of evidence in framing debates:
*Symbolic: Provided assurances that consultant promotes “research-based” approach
How debate was resolved: Structural elaboration
Decision outcome: New district coaching initiative runs in parallel to existing union-led initiative; hired consultant to train school-site coaches

Decision: Professional development for district personnel
Dominant solution frames:
*District professional development providers need coaching
*Coaches should be nationally known
Use of evidence in framing debates:
*Symbolic: Provided assurances that consultant promotes “research-based” approach
How debate was resolved: N/A
Decision outcome: Hired consultants to coach literacy and mathematics personnel; subsequently fired literacy coach for “not meeting district needs.” At close of study, still had not hired new consultant

Decision: Focus of follow-up professional development in mathematics, 2003–2004
Dominant solution frames:
*Follow-ups should focus on lesson study
Use of evidence in framing debates: None
How debate was resolved: N/A
Decision outcome: Lesson study became centerpiece of follow-up

Decision: Focus of follow-up professional development in reading, 2003–2004
Dominant solution frames:
*Follow-up sessions should each be focused on differentiation
*We should organize by grade level because each grade level needs different things
Use of evidence in framing debates: None
How debate was resolved: Structural elaboration
Decision outcome: Planned separate follow-up sessions focused on differentiation

Decision: Schedule change to create time for professional development
Dominant solution frames:
*The district should extend the professional development schedule used in high schools to elementary schools
Use of evidence in framing debates: None
How debate was resolved: N/A
Decision outcome: District instituted late-start days

Decision: Algebra adoption
Dominant solution frames:
*The district should adopt a reform mathematics curriculum
*The district should adopt a traditional mathematics curriculum
*The district should adopt a compromise curriculum
Use of evidence in framing debates:
*Conceptual: Read research on how students learn algebra; used this to create dimensions to evaluate curricula
*Instrumental: Systematically analyzed each curriculum along multiple dimensions
*Symbolic: Used “research says” to argue against particular approach
How debate was resolved: Developed shared understanding; use of authority
Decision outcome: Adoption committee reached consensus on compromise curriculum; superintendent canceled adoption because of lack of funds

Decision: Literacy framework
Dominant solution frames:
*The framework should be practical, offering teachers concrete ways of teaching reading
*The framework should be guided by research on how students learn to read
*The framework should emphasize a balanced approach, with strong focus on early reading skills
*The framework should focus on reading as the construction of meaning
Use of evidence in framing debates:
*Symbolic: Extensive use of “research says” to advocate particular approaches; intentional inclusion of studies to gain legitimacy
How debate was resolved: Narrowing participation; structural elaboration; use of authority
Decision outcome: Document was released to teachers; superintendent subsequently withdrew it. Work continued on the document and was still not complete at close of study

Decision: Mathematics framework
Dominant solution frames:
*Framework should reflect a “big picture or conceptual approach” to mathematics and be aligned to NCTM standards
*Framework should have grade-level indicators to guide teacher practice
Use of evidence in framing debates: No information
How debate was resolved: N/A
Decision outcome: Framework aligned with and informed by NCTM standards was completed and reviewed by key stakeholders

Decision: Secondary literacy framework
Dominant solution frames:
*Instructional approaches in framework should “reflect the population the district serves”
*Framework should be rooted in research on how adolescents learn to read
Use of evidence in framing debates:
*Symbolic: Extensive use of “research says” to justify particular approaches; selective attention to research literature
How debate was resolved: Narrowing participation
Decision outcome: No draft of the framework was ever produced

Decision: Focus of summer professional development in mathematics, 2004
Dominant solution frames:
*Professional development should address differentiation by focusing on learning styles
*Professional development should address differentiation by focusing on ongoing assessment and modifying lessons
*Teachers can’t differentiate unless they have a strong grasp of mathematical content, so institutes must focus on content
Use of evidence in framing debates:
*Conceptual: Read research about professional development and teacher learning
*Instrumental: Read research on student learning styles and the Myers-Briggs to determine whether it was an appropriate approach for their needs
How debate was resolved: Developed shared understanding
Decision outcome: Middle schools focused on learning styles using Myers-Briggs analysis; elementary schools focused on ongoing assessment using lesson study

Decision: Focus of summer professional development in K–5 literacy, 2004
Dominant solution frames:
*Summer professional development should focus on guided reading
*Summer professional development should focus on cultural competence
*Summer professional development should focus on ongoing assessment
Use of evidence in framing debates:
*Symbolic: Used “research says” and anecdotal reports of teacher needs to justify approach; selective attention to research
How debate was resolved: Narrowing participation; structural elaboration; use of authority
Decision outcome: Assistant superintendent mandated the use of a speaker representing a particular program; professional development providers divided into grade-level groups and planned separately

Decision: Focus of summer professional development in secondary literacy, 2004
Dominant solution frames:
*Summer professional development should focus on cultural competence
*Summer professional development should focus on research-based strategies for improving content-area comprehension
*Summer professional development should include emphasis on direct, explicit approaches to comprehension instruction
Use of evidence in framing debates:
*Symbolic: Used “research says” to justify approach; had researcher come and summarize research on best practices in secondary literacy, but did not act on the findings
How debate was resolved: Narrowing participation
Decision outcome: Professional development focused on cultural competence

Decision: Underperforming schools initiative
Dominant solution frames:
*Underperforming schools need an infusion of resources and special attention from the superintendent
*Underperforming schools should focus on teaching and learning, not test preparation
Use of evidence in framing debates:
*Symbolic: Used “research says,” references to outcome data from nearby districts, and results of a needs assessment to justify approach
How debate was resolved: Narrowing participation; use of authority
Decision outcome: Superintendent instituted program for underperforming schools similar to program she used in another district

Decision: Homework policy
Dominant solution frames:
*Children should do homework over weekends and holidays to avoid gap in their learning
*Children should do homework to practice concepts found on standardized tests
*Homework should not be standardized; rather, it should be rooted in classroom experiences
*Homework should provide meaningful learning opportunities for students
Use of evidence in framing debates:
*Symbolic: Used “research says” to justify approach in communication to parents; opponents used “research says” to discredit approach
How debate was resolved: Narrowing participation; use of authority
Decision outcome: Superintendent mandated new policy

Decision: Focus of follow-up professional development in mathematics, 2004–2005
Dominant solution frames:
*Professional development should be situated at school site
Use of evidence in framing debates:
*Conceptual: Notion of situated professional development rooted in readings about high-quality professional development
How debate was resolved: N/A
Decision outcome: Focus of follow-up was left to individual schools

Decision: Focus of follow-up professional development in literacy, 2004–2005
Dominant solution frames:
*Follow-ups should be an extension of the institute
*Schools should have latitude in how they use follow-up sessions
Use of evidence in framing debates: None
How debate was resolved: N/A
Decision outcome: District offered schools modules that extended the institute; schools chose whether to use them

Decision: Redesign of coaching initiative
Dominant solution frames:
*The principal needs to be the leader of coaching in a building
*Coaching should be focused on particular content areas
*Coaches need more time to do their work
*We should use the New Zealand model
Use of evidence in framing debates:
*Conceptual: Discussion of research changed the way some participants thought about attributes of successful coaching programs
*Symbolic: “Research says” was invoked in discussions; advocates of particular programs assured others that they were “research based”
How debate was resolved: Debate was not resolved
Decision outcome: Coaching initiative remained in discussion at close of study

Decision: Elimination of late-start days
Dominant solution frames:
*The district should continue late-start days because they are a vehicle for situated professional development
*The district should end late-start days because they cause child care difficulties for parents
Use of evidence in framing debates:
*Conceptual: Notion of situated professional development rooted in readings about high-quality professional development
*Instrumental: Survey of school experiences of late-start days revealed positive experiences; anecdotal evidence from parents suggested that they were unhappy with late-start days
*Symbolic: Position paper in support of late-start days cited studies of high-quality professional development to support their continuation
How debate was resolved: Use of authority
Decision outcome: Superintendent canceled late-start days

Note: Decisions are arrayed in chronological order; frame(s) in bold are the ones that prevailed, shaping decision outcome.

 




Cite This Article as: Teachers College Record Volume 111 Number 4, 2009, p. 1115-1161
https://www.tcrecord.org ID Number: 15232
