Districts’ Efforts for Data Use and Computer Data Systems: The Role of Sensemaking in System Use and Implementation

by Vincent Cho & Jeffrey C. Wayman - 2014

Background: Increasingly, teachers and other educators are expected to leverage data in making educational decisions. Effective data use is difficult, if not impossible, without computer data systems. Nonetheless, these systems may be underused or even rejected by teachers. One potential explanation for such troubles may relate to how teachers have made sense of such technologies in practice. Recognizing the interpretive flexibility of computer data systems provides an avenue into exploring these issues.

Objective: This study aims to explore the factors affecting teachers’ use of computer data systems. Drawing upon the notion of interpretive flexibility, it highlights the influence of sensemaking processes on the use and implementation of computer data systems.

Research Design: This comparative case study draws upon interview and observational data gathered in three school districts. Matrices were used to compare understandings about data use and about computer data systems within each district by job role (i.e., central office member, campus administrator, and teacher), as well as across districts.

Results: Our findings challenge commonplace assumptions about technologies and their “effects” on teacher work. For example, access to a system or its functions did not determine changes of practice. Paradoxically, we even found that teachers could reject or ignore functions of which they were personally in favor. Although computer data systems can support changes of practice, we found that agency for change rested in people, not in the technologies themselves. Indeed, teachers’ sensemaking about “data” and “data use” shaped whether and how systems were used in practice. Although central offices could be important to sensemaking, this role was often underplayed.

Conclusion: We provide recommendations regarding how researchers and school and district leaders might better conceptualize data and data systems. These recommendations include recognizing implementation as an extended period of social adjustment. Further, we emphasize that it is the unique duty of school and district leaders to share their visions regarding data use, as well as to engage in dialogue with their communities about the natures of schooling and data use.

INTRODUCTION

A typical belief about technologies is that they improve work. Faced with a task that is burdensome or time-consuming, we seek out tools that will make our efforts more productive. For example, the work of effective data-informed decision-making has been portrayed as difficult without the benefits of computerization (Knapp, Swinnerton, Copland, & Monpas-Huber, 2006; Means, Padilla, DeBarger, & Bakia, 2009; Wayman, Cho, Jimerson, & Spikes, 2012; Wayman, Stringfield, & Yakimowski, 2004). Via computer data systems, teachers now have access to a variety of functionalities for enhancing their decisions about students. These include: the integration of longitudinal data; the disaggregation of those data by class, student, or other demographic factors; and the calculation of future performance (e.g., Brunner et al., 2005; Chen, Heritage, & Lee, 2005; Wayman, Cho, & Shaw, 2009). Thus, computer data systems may be seen as a natural remedy for the technical problems associated with data use. For districts, such remedies may be in high demand.
Standards-based accountability policies continue to elevate the importance of effective data use (Ingram, Louis, & Schroeder, 2004; Spillane, Parise, & Sherer, 2011; Valli & Buese, 2007), and districts have continued to increase their investment in data systems (Burch, 2010; Means et al., 2009). Despite increased access to computer data systems, however, there is increasing evidence that data systems may be underutilized (Means et al., 2009; Wayman et al., 2009; Wayman et al., 2012). Contrary to the hope that such tools might become integral to teachers’ everyday work, data systems seem to be used more sporadically. If data systems can improve work, why have districts’ investments not resulted in more widespread teacher use?

A conclusion of this paper is that providing access to technology systems may be only a first step to supporting changes to the use of data in practice. Although some districts have attempted to address the technical challenges associated with data use, they have yet to adequately attend to the social and interpretive dynamics underlying system implementation. Thus, our aim in the present study is to examine teachers’ use of data systems by focusing on the role of sensemaking in system implementation. Drawing upon the concept of interpretive flexibility (e.g., Leonardi, 2009a; Pinch & Weibe, 1984; Winner, 1993), we examine understandings about data systems and how these might vary among social groupings, such as district or job role (i.e., central office member, campus administrator, and teacher). In doing so, we are guided by one fundamental research question: What factors affect teachers’ use of computer data systems? To answer this question, we rely upon data from a comparative case study of three districts, juxtaposing how they and people in various roles conceived of data use and computer data systems.

LITERATURE REVIEW

To support our perspective, it is necessary to review the literature in two areas: computer data systems for examining student data, and the interpretive flexibility of technology. In the following narrative, we provide a section for each.

COMPUTER DATA SYSTEMS

In order to understand computer data systems, it is helpful to understand their prominence in data use. At the heart of data use initiatives is the premise that information about students can and should be leveraged in educational decisions. In line with Black and Wiliam’s (1998) promotion of formative assessment, advocates for data use thus envision bringing together a wide variety of data as feedback about student learning (e.g., Hamilton et al., 2009). Some of these data may include standardized end-of-year or interim tests, many of which are intended to align to state standards (Ingram et al., 2004; Polikoff, Porter, & Smithson, 2011; Spillane et al., 2011; Stecher, Hamilton, & Gonzalez, 2003). Other commonly collected forms of data may include grades, attendance, portfolios of student work, and student demographic information (e.g., ethnicity, gender, or special needs status) (Lachat & Smith, 2005; Supovitz & Klein, 2003; Wayman & Stringfield, 2006). Without technologies (i.e., computer data systems), it would be difficult, if not impossible, for educators to manage and analyze such prodigious assemblies of data. Wayman et al. (2004) have reviewed the common characteristics and categories of data systems, while others (e.g., Brunner et al., 2005; Chen, Heritage, & Lee, 2005; Wayman & Stringfield, 2006) have described the use of individual systems in practice.
Features that are increasingly available to teachers include: customized reporting for teachers or other audiences; information about individual student strengths and weaknesses; information about group- or classroom-level instructional needs; disaggregation by ethnicity, at-risk status, or other special program status; and longitudinal, diagnostic, or predictive calculations of student performance. As new functionalities and associations with new technologies (e.g., mobile devices and classroom clickers) continue to emerge, definitional boundaries around systems continue to blur (Wayman, Cho, & Richards, 2010).1

Computer data systems are not simply passive repositories for data about students. They have the ability to combine, compare, and transmit in ways that could benefit educators’ decisions for students. They can deliver not only predictions about test achievement and other measures of progress, but also links to outside instructional resources that are relevant to the data at hand (Wayman, Cho, & Shaw, 2009). Such potential has led to positive images around computer data systems. For example, educators have reported having a more robust sense of students’ needs by drawing upon a wide variety of data (Lachat & Smith, 2005; Wayman & Stringfield, 2006). Touting the benefits of one particular system, Tucker (2010) exemplifies much of the optimism regarding the potential for computer data systems to benefit teaching and learning:

With a couple of mouse clicks, classroom teachers can now get such data as interim test scores, subject grades, attendance records, and English language learner status on a single computer screen. Thanks to [this system], a high school instructor who may have a student for just one period a day can now see how that student is progressing across all courses, and can identify students at risk of academic failure. Teachers are now able to spot long-term learning trends, even for students who moved often among schools and who have only just arrived in the class. (p. 2)

In the hopes of leveraging these sorts of technological advancements, districts’ investment in computer data systems has continued to grow (Burch & Hayes, 2009; Means et al., 2009). The observation we draw from this literature base and from our own research experience is that views about data systems have tended toward technological determinism. In other words, effects on work are assumed to originate from the data systems themselves. These effects are assumed to be universal and predetermined by the tool in question. In this view, bringing about change becomes a matter of funding and offering access to the right system. Technological determinism creates blind spots in educational policies by equating technology adoption to educational progress (Brooks, 2011). Accordingly, it may be time to problematize computer data systems and to question their agency in creating educational change. In other words, technologies might have the potential to remedy some of the technical problems of data use, but whether (and how) they get used is a people problem. It is also necessary to address the human and contextual factors at play in the enactment of technologies. Sensemaking perspectives may provide alternative ways of understanding these other factors around technology use.
Sensemaking perspectives have been applied toward unraveling issues in policy implementation (Datnow, 2006; Palmer & Snodgrass Rangel, 2011; Spillane, Reiser, & Reimer, 2002), and interest in applying sensemaking frameworks to problems in data use has continued to mount (Coburn & Turner, 2011; Honig & Venkateswaran, 2012; Young, 2006). Even so, we are unaware of any study that has specifically addressed sensemaking issues around computer data systems. Nonetheless, ways to begin to understand sensemaking about technologies are prevalent in fields such as Information Systems (IS) and Management Information Systems (MIS). One example of this is the notion of interpretive flexibility. In the following section, we draw upon scholarship in these fields to explore this notion.

THE INTERPRETIVE FLEXIBILITY OF TECHNOLOGY

One issue at stake in studying technologies and their use involves whether they have predetermined outcomes for organizations. Technologically deterministic perspectives tend to portray technologies as imbued with preset goals or effects on work (e.g., Barley, 1990; Brooks, 2011; Orlikowski & Iacono, 2001). Thus technologies might be assumed to naturally and universally make work faster, reduce burden or tedium, or otherwise improve efficiency. Technologically deterministic perspectives can be contrasted with those that emphasize sensemaking. Take, for example, problems around teachers’ underuse of data systems (Means et al., 2009; Wayman, Cho, & Johnston, 2007; Wayman et al., 2009). Some factors associated with these problems include: teacher dissatisfaction with the data available or their timeliness; trouble finding the data desired; lack of familiarity with the data and their potential uses; and ease of use. From a technologically deterministic perspective, responses would involve ensuring that systems provided the right data, redesigning systems to work more quickly or easily, and improving users’ knowledge about the tools. Although such thinking can have merits, it also assumes that how people use their technologies and do their jobs is purely rational (Orlikowski & Barley, 2001; Scott & Davis, 2007).

On the other hand, sensemaking perspectives might see these challenges as symptoms of a larger problem: how people have understood and interpreted their worlds. Sensemaking frameworks portray people as constantly processing and adapting to information from the environment (Brown & Duguid, 1991; McDaniel & Driebe, 2001; Weick & Roberts, 1993). Such perspectives help to explain nonrational or counterintuitive behaviors in organizations (McDaniel & Weick, 1989; Scott & Davis, 2007). For example, the logic promoted by technologically deterministic perspectives is that utility and practicality (i.e., the right technology delivering the right data) are what will drive technology use. Under this logic, technologies that offer obvious benefits and advantages should win out over others. To the contrary, Leonardi (2009a) describes how negative conversations and opinions spread early on during technology adoption can derail the use of a system that promised to deliver what people said they wanted. Further, educators’ use of parallel systems offers another counterpoint to the conventional logic. Wayman et al. (2007) found that some teachers maintained traditional paper gradebooks simultaneously with online gradebooks, despite the burdens of the former and promises of the latter.
In this same district, schools were found to be using the district’s student information system and their own systems for the same data, despite the obvious human and financial costs this entailed. Considerations such as these suggest that although the material characteristics of technological artifacts are important, accounting for sensemaking around those artifacts may be equally important.

The notion of interpretive flexibility provides an avenue into examining the sense people have made about their technologies. It begins by recognizing that social groups can have different values, expectations, and beliefs about the world.2 Originating in the scholarship on the Social Construction of Technology (SCOT) (e.g., Pinch & Weibe, 1984; Winner, 1993), interpretive flexibility suggests that technological artifacts can carry different meanings for different social groups. Therefore, although designers and planners may have certain hopes for the use of computer data systems, interpretive flexibility suggests that the reality of the artifact is determined by users and their social interactions. When viewed across social groups, the significance and purposes of a technology can be seen as varying. This contrasts with current frames of understanding in the scholarship on data systems. In the light of interpretive flexibility, technological artifacts are not neutral, nor are they imbued with fixed, universal meanings. Rather, users’ values, interests, and assumptions shape the experience and enactment of a technology (Leonardi, 2009b; Markus & Robey, 1998; Orlikowski & Barley, 2001; Orlikowski & Iacono, 2001). In short, technologies are not simply machines, dropped into organizations without the need to account for sensemaking. Instead, each exists in a particular time and place, among particular sets of people. Technologies are not simply “the computer.” Rather, the burden on researchers is to attend to the narratives, social interactions, and experimentation involved in their use (Barley, 1990; Brown & Duguid, 1991; Davidson & Chismar, 2007; Orlikowski, 1996). This sheds light on what technologies really mean to work. Consequently, interpretive flexibility leads one to locate agency for changes to work not in tools, but in people.

For educational researchers, interpretive flexibility affords a useful way to conceptualize computer data systems, as well as the data they afford. Examining the meanings of data systems across districts or roles within districts (i.e., social groups) can help to illuminate why data systems might be underutilized. For example, some district leaders or vendors might be surprised when a powerful, easy-to-use tool fails to make the impact on teachers’ data use that they had envisioned. Whereas a natural response might be to doubt the tool or its design, a better response might be to investigate what sense teachers have made of those artifacts. If leaders or vendors are surprised, it might be because they neglected to realize that teachers’ visions about a system’s fit to practice could be different from their own. In this manner, interpretive flexibility sheds light on social and organizational issues in district data initiatives.

METHODS

The notion of interpretive flexibility allows us to attend to the ways in which meanings around data and around computer data systems might vary according to social groupings. The social groups examined in this study were school districts and job roles within districts.
Each school district was considered its own case, or bounded system (Merriam, 2009; Stake, 1995), with the roles of central office member, campus administrator, and teacher embedded within them. We use the term “people” to describe these groups all together, without reference to role.

ACCESS TO THE STUDY DISTRICTS AND THEIR CONTEXTS

The data for the present paper were collected during the second year of a larger, three-year study on district data use. This larger study was aimed at determining data use practices common among school districts, as well as guidance around how to improve district policies around data use. Data were collected in three districts in Texas.3 Boyer School District was a high-achieving district of approximately 8,000 students that mostly served a non-Latino White population,4 less than 5% of whom were economically disadvantaged. Gibson School District was a district of mid-range achievement that served approximately 25,000 students of various ethnic backgrounds,5 half of whom were economically disadvantaged. Musial School District was a district of mixed achievement that served approximately 45,000 students of various ethnic backgrounds,6 a third of whom were economically disadvantaged. The districts in this study were not known for (nor were they selected because of) their success at using data.

These districts were subject to the Texas Assessment of Knowledge and Skills (TAKS) test, the statewide criterion-referenced accountability test. Other assessments used by each district, however, varied. For example, Musial used district-wide benchmark tests, which were intended to align to the state test. Gibson, however, used interim assessments that were intended to align to specific curriculum units and lesson objectives that teachers were expected to have taught. Boyer used benchmark tests, but these were rarely mentioned by participants.

Although the study districts had numerous data systems, two were the focus for the present study. The first was Front End and the second was Flightpath.7 Although different, these systems overlapped in that they afforded access to state- and district-level test data, drill down by objectives for students and classrooms, and other forms of disaggregation. Front End had been designed by the Musial district’s technology department in coordination with a teacher user group and the associate director for data use. Front End drew upon a wide assortment of data from the district’s data warehouse. This was considered an improvement over the district’s previous practice of emailing large Excel spreadsheets of student data. Before the advent of Front End, site licensing issues had prevented teachers from having direct, individualized access to student data. Accordingly, Front End was new to the teachers in this study. The second system, Flightpath, was an assessment system that was present in two districts. In Gibson, Flightpath had been in use for teacher appraisal data. During our study, it was being expanded to take the place of the previous assessment system. It delivered data from state tests, the district’s interim assessments, and tests designed at the campus level. In Boyer, Flightpath was implemented similarly. It was accessible by educators throughout the district for state test and district benchmark data.

DATA COLLECTION

Data collection took place from March 2010 to January 2011. Data sources included interviews (e.g., individual interviews and focus groups) and observations.
In total, 82 central office members, campus administrators, and teachers participated in individual interviews or focus groups. Generally, central office members participated in interviews individually and campus-level educators participated in focus groups.

Interviews

Our interview sampling was aimed at developing a sense for the unique perspectives of job roles within each district. The sample of central office members began with a start list that then expanded based upon other participants’ recommendations about other potentially important or knowledgeable informants. Seventeen central office members were interviewed in total. The sample of campus-level educators began by choosing one high school, middle school, and elementary school at random from each district (nine schools overall). In each school, two focus groups were conducted, one for administrative teams and the other for teachers. Principals were asked to participate in their campus’s administrator focus group and to choose members of their administrative team who could contribute to the conversation. Teachers were selected from each campus at random, with checks to ensure a variety of grade levels or content areas. Teacher focus groups typically consisted of four to six teachers. In all, 19 campus administrators and 46 teachers participated in focus groups.

Before beginning each interview (individual or focus group), participants were informed about our interest in their uses of data and computer data systems. They were informed that their remarks would be anonymized, but that in aggregate their insights could contribute to recommendations for each district or for the field at large. All interviews were recorded and transcribed. Interviews followed semi-structured protocols (e.g., Merriam, 2009; Miles & Huberman, 1994; Weiss, 1994), which are provided in Appendix A. All protocols addressed the same lines of inquiry, but differences in audience (i.e., central office, campus administrator, or teacher) made some questions better jumping-off points than others. For example, it was more sensible to start central office members off with questions about the district before discussing particular issues around teachers’ data use or data system use. On the flip side, it was more sensible to start teachers off with questions about their actual work using data. Conforming changes in wording were also made when asking about interactions with people in other job roles. In the end, all protocols addressed issues that included: district initiatives around data use and around data systems; how teachers used data; how teachers used computer data systems; and the fit of data systems to teachers’ work.

Observations

Observations provided a first-hand sense for how districts were conceptualizing data and computer data systems. We were purposeful in selecting field experiences that could provide a glimpse into how people perceived data systems, challenges to system implementation, and how particular data systems might be considered important to work in the district. In line with the partnership between our research team and the study districts, our selection of these events was facilitated by central office members. They arranged access to or otherwise invited us to these events. Thus, the overall sample of events included trainings around computer data systems and principals’ meetings, as well as meetings between the research team and central office leaders about the larger study. In total, 13 observational sessions were conducted.
Most sessions lasted between 60 and 90 minutes, and the average duration of an observation was approximately 80 minutes. All observational data were collected by the first author. Following the recommendations of Emerson, Fretz, and Shaw (1995), details during field experiences were quickly recorded as jottings, which were later expanded into lengthier fieldnotes. A personal shorthand using abbreviations, acronyms, and symbols helped keep the jottings process speedy and fluid. For example, the word “teacher” would be written as “tchr” and circled; “PowerPoint” would be written as “PPT.” Throughout the margins of these jottings, the time of day would be recorded every three to five minutes, helping to provide a sense for the overall pace of events during the field experience. Efforts were made to directly quote or paraphrase statements made by participants. Other details collected included actions, mannerisms, and reactions among those present.

Subsequently, the first author would expand these jottings into longer, more tightly knit fieldnotes. The basic aim was to recreate the experiences in prose. This conversion process typically took place within a day after the field experiences. In this way, a typical set of completed fieldnotes for a computer data system training session would begin with a description of the environment before the actual training began. Examples of this might include the music playing, how people were seated or grouped, and information displayed on the wall. How and if those present interacted with the observer would also be recorded. When specific images from the computer data system were displayed, these would be drawn and labeled with information such as what windows were open and what kinds of information were viewable to the audience. What presenters and audience members said and did would be noted, turn by turn. Thus, if a presenter recounted a story, asked for a show of hands, or told a joke, fieldnotes would describe the content of these events, along with when they occurred. Further, fieldnotes would include descriptions of responses, such as estimates for how many audience members paid active attention, raised their hands, or laughed along. Similar notes were made if a particular data system function elicited murmurs of approval or was seen as problematic by the audience, and how.

DATA ANALYSIS

As data were being collected, they were also being analyzed. This helped to sharpen data collection and analysis jointly (Bosk, 2003; Merriam, 2009). Informally, data analysis was supported through the use of a research journal. Formal analysis of transcripts and field notes began with a start list of codes, which resulted in case portraits and culminated in the use of matrices for cross-case comparisons.

Research Journaling

In February 2010, the first author began a research journal using Microsoft OneNote. This journal aimed to record key events, decisions, musings, and questions related to the study. It supported the reliability of research findings by providing a trail for reviewing how insights were developed (Erlandson, Harris, Skipper, & Allen, 1993; Yin, 2009). As data collection progressed, a concerted effort was made to follow each interview or observation with a research journal entry that pushed data analysis forward. Two practices in this were especially beneficial. One was writing a journal entry within 24 hours after field experiences. This helped to capture ideas while they were still fresh.
The other was keeping the journal open and accessible on an adjacent computer screen while transcribing interviews or writing up field notes. On one hand, this made it easier to sketch out and to record reactions to interesting statements or occurrences. On the other, it provided a way to monitor the progress of data collection. Some entries attempted to draw connections among the various interviews and field experiences. Others evaluated whether data collection was adequately addressing issues of interest. Additionally, notes about next steps, as well as memos oriented toward connections to the literature or theory, were part of this research journal. Aside from OneNote’s basic search functions, many of these activities were facilitated by its capacity to organize entries by pages, tabs, and tags.

Codes and Coding

All data were coded using the Atlas.ti software. This allowed data to be tagged and later disaggregated according to district name and participant role (i.e., teacher, campus administrator, or central office member). Observational data were analyzed in the same manner as interview data. The aim behind the observations was to enrich and help triangulate information from our interviews (Yin, 2009), but not necessarily to afford generalizations about the districts on their own. Because observations took place in situations where it was typical to have one speaker holding the floor at a time (e.g., meetings and trainings), many statements and utterances were directly attributable to specific people. Signs of relative engagement or enthusiasm (e.g., applause, laughter) were not coded as speaking an opinion. Instead, these were seen as glimpses into the spirit of activities in the district, providing an informal check on whether any adjustments in data collection or analysis were in order.

Coding began with a start list of codes that were refined and strengthened as the study progressed (Eisenhardt, 1989; Miles & Huberman, 1994). Appendix B provides examples for how the final coding scheme was applied to raw data. Table 1 compares the start list to the final list of codes. Although the two lists are similar, explaining some of the differences between them helps to highlight how insights about sensemaking around data and computer data systems emerged. For example, although there was a code for capturing work practices and situations regarding how particular data were used, it soon became apparent that participants also devoted significant effort to explaining the “why” and “ought to” behind their data use. These things seemed neither predetermined nor universal. Thus, a code was added to capture the intents and purposes for data.
Other refinements were made to better account for sensemaking issues in initiatives for supporting data use. Whereas the initial focus was on formal plans and initiatives, it soon became apparent that these efforts were intertwined with official narratives, backstories, and participants’ personal opinions. Thus, the code for these issues was redefined to account for these dimensions around districts’ efforts. On a similar note, a code was added to separate efforts to support the use of computer data systems from those efforts around data use. Although our initial assumption was that the former was merely a subset of the latter, it became evident that labor in these endeavors was often divided. For example, each district had personnel who were primarily responsible for managing and offering professional development for the technology side of things, while others were responsible for the instructional or data use side of things. Overall, the final coding scheme threw into relief how people in each district and each role thought about data, computer data systems, and their respective uses. Next we describe our procedures for analyzing these data by district and role.

Within- and Cross-Case Analyses

Within-case analyses resulted in case portraits describing each individual district. These helped to highlight any trends in how people from different job roles saw data, computer data systems, and the central office’s influences on their use. Roles were free to echo each other or to tell their own stories. This afforded the application of replication logic across the roles, shedding light on instances in which some patterns might hold and others might fail (Yin, 2009). Accordingly, the use of matrices (Eisenhardt, 1989; Miles & Huberman, 1994) also supported the development of case portraits. In simple terms, matrices are tables with rows and columns. An example of a within-district matrix might be one comparing how different job roles regarded their computer data systems. Job roles would comprise the rows of the matrix and themes or issues of interest would comprise the columns. In this example, rows included central office members, campus administrators, and teachers. Columns included characterizations about data systems, which functionalities were actually used, and training or other district structures. Thus, the first cell would include a short summary of how central office members conceived of their data systems. In this way, cells could be compared within themes by role and vice versa. The process was repeated for the various issues of interest. Altogether, within-case analyses provided portraits of district trends and the degrees to which particular understandings were shared organizationally.

Cross-case analyses were aimed at developing generalizations that might span the three districts. These, too, were facilitated by the use of matrices. Cross-district matrices resembled the ones described above, except that they juxtaposed information from the three case portraits. Thus, a cross-district matrix examining notions about data would juxtapose what each role in a district saw as the intents and purposes of data. Also employing a replication logic (per Yin, 2009), such comparisons would reveal whether a certain perspective was particularly characteristic of a role. For example, although this was not found, it might have been possible for campus administrators to be universally (regardless of district) focused on data use as being about achievement for accountability purposes.
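To make this matrix logic concrete, the sketch below illustrates one way a within-district matrix could be assembled from coded excerpts. It is a hypothetical illustration only, not the procedure used in the study (coding was done in Atlas.ti, and matrices followed Miles & Huberman, 1994); the Python/pandas tooling, the coded_summaries table, and the example cell summaries are assumptions introduced here for demonstration.

import pandas as pd

# Hypothetical coded summaries, each tagged by district, job role, and theme.
coded_summaries = pd.DataFrame([
    {"district": "Musial", "role": "central office", "theme": "notions about data",
     "summary": "data use framed by accountability rankings"},
    {"district": "Musial", "role": "campus administrator", "theme": "notions about data",
     "summary": "accountability data help close achievement gaps"},
    {"district": "Musial", "role": "teacher", "theme": "notions about data",
     "summary": "data use means reviewing state test results"},
    {"district": "Musial", "role": "teacher", "theme": "system features used",
     "summary": "Front End used to list students by passing status"},
])

# Within-district matrix: job roles as rows, themes as columns, and cells holding
# the short summaries that would then be compared by role and by theme.
within_district = (
    coded_summaries[coded_summaries["district"] == "Musial"]
    .pivot_table(index="role", columns="theme", values="summary", aggfunc="; ".join)
)
print(within_district)

A cross-district matrix would follow the same pattern, but with summaries drawn from all three case portraits juxtaposed in the cells.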
Moreover, adding roles to cross-case matrices allowed for the comparison of overall patterns among districts. For example, one pattern might be shared notions about data throughout a district (regardless of role), versus notions about data that diverged by role within a district.

STUDY LIMITATIONS

This study relied primarily upon qualitative interview data to examine issues around data and data systems in three Texas school districts. In this way, participants’ descriptions of the use of data or of computer data systems are self-reported. Even though observational data were collected, the events observed took place away from the day-to-day demands of school-level work. In other words, although we aimed our data collection and analyses at issues around the social construction of data and of computer data systems, other methods, including a broader range of observations, might better target the issues around practice. Due to the constraints of resources, this study provides a snapshot of three districts in the span of only one year. Although our overall sample of districts afforded a general picture of what is possible in district life, more districts would enhance the generalizability of this study. On a similar note, we have provided a general picture of school-level issues in each district. Our interest was in seeking out what might be analytically generalizable or replicated (Yin, 2009) about schools in districts, regardless of school type. Thus, we were purposive in including each level (i.e., elementary, middle, high school) in our sample of schools per district. A study with larger resources might have met similar analytical aims by sampling more schools, but with the bonus of generalizability about unique issues facing particular school levels. Although a larger sample of schools was not possible, we did engage in peer debriefing with other researchers working on the larger study. Peer debriefing can enhance validity by providing researchers with venues for evaluating and investigating alternative perspectives (Erlandson et al., 1993; Merriam, 2009).

RESULTS

The notion of interpretive flexibility allowed us to attend to the ways in which meanings around data and around computer data systems might vary according to social groupings. Comparing and juxtaposing issues from each set of perspectives yielded insights into patterns that spanned groups and how particular issues had organizational consequences. Because our research question related to the factors affecting teachers’ use of computer data systems, we gave special attention to teachers’ unique perspectives and how they fit into other organizational issues. Through these analyses, we found that teachers’ sense and enactment of data systems were grounded in their unique notions about data and data use. Further, we found that these notions were consistent within educational social groups (i.e., district or role), and that the actions and messages of central office could play a role in how these notions developed. We present our findings in four sections: (a) notions about data, (b) making sense of computer data systems, (c) teachers’ enactment of computer data systems, and (d) the influence of central office. In conducting our analyses and presenting these results, we have sought to avoid normative assertions about any systems, approaches, or actions being better than others.
NOTIONS ABOUT DATA

In order to understand the factors influencing teachers’ uses of computer data systems, we first explored the district contexts for system use. In particular, we began by examining whether people in each district and each role similarly conceptualized data and data use. If the natures of data and data use were fixed, then we would have expected to find similar notions about concepts such as which data were most useful, what data are good for, and why data ought to be used. Instead, we found a diversity of notions about data, many of them closely associated with district or role. This complicated the landscape in which teachers were expected to use computer data systems, and in which other roles were expected to support their implementation. Figure 1 provides an overview of the pattern of notions about data among the districts. In the two cases of Musial and Gibson, notions about data were internally consistent throughout each district, regardless of role. Boyer, however, was particularly fragmented and without consensus. There, characterizations of data and data use were most clearly delineated by role. In the following sections, we will discuss notions of data in each of our three study districts.

Figure 1. Pattern of notions about data among the study districts

Musial’s Notions About Data

The Musial district’s perspective was deeply tied to the goal of improving according to accountability rankings. Regardless of campus or central office department, success for both students and the district was described as increasing achievement in state rankings and state test scores. In Musial, data use was consistently framed within these larger aims. The district superintendent was invested in these notions. For example, his opening address at the district leadership retreat was on evidence of success and culminated in publicly applauding various school leadership teams. Those asked to stand included teams with high overall state test passing rates, jumps in state ranking, or notable percent increases in passing rates. Central office and campus administrators also saw accountability measures as necessary to improving student learning. Central office members saw accountability measures as a way to make achievement evident. As one central office member explained, “[The state test] is how the state measures mastery and proficiency of its curriculum . . . [student achievement] is not really negotiable for our district.” Throughout the central office, participants described success according to these measures as a duty they owed to the state, to the local community, or to parents. In line with these attitudes, campus administrators saw these measures as vehicles for closing achievement gaps. They especially valued linking state standards to demographic data. As one Musial principal explained, the accountability system afforded a special, laser-like focus on students, ensuring that fewer students might fall through the cracks. Although some teachers believed that data and the best uses of data systems extended beyond accountability, most talked about state test data when it came to data use. Other forms of data were rarely mentioned. Indeed, the teachers’ most frequently mentioned practice using data was to look at state test results, noting overall categories of achievement such as tallies of students passing, failing, or scoring at the commended level. They considered this as providing a general awareness about student needs.
It also began the process of sorting students into groups, such as for tutorials or other assistance. None described using these data for individualized instructional practices (e.g., tailoring lessons or attention for particular students). Even those who found this dynamic lamentable conceded that it was just the way things were. As one teacher said:

Do they know where our kids are in terms of the [state test] scores? They know that. They have projections on whether we’ll be exemplary or recognized or whatever based on those scores. But do they know what they need to do to help us improve our instruction? I’m not really sure.

Gibson’s Notions About Data

The Gibson district’s perspective was aligned around the notion of student expectations, commonly referred to as the SEs. The difference between the SEs and state standards was in their granularity. The SEs constituted specific components of larger objectives. These provided the basis from which teachers were expected to design activities. In this light, data could take on many forms. Rather than be limited to a particular test, the object of interest was feedback for practice. The superintendent named two benefits to this approach over one focusing on the state test. First, he believed that data at the SE level were more actionable. Second, he believed that they opened up larger conversations for educators about instructional practices, lesson content, and degrees of instructional rigor. His belief was that offering students a broader range of learning experiences could come first, and that state test requirements could be fulfilled along the way. One form of SE data that was common to all Gibson schools was the district interim assessments. These were designed to target particular SEs found in the district curriculum units, such that they could provide feedback about practice. In the words of one central office member:

The [interim assessments] were generated to inform instruction. If you’ve taught this curriculum, the material in the curriculum of [our district], and your students take this test that’s directly aligned with the curriculum. . . . We want you as a teacher to look at the results to determine how well the students mastered the curriculum that you just taught over the last nine weeks.

Thus, the Gibson perspective stood in contrast to the one in Musial. Although both districts saw data use as important to serving students, data use in Musial took on the character of state test and accountability measures, while data use in Gibson was oriented toward locally designed, shorter-cycle feedback. Although educators throughout Gibson saw attention to the SEs as important, they also recognized that the district interim assessments were not the only source for this sort of feedback. In fact, all roles admitted that there were challenges associated with the interim assessments. Many of these related to agreeing about the content of the assessments or to maintaining the teaching pace suggested by curriculum guides. These were seen, however, not as pitfalls for attending to the SEs, but as motivations to use other sources of data about the SEs. For example, one principal encouraged her teachers to create common assessments that were tied to the SEs. She lauded that these common assessments were more reflective of the SEs and timelier than the interim assessments.
Similarly, although teachers mentioned the use of state test data for large-group and early-in-the-year decisions, they also described using SE-level data for informing classroom instructional practices. Examples of this included quizzes and SE-based exit tickets about a day’s lesson, both of which might be used in collaboration with colleagues.

Boyer’s Notions About Data

In contrast with the preceding districts, there were clear divisions among roles in the Boyer district regarding perspectives on data. For example, central office members felt that data should be thought of holistically, with each form of data providing another dimension or piece of the puzzle about students. Accordingly, central office views on data generally focused on the importance of seeing the whole student. At the cabinet level, the opinion was that everything informs. Accordingly, cabinet members began to promote the need for what they called informative assessment at central office meetings, principals’ meetings, and the opening of the year address. For the most part, the Boyer central office notion of data use centered on this level of general awareness about students. Rarely did central office members discuss data in terms of specific educational practices. Rather, they emphasized understanding about the needs, motivations, and histories of students. This was even the case when describing the need to target certain kinds of students. For example, central office members designed and implemented trainings for data use at each campus that were intended to impress upon teachers the importance of using data holistically. Specifically, they provided teacher teams with the data for three individual students, each representing a target group: students who excelled, economically disadvantaged students, and students who struggled academically. Teachers were asked to examine the data and to make inferences about the students’ histories of school experiences.

In contrast, campus administrators saw data more specifically in terms of practice. They saw data as being important to meeting individual students’ needs. One described this as choosing the right kids to work with on the right objectives at the right time. They also saw data as supporting programmatic decisions, such as when designing interventions for struggling students or making course scheduling decisions. Campus administrators did not make mention of informative assessment or the need to enrich one’s understanding of the whole student via data.

Teachers presented yet another view about data. As found throughout Musial, the general sentiment from Boyer teachers was that data were about testing. Unlike Musial, Boyer teachers did not focus on any particular test. Teachers at different levels named different tests, with the common thread being that teachers were required to give students assessments, but not to systematically reflect or act upon their results. In other words, Boyer teachers viewed data as being about compliance and reporting information to central office, not necessarily use. Overall, teachers felt that data failed to capture what they knew in their heart of hearts about students. As one teacher lamented:

There was a day when a lot of worth was placed on the intuitiveness of a classroom teacher. We couldn’t really put it down in numbers necessarily, but we knew a lot. [From this] you could function on what you knew. So that’ll leave a bad taste in your mouth about data. I just want my focus to be teaching these kids, and I trust my intuition to do that.
Another teacher described this in terms of being able to adapt lessons to students’ interests. She felt that her classroom was so amazing and so different when she rearranged her teaching this way, and not according to information on a piece of paper. In short, Boyer teachers tended to describe teaching as if it were an art that was not readily amenable to being driven by data. Accordingly, teachers found routine classroom data most informative. These data included grades, running records, and portfolios of student work.

SENSEMAKING ABOUT COMPUTER DATA SYSTEMS

Given that notions about data differed among social groups, we next examined if notions about computer data systems also varied. If technologies had predetermined meanings and effects on work, then we would have found similar ideas about what particular systems were about, regardless of social grouping. Instead, we found that understandings about systems varied. This depended upon groups’ preconceived notions about data use. This dynamic played a part in users’ satisfaction with systems. In the following narrative, we will discuss these notions in terms of interpretive flexibility and value judgments about data systems.

Interpretive Flexibility and Notions About Data Use

As described in our literature review, the notion of interpretive flexibility represents a departure from technologically deterministic assumptions about technology. It suggests that the same technological artifact can mean different things to different social groups (e.g., Leonardi, 2009a; Orlikowski & Iacono, 2001; Pinch & Weibe, 1984). In this study, we were interested in the social groupings of district or of role. Thus, we examined whether districts or roles with access to the same computer data systems saw these technologies in similar or different ways. Findings about the Flightpath system were particularly informative because this system was present in both the Gibson and Boyer districts. Thus, we were able to examine understandings about Flightpath across the two districts and across their individual role groups (i.e., central office member, campus administrator, teacher). Although technologically deterministic views would predict that the significance of Flightpath would be the same everywhere, we found the opposite. In total, we found four divergent perspectives about Flightpath. This pattern was consistent with the pattern around notions of data: Gibson views were consistent regardless of role, while Boyer was fragmented. Those four views are described next.

Gibson perspective on Flightpath. Regardless of role, educators in the Gibson district saw Flightpath similarly. In line with their overall focus on SEs, Gibson educators characterized Flightpath as a tool for adjusting practices according to the SEs. Educators at every level valued Flightpath’s abilities to deliver SE-level data and analyses. An example of teacher use of Flightpath was to compare performance according to SEs across the state test results and district interim tests, which could then be used in planning with other teachers.

Boyer perspectives on Flightpath. This view can be contrasted with the views in Boyer. Boyer central office members valued Flightpath for its delivery of state test and benchmark data. However, they also felt that such data alone could not provide a full picture of the whole student. Accordingly, they often named additional systems for augmenting the insights afforded by Flightpath.
For example, the PMI Plus assessment system offered short snapshots of student performance every few weeks. As one central office member described:

PMI Plus is a really good, comprehensive look at the child. I mean, Flightpath is very good for [state test] data and for the benchmarks that we give, but PMI Plus is great in terms of just identifying exactly what their weaknesses are in reading and math, and science too.

Central office members hoped that various systems would be used in concert to develop a multidimensional view of the student. Boyer campus administrators’ views about Flightpath differed subtly from those of their central office. For example, they were less concerned about Flightpath’s potential limitations. As one campus administrator described:

Flightpath is a really good program that will take all the data from the [state] test and disaggregate amongst the objectives and the student expectations. It will give a good graph and good data showing where the kid has deficiencies and where their strengths lie.

Unlike central office members, they did not focus on the importance of overall views of the whole student. They were more concerned with the ways in which systems afforded individualization toward students’ needs. In this, they saw Flightpath and PMI Plus as affording similar kinds of individualization based upon test data, and less effort was made to distinguish the two from one another. In fact, campus administrators compared Flightpath and PMI Plus to other systems that were unique to their campuses. These comparisons were on the basis of systems’ capacities to offer individualized attention to students. For example, one campus administrator described how Flightpath and PMI Plus “are for us” but the program she expected teachers to favor “is actually for the child. That’s where the child gets the practice and extra help.” This alternative system delivered tailored, computerized instruction to students based upon teachers’ development of learning plans. On a similar note, another campus team was especially enthusiastic about their system that used individualized student data to help students choose and apply to colleges, as well as to make high school course selection decisions. They praised how this system was helpful in getting the right kids in the right school.

Boyer teachers provided yet another take on Flightpath. Few described using the system. Instead, the general sentiment was that data systems were things to be used when mandated. One teacher suggested that the dearth of Flightpath use was because the system contained mostly state test data. Another lamented the tendency to rely too much on technologies in education, stating, “We’re in a society so driven by technology that we go overboard.” In fact, only one teacher spoke with any enthusiasm about Flightpath. He was new to the district and had used it elsewhere. He reported regularly using the system for analyzing teacher-made tests. Although this work typically took him between 30 and 40 minutes per test, he felt that it offered him “a bang for the buck in terms of what I can do in instruction.” He also reported offsetting this time commitment by doing most of his Flightpath work at home.

Patterns in perspectives. In a technologically deterministic world, ideas about the utility and practicality of a system are the result of the system. Thus, the same technology ought to be seen and used the same way, regardless of district or of role.
To the contrary, these four perspectives highlight the interpretive flexibility of Flightpath. As evidenced by Figure 2, we observed that the overall pattern of perspectives on Flightpath was similar to the patterns among their notions about data (Figure 1). Gibson roles were coherent with each other, while Boyer was fragmented by role. This sort of pattern was supported by findings in Musial, even though the district did not use Flightpath. In Musial, notions about data use cohered at the district level, as did understandings about Front End. We suggest that notions about data may have preceded those about computer data systems. In the next passages, we describe how ideas about data use served as educators’ lenses on data systems. These lenses influenced how systems came to be defined and otherwise evaluated for utility or practicality.

Figure 2. Study districts’ evaluations about data systems’ fit to work

Judgments about the fit of systems. In order to determine how meanings around data systems might be embedded in notions about data use, we compared each set of notions about data use (e.g., data as being about accountability rankings; data as being about SEs) to descriptions of why and how particular data systems fit. It became evident that notions about data served as users’ interpretive lenses, giving them expectations for what ought to be in systems. In turn, this led to judgments about systems’ value, utility, and practicality. Importantly, these judgments were about the match between two sets of social constructions: one set around data use, and the other around data systems. Contrary to technologically deterministic perspectives, less important were the actual data available or the system features themselves (see the following section on teachers’ enactment of system features). Not surprisingly, users were most satisfied with their systems when their expectations were met. Figure 2 provides an overview of districts’ evaluations for how systems fit teachers’ work. Overall, educators in the Musial and Gibson districts were most satisfied with their systems. Just as one might take it as a given that light bulbs produce light, they tended to take it as a given that their data systems gave them the data they expected. Because educators in each district had particular notions about what constituted data and data use, those were the data they oriented to in their systems. More specifically, teachers in Musial (where accountability data were prioritized) used Front End to access accountability data. Teachers in Musial reported creating reports about students based upon their rankings on state tests. In the words of one teacher,
I do love Front End. I love being able to read and to have that resource. After getting to know students personally, I'll go and look at it. . . . My push is to make sure that everybody is not just passing, but that [they're] also at commended [status].

Teachers in Gibson (where progress on SEs was prioritized) used Flightpath to access and monitor progress on the SEs. One Gibson teacher spoke at length:

What I like about using Flightpath is that you can break down by student, by class, by teacher, by school, by district. You can break data down into many different categories, but it makes it easy to see exactly which SE, teacher, and students were low and which ones did well. [Item analyses] give you a really good understanding of what they understand and what they don't, and they show you misconceptions as well.

Not all educators, however, felt that they were able to get what they wanted from their data systems. In these cases, the problem was in wanting things they did not perceive to be in their systems. For example, in the Musial district, a handful of teachers dissented from their colleagues about Front End's value. These particular teachers considered data and data use to be important for daily instructional practice and believed the state tests were inadequate for this purpose. Rather than state test data, they wanted "real" feedback about performance. Although the issue of "real" feedback might be interpreted as suggesting the need for systems to have the right data, we note that this may be subjective: most Musial teachers did not question Front End's value.

In Boyer, teachers were generally dismissive of test data in favor of information from regular classroom activities. As indicated above, few used Flightpath of their own accord. When teachers did describe using systems, it was in terms of complying with mandates to test and enter data using computer systems. Analyzing or reflecting about data via data systems was a personal choice, and Boyer teachers did not generally perceive that the data offered in their systems would be valuable. An exception to this was Boyer's online gradebook, which many teachers felt fit well into their daily practices. In the words of one teacher, "It's what we're doing every day, if not every period." They described using the online gradebook to determine student needs, as well as to collaborate with parents or other teachers. This form of technology use was in accord with Boyer teachers' overall attitude that the data from daily classroom activities were most valuable.

Finally, it should be noted that the Boyer district's overall pattern of fragmentation by role was evident in notions of fit among roles. For example, central office members in Boyer considered the online gradebook to be simply a gradebook, and did not predict that it would be teachers' favored system for obtaining regular feedback about students. Similarly, campus administrators tended to be more positive about the importance of particular systems to teachers' work than teachers themselves reported.

PARADOX AROUND TEACHERS' ENACTMENT OF SYSTEM FEATURES

Having found that notions about data use framed how users conceived of and evaluated their systems, we further focused on what this meant for teachers' uses of systems. As described previously, technologically deterministic perspectives tend to assume that technologies have predefined effects on work. In other words, having access to data systems would be assumed to result in expanded data use practices universally, regardless of social context.
If such changes do not materialize, then it is assumed that better, more powerful, or more desirable features must be needed. Contrary to these perspectives, we did not find that simple access to functionalities resulted in use. Rather, notions about data use served as an interpretive lens through which certain features were favored, while others were ignored or rejected. Each data system offered a range of features and functionalities, of which teachers used only a sliver.

For example, as Musial personnel developed the Front End system, they took care to attend to feedback from the district's teacher user group and associate director for data use. Consequently, Front End provided teachers with a number of features they had asked for, such as direct and timely access to data about state test results, student attendance, tardies, discipline, and district benchmark test scores. Despite the potential value of these features, they were rarely mentioned in our interviews. Instead, it was accountability data (e.g., state test passing/failing rates or commended levels) that teachers actually used. In accord with their district's notions about data, Musial teachers used their system to generate lists of students to target for academic support according to accountability standards. In the words of one teacher:

It kind of let me know who is at risk and which kids struggle. I definitely want them up front. . . . I mean I know that my kids struggle all the time, but it just lets me know which ones I need to focus more attention on. . . . [Front End] let me know their reading levels. Where are they reading at? Have they failed tests in the past? These are kids I really need to focus on.

A similar pattern could be found in Gibson, which was expanding Flightpath to also handle assessment data. Whereas Flightpath had previously been used only for teacher appraisal data, it now offered SE-level data on the state test and district benchmark tests for each student and class. Moreover, it supported the automatic creation of tutorial reports based upon individual student performance. One important reason behind Flightpath's expansion was teacher demand for features that allowed them to scan and instantly analyze locally designed data (e.g., classroom assignments or subject area common assessments). Despite the potential value of these features, however, it was the SE-level data from the state test and district benchmarks that teachers actually used, in accord with their district's notions about data. For example, one explained:

I show my kids a lot of the data. I let them actually see it so it connects with them. Our warm-ups are mostly missed questions from the [district benchmark test]. I put the data underneath my book on my projector, [review data about students' previous responses,] and talk through the question. I use it a lot.

This pattern presented a paradox. Although teachers were aware of and positive about the assorted features available to them, the selection they reported actually using was much more limited. In other words, simple access, even to the right data and right system features, was not enough. We interpreted these findings to suggest that notions about data use illuminated certain system features, while making it easier to ignore others. In effect, general knowledge about a feature did not necessarily result in making it a priority for practice. This explanation could also be extended to the teachers in Boyer.
Because these teachers saw data as things to collect and report, but not necessarily to use, many felt comfortable rejecting systems wholesale. They did only as much as it took to comply with central office mandates (i.e., priorities). The exception to this was their use of the online gradebook, which fit into their personal priorities at work.

THE INFLUENCE OF CENTRAL OFFICE

Sensemaking shaped teachers' enactment of computer data systems, but we found that central offices could have a hand in these processes. We have shown that the messages emanating from central offices about which data to prioritize or to value influenced teachers' notions. However, central offices fell short when it came to helping teachers make sense of their data systems. Instead, they invested their efforts in the technical aspects of system implementation. In the following narrative, we explicate these findings.

Messages About Data From Central Office

In previous sections, we demonstrated that district messages played a part in educators' notions about data. In this section, we summarize these results in terms of the central office. In the Gibson and Musial districts, there was a strong message from the central offices regarding which data were to be prioritized: SE data in Gibson, and state test data in Musial. And teachers in these districts most often described data and data use in these terms. This was true despite comments from teachers indicating that other data could also prove valuable. Thus, the foregrounding of SE and state test data in Gibson and Musial (respectively) seemed to have a strong effect on how teachers conceptualized data, and consequently, what they saw in and used in their data systems.

Conversely, the message emanating from the Boyer central office about which data were most important was vague. Leaders' perspectives regarding data ranged from "everything informs" and "informative assessment" to attending to the whole student. Accordingly, we observed varied notions about data and data use (notions that were fragmented by job role). These notions were coupled with similarly disparate views and uses of data systems.

Central Office Supports for Data Systems

Teachers and central office personnel reported different views about the degree and value of support offered by central offices around data systems. Although we observed, and heard central offices report, significant investments of effort around systems, many of these efforts were technical in nature. As a result, sensemaking was taken for granted. Indeed, teachers in all three districts reported needing more help understanding and gaining value from their data systems. This discord can be illustrated by comparing teachers' views about central office support to central offices' accounts of their work.

Teachers' views about central office support. Although teachers were often clear about their central offices' efforts around data use, the same could not be said about data systems. For example, teachers in Musial readily spoke about their central office's efforts to focus their attention on state test performance. But it was a different story for computer data systems. Most teachers in the district were expected to learn about Front End via online training modules (i.e., PowerPoint slides). Even when pressed, these teachers could not describe any support from the central office about how to use Front End.
They described being provided access to the system, with formal support from the school and central office stopping there.8 In fact, a common way for teachers to describe having learned to use Front End was via experimenting on their own or via pulling a colleague aside for help.

When asked about their district's support for data use, teachers in Gibson were of mixed opinions, ranging from negative to positive. Some were put off by having to deal with stacks of data, such as special education paperwork, or by hand-copying data from file folders. Others were quite positive about the involvement of the central office. For example, one group was especially positive about how conversations with central office members and instructional coaches helped them to rethink their instructional and data use practices. When it came to support for computer data systems, however, teachers were less positive. One group used the term "cognitive dump" to describe trainings, lamenting that there was little support beyond being told about some potential benefits and how to log in. Another group called it the "here it is, now go" approach, expressing the wish that they might have trainings more regularly, such that skills might be deepened over time.

A similar pattern was seen in Boyer. Teachers saw the central office as offering some training for data use, but described little support for the use of computer data systems. They characterized the central office as requiring that data be produced (i.e., via monthly or bi-weekly assessments) and then turned in. At trainings, they were shown data but not instructed on how to use them. When it came to computer data systems, they described no expectations regarding their use. Some were frank about how they were required to use systems to upload data, but not for other purposes. For the most part, which systems they used, and how, was up to them.

Central offices' technical focus on supporting computer data systems. Teachers' portrayals of their central offices might lead one to conclude that not much was being done about computer data systems in these districts. To the contrary, much was being done, but little of it was about influencing teachers' sensemaking about computer data systems or their potential value in practice. Interviews and observations collected from central offices revealed their attention to data systems to be focused on technical and logistical issues. These included ensuring that systems ran smoothly, that systems had desirable features, and that teachers received a basic overview of the technical side of system use (e.g., how to log in and find certain data). Although only Gibson central office members used the term, an attitude of "deployment" generally characterized central offices' approaches to data systems. Formal attention was rarely given to how or why particular features might be meaningful to particular classroom practices.

Regardless of district, central offices took it for granted that data systems' benefits would be self-evident. We saw this in several ways. For example, the Musial district created the associate director for data use position to support data use and the creation of Front End. Although this person worked with principals and conducted some training for certain teachers, most of the district was expected to learn Front End by viewing the online training modules. Similarly, the Gibson central office attempted to support Flightpath via short trainings at school campuses.
This was seen as sufficient because Flightpath was similar to its predecessor and had already been in use for teacher appraisal. Neither of these central offices questioned whether the full depth and breadth of these systems would be used, nor did they anticipate needing to follow up with teachers on how to embed systems in regular practice.

The Boyer central office was similar in believing that how best to use systems would be self-evident. Their view that "everything informs," however, meant that they emphasized many systems at once. At the same time, the Boyer central office treated the use of computer systems as if it were a minor component of data use. On one hand, they treated systems as easily interchangeable with one another. To the chagrin of a few teachers and principals, funding for data systems that were not widely used or that were considered redundant was cut at the beginning of the year. The expectation from central office was that the remaining systems could perform the tasks just as well. On the other hand, Boyer central office members considered the integration of data by hand to be sufficiently comparable to the use of an integrated data system. For example, central office trainings for data use involved providing teachers with paper printouts from various systems. Central office members envisioned teachers accessing each system for its unique benefits, and then integrating the information on their own. Although there had been some consideration of purchasing an integrated data system for the district, this was eventually determined to be unnecessary. While cost was a factor in this determination, one central office member considered this "human resources" solution to be better than a data system. He asserted that using instructional coaches to print out data and otherwise assist teachers might improve the quality of teachers' data use.

DISCUSSION

The question framing this article relates to the factors affecting teachers' use of computer data systems. To address this question, we drew upon issues around sensemaking, particularly the notion of interpretive flexibility. This helped to illuminate that the ways in which data systems are used (or ignored) spring from users' notions about data use. Sensemaking about data systems was structured by notions about data and how data should be used to support instruction. In turn, these notions were influenced by signals emanating from central office. People's notions cohered within social groupings (i.e., district or job role), serving as interpretive lenses for systems. Where there was a clear message (Gibson and Musial), social grouping did not matter beyond district. But where messages were unclear (Boyer), the sense that people made was distinguishable by role.

Contrary to technologically deterministic views, we did not find technologies to be agents of change, or to influence work of their own accord. In fact, even functions that teachers considered potentially beneficial might not actually be applied in practice. Rather, we located agency in educators' sensemaking about data and data systems. These processes explained how functions were favored, ignored, or rejected in practice. Consequently, we saw room for central offices to assume less about the obviousness of the benefits of computer data systems. More could have been done to support and shape teachers' understandings about data systems.
In the following sections, we discuss how this may be done by considering broader notions about data use, understanding that technology may not bring about change on its own, and redefining technology implementation for district leaders. Finally, we close with some additional considerations about school leadership and vision.

PARTICULAR NOTIONS ABOUT DATA USE ARE NOT A GIVEN

Our study illuminated the variety of notions, definitions, and uses for data that may exist within districts or within roles. Further, our study showed that educator interaction with technology was shaped by, and thus varied with, these notions. We believe it is thus important to explore how this phenomenon extends beyond technology. That is, to understand how data can best inform educational practice, it is important to first understand the variety of notions of what data use is and how these notions affect practice. For example, policies such as No Child Left Behind have made it easy to equate data use with accountability, but it is inappropriate to stop there. Only one of our three study districts saw data use as an accountability-driven phenomenon. There were other lenses, which varied by context.

Thus, we emphasize the need for scholars to provide an emic account of educators' perspectives on data. Accordingly, the burden for researchers is to determine participants' own definitions and priorities for data and to relate these dynamics back to the phenomena under examination. In doing so, researchers should be better able to identify how participants locate data in their professional decision making. Our data suggest that many of these personal accounts will be consistent within social groupings, as suggested by research relating to technologies and their interpretive flexibility (e.g., Bailey & Barley, 2011; Barley, 1990; Carlile, 2002). Thus, the current study provides a bridge for researchers to continue examining the influence of social groupings on computer data system use (and on data use in general). Although the categories of social groups in this study may be informative in this regard, so might other groupings. For example, teachers also interact with grade level or subject area teams, entire faculties, and school leadership teams. On a similar note, methodological shifts might further illuminate sensemaking patterns. For example, Coburn (2001) applies an ethnographic case study approach to uncover the ways in which teachers' school-level professional communities influenced their sensemaking about data and how to implement reading policy. Moreover, Daly (2012) provides thoughtful arguments regarding how social network analysis could shed light on the pathways by which understandings about data use pass through schools. Contextualizing data use in these ways would help to illuminate the unique circumstances that contribute to changes in practice.

TECHNOLOGIES OCCASION (BUT DO NOT DETERMINE) CHANGE

The underlying promise of computer data systems is that they will naturally change schooling by their very presence. Access to computing power is assumed to be enough for progress. Contrary to these assumptions, our findings suggest that technologies are not necessarily in the driver's seat for changing practice. While providing access to data systems might be an important first step, we did not find that the introduction of a system necessarily resulted in changes to practice. Technological features may be ignored, rejected, or misconstrued. Some may be favored over others, despite their relative benefits.
Indeed, we found it exceedingly rare for a teacher to mention having used or experimented with a system function just because it was there. In accord with structuration theory (e.g., Jones & Karsten, 2008; Leonardi, 2009b; Orlikowski, 1992), we posit that computer data systems provide occasions for changes to practice, but that envisioning and habituating those practices is a social matter. In this way, agency is located in people, but also embedded in their interpretive processes and understandings about the world (e.g., O'Day, 2002; Scott & Davis, 2007; Weick, 1993). Thus, we posit that technological features and other material resources provide only a starting point for what might be easy or hard to do in work. If certain systems or functionalities are underutilized, then the larger issue may be in how educators have made sense of them.

IMPLEMENTATION AS AN EXTENDED PERIOD OF ADJUSTMENT

We found that central offices failed to make the most of opportunities to shape sensemaking. Although central offices sent messages about what kinds of data use were important, and although they attempted to support teachers' access to data systems, they rarely focused on their intersection. Research in other fields suggests that notions about technologies, relationships among workers, and new designs for technologies can co-evolve iteratively over time (Barley, 1990; Davidson & Chismar, 2007; Orlikowski, 1996). Leonardi (2009b) suggests that the notion of a fixed implementation line fails to capture the ways in which technology implementation takes place over an extended period of social adjustment. Our study offers a bridge between the research on technology implementation and the research on data use initiatives. Although educational scholars have begun to explore the ways in which school-level sensemaking around data may be influenced by interactions with central offices (e.g., Honig & Venkateswaran, 2012; Spillane et al., 2011), less attention has been given to such issues involving computer data systems.

Heifetz and Linsky (2002) provide one way to attune to sensemaking opportunities. They distinguish between technical challenges and adaptive challenges, describing the former as those that can be resolved using everyday know-how and routines. Adaptive challenges, however, require that people develop, adjust to, and apply new ways of looking at the world. Such challenges require that leaders attend to the values, habits, and attitudes of people in their organizations. Recognizing such "people problems" around data systems raises the stakes for central office leadership. In this view, system implementation does not stop at decisions about purchasing and training.

First, district leaders should treat issues around data use and data systems as conversations over time. Simply affording access to potentially beneficial or desirable system functionalities was not sufficient to ensure that teachers would use them. We found that messages from central office influenced what kinds of data were prioritized in practice. In Musial and Gibson, where the messages about data use were relatively focused, teachers focused their uses of data systems on those purposes and virtually ignored functionalities that they themselves had asked for. In Boyer, where the message was most diffuse, teachers were least engaged in leveraging their data systems. These or other messages could be considered first steps toward creating more dialogue about core issues in teaching, learning, data use, and the role of data systems.
Thus, ensuring that teachers make the most out of data systems involves not only developing teachers' knowledge about systems, but also their understanding of how various functions fit into the district's overall messages about data use. Engaging in such work also provides district leaders with opportunities to engage teachers and administrators in dialogue about what forms of data use are most congruent with their beliefs and values (Wayman, Jimerson, & Cho, 2012). Conversations about data systems can provide an important venue not only for expanding technical knowledge, but also for reshaping what people understand about data use and what might become the district's collective vision.

Second, district leaders and technology designers can be attentive to how understandings about data and data systems are coproduced over time. This involves continually seeking out feedback from users. For example, district leaders might enact formal processes to capture innovative practices as they emerge. This involves attending not only to uses that they expected to benefit students, but also to unusual or dissenting practices. Further, improved communications and relationships could help spread effective practices to others. Such considerations might apply not only to computer data systems, but also to technologies in education more generally. Likewise, technology designers (both local and commercial) might use similar information to evolve new versions of their products. Instead of relying upon their own assumptions about how data systems fit into a district's particular context, data system developers can leverage feedback regarding what teachers are actually seeing and doing with those systems. Moreover, although standards-based accountability policies have made the development of data systems around state standards lucrative, this is not to say that other kinds of systems might not also emerge in light of other ways to think about school improvement. For example, improving collaboration among school faculty or fostering positive school cultures among students are other aims around schooling that data systems could support.

Third, district leaders might broaden modes of communication about data systems and their use. While traditional meetings or trainings can certainly play a role in sensemaking about data and data systems, so might Web 2.0 technologies, such as self-produced online videos, wikis, blogs, and microblogs (e.g., Twitter). One characteristic of Web 2.0 technologies is that they encourage users to be active participants in the generation of new meaning and understandings (Greenhow, Robelia, & Hughes, 2009; Treem & Leonardi, 2012). Innovative practices, questions, and musings might be addressed and fleshed out within a larger community of like-minded colleagues. Online videos could help to store and distribute important knowledge. Thus, communities of practice (Brown & Duguid, 1991) might be supported that could transcend the planning period, teachers' lounge, individual school, or home.

ADDITIONAL CONSIDERATIONS AROUND SCHOOL LEADERSHIP AND VISION

Although we have described different worldviews about data and data systems, we have sought to avoid normative assertions about any being better than the others. This distance was necessary in order to demonstrate issues around interpretive flexibility. Each district's worldview was set on equal footing with the others. Thus, a question looms: If there can be different visions for the use of data and of data systems, then is there a best one?
Interpretive Flexibility and Vision

Despite interpretive flexibility's utility as an analytic tool, some have voiced concerns that too strict an interpretive stance can obscure important moral and ethical questions about technology (Kling, 1992; Winner, 1993). Accordingly, we suggest that researchers and school leaders continue to press on issues around the content of beliefs and values when it comes to educational data use. In other words, although scholars have highlighted the importance of creating cultures of inquiry in schools (Copland, 2003; Datnow, Park, & Wohlstetter, 2007; Hamilton et al., 2009), less has been done to compare the contents of particular school visions around data.

This issue of vision is complicated in that it involves bridging individual and shared notions. Shared goals and a common language around the hows and whys of schooling are considered important supports to data use and school improvement (Hamilton et al., 2009; Wayman & Stringfield, 2006; Wohlstetter, Datnow, & Park, 2008). Although goals and language help to coordinate and plan work, vision is also more than that. Senge (1994) describes vision in terms of its ability to bind people within a shared sense of identity, destiny, and genuine commitment to organizational endeavors. By tapping into values around schooling, vision has the ability to lend initiatives legitimacy and a sense of authenticity. One of the challenges of school leadership, however, is that administrators are expected to channel a host of value systems into organizational practice at once (Begley, 2006). These may include not only their personal values, but also those of the organization, profession, community, and society. Doing this well could make positive contributions to schools' professional climates and cultures (Datnow et al., 2007; Talbert, 2010). However, conflicting value systems can also result in discord. For example, Johnson (2007) suggests that principal accountability measures may have distracted principals from strongly pursuing their personal goals regarding culturally responsive instruction and parent involvement. Others have highlighted the stresses and frustrations voiced by teachers when school and district responses to accountability conflict with personal or professional values around schooling (Ingram, Louis, & Schroeder, 2004; Valli & Buese, 2007; Wills & Sandholtz, 2009).

A Community Vision for Data Use

In our study, districts ranged in terms of both their relationships with accountability pressures and their overall cohesion around any one worldview. Points of contrast among our study districts were typically around commonly available data types. These included differences in how state tests, interim assessments, and classroom-level data were prioritized. They also included whether attention was drawn to accountability-related demographic groups, learning objectives data, or student-teacher interactions. These details serve as reminders that people have agency in what sort of visions around data use they choose to adopt in their communities. Indeed, one charge for school leaders may be to ensure that schools engage in such dialogue. What kinds of schools do their communities want? For example, it stands to reason that an organization that prioritized responsiveness to students' cultural heritages could leverage a computer data system toward those very purposes. The data stored and leveraged might differ from, but perhaps also be insightful about, student learning and success.
Examples might include students' personal interests, students' sense of belonging, their sense of efficacy in socio-political arenas, or information relating to culturally responsive instructional practices. To sum up, there probably isn't one best vision for data use, but some will be more compatible with deeper, collective values and beliefs around schooling than others. In the end, a vision for data use is not only about fostering more data use, but also about what kinds of data use localities feel are best for the kids. Just as the details might change or evolve according to the unique conditions of each district's context (Wayman, Jimerson, & Cho, 2012), how this manifests at the school level will also depend upon school leaders' abilities to harmonize the various internal and external demands at their schools. Data systems can play a role in this by serving as a way to tie together and represent those collective values and beliefs, but the ways in which these things are enacted will depend upon sensemaking and how it has been managed.

CONCLUSION

In the end, data systems are what people make of them. The findings of this study have run counter to some of the everyday assumptions made about technologies. Although computer data systems can certainly be vital to data use, we have also shown them to be problematic. Rather than systems having a direct and universal effect on data use, we found that sensemaking around data shaped which aspects of systems were used in practice. In other words, the use of a computer data system was not simply a matter of the utility of data or the practicality of the system. Such dynamics underscore the importance of central offices' engagement with schools around how and why data can support schooling (Honig & Venkateswaran, 2012; Wayman et al., 2012). Messages from central offices about data use often held much currency in educators' priorities around data use. Indeed, they had the potential to trump what teachers personally considered valuable. It was not enough to have the right data or system functions. Shaping and attending to values, beliefs, and vision was also necessary. Taking on this view suggests that districts may get a strong return on data system investments by treating implementation and sensemaking as issues that unfold over time.

Notes

1. Thus, although categorizing systems based upon the intentions of their designers provides an etic view of their potential, emic views of what they actually mean to users in practice may also prove valuable to the field.
2. Drawing upon other studies of technologies and organizing (e.g., Bailey & Barley, 2011; Barley, 1990; Carlile, 2002), our definition of social group includes job role as well as the organization at large.
3. Pseudonyms are used for each district.
4. 80% non-Latino White, 10% Latino.
5. 40% Latino, 30% non-Latino White, 20% African American.
6. 50% non-Latino White, 25% Latino, 10% African American.
7. Pseudonyms are used for any systems mentioned in this study.
8. The exception to this came from our observational data of one-time Front End trainings for some specialized teachers (e.g., the teacher user group or teachers of gifted students). Although these teachers were positive about the system and their support, they were not representative of the district at large. When asked, none of the teachers in our focus groups reported contact with the associate director for data use or other forms of central office support.

References
Bailey, D. E., & Barley, S. R. (2011). Teaching-learning ecologies: Mapping the environment to structure through action. Organization Science, 22(1), 262-285.
Barley, S. R. (1990). The alignment of technology and structure through roles and networks. Administrative Science Quarterly, 35, 61-103.
Begley, P. T. (2006). Self-knowledge, capacity and sensitivity: Prerequisites to authentic leadership by school principals. Journal of Educational Administration, 44(6), 570-589. doi:10.1108/09578230610704792
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.
Bosk, C. L. (2003). Forgive and remember: Managing medical failure (2nd ed.). Chicago, IL: University of Chicago Press.
Brooks, C. (2011). Locating leadership: The blind spot in Alberta's technology policy discourse. Education Policy Analysis Archives, 19(26).
Brown, J. S., & Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40-57.
Brunner, C., Fasca, C., Heinze, J., Honey, M., Light, D., Mandinach, E. B., & Wexler, D. (2005). Linking data and learning: The Grow Network study. Journal of Education for Students Placed At Risk, 10(3), 241-267.
Burch, P. (2010). The bigger picture: Institutional perspectives on interim assessment technologies. Peabody Journal of Education, 85(2), 147-162.
Burch, P., & Hayes, T. (2009). The role of private firms in data-based decision making. In T. J. Kowalski & T. J. Lasley II (Eds.), Handbook of data-based decision making in education (pp. 54-71). New York, NY: Routledge.
Carlile, P. R. (2002). A pragmatic view of knowledge and boundaries: Boundary objects in new product development. Organization Science, 13(4), 442-455.
Chen, E., Heritage, M., & Lee, J. (2005). Identifying and monitoring students' learning needs with technology. Journal of Education for Students Placed At Risk, 10(3), 309-332.
Coburn, C. E. (2001). Collective sensemaking about reading: How teachers mediate reading policy in their professional communities. Educational Evaluation and Policy Analysis, 23(2), 145-170. doi:10.3102/01623737023002145
Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research & Perspective, 9(4), 173-206. doi:10.1080/15366367.2011.626729
Copland, M. A. (2003). Leadership of inquiry: Building and sustaining capacity for school improvement. Educational Evaluation and Policy Analysis, 25(4), 375-395.
Daly, A. J. (2012). Data, dyads, and dynamics: Exploring data use and social networks in educational improvement. Teachers College Record, 114(11), 1-38.
Datnow, A. (2006). Connections in the policy chain: The co-construction of implementation in comprehensive school reform. In M. I. Honig (Ed.), New directions in education policy implementation: Confronting complexity (pp. 105-123). Albany, NY: State University of New York Press.
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-performing school systems use data to improve instruction for elementary students. Los Angeles, CA: Center on Educational Governance, University of Southern California.
Davidson, E. J., & Chismar, W. G. (2007). The interaction of institutionally triggered and technology-triggered social structure change: An investigation of computerized physician order entry. MIS Quarterly, 31(4), 739-758.
Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532-550.
Emerson, R. M., Fretz, R. I., & Shaw, L. L. (1995). Writing ethnographic fieldnotes. Chicago, IL: The University of Chicago Press.
Erlandson, D. A., Harris, E. L., Skipper, B. L., & Allen, S. D. (1993). Doing naturalistic inquiry: A guide to methods. Newbury Park, CA: SAGE.
Greenhow, C., Robelia, B., & Hughes, J. E. (2009). Web 2.0 and classroom research: What path should we take now? Educational Researcher, 38(4), 246-259. doi:10.3102/0013189X09336671
Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J. A., & Wayman, J. C. (2009). Using student achievement data to support instructional decision making (Practice guide No. NCEE 2009-4067). National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED506645
Heifetz, R. A., & Linsky, M. (2002). Leadership on the line: Staying alive through the dangers of leading. Cambridge, MA: Harvard Business Press.
Honig, M. I., & Venkateswaran, N. (2012). School-central office relationships in evidence use: Understanding evidence use as a systems problem. American Journal of Education, 118(2), 199-222.
Ingram, D., Louis, K. S., & Schroeder, R. G. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106(6), 1258-1287.
Johnson, L. (2007). Rethinking successful school leadership in challenging U.S. schools: Culturally responsive practices in school-community relationships. International Studies in Educational Administration, 35(3), 49-57.
Jones, M. R., & Karsten, H. (2008). Giddens's structuration theory and information systems research. MIS Quarterly, 32(1), 127-157.
Kling, R. (1992). Audiences, narratives, and human values in social studies of technology. Science, Technology, & Human Values, 17(3), 349-365.
Knapp, M. S., Swinnerton, J. A., Copland, M. A., & Monpas-Hubar, J. (2006). Data-informed leadership in education. Seattle, WA: Center for the Study of Teaching and Policy.
Lachat, M. A., & Smith, S. (2005). Practices that support data use in urban high schools. Journal of Education for Students Placed At Risk, 10(3), 333-349.
Leonardi, P. M. (2009a). Why do people reject new technologies and stymie organizational changes of which they are in favor? Exploring misalignments between social interactions and materiality. Human Communication Research, 35, 407-441.
Leonardi, P. M. (2009b). Crossing the implementation line: The mutual constitution of technology and organizing across development and use activities. Communication Theory, 19, 278-310.
Markus, M. L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34(5), 583-598.
McDaniel, R. R., & Driebe, D. J. (2001). Complexity science and health care management. Advances in Health Care Management, 2, 11-36.
McDaniel, R. R., & Weick, K. E. (1989). How professional organizations work: Implications for school organization & management. In T. J. Sergiovanni & J. H. Moore (Eds.), Schooling for tomorrow: Directing reforms to issues that count. Boston, MA: Allyn and Bacon.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision making in schools: Teacher access, supports and use. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development.
Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: SAGE.
O'Day, J. A. (2002). Complexity, accountability, and school improvement. Harvard Educational Review, 72(3), 293-329.
Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398-427.
Orlikowski, W. J. (1996). Improvising organizational transformation over time: A situated change perspective. Information Systems Research, 7(1), 63-92.
Orlikowski, W. J., & Barley, S. R. (2001). Technology and institutions: What can research on information technology and research on organizations learn from each other? MIS Quarterly, 25(2), 145-165.
Orlikowski, W. J., & Iacono, C. S. (2001). Research commentary: Desperately seeking the IT in IT research - A call to theorizing the IT artifact. Information Systems Research, 12(2), 121-135.
Palmer, D., & Snodgrass Rangel, V. (2011). High stakes accountability and policy implementation: Teacher decision making in bilingual classrooms in Texas. Educational Policy, 25(4), 614-647. doi:10.1177/0895904810374848
Pinch, T. J., & Weibe, E. B. (1984). The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science, 14, 399-441.
Polikoff, M. S., Porter, A. C., & Smithson, J. (2011). How well aligned are state assessments of student achievement with state content standards? American Educational Research Journal, 48(4), 965-995. doi:10.3102/0002831211410684
Scott, W. R., & Davis, G. F. (2007). Organizations and organizing: Rational, natural, and open systems perspectives. Upper Saddle River, NJ: Pearson Education.
Senge, P. M. (1994). The fifth discipline: The art & practice of the learning organization (1st ed.). New York, NY: Doubleday Business.
Spillane, J. P., Parise, L. M., & Sherer, J. Z. (2011). Organizational routines as coupling mechanisms. American Educational Research Journal, 48(3), 586-619. doi:10.3102/0002831210385102
Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research, 72(3), 387-431. doi:10.3102/00346543072003387
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: SAGE.
Stecher, B., Hamilton, L., & Gonzalez, G. (2003). Working smarter to leave no child behind: Practical insights for school leaders. Santa Monica, CA: RAND.
Supovitz, J. A., & Klein, V. (2003). Mapping a course for improved student learning: How innovative schools systematically use student performance data to guide improvement. Philadelphia, PA: Consortium for Policy Research in Education.
Talbert, J. E. (2010). Professional learning communities at the crossroads: How systems hinder or engender change. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), Second international handbook of educational change (pp. 555-571). Dordrecht: Springer Netherlands. Retrieved from http://rd.springer.com/chapter/10.1007/978-90-481-2660-6_32
Treem, J. W., & Leonardi, P. M. (2012). Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association. Communication Yearbook, 36, 143-189.
Tucker, B. (2010). Putting data into practice: Lessons from New York City. Washington, DC: Education Sector.
Valli, L., & Buese, D. (2007). The changing roles of teachers in an era of high-stakes accountability. American Educational Research Journal, 44(3), 519-558.
Wayman, J. C., & Stringfield, S. (2006). Technology-supported involvement of entire faculties in examination of student data for instructional improvement. American Journal of Education, 112(4), 549-571.
Wayman, J. C., Cho, V., & Johnston, M. T. (2007). The data-informed district: A district-wide evaluation of data use in Natrona County School District. Austin, TX: The University of Texas.
Wayman, J. C., Cho, V., & Richards, M. P. (2010). Student data systems and their use for educational improvement. In P. L. Peterson, E. Baker, & B. McGraw (Eds.), International encyclopedia of education (Vol. 8, pp. 14-20). Oxford: Elsevier.
Wayman, J. C., Cho, V., & Shaw, S. (2009). First-year results from an efficacy study of the Acuity data system. Austin, TX: The University of Texas at Austin.
Wayman, J. C., Cho, V., Jimerson, J. B., & Spikes, D. D. (2012). District-wide effects on data use in the classroom. Education Policy Analysis Archives, 20(25). Retrieved from http://epaa.asu.edu/ojs/article/view/979
Wayman, J. C., Jimerson, J. B., & Cho, V. (2012). Organizational considerations in establishing the data-informed district. School Effectiveness and School Improvement, 23(2), 159-178.
Wayman, J. C., Stringfield, S., & Yakimowski, M. (2004). Software enabling school improvement through analysis of student data (No. 67). Center for Research on the Education of Students Placed At Risk.
Weick, K. E. (1993). The collapse of sensemaking in organizations: The Mann Gulch disaster. Administrative Science Quarterly, 38(4), 628-652.
Weick, K. E., & Roberts, K. H. (1993). Collective mind in organizations: Heedful interrelating on flight decks. Administrative Science Quarterly, 38(3), 357-381.
Weiss, R. S. (1994). Learning from strangers: The art and method of qualitative interview studies. New York, NY: The Free Press.
Wills, J. S., & Sandholtz, J. H. (2009). Constrained professionalism: Dilemmas of teaching in the face of test-based accountability. Teachers College Record, 111(4), 1065-1114.
Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, & Human Values, 18(3), 362-378.
Wohlstetter, P., Datnow, A., & Park, V. (2008). Creating a system for data-driven decision-making: Applying the principal-agent framework. School Effectiveness and School Improvement, 19(3), 239-259.
Yin, R. K. (2009). Case study research: Design and methods (Applied Social Research Methods Series, 4th ed., Vol. 5). Thousand Oaks, CA: SAGE.
Young, V. M. (2006). Teachers' use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112(4), 521-548.
APPENDIX A

Central Office Interview Protocol
• What has the district been doing this year to improve data use? How have these efforts involved computer data systems?
• In what ways is the district emphasis on data use being shared with teachers?
• From where you sit, how are you seeing teachers and principals using data?
• What problems have you seen them solve by using data? How typically does this happen? Which specific data were most important?
• How does your district figure out if teachers' data use needs are being met?
• What's your reaction to complaints that data are not meaningful?
• Which computer data systems are most helpful to teachers right now? Which features or functions? How typically does this happen? Any complaints or limitations?
• What are the purposes of these systems? Their histories in the district?
• Ideally, how should these systems fit into a teacher's everyday work? How typically does this happen? What has the district done to support this? How do these things affect your job?
• How does your district figure out if teachers' needs around computer data systems are being met?
• Imagine that you woke up tomorrow and your district's challenges around data use had been solved. What would have happened?

Campus Administrator Interview Protocol
• From where you sit, how are you seeing teachers using data?
• What problems have you seen them solve by using data? How typically does this happen? Which specific data were most important?
• How do these things affect your job? Your typical day?
• How knowledgeable is central office about the data use needs at your school? How well do they meet those needs?
• What has the district been doing this year to improve data use? How have these efforts involved computer data systems? How are these efforts working out?
• In what ways is the district emphasis on data use being shared with teachers?
• What's your reaction to complaints that data are not meaningful?
• Which computer data systems are most helpful to teachers right now? Which features or functions? How typically does this happen? Do you use these systems?
• Do teachers describe any complaints or limitations with these systems? If so, how do these affect your job?
• Ideally, how should these systems fit into a teacher's everyday work? How typically does this happen? What has the district done to support this? How do these things affect your job?
• How knowledgeable is central office about the data system use at your school? How well do they meet your data system needs?
• Imagine that you woke up tomorrow and your district's challenges around data use had been solved. What would have happened?

Teacher Interview Protocol
• Which data do you pay attention to the most? Which are most informative? What do you do next? How typically does this happen?
• How do administrators or other staff factor into these activities?
• How knowledgeable is central office about the data use needs at your school? How well do they meet those needs?
• What has the district been doing this year to improve data use? How are these efforts working out?
• What's your reaction to complaints that data are not meaningful?
• Which computer data systems are most helpful to you? Which features or functions? How typically does this happen? What are the purposes of these systems?
• Ideally, how should these systems fit into a teacher's everyday work? How typically does this happen? What has the district done to support this?
• Do your systems do what you want them to? What do you do when you need help? How typically does this happen?
• What role have administrators or staff played in your data system use?
• How much emphasis does your district place on using data systems? How is that emphasis shared with you?
• How well does your district meet your data system needs?
• Imagine that you woke up tomorrow and your district's challenges around data use had been solved. What would have happened?

APPENDIX B

Below we exhibit the final code list used with interview and observational data. Names have been replaced with pseudonyms.