Interventions to Promote Data Use: An Introduction
by Erica O. Turner & Cynthia E. Coburn - 2012
Setting the stage for the special issue, this article discusses the increased attention to data use in policy and practice, provides an overview of the major ways that scholars have studied data use, highlights the limitations of the extant research, summarizes the contributions of the articles in this special issue to addressing these limitations, and previews the articles that follow.
In recent years, a range of influential actors and initiatives have promoted data use as a central strategy for fostering improvement in U.S. public schools. The strategy is most notable in President George W. Bush's 2001 No Child Left Behind Act (Honig & Coburn, 2008) and continues to be a key federal priority as evidenced by the hundreds of millions of dollars devoted to supporting data use in President Barack Obama's Statewide Longitudinal Data System Grant Program (National Center for Education Statistics, n.d.) and Race to the Top initiative (U.S. Department of Education, 2009). State education agencies have been building and expanding their student data systems at a rapid pace (Data Quality Campaign, 2011; Gottfried, Ikemoto, Orr, & Lemke, 2011). School district leaders have invested significant resources in training school and district personnel to integrate data in their ongoing practice (Datnow, Park, & Wohlstetter, 2007; Knapp, Swinnerton, Copland, & Monpas-Huber, 2006; Means, Padilla, DeBarger, & Bakia, 2009). And an array of actors and organizations have emerged to work with schools on inquiry processes, benchmark assessment systems, and other practices designed to foster data use in schools (Burch & Hayes, 2009; Coburn, 2010; Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006; Marsh, Pane, & Hamilton, 2006; Smylie & Corcoran, 2009).
Most of these organizations and actors promote what we call data use interventions: initiatives, policies, programs, and tools designed to alter the ways that educational decision-makers access, draw on, interact with, and respond to data in their ongoing work. Although the interventions vary in significant ways, they are all rooted in the conviction that if the right data are collected and analyzed, they will provide answers to key educational questions and inform actors' decisions, and better educational outcomes will follow. This model has been largely an article of faith among advocates of data use interventions.
Despite the proliferation of data use interventions and the growing public attention to and enthusiasm for the idea of data use, there is shockingly little empirical research on these approaches. To date, studies of data use interventions have tended toward description or prescription rather than analysis. Where studies have been more analytic, scholars have focused on understanding the processes of data use or the outcomes of data use interventions, but not the connections between the two. And there has been an uneven and atheoretical approach to studying the relationship between data use interventions and the contexts in which they unfold. These shortcomings in the existing literature are exacerbated by the fact that most studies are not of the highest quality, often lacking the designs required for the conclusions they draw. As a result, few studies, or even combinations of studies, help us understand how, why, and with what consequences data use interventions make a difference in schools.
In this special issue, we begin to address these limitations. The articles review the existing research, highlight key lessons that emerge from this literature, and forge a bold agenda for the next generation of research on data use interventions. The articles take a broad view of what constitutes data, including the ubiquitous standardized test scores, but also encompassing other forms of data, such as observations of teaching and performance assessments. Reflecting the diversity of the school reform landscape, the articles examine a variety of data use interventions, from comprehensive reform initiatives that embrace data use as a central but not sole lever of change, to teacher inquiry groups, data coaches, accountability policy, and interim assessments.
The articles assess research evidence about the relationship between data use interventions and change in teacher practice and student learning. They also analyze what we know about the processes by which these outcomes are achieved. They provide guidance on the role of organizational and environmental context in how data use interventions unfold. In addition, they put forth new conceptual frameworks that help us understand how different facets of the data use phenomenon might fit together, highlighting aspects of this educational reform strategy that are often hidden from view.
Studying data use interventions helps us gain insight into one of the most central reform ideas in contemporary public school policy and practice. This scholarship has the potential to illuminate how these interventions are actually used in practice and how they are influenced by real-world contexts. This in turn can help us understand when and under what conditions data use interventions may lead to various outcomes of interest, a question made increasingly urgent as these interventions are promoted more widely. Studies may also shed light on the unanticipated or unintended effects of these interventions on public schooling. In addition, studies of data use interventions can provide guidance to educators and policy makers as they decide how to use limited time and resources in ways that will improve teaching and learning. Achieving this potential, however, will require a new generation of more analytic, high quality, and theoretically informed studies that build on the existing research base. The articles in this special issue help us understand where we are as a field and point us in the right directions.
EXISTING RESEARCH ON DATA USE INTERVENTIONS
The existing literature on data use in education has established some important findings, but on the whole, the body of research is fragmented and uneven. First, a significant portion of the existing scholarship on data use interventions has focused on describing the nature of these interventions rather than analyzing them (e.g., Datnow et al., 2007; Halverson, Grigg, Prichett, & Thomas, 2007; Kerr et al., 2006; Nichols & Singer, 2000; Supovitz, 2006). For example, there is a substantial body of work that describes the technological data systems in place in schools and districts (e.g., Wayman, Conoly, Gasko, & Stringfield, 2008; Wayman & Stringfield, 2006; Wayman, Stringfield, & Yakimowski, 2004) but does not analyze how people use these systems or their impacts. Other research focuses on advocating for the use of various interventions or offering technical advice about how to use them well (e.g., Bernhardt 1998; Boudett, City, & Murnane, 2005; Love, Stiles, Mundry, & DiRanna, 2008). Although the descriptive, advocacy, and advice literatures have plenty to contribute to those interested in data use interventions, this work often lacks empirical evidence on the processes or outcomes of the interventions described. Absent this evidence, it is difficult to assess how these interventions are used in practice, how they play out differently in different contexts, or even the impact of the interventions on various outcomes of interest.
Second, studies have tended to examine either the outcomes or the processes of data use interventions, but not both. For example, quite a bit of research has investigated the consequences of formative assessment (practices to help teachers use classroom assessment to inform their instruction) on student outcomes (for reviews, see Fuchs & Fuchs, 1986; Black & Wiliam, 1998; and Young & Kim, 2010). Likewise, there is increasing evidence of the moderate effects of accountability systems on increasing average student achievement on standardized tests (Carnoy & Loeb, 2002; Dee & Jacob, 2009; Hout & Elliott, 2011). Yet, these studies tend not to investigate the processes by which those outcomes are achieved. For example, studies of accountability policy rarely directly investigate how test score data are used in practice (see Jennings, 2012, this issue). Thus, we know little about how teachers interpret test score data and how they come to respond in one way and not in another (for an exception, see Booher-Jennings, 2005). Similarly, the formative assessment literature has paid little attention to how teachers' and students' beliefs about learning and assessments, or their interactions with each other (the social processes of formative assessment), affect outcomes (Black & Wiliam, 1998; Coburn & Turner, 2012a). In the absence of an understanding of what happens in schools when people engage with data use interventions, we lack knowledge of how variation in these social processes might matter for the efficacy of a given intervention.
At the same time, studies that have investigated processes of data use typically have not examined the outcomes of these interventions (Marsh, 2012, this issue). For example, many studies of the use of protocols for fostering data conversations have paid careful attention to the ways in which protocols influence the nature of teachers' discussions about data. These studies suggest that protocols can focus participants' conversations on specific aspects of classroom practice, focus participants' conversations on particular evidence of student learning, or even foster a tendency for participants to turn away from data (Earl, 2007; Horn & Little, 2010; Ikemoto & Honig, 2010; Lasky, Schaffer, & Hopkins, 2008; Little & Curry, 2008; Little, Gearhart, Curry, & Kafka, 2003; Timperley, 2008). But these studies generally do not address the impact of different kinds of conversations on either teacher practice or student learning and achievement. Likewise, studies of schoolwide inquiry processes investigate educators' strategies and actions related to data use as they try to improve instruction and student achievement (e.g., Datnow et al., 2007; Halverson et al., 2007; Supovitz, 2006; Symonds, 2004), but these studies can rarely speak to the effects of these changes on student learning or achievement (Christman et al., 2009, is an exception here). Thus, we know that some interventions influence how teachers and others use data, but we do not always know if and how this matters for key outcomes.
Understanding the linkages between data use interventions and the processes and outcomes of data use interventions is crucial to understanding the pathways by which interventions lead to outcomes that matter. Absent information about the processes of data use, we can know that something leads to given outcomes but not know how or why. Absent information about outcomes, we have little knowledge to help policy makers and educators make informed decisions about whether particular data use interventions are worth their time, effort, and resources.
Third, existing scholarship offers an atheoretical and fragmented approach to understanding the contexts of data use interventions. Many studies attend to the role of organizational context. These studies identify a long list of contextual features that matter, seemingly covering nearly all aspects of the organizational context of schooling, from the importance of time (Honig, Copland, Rainey, Lorton, & Newton, 2010; Ikemoto & Marsh, 2007; Ingram, Louis, & Schroeder, 2004; Little et al., 2003; Marsh, 2012, this issue; Marsh et al., 2006; Means et al., 2009; Weinbaum, 2009) to leadership (Coburn & Turner, 2012b; Copland, 2003; Halverson et al., 2007; Honig et al., 2010; Marsh, 2012, this issue; Supovitz & Klein, 2003) to favorable policy environment (Marsh, 2012, this issue; Marsh et al., 2006; Means, Padilla, & Gallagher, 2010). But because so little of this research employs strong theoretical frameworks, we know little about how these myriad contextual factors interact with one another. Stronger theories of context could help to build knowledge across studies, interpret or explain findings, highlight relationships that persist over time, and suggest causal mechanisms.
Furthermore, few studies attend to the political contexts of data use (see Coburn, Honig, & Stein, 2009; Henig, 2012, this issue, for a review). Yet, preliminary work suggests that political pressure can influence which data people notice (Englert, Kean, & Scribner, 1977; Kennedy, 1984) and that data can become ammunition in conflicts about educational decisions (Coburn, Toure, & Yamashita, 2009; Hallett, 2010; Kennedy, 1982). Political context may be particularly salient in relation to accountability policy because data use is an explicit lever of social control in this approach (Hallett, 2010; Henig, 2012, this issue; Stecher, Hamilton, & Gonzalez, 2003). Absent information about the political contexts of data use, we may miss the ways in which politics influences the processes of data use and how data use interventions can become part of struggles for power in schools and school districts.
Finally, studies of data use interventions have not always met the highest standards for drawing conclusions about these interventions (Hamilton et al., 2009; Marsh, 2012, this issue). Many existing studies have not examined data use as it plays out in real time and in the contexts of classrooms, schools, and cities (Daly, 2012, this issue; Henig, 2012, this issue; Jennings, 2012, this issue). Instead, scholars tend to rely on self-report of teachers and school leaders (e.g., Ingram et al., 2004; Symonds, 2004) and/or study data use interventions retrospectively (e.g., Datnow et al., 2007; Petrides & Nodine, 2005). In addition, many studies make causal claims about the outcomes of data use interventions without research designs that are well-suited to making those claims. Few studies are designed to disentangle the effects of data use as distinct from other aspects of complex initiatives that include data use alongside other changes and interventions (Jennings, 2012, this issue; Marsh, 2012, this issue). For example, many studies of comprehensive school or district reforms make claims about the impact of using data on student achievement, but they do so by selecting sites to study because they have demonstrated improvement in student achievement, in the tradition of effective schools research (e.g., Armstrong & Anthes, 2001; Snipes, Doolittle, & Herlihy, 2002). Despite the co-occurrence of these reform efforts and high achievement, these studies cannot demonstrate that it is the data use aspect of the reform effort, rather than the addition of resources or a focus on achievement generally, for example, that leads to increased student achievement. Higher quality studies will advance our ability to draw claims about the outcomes of data use interventions, as well as the processes by which these outcomes are produced.
CONTRIBUTIONS OF THIS SPECIAL ISSUE
The articles in this special issue address these limitations. They meticulously review existing empirical literature on data use interventions to expand our understanding of the processes, contexts, and outcomes of these interventions. These articles go beyond describing data use interventions to analyze how and why the interventions unfold the way they do in districts, schools, and classrooms. In so doing, they offer a corrective to the idealized views of data use interventions often portrayed in the descriptive and prescriptive literatures. They uncover the positive and negative ways that data use interventions are actually used in practice, the ways in which context may limit the potential for change, and the mixed impact of these interventions.
Second, taken as a whole, these articles provide guidance for understanding the relationship between the processes and outcomes of data use interventions. Drawing on existing empirical research, they show how processes of data use can vary substantially from school to school and district to district depending on the context. They further show how variations in the processes of data use matter for the outcomes of data use interventions.
Third, the articles illuminate the overlapping social, political, and organizational contexts of data use interventions. Drawing together findings from existing research, these articles contribute insights about how informal social networks; political upheaval about the goals, efficacy, and governance of public education; and intersecting organizational factors influence the processes and outcomes of data use interventions. They also offer new and compelling conceptual and theoretical frameworks that help explain how environmental factors interact and how context matters for data use.
Finally, the articles attend to the quality of studies of data use interventions, especially the degree to which their research designs allow them to investigate issues of data use directly. The authors suggest a range of ways that future studies may be better designed to provide higher quality research on data use interventions.
The articles in this issue take stock of what we know about data use interventions, suggest conceptual frameworks for investigating them, and propose agendas for future research to move the field toward more systematic, theoretically grounded, and rigorous research on the data use phenomenon. The first three articles survey existing literature on different kinds of data use interventions. The final two articles investigate two key aspects of the social context within which these data use interventions unfold: social relations and power and politics.
We begin with an article by Julie Marsh (2012, this issue) that presents a comprehensive literature review of the various programs and materials designed to support educators in accessing, interpreting, and responding to data. In "Interventions Promoting Educators' Use of Data: Research Insights and Gaps," Marsh uses a conceptual framework derived from the theory of action articulated by data advocates and the literature on data use. She provides evidence for the importance of interventions that promote safety and trust, target multiple points in data use processes, and provide opportunities for collaboration among actors; however, she also highlights the ways that comprehensive data reforms and data use tools are supported and undercut by complex and interrelated contextual factors. Marsh finds mixed evidence on outcomes of the data use support interventions and mixed quality of studies. She concludes with recommendations for research that fills in gaps in the literature, addresses concerns of policy makers and practitioners, and improves the quality of research on data use.
Next, in "Getting at Student Understanding: The Key to Teachers' Use of Test Data," Jonathan Supovitz (2012, this issue) focuses his attention on the ways that teachers use information from interim and medium-cycle assessments to understand the nature of student learning and make changes in their practice. Supovitz finds that these assessments can give teachers important information about student learning when they are designed to provide data about students' developmental path toward a learning goal, students' thinking processes, or students' misconceptions. He provides evidence that with information about student understanding (not just student achievement), teachers can use assessment data to change their classroom instruction to improve student learning. Based on his review of the literature, Supovitz develops a conceptual framework to guide future research on the relationships between assessment design, teacher use of these assessments, teachers' instructional practice, and student learning.
Jennifer Jennings (2012, this issue) investigates the role of accountability policy in teachers' data use in her article, "The Effects of Accountability System Design on Teachers' Use of Test Score Data." Rather than seeing accountability policy as monolithic and uniform, Jennings argues that it is a multifaceted policy that varies considerably from state to state. She identifies five key features of accountability systems that potentially matter for the way teachers use data: the amounts and sources of pressure, locus of pressure, goals for student performance, features of assessments, and scope of the accountability system. Although Jennings finds few studies of how teachers actually use data associated with accountability systems, she draws on evidence from a number of fields to hypothesize these connections. She then employs this analysis to suggest future research that would more fully flesh out how these features of accountability systems, in interaction with organizational and individual factors, influence whether teachers use data in productive or distortive ways.
We then turn to articles that investigate the social and political context within which data use interventions unfold. In "Data, Dyads, and Dynamics: Exploring Data Use and Social Networks in Educational Improvement," Alan Daly (2012, this issue) argues that social network theory provides important concepts and a methodological approach for understanding the relationship between the social context and the processes of data use. He argues that five concepts from social network theory (embedded relationships, social position, boundary spanners and brokers, quality of social interactions, and subgroups) can enhance our understanding of data use processes. He also shows how social network methods can contribute to the systematic investigation of the role of social relations in data use. He closes by presenting a research agenda to investigate the role of social relations in data acquisition, interpretation, and use.
Finally, in "The Politics of Data Use," Jeffrey Henig (2012, this issue) draws on empirical and theoretical work from political science to offer new conceptual tools for understanding the relationship between data use interventions and the underinvestigated contexts of power and politics. Henig emphasizes the importance of understanding data use interventions as a part of broader changes in education that have ushered in new actors and created new political alliances around various kinds of data use interventions. Turning much of the conventional wisdom about data use interventions on its head, Henig articulates the limitations of data use interventions to circumvent political influence on decision making and instead suggests the potential of politics as a route through which data might be used to improve education and promote democratic ends. Based on these insights, he suggests directions for future research.
The special issue closes with a series of commentaries from leaders in the field of data use: Warren Simmons, executive director of the Annenberg Institute for School Reform; Melissa Roderick, professor at the University of Chicago and a senior director of the Consortium on Chicago School Research; and Janet Weiss, professor of public policy who is currently serving as a vice provost and dean at the University of Michigan. These commentaries offer fresh perspectives, raise important themes, and explore the implications of these articles for research and practice.
REFERENCES

Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to raise student achievement. American School Board Journal, 188(11), 38–41.
Bernhardt, V. L. (1998). Data analysis for comprehensive schoolwide improvement. Larchmont, NY: Eye on Education.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Booher-Jennings, J. (2005). Below the bubble: Educational triage and the Texas accountability system. American Educational Research Journal, 42, 231–268.
Boudett, K. P., City, E. A., & Murnane, R. J. (Eds.) (2005). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.
Burch, P., & Hayes, T. (2009). The role of private firms in data-based decision making. In T. J. Kowalski & T. J. Lasley, II (Eds.), Handbook of data-based decision making in education (pp. 54–71). New York: Routledge.
Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A cross-state analysis. Educational Evaluation and Policy Analysis, 24, 305–331.
Christman, J., Neild, R., Bulkley, K., Blanc, S., Liu, R., Mitchell, C., & Travers, E. (2009). Making the most of interim assessment data: Lessons from Philadelphia. Philadelphia, PA: Research for Action.
Coburn, C. E. (2010). The Partnership for District Change: The challenges of evidence use in a major urban district. In C. E. Coburn & M. K. Stein (Eds.), Research and practice in education: Building alliances, bridging the divide (pp. 167–182). New York, NY: Rowman & Littlefield.
Coburn, C. E., Honig, M. I., & Stein, M. K. (2009). What's the evidence on districts' use of evidence? In J. Bransford, D. J. Stipek, N. J. Vye, L. Gomez, & D. Lam (Eds.), Educational improvement: What makes it happen and why? (pp. 67–86). Cambridge, MA: Harvard Education Press.
Coburn, C. E., Toure, J., & Yamashita, M. (2009). Evidence, interpretation, and persuasion: Instructional decision making in the district central office. Teachers College Record, 111(4), 1115–1161.
Coburn, C. E., & Turner, E. O. (2012a). Putting the use back in data use: An outsider's contribution to the measurement community's conversation about data use. Measurement: Interdisciplinary Research and Perspectives, 9(4), 227–234.
Coburn, C. E., & Turner, E. O. (2012b). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspectives, 9(4), 173–206.
Copland, M. (2003). Leadership of inquiry: Building and sustaining capacity for school improvement. Educational Evaluation and Policy Analysis, 25, 375–395.
Daly, A. J. (2012). Data, dyads, and dynamics: Exploring data use and social networks in educational improvement. Teachers College Record, 114(11).
Data Quality Campaign. (2011). States could empower stakeholders to make education decisions with data . . . but they haven't yet. Retrieved from http://www.dataqualitycampaign.org/files/DFA2011%20Mini%20report%20findings%20Dec1.pdf
Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-performing school systems use data to improve instruction for elementary students. Los Angeles, CA: University of Southern California, Center on Educational Governance.
Dee, T. S., & Jacob, B. (2009). The impact of No Child Left Behind on student achievement. NBER Working Paper. Cambridge, MA: National Bureau of Economic Research.
Earl, L. M. (2007). Leadership for evidence-informed conversations. In L. M. Earl & H. Timperley (Eds.), Professional learning conversations: Challenges in using evidence for improvement (pp. 43–52). Dordrecht, The Netherlands: Springer.
Englert, R. M., Kean, M. H., & Scribner, J. D. (1977). Politics of program evaluation in large city school districts. Education and Urban Society, 9(4), 429–450.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199–208.
Gottfried, M., Ikemoto, G., Orr, N., & Lemke, C. (2011). What four states are doing to support local data-driven decision-making: Policies, practices, and programs (Issues & Answers Report, REL 2012–No. 118). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Mid-Atlantic. Retrieved from http://ies.ed.gov/ncee/edlabs
Hallett, T. (2010). The myth incarnate: Recoupling processes, turmoil, and inhabited institutions in an urban elementary school. American Sociological Review, 75(1), 52–74.
Halverson, R., Grigg, J., Prichett, R., & Thomas, C. (2007). The new instructional leadership: Creating data-driven instructional systems in schools. Journal of School Leadership, 17(2), 158–193.
Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J. A., & Wayman, J. C. (2009). Using student achievement data to support instructional decision making (No. NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Henig, J. R. (2012). The politics of data use. Teachers College Record, 114(11).
Honig, M. I., & Coburn, C. E. (2008). Evidence-based decision making in school district central offices: Toward a research agenda. Educational Policy, 22(4), 578–608.
Honig, M. I., Copland, M. A., Rainey, L., Lorton, J. A., & Newton, M. (2010). Central office transformation for district-wide teaching and learning improvement. Seattle, WA: Center for the Study of Teaching and Policy.
Horn, I. S., & Little, J. W. (2010). Attending to problems of practice: Routines and resources for professional learning in teachers' workplace interactions. American Educational Research Journal, 47(1), 181–217.
Hout, M., & Elliott, S. W. (Eds.). (2011). Incentives and test-based accountability in public education. Board on Testing and Assessment, Division of Behavioral and Social Sciences and Education, National Research Council. Washington, DC: National Academies Press.
Ikemoto, G. S., & Honig, M. I. (2010). Tools to deepen practitioners' engagement with research: The case of the Institute for Learning. In C. E. Coburn & M. K. Stein (Eds.), Research and practice in education: Building alliances, bridging the divide (pp. 93–108). New York, NY: Rowman & Littlefield.
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the data-driven mantra: Different conceptions of data-driven decision making. In P. A. Moss (Ed.), Evidence and decision making (pp. 105–131). Malden, MA: National Society for the Study of Education.
Ingram, D., Louis, K. S., & Schroeder, R. G. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106, 1258–1287.
Jennings, J. L. (2012). The effects of accountability system design on teachers' use of test score data. Teachers College Record, 114(11).
Kennedy, M. M. (1982). Working knowledge and other essays. Cambridge, MA: The Huron Institute.
Kennedy, M. M. (1984). How evidence alters understanding and decisions. Educational Evaluation and Policy Analysis, 6(3), 207–226.
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes, and lessons from three urban districts. American Journal of Education, 112(4), 496–520.
Knapp, M. S., Swinnerton, J. A., Copland, M. A., & Monpas-Huber, J. (2006). Data informed leadership in education. Seattle, WA: Center for the Study of Teaching and Learning.
Lasky, S., Schaffer, G., & Hopkins, T. (2008). Learning to think and talk from evidence: Developing system-wide capacity for learning conversations. In L. M. Earl & H. Timperley (Eds.), Professional learning conversations: Challenges in using evidence for improvement (pp. 95–108). Dordrecht, The Netherlands: Springer.
Little, J. W., & Curry, M. W. (2008). Structuring talk about teaching and learning: The use of evidence in protocol-based conversation. In L. M. Earl & H. Timperley (Eds.), Professional learning conversations: Challenges in using evidence for improvement (pp. 29–42). Dordrecht, The Netherlands: Springer.
Little, J. W., Gearhart, M., Curry, M., & Kafka, J. (2003). Looking at student work for teacher learning, teacher community, and school reform. Phi Delta Kappan, 83(5), 184–192.
Love, N. B., Stiles, K. E., Mundry, S. E., & DiRanna, K. (Eds.) (2008). The data coach's guide to improving learning for all students: Unleashing the power of collaborative inquiry. Thousand Oaks, CA: Corwin Press.
Marsh, J. A. (2012). Interventions promoting educators' use of data: Research insights and gaps. Teachers College Record, 114(11).
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education: Evidence from recent RAND research (OP-170). Santa Monica, CA: RAND Corporation.
Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision making in schools: Teacher access, supports and use. Report prepared for U.S. Department of Education, Office of Planning, Evaluation and Policy Development. Prepared by SRI International, Menlo Park, CA.
Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level from accountability to instructional improvement. Report prepared for U.S. Department of Education, Office of Planning, Evaluation and Policy Development. Prepared by SRI International, Menlo Park, CA.
National Center for Education Statistics. (n.d.). 20 states win grants for longitudinal data systems. Retrieved from http://nces.ed.gov/programs/slds/fy09arra_announcement.asp
Nichols, B. W., & Singer, K. P. (2000). Developing data mentors. Educational Leadership, 57(5), 34–37.
Petrides, L., & Nodine, T. (2005). Anatomy of school system improvement: Performance-driven practices in urban school districts. San Francisco, CA: Institute for the Study of Knowledge Management in Education and New Schools Venture Fund.
Roderick, M. (2012). Drowning in data but thirsty for analysis: Commentary on "Interventions to Promote Data Use." Teachers College Record, 114(11).
Simmons, W. (2012). Data as a lever for improving instruction and student achievement. Teachers College Record, 114(11).
Smylie, M. A., & Corcoran, T. B. (2009). Nonprofit organizations and the promotion of evidence-based practice in education. In J. D. Bransford, D. J. Stipek, N. J. Vye, L. M. Gomez, & D. Lam (Eds.), The role of research in educational improvement (pp. 111–136). Cambridge, MA: Harvard Education Press.
Snipes, J., Doolittle, F., & Herlihy, C. (2002). Foundations for success: Case studies of how urban school systems improve student achievement. Washington, DC: Council of the Great City Schools.
Stecher, B., Hamilton, L., & Gonzalez, G. (2003). Working smarter to leave no child behind: Practical insights for school leaders. Santa Monica, CA: RAND Education.
Supovitz, J. A. (2006). The case for district-based reform: Leading, building, and sustaining school improvement. Cambridge, MA: Harvard Education Press.
Supovitz, J. A. (2012). Getting at student understanding: The key to teachers' use of test data. Teachers College Record, 114(11).
Supovitz, J., & Klein, V. (2003). Mapping a course for improved student learning: How innovative schools use student performance data to guide improvement. Philadelphia, PA: Consortium for Policy Research in Education.
Symonds, K. W. (2004). After the test: Closing the achievement gaps with data. Naperville, IL: Learning Points Associates, and Bay Area School Reform Collaborative.
Timperley, H. (2008). Evidence-informed conversations making a difference to student achievement. In L. M. Earl & H. Timperley (Eds.), Professional learning conversations: Challenges in using evidence for improvement (pp. 69–80). Dordrecht, The Netherlands: Springer.
U.S. Department of Education. (2009). Race to the Top program: Executive summary. Retrieved from http://www2.ed.gov/programs/racetothetop/executive-summary.pdf
Wayman, J. C., Conoly, K., Gasko, J., & Stringfield, S. (2008). Supporting equity inquiry with student data computer systems. In E. B. Mandinach & M. Honey (Eds.), Linking data and learning (pp. 171–190). New York, NY: Teachers College Press.
Wayman, J. C., & Stringfield, S. (2006). Technology-supported involvement of entire faculties in examination of student data for instructional improvement. American Journal of Education, 112(4), 549–571.
Wayman, J. C., Stringfield, S., & Yakimowski, M. (2004). Software enabling school improvement through analysis of student data. Baltimore, MD: Center for Research on the Education of Students Placed at Risk, Johns Hopkins University.
Weinbaum, E. (2009). Learning about assessment: An evaluation of a ten-state effort to build assessment capacity in high schools. Philadelphia, PA: Consortium for Policy Research in Education.
Weiss, J. A. (2012). Data for improvement, data for accountability. Teachers College Record, 114(11).
Young, V. M., & Kim, D. H. (2010). Using assessments for instructional improvement: A literature review. Education Policy Analysis Archives, 18(19), 1–36.