Data as a Lever for Improving Instruction and Student Achievement

by Warren Simmons - 2012

This commentary draws on the articles in this issue to underscore the importance of community engagement and districtwide capacity building as central to efforts to use data to inform accountability and choice, along with school and instructional improvement. The author cautions against treating data as an all-purpose tool absent adequate attention to developing solutions to the problems data illuminate.

The articles in this issue illuminate the perils and promise of the push to develop large-scale data systems fostered by No Child Left Behind (2001) and expanded recently by the Race to the Top federal education program. As the articles by Marsh (2012, this issue) and Jennings (2012, this issue) attest, data are often touted as the equivalent of an educational Swiss Army knife: a tool fit for a variety of purposes, including informing student, educator, and school accountability; instructional improvement; school choice; teacher compensation; professional development; school budgeting; and other important functions. The effects of data use depend on the comprehensiveness and validity of the measures used to generate results, their suitability for the purposes being served, the ability of users to make appropriate and valid inferences, and the ability of the system and individuals within it to respond to the results in a timely and effective manner (Jennings, 2012; Marsh, 2012; Supovitz, 2012, this issue). Murnane, City, and Singleton (2008) summarized the multiple challenges districts face in meeting these demands in an article describing the design and use of the MyBPS data system in Boston Public Schools.
While citing considerable progress over the course of this effort, Murnane and his colleagues emphasized the time needed to create the right mix of assessments, develop the analytic capabilities of staff (teachers, principals, central office administrators, and so on), and generate the portfolio of tools required to address the needs the data revealed. On this last point, Murnane et al. underscored the difficulty that Boston Public Schools encountered in developing instructional tools that were responsive to the sizable variations both in student needs and in the knowledge and skills of the educators spread across Boston's 134 public schools. Building on Murnane's observations, Marsh (2012), in this issue, reviews research on the impact of data use on knowledge and practice. Marsh highlights the ways that districts often struggle to create curricular and instructional resources that respond to the diverse needs within and across schools after these needs are revealed (and now publicly shared) by their new data systems. Teachers' frustrations with the paucity of tools that promote academic achievement among English language learners and students with disabilities were echoed in our own focus groups with educators in Boston, conducted as part of a transition study (Aspen Institute and Annenberg Institute for School Reform, 2006) commissioned by departing superintendent Thomas Payzant. These concerns reflect the importance of ensuring that district implementation of data systems to strengthen accountability does not outstrip the organization's ability to put effective tools in the hands of educators and schools with varying degrees and kinds of needs. A lack of balance between the accountability and capacity-building uses of data undermines reciprocal accountability, and it fosters teaching to the test and the outright cheating that damaged the credibility of reform efforts in Atlanta, the District of Columbia, and Baltimore in the past year.
THE UNDEREXAMINED ROLE OF THE DISTRICT IN DATA USE

The district's, as opposed to the school's, role in data use for accountability and improvement is a critical issue that has thus far escaped attention. As Marsh noted, most existing research on data use focuses on its impact on the knowledge, skills, and practices of teachers and school leaders; overwhelmingly, the school has been the unit of analysis when assessing the impact of data on outcomes and practices. In comparison, there has been little research on the effects of data use on the policies, structures, practices, and beliefs of district central and intermediate offices, not to mention school board policies and actions. The Annenberg Institute aimed to address that lack by developing a "smart district" framework (School Communities That Work, 2002; Ucelli, Foley, & Mishook, 2007), which outlines what districts are responsible for:

- providing schools, students, and teachers with needed and timely supports and interventions;
- ensuring that schools have the power and resources to make good decisions; and
- making decisions and holding people throughout the system accountable for using indicators of school and district performance and practices.

The ability of teachers and entire schools to use data to improve practice is mediated by district capacity to accomplish the first two items, not just the third. And although a bevy of studies point to district weaknesses in all three areas (e.g., Christman & Corcoran, 2002; Corcoran, 2007; Honig, Copland, Rainey, Lorton, & Newton, 2010; Supovitz, 2006), research on data systems has mostly been preoccupied with their effects on students, teachers, and schools rather than on the policies, practices, and cultures of the larger system.
Putting more and better data in the hands of teachers and principals has the potential to enhance their effectiveness and accountability only if the district's resources and contractual agreements provide:

- time for school staff to meet on a regular basis to analyze data;
- professional development that strengthens educators' ability to make appropriate inferences and develop appropriate supports and/or interventions within a particular school; and
- effective tools and strategies for needs that individual schools struggle to address on their own (e.g., supporting the achievement of English learners and students with severe disabilities).

When a district lacks capacity in one or more of these areas, its use of data can create an imbalance between accountability and support. Left unexamined, this imbalance can promote what Jennings (2012), in this issue, calls "distortive" data use. For instance, Jennings characterized cheating (when some educators improve results by altering test scores in the face of strong accountability pressures and weak guidance from the district) as an extreme case of distortive data use. Her article underscores the importance of increasing research on the district's role in supporting central office and school-based educators' ability to use data as a lens to diagnose student and school performance and as a compass to guide instructional and organizational improvement. In the absence of research on how districts develop and support what Jennings calls "productive" rather than "distortive" uses of data, practitioners and policy makers may overlook some of the levers that districts (or alternative systems of schools, for that matter) might use to generate meaningful and sustainable improvement in practice and student outcomes. This chronic blind spot about the district's role in using data to support school improvement is reflected in the four school turnaround models promoted by the U.S. Department of Education's School Improvement Grant (SIG) program.
These models imply a set of district supports without being explicit about their nature beyond policies governing school staffing and management. Most of the actions called for in these models occur at the school rather than the district level and are documented and evaluated accordingly. However, as word about the struggles of SIG implementation spreads (Klein, 2012), lack of district capacity is a recurring theme in the barriers that failing schools must overcome to succeed. Rather than being a new lesson, lack of district capacity has been a thorn in the side of various iterations of school reform, from comprehensive research-based school designs (Rowan, Correnti, Miller, & Camburn, 2009) to the implementation of small schools and learning communities in high schools (e.g., Martinez & Harvey, 2004; Wasley et al., 2000). It's surprising, then, that research on data use continues to overlook the role that districts play in using data to foster organizational accountability and district supports for teaching and learning.

VARYING APPROACHES TO THE DISTRICT ROLE

Part of the difficulty here is that districts have different theories of action when it comes to their role in supporting teaching and learning at the school level. These theories vary not only across districts but also within them, over fairly short periods of time. In From Smart Districts to Smart Education Systems, I outlined three broad theories of action guiding district support to schools: professional learning communities, managed instruction, and portfolios of schools (Simmons, 2007). Rather than being entirely mutually exclusive, these approaches are often blended, with one superseding the others in the eyes of district leadership. When theories of action shift rapidly or are layered on top of one another, it can be challenging to use data to monitor and assess the effectiveness of district practice in supporting schools.
For example, for 6 years prior to Joel Klein's appointment as chancellor of the New York City Department of Education (NYCDOE) in 2002, the district pursued a mix of managed instruction and professional learning communities as its approach to supporting improvement at the school level (Phenix, Siegel, Zaltsman, & Fruchter, 2005). Through its 32 K-8 Community School Districts, High School Division, and Chancellor's District for low-performing schools, the NYCDOE managed schools through a mixture of governance and support structures. These structures supported teacher and principal collaboration and planning (professional learning communities) while also providing instructional guidance in core subjects through voluntary or mandated curricula and curricular frameworks. Schools with higher levels of performance were granted more autonomy to operate through their professional learning communities, whereas struggling schools experienced intensive scrutiny and mandatory supports for curriculum and professional development. During this era, data on the NYCDOE's role in supporting school improvement were both abundant and consistent with the district's theory of action (TOA). Chancellor Klein's arrival, however, was marked by major shifts in the department's TOA (Hill, 2011). By swiftly moving to a performance management or school portfolio model, Klein essentially delegated the responsibility for instructional improvement to schools and partner support organizations while retaining and heightening the NYCDOE's responsibility for accountability and equitable distribution of fiscal and human resources. This shift in TOAs evolved over 9 years and was accompanied by several departmental reorganizations. Using data to monitor and assess the NYCDOE's support for schools during this period would have been difficult given the repeated and rapid changes in organizational structure that occurred during the Klein era.
Although the NYCDOE might represent an extreme when it comes to rapid and repeated organizational and TOA change, students, educators, and community members in Philadelphia, Chicago, New Orleans, the District of Columbia, and Los Angeles, to name a few places, have also been served by districts that have undergone rapid and divergent changes in their organizational structures and TOAs with the comings and goings of different district and school board leaders every 3 years or so. This chameleonic approach to the district's role in school support and accountability makes it difficult to use data to assess the nature and efficacy of organizational support at the district level. The paucity of short-term or longitudinal data on district supports weakens district accountability while making organizational shape-shifting easier, given a lack of clarity about the role and efficacy of system supports and interventions. Furthermore, this dearth of data about districts weakens the knowledge base about systemic reform and replaces evidence with rhetoric that reinforces calls for decentralization while depicting districts as failed organizations.

DATA COLLECTION AND INTERPRETATION: NOT JUST A TECHNICAL ISSUE

The failure to use data to examine the district role has contributed to the polarization of our national debate on education reform along the lines of the ideologies and values associated with different TOAs. Henig's (2012) article in this issue stands out for emphasizing that data use and interpretation occur in a political context. As Henig states eloquently, "Politics is about ideology," which leads individuals subconsciously to filter out information that challenges their assumptions about how public education is and should be structured and delivered.
Whether this occurs subconsciously, as Henig suggests, or quite deliberately, education stakeholders should always consider the lens being used to guide data use and analysis, because these lenses reflect a set of prevailing organizational values, beliefs, and cultures that may not be widely shared by all stakeholders, both inside and outside the halls of academe. For example, most informed education practitioners and policy makers could anticipate the contrasting positions that Joel Klein and Linda Darling-Hammond would offer when confronted with the same data. Applying a mantle of objectivity to data use obscures how the politics of education reform privileges some data and interpretations over others. In addition to analyses being informed by varying ideologies, the increased prominence of what Carrie Leana (2011) called "outsiders" in reform has introduced a new set of disciplines to the analytical table. A table that was once crowded with sociologists, psychologists, anthropologists, and political scientists now includes greater numbers of economists and experts in business and law. As I noted in the Fall 2011 issue of Voices in Urban Education (Simmons, 2011), education reformers, philanthropists, and researchers with backgrounds in business and economics are predisposed to using a performance management perspective in defining problems and solutions in education. This perspective highlights the importance of data as a tool to strengthen human capital development (selection, assignment, evaluation, compensation, and supervision of teachers and principals) as a critical path toward education improvement. In contrast, stakeholders with backgrounds in culture, teaching, and learning stress the importance of instructional capacity building as a primary route to improvement. Although these perspectives are not incompatible, the politics of education reform tends to convert divergent research perspectives into competing ideologies.
Henig's (2012) article in this issue unpacks the influence of politics over what Henig calls the reigning ideology:

Politics can affect the kind of data collected, who has access to them, and the extent to which those data are applied to broad collective problems or the pursuit of narrower agendas. Data systems, and the broader regimes within which they are embedded, can alter the distribution of power and influence, pushing some groups and the values they hold to the margin while giving others stronger holds on the levers of policy change. (p. 172)

Further complications arise from national, state, and local differences in culture and politics. These differences cause the prominence of race, ethnicity, poverty, and class to rise or fall depending on which lens is being used and whose perspective is being privileged. Moreover, the growing politicization and alignment of print and social media with particular ideologies means that analyses made by researchers must now compete with those of journalists and bloggers to gain the attention of policy makers and the broader public at the national, state, and local levels.

THE NEED FOR RESEARCH THAT SUPPORTS BOTH EQUITY AND EXCELLENCE

In light of these factors, analyses that treat data use solely as a technical problem are either naïve or purposeful in ignoring the social, cultural, and political forces that also govern its use. Like Henig, and, to some extent, Jennings in this issue, I believe that research needs to heighten the transparency of the role played by politics and the media in determining the hierarchy of ideologies and disciplines used to convert data into national, state, and local policy. This research helps ensure that subconscious and/or intentional biases are made public rather than remaining hidden and that a range of stakeholders have the information they need to advocate for greater balance in the perspectives used to inform policy and practice.
The prevailing public discourse about data use and analysis focuses on the school as the primary unit of change (Fullan, 2011). But with the growing number of partners that deliver supports to schools (traditional central offices, charter management organizations, unions, contractors, reform support organizations, community-based organizations, and research institutes), far more attention needs to be paid to identifying indicators and data that traditional and alternative school systems can use to inventory these supports and determine their effectiveness. These types of partnerships are becoming less programmatic and more systemic as an increasing number of urban districts adopt portfolio approaches to school management and capacity building. Data that locate success or failure solely within the walls of individual classrooms and schools conceal the roles played by an expanding number of actors and fail to provide information that holds these systemic partners accountable for the quality of their work. Finally, all of this means that a research agenda focused on data use and its role in school reform should not be informed by researchers alone. It may be appropriate to start with researchers, given adequate disciplinary breadth. But the agenda should be vetted with a broad group of stakeholders to foster greater balance and build the kinds of bridges Henig envisions that might reduce partisanship and enhance equity and social justice, while supporting the national preoccupation with excellence. The articles in this issue provide an important foundation for this undertaking.

References

Aspen Institute, Education and Society Program, and the Annenberg Institute for School Reform, Brown University. (2006). Strong foundation, evolving challenges: A case study to support leadership transition in Boston Public Schools. Washington, DC: The Aspen Institute, Education and Society Program, and Providence, RI: Brown University, the Annenberg Institute for School Reform.
Christman, J. B., & Corcoran, T. B. (2002). The limits and contradictions of systemic reform: The Philadelphia story. Philadelphia, PA: Consortium for Policy Research in Education.

Corcoran, T. B. (2007). Teaching matters: How state and local policymakers can improve the quality of teachers and teaching (CPRE Policy Brief RB-48). Philadelphia, PA: Consortium for Policy Research in Education.

Fullan, M. (2011, April). Choosing the wrong drivers for whole system reform (Seminar Series Paper No. 204). East Melbourne, Victoria, Australia: Centre for Strategic Education.

Henig, J. R. (2012). The politics of data use. Teachers College Record, 114(11).

Hill, P. T. (2011). Leadership and governance in New York City school reform. In J. A. O'Day, C. S. Bitter, & L. M. Gomez (Eds.), Education reform in New York City: Ambitious change in the nation's most complex school system (pp. 17–32). Cambridge, MA: Harvard Education Press.

Honig, M., Copland, M. A., Rainey, L., Lorton, J. A., & Newton, M. (2010). Central office transformation for district-wide teaching and learning improvement. Seattle: Center for the Study of Teaching and Learning at the University of Washington.

Jennings, J. L. (2012). The effects of accountability system design on teachers' use of test score data. Teachers College Record, 114(11).

Klein, A. (2012, February 10). SIG program promising despite bumpy first year, urban districts say [Web log post]. Education Week's Blogs, Politics K-12. Retrieved from http://blogs.edweek.org/edweek/campaign-k-12/2012/02/but_even_as_states_districts.html

Leana, C. R. (2011). The missing link in school reform. Stanford Social Innovation Review, 9(4), 30–35.

Marsh, J. A. (2012). Interventions promoting educators' use of data: Research insights and gaps. Teachers College Record, 114(11).

Martinez, M., & Harvey, J. (2004). From whole school to whole system reform: Report of a working conference sponsored by the National Clearinghouse for Comprehensive School Reform in partnership with the Annenberg Institute for School Reform, Consortium for Policy Research in Education, and New American Schools. Washington, DC: The National Clearinghouse for Comprehensive School Reform.

Murnane, R. J., City, E. A., & Singleton, K. (2008). Using data to inform decision making in urban school districts: Progress and new challenges. Voices in Urban Education, 18, 5–13.

Phenix, D., Siegel, D., Zaltsman, A., & Fruchter, N. (2005). A forced march for failing schools: Lessons from the New York City Chancellor's District. Education Policy Analysis Archives, 13(40), 1–24.

Rowan, B., Correnti, R., Miller, R. J., & Camburn, E. M. (2009). School improvement by design: Lessons from a study of comprehensive school reform programs. Madison: University of Wisconsin–Madison, Consortium for Policy Research in Education.

School Communities That Work: A National Task Force on the Future of Urban Districts. (2002). School Communities That Work for results and equity. Providence, RI: Brown University, Annenberg Institute for School Reform.

Simmons, W. (2007). From smart districts to smart education systems: A broader agenda for educational development. In R. Rothman (Ed.), City schools: How districts and communities can create smart education systems (pp. 191–214). Cambridge, MA: Harvard Education Press.

Simmons, W. (2011). What will it take to end inequities in access to effective teaching? Voices in Urban Education, 31, 2–6.

Supovitz, J. (2006). The case for district-based reform. Cambridge, MA: Harvard Education Press.

Supovitz, J. A. (2012). Getting at student understanding: The key to teachers' use of test data. Teachers College Record, 114(11).

Ucelli, M., Foley, E., & Mishook, J. (2007). Smart districts as the key entry point to smart education systems. In R. Rothman (Ed.), City schools: How districts and communities can create smart education systems (pp. 41–56). Cambridge, MA: Harvard Education Press.

Wasley, P. A., Fine, M., Gladden, M., Holland, N., King, S. P., Mosak, E., & Powell, L. C. (2000). Small schools, great strides: A study of new small schools in Chicago. New York, NY: Bank Street College of Education.