Research and Policy Perspectives on Data-Based Decision Making in Education

by Martin Orland - 2015

This article summarizes some significant insights of the articles in this issue from the perspective of public policy, emphasizing their potential resonance in today's policy environment, where data are used for program improvement as well as accountability purposes.

The suite of papers in this special Teachers College Record volume on data-driven decision making in education reflects a burgeoning subdiscipline of scholarship on the topic, one stimulated by the constantly evolving educational policy landscape. For at least two decades, policy makers have recognized the importance of data in education as an accountability tool and have advocated policies for the collection and reporting of such data to fulfill accountability objectives. Early examples at the federal level include the creation of the National Education Goals Panel in 1990 (National Education Goals Panel, 1999) to report annually on national and state educational progress toward the National Education Goals adopted by the president and the nation's governors, as well as requirements by the U.S. Office of Management and Budget for data documenting the effectiveness of federal programs both in and outside of education under the Government Performance and Results Act (GPRA) of 1994.

The argument, or implicit logic model, among policy makers for the role and importance of data for accountability has taken two principal forms that we can label "soft" and "hard" accountability. Under soft accountability, the revelation of comparable public data about the performance of various levels of the system (e.g., states, districts, schools) is assumed to create public pressure on these entities' future performance (National Research Council, 2011). That is, "naming and shaming" jurisdictions by publishing data about student achievement, dropout rates, qualifications of teachers, spending levels, and so on is assumed to create the incentives these entities need to maintain their relative standing on these indicators if they are high or to improve them if they are low.

One prominent example of soft accountability through data is the increasing promulgation of state reports during the 1990s and 2000s providing public metrics on critical indicators for all schools in a state.1 Currently, all 50 states and the District of Columbia publish such report cards annually as a requirement for receiving federal Title I funds.2 Similarly, at the federal level one can point to the dramatic expansion of the National Assessment of Educational Progress (NAEP). Since 2003, NAEP has been required to publish statewide student achievement results in reading and mathematics for fourth and eighth grade every other year,3 and it has just published 12th-grade state achievement findings in 11 states for the first time (National Assessment of Educational Progress, 2013).

Hard accountability goes beyond naming and shaming as a way to incentivize desired behaviors by stipulating major consequences for substandard performance on particular data metrics. There is little doubt that such consequential hard accountability features of the No Child Left Behind Act (NCLB) have, for better or worse, profoundly shaped instructional services in schools since the early 2000s.
For example, analysts have noted substantial shifts in instructional time spent on mathematics and reading over the last decade because of the consequence-laden accountability features of NCLB regarding the achievement of adequate yearly progress in these two subject areas (Center on Education Policy, 2007; National Research Council, 2011). More recently, a hard accountability perspective can be seen in the development of new teacher evaluation data systems that carry real consequences for ineffective teachers identified through student assessment and other data (Doherty & Jacobs, 2013; Hull, 2013; Mead, 2012).

In contrast to the clear salience among policy makers of the role of data in fostering individual and system accountability, the potential of data to directly inform the nature and quality of instructional services has not been as readily recognized. The reasons for this are many. For one thing, the logic model for the role of data in informing and improving teaching and learning is much more complex than the one for accountability. As noted, data use for accountability requires only an assumption about incentivizing behavior, whether through naming and shaming or through clear consequences tied to reported metrics. The model implicitly assumes that once properly incentivized, the affected actors will know (or be able to find out) what is necessary to maintain or improve their efforts and will act accordingly (National Research Council, 2011). What's more, accountability policies are popular with the public (The Center for Education Reform, 2013) and don't in and of themselves require large public resource commitments. One can reasonably characterize them as reform on the cheap.

This is not the case regarding the use of data for program improvement. As made clear throughout this volume, the nature of the data themselves (i.e., whether they are really actionable), the need to tailor the implementation of data-for-improvement initiatives to unique cultural contexts, and the training and ongoing supports needed to convert data first into useful information and then into appropriate action make their use as an effective instrument for program improvement a challenging proposition. It requires coordination and alignment among different parts of the educational system (district, school, teacher), new resource investments, and much trial and error in implementation, given the absence of a strong knowledge base regarding what works in exploiting the potential of data for improvement purposes (Hamilton et al., 2009).

Despite these challenges, there are many reasons to be optimistic that education policy makers at the federal, state, and local levels will embrace significant data-for-improvement agendas over the next decade. One is the data-for-accountability movement itself. This movement, which shows no signs of abating, will continue to lead policy makers to search aggressively for interventions holding promise for increasing student achievement and reducing the achievement gap. Policy makers seem increasingly aware that although stronger accountability systems (both hard and soft) may represent a necessary condition for spurring educational improvement, they are not by themselves a sufficient one (McDonnell, 2013). Incentivizing alone is not enough. It must be teamed with reforms providing the knowledge and capacity that educational delivery systems need to improve the quality of their efforts (McDonnell & McLaughlin, 1982).
In this regard, the potential of data to inform program improvements at the district, school, and classroom levels has emerged as one such promising avenue. For example, it appears as one of the four pillars of educational improvement of the current Obama administration, undergirding its Race to the Top program, school turnaround reform agenda, and other educational initiatives (The White House, Office of the Press Secretary, 2009). Several states and school districts have similarly adopted a data-for-improvement agenda as a policy priority in recent years (Data Quality Campaign, 2011, 2013).

A second, related factor leading policy makers to embrace the concept of data use to improve local education practice is their apparent comfort with a rationalistic perspective on fostering educational improvement. As noted by Mandinach, Friedman, and Gummer (2015, this issue), the U.S. Secretary of Education has frequently spoken about the importance of data in driving evidence-based practice by teachers in classrooms. In this sense, the policy rhetoric around the role of data in local education decision making closely mirrors that of its conceptual cousin, knowledge use (sometimes referred to as knowledge utilization). Both movements have at their core the concept of a more evidence-informed decision-making calculus by education service providers. This perspective, as exemplified by such federal initiatives as the What Works Clearinghouse,4 has reignited decades-old debates within the education research community regarding the nature of evidence and the extent to which a scientific paradigm should ground education knowledge production (Howe, 2009; Johnson, 2009; National Research Council, 2002; Popkewitz, 2004). No such debate is evident among policy makers, however, for whom a rationalistic perspective appears self-evident and is likely to remain so for the foreseeable future (Education Sciences Reform Act of 2002; Strengthening Education Through Research Act of 2014).

A third factor driving policy maker attention to the use of data for improvement purposes is the recent explosion of more robust education data systems at all levels of government. Spurred on by policy needs (e.g., linked student-teacher unit records for accountability purposes), technological breakthroughs (e.g., lower unit costs for data storage, advances in ensuring interoperability across disparate data systems, the ubiquity of portable electronic hardware for both data collection and analysis), and the entrance of new entrepreneurial actors into what have traditionally been relatively insular educational environments, the amount of useful data potentially available to inform program and instructional choices has never been greater (Anagnostopoulos & Bali, 2011).

Both the promise and the challenges of developing a new data culture in our educational institutions, one in which data for program improvement play a central role, can be seen in current federal and state policy efforts to create new student assessment and teacher evaluation systems. The initial policy momentum and desire for data in both these reforms came from accountability concerns. In the case of student assessment, policy makers wished to create summative assessment measures aligned with college- and career-ready standards (Sambolt & Blumenthal, 2013), whereas in teacher evaluation, they wanted to distinguish successful teachers from poor classroom performers (Race to the Top Fund Assessment Program, 2010).
Yet each reform also embraced data use for program improvement as an explicit corollary system reform goal. In the area of student assessment, each of the two federally supported assessment consortia includes interim/benchmark assessments intended to inform instructional decision making within the school year, as well as banks of formative assessment tools to support real-time teaching and learning activities in classrooms (Center for K-12 Assessment & Performance Management at ETS, 2012). These efforts in turn have spurred states and school districts to initiate professional development efforts designed expressly to help principals and teachers use such data for improvement purposes (Data Quality Campaign, 2014). Similarly, with respect to teacher evaluation, emerging state policies, with federal government encouragement (i.e., through ESEA waiver approvals and Race to the Top grants), include requirements that school administrators and teachers develop explicit, mutually agreed-on plans for future professional growth based on teacher evaluation data (Race to the Top Fund, 2009; United States Department of Education, 2012). In addition, over 30 states have formally adopted the process of student learning objectives (SLOs) as a component of their teacher evaluation systems (Lacireno-Paquet, Morgan, & Mello, 2014). Such systems require that sound classroom-level assessment data be established not only to measure SLO attainment but also to guide teachers in monitoring student progress and adjusting instruction accordingly (Center for Assessment, 2013; Lachlan-Haché, Cushing, & Bivona, 2012). As with the policies for implementing new state assessment systems aligned with new standards requirements, unprecedented levels of state and local technical assistance and professional development activity are now under way to enable principals and teachers to use the new teacher evaluation data for these purposes (Data Quality Campaign, 2014).

The articles in this issue begin to provide the intellectual foundation needed to ground a data-for-improvement agenda that can shape and sustain policy actions in this fertile environment. Such an agenda requires attention to at least three distinct elements: conceptual clarity, knowledge production and dissemination, and human capital investment. There are rich examples of each in the articles presented. The Gummer and Mandinach article (2015, this issue) tackles a significant part of the first requirement by providing a conceptual framework and operational definition for data literacy for teaching. Similarly, Gerzon (2015, this issue) provides a clear and compelling depiction of the elements required for successful school-level data use. As I have argued elsewhere (Orland & Anderson, 2013), these efforts are not merely elegant academic exercises but a necessary prerequisite to moving a policy agenda forward. Unless policy makers understand what data literacy for teaching or effective school-level data use is (and is not!) and why it's important for improving the educational enterprise, they cannot be expected to embrace reforms intended to promote and sustain its presence. The policy environment is replete with competing initiatives vying for the attention of policy makers.
A compelling case for why they should pay attention to the data-for-improvement agenda over others begins with a clear definition of the relevant constructs and their importance, which can both rally supporters around a set of core ideas and provide grounding for specific policy actions.

The knowledge agenda around data use for improvement purposes is vast, but as this issue shows, we are beginning to gather some important insights with policy implications. Marsh, Bertrand, and Huguet's (2015, this issue) research shows that knowledge about how to work with adult learners is critical if professional learning communities are to foster appropriate data use in schools, and that educators require a safe environment in which to examine data and identify appropriate actions in response. Similarly, Schildkamp and Poortman's (2015, this issue) study identifies a number of factors (e.g., data availability, data literacy, leadership) that appear to influence the use of data in data teams. And Jimerson and Wayman's (2015, this issue) research suggests a number of practical and feasible ways that districts can improve the quality of their professional development efforts, including improvements in planning to build data use capacity and specific approaches to data system training and knowledge codification. Each of these studies has clear implications for policy leaders and administrators designing professional development and technical assistance strategies around data use for improvement purposes. Datnow and Hubbard's (2015, this issue) literature review notes a number of critical gaps in the current knowledge base, including understanding what data-informed instruction looks like and why data use sometimes fosters positive impacts but at other times does not. Other articles in the issue point to logical extensions of their research that hold promise for improving the craft of data use for program improvement in our schools.

The Institute of Education Sciences has just awarded its first national Research and Development Center on Knowledge Utilization5 and plans to award a second Knowledge Utilization Center in 2015.6 It would be advantageous if one or both of these centers examined how research on data use for program improvement can inform the broader topic of knowledge utilization and, conversely, how insights regarding knowledge utilization can be applied directly to questions of data use for improvement purposes. The two areas of scholarship have similar theoretical foundations, foci for empirical study, and practical implications for school improvement, yet researchers and their networks have tended to reside on one side of the knowledge use/data use fence or the other. Both research agendas would gain from greater synergy with the other.

Several articles in this volume focus on the human capital challenge associated with data use in education for improvement purposes. Bocala and Boudett (2015, this issue), for example, note how few teacher candidates learn the habits of mind needed to engage deeply in data inquiry, while Mandinach and colleagues document the lack of depth in preservice course syllabi that ostensibly pertain to data-driven decision making. Authority over both preservice and in-service requirements represents one of the more direct policy levers available to help ensure that both teachers and administrators have the requisite knowledge and skills to engage with data effectively, a goal that at least now has strong rhetorical support within the policy community.
Policy makers would be wise to take heed of the specific findings, conclusions, and recommendations of Bocala and Boudett and of Mandinach et al. in this regard, including that the content of data use courses be a part of, rather than apart from, real problems of teaching practice. Farley-Ripple and Buttram (2015, this issue) approach the issue of human capital development from another intriguing angle: through conscious attention to the role that teacher networks play in developing data use capacity. One particularly important and policy-relevant insight from this work is the critical role school leaders can play, both as significant knowledge sources in data advice networks and in identifying individuals and knowledge-sharing structures within their buildings (e.g., peer coaching) that can enculturate data use reforms.

Taken together, the articles in this issue suggest a nascent field of research that is beginning to attract important scholarship and is gaining internal coherence as well as policy relevance. This is a timely occurrence given recent advances in both the ubiquity and the accessibility of data to educators, a trend that is sure to accelerate in the years ahead. In this context, it is critical that researchers continue to investigate, and increasingly communicate with policy officials about, how best to exploit the potential of such data, not only for accountability purposes but also for supplying actionable knowledge to foster improved teaching and learning.

Notes

1. See the evolution of the Illinois State Report Card Data for an example of this: http://www.isbe.net/assessment/report_card.htm.
2. See http://www2.ed.gov/programs/titleiparta/state_local_report_card_guidance_2-08-2013.pdf, pp. 5-6.
3. See http://nces.ed.gov/nationsreportcard/about/assessmentsched.aspx.
4. See http://ies.ed.gov/ncee/wwc/
5. See http://ies.ed.gov/funding/grantsearch/details.asp?ID=1466
6. See http://ies.ed.gov/funding/ncer_rfas/randd.asp

References

Anagnostopoulos, D., & Bali, V. A. (2011). Implementing statewide longitudinal student data systems: Lessons from the states. East Lansing: Education Policy Center at Michigan State University.

Bocala, C., & Boudett, K. P. (2015). Teaching educators habits of mind for using data wisely. Teachers College Record, 117(4).

Center for Assessment. (2013). Using baseline data and information to set SLO targets: A part of the SLO toolkit. Retrieved from http://www.nciea.org/wp-content/uploads/7_Using-Baseline-Data-and-Information_7.15.13.pdf

The Center for Education Reform. (2013). America's attitudes towards education reform: Public support for accountability in schools. Retrieved from http://www.edreform.com/wp-content/uploads/2013/12/CER-Accountability-Poll-Results2.pdf

Center for K-12 Assessment & Performance Management at ETS. (2012). Coming together to raise achievement: New assessments for the Common Core State Standards. Retrieved from http://www.k12center.org/rsc/pdf/Coming_Together_April_2012_Final.PDF

Center on Education Policy. (2007). Choices, changes, and challenges: Curriculum and instruction in the NCLB era. Washington, DC: Author.

Data Quality Campaign. (2011). From compliance to service: Evolving the state role to support district data efforts to improve student achievement. Retrieved from http://www.dataqualitycampaign.org/files/1455_From%20Compliance%20to%20Service.pdf

Data Quality Campaign. (2013). Data for action. Retrieved from http://www.dataqualitycampaign.org/files/DataForAction2013.pdf

Data Quality Campaign. (2014). Teacher data literacy: It's about time.
Retrieved from http://www.dataqualitycampaign.org/files/DQC-Data%20Literacy%20Brief.pdf

Datnow, A., & Hubbard, L. (2015). Teachers' use of assessment data to inform instruction: Lessons from the past and prospects for the future. Teachers College Record, 117(4).

Doherty, K. M., & Jacobs, S. (2013). Connect the dots: Using evaluations of teacher effectiveness to inform policy and practice. National Council on Teacher Quality. Retrieved from http://www.nctq.org/dmsView/State_of_the_States_2013_Using_Teacher_Evaluations_NCTQ_Report

Education Sciences Reform Act, 20 U.S.C. § 9501 et seq. (2002). Retrieved from http://www2.ed.gov/policy/rschstat/leg/PL107-279.pdf

Farley-Ripple, E. N., & Buttram, J. (2015). The development of capacity for data use: The role of teacher networks in an elementary school. Teachers College Record, 117(4).

Gerzon, N. (2015). Structuring professional learning to develop a culture of data use: Aligning knowledge from the field and research findings. Teachers College Record, 117(4).

Gummer, E. S., & Mandinach, E. B. (2015). Building a conceptual framework for data literacy. Teachers College Record, 117(4).

Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/practice_guides/dddm_pg_092909.pdf

Howe, K. R. (2009). Positivist dogmas, rhetoric, and the education science question. Educational Researcher, 38(6), 428-440. doi:10.3102/0013189X09342003

Hull, J. (2013). Trends in teacher evaluation: How states are measuring teacher performance. Center for Public Education. Retrieved from http://www.centerforpubliceducation.org/Main-Menu/Evaluating-performance/Trends-in-Teacher-Evaluation-At-A-Glance/Trends-in-Teacher-Evaluation-Full-Report-PDF.pdf

Jimerson, J. B., & Wayman, J. C. (2015). Professional learning for using data: Examining teacher needs and supports. Teachers College Record, 117(4).

Johnson, R. B. (2009). Comments on Howe: Toward a more inclusive scientific research in education. Educational Researcher, 38(6), 449-457. doi:10.3102/0013189X09344429

Lachlan-Haché, L., Cushing, E., & Bivona, L. (2012). Student learning objectives as measures of educator effectiveness: The basics. Washington, DC: American Institutes for Research.

Lacireno-Paquet, N., Morgan, C., & Mello, D. (2014). How states use student learning objectives in teacher evaluation systems: A review of state websites (REL 2014-013). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands. Retrieved from http://ies.ed.gov/ncee/edlabs

Mandinach, E. B., Friedman, J. M., & Gummer, E. S. (2015). How can schools of education help to build educators' capacity to use data? A systemic view of the issue. Teachers College Record, 117(4).

Marsh, J. A., Bertrand, M., & Huguet, A. (2015). Using data to alter instructional practice: The mediating role of coaches and professional learning communities. Teachers College Record, 117(4).

McDonnell, L. M. (2013). Educational accountability and policy feedback. Educational Policy, 27(2), 170-189. doi:10.1177/0895904812465119

McDonnell, L. M., & McLaughlin, M. W. (1982). The Elementary and Secondary Education Act (ESEA) Title IV consolidation.
Southern Review of Public Administration, 5(4), 437-458.

Mead, S. (2012). Recent state action on teacher effectiveness: What's in state laws and regulations? Bellwether Education Partners. Retrieved from http://bellwethereducation.org/sites/default/files/RSA-Teacher-Effectiveness.pdf

National Assessment of Educational Progress. (2013). Grade 12 state program. Retrieved from http://nces.ed.gov/nationsreportcard/pdf/about/schools/Grade12_StateProgramFactSheet.pdf

National Education Goals Panel. (1999). National education goals: Lessons learned, challenges ahead. Retrieved from http://govinfo.library.unt.edu/negp/reports/negp31.pdf

National Research Council. (2002). Scientific research in education. Washington, DC: National Academies Press.

National Research Council. (2011). Incentives and test-based accountability in education. Washington, DC: National Academies Press.

Orland, M., & Anderson, J. (2013). Assessment for learning: What policymakers should know about formative assessment. Washington, DC: WestEd.

Popkewitz, T. S. (2004). Is the National Research Council Committee's report on scientific research in education scientific? On trusting the manifesto. Qualitative Inquiry, 10(1), 62-78. doi:10.1177/1077800403259493

Race to the Top Fund. (2009, November 18). Federal Register, 74(221), 59688-59834.

Race to the Top Fund Assessment Program. (2010, April 9). Federal Register, 75(68), 18171-18185.

Sambolt, M., & Blumenthal, D. (2013). Promoting college and career readiness: A pocket guide for state and district leaders. Washington, DC: American Institutes for Research.

Schildkamp, K., & Poortman, C. (2015). Factors influencing the functioning of data teams. Teachers College Record, 117(4).

Strengthening Education Through Research Act of 2014, H.R. 4366, 113th Cong. (2014). Retrieved from https://beta.congress.gov/113/bills/hr4366/BILLS-113hr4366rfs.pdf

United States Department of Education. (2012). ESEA flexibility. Washington, DC. Retrieved from http://www2.ed.gov/policy/elsec/guid/esea-flexibility/index.html

The White House, Office of the Press Secretary. (2009). Address to joint session of Congress, February 24, 2009 [Press release]. Retrieved from http://www.whitehouse.gov/the_press_office/Fact-Sheet-Expanding-the-Promise-of-Education-in-America/