
Volume 109, Number 13 (2007)

by Pamela A. Moss & Philip J. Piety
This volume and the small but growing literature base on which it draws decenter, complement, and challenge studies of the impact of standards-based accountability to consider questions about how education professionals (might) actually interpret and use tests and other sources of evidence to make routine decisions in their daily work; about how these practices shape and are shaped by organizational structures, routines, and cultures; and about the sorts of learning and professional agency that are fostered. The volume also highlights technical infrastructures that have emerged concomitant with the standards-based reform movement to enable the collection, distribution, consolidation, and reuse of evidence as has not been possible before, along with the social practices through which they are implemented.

by David Gamson
“The significance of this new movement is large,” wrote Ellwood P. Cubberley in 1916, praising the growth of scientific measurement in education, “for it means nothing less than the ultimate changing of school administration from guesswork to scientific accuracy; the elimination of favoritism and politics completely from the work; . . . the substitution of professional experts for the old and successful practitioners; and the changing of school supervision from a temporary or a political job, for which little or no technical preparation need be made, to that of a highly skilled piece of professional social engineering” (pp. 325–326). As dean of the Stanford University School of Education, Cubberley was supremely confident, as were many of his contemporaries, that the empirical study of education would uncover timeless educational truths, yield new instructional and administrative practices, and permanently unite educators around a common vision of policymaking.

by James P. Spillane & David B. Miele
While much of the recent educational literature has been devoted to explaining how investigators can produce high quality, practical research evidence (e.g., Cook, 2002; Feuer, Towne, & Shavelson, 2002; Shavelson & Towne, 2002; Slavin, 2002; Towne, Wise, & Winters, 2005), little attention has been paid to how evidence can and should be used by teachers and school leaders. Our goal is not to review the empirical literature on teachers’ and school leaders’ use of evidence, but rather to identify the conceptual tools that frame our thinking about this work. Policymakers often work on the assumption that evidence-based practice should be a simple and straightforward process for school practitioners; that is, practitioners need only follow the guidance offered by evidence—typically equated with qualitative research findings and trends in student achievement data—when deciding what they should do and how they should do it. However, this belief is based on several questionable assumptions.

by Michael S. Knapp, Michael A. Copland & Juli Ann Swinnerton
Educational leaders have always had “data” of some kind available to them when making decisions. Gathering whatever information they could readily access, and drawing on accumulated experience, intuition, and political acumen, leaders have pursued what they viewed as the wisest courses of action. However, in many cases, the data drawn into the decision-making process was unsystematically gathered, incomplete, or insufficiently nuanced to carry the weight of important decisions.

by Gina Schuyler Ikemoto & Julie Marsh
High-stakes accountability policies such as the federal No Child Left Behind (NCLB) legislation require districts and schools to use data to measure progress toward standards and hold them accountable for improving student achievement. One assumption underlying these policies is that data use will enhance decisions about how to allocate resources and improve teaching and learning. Yet these calls for data-driven decision making (DDDM) often imply that data use is a relatively straightforward process. As such, they fail to acknowledge the different ways in which practitioners use and make sense of data to inform decisions and actions.

by William A. Firestone & Raymond González
School districts occupy a special place in the American educational system. They are the locus of accountability to both local and state government. In recent decades, this has meant that they have a responsibility to mobilize evidence to demonstrate that students are being educated (often in a cost-effective manner). As districts grow beyond a certain size, they take on certain staff functions related to curriculum and the support of teaching, so they house experts who use evidence about student achievement to make decisions. Finally, their staff roles often extend to collecting, analyzing, interpreting, and distributing data, especially student assessment or testing data.

by Lauren B. Resnick, Mary Besterfield-Sacre, Matthew Mehalik, Jennifer Zoltners Sherer & Erica Rosenfeld Halverson
As standards-based accountability systems have become common in American schools, performance data on state and national tests have become the bottom line of the educational enterprise, with systems for analyzing student test performance and for raising student test scores garnering substantial interest. Calls for data-driven management seem to focus largely on the use of student performance data to help teachers and administrators respond earlier to signals about how students are likely to perform on end-of-year or periodic state and national tests. This form of assessment-driven education focuses almost entirely on student performance—the output of education. The authors of this chapter believe that there is another essential kind of assessment that is needed if student-based assessment is to have its full, intended effect.

by Frederick Erickson
One of the basic problems in relating educational evaluation and educational practice is that the two activities often take place on radically differing time scales. It is not only a matter of aims—that evaluation of local educational practice as conducted by external researchers (or by the use of instruments designed by external researchers, as in the case of formal testing) may be done “summatively” for purposes of external accountability, and so the information collected may not directly inform the local conduct of instruction and school administration. It is also a matter of timeliness, in that whatever information is collected from a local site of practice may not be analyzed and communicated back to the site in time for frontline service providers to do anything about it, that is, in time for teachers to adapt their ongoing instruction in light of the information provided by the assessment.

by Judith Warren Little
Accounts of teaching experience punctuate teachers’ talk with one another in a range of workplace contexts: in staffroom or hallway encounters, regularly scheduled meetings of one sort or another, professional development events, and increasingly, activities focused on reviews of school assessment data or samples of student work. Such accounts, whether in the form of passing references or extended narratives, form a pervasive feature of professional interaction. Yet in studies that now span several decades, scholars offer quite mixed assessments of them: what they convey of teachers’ knowledge; what they signify regarding teachers’ beliefs about and dispositions toward students, parents, and colleagues; how they function in shaping or changing the norms of professional discourse; and what they offer as resources for problem solving and innovation.

by John B. Diamond & Kristy Cooper
Standards-based accountability policies that include high-stakes testing are currently the dominant school reform approach in the United States. These policies are designed to raise students’ educational outcomes and reduce race and class achievement gaps by linking students’ test scores to rewards and sanctions for both schools and students. Such policies are based on a straightforward set of assumptions: Educators will improve instruction and students will learn more if (1) policymakers clearly articulate rigorous standards, (2) a curriculum that is aligned with the standards is developed and implemented, (3) regular assessments are taken to determine if students are meeting the standards, and (4) rewards and sanctions for schools and/or students based on these test results are imposed. By establishing a clear set of goals, motivating educators and students through incentives, and providing schools with objective data on student learning outcomes, these policies are designed to create more educational equality.

by Daniel T. Hickey & Kate T. Anderson
This chapter aims to introduce several ideas about using evidence from assessment to guide educational decision making. We expect these ideas to be new to many readers, as they reflect the influence of “sociocultural” theories of learning (e.g., Vygotsky, 1986), particularly the theories of “situative” sociocultural theorists (e.g., Greeno & MMAP, 1998). These theories assume that all learning is social change. This contrasts with traditional theories underlying most prior considerations of assessment, which assume that learning is fundamentally about individual (“cognitive”) change.

by Drew Gitomer & Richard A. Duschl
The enactment of the No Child Left Behind Act (NCLB) has resulted in an unprecedented and very direct connection between high-stakes assessments and instructional practice. Historically, the disassociation between large-scale assessments and classroom practice has been decried, but the current irony is that the influence these tests now have on educational practice has raised even stronger concerns (e.g., Abrams, Pedulla, & Madaus, 2003), stemming from a general narrowing of the curriculum, both in terms of subject areas and in terms of the kinds of skills and understandings that are taught. The cognitive models underlying these assessments have been criticized (Shepard, 2000), evidence is still collected primarily through multiple-choice items, and psychometric models still order students along a single dimension of proficiency.

by Peggy Carr, Enis Dogan, William Tirre & Ebony Walton
Large-scale assessments designed to serve as indicators of academic progress in a social context provide invaluable information about the condition of education in America. This unique class of assessments serves as a common yardstick by which the educational progress in states, jurisdictions, and other countries can be compared. Because these assessments serve as monitors across a wide variety of curricula, content standards, and instructional practices, they are uniquely designed and well suited for their task. The focus of this chapter is to define what policymakers need to know to be proficient in this kind of large-scale indicator assessment literacy.

by Christopher Thorn, Robert H. Meyer & Adam Gamoran
This chapter conceptualizes and reviews educational indicator systems, focusing on how evidence delivered by these systems is (or is not) being used now and how it might be used in the future to support professional learning and decision making at the district, state, and national levels. This topic is particularly salient because under the federal No Child Left Behind (NCLB) (2002) law, massive amounts of data are being collected, but states and districts frequently lack the capacity to use the data for school improvement.

by James Paul Gee
This chapter is a reflection on assessment and the implications and uses of assessments from what will be called a “sociocultural-situated” perspective on language, learning, and mind. By “sociocultural” I mean to indicate the importance of the fact that human beings are givers and takers of meaning, and the meanings they give and take can come from no other place than the cultures and social groups within which they act and interact (Gee, 1992, 1996). This is so for much the same reasons Wittgenstein (1958) pointed to in his well-known argument about the impossibility of “private” languages. By “situated” I mean to indicate the importance of the fact that the meanings which humans give and take are always customized to—situated within—actual situations or contexts of use (Gee, 2004, 2005). Humans make meanings that both shape the contexts they are in and are shaped by them (Duranti, 1992).

by Denis C. Phillips
To jump to the heart of the matter, the point of the following discussion is that on the most straightforward reading Webster’s is seriously mistaken, while Marshall and James are much closer to getting things right. Pursuit of the central issues that are at stake will take the discussion into epistemological territory that is well-known to contemporary philosophers of science, and to add further contemporary relevance, it may be asserted at the outset that some strong supporters of the so-called “evidence-based” (or “research-based” or “scientifically based”) policy movement make the same mistake—in addition to others—that was made by the authors of this edition of the great dictionary.

