How Did That Happen? Teachers’ Explanations for Low Test Scores
by Margaret Evans, Rebecca M. Teasdale, Nora Gannon-Slater, Priya G. La Londe, Hope L. Crenshaw, Jennifer C. Greene & Thomas A. Schwandt - 2019
Context: Educators often engage with student performance data to make important instructional decisions, yet limited research has analyzed how educators make sense of student performance data. In addition, scholars suggest that teachers recognize a relationship between their instruction and student performance data, but this is a relatively untested assumption.
Focus of Study: We investigated if and how teachers referenced instruction as a contributing factor for why students performed in particular ways on assessments. We also studied other explanations that teachers offered for student performance data.
Research Design: Our research team conducted a qualitative case study of six grade-level teams of teachers who met biweekly to make meaning of student performance data. Using data collected from 44 hours of observation of teacher team meetings, 16 individual interviews, and six group interviews with participating teachers, we analyzed the ways in which and the extent to which teachers referenced instruction as a contributing factor to student performance data.
Findings: Teachers connected student performance data to their instruction approximately 15% of the time. Teachers more frequently connected student performance data to student characteristics. Notably, student behavior accounted for 32% of all teacher explanations for student performance. We offer five distinct categories of teachers’ explanations of student performance and the extent to which teachers invoked each category.
Conclusions: The findings in this study build on research on teachers’ attributions for assessment data. In contrast to other studies, our findings suggest that teachers invoked student characteristics in distinct ways when explaining student performance. At times, teachers were knowledgeable about student characteristics, which offered verifiable insights into the “problem” of low achievement. At other times, teachers voiced negative viewpoints of students that served to blame students for their poor performance. We suggest that the practice of data-driven decision making offers an opportunity to bolster educators’ informed judgment and undermine negative, unverifiable claims about children.
- Margaret Evans
Illinois Wesleyan University
MARGARET EVANS is an assistant professor at Illinois Wesleyan University. Dr. Evans supports the development of future K–12 teachers for social justice. Her research interests include teachers’ data-driven decision making and the ways in which educators are responsive to the needs of students living in poverty.
- Rebecca Teasdale
University of Illinois at Urbana-Champaign
REBECCA M. TEASDALE is a doctoral student in educational psychology at the University of Illinois at Urbana-Champaign. Her research focuses on methodology for evaluating adult learning and informal science learning, with a particular emphasis on representing learners’ perspectives in the valuing process.
- Nora Gannon-Slater
Breakthrough Charter Schools
NORA GANNON-SLATER is the director of performance and analytics for Breakthrough Charter Schools, an urban charter school network located in Cleveland, Ohio. In this role, she works with education practitioners and policy makers to adopt research-based policies and practices, and she designs tools, resources, protocols, and routines to improve individual and organizational capacities to incorporate systematic evidence in decision-making processes. Nora has presented her work at local and national conferences as well as state education departments. Most recently, Nora coauthored a special issue on equity and data use in the Journal of Educational Administration (in press) with Amanda Datnow and Jennifer Greene.
- Priya La Londe
Georgetown University
PRIYA G. LA LONDE is assistant professor of teaching at Georgetown University. She studies data and research use, international comparisons of incentivist policy, and social justice education. Her current work examines how performance pay shapes teacher relationships and work in high-performing schools in Shanghai.
- Hope Crenshaw
Teen Health Mississippi/Mississippi First
HOPE CRENSHAW is the training and program coordinator of Teen Health Mississippi/Mississippi First. Her research interests include professional development, parent engagement, equity, and evidence-based interventions.
- Jennifer Greene
University of Illinois at Urbana-Champaign
JENNIFER C. GREENE is a professor of educational psychology at the University of Illinois at Urbana-Champaign. Greene’s work focuses on the intersection of social science methodology and social policy and aspires to be both methodologically innovative and socially responsible. Her research interests include democratic valuing in evaluation and methodological innovation in accountability contexts. Two recent publications in these domains are: Greene, J. C. (2016). Advancing equity: Cultivating an evaluation habit. In S. I. Donaldson & R. Piciotto (Eds.), Evaluation for an equitable society (pp. 49–65). Charlotte, NC: Information Age Publishing; and Greene, J. C. (2015). The emergence of mixing methods in the field of evaluation. Qualitative Health Research, 25(6), 746–750.
- Thomas Schwandt
University of Illinois at Urbana-Champaign
THOMAS A. SCHWANDT is professor emeritus, Department of Educational Psychology, University of Illinois at Urbana-Champaign. His scholarship is focused on theory of evaluation, and his most recent book is Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice (Stanford University Press, 2015).