Amrein-Beardsley, Audrey
Arizona State University
AUDREY AMREIN-BEARDSLEY is an associate professor in the Mary Lou Fulton Teachers College at Arizona State University. Her research interests include educational policy, research methods, and, more specifically, high-stakes tests and value-added measurements and systems. She is also the creator and host of a show titled Inside the Academy, in which she interviews some of the top educational researchers in the academy. For more information, please see http://insidetheacademy.asu.edu. Two of her recent and related publications include “Working with Error and Uncertainty to Increase Measurement Validity,” co-authored with Joshua H. Barnett (Educational Assessment, Evaluation and Accountability, 2012), and “Methodological Concerns about the Education Value-Added Assessment System (EVAAS)” (Educational Researcher, 2008, vol. 37, no. 2).
In this study, the researchers surveyed all 50 states and the District of Columbia to provide an inclusive national growth and value-added model overview.
Audrey Amrein-Beardsley, Joshua Barnett & Tirupalavanam G. Ganesh — 2013
In this article, teacher education researchers examine growing concerns about teacher education programs in America, as well as growing concerns about how to evaluate these programs and hold them internally and externally accountable for the quality of the teachers they graduate. The researchers describe a multi-university, statewide initiative that approached this work and what one of the largest teacher education colleges in the nation did to advance these examinations locally.
Heather Carter, Audrey Amrein-Beardsley & Cory Cooper Hansen — 2011
Teach For America (TFA) graduate students evaluated their methods course instructors significantly lower than traditional students did on an end-of-semester student evaluation instrument. This prompted faculty researchers to investigate how best to meet the needs of these alternatively certified teachers. Implications include suggestions for restructuring teacher preparation programs to best meet the needs of first-year TFA teachers, whose work impacts some of the highest-need students in the country.
Included in this commentary is a discussion of three key challenges that need to be addressed when conducting large-scale, internal evaluations of teacher education programs, an increasingly important undertaking as those on the inside increasingly engage in this empirical work and those on the outside increasingly hold them accountable for doing so.
For decades, policymakers have promulgated legislation that requires schools to hire effective teachers in all classrooms. Simultaneously, the education research community has attempted to define what effective teachers do in the classroom. A decade ago, No Child Left Behind provided a framework for defining effective teachers as “highly qualified,” which required schools to ensure all of their teachers fit the new standard. This standard, however, is no longer appropriate, as continued evidence indicates that the relationship between credentials and achievement is tenuous. Therefore, policymakers and researchers need to revise the term “highly qualified,” and, by utilizing the advances in educational accountability over the previous decade, replace it with a term grounded in practice and directly connected to achievement and effectiveness.
On September 8, 2011, Teachers College Record published a book review of Douglas N. Harris’s recent book Value-Added Measures in Education: What Every Educator Needs to Know. In this commentary, the author takes issue not with the book's content per se but with Harris's overall endorsement of value-added, and with his and others' imprudent adoption of some highly complex assumptions.
Dr. Maxine Greene, distinguished philosopher, scholar, and professor emerita at Teachers College, Columbia University, passed away on May 29, 2014 at the age of 96. A self-proclaimed existentialist, Greene served as an advocate for aesthetic education in American public schools for well over half a century and remained committed to expanding creativity among children by encouraging them to imagine possibilities both within and beyond the classroom.
Margarita Pivovarova, Jennifer Broatch & Audrey Amrein-Beardsley — 2014
Over the last decade, teacher evaluation based on value-added models (VAMs) has become central to the public debate over education policy. In this commentary, we critique and deconstruct the arguments proposed by the authors of a highly publicized study that linked teacher value-added models to students’ long-run outcomes, Chetty et al. (2014, forthcoming), in their response to the American Statistical Association statement on VAMs. We draw on recent academic literature to support our counter-arguments along the main points of contention: the causality of VAM estimates, the transparency of VAMs, the effect of non-random sorting of students on VAM estimates, and the sensitivity of VAMs to model specification.
Jessica Holloway-Libell & Audrey Amrein-Beardsley — 2015
Despite the overwhelming and research-based concerns regarding value-added models (VAMs), VAM advocates, policymakers, and supporters continue to hold strong to VAMs’ purported, yet still largely theoretical, strengths and potentials. Those advancing VAMs have, more or less, adopted and promoted an agreed-upon, albeit “heroic,” set of assumptions without independent, peer-reviewed research in support. These “heroic” assumptions transcend promotional, policy, media, and research-based pieces, but they have never been fully investigated, explicated, or made explicit as a set or whole. These assumptions, though often violated, are often ignored in order to promote VAM adoption and use, and also to sell for-profits’ and sometimes non-profits’ VAM-based systems to states and districts. The purpose of this study was to make obvious the assumptions that have been made within the VAM narrative and that, accordingly, have often been accepted without challenge. Ultimately, sources for this study included 470 distinctly different written pieces, from both traditional and non-traditional sources. The results of this analysis suggest that the preponderance of sources propagating unfounded assertions is fostering a sort of VAM echo chamber that seems impenetrable by even the most rigorous and trustworthy empirical evidence.
In this commentary, the authors discuss the Houston Independent School District's (HISD) highest-stakes use of its contracted value-added system (i.e., the Education Value-Added Assessment System (EVAAS)) to reform and improve student learning and achievement throughout the district's schools. The authors situate their discussion within a related report on the recent release of the Houston Superintendent's own evaluation scores, as well as within the evidence from the recent release of the state's large-scale standardized test scores. They assert that, perhaps, attaching high-stakes consequences to teachers' value-added output in Houston is not working as intended.
Inside the Academy, an online educational historiography, models the innovative use of technology to transmit educational research beyond academia by chronicling the personal and professional journeys of highly esteemed educational researchers and scholars through video interviews. In this study, researchers conducted an in-depth qualitative analysis of twelve honorees’ interview data. Analyses revealed that Inside the Academy has the potential to function as an accessible, relevant research dissemination platform by providing policymakers, practitioners, pre-service teachers, graduate students, and others increased access to open-source information and expert knowledge about foundational and contemporary educational philosophies, salient policy issues, and the research-based practices most prevalent in America’s public school system and beyond.
In this commentary, the authors introduce the idea of artificial conflation, as predicated by Campbell’s Law and as defined by how those with power might compel principals to artificially conflate teachers’ observational scores with their value-added scores to purposefully exaggerate perceptions of validity, via the engineering of conflated correlation coefficients between these two indicators over time.
This commentary details the remarkable relationship between what Rachel Carson evidenced in her revolutionary book Silent Spring and how public officials in our field continue to use control measures, namely high-stakes tests, to monitor and regulate what is happening in America’s public schools.