Measure, Use, Improve! Data Use in Out-of-School Time
reviewed by Craig A. Mertler - July 19, 2021
Title: Measure, Use, Improve! Data Use in Out-of-School Time
Author(s): Christina A. Russell & Corey Newhouse
Publisher: Information Age Publishing, Charlotte
ISBN: 1648022537, Pages: 330, Year: 2021
In this volume of the series Current Issues in Out-of-School Time, editors Christina A. Russell and Corey Newhouse tackle the topic of measurement and evaluation systems that contribute to the quality improvement of out-of-school time (OST; i.e., afterschool) programs. As professional evaluators, they capitalize on decades of experience partnering with OST organizations and initiatives to understand implementation, analyze impact, and support improvement (p. xxv). They describe this volume as a book for OST practitioners and researchers to share practical lessons and approaches about measurement and data use (p. xxv). In the Introduction, the editors argue that data use, along with using those data to guide continuous quality improvement, is a significant topic for exploration, especially as researchers and nonprofit leaders try to systematically strengthen their use of information generated by both internal and external assessments that are designed to drive improvements. The authors of subsequent chapters share promising strategies for measurement and data use, as offered by OST practitioners and by researchers. These authors also take an honest and forthright approach, stressing that this work is not easy. Collecting and using quality data for program improvement requires investments of time, energy, and resources. The editors state that the authors raise important questions, make concrete recommendations, and collectively issue a call to action (p. xxvi). They close the Introduction with the declaration that their book reflects a priority on equity throughout all chapters.
Part I focuses on making a case for the use of measurement and evaluation in improvement efforts related to OST programs. This case is sometimes made more challenging because most OST programs are funded by grants from foundations (to provide free, high-quality programs), and using those dollars for data work beyond simple compliance reporting can appear counterintuitive. In this section, authors share insights directly from their experiences about how an investment of this type, and a commitment to these efforts, can benefit the youth served by OST programs. In the first chapter, Regino Chávez shares his experiences as director of evaluation at LA's BEST, a long-lived OST program that partners with the school district, the City of Los Angeles mayor's office, and the private sector. His case study emphasizes that evaluations can, and should, be customized to fit the emphasis of individual OST programs. Next, Jason Spector, through his work as internal evaluator for the national After-School All-Stars (ASAS) program, stresses the importance of matching a chosen evaluation strategy to the organization's stage of development and implementation, and to its overall needs. He also offers four key lessons for motivating change in organizational decision making. Closing out Part I, Rebecca M. Goldberg et al. share their experiences with the use of data for organizational learning from the perspective of a large, national foundation.
Part II centers on the notion that measurement and data use are often overwhelming for many OST organizations, especially those whose staffs are small or lack dedicated measurement experts. Its authors focus their presentations on demystifying data, measurement, and evaluation. Hannah Lantos et al. discuss what counts as data and argue that OST programs should not limit themselves to participant outcomes alone. They also argue that data collection and use should be part of a much larger overall performance management strategy. Betsy Block then examines challenges and strategies for identifying or developing a data management system suited to a particular organization. Next, Tasha Johnson and Aasha Joshi consider processes for building capacity to use data effectively for program improvement, using the YMCA as their experiential context. In a related presentation, Jessica Donner et al. reflect on lessons learned about building broader, local capacity from experiences in the Every Hour Counts program. In the final chapter of this section, Joseph Luesse and Kim Sabo Flores explore the value and meaning attached to including all participants in an evaluation process, namely those whom the programs are intended to serve: local youth.
Part III addresses the need for, and strategies for, building evaluative thinking skills as a mechanism for fostering program improvement initiatives. Tiffany Berry and Michelle Sloper begin this section by arguing that evaluative thinking should be an essential component of continuous quality improvement (CQI) frameworks. They also offer five strategies for building improvement systems with a focus on evaluative thinking. Next, Jocelyn Wiedow and Jennifer Griffin-Wiesner discuss an approach for engaging program staff in a process of making meaning from data, which includes a mindset shift from "doing evaluation" to "being evaluative." Through several examples and case studies, Valerie Threlfall considers the inherent value in collecting youth feedback and provides suggestions for making those data more meaningful. Linda Barton et al. discuss a decade-long journey to develop, scale, and sustain a statewide system of using data to inform staff professional development and program improvement. Finally, Bryan Hall and Brenda McLaughlin analyze ways in which unintended outcomes of research and evaluation efforts can lead to program improvements, including the transfer of several OST practices into school-day teachers' classrooms.
Finally, Part IV presents chapters on the use of data and evaluation specifically to improve staff capacity. Jamie Wu et al. offer a statewide case study incorporating a coaching network, describing how evaluation enhanced the effectiveness and intentionality of coaching to support quality in the state's OST programs. They describe several lessons learned and offer recommendations for implementation. Miranda Yates et al. write that a key element in program improvement efforts is making data accessible to, and actionable by, program staff. This often means honoring the complexity of the system and crossing traditional boundaries. In this section's final chapter, Jaynemarie Angbah considers using findings about staff perceptions as a launching point for designing professional development for OST staff, targeting both onboarding and retention.
Where Measure, Use, Improve! succeeds most is in its use of case studies as presented and discussed by researchers, evaluators, and practitioners. The authors of many of the chapters provide justifications, insights, strategies, lessons learned, and recommendations regarding data collection and use, measurement, and evaluation from the perspectives of actual programs, both large and small, and of the leaders and evaluators who have engaged in these processes. This goes a long way toward making the lessons learned and strategies more real, tangible, and practical for the reader. This is a must-read resource for professionals involved in the complex world of out-of-school time programs, and especially for those who strive to make those programs the best that they can be for our youth.