
Measure, Use, Improve! Data Use in Out-of-School Time


reviewed by Craig A. Mertler - July 19, 2021

Title: Measure, Use, Improve! Data Use in Out-of-School Time
Author(s): Christina A. Russell & Corey Newhouse
Publisher: Information Age Publishing, Charlotte
ISBN: 1648022537, Pages: 330, Year: 2021


In this volume of the series Current Issues in Out-of-School Time, editors Christina A. Russell and Corey Newhouse tackle the topic of measurement and evaluation systems that contribute to the quality improvement of out-of-school time (OST; i.e., afterschool) programs. As professional evaluators, they capitalize on decades of experience partnering “with OST organizations and initiatives to understand implementation, analyze impact, and support improvement” (p. xxv). They describe this volume as a book for OST practitioners and researchers “to share practical lessons and approaches about measurement and data use” (p. xxv). In the Introduction, the editors argue that data use, along with using those data to guide continuous quality improvement, is a significant topic for exploration, especially as researchers and nonprofit leaders try to systematically strengthen their use of information generated by both internal and external assessments designed to drive improvements. The authors of subsequent chapters share promising strategies for measurement and data use, as offered by OST practitioners and researchers. These authors also take an honest and forthright approach, stressing that this work is not easy: collecting and using quality data for program improvement requires investments of time, energy, and resources. The editors note that the chapter authors “raise important questions, make concrete recommendations, and collectively issue a call to action…” (p. xxvi), and they close the Introduction by declaring that the book reflects a priority on equity throughout all chapters.


Part I focuses on making a case for the use of measurement and evaluation in improvement efforts related to OST programs. That case is sometimes hard to make because most OST programs are funded by grants from foundations (to provide free, high-quality programs), and spending those dollars on data use beyond simple compliance reporting can appear counterintuitive. The authors in this section draw directly on their own experiences to show how an investment in, and commitment to, these efforts can benefit the youth served by OST programs. In the first chapter, Regino Chávez shares his experiences as director of evaluation at LA’s BEST, a long-lived OST program that partners with the school district, the City of Los Angeles mayor’s office, and the private sector. His case study emphasizes that evaluations can, and should, be customized to fit the emphases of individual OST programs. Next, Jason Spector, drawing on his work as internal evaluator for the national After-School All-Stars (ASAS) program, stresses the importance of matching a chosen evaluation strategy to the organization’s stage of development and implementation and to its overall needs. He also offers four key lessons for motivating change in organizational decision making. Closing out Part I, Rebecca M. Goldberg et al. share their experiences with the use of data for organizational learning from the perspective of a large, national foundation.


Part II centers on the notion that measurement and data use often overwhelm OST organizations, especially those whose staffs are small or lack dedicated measurement experts. Its authors focus on demystifying data, measurement, and evaluation. Hannah Lantos et al. discuss what counts as data and argue that OST programs should not limit themselves to participant outcomes alone; they also argue that data collection and use should be part of a much larger overall performance management strategy. Betsy Block then examines challenges and strategies for identifying or developing a data management system suited to a particular organization. Next, Tasha Johnson and Aasha Joshi consider processes for building the capacity to use data effectively for program improvement, using the YMCA as their experiential context. In a related chapter, Jessica Donner et al. reflect on lessons about building broader, local capacity, drawn from experiences in the Every Hour Counts program. In the final chapter of this section, Joseph Luesse and Kim Sabo Flores explore the value and meaning of including all participants in an evaluation process, namely those whom the programs are intended to serve: local youth.


Part III addresses the need for, and strategies for, building evaluative thinking skills as a mechanism for fostering program improvement initiatives. Tiffany Berry and Michelle Sloper begin the section by arguing that evaluative thinking should be an essential component of continuous quality improvement (CQI) frameworks; they also offer five strategies for building improvement systems with a focus on evaluative thinking. Next, Jocelyn Wiedow and Jennifer Griffin-Wiesner discuss an approach for engaging program staff in making meaning from data, one that includes a mindset shift from "doing evaluation" to "being evaluative." Through several examples and case studies, Valerie Threlfall considers the inherent value of collecting youth feedback and offers suggestions for making those data more meaningful. Linda Barton et al. recount a decade-long journey to develop, scale, and sustain a statewide system for using data to inform staff professional development and program improvement. Finally, Bryan Hall and Brenda McLaughlin analyze ways in which unintended outcomes of research and evaluation efforts can lead to program improvements, including the transfer of several OST practices into school-day teachers’ classrooms.


Finally, Part IV presents chapters on the use of data and evaluation specifically to improve staff capacity. Jamie Wu et al. offer a statewide case study built around a coaching network, describing how evaluation enhanced the effectiveness and intentionality of coaching in support of quality in the state’s OST programs; they also describe several lessons learned and offer recommendations for implementation. Miranda Yates et al. write that a key element in program improvement efforts is making data accessible to, and actionable by, program staff, which often means honoring the complexity of the system and crossing traditional boundaries. In this section’s final chapter, Jaynemarie Angbah considers using findings about staff perceptions as a launching point for designing professional development for OST staff, targeting both onboarding and retention.


Where Measure, Use, Improve! succeeds most is in its use of case studies presented and discussed by researchers, evaluators, and practitioners. The authors of many chapters provide justifications, insights, strategies, lessons learned, and recommendations regarding data collection and use, measurement, and evaluation from the perspectives of actual programs, both large and small, and of the leaders and evaluators who have engaged in these processes. This goes a long way toward making the lessons and strategies more real, tangible, and practical for the reader. The book is a must-read resource for professionals involved in the complex world of out-of-school time programs, especially those who strive to make those programs the best they can be for our youth.






Cite This Article as: Teachers College Record, Date Published: July 19, 2021
https://www.tcrecord.org, ID Number: 23796


About the Author
  • Craig A. Mertler
    Arizona State University
CRAIG A. MERTLER, Ph.D., is currently an associate professor in the EdD Program in Leadership & Innovation at Arizona State University. He has been an educator for 35 years, 25 of those in higher education, and 12 as an administrator. His teaching focuses on the application of action research to promote educator empowerment, school improvement, job-embedded professional development, and data-informed decision making in educational settings, and he also teaches research methods, statistical analyses, and educational assessment methods. He has served as the research methodology expert on more than 100 doctoral dissertations and master's theses. He is the author of 28 books, 9 invited book chapters, 24 refereed journal articles, and 3 novels. He has presented more than 40 research papers at professional meetings, and has consulted with numerous schools, districts, and universities around the country on classroom-based action research, data-informed decision making, and on the broad topic of classroom assessment.
 