
Advancing Methodologies to Support Both Summative and Formative Assessments

reviewed by Todd Reeves - February 02, 2016

Title: Advancing Methodologies to Support Both Summative and Formative Assessments
Author(s): Ying Cheng, Hua-Hua Chang (Eds.)
Publisher: Information Age Publishing, Charlotte
ISBN: 1623965950, Pages: 348, Year: 2014

Ying Cheng and Hua-Hua Chang’s book Advancing Methodologies to Support Both Summative and Formative Assessments attempts to “summarize the advances in gaining information and gaining information more efficiently in testing, with an emphasis on formative assessment” (p. xii). They argue that both the roles of assessment in public education and demands placed on assessment systems have greatly expanded. Contemporary assessment systems are increasingly designed to elicit evidence of complex cognitive processes—such as problem solving—and provide diagnostic information concerning student strengths and weaknesses to teachers. This book argues that these demands have pushed the rapid advancement of assessment methodologies.

Cheng and Chang describe various methodologies and technologies intended to improve the efficiency and/or technical character of assessment processes in response to the pace of these developments. This review considers this book holistically, and is written from the perspective of a broadly trained quantitative educational researcher, rather than that of a psychometrician or measurement specialist. This integrated review emphasizes the book’s breadth, depth, applicability, and connection to the existing scholarship in the field rather than its technical merits.

Advancing Methodologies is organized into four parts, each addressing methodological advances intended to improve particular aspects of summative and/or formative assessment processes (e.g., efficiency, validity, and reliability): Part One is “Advances in Making Testing More Efficient”; Part Two, “Advances in Gaining Information of or from Test Items”; Part Three, “Formative Assessment: Gaining Diagnostic Information by Having Subdimensions and Subscores”; and Part Four, “Formative Assessment: Gaining Diagnostic Information by Cognitive Diagnostic Modeling.” The book surveys a variety of methodological advances, including automated test assembly, computer-adaptive and multi-stage testing, estimation, calibration, cognitive diagnostic and multidimensional modeling, and assessment of dimensionality and model fit. As such, Advancing Methodologies reviews numerous measurement models, both frequentist and Bayesian.

The book’s discussions are generally theoretical and highly technical. Some chapters describe highly specific applications of methodological advancements (e.g., potential applications of clinical trial sequential testing to educational assessment, and cognitive diagnosis in the context of computer-adaptive testing), whereas others are more broadly relevant to diverse audiences (e.g., multidimensional Item Response Theory and Rasch models, and assessment of dimensionality). Empirical results illustrate content in a few select chapters.

The book’s back cover promises “state-of-the-art coverage on new methodologies to support traditional summative assessment [and]…emerging formative assessments” and it certainly fulfills this promise with minimal exception. Each chapter describes a specific, contemporary, and active area of research, development, and/or application in the field (e.g., on-the-fly multi-stage testing, online item calibration in computer-adaptive testing, and multidimensional Rasch models). I found many of the topics covered to be novel, yet each chapter offers excellent conceptual and/or technical background for its respective topic. These introductions generally identify problems or a history of problems related to traditional assessment methods (e.g., efficiency, accuracy, and reliability) and how methodological advances address them.

Cheng and Chang deserve commendation for their depth of coverage. Each chapter provides rich information concerning the methodology it describes. The authors outline a variety of approaches that can be employed for each methodological advancement, including the consequences and trade-offs of selecting a particular approach (e.g., stage length in multi-stage testing in Chapter Two, and assignment strategies for online calibration in Chapter Four). In some cases, the authors even describe the bounds of application of particular methods, the conditions under which particular approaches are optimal, and common challenges that may be encountered when conducting such work (e.g., challenges related to sample sizes and numbers of items).

Links to extensive external resources are provided when content is not explored in depth due to space constraints. Each chapter’s discussion of methodology is grounded firmly in past and present literature. Methodological advances are situated within their full historical development, although the Preface implies that most of the advances in these areas are recent (e.g., within the last five years).

Three other favorable aspects of the content are notable. First, the authors readily identify areas in the literature where a particular methodology is underdeveloped and ripe for additional contributions. Some authors even pose specific outstanding research questions to be addressed by future studies. Second, the authors specify relevant software packages that can be used to implement the methodologies, often coupled with information concerning their capabilities and limitations. Third, several authors draw connections between focal and better-known content. In Chapter Ten, the authors note that between-item multidimensionality in multidimensional Rasch models is analogous to simple structure in factor analytic contexts. These features should be instructive for readers hoping to begin work in these areas and contribute to these conversations.
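The between-item multidimensionality noted above can be made concrete with a minimal sketch, not drawn from the book itself: under a between-item multidimensional Rasch model, each item loads on exactly one latent dimension, just as simple structure in factor analysis assigns each indicator to a single factor. The item-to-dimension assignment below is hypothetical, chosen only for illustration.

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response given
    person ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical between-item structure: items 0-2 measure dimension 0,
# items 3-5 measure dimension 1 (analogous to simple structure in
# factor analysis, where each indicator loads on one factor).
item_dimension = [0, 0, 0, 1, 1, 1]

def multidim_rasch_p(theta_vec, b, item):
    """Between-item multidimensional Rasch model: only the single
    dimension assigned to the item contributes to the response."""
    return rasch_p(theta_vec[item_dimension[item]], b)

print(rasch_p(0.0, 0.0))                    # 0.5 when ability equals difficulty
print(multidim_rasch_p([1.0, -1.0], 0.0, 4))  # item 4 draws only on dimension 1
```

Within-item multidimensionality, by contrast, would let a single item depend on multiple entries of `theta_vec`, the analogue of cross-loadings in factor analysis.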

My main critique of Advancing Methodologies concerns its construal of assessment. First, the described methodological advances are largely relevant to large-scale assessment processes rather than to the classroom level. Educational assessment broadly comprises processes at both of these levels, and classroom assessment is more likely to be formative in nature (Black & Wiliam, 1998; Kingston & Nash, 2011; Pellegrino, Chudowsky, & Glaser, 2001). Second, and more importantly, the book emphasizes methodological advances with respect to only some components of the assessment process (e.g., test assembly, calibration, estimation, and dimensionality assessment) to the exclusion of others. Advances across the full assessment process—such as automated item generation, longitudinal data systems, and automated scoring of constructed-response and performance assessments—all of which hold promise for improving formative assessment, are unfortunately not addressed.

Notwithstanding this critique, Advancing Methodologies to Support Both Summative and Formative Assessments is a valuable technical resource for the fields of psychometrics and measurement, and for assessment professionals and scholars working in these realms. As an academic, I found the book to be an excellent primer on recent methodological developments. The text is less likely to be useful for teachers and school leaders, although they are likely not its intended audience. Given that the text is advanced and highly technical, it should be most accessible to individuals who have already received formal training in assessment, measurement, psychometrics, mathematical statistics, and calculus. Advancing Methodologies would also prove instructive as a supplemental resource in advanced graduate coursework in measurement and psychometrics, though the book does not contain an index, datasets, or corresponding exercises.



Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. London, UK: Granada Learning.

Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and a call for research. Educational Measurement: Issues and Practice, 30(4), 28–37.

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academies Press.

Cite This Article as: Teachers College Record, Date Published: February 02, 2016. https://www.tcrecord.org, ID Number: 19377.


About the Author
  • Todd Reeves
    Northern Illinois University
TODD REEVES is an Assistant Professor of Educational Research and Evaluation at Northern Illinois University. His research focuses on problems related to assessment, teacher education and development, and technology, as well as problems that exist at the points of intersection among these domains. His work has been published in Teaching and Teacher Education, Professional Development in Education, and Journal of Education for Teaching. His current projects center on teacher education for data use, technology-enhanced assessment of argumentation skills, and the design of student surveys for assessment of instructional practice.