
An Experimental Study of the Effects of Monetary Incentives on Performance on the 12th-Grade NAEP Reading Assessment


by Henry Braun, Irwin Kirsch & Kentaro Yamamoto — 2011

Background/Context: The National Assessment of Educational Progress (NAEP) is the only comparative assessment of academic competencies regularly administered to nationally representative samples of students enrolled in Grades 4, 8, and 12. Because NAEP is a low-stakes assessment, there are long-standing questions about the level of engagement and effort of the 12th graders who participate in the assessment and, consequently, about the validity of the reported results.

Purpose/Focus: This study investigated the effects of monetary incentives on the performance of 12th graders on a reading assessment closely modeled on the NAEP reading test in order to evaluate the likelihood that scores obtained at regular administrations underestimate student capabilities.

Population: The study assessed more than 2,600 students in a convenience sample of 59 schools in seven states. The schools were heterogeneous with respect to demographics and type of location.

Intervention: There were three conditions: a control and two incentive interventions. For the fixed incentive, students were offered $20 at the start of the session. For the contingent incentive, students were offered $5 in advance and $15 for correct responses to each of two randomly chosen questions, for a maximum payout of $35. All students were administered one of eight booklets comprising two reading blocks (each a passage with associated questions) and a background questionnaire. All reading blocks were operational blocks released by NAEP.
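
The contingent payout rule is easy to state exactly: $5 in advance, plus $15 per correct response on the two randomly chosen questions. A minimal sketch of the two rules (illustrative only; the function names are invented and no such code appears in the study):

```python
# Illustrative sketch of the study's two payout rules. Hypothetical code:
# the function names are invented, not taken from the study materials.

def contingent_payout(correct_on_chosen):
    """$5 in advance, plus $15 for each of the two randomly chosen
    questions answered correctly; maximum payout is $35."""
    assert len(correct_on_chosen) == 2
    return 5 + 15 * sum(bool(c) for c in correct_on_chosen)

def fixed_payout():
    """$20 offered at the start of the session, unconditionally."""
    return 20

# 0, 1, or 2 correct responses yield $5, $20, or $35, respectively.
print([contingent_payout(c) for c in [(False, False), (True, False), (True, True)]])
```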

Research Design: This was a randomized controlled field trial. Students agreed to participate without knowing that monetary incentives would be offered. Random allocation to condition was conducted independently in each school.
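
Because allocation was independent within each school, it can be pictured as a per-school shuffle into the three conditions. A minimal sketch under that assumption (the study specifies only that allocation was random and independent per school; this is not the contractors' actual procedure):

```python
# Minimal sketch of independent within-school random assignment to the three
# conditions. Hypothetical code: the allocation mechanics are assumed.
import random

CONDITIONS = ["control", "fixed_incentive", "contingent_incentive"]

def assign_within_school(student_ids, seed=None):
    """Shuffle one school's roster, then deal students round-robin into
    the three conditions so group sizes stay nearly equal."""
    rng = random.Random(seed)
    ids = list(student_ids)
    rng.shuffle(ids)
    return {sid: CONDITIONS[i % len(CONDITIONS)] for i, sid in enumerate(ids)}

# Each school is randomized independently of every other school.
school_a = assign_within_school(["s01", "s02", "s03", "s04", "s05"], seed=1)
school_b = assign_within_school(["t01", "t02", "t03", "t04"], seed=2)
```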

Data Collection/Analysis: Regular NAEP contractors administered the assessments and carried out preliminary data processing. Scaling of results and linking to the NAEP reporting scale were conducted using standard NAEP procedures.

Findings: Monetary incentives had a statistically significant and substantively important impact on student engagement/effort and on performance, both overall and for most subgroups defined by gender, race, and background characteristics. For both males and females, the effect of the contingent incentive was more than 5 NAEP score points, corresponding to one quarter of the difference in average scores between Grades 8 and 12. In general, the effect of the contingent incentive was larger than that of the fixed incentive, particularly for lower scoring subgroups.
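
As a rough check of the stated magnitude: an effect of just over 5 points that equals one quarter of the Grade 8 to Grade 12 difference implies a grade gap of roughly 20 NAEP points. The arithmetic (the 20-point figure is inferred from the abstract, not reported in it):

```python
# Back-of-envelope check of the effect size. The implied ~20-point gap
# between Grades 8 and 12 is inferred from the abstract's "one quarter"
# claim; it is not a figure reported here.
contingent_effect = 5      # NAEP score points ("more than 5")
fraction_of_gap = 0.25     # "one quarter of the difference"
implied_grade_gap = contingent_effect / fraction_of_gap
print(implied_grade_gap)   # -> 20.0 NAEP points
```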

Conclusions/Recommendations: There is now credible evidence that NAEP may both underestimate the reading abilities of students enrolled in 12th grade and yield biased estimates of certain achievement gaps. Responsible officials should take this into account as they plan changes to the NAEP reading framework and expand the scope of the 12th-grade assessment survey.





Cite This Article as: Braun, H., Kirsch, I., & Yamamoto, K. (2011). An Experimental Study of the Effects of Monetary Incentives on Performance on the 12th-Grade NAEP Reading Assessment. Teachers College Record, 113(11), 2309–2344. http://www.tcrecord.org ID Number: 16008


About the Author
  • Henry Braun
    Boston College
    E-mail Author
    HENRY BRAUN has held the Boisi Chair in Education and Public Policy at Boston College since 2007. From 1979 to 2006, he worked at Educational Testing Service (ETS) in Princeton, New Jersey, where he served as vice president for research management (1989–1999). He has a bachelor’s degree in mathematics from McGill University and M.A. and Ph.D. degrees, both in mathematical statistics, from Stanford University. He has a long-standing involvement in technical analyses of policy issues, especially those involving testing and accountability. He has done considerable work in the area of value-added modeling and authored Using Student Progress to Evaluate Teachers: A Primer on Value-Added Models (2006). He was a major contributor to the OECD monograph, Measuring Improvements in Learning Outcomes: Best Practices to Assess the Value-added of Schools (2008), and chair of the NRC panel that recently issued the publication, Getting Value out of Value-Added: Report of a Workshop.
  • Irwin Kirsch
    Educational Testing Service
    IRWIN KIRSCH has held the title of Distinguished Presidential Appointee at Educational Testing Service since 1999, where he began working in 1984. He holds a bachelor’s degree in psychology from the University of Maryland, an M.S. in communication disorders from Johns Hopkins University, and a Ph.D. in educational psychology from the University of Delaware. His interests include issues involving the comparability and interpretability of large-scale assessments, and using technology to link learning and assessment. He has had a long-standing involvement in the development and implementation of large-scale comparative surveys including NAEP, and he was one of the original developers of the International Adult Literacy Survey (IALS). He currently directs the Program for the International Assessment of Adult Competencies (PIAAC) for the OECD and chairs the reading expert group for PISA. He has authored a number of policy reports using data from these surveys, including America’s Perfect Storm.
  • Kentaro Yamamoto
    Educational Testing Service
    KENTARO YAMAMOTO is deputy director/principal research scientist for the Center for Global Assessment at Educational Testing Service (ETS). He has been a technical advisor for OECD and the U.S. Department of Education. He has designed or contributed to the design of numerous national and international large-scale surveys of adults, as well as of special populations, such as NAEP, TIMSS, PISA, IALS, ALL, and PIAAC. He has also designed several individual tests in reading and literacy, as well as a mixture model of continuous and discrete measurement models for diagnostic testing and IRT scaling that has been used for all literacy surveys at ETS. He also designed the online testlet adaptive testing for adult literacy skills. He has written numerous reports and research papers, contributed chapters, and given numerous presentations at national and international conferences.