Are Achievement Gap Estimates Biased by Differential Student Test Effort? Putting an Important Policy Metric to the Test


by James Soland — 2018

Background/Context: Achievement gaps motivate a range of practices and policies aimed at closing those gaps. Most studies of achievement gaps assume that differences in observed test scores across subgroups measure differences in content mastery. For that assumption to hold, students in the subgroups being compared need to give similar effort on the test. Prior research shows that low test effort is prevalent and biases observed test scores downward. What research does not demonstrate is whether test effort differs by subgroup and, therefore, biases estimates of achievement gaps.

Purpose: This study examines whether test effort differs by student subgroup, including by race and gender. The sensitivity of achievement gap estimates to any differences in test effort is also considered.

Research Design: A behavioral proxy for test effort called “rapid guessing” was used. Rapid guessing occurs when a student answers a test item so quickly that he or she could not have meaningfully engaged with its content. Rates of rapid guessing were compared across subgroups, and achievement gaps were then estimated both unconditional on and conditional on measures of rapid guessing.
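
As a rough illustration of the design, the Python sketch below flags responses as rapid guesses using a simple response-time cutoff, compares rapid-guessing rates by subgroup, and estimates a subgroup score gap with and without the effort proxy as a covariate. This is a minimal sketch, not the article's procedure: the data are synthetic, and the column names, subgroup labels, and 5-second threshold are illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Synthetic student-level data (illustrative only, not the study's data).
students = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),                         # subgroup indicator
    "response_time_sec": rng.gamma(shape=2.0, scale=10.0, size=n),   # time spent on an item
    "score": rng.normal(loc=200, scale=15, size=n),                  # observed test score
})

# Flag a response as a rapid guess if it falls below a time threshold.
# (Operational uses typically set item-specific thresholds; 5 seconds is an assumption.)
THRESHOLD_SEC = 5.0
students["rapid_guess"] = (students["response_time_sec"] < THRESHOLD_SEC).astype(int)

# Compare rapid-guessing rates across subgroups.
print(students.groupby("group")["rapid_guess"].mean())

# Estimate the score gap unconditional on effort, then conditional on the rapid-guessing flag.
gap_unconditional = smf.ols("score ~ C(group)", data=students).fit()
gap_conditional = smf.ols("score ~ C(group) + rapid_guess", data=students).fit()

print(gap_unconditional.params["C(group)[T.B]"])  # estimated gap ignoring effort
print(gap_conditional.params["C(group)[T.B]"])    # estimated gap adjusting for effort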

Findings: Test effort differs substantially by subgroup, with males rapidly guessing nearly twice as often as females in later grades and Black students rapidly guessing more often than White students. These differences in rapid guessing generally do not change substantive interpretations of achievement gaps, although conclusions about male–female gaps, and about how gaps change as students progress through school, can shift when models account for test effort.

Conclusions: Although the bias introduced into achievement gap estimates by differential test effort is hard to quantify, results provide an important reminder that test scores reflect achievement only to the extent that students are willing and able to demonstrate what they have learned. Understanding why there are subgroup differences in test effort would likely be useful to educators and is worthy of additional study.



Cite This Article as: Soland, J. (2018). Are achievement gap estimates biased by differential student test effort? Putting an important policy metric to the test. Teachers College Record, 120(12), 1–26.
http://www.tcrecord.org ID Number: 22443

About the Author
  • James Soland
    Northwest Evaluation Association
    JAMES SOLAND is a research scientist at NWEA. His work focuses on how assumptions about test scores and data impact policy and practice. Particular areas of interest include measuring social-emotional learning, test motivation, and scaling. Two citations for his related work are: Soland, J. (2013). Predicting high school graduation and college enrollment: Comparing early warning indicator data and teacher intuition. Journal of Education for Students Placed at Risk (JESPAR), 18(3–4), 233–262; and Soland, J. (2017). Is teacher value added a matter of scale? The practical consequences of treating an ordinal scale as interval for estimation of teacher effects. Applied Measurement in Education, 30(1), 52–70.
 