Ready or Not: How Multiple Measures of College Readiness Can Help Reduce Unnecessary Remediation


by Judith Scott-Clayton - November 17, 2014

While the prevalence of remediation has generated widespread concern about the college readiness of our nation’s high school graduates, comparatively little attention has been paid to how “readiness” is actually determined. At most community colleges and at many nonselective four-year colleges, readiness is determined by scores on short standardized math and English placement tests. This commentary describes research finding that assignment to remedial or college-level courses based on standardized placement exams results in large numbers of placement errors, and that incorporating high school transcript information would lead to fewer assignments to remediation while maintaining or increasing success rates in college-level math and English.

As college enrollment rates have risen over time, so too have concerns regarding whether students are sufficiently prepared to succeed: of all degree-seeking college entrants, only about half will complete any type of degree or certificate within six years.1 Remedial coursework (which provides instruction in reading, writing, or math, but does not bear college credit) is perhaps the most widespread intervention aimed at addressing incoming students’ skill deficiencies. Half of all undergraduates will take one or more remedial courses while enrolled; among those who take any, the average is 2.6 remedial courses.2 With over three million new students entering college each year, this implies a national cost of nearly $7 billion annually.3 This figure accounts only for the direct cost of remediation: it does not include the opportunity cost of time for students enrolled in these courses, nor does it account for any impact, positive or negative, that remediation may have on students’ future outcomes.
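As a rough back-of-envelope consistent with these figures (the per-course cost below is implied by the numbers above, not a separately reported estimate):

$$
3{,}000{,}000 \times 0.5 \times 2.6 \approx 3.9 \text{ million remedial course enrollments per cohort};
\qquad
\frac{\$7\ \text{billion}}{3.9\ \text{million}} \approx \$1{,}800 \text{ per course}.
$$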


While the high prevalence and costs of remediation have generated plenty of handwringing about the college readiness of our nation’s high school graduates, comparatively little attention has been paid to how “readiness” is actually determined. At most community colleges and at many nonselective four-year colleges, readiness is determined by scores on relatively short, standardized math and English placement tests. Often, placement is determined solely on the basis of whether a score is above or below a certain cutoff. In many cases, placement decisions are final and retesting is not allowed.4


The lack of attention to the placement process is surprising given the high stakes involved. The motivation for remediation is that underprepared students who proceed directly to college-level courses may fail their classes and negatively affect the learning environment for their peers. On the other hand, truly prepared students who are incorrectly assigned to remediation may garner little or no educational benefit, yet incur additional tuition and time costs and may be discouraged from pursuing, or delayed in completing, their degrees. While some mis-assignment is inevitable, evidence is beginning to accumulate that remedial placement testing may be hurting academically prepared students more than it is helping those who are entering college without adequate skills.5, 6, 7


This led me and other researchers at the Community College Research Center (CCRC) to take a closer look at the validity of the placement tests themselves. To do this, we analyzed data from tens of thousands of community college entrants in a large urban system and a statewide system. Incorporating rich information on students’ high school performance, placement test scores, demographics, and college grades, we developed statistical models to predict how students would have fared had they been placed directly into college-level courses.
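To give a concrete (if highly simplified) sense of the approach, the sketch below illustrates the prediction step in Python. The file name, column names, and model choice are all illustrative assumptions, not the specifications used in the actual research, which employed richer models and addressed the selection problem inherent in observing grades only for students who took the college-level course.

```python
# Sketch: predict counterfactual college-level grades for all entrants.
# Data, column names, and model are hypothetical illustrations.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("entrants.csv")  # hypothetical file: one row per student
features = ["placement_score", "hs_gpa"]  # test score + transcript measure

# Fit only on students who actually took the college-level course,
# since only they have an observed college-level grade (0.0-4.0 scale).
observed = df[df["took_college_level"] == 1]
model = LinearRegression().fit(observed[features], observed["college_grade"])

# Predict how *every* entrant would likely have fared in the
# college-level course, including those assigned to remediation.
df["pred_grade"] = model.predict(df[features])
```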


The findings were striking: math placement test scores explained only 13% of the variation in college-level math grades, and English placement test scores explained less than 2% of the variation in college-level English grades.8 Our predictive model suggests that under-placements (students assigned to remediation even though they could have done well in college-level work) are far more common than over-placements (students placed into college-level courses even though they are likely to fail). We estimate that a quarter to a third of students assigned to remedial classes based on placement exam scores could have passed college-level classes with a grade of B or better.
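Extending the sketch above, the under-placement estimate can be illustrated by asking what share of remediated students had a predicted college-level grade of B (3.0 on a 4.0 scale) or better. Again, the variable names are assumptions, and this is a simplification of the paper’s analysis, not a reproduction of it.

```python
# Share of students assigned to remediation whose predicted
# college-level grade is B (3.0) or better: "under-placements."
remediated = df[df["assigned_remedial"] == 1]  # hypothetical flag
under_rate = (remediated["pred_grade"] >= 3.0).mean()
print(f"Estimated under-placement rate: {under_rate:.0%}")

# Variance in observed grades explained by the placement score alone
# (the R^2 figures quoted above), among students with observed grades.
score_only = LinearRegression().fit(
    observed[["placement_score"]], observed["college_grade"]
)
r2 = score_only.score(observed[["placement_score"]], observed["college_grade"])
print(f"R^2 of placement score alone: {r2:.2f}")
```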


We also explored whether information from students’ high school transcripts could be used to improve the accuracy of placement decisions, and found clear evidence that it could: using high school achievement resulted in far fewer misplacements (both into college-level courses and into remediation), and higher success rates in college-level courses. The improvements were often substantial: we estimated that incorporating high school transcript information could reduce incorrect placements by up to a third, and could generate a 10 percentage point increase in the likelihood that students placed into college-level courses in the relevant subject would complete those courses with a grade of C or higher by the end of their first semester. In contrast, our simulations indicated that incorporating test scores when high school information is already available provides little to no additional benefit.
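This kind of comparison can likewise be simulated on the predicted grades from the sketch above: apply alternative placement rules to the same students and compare the resulting remediation rate and predicted success rate among those placed into college-level courses. The cutoffs below are arbitrary illustrations, not the rules used by any system.

```python
# Compare simulated placement rules: a test-score cutoff vs. an
# HS-GPA cutoff, scored on (a) share remediated and (b) predicted
# success (grade of C / 2.0 or better) among students placed college-level.
def evaluate(rule_mask, label):
    placed = df[rule_mask]                 # students placed college-level
    remediation_rate = 1 - rule_mask.mean()
    success_rate = (placed["pred_grade"] >= 2.0).mean()
    print(f"{label}: remediated {remediation_rate:.0%}, "
          f"predicted success among placed {success_rate:.0%}")

evaluate(df["placement_score"] >= 75, "Test-score cutoff (75)")  # illustrative
evaluate(df["hs_gpa"] >= 2.7, "HS GPA cutoff (2.7)")             # illustrative
```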


Another key finding from our analysis is that the choice of what information to use in the placement process can substantially influence the racial and gender composition of both remedial and college-level courses. For example, using high school information instead of test scores would substantially increase the percentage of women able to immediately access college-level math, but would decrease the percentage of African-Americans admitted directly to college-level English. Given that the patterns of estimated “readiness” by race and gender depend upon the type of measure used, it seems especially important to avoid relying on a single measure to determine placement.9


No placement system can avoid making some mistakes, but our research strongly suggests that institutions and policymakers have placed more weight on some types of mistakes than others. Specifically, we have traditionally given more attention to the problem of underprepared students getting into college courses than to the problem of creating unnecessary obstacles for those who truly are prepared. This may be because the costs of over-placement fall not just on the over-placed student (who may be discouraged and/or risk losing financial aid eligibility), but also on faculty members who dislike having to fail students, as well as on other students in the college-level course who may experience negative peer effects. The costs of under-placement, in contrast, fall primarily on the institution and the under-placed student. Moreover, over-placements may simply be easier to observe: it is straightforward to document how many students placed into a college-level course fail there, while under-placements can only be estimated statistically.


College leaders and policymakers in several states have recently begun to question the assumptions underlying remedial placement testing. For example, at Long Beach City College in California, a pilot program that incorporates high school grades into the placement process saw immediate impacts: after the launch of the program, the percentage of students who placed into and passed college English in their first year quadrupled; for college math, the percentage tripled.10 Florida now permits students who successfully pass the state’s high school exit exam to bypass placement exams and enroll directly in college-level courses. In Virginia, the community college system (VCCS) has developed new placement tests better aligned to students’ programs of study. Preliminary research indicates that this simple change has dramatically increased the rates at which students place into and complete college math in their first year.11


The signs are promising that more nuanced systems of assessment and placement—based on multiple measures of academic achievement—may soon be adopted at colleges across the country. By taking information on high school performance into account, institutions can have the best of both worlds: they can remediate substantially fewer students without lowering success rates in college-level courses.


Notes


1. Author’s calculations based on Beginning Postsecondary Students (BPS) 2009 data (National Center for Education Statistics [NCES], 2012). Bachelor’s degree attainment rates are 59% for those entering with a 4-year-degree goal, and bachelor’s/associate’s degree attainment rates are 30% for those entering with a 2-year-degree goal.

2. Estimate based on BPS:2009 transcript data for 2003–2004 entrants (NCES, 2012). Estimates based on student self-reports are substantially lower, potentially because students do not realize the courses are remedial.

3. Scott-Clayton, J., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment: Evidence from college remediation. Educational Evaluation and Policy Analysis, 36(3), 371–393.

4. Hodara, M., Jaggars, S. S., & Karp, M. M. (2012). Improving developmental education assessment and placement: Lessons from community colleges across the country (CCRC Working Paper No. 51). New York, NY: Columbia University, Teachers College, Community College Research Center.

5. Jaggars, S. S., & Stacey, G. W. (2014). What we know about developmental education outcomes. New York, NY: Columbia University, Teachers College, Community College Research Center.

6. Scott-Clayton, J., & Rodriguez, O. (2012). Development, discouragement, or diversion? New evidence on the effects of college remediation (NBER Working Paper No. 18328). Cambridge, MA: National Bureau of Economic Research.

7. Martorell, P., & McFarlin, I. J. (2011). Help or hindrance? The effects of college remediation on academic and labor market outcomes. Review of Economics and Statistics, 93, 436–454.

8. Scott-Clayton, J. (2012). Do high-stakes placement exams predict college success? (CCRC Working Paper No. 41). New York, NY: Columbia University, Teachers College, Community College Research Center.

9. Scott-Clayton, J., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment: Evidence from college remediation. Educational Evaluation and Policy Analysis, 36(3), 371–393.

10. Hetts, J. J., & Fuenmayor, A. (2013). Promising pathways to success: Using evidence to dramatically increase student achievement. Long Beach, CA: Long Beach City College. Retrieved from http://www.lbcc.edu/IE/Research/Researchdocs.cfm

11. Rodríguez, O. (2014). Increasing access to college-level math: Early outcomes using the Virginia Placement Test (CCRC Brief No. 58). New York, NY: Columbia University, Teachers College, Community College Research Center.





Cite This Article as: Teachers College Record, Date Published: November 17, 2014, https://www.tcrecord.org, ID Number: 17757.

About the Author

Judith Scott-Clayton, Teachers College, Columbia University

Judith Scott-Clayton is an Assistant Professor of Economics and Education at Teachers College, Columbia University, where she teaches labor economics and quantitative methods for causal inference. She is also a Senior Research Associate at the Community College Research Center. Her primary interests are postsecondary education, inequality, and policy design and evaluation, with a particular focus on financial aid and other policies aimed at improving college attainment among disadvantaged populations.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS