
Beyond the Cut-Point: College Writing Readiness for Linguistically Diverse Students


by Stefani R. Relles & Blanca Rincón - July 01, 2019

The study draws on the essay test scores of linguistically diverse students to describe the characteristics of their writing according to a six-point rubric of national college readiness standards. It subsequently considers the extent to which these students may be misplaced into postsecondary English remediation even though they are capable of writing at the college level.

Because academic preparation is the leading predictor of postsecondary success (Attewell, Heil, & Reisel, 2011), colleges typically vet the skill levels of incoming students and assign those who score below a designated cut-point on a standardized exam to remediation. In the past decade, however, higher education has had to acknowledge that remediation coursework may be, at best, inconsequential and, at worst, actually detrimental to attainment (Valentine, Konstantopoulos, & Goldrick-Rab, 2017). Although findings on English remediation are mixed (Hodara & Xu, 2016), for students on the cut-point margin, placement into remediation can be costly in other ways: students may incur unnecessary tuition expenses, and institutions may waste money on programs for students who do not actually need additional academic support to graduate (Page & Scott-Clayton, 2016).


Recent debate over whether the rewards of remediation outweigh its risks has focused on the assessment procedures by which students are placed into college-level or remedial courses. Large-scale studies have indicated not only that inaccurate tests have been misplacing some college-ready students into remediation courses (Hughes & Scott-Clayton, 2011; Ngo & Melguizo, 2015; Scott-Clayton, Crosta, & Belfield, 2014), but also that being misplaced into English remediation may increase the likelihood of dropping out of college altogether by 8% (Scott-Clayton & Rodriguez, 2014). Such findings are particularly concerning given the overrepresentation of linguistically diverse students in English remediation, as these students are an important subset of the nation’s imminent new mainstream (Enright, 2011). We define linguistically diverse students as students who speak a language other than English at home, a group that included the 61.8 million bilingual students enrolled in U.S. K-12 education in 2010 (U.S. Census Bureau, 2013). Despite their growing numbers, few peer-reviewed articles have explored the relationship between English placement test scores and the educational outcomes of linguistically diverse students, in part because postsecondary “institutions do not systematically collect information on” students’ language backgrounds (Flores & Drake, 2014, p. 4).


The present study draws on the essay test scores of linguistically diverse students deemed the most susceptible to English course misplacement because their prior standardized writing scores clustered around the designated remediation cut-point. To theorize future research, we describe the characteristics of their writing according to a six-point rubric of national college readiness standards. We subsequently consider the extent to which students may be misplaced into postsecondary English remediation even though they are capable of writing at the college level.


WHAT IS COLLEGE WRITING READINESS?


At universities and colleges nationwide, an essay test is commonly used to determine English course placement (Fields & Parsad, 2012). More than 50 years of research, however, indicates that the skills students actually need to write successfully in college are not those measured by this test (Huot, 1990). Studies have historically found that “writing timed essays on unannounced topics with no opportunity for research, reflection, or collaboration” does not accurately represent college coursework tasks (Black, Helton, & Sommers, 1994, p. 249). Moreover, although college writing “cannot be successfully managed in one hurried draft,” essay tests are typically administered under timed conditions (Fritzsche, Rapp Young, & Hickson, 2003, p. 1551). Indeed, an essay test explains only about a tenth of future English course performance (Haswell, 2004).


What do essay tests measure if not college writing? Studies show a student’s score will vary from date to date, suggesting the test evaluates the quality of a particular writing performance on a particular day (Diederich, 1964; Kincaid, 1953). Other studies show essay test scores reflect rater biases, not student skills (Nystrand, Greene, & Wiemelt, 1993). What then should be assessed to determine college writing readiness? Compositionists argue that critical thinking distinguishes college-level from basic writing, and that revising, in particular, has the greatest impact on achievement because it supports higher-order cognitive outcomes (Fitzgerald, 1987). Yet most essay tests do not provide enough time for students to make the argumentative and structural changes associated with college-level revising (Butler & Britt, 2011).


Although these measurement liabilities are not new, their negative implications are potentially more far-reaching as U.S. college students become increasingly diverse (Solano-Flores, 2008). Second language acquisition research, for example, has shown that revising enables English language learners (ELLs) to achieve the same writing quality as monolingual students, whose first drafts more closely resemble standards (Choi, 2013; Stefanou & Revesz, 2015; Williams, 2012). By inference, bilingual students may need to perform more revision than their monolingual peers in order to receive a college-level score on an essay test (Polio, 2012; Polio & Fleck, 1998). If this is true, then an essay test is unlikely to capture a linguistically diverse student’s true ability to produce college writing free of time restrictions.


METHODS


Representing more than 30 Title I high schools in a large urban district, the sample for this study consisted of 284 students shown to be among the most vulnerable to remediation misplacement because their average Scholastic Aptitude Test (SAT) writing section score was 468, just below the College Board’s recommended readiness benchmark of 500 (Wyatt, Kobrin, Proestler, Camara, & Wiley, 2011). Yet their average high school grade point average (GPA) was above 3.78, meaning these students were in the top 10% of their graduating classes. Given that GPA “is an extremely good and consistent predictor of college performance,” the inconsistency between prior essay test scores and average GPA suggests the plausibility of misplacement (Belfield & Crosta, 2012, p. 39). All participants were eligible for free or reduced-price lunch the semester prior to graduating high school. Within the sample, 74% identified as Latinx, 13% as Asian-Pacific Islander, 6% as African American, 1% as Native American, and 4% as multiracial or other. More than 60% identified English as their second language, and the rest reported bilingual status in Spanish, Cantonese, Bengali, Tagalog, or Khmer.


DATA SOURCES


This analysis relies on data collected in the context of grant-funded college access programs associated with a research center at a southwestern university. Recruited from these programs, approximately 100 students per year contributed essays using an instrument adapted from a placement test used by the California State University (CSU) system. Each participant took the test twice, once under the exam’s prescribed test conditions (a single 45-minute sitting) and once under an amended protocol that gave students an additional 45 minutes to revise their essay from the first sitting.


DATA ANALYSIS


Using the same six-point rubric as the official exam, each draft was holistically scored by three independent raters (CSU Student Academic Services, 2014). Each draft’s holistic score was also disaggregated according to the rubric categories. The first four categories (response to topic; understanding and use of the passage; quality and clarity of thought; and organization, development, and support) reflect the cognitive aptitudes described in college readiness frameworks (Conley, 2017). The last two categories (sentence structure and command of language, and grammar, usage, and mechanics) denote basic skills.
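To make this scoring scheme concrete, the sketch below (ours, for illustration only; the field names are hypothetical, since the study’s actual data layout is not published) represents one draft’s three holistic ratings and six rubric category scores, grouped into the cognitive and basic-skills clusters described above.

```python
# Minimal sketch of the scoring structure described above.
# Field names are hypothetical, not the study's actual variables.
import pandas as pd

# The rubric's first four categories index cognitive aptitudes;
# the last two index basic skills.
COGNITIVE = ["response_to_topic", "understanding_of_passage",
             "quality_of_thought", "organization_and_support"]
BASIC = ["sentence_structure", "grammar_and_mechanics"]

draft = pd.Series({
    # Holistic 1-6 scores from three independent raters.
    "rater_1": 3, "rater_2": 4, "rater_3": 3,
    # Disaggregated 1-6 scores, one per rubric category.
    "response_to_topic": 3, "understanding_of_passage": 3,
    "quality_of_thought": 2, "organization_and_support": 3,
    "sentence_structure": 3, "grammar_and_mechanics": 4,
})

holistic = draft[["rater_1", "rater_2", "rater_3"]].mean()
print(f"holistic = {holistic:.2f}")
print(f"cognitive mean = {draft[COGNITIVE].mean():.2f}, "
      f"basic mean = {draft[BASIC].mean():.2f}")
```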


Using descriptive statistics, we examined variations between the scores generated in the first and second sittings. We then analyzed these variations according to their course placement implications, with a score of 4 designated as the cut-point. Using frequency counts, we analyzed the rubric scores to identify which writing skills were associated with score gains. Finally, to examine whether, on average, scores improved between the first and revised drafts of the exam, we used a paired sample t-test.
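A compact sketch of this pipeline follows (ours, for illustration; it assumes the per-student holistic scores from both sittings sit in a pandas DataFrame with hypothetical columns draft1 and draft2):

```python
# Sketch of the analysis steps described above. Assumes a DataFrame with one
# row per student and hypothetical columns "draft1" and "draft2" holding the
# 1-6 holistic scores from the first and second sittings.
import pandas as pd
from scipy import stats

CUT_POINT = 4  # holistic scores of 4 or higher imply college-level placement

def analyze(df: pd.DataFrame) -> None:
    # Descriptive statistics for each sitting.
    print(df[["draft1", "draft2"]].describe())

    # Frequency counts of score changes between drafts.
    print((df["draft2"] - df["draft1"]).value_counts().sort_index())

    # Placement implications: students below the cut-point on the first
    # draft whose revised draft crossed it.
    crossed = (df["draft1"] < CUT_POINT) & (df["draft2"] >= CUT_POINT)
    print(f"crossed the cut-point: {crossed.sum()} students")

    # Paired sample t-test on first versus revised draft scores.
    t, p = stats.ttest_rel(df["draft1"], df["draft2"])
    print(f"t({len(df) - 1}) = {t:.2f}, p = {p:.4f}")
```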


RESULTS


As shown in Table 1, 80% of students improved their scores on the second draft. The majority (54%) improved their score by 1 point, nearly a quarter (23%) by 2 points, and approximately 3% by 3 points (see Table 2). Only 17% of students saw no change in their scores between the first and second essay drafts, and just under 4% of students scored one or more points lower on the second draft.

According to the exam’s rubric (see Table 3), most scores increased in the higher order thinking context of “quality and clarity of thought” (86%). First drafts that, for example, “lack[ed] focus, …fail[ed] to communicate ideas, or ha[d] weak analysis” met or exceeded the standard to “provide… basic analysis” once revised. Writing skills that supported “understanding and use of the passage” (81%) also produced substantial gains (see Table 4). First drafts that, for example, “demonstrate[d] a limited understanding of the passage, or ma[de] poor use of it in developing a weak response” were revised to show “a generally accurate understanding of the passage in developing a reasonable response.” Students also made significant gains in the “organization, development, and support” (80%) of their essays. First drafts that were “poorly organized and developed, presenting generalizations without adequate and appropriate support or presenting details without generalizations” became revised drafts that were “adequately organized and developed, generally supporting ideas with reasons and examples.”

In Table 4, we report that the majority of gains were to the rubric’s higher-order thinking categories. On average, students saw a 1.58-point increase in “quality and clarity of thought,” a 1.27-point increase in “understanding and use of the passage,” and a 1.36-point increase in “organization, development, and support.” Least important to improvement were basic aptitudes such as sentence-level grammar and language skills as well as comprehension of the prompt.


Of the students whose first draft scores implied the need for remediation, 113 made gains that crossed the cut-point threshold, implying a college-level placement outcome. Descriptive statistics show that the average holistic score on the first essay was 2.98, whereas the average score on the revised essay was 4.01, just above the cut-point. The results of the paired sample t-test (see Table 5) show a statistically significant difference in the test scores between drafts one and two, t(283) = -21.07, p < .001. Whereas only 26% of the sample had scored at or above the cut-point on the first draft, 64% scored at or above the cut-point on the revised draft. Although our analyses are based on a small sample, these data imply a need to reexamine English course placement procedures.
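As a back-of-the-envelope consistency check (our arithmetic, not a value the authors report), the paired t-statistic in Table 5 implies the spread of per-student gains:

```python
# Back-of-the-envelope check (ours, not the authors'): for a paired t-test,
# t = mean_diff / (sd_diff / sqrt(n)), so the reported t implies the standard
# deviation of per-student score gains.
import math

n = 284
mean_diff = 4.01 - 2.98          # average gain of 1.03 points
t_abs = 21.07                    # |t| reported in Table 5
sd_diff = mean_diff * math.sqrt(n) / t_abs
print(f"mean gain = {mean_diff:.2f} points, "
      f"implied SD of gains = {sd_diff:.2f} points")  # roughly 0.82
```

This is only a sanity check on the reported summary statistics; the per-student score differences themselves are not published.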



Table 1: Student Essay Scores by Draft (N = 284)

                First Draft         Second Draft
Rubric Score    N        %          N        %
1               4        1.4        1        0.4
2               85       30.0       21       7.0
3               121      43.0       81       28.6
4               61       21.0       78       28.0
5               12       4.0        76       27.0
6               1        0.6        27       9.0
Total           284                 284



Table 2: Change in Scores by Rubric Categories for Students Whose Revised Essays Crossed the Cut-Point Threshold (N = 113)

Rubric Category                              +1         +2         +3         +4        Total
Response to topic                            43 (38%)   32 (28%)    5 (4%)    1 (1%)   81 (72%)
Understanding and use of the passage         48 (42%)   37 (33%)    7 (6%)    0 (0%)   92 (81%)
Quality and clarity of thought               34 (30%)   45 (40%)   18 (16%)   0 (0%)   97 (86%)
Organization, development, and support       41 (36%)   34 (30%)   15 (13%)   0 (0%)   90 (80%)
Sentence structure and command of language   32 (28%)   34 (30%)    6 (5%)    2 (2%)   74 (65%)
Grammar, usage, and mechanics                37 (33%)   24 (21%)    8 (7%)    0 (0%)   69 (61%)

Note: Cells show N (%) of students whose category score increased by the indicated number of points.



Table 3: Essay Scoring Guide

Score labels: 1 = Flawed, 2 = Inadequate, 3 = Limited, 4 = Proficient, 5 = Strong, 6 = Superior.

General
  1 (Flawed): Demonstrates fundamental writing problems.
  2 (Inadequate): Demonstrates serious writing problems.
  3 (Limited): Demonstrates developing writing but has some significant problems.
  4 (Proficient): Demonstrates adequate writing.
  5 (Strong): Demonstrates clear competence in writing.
  6 (Superior): Is superior writing but may have minor flaws.

Response to the topic
  1: Fails to respond meaningfully to the topic.
  2: Indicates confusion about the topic or neglects important aspects of the task.
  3: Misunderstands or does not respond to all parts of the task.
  4: Addresses the topic, but may not respond to all parts of the task thoroughly.
  5: Addresses the topic clearly, but may respond to some aspects of the task more effectively than others.
  6: Addresses the topic clearly and responds effectively to all aspects of the task.

Understanding and use of the passage
  1: Demonstrates little or no understanding of the passage or does not use it to respond to the topic.
  2: Demonstrates very poor understanding of the main points of the passage, does not use the passage appropriately in developing a response, or may not use the passage at all.
  3: Demonstrates a limited understanding of the passage, or makes poor use of it in developing a weak response.
  4: Demonstrates a generally accurate understanding of the passage in developing a reasonable response.
  5: Demonstrates a sound critical understanding of the passage in developing a well-reasoned response.
  6: Demonstrates a thorough critical understanding of the passage in developing an insightful response.

Quality and clarity of thought
  1: Is unfocused, illogical, or incoherent.
  2: Lacks focus and coherence, and often fails to communicate ideas.
  3: Lacks focus, sometimes fails to communicate ideas, or has weak analysis.
  4: Provides basic analysis, but may treat the topic repetitively.
  5: Shows some depth and complexity of thought.
  6: Explores the issues thoughtfully and in depth.

Organization, development, and support
  1: Is disorganized and undeveloped, providing little or no relevant support.
  2: Has very weak organization, development, and support.
  3: Is poorly organized and developed, presenting generalizations without adequate and appropriate support or presenting details without generalizations.
  4: Is adequately organized and developed, generally supporting ideas with reasons and examples.
  5: Is well organized and developed, with ideas supported by appropriate reasons and examples.
  6: Is coherently organized and developed, with ideas supported by apt reasons and well-chosen examples.

Sentence structure and command of language
  1: Lacks basic control of sentence structure and language.
  2: Demonstrates inadequate control of sentence structure and language.
  3: Demonstrates limited control of sentence structure and language.
  4: Demonstrates adequate control of sentence structure and language.
  5: Displays some sentence variety and facility in the use of language.
  6: Has an effective, fluent style marked by sentence variety and a clear command of language.

Grammar, usage, and mechanics
  1: Has serious and persistent errors in grammar, usage, and mechanics that severely interfere with meaning.
  2: Has numerous errors in grammar, usage, and mechanics that frequently interfere with meaning.
  3: Has an accumulation of errors in grammar, usage, and mechanics that sometimes interfere with meaning.
  4: May have some errors, but generally demonstrates control of grammar, usage, and mechanics.
  5: May have a few errors in grammar, usage, and mechanics.
  6: Is generally free from errors in grammar, usage, and mechanics.



Table 4: Average Change in Scores by Rubric Categories for Students Whose Revised Essays Crossed the Cut-Point Threshold (N = 113)

Rubric Category                              Draft 1   Draft 2   Change
Response to topic                            3.10      4.26      1.16
Understanding and use of the passage         2.88      4.15      1.27
Quality and clarity of thought               2.78      4.35      1.58
Organization, development, and support       2.73      4.09      1.36
Sentence structure and command of language   2.45      3.56      1.11
Grammar, usage, and mechanics                2.37      3.32      0.95



Table 5: T-test Results Comparing Students’ First and Revised Draft Essay Scores (N = 284)

English Placement Essay Test    First Draft    Second Draft    t           df
Student Scores (n = 284)        2.98 (0.88)    4.01 (1.125)    -21.07***   283

Note: *p < .05. **p < .01. ***p < .001. Standard deviations appear in parentheses beside means.


DISCUSSION


Our analyses suggest that the first draft test score may not accurately reflect a student’s true ability to write successfully in college. As many as 64% of the sample might have been misplaced into English remediation based on the first draft essay score. Further, when examining students who scored below the cut-point at the first sitting and above the cut-point after revising, we see that the majority of students made gains on the rubric categories that reflect the cognitive aptitudes associated with college-level writing. These results are consistent not only with previous research indicating that revision is primarily informed by higher-order thinking (Butler & Britt, 2011), but also with the argument that essay test scores may represent a narrower range of writing skills than linguistically diverse students actually use to perform college writing.


The notion that different students prioritize writing skills differently is a conceivable alternative to the prevailing deterministic perspective that all students use the same writing skills to achieve the same writing quality regardless of their prior language experience. To bridge the gaps between theory and practice, studies that examine writing skill sets across student groups are warranted (Elliot, Deess, Rudniy, & Joshi, 2012, p. 306). More research on the equity of automated scoring procedures and digital assessments will also be helpful, as extant findings imply that technology may amplify testing inequalities (Ramineni, 2013; Scott, Ritter, Fowler, & Franks, 2019).


Ultimately, the assessment concerns raised in this article indicate systemic problems that require comprehensive policy solutions. Interdisciplinary collaboration, for example, would seem appropriate to discern whether reforms such as multiple measures repeal or reproduce the inequalities of writing education. Mainstream policy scholars will likely need to reconsider their ambivalence toward discipline-specific research (Grubb, 2012). Compositionists and others who employ sociolinguistic perspectives may also need to set aside biases about the limitations of the scientific method to study language in context (Bailey, Jaggars, & Scott-Clayton, 2013).


Although the diversity of college students has increased, the construct of college writing has not evolved. Linguistically diverse students are disproportionately assessed into remediation even though enrollment in such coursework may actually jeopardize their degree prospects. As college campuses become increasingly diverse, we argue for the need to update how college writing has been conceptualized in the past so as to improve the equity of English course placements in the future (Behizadeh, 2014). In the interim, we encourage readers to reexamine basic college readiness assumptions with the goals of improving access and degree completion for all.


References


Attewell, P., Heil, S., & Reisel, L. (2011). Competing explanations of undergraduate noncompletion. American Educational Research Journal, 48(3), 536–559.


Bailey, T., Jaggars, S., & Scott-Clayton, J. E. (2013). Characterizing the effectiveness of developmental education: A response to recent criticism. New York, NY: Community College Research Center.


Behizadeh, N. (2014). Mitigating the dangers of a single story creating large-scale writing assessments aligned with sociocultural theory. Educational Researcher, 43(3), 125–136.


Belfield, C., & Crosta, P. M. (2012). Predicting success in college: The importance of placement tests and high school transcripts (CCRC Working Paper No. 42). New York, NY: Community College Research Center.


Butler, J. A., & Britt, M. A. (2011). Investigating instruction for improving revision of argumentative essays. Written Communication, 28(1), 70–96.


Choi, J. (2013). Does peer feedback affect L2 writers’ L2 learning, composition skills, metacognitive knowledge, and L2 writing anxiety? English Teaching, 68(3), 187–212.


Conley, D. (2017). The new complexity of readiness for college and careers. In K. L. McClarty, K. D. Mattern, & M. N. Gaertner (Eds.), Preparing Students for College and Careers: Theory, Measurement, and Educational Practice (pp. 9–22). New York, NY: Routledge.


CSU Student Academic Services. (2014). Focus on English. Retrieved from California State University website: http://www.calstate.edu/sas/publications/documents/focusonenglish.pdf


Diederich, P. B. (1964). Problems and possibilities of research in the teaching of written composition. In D. H. Russell, M. J. Early, & E. J. Farrell (Eds.), Research design and the teaching of English: Proceedings of the San Francisco Conference 1963 (pp. 52–73). Champaign, IL: National Council of Teachers of English.


Diederich, P. B., French, J. W., & Carlton, S. T. (1961). Factors in judgments of writing ability. Princeton, NJ: Educational Testing Service.


Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012). Placement of students into first-year writing courses. Research in the Teaching of English, 285–313.


Enright, K. A. (2011). Language and literacy for a new mainstream. American Educational Research Journal, 48(1), 80–118.


Fields, R., & Parsad, B. (2012). Tests and cut scores used for student placement in postsecondary education: Fall 2011. National Assessment Governing Board.


Fitzgerald, J. (1987). Research on revision in writing. Review of Educational Research, 57(4), 481.


Flores, S. M., & Drake, T. A. (2014). Does English language learner (ELL) identification predict college remediation designation?: A comparison by race and ethnicity, and ELL waiver status. The Review of Higher Education, 38(1), 1–36.


Fritzsche, B. A., Rapp Young, B., & Hickson, K. C. (2003). Individual differences in academic procrastination tendency and writing success. Personality and Individual Differences, 35(7), 1549–1557.


Grubb, W. N. (2012). Basic skills education in community colleges: Inside and outside of classrooms. New York, NY: Routledge.


Haswell, R. (2004). Post-secondary entry writing placement: A brief synopsis of research. Corpus Christi, TX: Texas A&M University.


Hodara, M., & Xu, D. (2016). Does developmental education improve labor market outcomes? Evidence from two states. American Educational Research Journal, 53(3), 781–813.


Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges. Community College Review, 39(4), 327–351.


Huot, B. (1990). The literature of direct writing assessment: Major concerns and prevailing trends. Review of Educational Research, 60(2), 237.


Kincaid, G. L. (1953). Some factors affecting variations in the quality of student's writing. East Lansing, MI: Michigan State University.


Ngo, F., & Melguizo, T. (2015). How can placement policy improve math remediation outcomes? Evidence from experimentation in community colleges. Educational Evaluation and Policy Analysis, 38(1), 171–196.


Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from?: An intellectual history. Written Communication, 10(3), 267.


Page, L. C., & Scott-Clayton, J. (2016). Improving college access in the United States: Barriers and policy responses. Economics of Education Review, 51, 4–22.


Polio, C. (2012). The relevance of second language acquisition theory to the written error correction debate. Journal of Second Language Writing, 21(4), 375–389.


Polio, C., & Fleck, C. (1998). “If I only had more time:” ESL learners' changes in linguistic accuracy on essay revisions. Journal of Second Language Writing, 7(1), 43–68.


Ramineni, C. (2013). Validating automated essay scoring for online writing placement. Assessing Writing, 18(1), 40–61.


Reardon, S. F. (2013). The widening income achievement gap. Educational Leadership, 70(8), 10–16.


Scott, C. E., Ritter, N. L., Fowler, R. M., & Franks, A. D. (2019). Developing a community of academic writers: Using social media to support academic accountability, motivation, and productivity. Journal of Literacy and Technology, 20(2), 61–96.


Scott-Clayton, J., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment evidence from college remediation. Educational Evaluation and Policy Analysis, 36(3), 371–393.


Scott-Clayton, J., & Rodriguez, O. (2014). Development, discouragement, or diversion? New evidence on the effects of college remediation policy. Education Finance and Policy, 10(1), 4–45.


Solano-Flores, G. (2008). Who is given tests in what language by whom, when, and where? The need for probabilistic views of language in the testing of English language learners. Educational Researcher, 37(4), 189–199.


Stefanou, C., & Revesz, A. (2015). Direct written corrective feedback, learner differences, and the acquisition of second language article use for generic and specific plural reference. The Modern Language Journal, 99(2), 263–282.


Valentine, J. C., Konstantopoulos, S., & Goldrick-Rab, S. (2017). What happens to students placed into developmental education? A meta-analysis of regression discontinuity studies. Review of Educational Research, 87(4), 806–833.


Williams, J. (2012). The potential role(s) of writing in second language development. Journal of Second Language Writing, 21(4), 321–331.


Wyatt, J. N., Kobrin, J. L., Proestler, N., Camara, W. J., & Wiley, A. (2011). SAT benchmarks: Development of a college readiness benchmark and its relationship to secondary and postsecondary school performance. Princeton, NJ: The College Board.




The authors thank Chyllis Scott for input on prior drafts of this manuscript.


About the Authors
  • Stefani Relles, University of Nevada, Las Vegas
    STEFANI RELLES is an Associate Professor in the Department of Educational Psychology and Higher Education at UNLV. Her work focuses broadly on college readiness, access, and equity for students from historically marginalized groups. She is the co-author of “Brokering College Opportunity for First-Generation Youth: The Role of the Urban High School” in American Educational Research Journal.
  • Blanca Rincón, University of Nevada, Las Vegas
    BLANCA RINCÓN is an Assistant Professor in the Department of Educational Psychology and Higher Education at UNLV. Her research agenda is concerned with equity issues in higher education, with a specific focus on access and success for underrepresented and underserved students (e.g., women, low-income, first-generation, and students of color) in science, technology, engineering, and mathematics (STEM). She is the author of “Does Latinx Representation Matter for Latinx Student Retention in STEM?” in the Journal of Hispanic Higher Education.