The Challenge of Writing Remediation: Can Composition Research Inform Higher Education Policy?
by Stefani R. Relles & William G. Tierney — 2013
Background/Context: This article presents a review of research relevant to postsecondary writing remediation. The purpose of the review is to assess empirical support for policy aimed at improving the degree completion rates of students who arrive at tertiary settings underprepared to write.
Purpose/Objective/Research Question/Focus of Study: Our purpose is to bridge composition studies for a higher education policy audience. Our agenda is to offer a balanced portrait of disciplinary literature framed for organizational decision-making that can improve graduation rates. The research question that guides inquiry is: How can writing and composition research inform remediation policy to increase college-going success?
Research Design: This is a critical synthesis of prior conceptual and empirical work. We first provide a historical perspective to explicate important disciplinary issues that may be unfamiliar to a policy audience. We then present a critical synthesis of the disciplinary-based literature from two viewpoints. The first involves rhetorical frames that are related, but have significant conceptual differences. The second considers student achievement, attainment, and developmental outcomes.
Findings/Results: The review comprehensively demonstrates the seriousness and scope of policy problems perpetuated by two obstacles. The first is a lack of clarity on what constitutes college writing. The second is a dearth of assessment tools with which to measure writing aptitude. The incongruities between standards and assessment of college writing have resulted in a body of research that does not provide the kind of evidentiary support that carries weight in policy discussions. Both issues require attention if writing remediation policy is to be improved.
Conclusions/Recommendations: The review suggests that writing remediation policy and programs will remain accountable to the rhetorical and paradigmatic viewpoints that dominate writing assessment and that relegate underprepared students to dubious degree pathways. The negative implications of the review for college writing preparation are discussed in the context of the K–12 Common Core State Standards. Recommendations are tendered for an interdisciplinary agenda to increase the educational opportunities of underprepared writers and decrease the social inequities associated with remediation policies and programs.
With U.S. college enrollment at a benchmark high (Bureau of Labor Statistics, 2010), the vast numbers of incoming students underprepared for college-level work imply that postsecondary graduation rates may not expand accordingly. Although the proportion of students pursuing postsecondary education has increased, it is estimated that at least a third of admitted students do not possess the skills necessary to succeed in college classes (Adelman, 2009; Attewell & Domina, 2008; Levin & Calcagno, 2008). Meanwhile, a number of national initiatives represent expanded investment in higher education as a means to ensure long-term economic prosperity (Attewell, Heil, & Reisel, 2011). In particular, the Health Care and Education Reconciliation Act (2010) supports the pledge of having the highest proportion of college graduates in the world by 2020 (Darling-Hammond, 2009). Despite a climate of amplified higher education policy support, research indicates that the United States will fall at least 3 million postsecondary degrees short of its workforce demands by 2018 (Carnevale, Smith, & Strohl, 2010). Existing barriers to graduating students will have to be reduced if degree completion rates are to improve. Increasing college access and efficiency is critical to preventing any further loss of economic opportunity for millions of American workers.
One way to expand graduation rates is to address the college preparation gap so that the third of students who arrive at tertiary settings underprepared in basic academic skills are able to progress through the system as efficiently as possible. The conventional policy solution for ensuring these students persist has been their assignment to preparatory coursework, conventionally referred to as developmental education or remediation. The difference in these terms underscores the difference in the way higher education stakeholders have conceptualized policy aimed at academically underprepared students. The term developmental is generally preferred by practitioners because remediation (and its root term remedy) is associated with a disease-oriented perspective of student abilities, whereas the former implies a growth-oriented standpoint. From a policy angle, however, remediation is possibly more apt because of its punitive consequence: The noncredit-bearing nature of remedial coursework necessarily postpones degree progress (Levin & Calcagno, 2008). Additionally, the overrepresentation of disadvantaged students in remediation suggests that the policy may be discriminatory (Bahr, 2010). Despite these concerns, remediation represents a college pathway for underprepared students that in theory does not compromise academic standards.
Given the important access role that remediation plays for underprepared students, optimizing policies and programs to maximize learning outcomes and minimize negative effects is certainly in the best interests of stakeholders nationwide. Considering that only 17% of students enrolled in English remediation complete a bachelor's degree, policy improvement in this area of remediation will benefit institutions, taxpayers, and the students themselves (Adelman, 2006). Accordingly, our purpose here is to conceptualize an approach to remediation that maximizes college access and degree completion for students who do not initially demonstrate college-level writing skills.
Although the number of students being placed into postsecondary remediation has increased, a critical review and synthesis of the research literature that focuses exclusively on remedial writing in a policy context is absent. Remediation policy review work thus far identifies programmatic features that may improve access and persistence for remedial students (Mazzeo, 2002; McCusker, 1999). Models such as mainstreaming, stretch, and accelerated learning are associated with better student outcomes than traditional program designs (Jenkins, Speroni, Belfield, Jaggars, & Edgecombe, 2010). This research, however, does not assume basic learning or instructional differentiations for remedial subjects (Weiner, 2002). We propose that the types of remediation programs identified as helpful in recent work may be further advantaged by curricular and pedagogical strategies imported from research on writing. Unfortunately, reviews of the developmental writing literature rarely discuss policy in a way that is accessible to stakeholders without a disciplinary knowledge base. Our purpose is to bridge composition studies for a higher education policy audience. For this reason, we focus broadly on the theoretical perspectives and educational outcomes of existing studies that may help the degree prospects of underprepared writers.
This article is divided into four sections. The first section describes our approach. The second section traces the intellectual development of postsecondary remediation and composition studies, respectively. This historical perspective provides context for the review and establishes disciplinary issues that may be unfamiliar to a policy audience. The third section is a critical review of the literature from two perspectives. The first involves rhetorical frames that are related but have significant conceptual differences. These frames will be expounded prior to their appearance in the review. The second perspective considers student achievement, attainment, and developmental outcomes. Findings are discussed in a subsequent section. We conclude with a discussion of the research implications for remediation policy support.
Our goal is to examine critically what is currently known about writing remediation for the purpose of improving policy. The research question that guides inquiry is: How can writing and composition research inform remediation policy to increase college-going success? Our agenda is to promote economic prosperity through expansion of the nation's educated workforce. Note that our purpose is focused on matters of federal, state, and institutional governance. We intend neither to disregard nor to become encumbered by the disciplinary nuances of writing or composition scholarship. Rather, we aim to précis findings for higher education researchers, administrative leaders, and policy makers in the context of improving graduation rates. We deliberately refrain from argumentation for or against particular pedagogies or theoretical orientations. Instead, we try to offer a balanced portrait of the extant literature framed for organizational decision-making.
In preview, we find nowhere near enough evidence on what works and little consensus on what needs to be done to coordinate improved policy support. In redress, we identify two obstacles that require immediate attention if headway is to be made in a policy context. The first is a lack of clarity on what constitutes college writing. The second is a dearth of reliable assessment tools with which to measure college writing aptitude. The review demonstrates the seriousness and scope of research and policy problems perpetuated by these obstacles. Overcoming them will require interdisciplinary cooperation. As such, this inquiry strives to achieve reciprocal appreciation of the scholarship conventions that guide policy research and composition studies, respectively.
Our position is one of practicality; we synthesize the literature to discern viable, research-based opportunities to improve existing writing policies and programs. We are hopeful that such an undertaking represents an incremental approach to broaden awareness of a systemic problem and support an interdisciplinary agenda for future research. The extrication of writing from generic remediation by way of this synthesis will be useful for those who wish to maximize the educational opportunities of underprepared writers. By this review, we hope to establish common ground for ongoing dialogue with an eye toward the implementation of successful policies to help overcome a vexing and growing problem.
Given the acknowledged dearth of remediation literature (Levin & Calcagno, 2008), our approach is designed to collect an appreciable sample of content data. To this end, we hypothesized that composition studies offers a body of published work from which to draw an immediate, deeper understanding of remedial writers and their education needs. The term composition studies is an interdisciplinary referent for research that spans a variety of humanities-based and social scientific fields, including rhetoric, literature, communications, linguistics, psychology, and education. Our search terms identified articles published in a variety of disciplinary sources. Initially, our inclusionary criteria were either publication in a peer-reviewed journal or association with a major research institution. Otherwise, all articles related to the topic were considered regardless of disciplinary orientation.
In the context of policy aimed to increase the number of college degrees awarded nationwide, there is merit to evaluating remediation by discipline. A review of math and reading remediation, however, is beyond the scope of this article. In addition, there is more related work in the field of writing research than space allows. The review therefore only includes studies that, either explicitly or implicitly, involve writing in the context of postsecondary remediation. In addition, we incorporate studies conducted in secondary education settings because most remedial students are recent high school graduates (Jenkins & Boswell, 2002; Kirst, 2008). Last, this article's discussion of effectiveness focuses on student outcomes rather than financial measures. Although relevant to the task of optimizing remediation, cost is an enormous issue best addressed in a subsequent analysis.
To identify literature for the review, we amalgamated studies by reasonable descriptor combinations. As discussed, remediation and developmental education imply pedagogical distinctions (Breneman, Costrell, Haarlow, Ponitz, & Steinberg, 1998), but these tags are often used interchangeably by education researchers. Across disciplines, however, the variance in terms is more particular. In composition studies, for example, basic writing and remedial English are utterly distinct from freshman composition. The former two connote below-college-level coursework, whereas the latter signifies entry-level writing aptitude. English proficiency is another relevant descriptor used primarily in the context of placement exams and remediation policy language. Specifically, English proficiency is the institutional status conferred on students who score above a certain cut point on any one of a number of standardized writing assessments used to determine course placement.1 Our method ensured that any one of these identifiers qualified an article for appraisal.
The search was conducted using the CSA Illumina research portal of the ERIC database. We started with a two-tiered anywhere search. The first row located articles that contained one or more words: remediation or developmental or remedial. The second row employed the prefix and for terms: postsecondary or college. This search identified 1,485 peer-reviewed articles. From this general pool, we identified a working group of primary documents by adding a third anywhere row prefixed by and for the terms: writing or English proficiency. The yield of potential primary sources numbered 252.
We also performed a separate three-tiered anywhere search to accumulate studies conducted in high school settings. The first row specified the phrase: college preparation. The second row used the prefix and to limit the search by the terms: secondary or high school. A third row applied a final and for the term: writing. This search accrued an additional 21 documents for consideration.
The 273 articles were inventoried according to the following fields: remedial discipline (generic, math, reading, writing); level of education (secondary, postsecondary); targeted student population (college level, underprepared); methodology (quantitative, qualitative); date of publication; and setting (domestic, international). At this point, we restricted the range of studies to those published since 1990 and conducted in U.S. domestic locales. Studies that focused on reading or math remediation were bracketed from further consideration. Studies that were not relevant to postsecondary remediation were excised.
The remaining documents were organized into three categories of literature: (1) primary empirical studies in which the research focus applies to writing remediation or directly related issues, (2) secondary empirical studies in which remediation was treated generically across disciplines, and (3) conceptual papers (i.e., critiques and reviews). The latter two categories, conceptual papers and secondary empirical studies, inform a historical discussion of postsecondary remediation and composition studies. The sources from which conceptual and historical information was drawn number 19. The review of primary studies examines 55 empirical sources.
The final collection of primary documents was analyzed using a two-step iterative coding process. The first step reflected a grounded theory coding approach (Glaser & Strauss, 1967). The substantive memoing and in vivo coding strategies outlined by Charmaz (2006) enabled us to apply inductive techniques from which a series of categories organically emerged. These categories ultimately reflected disciplinary origins, research foci, methods, analytic tools, and outcomes.
A second phase of analyses involved predetermined codes derived from two theoretical frameworks: rhetorical orientations and educational outcomes. Rhetorical groupings connote important disciplinary distinctions that serve the purpose of bridging composition studies with policy research. To evaluate design consistency, we ascribed rhetorical codes to research methods and analytic tools separately. Educational outcomes are identified at the student level. The conceptual details of both frameworks will be explained before the data are presented in the following section (see Table 1 for a complete inventory of the primary empirical studies listed with final codes).
Table 1. Matrix of Primary Empirical Studies Included in Review
The following section traces the intellectual histories of two distinct strands of research that are related, but rarely integrated: (a) postsecondary remediation and (b) the interdisciplinary domain of scholarship on writing known as composition studies. Both accounts are grounded by a synthesis of theoretical literature in each respective field. We regard this undertaking as important to establishing context for the review. We understand, however, that the approach necessarily conflates certain distinctions of the fields in service of broad understanding.
Although currently characterized as a national crisis, remediation in U.S. postsecondary settings is not a new phenomenon. In fact, educating underprepared students is as deep-seated as the institution of higher education itself. Policies to address underpreparation coincide with the inception and expansion of American higher education. Merisotis and Phipps (2000) reported that as early as the 17th century, Harvard College provided prospective clergymen with tutors in Greek and Latin. Breneman and colleagues (1998) credited the debut of formal remedial education to the University of Wisconsin in 1849. By the turn of the 20th century, according to Ignash (2002), 40% of first-year college students enrolled in remediation.
This reported remediation rate held relatively steady even as enrollment increased in the latter half of the 20th century. The federal government enacted a series of laws that increased access: the Servicemen's Readjustment Act (G.I. Bill) of 1944, the Civil Rights Act of 1964, and the Higher Education Act of 1965, respectively. The result of these open admissions policies and increased funding was a doubling of college enrollment rates between 1970 and 2002 (Snyder, 2010).
As the number of students attending college increased, the National Center for Education Statistics (NCES) published its first comprehensive survey of remediation in 1989. The report revealed that 30% of first-time, first-year students enrolled in at least one remedial course (Mansfield, Farris, & Black, 1991). Follow-up studies in 1995 and 2000 confirmed analogous rates of 29% and 28%, respectively (Snyder & Hoffman, 2001). The most recent government report (Aud et al., 2011) suggested that 36% of first-year undergraduates have participated in remediation. Because of data unreliability, however, many researchers have suggested that the actual number of students enrolled in remediation is much higher (Adelman, 2009; Kirst, 2007; Merisotis & Phipps, 2000). In 2004, for example, Adelman (2004) estimated that not less than 41% of college students enroll in remedial coursework at some point during their postsecondary tenure.
In the last decade, policy questions concerning educational access and equity for underprepared students prompted the research community to take stock of efficacy findings. At this point, previous remediation studies were found to be methodologically suspect. Arguably, the most consistent statement about remediation by academics has been the need for robust, comprehensive, multi-institutional research (Bailey & Weininger, 2002; Koski & Levin, 1998; Levin & Calcagno, 2008; Merisotis & Phipps, 2000).
The call for improved scholarship has yielded a number of studies since 2005 that address the weaknesses found in the previous literature. The findings, however, are inconsistent. According to some studies, underprepared students who successfully complete remediation benefit in terms of degree completion when compared with underprepared students who do not participate in remediation or who participate but do not successfully complete remedial coursework (Attewell, Domina, Lavin, & Levey, 2006; Bahr, 2010; Bettinger & Long, 2009). Other studies have found that remediation has either no discernible impact on the probability of earning a college degree (Calcagno & Long, 2008; Martorell & McFarlin, 2011) or negatively influences attainment (Bailey, Jeong, & Cho, 2010).
The lack of consensus, although troubling, underscores the importance of optimizing existing remediation programs because they remain the principal option for most underprepared students entering or already within the system. Given that the general effectiveness of remediation has yet to be determined, it is prudent to begin to apply the lenses of academic disciplinarity to remedial programs for the benefit of underprepared students. Although interest in remediation has grown in the education literature, studies focused on remedial writing have rarely appeared in mainstream education journals. By mainstream, we mean journals that publish articles that are of broad-spectrum relevance to the education research community and that support a variety of areas of education research and related disciplines. This definition of mainstream therefore excludes disciplinary and special interest group publications.
With its disciplinary roots in the field of rhetoric, composition studies is broadly concerned with reading and writing instruction in all education contexts, but particularly with the practice and uses of writing connected to higher education (CCCC, 1987). In the most general of terms, then, composition studies is about the teaching of writing. Rhetoric is integral to the field. For researchers who may be unfamiliar with the nature of composition scholarship and its history with regard to remediation studies, the following synopsis is an introduction to writing research fundamentals.
A consensus of historical accounts acknowledges that the field grew out of resistance to the constraints imposed on writing research within departments of literature (Lauer, 1984; Nystrand, 2006; Silva & Leki, 2004). That is, composition studies represents the disengagement of research from a purely literary context in service of a broader agenda to support writing education and the preparation of students to function in a democratic society. Coinciding with the advent of open admissions policies, the field emerged in the late 1960s and 1970s as writing researchers sought to understand the problems associated with teaching composition to undergraduates. The dearth of research on writing education prompted scholars to explore the nature of the writing process (Lauer, 1984; Nystrand, Greene, & Wiemelt, 1993). The field's initial forays into classical rhetoric, philosophy, and linguistics led to explorations in psychology, including creativity, problem-solving, and cognitive development (Lauer, 1984).
The predominance of positivist thinking across academe heavily influenced composition scholarship during the field's nascent years. At the outset, a review by Braddock, Lloyd-Jones, and Schoer (1963) called for research on writing to be conducted using standards associated with the physical sciences. Twenty years later, Hillocks's (1984) meta-analysis of experimental research reflected this emphasis on objectivity (Nystrand, 2006).
Nonetheless, by the late 1970s, other methods of composition scholarship emerged as the works of practitioner-researchers began to receive scholarly attention. One particularly influential tome by Shaughnessy (1977) focused on remedial writers and offered the developmental perspective that errors can be indicators of academic growth. By the late 1980s and 1990s, objectivist interests in writing research were largely displaced when it became clear that writing improvement was not an innate progression of language instruction, but rather the cumulative effect of various cognitive, sociocognitive, cultural, and learning influences (Coker & Lewis, 2008; Dyson & Freedman, 2003; Graham & Harris, 1989, 2009; Juzwik et al., 2006; Sperling & Freedman, 2001).
During this time, Flower and Hayes (1981) pioneered the widely influential cognitive process theory of writing. Their work is notable because it (a) explicated writing through three essential subprocesses (planning, drafting, and revision), (b) distinguished these subprocesses as recursive and cooperatively embedded in the activity of composing, and (c) suggested that revision is a determining factor in writing quality (Butler & Britt, 2011; Edlund & Brynelson, 2008; Fitzgerald, 1987; Hildick, 1965; McNamara et al., 2010; Raimes, 2006). Writing process theory fundamentally changed the way writing is taught in primary and secondary schools. Because writing is viewed as an incremental process, students are not expected to spontaneously produce quality work. Rather, students are taught to plan, draft, and revise toward progress on a scale of academic competence where college writing is the final stratum (A. Goldstein & Carr, 1996). Implicit in writing process theory and pedagogy is the assumption that writing quality results from multiple drafts.
No less influential on the process-oriented teaching and research of writing have been constructivist thinking and sociocultural thinking (Sperling & Freedman, 2001). The viewpoints that writing is a social practice (Vygotsky, 1978) or situated by cultural considerations (Bakhtin, Holquist, & Emerson, 1986) are compatible with process theory but imply different educational approaches. Sperling and Freedman observed that social and cultural perspectives on writing are critical "to conceive of students' diverse experiences as writers and, in turn, to develop ways that foster those experiences in various social arrangements within classrooms and schools" (p. 8). Constructivist perspectives have been linked to social pedagogies that involve social relationships such as peer review. In contrast, a sociocultural viewpoint is associated with multimodal approaches that integrate new media literacies with traditional text-based composition. These examples showcase the explicit link between paradigm and policy where writing education is concerned.
Less than a decade after the debut of writing process theory, a review by Durst (1990) captured the field's altered priorities by analyzing research trends and foci in terms of problems studied, age groups targeted, and methods used. Durst pointed out the methodological paradox that extant writing assessment techniques were incompatible with the field's growing body of evidence that supported writing as a process regardless of situated issues (Durst, 2006a). Notable was Durst's finding that only 8% of the 969 studies he reviewed had, as their primary focus, some aspect of the evaluation of writing quality. This finding foreshadowed that assessment would become a significant hindrance to the implementation of effective writing policies. Durst's prediction was publicly conceded in a 1999 College Board report (Breland, Bridgeman, & Fowles, 1999) that acknowledged that the writing process elements of planning and revision were rarely emphasized in the exams used for admission to higher education. In short, writing theory and research had outgrown the field's conventional assessment model.
There are three principal models of writing assessment in higher education, each of which, not incidentally, was developed in response to policy needs. From the 1950s to the early 1970s, university administrators and faculty petitioned for reliable processes to place students into and out of required composition sequences (Yancey, 1999, 2004). The result was widespread institutional use of indirect writing tests, the first model, which favors multiple-choice items (E. White, 2001). In the 1970s, composition faculty rationalized that students' writing abilities should be directly measured. Subsequently, universities substituted indirect tests with the scoring of actual writing samples (Breland et al., 1999; Michael & Shaffer, 1979). To ensure reliability, the samples were written under controlled conditions in response to a uniform prompt that could be holistically scored against a benchmark, usually consisting of a rubric accompanied by exemplar essays. This second type, direct writing assessment, is commonly referred to as the single sample model. In the mid-1980s, composition faculty championed portfolio assessments that could better serve programmatic evaluation and offer students formative critique (Elbow & Belanoff, 1986). Despite theoretical promise for this third model, portfolios remain largely excluded from college placement decisions. The single sample model prevails. Relevant to the present discussion, this model does not adhere to a cognitive process theory of writing because the sample is generated in a timed situation that limits test takers' opportunity to revise.
The disconnect between the writing skills measured by standardized tests and the writing skills necessary to produce college-level work has intensified in the last two decades. Lunsford and Lunsford (2008) documented significant changes in the expectations of college writing. They found that the kind of personal narrative and literary analysis that characterized college writing in 1986 has been replaced by argumentative and research-based writing. Lunsford and Lunsford also determined that today's college students are expected to write twice the number of pages required of preceding cohorts. These findings arguably reflect an increase in the rigor of college writing that is attainable through planning, drafting, and revision. However, although the kind of writing required to succeed in higher education has changed, the assessment model used to evaluate writing aptitude has not (Harrington, Malencyzk, Peckham, Rhodes, & Yancey, 2001). Standards reflect comprehensive process skills. Assessment reflects drafting in isolation.
In closing, three main themes have emerged from this retrospection. The first involves assessment. The second is the recognition that, in both the higher education and composition fields of scholarship, research on writing remediation is relatively sparse. Third is the practical insight that the kind of research generally heeded in policy discussions is neither the priority nor the forte of composition studies. We offer a brief exposition on the significance of each issue to the following review.
Foremost, the arrested condition of writing assessment corresponds to a long-standing gap in writing assessment research. Huot's (1990) review of direct writing assessment led him to conclude that the field had been in a state of neglect. Juzwik and colleagues' (2006) subsequent review of 1,502 more recent publications (1999 through 2004) found the disproportion sustained, with less than 8% of research on writing focused on assessment and evaluation. Because we speculate that the methodological inconsistencies historically associated with writing assessment underlie the review literature, our coding approach is attentive to these possibilities. This is the rationale behind applying codes to each study's approach and its analytic tools separately.
Next, even within composition studies, college writing remediation has been at the periphery. According to Lewiecki-Wilson and Sommers (1999), research on students in open admissions contexts is "at the margin of the field" (p. 440). This marginal status is tacit in Durst's (2006b) review of composition research since the mid-1980s, which mentions remediation only parenthetically in the context of assessment. Up to now, in both higher education and composition contexts, studies with underprepared writers are the minority in their respective fields.
Last, with regard to policy implications, Juzwik and colleagues (2006) found that less than 20% of all current writing research involved the kind of quantitative methods endorsed by the U.S. Department of Education and reflexively valued by politicians and administrators (Slavin, 2008). We take no position on the merits of this experimental bias; we simply acknowledge its influence on the levying of education policy support. Valid findings that are generalizable have been the mainstay of education policy backing (Feuer, Towne, & Shavelson, 2002; Shavelson, Phillips, Towne, & Feuer, 2003; Slavin, 2008). The following review of empirical research is therefore strategic to ascertaining coherence across and within disciplines as well as to developing interdisciplinary support for future research that can inform policy.
RESEARCH SYNTHESIS AND REVIEW
This critical synthesis and review brings together research literature that links studies on writing with student outcomes in the context of college preparation. Just under half (26 of 55) of the research involved postsecondary remedial writers directly. One tenth (6 of 55) of the studies dealt with high school students. The remainder of studies involved either broad college student populations, faculty, or both. These studies (23 of 55) focused on standards-related issues that are indirectly related to remediation.
We investigate the literature from two perspectives. The first is driven by rhetorical orientations to discern themes or trends in disciplinary origins, research foci, analytic methods, and outcomes. Educational outcomes guide the second review angle to address the quality of evidence for hypothesized relationships between writing remediation and student achievement, attainment, and development.
A framework of rhetorical orientations signifies critical disciplinary concerns that can advantage (or disadvantage) writing policy and programs. A précis of rhetorical frames is therefore in order. We offer the following summary of rhetorical issues to facilitate interdisciplinary dialogue around and policy support for underprepared college writers.
With deference to composition theorists whose work resists oversimplification, we suggest that rhetoric can be broadly understood as the theoretical framework in which writing occurs. In utilitarian terms, rhetoric signifies attention to a series of questions that reveal different assumptions about how writers write and, subsequently, how writers learn to write. Prominent composition researchers Flower and Hayes (1981) suggested that rhetoric responds to the primary question, "What guides the decisions writers make as they write?" (p. 365). Our aim is to illustrate how different answers to this question imply different educational strategies. In this way, rhetoric can be useful, optimizing policies that support college writing preparation.
For the review, we answer conceptual questions about writing in one of four basic ways. Each answer corresponds to one of four basic rhetorical groupings that we discuss next: current-traditional, cognitive, expressionist, and epistemic. This framework is associated with the widely influential composition theorist James Berlin, best known as the author of two landmark books in the field: Writing Instruction in Nineteenth-Century American Colleges (1984) and Rhetoric and Reality: Writing Instruction in American Colleges, 1900–1985 (1987).2 To illustrate the frames in the context of college writing preparation, the pertinent question becomes, "What guides the decisions students make as they write for academic purposes in postsecondary contexts?" Succinctly, for current-traditionalists, external rules guide the decisions students make to write at the college level. Cognitive rhetoricians believe that planning, drafting, and revision choices guide students toward college-level work. For expressionists, college-level writers make decisions that accentuate voice. Epistemic rhetoricians believe that students who understand how to negotiate the discourse norms of academic culture are college-ready. These assumptions about how writers write academically in turn inform how best to teach and to assess writing. What follows is a brief description of each rhetorical orientation by its implied remediation strategy (see Table 2 for a listing of rhetorical groups cross-referenced by paradigms, epistemologies, and instructional foci).
Table 2. Rhetorical Groupings by Associated Paradigms, Epistemologies, and Instructional Foci
A current-traditional orientation suggests that the decisions writers make as they write are guided by standards concerning argumentation, organization, sentence structure, grammar, usage, mechanics, and the like (Berlin, 1988). Underprepared writers are seen as students who have not yet mastered the requisite set of writing skills that yield college-level work. Current-traditional instruction focuses on habituating students to these rules, often by isolated skill repetition or by instructor comments on grammar, spelling, or other types of mistakes. The strategy is positivist and exemplifies a behavioral approach to writing. Student ability is conceptually equivalent to written product (Berlin, 1987).
Whereas current-traditionalists survey writing ability in terms of product, cognitive rhetoricians are primarily concerned with writing as a process. As discussed earlier, cognitive rhetoricians believe that planning, drafting, and revision choices guide students toward college-level work. The writer is seen as a problem solver (Flower & Hayes, 1981). Writer decisions involve negotiating old knowledge and fabricating new knowledge through engagement with a trio of composing subprocesses: planning, drafting, and revision (Greene & Ackerman, 1995). Cognitive rhetoric assumes that writing quality mirrors the efficiency of these composing subprocesses. From a cognitive perspective, underprepared writers are viewed as students whose ability to synthesize information through the writing process is not adequately developed. Cognitive instruction therefore focuses on strengthening students' understanding of and attentiveness to this cycle of processes that support writing quality.
As stated, for expressionists, college-level writers make decisions that accentuate voice. An expressionist perspective assumes that the decisions writers make as they write involve transforming personal experience into written discourse (Berlin, 1987). In the context of remediation, underprepared writers are seen as students who have trouble articulating their inner understandings using academic writing conventions. Voice is the guiding principle of expressionist instruction. Teaching emphasizes the self-expressive aspects of academic writing as the key to improvement. The orientation is constructivist insofar as expressionists recognize an interactional relationship between writer and language. As described by Berlin (1987), language "does not simply record the private vision, but becomes involved in shaping it" (p. 146).
As defined previously, epistemic rhetoricians believe that students who understand how to negotiate the discourse norms of academic culture are college-ready. An epistemic rhetorical perspective recognizes the cultural, social, and historical settings as well as the power differentials that influence writers and their writing (Berlin, 1987). This sociocultural perspective assumes that the decisions writers make as they write involve understanding and negotiating the discourse norms valued in a given setting (Street, 1998). Underprepared writers are seen as students who lack experience with the particular kind of written communication necessary to succeed in school (Rose, 1985). Remediation is seen as a process of acculturation, and writing is understood as co-constitutive with student identity (Gee, 2004). Epistemic remedial instruction focuses on teaching students to deconstruct college writing's cultural and linguistic codes by comparing and contrasting academic norms with the discourse conventions not traditionally valued in education settings (Carter, 2008). An epistemic curriculum is apt to incorporate digital discourses, such as blogging or video production, that complement traditional writing instruction (Kress, 2003).
In review, for each rhetorical grouping, the educational implications are as follows. A current-traditional viewpoint signifies a drill-and-skill approach to remediation. A cognitive perspective implies instruction in writing habits, including planning, drafting, and revision. An expressionist approach to remediation encourages students to cultivate an authorial voice. Finally, epistemic rhetoric draws on traditionally nonacademic discourses to teach college writing.
The review uses rhetorical orientations as a heuristic to survey themes or trends by disciplinary origins, research foci, analytic methods, and outcomes. To discern inconsistencies that may warrant policy concessions, analytic methods and analytic tools are coded separately (see Table 3 for quantitative categorizations organized by rhetorical groups). Outcomes are discussed according to three categories: achievement, attainment, and development. Achievement involves basic measurements of student academic success; indicators include grades and test scores. Composition researchers often proxy achievement by analyzing student writing for evidence associated with standards.3 Attainment outcomes signify educational advancement and involve measures of retention, persistence, and degree completion. Developmental outcomes signify the psychosocial consequences that are associated with proacademic behaviors such as motivation and self-efficacy. Note that developmental outcomes indirectly signal achievement and attainment.
Table 3. Quantitative Analyses of Rhetorical Groupings
Twelve of the primary studies reviewed were current-traditionalist in their approach to writing and, by extension, writing remediation. Within this group, only four studies appeared in composition sources (Friend, 2001; Gebril, 2009; Haswell, 2000; Kuehner, 1999). This is not surprising given that "one of the driving impulses in the disciplinary formation of . . . [composition studies] was to define itself against current-traditional rhetorical practices that emphasized correctness" (Huot & Schendel, 2010). Conspicuous, however, is that a quarter of the current-traditional studies appeared in consequential (see Goodyear et al., 2009) education sources published by the American Educational Research Association (Knudson, 1998; Moss & Yeaton, 2006; Wong et al., 2002). The high visibility given to current-traditional research in mainstream education journals indicates a conceptual gap between disciplinary fields that we will track throughout the review.
Foci. The topics of interest in this group stem from an overriding concern with efficacy reflecting four foci: programs, assessment, curriculum, and standards. Four studies focused on the efficacy of writing remediation using large-scale quantitative assessment methods (Crews & Aragon, 2004; M. Goldstein & Perin, 2008; Moss & Yeaton, 2006; Southard & Clay, 2004); three studies focused on issues associated with writing assessment (Gebril, 2009; Hardison & Sackett, 2008; Knoch & Elder, 2010); four studies focused on curriculum effects (Friend, 2001; Knudson, 1998; Kuehner, 1999; Wong et al., 2002); and one study investigated standards (Haswell, 2000).
Analytic methods. Only two types of data were analyzed in this group: grades and test scores. Grades were the basis of claims in three studies (Crews & Aragon, 2004; M. Goldstein & Perin, 2008; Moss & Yeaton, 2006). The remaining nine studies used standardized test scores. The nine collectively exemplify an overall reliance on the single sample assessment model that is categorically current-traditional. However, there was little coherence amid these single sample instruments. Six of the tests were third-party instruments, but none were identical, nor were the skills these instruments targeted consistent. One tool measured reading as a proxy of writing ability (Wong et al., 2002); two were placement exams from two different state institutions (Knudson, 1998; Southard & Clay, 2004); and the rest were examinations administered by established providers, including the College Board (Hardison & Sackett, 2008), the Educational Testing Service (Gebril, 2009), and the ACT (Moss & Yeaton, 2006). Of the studies that did not use third-party instruments, all four implemented versions of a single sample instrument, but the rubrics did not correspond (Friend, 2001; Haswell, 2000; Knoch & Elder, 2010; Kuehner, 1999). Although the designs within the current-traditional group are internally consistent with a behaviorist orientation, there is little that is comparable about the assessment methods other than the rhetorical orientation of the models.
Outcomes. Also notable is the group's relatively narrow focus on achievement. Every study in this group reported achievement outcomes. Fewer than half (5 of 12) reported secondary findings. Four supplemented achievement results with attainment outcomes (Knudson, 1998; Kuehner, 1999; Moss & Yeaton, 2006; Southard & Clay, 2004). Only three studies supported their conclusions with developmental outcomes (Knoch & Elder, 2010; Kuehner, 1999; Wong et al., 2002).
Fourteen studies evidenced a cognitive rhetorical orientation, two more than the count of the current-traditional group. The distribution of cognitive studies by discipline, however, is nearly the reverse of the current-traditional group. Only three cognitive studies appeared in education journals (Alzate-Medina & Pena-Borrero, 2010; M. Goldstein & Perin, 2008; Turner & Katic, 2009); the rest were published in composition sources.
Foci. The foci of these studies were primarily curriculum related. Half evaluated various instructional and pedagogical approaches to remediation (Alzate-Medina & Pena-Borrero, 2010; Engstrom, 2005; M. Goldstein & Perin, 2008; Kellogg & Raulerson, 2007; Kinsler, 1990; Pritchard & Marshall, 1994; Turner & Katic, 2009), and the other half were less thematically convergent, exploring questions surrounding English language learners (Ferris, 1994; Kenkel & Yates, 2009; Raimes, 2006), placement tests (Bennett-Kastor, 2004; Plakans, 2009), linguistic features of proficiency (McNamara et al., 2010), and student attitudes toward writing (E. Jones, 2008). Despite rhetorical coherence regarding a process viewpoint on writing, the diversity of research foci hinders generalizing statements.
Analytic methods. Other than E. Jones (2008), who used course grades to measure outcomes, the majority of cognitive studies employed a single sample method to assess writing. Three studies analyzed test scores derived by third-party instruments (Engstrom, 2005; A. Goldstein & Carr, 1996; Raimes, 2006). Two implemented researcher versions of the model (Ferris, 1994; Pritchard & Marshall, 1994). The use of these current-traditional assessment tools to report cognitive findings implies design inconsistencies that we will track throughout the review. Parenthetically, by noting design weaknesses, we are not questioning the findings. Rather, we assume that design discrepancies may conceal the presence of process-oriented outcomes not targeted by the current-traditional assessment model.
To this point, several studies supplemented current-traditional methods with cognitive assessment techniques. In composition studies, cognitive methods generally involve computations of linguistic features. The relevance of these techniques does not readily translate across disciplinary fields, so they are of minimal help as policy support. Even if disciplinary knowledge were not a barrier, the cognitive methods used were as diverse as in the current-traditional category. There is little that can be reliably inferred given the variance of analytic procedures. Think-aloud protocols (Plakans, 2009; Turner & Katic, 2009), linguistic factor analyses (Bennett-Kastor, 2004; Ferris, 1994; Kenkel & Yates, 2009; Kinsler, 1990; McNamara et al., 2010), automated software itemization (E. Jones, 2008; McNamara et al., 2010), scale analysis (E. Jones, 2008), and discourse synthesis (Plakans, 2009) are not readily comparable.
Outcomes. A total of 13 of the 14 cognitive studies reported educational outcomes associated with achievement. Despite the acknowledged design weaknesses, all the cognitive studies linked positively with achievement. Only one cognitive study reported attainment outcomes (Engstrom, 2005). Positive developmental outcomes were reported in five studies (Bennett-Kastor, 2004; Engstrom, 2005; Kenkel & Yates, 2009; Turner & Katic, 2009). The emphasis on and overall beneficial nature of achievement outcomes in this grouping imply the viability of coordinated cognitive research to inform remediation policies.
The distribution of expressionist studies is more balanced by discipline than what was evident in the current-traditional and cognitive groups. Of the 12 expressionist studies under review, eight appeared in composition journals (Fishman et al., 2005; Melzer, 2009; Pagano, Bernhardt, Reynolds, Williams, & McCurrie, 2008), and four were published in education sources. The more balanced distribution of these studies across the disciplines would seem in keeping with the so-called quiet expansion of expressive approaches to teaching writing (Fulkerson, 2005, p. 654).
Foci. Half (6 of 12) the expressionist studies surveyed college writing standards, but the foci differed by the targets that researchers selected to examine proficiency. Whereas Brockman and colleagues (2010) focused on faculty perceptions, both Kreth and colleagues (2010) and Melzer (2009) focused on college writing assignments. Fishman and colleagues (2005), along with S. Jones and Lea (2008), explored college writing samples generated in formal and informal environments. Jeffery (2009) reviewed assessment prompts and rubrics to determine college writing standards. The inductive nature of these studies collectively demonstrates the expressionist leaning to understand college writing standards by the voices of academic culture-sharing members (Burnham, 2001). The studies that did not focus on standards were also situational in their approach to writing assessment (Acker & Halasek, 2008; Pagano et al., 2008), writing curriculum (Fearn & Farnan, 2007; Rochford, 2003), and student attitudes toward writing (Charney et al., 1995; Wheeler & Wheeler, 2009).
Analytic methods. The diagnostic models accompanying expressionist research represented current-traditional and expressionist orientations. Expressionist diagnostic modes such as discourse analysis were used to categorize situated features of writing in seven studies. Five expressionist studies employed a single sample assessment model to generate standardized test scores. The rubrics, however, were not comparable: Three were 6-point (Fearn & Farnan, 2007; Pagano et al., 2008; Rochford, 2003), one was 5-point (Brockman et al., 2010), and one was 4-point (Kreth et al., 2010).
Outcomes. An essential tension between standardized and situated findings is exemplified in this group. In the studies that focused on standards, despite a similar inductive approach, findings lack the coherence necessary to integrate claims. Also notable, neither Jeffery (2009) nor Brockman and colleagues (2010) reported findings related to student outcomes. Jeffery presented findings related to standards, whereas Brockman and colleagues offered faculty perceptions. The achievement outcomes were determined by a relative balance of current-traditional (5 of 12) and expressionist (7 of 12) techniques. Only one expressionist study reported outcomes related to attainment (Charney et al., 1995), and 8 of the 12 expressionist studies reported psychosocial outcomes (Acker & Halasek, 2008; Brockman et al., 2010; Charney et al., 1995; Fishman et al., 2005; S. Jones & Lea, 2008; Melzer, 2009; Pagano et al., 2008; Wheeler & Wheeler, 2009).
Seventeen studies, a majority over the other rhetorical categories, positioned writing as epistemic. The distribution by discipline is noticeably uneven. Thirteen of the studies appeared in composition journals (Carter, 2006; Fanetti et al., 2010; Gleason, 2000; Goen-Salter, 2008; Harklau, 2001; Hassel & Giordano, 2009; Hull, Rose, Fraser, & Castellano, 1991; Lesley, 2001, 2008; Lunsford & Lunsford, 2008; Maloney, 2003; Moss & Bordelon, 2007; Rigolino & Freel, 2007). Of the four education journals that published epistemic work, only Callahan and Chumney's (2009) article in Teachers College Record targets a broad audience inclusive of policy makers. The apparent disciplinary gap appears historical in nature; each of the four education articles (Armstrong, 2008; Callahan & Chumney, 2009; Gutiérrez, Hunter, & Arzubiaga, 2009; Li, 2007) was published in the last 5 years, whereas the composition articles date back to 1991. This finding suggests a delay by the mainstream education research community in considering writing remediation from a sociocultural perspective.
Foci. All the epistemic studies are characterized by a critical framework. Across the group, underprepared writers were routinely conceptualized as "outsiders . . . [who] lack experience with the academic discourse community and its conventions" (Fulkerson, 2005, p. 678). The range of foci fell into three basic categories, but the majority (11 of 17) examined the effects of curricular and scheduling approaches to remediation (Carter, 2006; Fanetti et al., 2010; Gleason, 2000; Goen-Salter, 2008; Gutiérrez et al., 2009; Lesley, 2001, 2008; Li, 2007; Maloney, 2003; Moss & Bordelon, 2007; Rigolino & Freel, 2007). Armstrong (2008), Callahan and Chumney (2009), Harklau (2001), and Hull and colleagues (1991) each investigated student attitudes toward writing. Lunsford and Lunsford (2008) and Hassel and Giordano (2009) considered standards.
Analytic methods. Despite major education journals' lack of interest in this group, the analytic methods used by epistemic researchers were impartial to rhetorical orientation. The majority used both current-traditional and epistemic assessment models. Holistic scoring practices were often employed in combination with ethnographic techniques applied at the student level (Harklau, 2001; Hull et al., 1991), the classroom level (Lesley, 2008; Moss & Bordelon, 2007), the program level (Carter, 2006; Maloney, 2003), and the institutional level (Callahan & Chumney, 2009). The prevalence of multiple methods across the epistemic research implies that the design weaknesses observed in the two preceding rhetorical groups may be less meaningful because the data are triangulated and reliability is increased.
Outcomes. The epistemic group is notable not only because it is the largest group but also because it presents the broadest range of educational findings. All the achievement outcomes in this group were supplemented by developmental findings. More than half (10 of 17) of the epistemic studies reported two categories of student outcomes (Armstrong, 2008; Carter, 2006; Fanetti et al., 2010; Gleason, 2000; Goen-Salter, 2008; Lesley, 2001, 2008; Li, 2007; Moss & Bordelon, 2007; Rigolino & Freel, 2007). The most common pairing was achievement and development. Armstrong reported on attainment and development. Two epistemic studies reported on the trio of achievement, attainment, and developmental outcomes (Moss & Bordelon, 2007; Rigolino & Freel, 2007).
EMPIRICAL AND THEORETICAL REVIEW
The following section reviews the literature more conventionally to evaluate claims that writing remediation can be positively linked to educational outcomes. For the purpose of consistency, we focus on data at the student unit of analysis. Accordingly, we look at student achievement, attainment, and developmental outcomes. We focus on the degree to which the evidence coheres in each category of data.
Achievement outcomes include grades, a range of evidence related to standards, and test scores. The relationship between writing remediation and achievement is examined in 43 of the studies reviewed in this article. The preponderance of achievement outcomes cited in the review literature is not surprising given that the function of remediation is unequivocally to address academic underpreparation (Deil-Amen & Rosenbaum, 2002).
Grades. Of the 12 studies that reported academic outcomes in terms of grades, only two studies used grade point average as a representation of longitudinal achievement (Crews & Aragon, 2004; Maloney, 2003). Because our focus is policy development to support degree completion, single course grades are of minimal consequence.
Standards-related evidence. The review of rhetorical groupings revealed a heterogeneity of standards-related evidence to proxy achievement. When the totality of review literature is considered, variability is sustained. Of the 43 studies that reported achievement outcomes, 25 studies made assessments based on standards criteria. Nonetheless, eight categories, many with additional subcategories, are needed to adequately describe the variety of data on which achievement outcomes were determined. For example, reading was used as an indirect measure of writing achievement in seven studies, but reading itself was signaled by three related, though not entirely comparable, skills: comprehension (Engstrom, 2005; Kuehner, 1999; Lesley, 2001; Wong et al., 2002), criticality (Kreth et al., 2010; Moss & Bordelon, 2007), and reading-to-write (Plakans, 2009).4 The variance of criteria used to measure reading is symptomatic of the problem with the achievement grouping as a whole. Organization (Friend, 2001; Kenkel & Yates, 2009; Kinsler, 1990), vocabulary (McNamara et al., 2010; Moss & Bordelon, 2007), spelling (Bennett-Kastor, 2004), grammar (Fearn & Farnan, 2007), argument development (Ferris, 1994; Lunsford & Lunsford, 2008), and audience awareness (Carter, 2006; Hassel & Giordano, 2009; Kinsler, 1990; Melzer, 2009; Moss & Bordelon, 2007) may each reliably represent achievement, but there is no footing for coherent claims amid these studies. Some studies measured achievement in terms of productivity (Ferris, 1994; Fishman et al., 2005; Kellogg & Raulerson, 2007; Lunsford & Lunsford, 2008; Melzer, 2009; Moss & Bordelon, 2007); others assessed occupational literacy (Haswell, 2000; Moss & Bordelon, 2007).
Test scores. The problem of variability extends to standardized test scores, the most common data used to measure achievement as evidenced in 21 studies. Cumulatively, 13 different third-party instruments are represented in this grouping. Also indicative of standards variation, in studies that employed researcher adaptations of holistic scoring tools, the assessment rubrics are not analogous. Four studies involved a 6-point rubric (Acker & Halasek, 2008; Fearn & Farnan, 2007; Goen-Salter, 2008; Pagano et al., 2008); three used a 5-point rubric (Brockman et al., 2010; Friend, 2001; Hassel & Giordano, 2009); and one used a 4-point rubric (Kreth et al., 2010).
Overall, based on the impurity of grades as a standardized measure (Marsh, Trautwein, Lüdtke, Köller, & Baumert, 2005), the disparity amid standards-related variables, and the use of so many different instruments to generate test scores, the most significant finding within this frame is the miscellany of evidence used to link remediation with achievement. Despite the quantity of studies that report on achievement, the findings lack both empirical consistency and substantive relevance to policy aimed at increasing college graduation rates. The achievement research is not suitable for induction.
Attainment measures such as course retention, persistence, and degree completion signify educational advancement. Nine of the studies reviewed examine the association between writing remediation and traditional attainment outcomes. All the relationships are positive, with the exception of Kuehner (1999), who found no difference in the retention rates of students who received computer versus conventional classroom-based remediation. Attainment measures that are related to a single course, however, lack policy relevance. Because so few studies reported empirically based attainment outcomes, within the attainment grouping, we acknowledge seven studies that describe conceptual findings related to high school to college transitional support.
Enrollment in credit-level coursework. Two studies reported strategies that increased attainment by improving student performance on single sample placement tests (Knudson, 1998; Raimes, 2006). Knudson's rule-based coaching was linked directly with students placing out of remediation and indirectly to persistence because enrollment in credit-bearing courses was not delayed. Raimes's process writing strategies correlated with higher placement test scores and subsequent enrollment in credit-level coursework. Given what is known about the limitations of assessment, "teaching to the test" (Higgins, Miller, & Wegmann, 2006) does not necessarily speak to the problem of underpreparation. By the same token, test preparation is a pragmatic approach in a policy context rife with ambiguities.
Course retention. The limited number of studies that report on retention (Gleason, 2000; Goen-Salter, 2008; Kuehner, 1999; Maloney, 2003; Rigolino & Freel, 2007; Southard & Clay, 2004) dissuades generalizing about this group. Nonetheless, commensurate with Jenkins and Boswell's (2002) review findings, retention was positively related to mainstreaming models of writing remediation (Goen-Salter, 2008; Rigolino & Freel, 2007) and negatively related to prerequisite-style remediation (Gleason, 2000).
Institutional persistence. Only three studies (Callahan & Chumney, 2009; Engstrom, 2005; Gleason, 2000) reported on institutional persistence, an outcome that has valuable public policy implications. Notably, of the three, Engstrom (2005) linked technology use in remediation with degree progress. Although other studies reported a positive relationship between technology and remediation (Acker & Halasek, 2008; Kuehner, 1999; McNamara et al., 2010; Turner & Katic, 2009; Wheeler & Wheeler, 2009), the outcomes are not readily comparable in a manner suited to policy discussion.
Degree completion. Although a number of curricular and programmatic approaches correlated positively with attainment, only Rigolino and Freel (2007) reported increased degree completion rates. One study does not hold policy importance. As noted, however, this study described the advantageousness of a mainstreaming model. As such, it corroborates programmatic findings from generic remediation studies (Jenkins, Jaggars, & Roksa, 2009; Schwartz & Jenkins, 2007).
High school to college transitional support. In addition to traditional attainment measures, seven studies reported transitional support for high school students prior to college matriculation (Acker & Halasek, 2008; Armstrong, 2008; Fanetti et al., 2010; Harklau, 2001; Knudson, 1998; Lesley, 2008; Moss & Bordelon, 2007). These studies conceptualized remediation as an educational bridge between high school and college writing. In this way, the bridge concept indirectly supports attainment outcomes.
Because developmental outcomes are not the target of remediation policies, these findings are not conventionally weighted in policy studies. Nevertheless, developmental outcomes signify psychosocial consequences that are associated with proacademic behaviors. Student development thus supports achievement and attainment.
As with achievement and attainment, however, a lack of coherence inhibits inductive claims within this frame. Of the 24 studies that reported psychosocial outcomes, the findings varied across four categories: motivation, attitudes, perceptions of learning, and conceptualization of academic discourse. Although these categories are most certainly related, they are not rigorously equivalent and often contain subcategories of data.
Motivation. Although motivation is referenced explicitly in two studies (Acker & Halasek, 2008; Turner & Katic, 2009), an additional three constructs represent the outcomes in this developmental group. Motivation is implicit in Knoch and Elder's (2010) finding of student preference for process writing, Moss and Bordelon's (2007) construct of educational aspirations, and Rigolino and Freel's (2007) report of student agency, but none are equivalent by stringent measurement principles.
Attitudes. Understanding that the constructs are related, though nonequivalent, we include self-efficacy studies and research regarding attitudes together under this heading. Self-efficacy was represented by both confidence in literacy skills (Acker & Halasek, 2008; Engstrom, 2005; Moss & Bordelon, 2007) and identity development (Gutiérrez et al., 2009). Attitudes toward literacy tasks were documented in four studies (Charney et al., 1995; Harklau, 2001; Kuehner, 1999; Lesley, 2008). As with all other categories reviewed thus far, our groupings are useful for presentational, not inductive, purposes.
Perceptions of learning. Student perceptions of learning were reported in seven studies (E. Jones, 2008; Li, 2007; Moss & Bordelon, 2007; Pritchard & Marshall, 1994; Rigolino & Freel, 2007; Wheeler & Wheeler, 2009; Wong et al., 2002). Such outcomes, however, are generally not given credence for policy decisions.
Conceptualization of academic discourse. Of all the groupings in this review, the findings here are conceptually coherent despite the variety of descriptors researchers employed to describe this outcome. In total, nine studies reported increased academic discourse awareness, a critical capacity associated with a sociocultural perspective and concurrently an epistemic rhetorical orientation. Four studies referenced this awareness by the descriptor of metacognition (Carter, 2006; Harklau, 2001; S. Jones & Lea, 2008; Lesley, 2008). Three cited findings in terms of deconstructing dominant forms of written discourse (Armstrong, 2008; Hull et al., 1991; Li, 2007). Gutiérrez and colleagues (2009) described the effect as critical cultural historical awareness, and Fishman and colleagues (2005) labeled discourse awareness by the moniker of performance.
Before turning to the discussion, the review's major insights warrant a brief summary.
We used rhetorical orientations to assess the literature in terms of (a) disciplinary origins, (b) investigative foci, (c) analytic methods, and (d) findings. With regard to disciplinary origins, the review suggests a considerable delay by the higher education research community in considering writing remediation from a sociocultural perspective. In terms of research foci, although just under a third (17 of 55) of the literature claims a relationship to college writing standards, there is little coherence amid these standards. Apropos analytic tools, the prevalence of current-traditional analytic tools in assessing writing achievement was evident across each of the rhetorical groups. Sixty percent (33 of 55) of the studies used current-traditional methods to assess writing, though only a fifth (12 of 55) of the research is framed by current-traditional assumptions about writing and writing education. As such, there is a recurring misalignment between rhetorical assumptions and analytic tools in the cognitive, expressionist, and epistemic categories. Additionally, despite the ubiquity of the current-traditional assessment model, the variability of instruments renders findings unsuitable for generalization. Last, in terms of findings, the review suggests that the strongest correlation between remediation and improved writing quality is rooted in epistemic research.
Student outcomes were assessed according to three categories: achievement, attainment, and development. A majority (48 of 55) of studies reported on achievement, but based on the questionable reliability of grades as a standardized measure, inconsistencies amid standards-related variables, and the sheer number of disparate instruments to generate test scores, the research is not suitable for induction. The attainment category likewise carries little evidentiary weight, given that only three studies reported on institutional persistence, an outcome that has policy implications. As with achievement and attainment, a lack of coherence amid developmental outcomes inhibits inductive claims within this frame.
The overall investigation of rhetorical orientations and triangulated student outcomes reveals a shortage of empirical support with which to infer claims that can inform remediation policy. Nonetheless, the review offers certain insights about the state of extant research that may be helpful to our goal of establishing common ground for ongoing dialogue across disciplinary fields. Findings converge on the themes of standards, assessment, and strategic planning. The first two are liabilities. Strategic planning is the boon in an otherwise mired policy state.
THE NEED FOR CLEAR COLLEGE WRITING STANDARDS
The most reliable finding of the review is the inconsistency of college writing standards across the literature. The variability of standards makes it impossible to synthesize claims for policy support.5 Confirmation of the proliferation of standards inconsistency in the research was not hard to come by. Even within the review literature, it was the dominant finding in seven studies. Jeffery (2009) documented construct variability by analyzing state and national writing assessment prompts and rubrics. Melzer (2009) explored the contending genre and rhetorical demands of postsecondary writing assignments. Acker and Halasek (2008) investigated differences in the ways high school teachers and university instructors commented on student work as suggestive of a discrepancy in their conceptualizations of college writing. Callahan and Chumney (2009) found that standards varied based on institutional context, whereas Pagano and colleagues (2008) found that perceptions of standards fluctuated by instructor status. Differential access to resources and disparity of instructional expectations were each linked to writing quality. Instructor and institutional status notwithstanding, inconsistency amid faculty perceptions of writing quality was documented in studies by Brockman and colleagues (2010) and Kreth and colleagues (2010). Collectively, these seven studies demonstrate the ubiquity of standards inconsistency amid faculty, institutions, and government agencies.
Without researcher consensus regarding a construct of college writing, the field is unlikely to provide findings that coherently speak to policy makers. Although we recognize that standards deliberation is a mainstay of composition theory, we submit that ongoing academic debate is a luxury that inhibits college access for underprepared writers. The recent debut of the Common Core State Standards (CCSS) Initiative, particularly the Grades 11–12 college and career readiness expectations for writing, is timely as a touchstone for critique (Porter, McMaken, Hwang, & Yang, 2011). The CCSS on argumentative writing, for example, challenges the former state standards' focus on expository writing (Beach, 2011). This shift reflects a more appropriate conceptual alignment with the college writing genre demands inductively found by Lunsford and Lunsford (2008). Moving toward national standards will involve risk (Carmichael, Martino, Porter-Magee, & Wilson, 2010), but the current standstill is untenable. Higher education's formal position on college readiness initiatives such as the CCSS can bring much-needed focus to critical conversations at all levels of the educational spectrum (Mathis, 2010, p. 15).
THE NEED FOR BETTER COLLEGE WRITING ASSESSMENT METHODS
Just as assumptions about the decisions writers make as they write imply different educational strategies for teaching, rhetoric also informs the way writing ability should be measured. More than a third (21 of 55) of the review literature demonstrates a conceptual misalignment between the way writing and learning to write are conceived and the way they are measured. The majority (43 of 55) of researcher assumptions about writing development derive from cognitive, expressionist, and epistemic viewpoints that converge on process-oriented theories of composing. These viewpoints are patently discordant with the current-traditional assessment model that equates writing skill with product.
Current-traditionalists uniformly believe that writing ability is evidenced on the proverbial page. Accordingly, the hallmark current-traditional assessment model (single sample) typically functions as follows: (a) the test solicits a sample of written product generated by way of a prompt that the student addresses in writing under timed conditions, and (b) the sample is then scored for its adherence to academic writing conventions. Only by a current-traditional rhetorical viewpoint is this approach satisfactory. From a cognitive standpoint, the model's focus on drafting represents a dismissal of process theory because students are not generally given enough time to plan or revise their writing. From an expressionist perspective, the model is flawed because scores are based on the occurrence of standard textual, structural, and argumentative characteristics. This scoring strategy rewards convention and inevitably undervalues voice. From an epistemic point of view, the single sample assessment model is problematic because it purports rhetorical neutrality. Epistemic rhetoric takes issue with the veneration of single sample assessment to arbitrate standards. From a cognitive, expressionist, or epistemic standpoint, the single sample model is, at best, a partial measure of writing aptitude.
It should be clear at this point that the single sample assessment model that places students into remediation works from a very different set of assumptions than the cognitive, expressionist, and epistemic theories that pervade current composition scholarship. In light of this controversial circumstance, the single sample test is certainly a questionable gauge on which to base high-stakes decisions about college students' English proficiency. Unequivocally, to improve remediation policy, more rhetorically flexible diagnostic options are needed.
Writing assessment is not just a policy obstacle; it is also a research liability. In our review, the use of current-traditional assessment tools in cognitive, expressionist, and epistemic research contexts has important implications. The richness of educational outcomes produced by cognitive, expressionist, and epistemic approaches to remediation cannot be captured by a single sample model, and yet there are no alternatives with which to make the kind of standardized claims recognized in policy studies. Despite these limitations, cognitive, expressionist, and epistemic approaches to remediation demonstrate a positive impact on student achievement. We can reasonably infer that the true and complete benefit of these approaches likely extends beyond what is measurable on a single sample test.
THE IMPORTANCE OF COORDINATING EFFORTS TO ADDRESS RESEARCH LIABILITIES
The reciprocal relationship between standards and assessment implies the importance of coordinating research to improve remediation policy quickly and effectively. Standards without assessment are as useless as assessment without standards. As Carmichael and colleagues (2010) explained, "Standards often end up . . . ignored . . . [while] what's on the high-stakes test . . . becomes the real standard" (p. 2). Resistance to either standards or assessment negatively impacts the underserved student populations who hover at and below the cut points of problematic measures. In the composition community, the inadequacy of tools used to assess student writing has been widely acknowledged for decades, and yet response within the field has been largely one of avoidance (Huot & Schendel, 2010). Composition theory has advanced along other trajectories while the mantle of writing assessment remains underdeveloped. The cause of access and the improvement of public policy to increase graduation rates, however, bring writing assessment to the forefront of the higher education research agenda. Standards are implicitly invoked by this arraignment. The arenas of writing standards and assessment require immediate, collective attention.
The obviousness of this finding to composition scholars ought not stir contempt for policy makers. Rather, mutual awareness of the problems associated with each field can benefit students whose access to higher education is delayed by test scores that are arguably only a partial measure of writing aptitude. Although standardized writing assessment may not be ideologically valuable outside a positivist framework, resistance to accountability measures in postsecondary writing contexts penalizes underprepared students. Some degree of concession toward the articulation and assessment of college writing standards is essential to improve graduation rates nationwide. Systemic action is required to amend educational inequity for the low-income and minority student populations who are disproportionately less well prepared for college than their advantaged peers.
THE IMPORTANCE OF STRATEGIC PLANNING
Regarding the coordination of policy and composition research, three opportunities deserve recognition: interdisciplinary collaboration, analytic scope, and epistemic possibilities. Each represents a positive condition in an otherwise barren landscape of writing policy research.
Interdisciplinary collaboration. The review demonstrates that the language of education research and the language of composition studies are fundamentally analogous across epistemologies of learning and rhetorical orientations. This cross-disciplinary framework is valuable in the context of not only expanding scholarship on writing remediation but also working across disciplines for the common cause of improving the postsecondary prospects of underprepared students.
Analytic scope. The framework of student achievement, attainment, and developmental outcomes employed as an analytic tool suggests a comprehensive viewpoint from which to consider writing remediation policy. Given the limitations associated with assessing writing achievement, the developmental category is especially valuable. Psychosocial outlooks influence the ways in which students not only perceive their own writing strengths and weaknesses but also engage with the task of writing improvement. As Astin (1984) noted, a student is not a black box, but "the mediating mechanism that transforms programs and policies into achievement and attainment" (p. 519). Developmental outcomes therefore have high stakes because the latent ways in which students conceptualize learning directly impact their ability to progress as writers (Pajares, 2003; Pajares & Valiante, 2008; M. White & Bruning, 2005). In this way, developmental findings represent student capacity for future achievement and attainment. Equally important in a review context, developmental outcomes that supplement achievement and attainment measures increase reliability.
That developmental outcomes warrant the attention of higher education policy makers also finds support via a significant body of writing research that was not reviewed in this article because it did not meet our criterion of being directly related to college remediation. Although confirmatory research with underprepared students is scarce, self-efficacy has been shown to be the most important predictor of writing improvement in other academic settings (Pajares, 1996; Pajares & Valiante, 2008). Given the strength of this literature, in conjunction with the problems of writing assessment, we recommend that policy makers consider student development in the disciplinary context of writing remediation. Minimally, writing remediation policy that supports the collective of achievement, attainment, and developmental gains is preferable to policy that is focused on only one or two student outcomes. To improve graduation rates, an expansive outlook on remediation effects will be useful.
Epistemic possibilities. The review suggests that the strongest correlation between remediation and improved writing quality is rooted in epistemic research. Assessment constraints notwithstanding, writing remediation that favors an epistemic orientation is positively linked to student outcomes more significantly and more comprehensively than any other rhetorical approach. Of the 17 studies distinguished by an epistemic viewpoint on writing, more than half (10 of 17) reported on achievement, and all outcomes were positive even though 9 of the 10 assessment tools used were current-traditional. Additionally, more than half (11 of 17) of the epistemic studies present developmental findings, all of which are advantageous to students.
Two points in particular sustain the promise of epistemic research. The conceptualization of academic discourse described in epistemic developmental outcomes is the only consistent finding linked positively to writing remediation across the review literature. Also salient is the conceptual bridge between high school and college writing that we identified as benefitting attainment. Of the seven studies that reported transitional support for high school students prior to college matriculation, the majority are epistemic (Armstrong, 2008; Fanetti et al., 2010; Harklau, 2001; Lesley, 2008). More epistemic research that speaks to remediation policy development is vital.
To remain competitive in the global marketplace, the government is promoting civic interest in and fiscal support for higher education. Public provisions, however, cannot raise graduation rates when access issues congest the means by which additional students both enter and navigate the higher education system. Improving remediation is critical to increasing tertiary education's institutional capacity to award more degrees and to supply the educated workers necessary for sustained economic prosperity.
At the outset of this article, we acknowledged the inconclusiveness of large-scale studies regarding the efficacy of remediation as a degree track for underprepared students. In the absence of research-based conclusions, institutions have focused on either decreasing remedial course offerings or eliminating remedial programs altogether (Bettinger & Long, 2009; Gleason, 2000; Soliday, 2002; Tierney & Garcia, 2011). Our agenda is to advocate neither for remediation's continuation nor its extinction. Rather, our inquiry has been one of expediency for students either currently or imminently accountable to existing remediation policies. Recent review work demonstrates the positive impact of course sequencing and institutional nature on student outcomes (Bailey, 2009; Callahan & Chumney, 2009; Jenkins et al., 2010). These studies not only call attention to the sanctions levied against underprepared students that hinder access but also suggest more efficacious program models of remediation for nationwide implementation.
Despite the uncertainty of this review, it is still our contention that disciplinary considerations can advantage remediation outcomes in the task of rethinking remediation (Bailey, 2009) and the cause of improving graduation rates. For this reason, we encourage education and composition scholars alike to approach writing remediation research with a broad understanding of the field's innate limitations and an increased consideration for policy provisions. Actionable research is essential to increasing the educational opportunities of underprepared writers and decreasing the social inequities associated with remediation policies and programs.
The present work reveals multiple issues to be addressed in future studies. Incongruities between standards and assessment of college writing have resulted in a body of research that does not provide the kind of evidentiary support weighted in policy discussions. This finding predicts that remediation policy and programs will remain accountable to the rhetorical and paradigmatic viewpoints that dominate writing assessment and that relegate underprepared students to dubious degree pathways.
Our findings are also relevant to the general improvement of college writing preparation in secondary schools and high school-to-college transitional settings. The K–12 Common Core State Standards are a preliminary step toward curricular alignment, but standards alone will not solve the problem of underpreparedness for higher education. The importance of developing and applying alternative methods of writing assessment cannot be overstated. Discourse approaches (Connor & Mbaye, 2002) are one option worth pursuing, as are gains or value-added models (Andrade et al., 2008; Condon, 2009; Condon & Kelly-Riley, 2004). Certain K–12 state writing policies are implementing exams that rely on two timed writing sessions (Applebee & Langer, 2006). The first session solicits planning and drafting, and the second session solicits revision. This model represents a better conceptual fit with writing process theory than the single sample design. Last, portfolio assessment—that is, rating multiple writing samples to generate a composite proficiency score—holds promise, but without clear standards, the model is ineffectual (Acker & Halasek, 2008; Elbow & Belanoff, 1986; Ford & Larkin, 1978; Nystrand, Cohen, & Dowling, 1993). The call for more studies that can help underprepared writers complete college degrees is contingent on the availability of reliable assessment tools. To this ultimate truth, we tender the present work as a harbinger of sobering if not urgent information about the condition of college writing assessment.
Ultimately, the review's inconclusiveness demonstrates the methodological difficulties of conducting future research to support successful writing remediation policies. The goal of increasing success for underprepared writers is an inherently interdisciplinary venture that necessitates shared understanding of and sensitivity to the distinct allegiances of higher education stakeholders. It is our parting contention that if scholars from across the university are able to come together to support the degree prospects of underprepared writers, then institutions, politicians, and taxpayers will follow suit.
1. These exams are administered by either the institution itself or a third-party company such as the College Board or the ACT.
2. Berlin's typology is useful to acquaint the higher education research community with basic rhetorical distinctions that signal divergent—even incompatible—guidelines for policy deliberation. We recognize that recent composition scholarship reflects a reconfiguration of rhetorical groupings to better fit sociocultural complexities. Because our purpose is to outline baseline distinctions amid rhetorical categories, Berlin's original framework is tactically sufficient.
3. The variety of data associated with standards is an integral component of discussion. Evidence extends from textual elements (such as grammar and spelling), to essay mechanics (including organization, argumentation, and audience awareness), to indirect measures (such as reading comprehension).
4. Reading-to-write skills connote the self-regulated learning strategies students use when directed to write in response to source material. Examples of reading-to-write skills are organizing/transforming and task-information seeking (Risemberg, 1996).
5. Requisite variance concerns about writing time allotments and essay prompts are insignificant in light of the more formidable situation of incoherent college writing standards.
Acker, S., & Halasek, K. (2008). Preparing high school students for college-level writing: Using eportfolio to support a successful transition. Journal of General Education, 57(1), 1–14.
Adelman, C. (2004). Principal indicators of student academic histories in postsecondary education, 1972–2000. Washington, DC: U.S. Department of Education.
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education.
Adelman, C. (2009). The spaces between numbers: Getting international data on higher education straight. Washington, DC: Institute for Higher Education Policy.
Alzate-Medina, G. M., & Peña-Borrero, L. B. (2010). Peer tutoring: Developing writing in college education. Universitas Psychologica, 9(1), 123–138.
Andrade, H. L., Du, Y., & Wang, X. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students' writing. Educational Measurement: Issues and Practice, 27(2), 3–13.
Applebee, A. N., & Langer, J. A. (2006). The state of writing instruction in America's schools: What existing data tell us. Albany, NY: Center on English Learning & Achievement, University at Albany.
Armstrong, S. (2008). Using metaphor analysis to uncover learners' conceptualizations of academic literacies in postsecondary developmental contexts. International Journal of Learning, 15(9), 211–218.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25(4), 518–529.
Attewell, P., & Domina, T. (2008). Raising the bar: Curricular intensity and academic performance. Educational Evaluation and Policy Analysis, 30(1), 51–71.
Attewell, P., Domina, T., Lavin, D. E., & Levey, T. (2006). New evidence on college remediation. Journal of Higher Education, 77(5), 886–924.
Attewell, P., Heil, S., & Reisel, L. (2011). Competing explanations of undergraduate noncompletion. American Educational Research Journal, 48(3), 536–559.
Aud, S., Hussar, W., Kena, G., Bianco, K., Frohlich, L., Kemp, J., & Tahan, K. (2011). The condition of education 2011 (NCES 2011-033). Washington, DC: U.S. Department of Education.
Bahr, P. R. (2010). Revisiting the efficacy of postsecondary remediation: The moderating effects of depth/breadth of deficiency. Review of Higher Education, 33(2), 177–205.
Bailey, T. (2009). Challenge and opportunity: Rethinking the role and function of developmental education in community college. New Directions for Community Colleges, 2009(145), 11–30.
Bailey, T., Jeong, D., & Cho, S. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29(2), 255–270.
Bailey, T., & Weininger, E. B. (2002). Performance, graduation, and transfer of immigrants and natives in City University of New York community colleges. Educational Evaluation and Policy Analysis, 24(4), 359–377.
Bakhtin, M. M., Holquist, M., & Emerson, C. (1986). Speech genres and other late essays (Vol. 8). Austin: University of Texas Press.
Beach, R. W. (2011). Issues in analyzing alignment of language arts Common Core standards with state standards. Educational Researcher, 40(4), 179–182.
Bennett-Kastor, T. (2004). Spelling abilities of university students in developmental writing classes. Journal of College Reading and Learning, 35(1), 67–82.
Berlin, J. (1984). Writing instruction in nineteenth-century American colleges. Carbondale: Southern Illinois University Press.
Berlin, J. (1987). Rhetoric and reality: Writing instruction in American colleges, 1900–1985. Carbondale: Southern Illinois University Press.
Berlin, J. (1988). Rhetoric and ideology in the writing class. College English, 50(5), 477–494.
Bettinger, E., & Long, B. (2009). Addressing the needs of underprepared students in higher education: Does college remediation work? Journal of Human Resources, 44(3), 736–771.
Braddock, R., Lloyd-Jones, R., & Schoer, L. (1963). Research in written composition. Champaign, IL: National Council of Teachers of English.
Breland, H., Bridgeman, B., & Fowles, M. (1999). Writing assessment in admission to higher education: Review and framework. College Board Report, 99(3), 1–44.
Breneman, D. W., Costrell, R. M., Haarlow, W. N., Ponitz, D. H., & Steinberg, L. (1998). Remediation in higher education: A symposium. Washington, DC: Thomas B. Fordham Foundation.
Brockman, E., Taylor, M., Crawford, M., & Kreth, M. (2010). Helping students cross the threshold: Implications from a university writing assessment. English Journal, 99(3), 42–49.
Bureau of Labor Statistics. (2010). College enrollment and work activity of 2009 high school graduates. Washington, DC: U.S. Department of Labor.
Burnham, C. (2001). Expressive pedagogy: Practice/theory, theory/practice. In G. Tate, A. Rupiper, & K. Schick (Eds.), A guide to composition pedagogies (pp. 19–35). New York, NY: Oxford University Press.
Butler, J. A., & Britt, M. A. (2011). Investigating instruction for improving revision of argumentative essays. Written Communication, 28(1), 70–96.
Calcagno, J. C., & Long, B. T. (2008). The impact of postsecondary remediation using a regression discontinuity approach: Addressing endogenous sorting and noncompliance. Cambridge, MA: National Bureau of Economic Research.
Callahan, K. M., & Chumney, D. (2009). "Write like college": How remedial writing courses at a community college and a research university position at-risk students in the field of higher education. Teachers College Record, 111(7), 1619–1664.
Carmichael, S. B., Martino, G., Porter-Magee, K., & Wilson, W. S. (2010). The state of state standards—and the Common Core—in 2010 (pp. 1–373). Washington, DC: Thomas B. Fordham Institute.
Carnevale, A. P., Smith, N., & Strohl, J. (2010). Help wanted: Projections of jobs and education requirements through 2018. Washington, DC: Georgetown University Center on Education and the Workforce.
Carter, S. (2006). Redefining literacy as a social practice. Journal of Basic Writing, 25(2), 94–125.
Carter, S. (2008). The way literacy lives: Rhetorical dexterity and basic writing instruction. Albany: State University of New York Press.
CCCC. (1987). Scholarship in composition: Guidelines for faculty, deans, and department chairs. Conference on College Composition and Communication position statement. Urbana, IL: National Council of Teachers of English.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.
Charney, D., Newman, J. H., & Palmquist, M. (1995). "I'm just no good at writing": Epistemological style and attitudes toward writing. Written Communication, 12(3), 298–329.
Coker, D., & Lewis, W. (2008). Beyond Writing Next: A discussion of writing research and instructional uncertainty. Harvard Educational Review, 78(1), 231–251.
Condon, W. (2009). Looking beyond judging and ranking: Writing assessment as a generative practice. Assessing Writing, 14(3), 141–156.
Condon, W., & Kelly-Riley, D. (2004). Assessing and teaching what we value: The relationship between college-level writing and critical thinking abilities. Assessing Writing, 9(1), 56–75.
Connor, U., & Mbaye, A. (2002). Discourse approaches to writing assessment. Annual Review of Applied Linguistics, 22(1), 263–278.
Crews, D. M., & Aragon, S. R. (2004). Influence of a community college developmental education writing course on academic performance. Community College Review, 32(2), 1–18.
Darling-Hammond, L. (2009). President Obama and education: The possibility for dramatic improvements in teaching and learning. Harvard Educational Review, 79(2), 210–223.
Deil-Amen, R., & Rosenbaum, J. E. (2002). The unintended consequences of stigma-free remediation. Sociology of Education, 75(3), 249–268.
Durst, R. (1990). The mongoose and the rat in composition research: Insights from the RTE annotated bibliography. College Composition and Communication, 41(4), 393–408.
Durst, R. (2006a). Research in writing, postsecondary education, 1984–2003. L1 Educational Studies in Language and Literature, 6(2), 53–73.
Durst, R. (2006b). Writing at the postsecondary level. In P. Smagorinsky (Ed.), Research on composition: Multiple perspectives on two decades of change (pp. 78–107). New York, NY: Teachers College Press.
Dyson, A. H., & Freedman, S. W. (2003). Writing. In J. Flood, D. Lapp, J. R. Squire, & J. M. Jensen (Eds.), Handbook of research on teaching the English language arts (2nd ed., pp. 967–992). Mahwah, NJ: Erlbaum.
Edlund, J., & Brynelson, N. (2008, October). Engaging with high school teachers: Articulating university expectations. Paper presented at the Proficiency in the First Year at the University: Developmental Mathematics and English conference, Los Angeles, CA.
Elbow, P., & Belanoff, P. (1986). Portfolios as a substitute for proficiency examinations. College Composition and Communication, 37(3), 336–339.
Engstrom, E. (2005). Reading, writing, and assistive technology: An integrated developmental curriculum for college students. Journal of Adolescent and Adult Literacy, 49(1), 30–39.
Fanetti, S., Bushrow, K., & DeWeese, D. (2010). Closing the gap between high school writing instruction and college writing expectations. English Journal, 99(4), 77–83.
Fearn, L., & Farnan, N. (2007). When is a verb? Using functional grammar to teach writing. Journal of Basic Writing, 26(1), 63–87.
Ferris, D. (1994). Rhetorical strategies in student persuasive writing: Differences between native and non-native English speakers. Research in the Teaching of English, 28(1), 45–65.
Feuer, M. J., Towne, L., & Shavelson, R. J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4–14.
Fishman, J., Lunsford, A., McGregor, B., & Otuteye, M. (2005). Performing writing, performing literacy. College Composition and Communication, 57(2), 224–252.
Fitzgerald, J. (1987). Research on revision in writing. Review of Educational Research, 57(4), 481–506.
Flower, L., & Hayes, J. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365–387.
Ford, J., & Larkin, G. (1978). The portfolio system: An end to backsliding writing standards. College English, 39(8), 950–955.
Friend, R. (2001). Effects of strategy instruction on summary writing of college students. Contemporary Educational Psychology, 26(1), 3–24.
Fulkerson, R. (2005). Composition at the turn of the twenty-first century. College Composition and Communication, 56(4), 654–687.
Gebril, A. (2009). Score generalizability of academic writing tasks: Does one test method fit it all? Language Testing, 26(4), 507–531.
Gee, J. (2004). Situated language and learning: A critique of traditional schooling. New York, NY: Routledge.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies of qualitative research. New Brunswick, NJ: Aldine Transaction.
Gleason, B. (2000). Evaluating writing programs in real time: The politics of remediation. College Composition and Communication, 51(4), 560–588.
Goen-Salter, S. (2008). Critiquing the need to eliminate remediation: Lessons from San Francisco State. Journal of Basic Writing, 27(2), 81–105.
Goldstein, A., & Carr, P. (1996). Can students benefit from process writing? NAEPfacts (3rd ed., Vol. 1, pp. 1–7). Washington, DC: National Center for Education Statistics.
Goldstein, M., & Perin, D. (2008). Predicting performance in a community college content-area course from academic skill level. Community College Review, 36(2), 89–115.
Goodyear, R., Brewer, D., Gallagher, K., Tracey, T., Claiborn, C., Lichtenberg, J., & Wampold, B. (2009). The intellectual foundations of education: Core journals and their impacts on scholarship and practice. Educational Researcher, 38(9), 700–706.
Graham, S., & Harris, K. (1989). Components analysis of cognitive strategy instruction: Effects on learning disabled students' compositions and self-efficacy. Journal of Educational Psychology, 81(3), 353–361.
Graham, S., & Harris, K. (2009). Evidence-based writing practices: Drawing recommendations from multiple sources. BJEP Monograph Series II, Number 6: Teaching and Learning Writing, 1(1), 95–111.
Greene, S., & Ackerman, J. (1995). Expanding the constructivist metaphor: A rhetorical perspective on literacy research and practice. Review of Educational Research, 65(4), 383–420.
Gutiérrez, K., Hunter, J., & Arzubiaga, A. (2009). Re-mediating the university: Learning through sociocritical literacies. Pedagogies: An International Journal, 4(1), 1–23.
Hardison, C. M., & Sackett, P. R. (2008). Use of writing samples on standardized tests: Susceptibility to rule-based coaching and the resulting effects on score improvement. Applied Measurement in Education, 21(3), 227–252.
Harklau, L. (2001). From high school to college: Student perspectives on literacy practices. Journal of Literacy Research, 33(1), 33–70.
Harrington, S., Malencyzk, R., Peckham, I., Rhodes, K., & Yancey, K. (2001). WPA outcomes statement for first-year composition. College English, 63(3), 321–325.
Hassel, H., & Giordano, J. B. (2009). Transfer institutions, transfer of knowledge: The development of rhetorical adaptability and underprepared writers. Teaching English in the Two-Year College, 37(1), 24–40.
Haswell, R. (2000). Documenting improvement in college writing: A longitudinal approach. Written Communication, 17(3), 307–352.
Health Care and Education Reconciliation Act, H. Rept. 111-443 C.F.R. (2010).
Higgins, B., Miller, M., & Wegmann, S. (2006). Teaching to the test . . . not! Balancing best practice and testing requirements in writing. The Reading Teacher, 60(4), 310–319.
Hildick, W. (1965). In that solitary room. Kenyon Review, 27(2), 302–317.
Hillocks, G. (1984). What works in teaching composition: A meta-analysis of experimental treatment studies. American Journal of Education, 93(1), 133–170.
Hull, G. A., Rose, M., Fraser, K. L., & Castellano, M. (1991). Remediation as social construct: Perspectives from an analysis of classroom discourse. College Composition and Communication, 42(3), 299–329.
Huot, B. (1990). The literature of direct writing assessment: Major concerns and prevailing trends. Review of Educational Research, 60(2), 237–263.
Huot, B., & Schendel, E. (2010). Reflecting on assessment: Validity inquiry as ethical inquiry. Journal of Teaching Writing, 17(1&2), 37–55.
Ignash, J. (2002). Who should provide postsecondary remedial/developmental education? New Directions for Community Colleges, 100, 5–20.
Jeffery, J. (2009). Constructs of writing proficiency in U.S. state and national writing assessments: Exploring variability. Assessing Writing, 14(1), 3–24.
Jenkins, D., & Boswell, K. (2002). State policies on community college remedial education: Findings from a national survey. Denver, CO: Education Commission of the States.
Jenkins, D., Jaggars, S., & Roksa, J. (2009). Promoting gatekeeper course success among community college students needing remediation. New York, NY: Community College Research Center.
Jenkins, D., Speroni, C., Belfield, C., Jaggars, S. S., & Edgecombe, N. (2010). A model for accelerating academic success of community college remedial English students: Is the Accelerated Learning Program (ALP) effective and affordable? New York, NY: Community College Research Center.
Jones, E. (2008). Predicting performance in first-semester college basic writers: Revisiting the role of self-beliefs. Contemporary Educational Psychology, 33(2), 209–238.
Jones, S., & Lea, M. (2008). Digital literacies in the lives of undergraduate students: Exploring personal and curricular spheres of practice. Electronic Journal of E-Learning, 6(3), 207–216.
Juzwik, M. M., Curcic, S., Wolbers, K., Moxley, K. D., Dimling, L. M., & Shankland, R. K. (2006). Writing into the 21st century: An overview of research on writing, 1999 to 2004. Written Communication, 23(4), 451–476.
Kellogg, R., & Raulerson, B. (2007). Improving the writing skills of college students. Psychonomic Bulletin and Review, 14(2), 237–242.
Kenkel, J., & Yates, R. (2009). The interlanguage grammar of information management in L1 and L2 developing writing. Written Communication, 26(4), 392–416.
Kinsler, K. (1990). Structured peer collaboration: Teaching essay revision to college students needing writing remediation. Cognition and Instruction, 7(4), 303–321.
Kirst, M. (2007). Who needs it? Identifying the proportion of students who require postsecondary remedial education is virtually impossible. National CrossTalk. Retrieved from http://www.highereducation.org/crosstalk/ct0107/voices0107-kirst.shtml
Kirst, M. (2008, Fall). Secondary schools and colleges must work. Thought and Action, 111–122.
Knoch, U., & Elder, C. (2010). Validity and fairness implications of varying time conditions on a diagnostic test of academic English writing proficiency. System, 38(1), 63–74.
Knudson, R. E. (1998). College students' writing: An assessment of competence. Journal of Educational Research, 92(1), 13–19.
Koski, W. S., & Levin, H. M. (1998). Replacing remediation with acceleration in higher education: Preliminary report on literature review and initial interviews. Stanford, CA: National Center for Postsecondary Improvement.
Kress, G. (2003). Literacy and multimodality: A theoretical framework. In Literacy in the new media age (pp. 35–60). New York, NY: Routledge.
Kreth, M., Crawford, M., Taylor, M., & Brockman, E. (2010). Situated assessment: Limitations and promise. Assessing Writing, 15(1), 40–59.
Kuehner, A. V. (1999). The effects of computer-based vs. text-based instruction on remedial college readers. Journal of Adolescent & Adult Literacy, 43(2), 160–168.
Lauer, J. (1984). Composition studies: Dappled discipline. Rhetoric Review, 3(1), 20–29.
Lesley, M. (2001). Exploring the links between critical literacy and developmental reading. Journal of Adolescent & Adult Literacy, 45(3), 180–189.
Lesley, M. (2008). Access and resistance to dominant forms of discourse: Critical literacy and at risk high school students. Literacy Research and Instruction, 47(3), 174–194.
Levin, H. M., & Calcagno, J. C. (2008). Remediation in the community college: An evaluator's perspective. Community College Review, 35(3), 181–207.
Lewiecki-Wilson, C., & Sommers, J. (1999). Professing at the fault lines: Composition at open admissions institutions. College Composition and Communication, 50(3), 438–462.
Li, L. (2007). Exploring the use of focused freewriting in developing academic writing. Journal of University Teaching & Learning Practice, 4(1), 41–53.
Lunsford, A., & Lunsford, K. (2008). Mistakes are a fact of life: A national comparative study. College Composition and Communication, 59(4), 781–806.
Maloney, W. (2003). Connecting the texts of their lives to academic literacy: Creating success for at-risk first-year college students. Journal of Adolescent & Adult Literacy, 46(8), 664–674.
Mansfield, W., Farris, E., & Black, M. (1991). College-level remedial education in the fall of 1989. Washington, DC: National Center for Education Statistics, U.S. Department of Education, Office of Educational Research and Improvement.
Marsh, H. W., Trautwein, U., Lüdtke, O., Köller, O., & Baumert, J. (2005). Academic self-concept, interest, grades, and standardized test scores: Reciprocal effects models of causal ordering. Child Development, 76(2), 397–416.
Martorell, P., & McFarlin, I. (2011). Help or hindrance? The effects of college remediation on academic and labor market outcomes. Review of Economics and Statistics, 93(2), 436–454.
Mathis, W. J. (2010). The common core standards initiative: An effective reform tool (Vol. 29, pp. 1–25). Boulder, CO: Education and the Public Interest Center (EPIC).
Mazzeo, C. (2002). Stakes for students: Agenda-setting and remedial education. Review of Higher Education, 26(1), 19–39.
McCusker, M. (1999). ERIC review: Effective elements of developmental reading and writing programs. Community College Review, 27(2), 93–105.
McNamara, D. S., Crossley, S. A., & McCarthy, P. M. (2010). Linguistic features of writing quality. Written Communication, 27(1), 57–86. doi:10.1177/0741088309351547
Melzer, D. (2009). Writing assignments across the curriculum: A national study of college writing. College Composition and Communication, 61(2), 242–261.
Merisotis, J. P., & Phipps, R. A. (2000). Remedial education in colleges and universities: What's really going on? Review of Higher Education, 24(1), 67–85.
Michael, W., & Shaffer, P. (1979). A comparison of the validity of the Test of Standard Written English (TSWE) and of the California State University and Colleges English Placement Test (CSUC-EPT) in the prediction of grades in a basic English composition course and of overall freshman-year grade point average. Educational and Psychological Measurement, 39(1), 131–145.
Moss, B. G., & Bordelon, S. (2007). Preparing students for college-level reading and writing: Implementing a rhetoric and writing class in the senior year. Literacy Research and Instruction, 46(3), 197–221.
Moss, B. G., & Yeaton, W. H. (2006). Shaping policies related to developmental education: An evaluation using the regression-discontinuity design. Educational Evaluation and Policy Analysis, 28(3), 215–229.
Nystrand, M. (2006). Research on the role of classroom discourse as it affects reading comprehension. Research in the Teaching of English, 40(4), 392–412.
Nystrand, M., Cohen, A., & Dowling, N. (1993). Addressing reliability problems in the portfolio assessment of college writing. Educational Assessment, 1(1), 53–70.
Nystrand, M., Greene, S., & Wiemelt, J. (1993). Where did composition studies come from? An intellectual history. Written Communication, 10(3), 267–333.
Pagano, N., Bernhardt, S., Reynolds, D., Williams, M., & McCurrie, M. (2008). An inter-institutional model for college writing assessment. College Composition and Communication, 60(2), 285–320.
Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66(4), 543–578.
Pajares, F. (2003). Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading & Writing Quarterly, 19(2), 139–158.
Pajares, F., & Valiante, G. (2008). Self-efficacy beliefs and motivation in writing development. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 158–170). New York, NY: Guilford Press.
Plakans, L. (2009). Discourse synthesis in integrated second language writing assessment. Language Testing, 26(4), 561–587.
Porter, A., McMaken, J., Hwang, J., & Yang, R. (2011). Common Core standards: The new U.S. intended curriculum. Educational Researcher, 40(3), 103–116.
Pritchard, R. J., & Marshall, J. C. (1994). Evaluation of a tiered model for staff development in writing. Research in the Teaching of English, 28(3), 259–285.
Raimes, A. (2006). Language proficiency, writing ability, and composing strategies: A study of ESL college student writers. Language Learning, 37(3), 439–468.
Rigolino, R., & Freel, P. (2007). Re-modeling basic writing. Journal of Basic Writing, 26(2), 51–74.
Risemberg, R. (1996). Reading to write: Self-regulated learning strategies when writing essays from sources. Reading Research and Instruction, 35(4), 365–383.
Rochford, R. A. (2003). Assessing learning styles to improve the quality of performance of community college students in developmental writing programs: A pilot study. Community College Journal of Research & Practice, 27(8), 665–677.
Rose, M. (1985). The language of exclusion: Writing instruction at the university. College English, 47(4), 341–359.
Schwartz, W., & Jenkins, D. (2007). Promising practices for community college developmental education: A discussion resource for the Connecticut community college system (Vol. 2, pp. 1–29). New York, NY: Community College Research Center, Teachers College, Columbia University.
Shaughnessy, M. (1977). Errors and expectations: A guide for the teacher of basic writing. New York, NY: Oxford University Press.
Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design studies. Educational Researcher, 32(1), 25–28.
Silva, T., & Leki, I. (2004). Family matters: The influence of applied linguistics and composition studies on second language writing studies: Past, present, and future. Modern Language Journal, 88(1), 1–13.
Slavin, R. (2008). Perspectives on evidence-based research in education: What works? Issues in synthesizing educational program evaluations. Educational Researcher, 37(1), 5–14.
Snyder, T. (2010). Mini-digest of education statistics, 2009 (NCES 2010-014). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Snyder, T., & Hoffman, C. (2001). Digest of education statistics, 2000 (NCES 2001-034). Washington, DC: U.S. Department of Education, National Center for Education Statistics.
Soliday, M. (2002). The politics of remediation: Institutional and student needs in higher education. Pittsburgh, PA: University of Pittsburgh Press.
Southard, A. H., & Clay, J. K. (2004). Measuring the effectiveness of developmental writing courses. Community College Review, 32(2), 39–50.
Sperling, M., & Freedman, S. (2001). Research on writing. Handbook of Research on Teaching, 4, 370–389.
Street, B. (1998). New literacies in theory and practice: What are the implications for language in education? Linguistics and Education, 10(1), 1–24.
Tierney, W. G., & Garcia, L. D. (2011). Remediation in higher education: The role of information. American Behavioral Scientist, 55(2), 102–120.
Turner, K., & Katic, E. (2009). The influence of technological literacy on students' writing. Journal of Educational Computing Research, 41(3), 253–270.
Vygotsky, L. S. (1978). Mind and society: The development of higher mental processes. Cambridge, MA: Harvard University Press.
Weiner, E. J. (2002). Beyond remediation: Ideological literacies of learning in developmental classrooms. Journal of Adolescent & Adult Literacy, 46(2), 150–169.
Wheeler, S., & Wheeler, D. (2009). Using wikis to promote quality learning in teacher training. Learning, Media and Technology, 34(1), 1–10.
White, E. (2001). The opening of the modern era of writing assessment: A narrative. College English, 63(3), 306–320.
White, M., & Bruning, R. (2005). Implicit writing beliefs and their relation to writing quality. Contemporary Educational Psychology, 30(2), 166–189.
Wong, B., Kuperis, S., Jamieson, D., Keller, L., & Cull-Hewitt, R. (2002). Effects of guided journal writing on students' story understanding. Journal of Educational Research, 95(3), 179–191.
Yancey, K. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50(3), 483–503.
Yancey, K. (2004). Made not only in words: Composition in a new key. College Composition and Communication, 56(2), 297–328.