Mathematics Course Placement Using Holistic Measures: Possibilities for Community College Students


by Federick Ngo, W. Edward Chi & Elizabeth So Yun Park - 2018

Background/Context: Most community colleges across the country use a placement test to determine students' readiness for college-level coursework, yet these tests are admittedly imperfect instruments. Researchers have documented significant problems stemming from overreliance on placement testing, including placement error and misdiagnosis of remediation needs. They have also described significant consequences of misplacement, which can hinder the educational progression and attainment of community college students.

Purpose/Objective/Research Question/Focus of Study: We explore possibilities for placing community college students in mathematics courses using a holistic approach that considers measures beyond placement test scores. This includes academic background measures, such as high school GPA and math courses taken, and indicators of noncognitive constructs, such as motivation, time use, and social support.

Setting: The study draws upon administrative data from a large urban community college district in California that serves over 100,000 students each semester. The data enable us to link students' placement testing results, survey data, background information, and transcript records.

Research Design: We first use the supplemental survey data gathered during routine placement testing to conduct predictive exercises that identify severe placement errors under existing placement practices. We then move beyond prediction and evaluate student outcomes in two colleges where noncognitive indicators were directly factored into placement algorithms.

Findings/Results: Using high school background information and noncognitive indicators to predict success reveals as many as one quarter of students may be misassigned to their math courses by status quo practices. In our subsequent analysis we find that students placed under a holistic approach that considered noncognitive indicators in addition to placement test scores performed no differently from higher scoring peers in the same course.

Conclusions/Recommendations: The findings suggest a holistic approach to mathematics course placement may improve placement accuracy and provide access to higher level mathematics courses for community college students without compromising their likelihood of success.



What does it mean to be ready for college? And how do colleges know? In the community college setting, the answers to these questions are usually informed by a placement test that students take when they begin or restart their educational careers. Over 90% of all community colleges in the country use a placement test to determine students' readiness for college-level coursework (Fields & Parsad, 2012). At the same time, nearly 60% of all incoming community college students in the nation enroll in a remedial course, most likely because they are deemed underprepared for college-level coursework (Bailey, Jeong, & Cho, 2010; NCPPHE & SREB, 2010).


Yet placement tests are admittedly imperfect instruments. Recent research has estimated that nearly 25% of students may be misplaced into their math courses by commonly used placement tests (Mattern & Packman, 2009; Scott-Clayton, Crosta, & Belfield, 2014), with potentially serious consequences for educational attainment (Melguizo, Bos, Ngo, Mills, & Prather, 2016). Researchers examining these issues have found that using additional information, such as that available in high school transcripts and math diagnostics, could improve placement accuracy and reduce the rate of placement errors (Ngo & Kwon, 2015; Ngo & Melguizo, 2016; Scott-Clayton et al., 2014). In tandem with these findings, a number of states (e.g., Florida, North Carolina, Texas) have enacted legislation promoting the incorporation of multiple indicators of students' academic readiness into community college placement policies (Burdman, 2012). California, the setting for this study, has mandated the use of these "multiple measures" since the early 1990s, but there is wide variation in, and few evaluations of, these practices (Perry, Bahr, Rosin, & Woodward, 2010).


What remains unknown is whether using a holistic placement approach that includes noncognitive1 measures can improve placement accuracy in community colleges. Noncognitive measures are those not specifically related to academic content knowledge or skills, such as, but not limited to, students’ college plans (e.g., use of time) and their beliefs about the importance of math or college (Sedlacek, 2004). While indicators of some of these constructs are implicitly and sometimes explicitly used in the selection and sorting processes in four-year institutions (e.g., via college admissions essays and letters of recommendation), they largely remain untested and unused in the community college setting. Further, noncognitive attributes have been shown to be predictive of students’ postsecondary success (Sedlacek, 2004) but have rarely been examined in the context of community colleges which serve large numbers of students of color, low-income students, and first-generation college students (Horn, Nevill, & Griffith, 2006). We therefore ask two research questions: (1) What possibilities are there for using a more holistic placement approach that includes noncognitive measures to better identify college readiness among community college students? (2) Do noncognitive measures improve placement accuracy in developmental math?


We focus on the use of indicators of noncognitive constructs for course placement in a large urban community college district (LUCCD) in California, and we conduct two sets of analyses to answer these questions. The first is a predictive exercise that examines possibilities for using indicators of noncognitive constructs in placement decisions. To do so, we use methods outlined by Scott-Clayton et al. (2014) to estimate severe placement errors, but capitalize on the availability of noncognitive questionnaire data that LUCCD colleges collected at the time of placement testing. We calculate placement error rates for all colleges under existing placement policies (e.g., placement test scores and other academic background measures) and compare these to estimates of placement error when indicators of noncognitive measures are included in the prediction equation. We emphasize here that the noncognitive questionnaire items are proxy indicators of students' noncognitive attributes and not necessarily measures validated in prior literature. In the second set of analyses, we examine actual placement algorithms in two colleges that factor these noncognitive indicators, such as motivation and college plans, into placement decisions. We examine the outcomes of students who were able to take a higher level course due to additional points they earned based on noncognitive indicators, comparing them to peers placed in the same level who scored higher on placement tests.


Our study adds insight to the broader question of whether a more holistic approach to mathematics course placement that includes indicators of noncognitive attributes can be useful within the open-access setting of community colleges. In contrast to four-year colleges with selective admissions, selection and sorting in the community college setting primarily happens during remedial screening, where students who are deemed not college-ready may be referred to remedial coursework (Hughes & Scott-Clayton, 2011). Since this typically hinges on the result of a placement test, incorporating noncognitive information into the screening process may provide opportunity and access for students who do not appear to be academically prepared based on their placement test results alone. Indeed, our analyses first show that a substantial portion of students, as many as one quarter, may be considered misplaced under current test-based placement practices, and that high school background and noncognitive indicators may offer an improvement over status quo practices. When we test this hypothesis in two colleges that actually factor noncognitive information into placement rules, we find that this holistic approach can increase access to higher level math courses without compromising the likelihood of success in those courses.


The paper proceeds as follows. We first discuss the role of placement testing and selection processes in community colleges. We highlight research on noncognitive measures, and then draw upon expanding conceptions of college readiness and validation theory to frame the study. We then describe the LUCCD and our two analytical approaches—one that investigates possibilities for using noncognitive indicators and one that evaluates existing placement practices already using such indicators. We present the findings and discuss how our work can add insight to current reforms in assessment and placement in community colleges.


BACKGROUND


PLACEMENT TESTS AND ACADEMIC MEASURES OF COLLEGE READINESS


Given that community colleges are open-access institutions serving a diverse body of students, they need some means of identifying academic preparedness and directing students towards appropriate course work. Placement tests are commonly used in community colleges for this purpose, and two tests, the College Board's ACCUPLACER and ACT's COMPASS, have dominated the market (Conley, 2010).2 These tests are multiple choice, adaptive, and computer administered, and they are used to assess subjects like math, English, and reading. Regarded as a cost-efficient way to assess students' academic abilities, placement tests provide the primary measure that determines where students start in their educational trajectory (Hughes & Scott-Clayton, 2011).


Despite the near ubiquity of placement testing, studies investigating the predictive validity of placement tests have found that correlations between test scores and student achievement are weaker than those between student background variables and achievement (Armstrong, 2000). In fact, Jenkins, Jaggars, and Roksa (2009), examining data from Virginia community colleges, found no significant relationship between reading and writing placement tests and whether students passed gatekeeper English courses, though they did find a relationship between math placement tests and whether students passed gatekeeper math courses.


These findings, along with concerns about the accuracy of placement tests, have fostered growing interest in using multiple measures and a more holistic approach to improve placement decisions (Burdman, 2012; Smith, 2016). Measures such as the level of prior math courses taken and high school GPA are known to be strong predictors of college course completion and success and can be used to identify readiness for college-level work (e.g., Armstrong, 2000; Noble & Sawyer, 2004). Adding to this evidence, Ngo and Kwon (2015) found that community college students who were placed using academic background measures (e.g., prior math and GPA) in addition to test scores performed no differently from their peers who earned higher test scores. This study and others (e.g., Fong & Melguizo, 2016; Marwick, 2004) suggest that using multiple measures may increase access to higher level math without compromising students’ likelihood of success in those courses.


RESEARCH ON NONCOGNITIVE MEASURES


One understudied question is whether noncognitive measures can also improve placement decisions. The logic for incorporating these measures stems from research in educational psychology, which demonstrates that an array of noncognitive attributes beyond cognitive skills are predictive of college success and future outcomes (Duckworth, Peterson, Matthews, & Kelly, 2007; Duckworth & Yeager, 2015; Noonan, Sedlacek, & Veerasamy, 2005; Porcea, Allen, Robbins, & Phelps, 2010; Robbins, Allen, Casillas, Peterson, & Le, 2006). In lieu of providing a comprehensive review of all noncognitive factors and measures associated with college student success, we focus only on those we believe to be related to the measures used by LUCCD colleges to assess and place students in developmental math sequences. These include use of time, motivation, and social support. To our knowledge, the measures used by LUCCD are not directly tied to particular constructs or scales in the literature. We therefore discuss the general literature on each of these noncognitive areas.


One such area is students' use of time. Researchers examining college students' time use have found that in certain populations, working while enrolled in college predicts weaker academic outcomes (Ehrenberg & Sherman, 1987; Pascarella, Edison, Nora, Hagedorn, & Terenzini, 1998; Stinebrickner & Stinebrickner, 2003) and that time spent studying predicts improved academic outcomes (Michaels & Miethe, 1989; Rau & Durand, 2000; Stinebrickner & Stinebrickner, 2004, 2008). These studies suggest that employment while attending school is associated with decreased likelihood of persistence and lower academic outcomes. Therefore, measuring students' intended use of time may be an important consideration in the remedial screening process, as community college students in particular are more likely to work while attending college (Horn et al., 2006).


Motivation is another well-studied noncognitive construct that is predictive of college success (Pintrich & Shunk, 2002; Robbins et al., 2006). Theories of motivation can explain an individual’s choices, effort, and persistence in various tasks (Covington, 2000; Pintrich, 2003). For example, the concept of expectancy value within motivation research suggests that individuals make certain decisions or enact certain behaviors because they are motivated by the expected results of those behaviors (Eccles & Wigfield, 2002). Relatedly, motivation may stem from utility or task value, which refers to how and whether individuals perceive tasks as having positive value or utility because they facilitate important future goals (Husman & Lens, 1999). These values can therefore directly influence performance, persistence, and task choice. A student’s motivation, as understood through these values, may encourage persistence in the face of challenging or boring academic learning contexts and therefore be predictive of success in those contexts (Miller & Brickman, 2004).


One’s sense of social support may also influence college outcomes (Noonan et al., 2005). This may be related to the concept of mattering, defined as the feeling one is personally important to someone else (Cooper, 1997; Gossett, Cuyjex, & Cockriel, 1996; Marshall, 2001; Rosenberg & McCullough, 1981; Schlossberg, 1989; Tovar, Simon, & Lee, 2009). In studies of college students, a stronger sense of mattering is linked to pro-academic behaviors and affects (Dixon & Kurpius, 2008; Dixon Rayle & Chung, 2007; France & Finney, 2010). In another study, Dennis, Phinney, and Chuateco (2005) assessed the extent to which motivation to attend college and the availability of social support from family and peers influenced academic success in ethnic minority college students. They included survey items such as how supportive family and peers were of students’ college aspirations and students’ beliefs about attending college. The researchers found that personal interest, desire to attain a rewarding career, and intellectual curiosity were all related to successful adjustment in college. Finally, Sedlacek (2004) demonstrated that noncognitive measures of adjustment, motivation, and leadership are predictors of postsecondary success, particularly for underrepresented minority students.


USE OF NONCOGNITIVE MEASURES


While studies find positive associations between noncognitive measures and college outcomes, whether it is beneficial to use noncognitive measures or indicators of them to inform selection processes remains an outstanding question. Some evidence from four-year institutions has suggested that incorporating noncognitive measures into college admissions can be favorable (Noonan et al., 2005; Sedlacek, 2004; Sternberg, Gabora, & Bonney, 2012). However, despite calls in the literature for the use of holistic assessments of student readiness for college coursework (Hughes & Scott-Clayton, 2011), noncognitive measures are rarely used to make course placement decisions in community colleges (Gerlaugh, Thompson, Boylan, & Davis, 2007). We found just one completed study examining placement using noncognitive measures in the community college setting.3 The Educational Testing Service (ETS) conducted a study to investigate the usefulness of SuccessNavigator, a commercial product that incorporates psychosocial/noncognitive measures, including personality, motivation, study skills, intrapersonal and interpersonal skills, and other factors beyond cognitive ability (Markle, Olivera-Aguilar, Jackson, Noeth, & Robbins, 2013). Examining a set of community colleges, Rikoon, Liebtag, Olivera-Aguilar, Robbins, and Jackson (2014) compared mathematics course passing rates between students placed in math courses using standard institutional practice (i.e., the COMPASS placement test) and those placed using the ETS SuccessNavigator mathematics course placement index in conjunction with test scores. They found no statistically significant differences in course passing rates between the two groups. Since students placed using the ETS instrument and test scores were just as successful as their peers in a higher level course, this suggests that the noncognitive measures may be useful for course placement. The goal of the present study is to complement this work by examining noncognitive measure use in another community college setting. We also analyze survey data rather than data gathered from a proprietary instrument, which may be a more viable option for resource-strapped colleges. We frame the study using expanding conceptions of college readiness and modern approaches to validation, which we discuss next.


CONCEPTUAL FRAMEWORK


COLLEGE READINESS


The policy interest in using both high school background measures and noncognitive measures is in accordance with expanding notions of college readiness. Historically, college readiness has been measured by students' academic ability and cognitive skills, but the concept has expanded to include noncognitive attributes and college knowledge that are thought to be essential for success in college (Almeida, 2015; Duncheon, 2015; Roderick, Nagaoka, & Coca, 2009; Sedlacek, 2004). The underlying logic is that what determines whether students will be successful in college is broader than cognitive skill or academic background. The research described in the previous section demonstrates that noncognitive attributes play an important role in explaining persistence and success in college. In line with expanding notions of college readiness, we test whether these broader concepts can be imported into assessment and placement processes in community colleges. Could selection into developmental courses be improved by expanding concepts of college readiness beyond academic and cognitive attributes of students?


VALIDATION


Modern validation theory makes it possible to identify and evaluate the usefulness of noncognitive measures of college readiness. In modern validation theory, a validation argument considers the interpretation, purposes, and uses of a measure in addition to its predictive properties; it emphasizes the examination of outcomes that result from the uses of the measure (Kane, 2006). While past conceptions of validation relied mainly on determining the correlation between a measure such as tests scores and college outcomes, this theory suggests that the intended use and purpose should guide how the analysis is conducted to determine predictive properties (Kane, 2001).


Therefore, the validity of a measure such as a placement measure is based on the actual decisions or proposed decisions made using the measure, and not simply the correlation between a measure such as test scores and subsequent outcomes. In the current context, the measures used to make course placement decisions in developmental math would therefore be evaluated in terms of the relevant student outcomes—placement and success in the highest level course possible (Kane, 2006), and the frequency with which these accurate placements occur (Sawyer, 1996, 2007; Scott-Clayton, 2012). If a measure places students into higher level courses and they are successful in those courses, then using those measures improved placement accuracy. If the measures led to placements resulting in worse outcomes, then using those measures did not improve accuracy.


In order to validate noncognitive measures in the context of community colleges, there must be a context where measures of noncognitive attributes, or in this case indicators of them, are actually used to make placement decisions. Our study takes advantage of the fact that some colleges factor what we argue are indicators of noncognitive attributes into their placement process. This enables us to assess the validity of these placement measures (Kane, 2006; Sawyer, 1996, 2007). In a related analysis, we also predict the potential usefulness of these indicators for identifying and avoiding placement error in colleges that mainly rely on placement tests.


SETTING & DATA


The setting for the study is a Large Urban Community College District (LUCCD) in California that enrolls over 100,000 students each semester.4  Being open-access institutions, the nine colleges in the district serve a widely diverse body of students, with more than a quarter of students over 35 years of age, and over 40% indicating that their native language is not English. Close to 90% of students report having completed a high school level education.5 This student population is different from the national community college student population since about two-thirds of all students in these California colleges identify as African-American or Latina/o. In contrast, the majority of students who enter a community college in the U.S. are White, and just over one-third identify as African-American or Latina/o (NCES, 2014).  


Each college has considerable autonomy over choice and use of placement tests. The colleges are also required by California law to utilize some combination of “multiple measures” to inform placement decisions (Perry et al., 2010). In the LUCCD, colleges chose to consider items from self-reported background questionnaires as multiple measures. Table 1 shows the placement tests and additional measures used to make course assignments in developmental math in each college. Table 2 shows the types of self-reported background information from the Educational Background Questionnaire (EBQ) also collected at the time of assessment.


Table 1. Multiple Measures Used for Math Placement

| College | Point Range | HS Diploma/GED | HS GPA | Prior Math | College Plans | Motivation |
|---------|-------------|----------------|--------|------------|---------------|------------|
| A       | 0 to 4      |                |        | +          |               |            |
| B       | 0 to 3      |                | +      | +          | +             |            |
| C       | N/A         |                |        |            |               |            |
| D       | 0 to 2      |                |        | +          |               |            |
| E       | 0 to 3      |                | +      |            |               |            |
| F       | -2 to 2     |                |        |            | +/-           | +/-       |
| G       | 0 to 3      |                | +      | +          | +             |            |
| H       | 0 to 4      |                | +      |            |               |            |
| J       | -2 to 5     | +              |        | +/-        |               | +/-       |

Note: (+) indicates measures for which points are added, and (-) indicates measures for which points are subtracted. Academic Background includes whether the student received a diploma or GED, high school GPA, and prior math course-taking (including achievement and highest level completed). College plans include hours planned to attend class, hours of planned employment, and time out of formal education. Motivation includes importance of college and importance of mathematics. Multiple measure information was not available for one of the nine LUCCD colleges. The study time period is 2005 to 2012, but information shown here for College G is 2011–2012, and for College H is 2005–2009.


Table 2. Types of Information Collected via Education Background Questionnaires

| College | HS Diploma/GED | HS GPA | Prior Math | College Plans | Motivation | Social Support |
|---------|----------------|--------|------------|---------------|------------|----------------|
| A       | x              |        |            |               |            |                |
| B       | x              | x      | x          | x             | x          | x              |
| C       | x              | x      | x          | x             | x          | x              |
| D       | x              | x      | x          | x             |            |                |
| E       | x              | x      |            |               |            |                |
| F       |                |        | x          | x             | x          |                |
| G       |                | x      | x          | x             | x          | x              |
| H       |                | x      | x          | x             | x          | x              |
| J       | x              |        | x          |               | x          |                |

Note: Academic Background includes whether the student received a diploma or GED, high school GPA, and prior math course-taking (including achievement and highest-level completed). College plans include hours planned to attend class, hours of planned employment, and time out of formal education. Motivation includes importance of college and importance of mathematics. Students were also asked about social support—how important is it for the people closest to you that you go to college? The study time period is 2005 to 2012, but information shown here for College G is 2011–2012, and for College H is 2005–2009.

 



The data for the study consist of all student-level assessment and enrollment records for students assessed between 2005 and 2012, tracked through fall 2013. We focus on the sample of students who took math placement assessments, had not already earned a college degree, were not concurrently enrolled in high school, and were under the age of 65. Since we are interested in noncognitive measures, we focus on six colleges that collect indicators of this information: B, C, D, F, G, and J.6 The data available enable us to examine important short- and long-term student outcomes such as passing the math course in which they are placed and earning 30 degree-applicable units. Table 3 presents a demographic profile of each college included in our analyses. The table shows that each college is unique in its student composition. However, the overall pattern is that Latinas/os and African Americans are the two largest racial groups served by all the colleges in the study. In addition, the table delineates how students were placed in each level of the developmental math sequence by college. While the placement distribution varies across colleges, the majority of students were placed in the three lowest levels—elementary algebra, pre-algebra, and arithmetic—and few students placed into intermediate algebra or above.



Table 3. College Assessment, Placement, and Demographic Profiles, Six LUCCD Colleges 2005–2012

|                          | College B    | College C     | College D     | College F    | College G    | College J    |
|--------------------------|--------------|---------------|---------------|--------------|--------------|--------------|
| Placement Levels (N, %)  |              |               |               |              |              |              |
| > Intermediate Algebra   | 153 (2.0%)   | 99 (0.5%)     | 2481 (9.7%)   | 518 (4.2%)   | 710 (9.4%)   | 28 (0.4%)    |
| Intermediate Algebra     | 715 (9.2%)   | 742 (3.8%)    | 6355 (24.9%)  | 1099 (8.9%)  | 1411 (18.7%) | 630 (8.1%)   |
| Elementary Algebra       | 1049 (13.4%) | 3160 (16.0%)  | 6409 (25.1%)  | 3825 (30.8%) | 1049 (13.9%) | 1927 (24.8%) |
| Pre-Algebra              | 4965 (63.6%) | 2623 (13.3%)  | 1604 (6.3%)   | 4310 (34.7%) | 2603 (34.6%) | 1766 (22.7%) |
| Arithmetic               | 930 (11.9%)  | 9194 (46.5%)  | 8673 (34.0%)  | 2652 (21.4%) | 1754 (23.3%) | 2757 (35.4%) |
| < Arithmetic             |              | 3934 (19.9%)  |               |              |              | 674 (8.7%)   |
| Student Demographics (N, %) |           |               |               |              |              |              |
| Female                   | 4447 (56.9%) | 10031 (50.8%) | 13472 (52.8%) | 6837 (55.1%) | 3891 (51.7%) | 5288 (68.0%) |
| African-American         | 367 (4.7%)   | 7745 (39.2%)  | 1965 (7.7%)   | 5516 (44.5%) | 125 (1.7%)   | 6102 (78.4%) |
| Latina/o                 | 5977 (76.5%) | 9367 (47.4%)  | 11661 (45.7%) | 3827 (30.9%) | 5508 (73.2%) | 1206 (15.5%) |
| Asian/Pacific Islander   | 376 (4.8%)   | 905 (4.6%)    | 2512 (9.8%)   | 765 (6.2%)   | 1244 (16.5%) | 95 (1.2%)    |
| White (Non-Hispanic)     | 607 (7.8%)   | 584 (3.0%)    | 6908 (27.1%)  | 1156 (9.3%)  | 122 (1.6%)   | 50 (0.6%)    |
| Other                    | 485 (6.2%)   | 1151 (5.8%)   | 2476 (9.7%)   | 1140 (9.2%)  | 528 (7.0%)   | 329 (4.2%)   |
| Total Assessed in Math   | 7812         | 19752         | 25522         | 12404        | 7527         | 7782         |
| Placement Test           | ACCUPLACER   | ACCUPLACER    | ACCUPLACER    | COMPASS      | ACCUPLACER   | COMPASS      |
| Years in Sample          | 2005–2009    | 2005–2012     | 2005–2012     | 2005–2012    | 2011–2012    | 2005–2009    |

Notes: For each math level, we calculated the average value for the 2005–2012 academic years. For College B the average values are calculated from 2005–2009. College C had 454 students who placed below Arithmetic and the data runs from 2008–2012. College J had 138 students who placed below Arithmetic. Colleges B, G, and J have different time periods due to placement policy changes.
Source: 2005–2012 LUCCD transcript data


METHODS


WHICH STUDENTS MAY HAVE BEEN PLACED IN ERROR?


We examined the possibility of using noncognitive measures for placement in developmental math by capitalizing on indicators of this information collected via the EBQ in each college. Our first methodological approach enabled us to assess whether utilizing indicators of noncognitive attributes can help to identify and thus avoid placement errors. Based on our analysis of each college’s EBQs, three areas of noncognitive attributes were common across sets of colleges: motivation, college plans (e.g., use of time), and social supports (see Table 2).


The analytical approach we used to understand how noncognitive measures can identify placement errors follows the sequence described in Scott-Clayton et al. (2014). The procedure estimates the overall proportions of students placed successfully and students placed in error for each level of math in a developmental sequence. Students placed successfully are those who were either placed into a math class they were predicted to pass or placed into one level below a math class they were predicted to fail. Students placed in error are those who were either overplaced (placed into a math class they were predicted to fail) or underplaced (placed one level below a math class they were predicted to pass).
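To make this classification concrete, the decision rule can be written in a few lines of code. The sketch below is illustrative only: it assumes the model's prediction is available as a probability and uses a hypothetical 0.5 threshold to stand in for "predicted to pass" versus "predicted to fail."

```python
# Minimal sketch of the placement classification described above.
# Assumption: a predicted probability of passing the upper-level course is
# available, and a hypothetical 0.5 threshold separates "pass" from "fail."

def classify_placement(p_pass_upper: float, placed_in_upper: bool,
                       threshold: float = 0.5) -> str:
    """Label one student's placement as successful, overplaced, or underplaced."""
    predicted_to_pass = p_pass_upper >= threshold
    if placed_in_upper:
        # Placed into the upper course: an error only if predicted to fail it.
        return "successful" if predicted_to_pass else "overplaced"
    # Placed one level below: an error only if predicted to pass the upper course.
    return "underplaced" if predicted_to_pass else "successful"
```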


We identified placement error for every level of the developmental math sequence, from arithmetic to college-level math. We also performed the procedure using different combinations of cognitive and noncognitive indicators (described further below). In the case of college-level math (CM), the respective logit models are:


Pr(Fail_CM,i = 1) = Λ(α_0 + α_1·T_i + α_2·A_i + α_3·N_i + γ·X_i)     (1)

Pr(PassB_CM,i = 1) = Λ(δ_0 + δ_1·T_i + δ_2·A_i + δ_3·N_i + θ·X_i)     (2)

where Λ(·) is the logistic function, Fail_CM,i indicates that student i failed college-level math, and PassB_CM,i indicates that student i passed it with a B or better.


Here T_i is the test score, A_i and N_i are proposed additional measures used for placement, and X_i is a vector of student-level demographic characteristics, including age, race, gender, language, and residence status, added as controls for factors that may be associated with college success. The obtained coefficients are extrapolated to students placed in the course below (e.g., intermediate algebra is one level below college math) to predict each student's probabilities of success and failure in intermediate algebra. We used the probabilities to identify students placed successfully and students placed in error at each level. Specifically, we identified severe placement errors, defined by two criteria: (1) students predicted to fail the upper level course they were placed into, or (2) students predicted to pass the upper level course with a B or better, but were placed into a course one level below.7 We estimated the proportion of severe placement errors at each level of math in the developmental sequence for each college.
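The two-step procedure (fit on students observed in the upper-level course, then extrapolate to students one level below) can be sketched as follows. This is a sketch under stated assumptions, not the paper's exact specification: it uses the statsmodels formula interface, hypothetical column names (failed, passed_b_or_better, test_score, hs_gpa, motivation, etc.), and an illustrative 0.5 probability cutoff in place of whatever thresholds the authors applied.

```python
# Sketch of the severe-placement-error calculation, assuming pandas DataFrames
# `upper` (students placed in the upper-level course, with observed outcomes)
# and `lower` (students placed one level below). Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def severe_error_rate(upper: pd.DataFrame, lower: pd.DataFrame,
                      predictors: str, cut: float = 0.5) -> float:
    """Estimate the share of students flagged as severe placement errors."""
    # Logit for failing the upper-level course (used to flag overplacement).
    fail = smf.logit(f"failed ~ {predictors}", data=upper).fit(disp=False)
    # Logit for passing the upper-level course with a B or better (underplacement).
    pass_b = smf.logit(f"passed_b_or_better ~ {predictors}", data=upper).fit(disp=False)

    overplaced = fail.predict(upper) >= cut      # placed up but predicted to fail
    underplaced = pass_b.predict(lower) >= cut   # placed below but predicted to earn a B or better
    return (overplaced.sum() + underplaced.sum()) / (len(upper) + len(lower))
```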


Since we are interested in comparing various placement scenarios, we calculated the percent of severe placement errors using different combinations of measures:


1) with placement test scores alone, T_i,8

2) with additional academic background measures (e.g., HS GPA or prior math experience), T_i and A_i, and

3) with noncognitive indicators obtained from colleges' EBQs (e.g., motivation or college plans), T_i and N_i.


This analysis enabled us to determine whether high school background and noncognitive measures—a more holistic profile—could improve upon placement results based on cognitive measures (placement tests) alone.9 The calculated rate of severe placement errors for each set of alternative criteria is the proportion of students that can be considered as having been placed in error by status quo practices, and therefore, the set of students for whom placement errors could be avoided. The rate of severe placement errors is not a measure of the placement accuracy of each set of measures. Instead, it estimates the amount of error in existing placement policy.  
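As a rough illustration, the comparison across placement scenarios amounts to re-running the same procedure with different predictor sets, as in the sketch below, which reuses severe_error_rate() from the sketch above (assuming it is defined); the variable names, control set, and sample objects are hypothetical.

```python
# Hypothetical scenario comparison at one placement cutoff (e.g., IA vs. EA).
# ia_students, ea_students: DataFrames of students placed in intermediate and
# elementary algebra, as in the sketch above.
controls = " + C(race) + C(gender) + age + esl + C(residency)"
scenarios = {
    "1) Test only":  "test_score",
    "2) Test + HSB": "test_score + hs_gpa + prior_math",
    "3) Test + NC":  "test_score + motivation + planned_work_hours + social_support",
}

for label, predictors in scenarios.items():
    rate = severe_error_rate(ia_students, ea_students, predictors + controls)
    print(f"{label}: {rate:.1%} of students flagged as severe placement errors")
```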


DO NONCOGNITIVE MEASURES ACTUALLY IMPROVE PLACEMENT ACCURACY?


We then took advantage of the placement decision rules in two colleges (Colleges F and J) where proxy indicators of noncognitive constructs were actually factored into placement algorithms. College F awarded up to two additional points to students based on their college plans (units enrolled and expected employment), how important a college education was to them, and how long they have been out of school.10 College F used the COMPASS, with 100 test points possible. College J awarded one additional point to students who indicated that math was important, and four other possible points for cognitive measures (high school math background and receipt of a diploma or GED). College J used the ACCUPLACER, with 120 points possible. Since we could identify student responses on each college’s EBQ along with raw placement test scores, we could therefore examine the success outcomes of students whose final placements were due to the additional points earned from these noncognitive indicators. Although the additional points may seem nominal relative to the placement tests, they did provide a benefit to some students. About 1.8% and 26.4%11 of students in Colleges F and J, respectively, were placed in a higher level course based on their combined score.
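The mechanics of such a points-based rule can be sketched as follows; the cutoffs and point values here are hypothetical and are not the actual algorithms used by Colleges F or J.

```python
# Illustrative points-based placement rule. Cutoffs and points are hypothetical.
def assign_math_level(raw_score: float, mm_points: float, cutoffs: dict) -> tuple:
    """Return (placed level, whether multiple-measure points changed the level)."""
    def place(score):
        placed = "arithmetic"                       # default lowest level
        for level, cut in sorted(cutoffs.items(), key=lambda kv: kv[1]):
            if score >= cut:
                placed = level                      # highest level whose cutoff is met
        return placed

    test_only = place(raw_score)
    final = place(raw_score + mm_points)            # points added to the raw test score
    return final, final != test_only                # True = a "multiple measure boost"

cutoffs = {"pre-algebra": 25, "elementary algebra": 45, "intermediate algebra": 65}
print(assign_math_level(raw_score=44, mm_points=2, cutoffs=cutoffs))
# -> ('elementary algebra', True): the two added points moved the student up one level
```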


Modern validation theory would suggest that measures used for placement in developmental courses are valid if they increase access to higher-level math courses without compromising the likelihood of success in those courses (Kane, 2006; Sawyer, 1996). Therefore, we compared the outcomes of students whose resulting placements were "boosted up" due to noncognitive indicators with the outcomes of students in the same level of math whose placements were the result of their placement test scores. Specified as a linear regression model, the main variable of interest (Boost_i) is a dummy indicator that equals one for students whose responses to the noncognitive-oriented questions on the EBQ resulted in a "multiple measure boost" to a higher level course. We tested the relationship between earning this multiple measure boost and the outcome of interest (Y_i), passing the course in which the student was placed. We also examined the relationship between receiving a placement boost and the outcome of earning 30 degree-applicable credits, which are half the units required for an associate's degree. The linear probability model is:


Y_i = β_0 + β_1·Boost_i + β_2·MM_i + β_3·T_i + γ·X_i + ε_i     (3)


β_1 captures the difference in average outcomes between students who were assigned to the course due to additional points from noncognitive indicators and students who had higher raw test scores. We include MM_i, the number of multiple measure points,12 T_i, the raw placement test score (normalized), and all X_i covariates as before. We also estimated models controlling for math level and cohort. For each model we compared boosted students to other students just above the cutoff, as well as to all students in the same math level.
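A sketch of how Equation 3 might be estimated is shown below, again with hypothetical column names and sample objects; the use of statsmodels OLS with robust standard errors is one reasonable choice for a linear probability model, not necessarily the paper's exact estimation routine.

```python
# Sketch of the linear probability model in Equation 3 for one college.
# `students` is a hypothetical DataFrame of test takers in the analysis sample.
import statsmodels.formula.api as smf

lpm = smf.ols(
    "passed_placed_course ~ mm_boost + mm_points + test_score_z"
    " + C(math_level) + C(cohort) + C(race) + C(age_group) + female",
    data=students,
).fit(cov_type="HC1")                 # robust SEs, a common choice for linear probability models

print(lpm.params["mm_boost"])         # beta_1: boosted students vs. higher-scoring peers
```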


Unlike College F, which used only noncognitive indicators to augment its placement algorithm, College J awarded points for both academic and noncognitive indicators. Thus, for College J we could differentiate students whose boost was solely from academic measures from those who obtained points for both academic and noncognitive indicators.13 We identified the differences between these students by including an interaction term with a dummy variable NC_i. The variable NC_i is an indicator of the student's motivation and equals one when students marked that math was very important to their goals. It equals zero for students who responded not important or somewhat important. The model with this interaction is:


Y_i = β_0 + β_1·Boost_i + β_2·NC_i + β_3·(Boost_i × NC_i) + β_4·MM_i + β_5·T_i + γ·X_i + ε_i     (4)

The interaction term enabled us to determine whether there were differential effects for those who earned points for both cognitive and noncognitive indicators relative to those whose boost was from cognitive measures alone.
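In code, the only change from the previous sketch is the interaction between the boost indicator and the noncognitive dummy (in the formula syntax, a*b expands to both main effects plus their interaction); the column names and sample object remain hypothetical.

```python
# Sketch of Equation 4 for College J: boost, NC dummy, and their interaction.
lpm_interaction = smf.ols(
    "passed_placed_course ~ mm_boost * math_important"     # main effects + interaction
    " + mm_points + test_score_z + C(math_level) + C(cohort)",
    data=college_j_students,                                # hypothetical analysis sample
).fit(cov_type="HC1")

print(lpm_interaction.params["mm_boost:math_important"])    # differential effect of an NC-based boost
```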


FINDINGS


POSSIBILITIES FOR NONCOGNITIVE MEASURES


We first investigated possibilities for the use of noncognitive indicators either in addition to or in place of common placement tests. We did so by estimating the percent of severe placement errors at each placement cutoff within each college. As described above, the procedure involves using different sets of alternative placement criteria (e.g., placement tests, high school background measures, and noncognitive indicators) to model successful placement of students, which we defined as: students predicted to pass the math course of interest with a B or better and placed into that math course, and students predicted to fail the math course of interest and placed into one level below that math course. Severe placement errors are those where students predicted to fail the math course of interest were placed into that math course, and students predicted to pass the math course of interest with a B or better were placed into one level below that math course.14


The full results are presented in Tables 4 and 5. Table 4 shows the results of the analysis when the entire sample of students within each pair of math levels is included, e.g., all students in intermediate algebra (IA) or elementary algebra (EA), and Table 5 shows the results for students within a five-point bandwidth around the cutoff. The first column of each table shows the estimated percent of severe placement errors under the status quo practice (which is essentially placement test score alone). There is a range of estimates across colleges, from nearly zero error to over 30% of students identified as placed in error. In comparing error rates across placement scenarios, a higher rate of errors does not suggest the scenario’s placement criteria would produce more placement errors. Rather, a higher estimate suggests the alternative placement criteria identify a larger fraction of placement errors in existing practice that can possibly be avoided. We also decomposed the total estimated placement error into its two respective parts—the percent of students considered as under-placed and the percent considered as overplaced. This also reveals variation across levels and colleges. This is expected given variation in the level of the placement cutoff set at each college and institutional differences between colleges.



Table 4. Severe Placement Errors (SPE), Underplacement (UP), and Overplacement (OP) Identified in Current Practice Under Different Placement Scenarios, Bandwidth = All Students, Pass with B or Better

Each cell reports Total SPE / UP / OP.

| College  | Math Levels | 1) Placement Test* | 2) Test+HSB   | 3) Test+NC    | 4) Test+HSB+NC | 5) HSB        | 6) HSB+NC     | 7) NC         |
|----------|-------------|--------------------|---------------|---------------|----------------|---------------|---------------|---------------|
| B        | IA/EA       | .02/.40/.60        | .04/.35/.65   | .03/.38/.63   | .05/.33/.67    | .09/.72/.28   | .10/.70/.30   | .08/.84/.16   |
| B        | EA/PA       | .01/.36/.64        | .03/.21/.79   | .02/.30/.70   | .03/.20/.80    | .09/.81/.19   | .10/.80/.20   | .11/.93/.07   |
| C        | IA/EA       | .03/.83/.17        | .04/.86/.14   | .04/.84/.16   | .10/.91/.09    | .22/.97/.03   | .28/.97/.03   | .23/.97/.03   |
| C        | EA/PA       | .23/.95/.05        | .37/.97/.03   | .28/.88/.12   | .36/.91/.09    | .39/.97/.03   | .42/.93/.07   | .43/.93/.07   |
| C        | PA/AR       | .08/1.00/.00       | .10/1.00/.00  | .08/1.00/.00  | .10/1.00/.00   | .57/1.00/.00  | .57/1.00/.00  | .64/1.00/.00  |
| D        | IA/EA       | .18/.01/.99        | .16/.01/.99   | .18/.02/.98   | .16/.02/.98    | .16/.04/.96   | .16/.04/.96   | .19/.07/.93   |
| D        | PA/AR       | .09/.00/1.00       | .08/.99/.01   | .09/.00/1.00  | .08/.99/.01    | .11/.99/.01   | .11/.99/.01   | .21/1.00/.00  |
| F        | IA/EA       | .06/.03/.45        | .07/.05/.95   | .07/.05/.95   | .08/.06/.94    | .10/.30/.70   | .11/.30/.70   | .10/.42/.58   |
| F        | EA/PA       | .08/.96/.04        | .05/.83/.17   | .10/.97/.03   | .07/.87/.13    | .07/.91/.09   | .08/.87/.13   | .08/.98/.02   |
| F        | PA/AR       | .37/.00/1.00       | .37/.00/1.00  | .37/.00/1.00  | .37/.00/1.00   | .39/.01/.99   | .39/.01/.99   | .39/.01/.99   |
| G        | IA/EA       | .08/.57/.43        | .17/.69/.31   | .11/.57/.43   | .18/.66/.34    | .07/.27/.73   | .08/.34/.66   | .08/.35/.65   |
| G        | EA/PA       | .17/.02/.98        | .22/.01/.99   | .19/.01/.99   | .23/.02/.98    | .21/.02/.98   | .22/.03/.97   | .24/.05/.95   |
| G        | PA/AR       | .00/.00/1.00       | .01/.16/.84   | .00/.00/1.00  | .01/.06/.94    | .01/.75/.25   | .01/.57/.43   | .01/.85/.15   |
| AVERAGES | IA/EA       | .07/.37/.53        | .10/.39/.61   | .08/.37/.63   | .11/.40/.60    | .13/.46/.54   | .15/.47/.53   | .13/.53/.47   |
| AVERAGES | EA/PA       | .12/.57/.43        | .17/.51/.49   | .15/.54/.46   | .17/.50/.50    | .19/.68/.32   | .20/.66/.34   | .21/.72/.28   |
| AVERAGES | PA/AR       | .14/.25/.75        | .14/.54/.46   | .14/.25/.75   | .14/.51/.49    | .27/.69/.31   | .27/.64/.36   | .31/.71/.29   |
| AVERAGES | TOTAL       | .11/.45/.49        | .13/.53/.47   | .13/.44/.56   | .14/.53/.47    | .22/.67/.33   | .23/.66/.34   | .25/.71/.29   |

Notes: Intermediate algebra (IA); elementary algebra (EA); pre-algebra (PA); arithmetic (AR); high school background (HSB); noncognitive (NC).


Table 5. Severe Placement Error (SPE), Underplacement (UP), and Overplacement (OP) Identified in Current Practice Under Different Placement Scenarios, BW = 5, Pass with B or Better

Each cell reports Total SPE / UP / OP; empty cells indicate that estimates were not available.

| College  | Math Levels | 1) Placement Test* | 2) Test+HSB   | 3) Test+NC    | 4) Test+HSB+NC | 5) HSB        | 6) HSB+NC     | 7) NC         |
|----------|-------------|--------------------|---------------|---------------|----------------|---------------|---------------|---------------|
| B        | IA/EA       | .10/.87/.13        | .09/.86/.14   | .09/.86/.14   | .09/.80/.20    | .09/.72/.28   | .10/.70/.30   | .08/.84/.16   |
| B        | EA/PA       | .01/.13/.87        | .02/.16/.84   | .02/.11/.89   | .02/.22/.78    | .09/.81/.19   | .10/.80/.20   | .11/.93/.07   |
| C        | IA/EA       | .05/.96/.04        | .05/.96/.04   | .06/.95/.05   | .06/.93/.07    | .22/.97/.03   | .28/.97/.03   | .23/.97/.03   |
| C        | EA/PA       | .39/.97/.03        |               |               |                |               | .42/.93/.07   | .43/.93/.07   |
| C        | PA/AR       | .01/.71/.29        | .01/.78/.22   | .01/.70/.30   | .01/.83/.17    | .57/1.00/.00  | .57/1.00/.00  | .64/1.00/.00  |
| D        | IA/EA       | .07/.02/.98        | .06/.02/.98   | .07/.02/.98   | .06/.02/.98    | .16/.04/.96   | .16/.04/.96   | .19/.07/.93   |
| D        | PA/AR       | .04/.89/.11        | .05/.91/.09   | .04/.89/.11   | .04/.92/.08    | .11/.99/.01   | .11/.99/.01   | .21/1.00/.00  |
| F        | IA/EA       | .03/.05/.39        | .04/.09/.91   | .03/.07/.93   | .04/.09/.91    | .10/.30/.70   | .11/.30/.70   | .10/.42/.58   |
| F        | EA/PA       |                    |               |               |                | .07/.91/.09   | .08/.87/.13   | .08/.98/.02   |
| F        | PA/AR       | .25/.01/.99        | .25/.01/.99   | .25/.01/.99   | .25/.01/.99    | .39/.01/.99   | .39/.01/.99   | .39/.01/.99   |
| G        | IA/EA       | .01/.00/1.00       | .02/.00/1.00  | .02/.00/1.00  | .02/.00/1.00   | .07/.27/.73   | .08/.34/.66   | .08/.35/.65   |
| G        | EA/PA       | .07/.00/1.00       | .06/.00/1.00  | .07/.02/.98   | .07/.02/.98    | .21/.02/.98   | .22/.03/.97   | .24/.05/.95   |
| G        | PA/AR       | .00/.50/.50        | .01/.00/1.00  | .01/.06/.94   | .01/.00/1.00   | .01/.75/.25   | .01/.57/.43   | .01/.85/.15   |
| AVERAGES | IA/EA       | .05/.38/.51        | .05/.39/.61   | .06/.38/.62   | .05/.37/.63    | .13/.46/.54   | .15/.47/.53   | .13/.53/.47   |
| AVERAGES | EA/PA       | .16/.37/.63        | .04/.08/.92   | .04/.07/.93   | .05/.12/.88    | .12/.58/.42   | .20/.66/.34   | .21/.72/.28   |
| AVERAGES | PA/AR       | .08/.53/.47        | .08/.42/.58   | .08/.41/.59   | .08/.44/.56    | .27/.69/.31   | .27/.64/.36   | .31/.71/.29   |
| AVERAGES | TOTAL       | .09/.42/.53        | .06/.34/.66   | .06/.33/.67   | .06/.35/.65    | .17/.57/.43   | .20/.58/.42   | .21/.65/.35   |

Notes: Intermediate algebra (IA); elementary algebra (EA); pre-algebra (PA); arithmetic (AR); high school background (HSB); noncognitive (NC).



Nevertheless, a common pattern emerges. Comparing Columns 1–4 with Columns 5–7 reveals that the proportion of placement error identified in existing practice rises when high school background and noncognitive indicators are utilized to model student success. This is the finding in nearly all colleges and levels, and it suggests that more students can be considered as having been placed in error by existing practice when these alternative criteria are used to model the likelihood that students would fail or pass the higher level course, compared to when test scores are incorporated in the model. Patterns are consistent across the full and narrow bandwidth specifications, though the severe placement error estimates are larger when all students are considered. Moving forward, we focus on the narrower bandwidth of five points right around each placement cutoff to better understand sorting and accuracy around each cutoff.


Given the similarity in patterns across colleges and levels, we also calculated the average severe errors for each column to better summarize these trends. Figures 1 and 2 visually present these average results. The first bar of Figure 1 shows the severe placement error of existing placement practices, which typically consider a student’s placement test score and additional points from multiple measures (academic) when making placement decisions. On average, the estimate is that 8.7% of students within a five-point bandwidth around the cutoff can be considered as severe placement errors (11% for the full bandwidth). Figure 2 decomposes errors into the proportion of under- and overplacement, and these results indicate that more students appear to be underplaced when high school background and noncognitive indicators are used to examine placement errors.15 Students who are identified as under-placed presumably could have passed the higher level course.


Figure 1. Severe Placement Errors Identified in Status Quo Practice (Bandwidth=5 points Around Placement Cutoff)


Note: *Placement Test is representative of status quo placement practice, which in some cases is placement test plus additional multiple measures (see Table 1).


Figure 2. Type of Severe Placement Error Identified (Bandwidth = 5 Points Around Placement Cutoff)


Note: *Placement Test is representative of status quo placement practice, which in some cases is placement test plus additional multiple measures (see Table 1).


The primary goal of the exercise is to understand whether alternative measures identify more or less placement error than the status quo practice. Overall, the results indicate that substantially more students can be considered as having been placed in error when high school background and noncognitive indicators are used to identify placement errors, compared to when placement test scores are used. For example, if the estimating equation for success/failure in the upper level course consists of high school background measures (e.g., HS GPA, prior math, etc.) or noncognitive measures (e.g., motivation), then it appears that 22%–25% of students in the full bandwidth and 17%–21% in the five-point bandwidth have been placed in error by current practices. Because the students flagged as errors are those who could have passed the upper level course if placed there, or who would have fared better in the lower level course, these alternative schemes may offer an improvement over current practices.


The answer to whether noncognitive indicators of motivation, social support, and college plans might improve placement accuracy is less clear. We see that indeed the use of HSB and noncognitive indicators identified the most error, suggesting that these measures may be suitable alternatives to current practices. Comparing the HSB-only results with those for HSB and noncognitive indicators, we see that noncognitive indicators may make marginal improvements, but our supplementary analysis also revealed that the HSB measures were generally positively correlated with the noncognitive indicators.


NONCOGNITIVE MEASURES IN USE


We therefore complement this analysis with evidence from two colleges in the district, Colleges F and J, which actually do factor indicators of noncognitive constructs into their placement algorithms. These colleges award supplemental points that are added to (or in some cases subtracted from) the raw placement test score to arrive at the final score used for math placement. To this end, students' placement results may be directly related to some noncognitive attributes. In fact, the policy of using these noncognitive indicators increased access to higher level courses for students who otherwise would have been placed in a lower level math course based on placement test scores alone. As mentioned above, 1.8% and 26.4% of students in Colleges F and J, respectively, were placed in a higher level course based on their final scores after the multiple measures were considered.


The question of interest is whether students who received a multiple measure boost generally performed differently from their higher scoring peers in terms of passing the placed math course and completing 30 degree-applicable units.16 Tables 6 and 8 present the results from this analysis. Based on our estimation of Equation 3, we found no evidence of differences for students around the placement cutoffs in College F (Table 6) or College J (Table 8). We show that these results are robust to model specifications that include placement level dummies, cohort fixed effects, and student background variables. The null results in Colleges F and J suggest that the use of noncognitive indicators increased access to higher level courses without compromising the likelihood of success in those courses.


Table 6. Regression Results: Whether Students Passed the Placed Math Course Within One Year of Assessment, College F

|                          | Model 1         | Model 1         | Model 2          | Model 3          | Model 4          |                  |                  |                  |
|--------------------------|-----------------|-----------------|------------------|------------------|------------------|------------------|------------------|------------------|
|                          | (1) Around      | (2) Entire      | (3) Around       | (4) Entire       | (5) Around       | (6) Entire       | (7) Around       | (8) Entire       |
| Multiple Measure Boost   | -0.035 (0.06)   | -0.032 (0.03)   | -0.061 (0.06)    | -0.041 (0.03)    | -0.079 (0.06)    | -0.048 (0.03)    | -0.068 (0.06)    | -0.047 (0.03)    |
| Multiple Measure Points  | 0.007 (0.01)    | -0.008 (0.01)   | 0.01 (0.01)      | -0.009 (0.01)    | 0.009 (0.01)     | -0.01 (0.01)     | -0.003 (0.01)    | -0.019** (0.01)  |
| Test Score (z)           | 0.007 (0.01)    | 0.025*** (0.00) | -0.014 (0.01)    | 0.018** (0.01)   | -0.01 (0.01)     | 0.019** (0.01)   | -0.014 (0.01)    | 0.016* (0.01)    |
| Placed Math Level        |                 |                 |                  |                  |                  |                  |                  |                  |
| 1 Level Below            |                 |                 | 0.059 (0.05)     | -0.077** (0.03)  | 0.038 (0.05)     | -0.083*** (0.03) | 0.05 (0.05)      | -0.067** (0.03)  |
| 2 Levels Below           |                 |                 | -0.068 (0.05)    | -0.162*** (0.02) | -0.073 (0.05)    | -0.162*** (0.02) | -0.052 (0.05)    | -0.135*** (0.02) |
| 3 Levels Below           |                 |                 | -0.104* (0.05)   | -0.112*** (0.02) | -0.063 (0.05)    | -0.104*** (0.02) | -0.04 (0.05)     | -0.074*** (0.02) |
| 4 Levels Below           |                 |                 |                  | -0.155*** (0.02) |                  | -0.159*** (0.02) |                  | -0.126*** (0.02) |
| Age at Assessment        |                 |                 |                  |                  |                  |                  |                  |                  |
| 20–24                    |                 |                 |                  |                  |                  |                  | 0.006 (0.02)     | -0.024* (0.01)   |
| 25–34                    |                 |                 |                  |                  |                  |                  | -0.008 (0.02)    | 0.002 (0.01)     |
| 35–54                    |                 |                 |                  |                  |                  |                  | -0.057* (0.03)   | -0.017 (0.01)    |
| 55–65                    |                 |                 |                  |                  |                  |                  | -0.058 (0.08)    | -0.046 (0.03)    |
| Female                   |                 |                 |                  |                  |                  |                  | 0.019 (0.02)     | 0.033*** (0.01)  |
| Race                     |                 |                 |                  |                  |                  |                  |                  |                  |
| Asian/PI                 |                 |                 |                  |                  |                  |                  | -0.048 (0.04)    | 0.014 (0.02)     |
| African-American         |                 |                 |                  |                  |                  |                  | -0.070* (0.03)   | -0.061*** (0.01) |
| Latina/o                 |                 |                 |                  |                  |                  |                  | -0.001 (0.03)    | 0.014 (0.02)     |
| Other                    |                 |                 |                  |                  |                  |                  | -0.017 (0.04)    | -0.008 (0.02)    |
| English not prim. lang.  |                 |                 |                  |                  |                  |                  | 0.059* (0.03)    | 0.031* (0.01)    |
| Perm. Res.               |                 |                 |                  |                  |                  |                  | -0.008 (0.03)    | 0.032* (0.02)    |
| Other Visa               |                 |                 |                  |                  |                  |                  | 0.106* (0.05)    | 0.091*** (0.02)  |
| Cohort Fixed Effects     | No              | No              | No               | No               | Yes              | Yes              | Yes              | Yes              |
| Constant                 | 0.189*** (0.01) | 0.243*** (0.00) | 0.253*** (0.04)  | 0.372*** (0.02)  | 0.246*** (0.05)  | 0.355*** (0.03)  | 0.250*** (0.06)  | 0.333*** (0.03)  |
| R-squared                | -0.001          | 0.004           | 0.01             | 0.011            | 0.024            | 0.018            | 0.038            | 0.032            |
| N                        | 2328            | 12377           | 2328             | 12377            | 2328             | 12377            | 2328             | 12377            |

* p < 0.05, ** p < 0.01, *** p < 0.001
Notes: The estimates shown are coefficients of the linear probability model, with standard errors in parentheses. Students in College F could earn up to an additional 2 points based on responses on the Educational Background Questionnaire.
Models: M1 includes Boost, Multiple Measure Points, and Test Score; M2 includes Boost, Multiple Measure Points, Test Score, and Math Level; M3 includes Boost, Multiple Measure Points, Test Score, Math Level, and Cohort Fixed Effects; M4 includes Boost, Multiple Measure Points, Test Score, Math Level, Cohort Fixed Effects, and demographic controls. Around restricts sample to students 5 points above the cutoff; Entire includes all students within the math level. Reference groups: Students in transfer-level math, students 18–20 years old; Race = white.



Since the multiple measure boost in these two colleges consisted of additional points drawn from a number of survey questions (e.g., college enrollment and employment plans, importance of math, and time since last enrollment in College F; highest math, HS diploma/GED, and importance of math in College J), we also investigated heterogeneity by boost type. That is, we attempted to disentangle multiple measure boosts that were largely due to points from academic measures from those that included points from the noncognitive indicators. This was only possible in College J, where there was enough variation in answer choice and a large enough sample size to identify a group of students who earned a multiple measure boost into a higher level course but indicated that they considered math to be not important or only somewhat important to their educational goals (see Table 7, N = 547).


Table 7. Composition of Multiple Measure Boost in College J

|                                     | Total                  | Cognitive + Noncognitive | Cognitive Only | Noncognitive Only |
|-------------------------------------|------------------------|--------------------------|----------------|-------------------|
| Number of students receiving boost  | 2054 (26.4% of 7,782)  | 1350                     | 547            | 157               |
| Percentage of all boosts            |                        | 65.7                     | 26.6           | 7.6               |

Note: Cognitive measures are HS diploma/GED and highest math passed with a C or better; the noncognitive measure is importance of math.



We therefore included a dummy variable that equaled one for each student who said math was important, and interacted this with the boost variable as shown in Equation 4. The results in Table 8 show no differential relationship between the boost and student outcomes for the interaction term. This suggests that boosted students who indicated that math was important and boosted students who did not had statistically equivalent probabilities of passing the course. While this can be interpreted as evidence of the irrelevance of the motivation measure, we remind the reader that this result can still be considered an improvement in placement accuracy. The students who received this boost due to a noncognitive indicator were able to access a higher level course, and their likelihood of success in the course was the same as their peers'.


Table 8. Regression Results: Whether Students Passed the Placed Math Course Within One-year of Assessment, College J

 

                           Model 1                  Model 2                  Model 3                  Model 4
                           (1) Around   (2) Entire  (3) Around   (4) Entire  (5) Around   (6) Entire  (7) Around   (8) Entire
MM Boost                   -0.032       -0.065**    -0.048       -0.069***   -0.048       -0.067**    -0.046       -0.067**
                           (0.03)       (0.02)      (0.03)       (0.02)      (0.03)       (0.02)      (0.03)       (0.02)
NC (Imp. Math)             -0.024       -0.054***   -0.024       -0.041**    -0.027       -0.042***   -0.03        -0.046***
                           (0.02)       (0.01)      (0.02)       (0.01)      (0.02)       (0.01)      (0.02)       (0.01)
MM Boost*NC                -0.035       0.026       -0.029       0.007       -0.028       0.005       -0.024       0.007
                           (0.03)       (0.02)      (0.03)       (0.02)      (0.03)       (0.02)      (0.03)       (0.02)
MM Points                  0.058***     0.076***    0.058***     0.061***    0.058***     0.061***    0.058***     0.061***
                           (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)
Test Score (z)             0.018        0.042***    -0.026       0.011       -0.03        0.008       -0.033*      0.006
                           (0.01)       (0.01)      (0.02)       (0.01)      (0.02)       (0.01)      (0.02)       (0.01)
Placed Math Level
  1 Level Below                                     -0.494**     -0.125      -0.482**     -0.128      -0.448*      -0.117
                                                    (0.18)       (0.08)      (0.18)       (0.08)      (0.18)       (0.08)
  2 Levels Below                                    -0.670***    -0.219**    -0.661***    -0.225**    -0.624***    -0.206**
                                                    (0.18)       (0.08)      (0.18)       (0.08)      (0.18)       (0.08)
  3 Levels Below                                    -0.534**     -0.12       -0.525**     -0.128      -0.489**     -0.106
                                                    (0.18)       (0.08)      (0.18)       (0.08)      (0.18)       (0.08)
  4 Levels Below                                    -0.660***    -0.222**    -0.656***    -0.233**    -0.614***    -0.208**
                                                    (0.18)       (0.08)      (0.18)       (0.08)      (0.18)       (0.08)
Age at Assessment
  20–24                                                                                                -0.044*      -0.041***
                                                                                                       (0.02)       (0.01)
  25–34                                                                                                -0.016       -0.02
                                                                                                       (0.02)       (0.01)
  35–54                                                                                                0.045*       0.014
                                                                                                       (0.02)       (0.01)
  55–65                                                                                                -0.081       -0.053
                                                                                                       (0.07)       (0.04)
Female                                                                                                 0.015        0.022*
                                                                                                       (0.02)       (0.01)
Race
  Asian/PI                                                                                             -0.092       0.109
                                                                                                       (0.13)       (0.07)
  African-American                                                                                     -0.124       0.04
                                                                                                       (0.11)       (0.06)
  Latina/o                                                                                             -0.009       0.129*
                                                                                                       (0.11)       (0.06)
  Other                                                                                                -0.012       0.108
                                                                                                       (0.11)       (0.06)
English not prim. lang.                                                                                -0.017       -0.011
                                                                                                       (0.03)       (0.02)
Perm. Res.                                                                                             0.078*       0.125***
                                                                                                       (0.04)       (0.02)
Other Visa                                                                                             0.022        0.055
                                                                                                       (0.06)       (0.04)
Cohort Fixed Effects       No           No          No           No          Yes          Yes         Yes          Yes
Constant                   0.141***     0.113***    0.742***     0.343***    0.712***     0.332***    0.771***     0.247*
                           (0.02)       (0.01)      (0.18)       (0.08)      (0.18)       (0.08)      (0.21)       (0.10)
R-squared                  0.011        0.04        0.034        0.056       0.036        0.057       0.052        0.071
N                          3416         7782        3416         7782        3416         7782        3416         7782

* p < 0.05, ** p < 0.01, *** p < 0.001
Notes: The estimates shown are coefficients of the linear probability model, with standard errors in parentheses. Students in College J could earn up to an additional 5 points based on responses to the Educational Background Questionnaire.
Models: M1 includes Boost, Multiple Measure (MM) Points, and Test Score; M2 includes Boost, Multiple Measure Points, Test Score, and Math Level; M3 includes Boost, Multiple Measure Points, Test Score, Math Level, and Cohort Fixed Effects; M4 includes Boost, Multiple Measure Points, Test Score, Math Level, Cohort Fixed Effects, and demographic controls. Around restricts sample to students 5 points above the cutoff; Entire includes all students within the math level. Reference groups: Students in transfer-level math, students 18–20 years old; Race = white.


DISCUSSION


This study speaks to concerns about the selection and sorting processes that occur at the start of community college, which have been of increasing policy interest in recent years. A number of states (e.g., Colorado, Florida, Texas) have begun to consider multiple measures, including noncognitive measures, for developmental student advising, assessment, and placement practices (Bracco et al., 2014). However, there has been scant evaluation of these practices to inform policy, and our study attempts to fill this gap.


We drew upon expanding conceptions of college readiness to frame our investigation of a more holistic approach to placement, one that includes noncognitive indicators in contexts where they are not currently used. We found that using high school background and noncognitive indicators may help to identify, and thus reduce, some of the placement errors associated with test-based placement. This is key descriptive evidence that alternative placement criteria may offer a means of improving upon status quo practices, and further research should continue to test this hypothesis.


We were able to validate the use of noncognitive indicators in two colleges. The results from Colleges F and J, which essentially incorporate noncognitive characteristics in the identification of low-scoring students who could be moved to a higher level course, reveal that students placed under this approach performed no differently from their higher scoring peers. Interestingly, they also performed no differently from their peers whose boost was based primarily on academic background measures. The study therefore contributes to the burgeoning literature on using multiple measures such as high school transcript information to optimally place students (Fong & Melguizo, 2016; Ngo & Kwon, 2015; Scott-Clayton et al., 2014), but adds the important component of evaluating the use of indicators of noncognitive attributes. Similar to the aforementioned studies of academic measures, we find that noncognitive indicators may make marginal improvements in placement accuracy over high school background factors alone or test scores alone.


LIMITATIONS


A limitation of these analyses is that we were unable to observe instructor characteristics, which may be important determinants of developmental math student outcomes (Chingos, 2016). We also relied on instructors’ grades as the metric of success. If math instructors adjust their instruction or grading to meet the needs of their students, this would bias the estimate of the relationship between academic background measures, such as placement test scores, and course success. We reasoned that because math is a fairly hierarchical subject, instructors’ grading practices may not vary as much as they might in other subjects (e.g., English). To mitigate this bias, we also chose to focus on the B or better criterion, which is a more cautious approach to identifying error than using a C or better criterion (Scott-Clayton et al., 2014). There is likely less variation in grading practices related to awarding an A or B grade than a C grade.


A second limitation is that the results may reflect where placement cutoffs are set rather than the accuracy of the measures themselves. The boost analysis, for example, may capture not the validity of the additional measures used but rather the measurement error inherent in placement testing. Test scores are noisy measures, and students a few points apart may be very similar. To address this concern, we ran models comparing boosted students both to similar-scoring peers right above the cutoff and to all students in a given level. We found the results to be consistent across model specifications for College F, but we found differences between the “around” and “entire” groups in College J (see Table 8). The significant negative coefficients for “entire” in College J are likely related to the fact that as much as a 30-point range of placement scores can result in assignment to the same course. Students in the “entire” regressions in College J may therefore differ substantially along unobservable characteristics.
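To make the two comparison samples concrete, the sketch below constructs an “entire” sample (all students placed in a given level) and an “around” sample that pairs boosted students with peers scoring within five points above the cutoff. The file and variable names are hypothetical, and the exact way the five-point band is applied here is an illustrative assumption rather than the study’s code.

```python
import pandas as pd

# Hypothetical columns: score is the adjusted placement score (test score plus
# multiple measure points) and cutoff is the minimum score for the higher course.
df = pd.read_csv("college_j_analysis.csv")

entire = df.copy()  # all students placed in the level

# "Around": boosted students plus non-boosted peers within 5 points above the cutoff
within_band = (df["score"] >= df["cutoff"]) & (df["score"] <= df["cutoff"] + 5)
around = df[(df["mm_boost"] == 1) | within_band]

print(len(entire), len(around))
```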


FUTURE CONSIDERATIONS


While the results from both sets of analyses point toward a potential advantage to using high school background measures and, to a lesser extent, noncognitive measures in the assessment and placement process, a study in which students were experimentally placed using these measures would provide stronger evidence of their usefulness. Indeed, the results from Colleges F and J provide somewhat more convincing evidence that noncognitive indicators improve placement accuracy, but again, these students were placed using a combination of placement test scores, high school background, and noncognitive indicators. Further research should examine student success under conditions where noncognitive indicators are the sole placement measure in order to determine their actual usefulness as an alternative to academic placement measures.


Further, while we chose existing items from colleges’ questionnaires and classified some as cognitive and others as noncognitive, we are unsure of their psychometric properties and validity in the traditional sense. It stands to reason that different cognitive and noncognitive measures would yield different results than the ones we obtained. We encourage further work investigating differences among measures that could potentially be used for placement. To this end, we encourage a thorough exploration of the burgeoning literature on noncognitive constructs and their scales in predicting college outcomes, and of their applicability to placement policy at the community college level.


Finally, we note that these may be noisy measures of high school background and noncognitive attributes, since they are gathered from a self-reported student questionnaire administered at the time of placement testing. A placement policy that incorporates such self-reported information could be susceptible to misinterpretation, reference bias, faking, or “gaming” (Duckworth & Yeager, 2015). Colleges therefore need to exercise caution and consider evaluating both the measures and the accessibility of placement policy information. Incorporating high school transcript information to automate these decisions may be more efficient and accurate, but this would necessitate data-sharing agreements between K–12 and community college districts.


CONCLUSION


Ultimately, these findings concerning noncognitive indicators are related to the fact that psychosocial attributes such as motivation, and nonacademic characteristics such as students’ use of time and their degree of social support, are useful for explaining why some students do well in school while others fall behind (Pintrich, 2003). Our findings suggest that gauging these various measures at the start of each student’s college career may assist colleges as they sort students into community college coursework. Although evaluation and selection on noncognitive skills may unfairly focus attention on student characteristics rather than the role of institutions, using noncognitive measures may nevertheless promote equity. Holistic placement practices that increase access to upper level courses without compromising the likelihood of student success in those courses provide opportunities for community college students to progress faster and further in their college careers.


Notes


1. We recognize that there is debate over terminology, with some scholars preferring terms such as character skills, social and emotional competencies, dispositions, personality, temperament, 21st century skills, and personal qualities (Duckworth & Yeager, 2015). We choose the term noncognitive because it provides a contrast to the academic/cognitive measures discussed and because these terms “refer to the same conceptual space” (Duckworth & Yeager, 2015, p. 239).

2. The ACT, Inc. has recently decided to phase out the use of the COMPASS (Fain, 2015).

3. The Multiple Measures Assessment Project reported collecting noncognitive data from one college for analysis (Bahr et al., 2014).

4. About one quarter of all community college students in the United States are enrolled in California community colleges, many of which are located in urban centers (Foundation for Community Colleges, n.d.).

5. Source: California Community College Chancellor’s Office DataMart (http://datamart.cccco.edu/datamart.aspx)

6. We could not identify student survey responses in College H (2009–2012) because we did not have access to the actual questionnaire. College H did collect indicators of some noncognitive attributes from 2005–2009 (shown in Table 2), but the college used a diagnostic test instead of the ACCUPLACER or COMPASS. We therefore did not include College H in the analysis.

7. Students are considered likely to pass if their predicted probability of success in the upper level course is 50% or greater, and considered likely to fail if the predicted probability of failure is 50% or greater.
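A minimal sketch of this 50% decision rule, assuming a hypothetical column p_success that holds each student’s predicted probability of passing the higher level course:

```python
import pandas as pd

df = pd.read_csv("predicted_probabilities.csv")  # hypothetical predictions file
df["likely_pass"] = df["p_success"] >= 0.50
df["likely_fail"] = (1 - df["p_success"]) >= 0.50  # equivalent to p_success <= 0.50
```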

8. We use the adjusted score, which includes multiple measure points for each college.

9. We also run pooled college models by standardizing student test scores. For these pooled analyses, we identify items on colleges’ EBQs that measure common constructs, such as use of time, motivation, and social support.

10. College F subtracted points for high hours of schooling and work (use of time), low importance of education (motivation), and returning students who had been out of education for more than 5 years.

11. This is 221 out of 12,224 students in College F and 2,054 students out of 7,782 students in College J.

12. BOOST and MMPOINTS are correlated but this should not be a multicollinearity issue. We ran the models without the multiple measure points and did not observe significant differences in the magnitude, direction, or statistical significance of the boost variable.

13. There were 157 cases of boosts solely determined by noncognitive measures in College J and 547 with cognitive measures alone.

14. We also conducted the same analyses with a “Pass at all” outcome. These results are available from the authors upon request.

15. We caution against interpreting the results as indicators of accuracy of the current system. This is because accuracy is related to the predictive validity of the placement instrument and we do not perform an analysis that measures predictive validity per se. Other methods may be used to determine whether placement tools are accurate and, relatedly, whether cutoffs are set correctly (Melguizo et al., 2016).

16. The regression results for the longer term outcomes are available in the Appendix.


References


Armstrong, W. B. (2000). The association among student success in courses, placement test scores, student background data, and instructor grading practices. Community College Journal of Research and Practice, 24(8), 681–695.


Almeida, D. (2015). The roots of college readiness. In W. G. Tierney & J. Duncheon (Eds.), The problem of college readiness (pp. 3–44). Albany: SUNY Press.


Bahr, P. R., Hayward, C., Hetts, J., Lamoree, D., Newell, M., Pellegrin, N., & Willett, T. (2014). Multiple Measures for Assessment and Placement (White Paper). Retrieved from http://www.rpgroup.org/system/files/MMAP_WhitePaper_Final_September2014.pdf


Bailey, T., Jeong, D. W., & Cho, S.-W. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29(2), 255–270. http://doi.org/10.1016/j.econedurev.2009.09.002


Bracco, K. R., Dadgar, M., Austin, K., Klarin, B., Broek, M., Finkelstein, N. . . . & Bugler, D. (2014). Exploring the use of multiple measures for placement into college level courses: Seeking alternatives or improvements to the use of a single standardized test. San Francisco, CA: WestEd.


Burdman, P. (2012). Where to begin? The evolving role of placement exams for students starting college. Boston, MA: Jobs for the Future.


Chingos, M. M. (2016). Instructional quality and student learning in higher education: Evidence from developmental algebra courses. The Journal of Higher Education, 87(1), 84–114.


Conley, D. (2010). Replacing remediation with readiness. An NCPR Working Paper. Retrieved from http://files.eric.ed.gov/fulltext/ED533868.pdf


Cooper, J. (1997). Marginality, mattering, and the African American student: Creating an inclusive college environment. College Student Affairs Journal, 16(2), 15–20.


Covington, M. V. (2000). Goal theory, motivation, and school achievement: An integrative view. Annual Review of Psychology, 51, 171–200.


Dennis, J. M., Phinney, J. S., & Chuateco, L. I. (2005). The role of motivation, parental support, and peer support in the academic success of ethnic minority first generation college students. Journal of College Student Development, 46(3), 223–236.


Dixon Rayle, A., & Chung, K.-Y. (2007). Revisiting first-year college students’ mattering: Social support, academic stress, and the mattering experience. Journal of College Student Retention: Research, Theory and Practice, 9(1), 21–37.


Dixon, S. K., & Kurpius, S. E. R. (2008). Depression and college stress among university undergraduates: Do mattering and self-esteem make a difference? Journal of College Student Development, 49(5), 412–424.


Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087–1101.


Duckworth, A. L., & Yeager, D. S. (2015). Measurement matters: Assessing personal qualities other than cognitive ability for educational purposes. Educational Researcher, 44(4), 237–251.


Duncheon, J. (2015). The problem of college readiness. In W. G. Tierney & J. Duncheon (Eds.), The problem of college readiness (pp. 3–44). Albany: SUNY Press.


Ehrenberg, R. G., & Sherman, D. R. (1987). Employment while in college, academic achievement, and postcollege outcomes: A summary of results. Journal of Human Resources, 22(1), 1.


Eccles, J., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53, 109–132.


Fields, R., & Parsad, B. (2012). Tests and cut scores used for student placement in postsecondary education: Fall 2011. Washington, DC: National Assessment Governing Board.


Fong, K., & Melguizo, T. (2015). Utilizing additional measures to buffer against students’ lack of math confidence and improve placement accuracy in developmental math. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.


France, M. K., & Finney, S. J. (2010). Conceptualization and utility of university mattering: A construct validity study. Measurement and Evaluation in Counseling and Development, 43(1), 48–65.


Gossett, B. J., Cuyjex, M. J., & Cockriel, I. (1996). African Americans’ and Non-African Americans’ sense of mattering and marginality at public, predominantly white institutions. Equity and Excellence in Education, 29(3), 37–42.


Gerlaugh, K., Thompson, L., Boylan, H., & Davis, H. (2007). National study of developmental education II: Baseline data for community colleges. Research in Developmental Education, 20(4), 1–4.


Horn, L., Nevill, S., & Griffith, J. (2006). Profile of undergraduates in US postsecondary education institutions, 2003–04: With a special analysis of community college students. Statistical Analysis Report. NCES 2006-184. National Center for Education Statistics.


Hughes, K. L., & Scott-Clayton, J. (2011). Assessing developmental assessment in community colleges. Community College Review, 39(4), 327–351.


Husman, J., & Lens, W. (1999). The role of the future in student motivation. Educational Psychologist, 34, 113–125.


Jenkins, D., Jaggars, S. S., & Roksa, J. (2009). Promoting gatekeeper course success among community college students needing remediation: Findings and recommendations from a Virginia Study. New York, NY: Community College Research Center. Retrieved from http://eric.ed.gov/?id=ED507824


Kane, M. T. (2001). Current concerns in validity theory. Journal of Educational Measurement, 38(4), 319–342.


Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 17–64). Westport, CT: ACE/Praeger Publishers.


Markle, R., Olivera-Aguilar, M., Jackson, R., Noeth, R., & Robbins, S. (2013). Examining evidence of reliability, validity and fairness for the SuccessNavigator assessment (Research Report No. RR-13-12). Princeton, NJ: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.2013.tb02319.x


Marshall, S. K. (2001). Do I matter? Construct validation of adolescents’ perceived mattering to parents and friends. Journal of Adolescence, 24(4), 473–490.


Marwick, J. D. (2004). Charting a path to success: The association between institutional placement policies and the academic success of Latino students. Community College Journal of Research and Practice, 28(3), 263–280.


Mattern, K. D., & Packman, S. (2009). Predictive validity of ACCUPLACER scores for course placement: A meta-analysis (Research Report No. 2009-2). New York, NY: College Board.


Melguizo, T., Bos, J., Ngo, F., Mills, N., & Prather, G. (2016). Using a regression discontinuity design to estimate the impact of placement decisions in developmental math. Research in Higher Education, 57(2), 123–151.


Michaels, J. W., & Miethe, T. D. (1989). Academic effort and college grades. Social Forces, 68(1), 309.


Miller, R. B., & Brickman, S. J. (2004). A model of future-oriented motivation and self-regulation. Educational Psychology Review, 16, 9–33.


National Center for Public Policy and Higher Education & Southern Regional Education Board (NCPPHE & SREB). (2010). Beyond the rhetoric: Improving college readiness through coherent state policy. Atlanta, GA: NCPPHE. Retrieved from http://publications.sreb.org/2010/Beyond%20the%20Rhetoric.pdf


Ngo, F., & Kwon, W. (2015). Using multiple measures to make math placement decisions: Implications for access and success in community colleges. Research in Higher Education, 56(5), 442–470.


Ngo, F., & Melguizo, T. (2016). How can placement policy improve math remediation outcomes? Evidence from community college experimentation. Educational Evaluation and Policy Analysis, 38(1), 171–196.


Noble, J. P., & Sawyer, R. L. (2004). Is high school GPA better than admission test scores for predicting academic success in college? College and University Journal, 79(4), 17–22.


Noonan, B. M., Sedlacek, W. E., & Veerasamy, S. (2005). Employing noncognitive variables in admitting and advising community college students. Community College Journal of Research and Practice, 29(6), 463–469.


Pajares, F., & Rayner, S. (2001). Self-beliefs and school success: Self-efficacy, self-concept, and school achievement. Perception, 239–266.


Pascarella, E. T., Edison, M. I., Nora, A., Hagedorn, L. S., & Terenzini, P. T. (1998). Does work inhibit cognitive development during College? Educational Evaluation and Policy Analysis, 20(2), 75.


Perry, M., Bahr, P. M., Rosin, M., & Woodward, K. M. (2010). Course-taking patterns, policies, and practices in developmental education in the California Community Colleges. Mountain View, CA: EdSource. Retrieved from https://edsource.org/wp-content/publications/FULL-CC-DevelopmentalCoursetaking.pdf


Pintrich, P. (2003). A motivational science perspective on the role of student motivation in learning and teaching contexts. Journal of Educational Psychology, 95(4), 667–686.


Pintrich, P., & Schunk, D. (2002). Motivation in education. Englewood Cliffs, NJ: Merrill.


Porcea, S. F., Allen, J., Robbins, S., & Phelps, R. P. (2010). Predictors of long-term enrollment and degree outcomes for community college students: Integrating academic, psychosocial, socio-demographic, and situational factors. The Journal of Higher Education, 81(6), 750–778.


Rau, W., & Durand, A. (2000). The academic ethic and college grades: Does hard work help students to “make the grade”? Sociology of Education, 73(1), 19.


Rikoon, S., Liebtag, T., Olivera-Aguilar, M., Robbins, S., & Jackson, T. (2014). A pilot study of holistic assessment and course placement in community college: Findings and recommendations. Retrieved from https://www.ets.org/Media/Research/pdf/RM-14-10.pdf


Robbins, S., Allen, J., Casillas, A., Peterson, C. H., & Le, H. (2006). Unraveling the differential effects of motivational and skills, social, and self-management measures from traditional predictors of college outcomes. Journal of Educational Psychology, 98(3), 598–616.


Roderick, M., Nagaoka, J., & Coca, V. (2009). College readiness for all: The challenge for urban high schools. The Future of Children, 19(1), 185–210.


Rosenberg, M., & McCullough, B. C. (1981). Mattering: Inferred significance and mental health among adolescents. Research in Community and Mental Health, 2, 163–182.


Sawyer, R. (1996). Decision theory models for validating course placement tests. Journal of Educational Measurement, 33(3), 271–290.


Sawyer, R. (2007). Indicators of usefulness of test scores. Applied Measurement in Education, 20(3), 255–271.


Sedlacek, W. E. (2004). Beyond the big test: Noncognitive assessment in higher education. San Francisco, CA: Jossey-Bass.


Schlossberg, N. K. (1989). Marginality and mattering: Key issues in building community. New Directions for Student Services, 1989(48), 5–15.


Scott-Clayton, J. E. (2012). Do high-stakes placement exams predict college success? (Working Paper No. 41). New York, NY: Community College Research Center. Retrieved from http://academiccommons.columbia.edu/catalog/ac:146482


Scott-Clayton, J. E., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment: Evidence From college remediation. Educational Evaluation and Policy Analysis, 36(3), 371–393. http://doi.org/10.3102/0162373713517935


Smith, A. (2016, May 26). Determining a student’s place. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2016/05/26/growing-number-community-colleges-use-multiple-measures-place-students  


Stinebrickner, R., & Stinebrickner, T. R. (2003). Working during school and academic performance. Journal of Labor Economics, 21(2), 473–491.


Stinebrickner, R., & Stinebrickner, T. R. (2004). Time-use and college outcomes. Journal of Econometrics, 121(1–2), 243–269.


Sternberg, R. J., Gabora, L., & Bonney, C. R. (2012). Introduction to the special issue on college and university admissions. Educational Psychologist, 47(1), 1–4.


Tovar, E., Simon, M. A., & Lee, H. B. (2009). Development and validation of the college mattering inventory with diverse urban college students. Measurement and Evaluation in Counseling and Development, 42(3), 154–178.



APPENDIX


Table A1. Regression results: Whether students completed 30 degree-applicable units, College F

 

                           Model 1                  Model 2                  Model 3                  Model 4
                           (1) Around   (2) Entire  (3) Around   (4) Entire  (5) Around   (6) Entire  (7) Around   (8) Entire
Multiple Measure Boost     0.006        0.009       -0.012       0.002       -0.019       0.005       -0.028       0.004
                           (0.06)       (0.03)      (0.06)       (0.03)      (0.06)       (0.03)      (0.06)       (0.03)
Multiple Measure Points    -0.011       -0.025***   -0.009       -0.025***   -0.01        -0.026***   -0.021       -0.033***
                           (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)      (0.02)       (0.01)
Test Score (z)             0.016*       0.043***    0.002        0.020***    -0.007       0.011       -0.013       0.006
                           (0.01)       (0.00)      (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)
Placed Math Level
  1 Level Below                                     0.004        -0.048*     0.018        -0.029      0.036        -0.017
                                                    (0.06)       (0.02)      (0.06)       (0.02)      (0.06)       (0.02)
  2 Levels Below                                    -0.024       -0.050**    -0.019       -0.048*     0.004        -0.032
                                                    (0.05)       (0.02)      (0.05)       (0.02)      (0.05)       (0.02)
  3 Levels Below                                    -0.072       -0.087***   -0.084       -0.098***   -0.066       -0.086***
                                                    (0.05)       (0.02)      (0.05)       (0.02)      (0.05)       (0.02)
  4 Levels Below                                                 -0.142***                -0.149***                -0.134***
                                                                 (0.02)                   (0.02)                   (0.02)
Age at Assessment
  20–24                                                                                                -0.086***    -0.077***
                                                                                                       (0.02)       (0.01)
  25–34                                                                                                -0.127***    -0.073***
                                                                                                       (0.03)       (0.01)
  35–54                                                                                                -0.076**     -0.061***
                                                                                                       (0.03)       (0.01)
  55–65                                                                                                -0.026       -0.079*
                                                                                                       (0.08)       (0.03)
Female                                                                                                 0.016        0.026***
                                                                                                       (0.02)       (0.01)
Race
  Asian/PI                                                                                             0.052        0.019
                                                                                                       (0.05)       (0.02)
  African-American                                                                                     -0.001       0.002
                                                                                                       (0.03)       (0.01)
  Latina/o                                                                                             0.026        0.032*
                                                                                                       (0.03)       (0.01)
  Other                                                                                                0            0.02
                                                                                                       (0.04)       (0.02)
English not prim. lang.                                                                                0.051        0.040***
                                                                                                       (0.03)       (0.01)
Perm. Res.                                                                                             -0.008       0.033*
                                                                                                       (0.04)       (0.02)
Other Visa                                                                                             0.188***     0.101***
                                                                                                       (0.05)       (0.02)
Cohort Fixed Effects       No           No          No           No          Yes          Yes         Yes          Yes
Constant                   0.240***     0.220***    0.275***     0.300***    0.291***     0.341***    0.290***     0.327***
                           (0.01)       (0.00)      (0.05)       (0.02)      (0.06)       (0.03)      (0.06)       (0.03)
R-squared                  0.001        0.013       0.002        0.018       0.004        0.027       0.025        0.041
N                          2328         12377       2328         12377       2328         12377       2328         12377

* p < 0.05, ** p < 0.01, *** p < 0.001
Notes: The estimates shown are coefficients of the linear probability model, with standard errors in parentheses. Students in College F could earn up to an additional 2 points based on responses to the Educational Background Questionnaire.
Models: M1 includes Boost, Multiple Measure Points, and Test Score; M2 includes Boost, Multiple Measure Points, Test Score, and Math Level; M3 includes Boost, Multiple Measure Points, Test Score, Math Level, and Cohort Fixed Effects; M4 includes Boost, Multiple Measure Points, Test Score, Math Level, Cohort Fixed Effects, and demographic controls. Around restricts sample to students 5 points above the cutoff; Entire includes all students within the math level. Reference groups: Students in transfer-level math, students 18–20 years old; Race = white.



Table A2. Regression results: Whether students completed 30 degree-applicable credits, College J

 

                           Model 1                  Model 2                  Model 3                  Model 4
                           (1) Around   (2) Entire  (3) Around   (4) Entire  (5) Around   (6) Entire  (7) Around   (8) Entire
MM Boost                   -0.006       -0.019      -0.016       -0.018      -0.014       -0.018      -0.015       -0.018
                           (0.02)       (0.02)      (0.02)       (0.02)      (0.02)       (0.02)      (0.02)       (0.02)
NC (Imp. Math)             -0.043*      -0.050***   -0.040*      -0.044***   -0.040*      -0.043***   -0.042*      -0.046***
                           (0.02)       (0.01)      (0.02)       (0.01)      (0.02)       (0.01)      (0.02)       (0.01)
MM Boost*NC                -0.022       -0.013      -0.017       -0.015      -0.018       -0.016      -0.015       -0.015
                           (0.03)       (0.02)      (0.03)       (0.02)      (0.03)       (0.02)      (0.03)       (0.02)
MM Points                  0.039***     0.048***    0.034***     0.038***    0.033***     0.037***    0.032***     0.037***
                           (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)
Test Score (z)             0.034**      0.031***    -0.004       0.01        0.001        0.011*      -0.001       0.008
                           (0.01)       (0.00)      (0.01)       (0.01)      (0.01)       (0.01)      (0.01)       (0.01)
Placed Math Level
  1 Level Below                                     -0.330*      -0.07       -0.333*      -0.074      -0.343*      -0.07
                                                    (0.16)       (0.07)      (0.16)       (0.07)      (0.16)       (0.07)
  2 Levels Below                                    -0.429**     -0.144*     -0.428**     -0.145*     -0.433**     (0.13)
                                                    (0.16)       (0.07)      (0.16)       (0.07)      (0.16)       (0.07)
  3 Levels Below                                    -0.441**     -0.166*     -0.438**     -0.168*     -0.442**     -0.156*
                                                    (0.16)       (0.07)      (0.16)       (0.07)      (0.16)       (0.07)
  4 Levels Below                                    -0.499**     -0.211**    -0.493**     -0.210**    -0.494**     -0.196**
                                                    (0.16)       (0.07)      (0.16)       (0.07)      (0.16)       (0.07)
Age at Assessment
  20–24                                                                                                -0.068***    -0.058***
                                                                                                       (0.02)       (0.01)
  25–34                                                                                                -0.039*      -0.024*
                                                                                                       (0.02)       (0.01)
  35–54                                                                                                0.022        0.01
                                                                                                       (0.02)       (0.01)
  55–65                                                                                                -0.063       -0.039
                                                                                                       (0.06)       (0.04)
Female                                                                                                 0.011        0.028**
                                                                                                       (0.01)       (0.01)
Race
  Asian/PI                                                                                             -0.201       0.075
                                                                                                       (0.11)       (0.06)
  African-American                                                                                     -0.091       0.016
                                                                                                       (0.09)       (0.05)
  Latina/o                                                                                             -0.081       0.024
                                                                                                       (0.10)       (0.05)
  Other                                                                                                -0.016       0.075
                                                                                                       (0.10)       (0.05)
English not prim. lang.                                                                                0.063*       0.009
                                                                                                       (0.03)       (0.02)
Perm. Res.                                                                                             0.136***     0.184***
                                                                                                       (0.03)       (0.02)
Other Visa                                                                                             -0.062       0.070*
                                                                                                       (0.05)       (0.03)
Cohort Fixed Effects       No           No          No           No          Yes          Yes         Yes          Yes
Constant                   0.116***     0.100***    0.570***     0.291***    0.586***     0.310***    0.682***     0.267**
                           (0.02)       (0.01)      (0.16)       (0.07)      (0.16)       (0.07)      (0.18)       (0.09)
R-squared                  0.009        0.022       0.021        0.032       0.022        0.032       0.042        0.052
N                          3416         7782        3416         7782        3416         7782        3416         7782

* p < 0.05, ** p < 0.01, *** p < 0.001
Notes: The estimates shown are coefficients of the linear probability model, with standard errors in parentheses. Students in College J could earn up to an additional 5 points based on responses to the Educational Background Questionnaire.
Models: M1 includes Boost, Multiple Measure Points, and Test Score; M2 includes Boost, Multiple Measure Points, Test Score, and Math Level; M3 includes Boost, Multiple Measure Points, Test Score, Math Level, and Cohort Fixed Effects; M4 includes Boost, Multiple Measure Points, Test Score, Math Level, Cohort Fixed Effects, and demographic controls. Around restricts sample to students 5 points above the cutoff; Entire includes all students within the math level. Reference groups: Students in transfer-level math, students 18–20 years old; Race = white.





Cite This Article as: Teachers College Record Volume 120 Number 2, 2018, p. 1-42
https://www.tcrecord.org ID Number: 21987
