
Structuring Opportunity After Entry: Who Has Access to High Quality Instruction During College?


by Josipa Roksa

Background/Context: When inequality of opportunity is discussed in higher education, it typically pertains to access to college. Ample research has examined sociodemographic inequalities in transition to higher education and enrollment in particular types of institutions. Although providing valuable insights, social stratification research does not dedicate the same attention to students’ experiences during college and, more specifically, to inequalities in instructional quality.

Research Questions: Conceptualizing opportunity in terms of instructional quality, I address two specific questions: First, do students from all sociodemographic groups report similar levels of instructional quality? And second, do students report changes in instructional quality during their time in college?

Research Design: This study relies on data from three cohorts of students who entered college in the fall of 2006, 2007, and 2008 and were followed through the end of their senior year in the Wabash National Study (WNS) of Liberal Arts Education. In addition to students’ background characteristics, WNS includes detailed information regarding students’ college experiences, including instruction.

Results: The results reveal a substantial amount of variability in students’ reports of instructional quality. Notably, this variation is not systematically related to students’ sociodemographic characteristics, net of controls. However, the results also reveal a considerable amount of path dependency, with approximately half of the students reporting the same level of instructional quality at the beginning and end of college. Academic achievement is related to students’ reports of instructional quality in the first year, after which academic inertia carries students toward the end. Academic motivation, on the other hand, facilitates mobility to higher levels of instructional quality over time and is particularly beneficial for students who begin college reporting a low level of instructional quality.

Conclusion: Reported patterns offer insights into social stratification and path dependence of students’ experiences during college. Moreover, patterns of stability and change in students’ reports of instructional quality have implications for studies of college impact. The article concludes with reflections on different conceptualizations of instructional quality and recommendations for future research.

When inequality of opportunity is discussed in higher education, it typically pertains to access to college. Ample research has examined inequalities in transition to higher education and enrollment in particular types of institutions. These studies describe the extent to which students from different socioeconomic backgrounds as well as racial/ethnic and gender groups have varying chances of entering higher education and gaining access to specific types of institutions, especially those of higher selectivity (for reviews, see Buchmann, 2009; Deil-Amen & Lopez-Turley, 2007; Gamoran, 2001; Grodsky & Jackson, 2009; Kao & Thompson, 2003). Although examining inequalities in access to higher education is crucial, social stratification research provides few insights into inequalities in students’ experiences after college entry.


Several recent studies of stratification in K–12 education have highlighted the value of considering not only whether students transition to a given level of education but also whether they have access to qualitatively similar experiences at each level (e.g., Breen & Jonsson, 2000; Lucas, 2001). Focusing on students’ access to different academic tracks, they show that socioeconomically disadvantaged students are not only less likely to transition to the next grade or educational level but also less likely to gain access to educationally beneficial experiences. Moreover, these studies demonstrate a substantial amount of path dependence—where students begin their educational careers is highly consequential for their subsequent experiences (see also Kerckhoff, 1993).


Applying these insights to higher education raises questions about social inequality and path dependence of students’ experiences during college. U.S. higher education presents a unique context, one characterized by a substantial amount of flexibility and choice (Delbanco, 2012; Robinson, 2011). However, flexibility and choice are often accompanied by a lack of clear pathways and guidance, which leaves students relying on their (and their family’s) knowledge and experience to make decisions about academic and social engagements (Armstrong & Hamilton, 2013; Deil-Amen & Rosenbaum, 2003; Stuber, 2011). Despite the flexibility, students’ trajectories through college may be both unequal and constrained by initial experiences.


To explore inequalities in students’ lives in college, I focus on instructional quality. Although several recent studies have illuminated inequality in students’ social and extracurricular experiences (e.g., Armstrong & Hamilton, 2013; Stuber, 2011), less is known about variation in instructional quality. More specifically, I consider two questions: First, do students from all sociodemographic groups report similar levels of instructional quality? And second, do students report changes in instructional quality during their time in college? Addressing these questions has important implications for providing a more nuanced understanding of inequality in higher education and illuminating the extent to which well-documented inequalities at entry also shape students’ experiences as they progress through college. Moreover, understanding whether and how instructional quality varies over time can provide valuable insights into what is being captured, and potentially missed, in higher education research on good practices and college impacts.


LITERATURE REVIEW


STRATIFICATION IN HIGHER EDUCATION


Concerns about equity permeate the literature on access to higher education. Students from less socioeconomically advantaged backgrounds continue to lag behind their more advantaged peers in the transition to higher education, and this gap has persisted over time despite the vast expansion of the higher education system and increased college participation of all groups (Roksa, Grodsky, Arum, & Gamoran, 2007). For example, whereas 87% of high school sophomores in 2002 whose parents had graduate/professional degrees entered college by 2006, only 53% of their peers whose parents had a high school education or less did so. Similar patterns are observed for family income, where students from high-income families are substantially more likely to enter higher education, as well as enter four-year institutions, than their less financially advantaged counterparts (Bozick & Lauff, 2007). Although differences in academic preparation help to account for some of these gaps, students from less socioeconomically advantaged families continue to lag behind their more advantaged peers even after controlling for academic preparation (e.g., Hearn, 1991; Karen, 2002; Perna & Titus, 2004; Roksa et al., 2007).


These socioeconomic inequalities have important implications for racial/ethnic differences in postsecondary transitions and outcomes. In reviewing the relevant literature, Gamoran (2001) concluded that “the most important reason for educational inequality between blacks and whites is socioeconomic” (p. 137). Thus, although descriptive statistics reveal differences in college access across students from different racial/ethnic groups, after controlling for socioeconomic background and academic preparation, African American and Hispanic high school graduates are as likely as (or in some instances more likely than) their White peers to enter higher education (Camburn, 1990; Jencks & Phillips, 1998; Roksa et al., 2007). Recent decades have also witnessed a reversal of the gender gap, with women becoming more likely than men to enter and complete higher education (e.g., Buchmann & DiPrete, 2006; Carbonaro, Ellison, & Covay, 2011).


Although the social stratification literature provides ample evidence regarding the sociodemographic inequalities in college entry (and, more recently, completion), it does not dedicate the same attention to students’ experiences during college. A small number of studies that have considered inequalities in students’ experiences during college have tended to focus on their social and extracurricular engagement, without explicit consideration of instructional quality (see a review in Stevens, Armstrong, & Arum, 2008). This is in stark contrast to the K–12 literature, which has produced extensive evidence regarding the distribution of students across different curricula and courses of varying levels of academic rigor (for reviews, see Hallinan, 1994; Kelly, 2007).


INSTRUCTIONAL QUALITY DURING COLLEGE


Although the social stratification literature has not dedicated much attention to examining instructional quality during college, higher education scholarship provides valuable insights in this regard. This article builds on the conceptualization of instructional quality developed in the college impact research.[1] In the late 1980s and early 1990s, Arthur Chickering and Zelda Gamson (1987, 1991) synthesized the existing evidence on the impact of college on students and distilled it into seven broad categories or principles for good practice in undergraduate education. The seven original principles were: (1) student–faculty contact; (2) cooperative learning among students; (3) active learning; (4) prompt feedback to students; (5) time on task; (6) high academic expectations; and (7) diversity experiences.


The influence of Chickering and Gamson’s seven principles has been extensive and lasting. For example, the National Survey of Student Engagement (NSSE), possibly the most well-known annual survey of students in the country, was originally based on questionnaire items designed to operationally define the seven good practices (Kuh, 2001). Since then, the concept of good practices has evolved and expanded (Kuh, 2008), and there is a growing body of research indicating that many of the good practices identified by Chickering and Gamson are important precursors to desirable outcomes of college (for reviews, see Kuh, Kinzie, Buckley, Bridges, & Hayek, 2006; Pascarella & Terenzini, 2005).


Research on good practices within the college impact framework is extensive and multifaceted as well as highly fragmented. Many studies examine the effects of different practices in isolation, often using small samples at one or a few institutions without adequate controls for students’ background characteristics and academic performance. Moreover, when considered in large-scale studies, instructional quality is rarely the focus of examination but instead represents one dimension of students’ academic and/or social engagement (e.g., Arum & Roksa, 2011; Astin, 1993; Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008). Instructional quality is thus often understood in a broader context of students’ college experiences, as opposed to being a sole focus of inquiry.


More important for the purposes of this study is the observation that even when researchers in this tradition focus on the quality of instruction, they tend to estimate the effects of one experience net of the others. For example, Seifert and colleagues (2014) examined the effect of academic challenge and high expectations, net of high quality of interactions with faculty and other academic and social integration indicators, whereas Pascarella, Wang, Trolian, and Blaich (2013) examined the effect of teaching organization and clarity net of deep approaches to learning. This approach is useful for identifying the relationship between specific practices and particular outcomes (i.e., “the impact” of college) but says little about the overall distribution of students’ experiences or how that may change during their time in college.


Finally, although studies in the college impact tradition control for students’ background characteristics, they typically are not focused on understanding the patterns of inequality. One recent study suggested that students from different socioeconomic backgrounds have similar access to high-quality instruction (measured as instructional clarity and organization) but that African American students report lower levels of teaching quality in the first year than their White peers. No socioeconomic or racial gaps in the quality of teaching were observed in the senior year (Trolian et al., 2014). Whether these results reflect differential exposure to a particular set of practices—instructional clarity and organization—or whether they also hold for a broader conception of instructional quality remains to be examined.


Notably, the results from the study conducted by Trolian and colleagues indicate that the teaching quality and organization in the first year of college is highly predictive of the teaching quality and organization in the senior year. The first-year teaching quality was used simply as a control in the study, but this pattern of continuity may be quite consequential for understanding students’ experiences in college. If students report low instructional quality for only a short amount of time, that may not be as consequential as when students consistently report either a low or a high level of instructional quality. Apart from using prior experiences as controls, studies of college impact have dedicated limited attention to explicitly examining the consistency of instructional quality over time.


DATA AND METHODS


In this study, I rely on data from three cohorts of students participating in the Wabash National Study of Liberal Arts Education (WNS).[2] The WNS began with incoming first-year students entering college in 2006, 2007, and 2008. Across the three waves, 43 different four-year institutions participated in the study. Institutions in the sample vary with respect to institutional type, control, size, and selectivity. Among the 43 institutions, 28 are liberal arts colleges, 6 are research universities, and 9 are regional institutions.


First-year, full-time students were invited to participate in the longitudinal study of undergraduate experiences. The student sample was selected in two ways. For larger institutions, it included a random sample of students at the institution. For smaller institutions, it included the entire incoming first-year class. In the fall of their first year of college, students completed a survey including demographic characteristics, family background, high school experiences, and college plans. The same students were followed through the end of their fourth year in college and surveyed two more times—in the spring of their first year and spring of their fourth year. In the follow-up surveys, students were asked a range of questions about their experiences in college, including their perceptions of instructional quality. This is one of the unique strengths of the WNS study: It presents a longitudinal portrait of students as they progress through college, with the same measures available at the beginning and end of college.


STUDENTS’ REPORTS OF INSTRUCTIONAL QUALITY


Studies in the college impact tradition have typically focused on specific instructional practices and have either considered only one practice at a time or estimated the effect of one practice net of the others. A few recent exceptions, however, have aimed to develop more comprehensive measures of instructional quality, such as an effort to define “a liberal arts emphasis” by combining a range of educational practices often associated with liberal arts education (Pascarella, Wolniak, Seifert, Cruce, & Blaich, 2005). One recent study that considered both specific and combined measures of good practices indicated the value of considering a composite measure (Cruce, Wolniak, Seifert, & Pascarella, 2006).


Instead of focusing on a specific practice, I aim to capture students’ general assessment of instructional quality. More specifically, I combine 16 items that measure teaching clarity and organization, course rigor, and high faculty expectations (the list of individual items is provided in the appendix). Those 16 items, originally rated on a 5-point scale ranging from very often to never, are combined if at least six scores are present, producing a scale with a Cronbach’s alpha of 0.903. In addition to combining different dimensions of instructional quality, this scale represents students’ overall perception of instruction: Students are not rating specific courses or instructors but are providing an overall evaluation of instructional quality during the academic year.
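The scale construction just described can be sketched in code. This is a minimal illustration, not the study’s actual analysis script: the function names are mine, and I assume each student’s items arrive as a 16-element list with missing responses coded as NaN. The rules mirror the text (average the 5-point items only when at least 6 of the 16 are present), and the alpha computation is the standard Cronbach formula.

```python
import numpy as np

def instructional_quality_scale(responses):
    """Average one student's 16 instructional-quality items (5-point
    scale), applying the rule described in the text: form the scale
    only if at least 6 of the 16 items were answered, else NaN."""
    vals = np.asarray(responses, dtype=float)
    answered = ~np.isnan(vals)
    if answered.sum() < 6:
        return float("nan")  # too few items to form the scale
    return float(vals[answered].mean())

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for a complete respondents-by-items matrix
    (rows = respondents, columns = items)."""
    X = np.asarray(item_matrix, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

On real data, `cronbach_alpha` applied to the 16 WNS items would be expected to return a value near the 0.903 reported above.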


While combining these 16 items creates a continuous measure of instructional quality, one of the key insights from the K–12 research on curricular differentiation is the importance of capturing categorical differences in students’ experiences. The issue is not simply whether students have a slightly better experience or whether their experience changes slightly over time (i.e., 1 point on a continuous scale) but whether they have a qualitatively different experience and whether that persists over time. I therefore divide instructional quality into three categories: low instructional quality (0.5 or more of a standard deviation below the mean), high instructional quality (0.5 or more of a standard deviation above the mean), and medium instructional quality (between 0.5 standard deviations below and above the mean). In this conceptualization, high and low categories are a full standard deviation apart, which can be reasonably assumed to represent a qualitatively different educational experience. Approximately one third of students are found in each category of instructional quality.
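The three-band split described above can be expressed directly. This is a sketch under the assumption that the cutoffs are applied to the sample’s own mean and standard deviation; the function name is illustrative, not taken from the study.

```python
import numpy as np

def categorize_quality(scale_scores):
    """Band a continuous instructional-quality scale into the three
    categories used in the text: low (at or below mean - 0.5 SD),
    high (at or above mean + 0.5 SD), and medium in between."""
    x = np.asarray(scale_scores, dtype=float)
    mu, sd = x.mean(), x.std(ddof=1)
    return np.where(x <= mu - 0.5 * sd, "low",
           np.where(x >= mu + 0.5 * sd, "high", "medium"))
```

On a roughly normal distribution these cutoffs place about a third of students in each band, consistent with the distribution reported above.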


The measure of instructional quality in this study is based on students’ responses to a survey asking about a range of their experiences in college. Although reliance on students’ self-reports of undergraduate education, including instructional quality, is a common practice in higher education literature, it raises questions about the extent to which these self-reports accurately reflect instructional practices. A recent review of the K–12 literature concluded that student ratings are valid and reliable sources of information on instructional practices and that they are related to academic achievement (Burniske & Meibaum, 2012). Pascarella and Terenzini (2005) reached the same conclusions reviewing research in higher education (see also Marsh & Roche, 1997; Pascarella, Salisbury, & Blaich, 2011). I am aware of only one large-scale project that has the capacity to validate students’ evaluations of instructional practices against observations of trained raters: the Measures of Effective Teaching (MET) project. To date, the MET project results indicate that “students seem to know effective teaching when they experience it” (MET Project, 2010, p. 9): Students’ ratings of instruction are related to their own achievement as well as to the achievement of students in other classes taught by the same instructor. The report comparing classroom observations to students’ reports of instructional practices and their achievement has not yet been released. I am not aware of similar endeavors to validate students’ responses against classroom observations in higher education, using a large number of students across institutions and majors; this is a crucial area for future research, which is discussed further in the conclusion.


INDEPENDENT VARIABLES


To consider whether reports of instructional quality vary among sociodemographic groups, I examine students’ socioeconomic background (defined by parents’ years of education, representing the highest year of education completed by either parent, and family income), gender (represented by a dummy variable for males), race/ethnicity (divided into the following categories: African American, Hispanic, Asian, and White [reference]), and English language background (a dummy variable for English not being the primary language spoken at home).


To examine the distribution of instructional quality across different sociodemographic groups, I control for key confounding variables that are potentially related to both sociodemographic characteristics and instructional quality: ACT scores, academic motivation, and college GPA.[3] All these measures are standardized with a mean of zero and a standard deviation of one. Academic motivation is an eight-item scale provided in the WNS data set that represents students’ interest in working hard, getting good grades, and engaging in challenging intellectual material (specific items included in the scale are provided in the appendix). All academic predictors are coded temporally prior to the outcome. For models estimating instructional quality at the end of the first year (i.e., the spring of the first year), academic motivation and college GPA are measured during the first semester of college (i.e., the fall of the first year). For analyses of instructional quality at the end of college, academic motivation is measured at the end of the first year, and GPA represents the cumulative GPA during the first year of college.


When considering inequalities in instructional quality, two additional factors are important to consider: institutional characteristics and college major. Prior research indicates that students from different racial/ethnic, gender, and socioeconomic groups are differentially distributed across college majors and institutions (e.g., Bowen, Chingos, & McPherson, 2009; Buchmann, 2009; Charles et al., 2009; Davies & Guppy, 1997; Goyette & Mullen, 2006; Massey, Charles, Lundy, & Fischer, 2003). In addition, institutional characteristics and college major are related to instructional quality (e.g., see a review in Pascarella & Terenzini, 2005).


All analyses include two institutional characteristics: selectivity and type. Following a common practice in the literature, I define institutional selectivity as the average ACT scores of the incoming freshman class. This measure is standardized with a mean of zero and a standard deviation of one. Institutional type is defined by mission and represented by three categories: liberal arts (with a traditional focus on undergraduate education), research institutions (with a traditional focus on research and graduate education, reference category), and regional institutions (with a traditional focus on broad access). In addition, senior-year models include college major, coded into six broad categories: arts and humanities, social sciences, natural sciences (reference), business, education and professional fields, and other majors. College major is included in the analysis of instructional quality only at the end of college because many students have not declared or concentrated their course-taking in their major in the first year.


ANALYSIS


Given the categorical definition of the outcome examined, I estimate multinomial logistic regression models, starting with students’ reports of instructional quality at the end of the first year, and followed by their reports of instructional quality at the end of the senior year. More specifically, the models estimate the probability of students reporting high and medium levels of instructional quality relative to a low level of instructional quality. For ease of representation, the tables report odds ratios, in addition to logit coefficients and standard errors.
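To make the model’s output concrete: with “low” as the reference category, the two estimated equations give linear predictors for the medium and high categories, the low category’s predictor is fixed at zero, and the three predicted probabilities follow from a softmax. A minimal sketch (the function is illustrative, not the estimation code):

```python
import math

def mnl_probabilities(eta_medium, eta_high):
    """Predicted category probabilities from a multinomial logit with
    'low' as the reference: its linear predictor is fixed at 0, so
    probabilities are a softmax over (0, eta_medium, eta_high)."""
    denom = 1.0 + math.exp(eta_medium) + math.exp(eta_high)
    return {"low": 1.0 / denom,
            "medium": math.exp(eta_medium) / denom,
            "high": math.exp(eta_high) / denom}
```

When both linear predictors are zero, each category has probability 1/3; positive coefficients shift probability mass away from the low category.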


The presented analyses are based on a sample of 3,709 seniors attending 43 four-year institutions. Both freshman- and senior-year analyses are conducted on the same sample and include students who have valid information on freshman- and senior-year indicators of instructional quality. This ensures that the results over time do not reflect different sample characteristics. Seniors in the sample differed from freshmen in expected ways: for example, they were more academically prepared and more likely to be female, be White, and come from more advantaged family backgrounds. This is what would be expected given the association of academic preparation and demographic factors with college persistence and completion (e.g., for recent reviews, see Buchmann, 2009; Gamoran, 2001; Grodsky & Jackson, 2009; Kao & Thompson, 2003). Apart from parental income, which is missing in 16% of the cases, the predictor variables are missing relatively few cases. To preserve cases with missing data, mean imputation is used: means are substituted for missing values, and a dummy variable is included in the models to designate that a substitution was made. Finally, because students are nested within institutions, all analyses are adjusted for clustering of students within schools.
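The missing-data handling described above can be sketched as follows; the helper name is mine, and missing values are assumed to be coded as NaN:

```python
import numpy as np

def mean_impute_with_flag(x):
    """Mean imputation with an indicator dummy, as described in the
    text: substitute the observed mean for missing cases and return a
    0/1 flag marking which cases were imputed, so the model can
    absorb any effect of the substitution."""
    x = np.asarray(x, dtype=float)
    missing = np.isnan(x)
    filled = np.where(missing, np.nanmean(x), x)
    return filled, missing.astype(int)
```

Including the returned flag as a covariate lets imputed cases stay in the sample without forcing them to share the intercept of fully observed cases.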


RESULTS


INSTRUCTIONAL QUALITY AT THE END OF THE FIRST YEAR


To examine instructional quality at the end of the first year of college, Table 1 shows results from a multinomial logistic regression model comparing students’ reports of high and medium levels of instructional quality relative to a low level of instructional quality. This analysis does not reveal a clear pattern of (dis)advantage with respect to students’ sociodemographic characteristics. Net of other characteristics, students’ socioeconomic background, whether measured through parental education or income, is not related to students’ reports of instructional quality at the end of the first year, nor is students’ language background. Similarly, there is no consistent pattern disadvantaging racial/ethnic minority groups. Whereas, net of other factors, Asian students have lower odds of reporting a high level of instructional quality than their White peers, no differences are observed for either African American or Hispanic students. Men are more likely to report a high, but not a medium, level of instructional quality relative to women.


Table 1. Multinomial Logistic Regression Model Predicting Instructional Quality at the End of the First Year of College [reference: low instructional quality]

| | Medium: b (SE) | Medium: odds ratio | High: b (SE) | High: odds ratio |
| --- | --- | --- | --- | --- |
| Sociodemographic characteristics | | | | |
| Male | 0.003 (0.099) | 1.003 | 0.359* (0.139) | 1.431 |
| Race/ethnicity [reference: White] | | | | |
| African American | -0.083 (0.181) | 0.920 | 0.311 (0.215) | 1.365 |
| Hispanic | -0.102 (0.157) | 0.903 | 0.365 (0.242) | 1.440 |
| Asian | -0.387 (0.197) | 0.679 | -0.584** (0.217) | 0.558 |
| Parental education | 0.015 (0.021) | 1.015 | 0.020 (0.023) | 1.020 |
| Parental income | 0.016 (0.063) | 1.016 | 0.051 (0.075) | 1.052 |
| Non-English language | 0.028 (0.252) | 1.028 | -0.103 (0.332) | 0.902 |
| Academic characteristics | | | | |
| ACT (std) | 0.111 (0.059) | 1.117 | 0.078 (0.069) | 1.081 |
| GPA (std) | 0.179** (0.040) | 1.196 | 0.290** (0.046) | 1.336 |
| Academic motivation (std) | 0.150** (0.038) | 1.162 | 0.550** (0.046) | 1.733 |
| Institutional characteristics | | | | |
| Selectivity (std) | 0.131* (0.051) | 1.139 | 0.334** (0.093) | 1.396 |
| Institutional type [reference: research] | | | | |
| Liberal arts | 0.902** (0.107) | 2.464 | 1.498** (0.207) | 4.470 |
| Regional | 0.301** (0.092) | 1.351 | 0.714** (0.214) | 2.043 |
| Intercept | -0.431 (0.341) | | -1.506** (0.412) | |

Note: std indicates that the variable is standardized with a mean of 0 and a standard deviation of 1. Models are adjusted for clustering of students within institutions. N = 3,709.

*p < 0.05. **p < 0.01.


In addition, students’ academic achievement and motivation are related to instructional quality. For example, a one-standard-deviation increase in GPA is associated with 34% higher odds of students reporting a high level of instructional quality, and 20% higher odds of reporting a medium level, relative to a low level of instructional quality. ACT scores are only weakly and not statistically significantly related to students’ reports of instructional quality.[4] Similarly, a one-standard-deviation increase in academic motivation is associated with 73% higher odds of reporting a high level of instructional quality and 16% higher odds of reporting a medium level, compared with a low level of instructional quality.
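The percentages quoted here are the standard transformation of logit coefficients: the odds ratio is exp(b), and the percent change in odds is 100·(exp(b) − 1). A quick check of this arithmetic against the GPA and motivation coefficients from Table 1 (the helper names are mine):

```python
import math

def odds_ratio(b):
    """Odds ratio implied by a logit coefficient."""
    return math.exp(b)

def pct_change_in_odds(b):
    """Percent change in the odds per one-unit (here, one-SD)
    increase in the predictor."""
    return 100 * (math.exp(b) - 1)

# Table 1, high vs. low contrast: GPA b = 0.290, motivation b = 0.550
```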


Given that this analysis controls for academic achievement and motivation, relative similarity across sociodemographic groups may not be surprising. However, this finding is notable in light of the K–12 research, which indicates that some groups of students, especially those from less socioeconomically advantaged families, experience lower quality of instruction, even after prior academic performance is taken into consideration. This continues to be the case in recent decades, when formal tracking mechanisms have been for the most part dismantled, but informal tracking continues to structure students’ instructional experiences (see a review in Kelly, 2007). For example, Lucas (2001) shows that students from more socioeconomically advantaged families are more likely to continue their education to the next grade and do so in higher academic tracks than students from less socioeconomically advantaged families, net of a range of confounding factors, including prior academic performance.


Table 1 also indicates that students attending institutions of varying type and selectivity report different levels of instructional quality. Corroborating findings from prior research (e.g., Pascarella, Cruce, Wolniak, & Blaich, 2004; Pascarella et al., 2005; Seifert, Pascarella, Goodman, Salisbury, & Blaich, 2010), students attending liberal arts colleges report higher quality of instruction than their peers at research universities. The advantage of liberal arts colleges is notable given that the model controls for students’ own ACT scores (and other indicators of academic preparation and motivation) as well as institutional selectivity. Because prior research has focused on contrasting liberal arts colleges to other types of institutions, it has not illuminated the contrast between regional institutions and research universities. Results in Table 1 indicate that once student characteristics and institutional selectivity are taken into consideration, students attending regional institutions have higher odds of reporting high and medium levels of instructional quality as opposed to a low level of instructional quality, compared with students attending research universities. This finding has important implications given concerns about the cost of college education and differential opportunities for access to regional institutions versus research universities.


Akin to institutional type, selectivity is related to instructional quality. Even after controlling for institutional type, students attending more selective institutions report higher levels of instructional quality. Although the results for institutional selectivity are more mixed in prior research, some studies indicate that institutional selectivity is related to established good practices in undergraduate education (e.g., Pascarella et al., 2006). The importance of institutional selectivity is particularly notable given that students’ own ACT scores are not statistically significant.


INSTRUCTIONAL QUALITY OVER TIME


Presented results offer a useful baseline for considering what happens as students continue on their journeys through college. Figure 1 presents the distribution of students’ reports of instructional quality over time. These patterns illustrate both the continuity and change that occur as students progress through college. Regardless of where students begin in their freshman year, approximately half of them report the same level of instructional quality in their senior year. Thus, although students experience mobility over time, where one begins is a strong predictor of where one ends up. This pattern may not be surprising given that students’ academic achievement and motivation, as well as characteristics of the institutions attended, are related to instructional quality. The question is thus whether this stability is largely a reflection of students’ individual characteristics and institutional settings or whether there is an independent relationship between students’ reports of instructional quality over time.


Figure 1. Percentage of students reporting different levels of instructional quality in their first and senior years of college


Table 2 examines this question in a multivariate framework. As was the case for the first-year models, these analyses predict the probability of reporting high and medium levels of instructional quality compared with a low level of instructional quality at the end of college. The model-based results confirm the descriptive patterns: Students’ reports of instructional quality at the beginning of college are a strong predictor of their reports of instructional quality at the end of college. The magnitude of the relationship is quite remarkable. Even after controlling for sociodemographic characteristics, academic preparation, college major, and institutional attributes, students who begin college reporting a low level of instructional quality have 92% lower odds of reporting a high level of instructional quality at the end of college and 68% lower odds of reporting a medium level of instructional quality at that time. The “stickiness” at the ends of the distribution is quite strong, with students who begin in the low category having very little chance of ending in the high category. Students who begin in the middle of the distribution with respect to instructional quality have more mobility, but the mobility is both upward and downward.
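The percentage interpretations in this paragraph follow directly from exponentiating the logit coefficients reported in Table 2. The short sketch below, using the coefficient values from the table, shows the arithmetic; it is an illustration of the standard odds-ratio conversion, not part of the study’s estimation code.

```python
import math

# Coefficients from Table 2, row "Low" first-year instructional quality
# (reference categories: low senior-year quality; high first-year quality).
b_high = -2.493    # senior-year high vs. low instructional quality
b_medium = -1.139  # senior-year medium vs. low instructional quality

or_high = math.exp(b_high)      # odds ratio, approximately 0.083
or_medium = math.exp(b_medium)  # odds ratio, approximately 0.320

# The "92% lower odds" and "68% lower odds" in the text:
print(f"high:   {1 - or_high:.0%} lower odds")    # prints "high:   92% lower odds"
print(f"medium: {1 - or_medium:.0%} lower odds")  # prints "medium: 68% lower odds"
```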


Table 2. Multinomial Logistic Regression Model Predicting Instructional Quality at the End of College [reference: low instructional quality]

                                                    Medium                      High
                                              b (SE)      Odds ratio     b (SE)      Odds ratio
Instructional quality 1st year [reference: high]
  Low                                    -1.139** (0.097)    0.320   -2.493** (0.159)    0.083
  Medium                                 -0.409** (0.120)    0.664   -1.320** (0.143)    0.267
Sociodemographic characteristics
  Male                                   -0.134 (0.085)      0.874   -0.022 (0.171)      0.978
  Race/ethnicity [reference: White]
    African American                      0.317 (0.173)      1.374    0.565* (0.227)     1.759
    Hispanic                              0.316 (0.268)      1.372    0.541* (0.252)     1.718
    Asian                                 0.191 (0.211)      1.211    0.278 (0.209)      1.320
  Parental education                     -0.009 (0.019)      0.991   -0.021 (0.026)      0.979
  Parental income                         0.001 (0.065)      1.001    0.144* (0.058)     1.155
  Non-English language                   -0.265 (0.288)      0.767   -0.514 (0.337)      0.598
Academic characteristics
  ACT (std)                               0.043 (0.073)      1.043    0.040 (0.083)      1.041
  GPA (std)                               0.032 (0.049)      1.033    0.046 (0.052)      1.047
  Academic motivation (std)               0.138* (0.060)     1.147    0.216** (0.057)    1.241
College major [reference: natural sciences]
  Arts and humanities                     0.468** (0.180)    1.596    0.584** (0.198)    1.792
  Social sciences                         0.504* (0.221)     1.656    0.560** (0.170)    1.751
  Business                                0.479* (0.198)     1.615    0.463 (0.297)      1.589
  Education and professional              0.126 (0.165)      1.135   -0.008 (0.223)      0.992
  Other fields                            0.024 (0.105)      1.024   -0.149 (0.176)      0.862
Institutional characteristics
  Selectivity (std)                       0.130 (0.069)      1.139    0.212 (0.109)      1.236
  Institutional type [reference: research]
    Liberal arts                          0.675** (0.114)    1.963    1.089** (0.146)    2.972
    Regional                              0.278* (0.141)     1.320    0.378* (0.187)     1.459
Intercept                                 0.525 (0.327)               0.586 (0.455)

Note: std indicates that the variable is standardized with a mean of 0 and a standard deviation of 1. Models are adjusted for clustering of students within institutions. N = 3,709.

*p < 0.05. **p < 0.01.


In addition to the central role played by instructional quality at the beginning of college for the patterns reported at the end of college, Table 2 reveals several other notable findings. First, Asian students are no longer disadvantaged relative to their White peers, and African American and Hispanic students are more likely to report a high level of instructional quality than White students at the end of college, net of other characteristics. These patterns are similar to those found for college entry and completion, indicating that, net of background and academic controls, racial/ethnic minority students are either as likely or more likely to experience positive educational outcomes (Adelman, 2006; Camburn, 1990; Jencks & Philips, 1998; Roksa et al., 2007). Moreover, there is no longer a gender difference, with women and men reporting similar levels of instructional quality at the end of college. Because first-year and senior-year analyses are conducted on the same sample, these patterns imply that students change their course-taking patterns over time, such that sociodemographic groups that begin college at a disadvantage gain access to similar levels of instructional quality at the end of college as their initially more advantaged peers.


One sociodemographic characteristic—parental income—remains associated with students’ reports of instructional quality at the end of college. Parental income surfaces as a relevant predictor in the senior-year models, suggesting that students from more financially advantaged families are more likely to report a high level of instructional quality in their senior year, controlling for where they started. A range of processes—from parental intervention in students’ choices and opportunities, to more advantaged students having more resources to navigate college rules and procedures and enhance their experiences along the way—may contribute to these results.


Second, academic preparation is not independently related to students’ reports of instructional quality at the end of college. Whether students entered college with higher ACT scores or started college with higher GPAs is not related to their reports of instructional quality in their senior year. Instead, as Table 1 indicates, academic performance (and specifically GPA) is related to where students begin college. These patterns imply that students’ academic performance shapes instructional quality in the first year, after which “academic inertia” carries students toward the end of college. These results speak to the substantial path dependency in academic experiences as students progress through college.


Third, although academic performance is not related to instructional quality at the end of college, academic motivation is. In addition to being a strong predictor of instructional quality in the first year of college (Table 1), the influence of academic motivation continues through the senior year. Table 2 shows that academically motivated students report higher quality of instruction at the end of college, controlling for where they started at the beginning of college. One standard deviation increase in academic motivation is associated with 24% higher odds of reporting a high level of instructional quality and 15% higher odds of reporting a medium level of instructional quality compared with experiencing a low level of instructional quality at the end of college, controlling for the quality of instruction in the first year and a host of other factors. These patterns suggest that academic performance places students on particular pathways that remain relatively stable over time but that academically motivated students can gain access to higher quality of instruction by the end of college, regardless of where they started in their first year.


These findings raise the possibility that academic motivation may be particularly beneficial for students who begin college reporting a low level of instructional quality. To consider this proposition, I included interaction terms between academic motivation and first-year instructional quality in the model: All the interaction terms were statistically significant at either the p < 0.05 or the p < 0.10 level. Although interaction effects in a logit model can lead to incorrect conclusions (Mood, 2010), supplemental analyses relying on average marginal effects (AMEs) produce substantively similar results. To facilitate interpretation of the interaction terms, Figure 2 shows predicted probabilities of students reporting a high level of instructional quality at the end of college for different categories of academic motivation and instructional quality in the first year of college. In this illustration, high academic motivation represents one standard deviation above the mean, and low academic motivation represents one standard deviation below the mean. The probabilities are estimated with all other variables held at their means.
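Predicted probabilities such as those in Figure 2 are computed in the standard way for a three-category multinomial logit with "low" as the reference outcome. The sketch below illustrates the calculation; the linear-predictor values passed in are hypothetical, not the study's fitted values.

```python
import math

def mlogit_probs(xb_medium, xb_high):
    """Predicted probabilities for a three-category multinomial logit
    with 'low' as the reference outcome. xb_* are the linear predictors
    (coefficients multiplied by covariate values and summed) for each
    contrast; the reference category's linear predictor is fixed at 0."""
    denom = 1.0 + math.exp(xb_medium) + math.exp(xb_high)
    return {
        "low": 1.0 / denom,
        "medium": math.exp(xb_medium) / denom,
        "high": math.exp(xb_high) / denom,
    }

# Hypothetical linear predictors for illustration only, e.g., a student
# one SD above the mean on motivation with other covariates at their means.
probs = mlogit_probs(xb_medium=0.8, xb_high=0.4)
print(probs)  # the three probabilities sum to 1
```

Holding all other variables at their means, as the study does, amounts to evaluating these linear predictors at the sample means of the covariates while varying only first-year instructional quality and academic motivation.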


As observed earlier, this figure confirms that students who begin college reporting a high level of instructional quality are quite likely to report the same high level of instructional quality at the end of college. Academic motivation does not play much of a role in this case—regardless of whether these students are highly academically motivated, they are similarly likely to report a high level of instructional quality at the end of college. The pattern is quite different for students who report a low level of instructional quality at the start of college. Although these students are not very likely to report a high level of instructional quality at the end of college, academic motivation is consequential for who moves up—students who are more academically motivated are more likely to end up reporting higher instructional quality at the end of college. Thus, whereas Table 2 shows that academic motivation on average is positively related to the likelihood of reporting higher instructional quality at the end of college, Figure 2 presents a more nuanced picture, indicating that academic motivation is much more beneficial for students who begin college reporting a low level of instructional quality, followed by those who begin college reporting a medium level of instructional quality. Among students who begin college reporting a low level of instructional quality, those who are highly academically motivated have twice the probability of reporting a high level of instructional quality at the end of college as compared with those who are not academically motivated (21% vs. 11%).


Figure 2. Predicted probabilities of students reporting a high level of instructional quality at the end of college, by academic motivation and instructional quality in the first year



The second half of Table 2 also indicates that college major and institutional characteristics are related to students’ reports of instructional quality at the end of college. Prior research has noted that faculty across disciplines tend to adopt distinct pedagogical styles and create unique socializing environments (Holland, 1997; Smart, Feldman, & Ethington, 2000). For example, faculty members in different disciplines tend to structure courses and interact with students differently as well as encourage specific practices such as active learning (Braxton, Olsen, & Simmons, 1998; Smart & Umbach, 2007; Umbach, 2007). Moreover, as noted with respect to the first-year findings, students attending liberal arts institutions and, to a lesser extent, those attending regional institutions, are more likely to report higher levels of instructional quality compared with their peers at research institutions. Finally, students enrolled in more selective institutions are more likely to report a higher quality of instruction, although that relationship is statistically significant only at the p < 0.10 level.


Although the relevance of college major and institutional characteristics is not surprising given prior research, it is notable that students’ reports of instructional quality in the first year continue to play a strong role in predicting instructional quality at the end of college, even after adjusting for fields of study and institutional characteristics. If college major and institutional characteristics were not included in Table 2, the coefficients for instructional quality in the first year would be only 10%–15% higher. This implies that institutional characteristics and college major play a relatively small role in explaining the relationship between instructional quality at the beginning and end of college. Stated differently, the continuity in instructional quality over time is not related to a substantial extent to the broad categories of college major or institutional attributes considered in this study.


CONCLUSION


Social stratification literature offers extensive evidence regarding the sociodemographic inequalities in access to higher education. Focusing on entry into higher education, however, leaves open the question of what happens after students enter higher education and, in particular, whether they gain access to high-quality instruction. The results presented here reveal a substantial amount of variability in college students’ reports of instructional quality. Students categorized into high and low levels of instructional quality in this study are a full standard deviation apart, and a substantial number of students (approximately one third) are found in each of those categories. Notably, the quality of instruction reported by students is not related in a systematic way to their sociodemographic characteristics. Students from different socioeconomic backgrounds and racial/ethnic groups do not report consistent advantages or disadvantages in terms of their quality of instruction, net of controls. This relatively equitable distribution of instructional quality in higher education stands in contrast to the K–12 research, which finds inequalities in instructional experiences even after adjusting for students’ prior academic performance.


The second main finding is that the distribution of students’ reports of instructional quality in higher education exhibits a high degree of path dependency. Regardless of where they begin, approximately half of the students end college reporting the same level of instructional quality as they did in their first year. More specifically, students who begin college reporting a low level of instructional quality are very unlikely to report medium and, especially, high levels of instructional quality at the end of college. When students do move up to higher levels of instructional quality, this is related in part to their academic motivation. Academic motivation surfaced as an important predictor of the quality of instruction at both the beginning and end of college. Moreover, academic motivation facilitates mobility over time toward higher levels of instructional quality and is particularly beneficial for students who begin college reporting a low level of instructional quality. The importance of academic motivation has notable implications for higher education practice, especially given indications that students’ academic motivation declines during their time in college (Blaich, 2011). Further research is needed to investigate why academic motivation declines during college and how institutional policies and practices can help to reverse that trend.


The patterns of stability and change in students’ reports of instructional quality also have implications for research on college impacts. Studies examining how different academic experiences are related to desirable educational outcomes typically measure those academic experiences at one point in time (for recent reviews of the literature, see Kuh, 2006; Pascarella & Terenzini, 2005). However, this may potentially misrepresent the relationship between students’ academic experiences and outcomes for both students who experience stability and those who experience change. When students experience stability, the effects may be compounded over time. A given student may not be reporting a low level of instructional quality at just one point in time but year after year, in which case the cumulative effect of instructional quality may be much greater than an estimate at any specific point in time. A different set of concerns arises for students who report different levels of instructional quality over time. At the point of measurement, students may be reporting a high level of instructional quality, but they may have a much lower quality of instruction earlier or later in their educational careers. An estimate at a particular point in time may thus not be an accurate representation of students’ academic experiences or their impact on subsequent outcomes. As opposed to measuring students’ experiences at one point in time, future research on college impacts may benefit from considering different strategies for capturing the totality of students’ academic experiences during college.


Moreover, future research is needed to further investigate the academic pathways students travel through college. A recent qualitative study of students’ experiences in college described how students could enter a particular pathway (whether academic or social) and stay on that pathway until graduation or until leaving the institution (Armstrong & Hamilton, 2013). Findings presented herein similarly reveal strong path dependence in students’ experiences on their journeys through college. Additional research is needed to fully explicate the mechanisms that produce this high degree of consistency. One potential mechanism is college major. Controlling for college major did not explain much of the relationship between students’ reports of instructional quality at the beginning and end of college in this study. In addition, supplemental analyses suggested that changing majors is not related to changes in instructional quality. However, given sample constraints, I could consider only six broad categories of college major, which means that there is a substantial amount of variation in instructional quality within each category. If one could examine specific fields, as opposed to aggregated categories, one may indeed find that college major plays an important role in structuring students’ academic experiences and, by extension, the quality of instruction.[5] Future research, relying on much larger samples or focusing on in-depth analyses of a small number of fields, would be particularly valuable in illuminating these processes.


It is important to conclude by drawing attention to the specific definition and measurement of instructional quality used in this study. I build on the college impacts research, which has a strong presence in the higher education literature and typically relies on students’ reports of their college experiences, including instruction. Students’ reports of instruction are regarded to be reliable and related to academic achievement (see recent reviews in Burniske & Meibaum, 2012; Pascarella et al., 2011). However, students’ reports have not been validated against objective classroom observations in large-scale studies. The MET project has the capacity to compare students’ perceptions with classroom observations in K–12 education, although final results of this component of the project have not been released yet. Because it is not feasible to train raters and observe all classrooms within an institution (or across multiple institutions), students’ self-reports will likely remain an important component of higher education research and policy development. However, to increase confidence in students’ reports, it is important for future research to validate them against other measures of instructional quality. Noting correlations between students’ self-reports of instructional quality and academic achievement is valuable but provides only indirect evidence of their accuracy. Projects that compare students’ perceptions of instruction with instructors’ reports and observations of trained raters would be particularly illuminating in providing stronger evidence regarding the quality of students’ self-reports.


In addition to measurement, the approach utilized in this study is conceptually distinct from that used by scholars focusing explicitly on the process of how students learn. The distinction between the two approaches was vividly described by Anna Neumann in her presidential address at the 2012 meeting of the Association for the Study of Higher Education (ASHE) (Neumann, 2014). Understanding the process of how students learn draws attention to classroom interactions and engagement with ideas. This tradition emphasizes the importance of subject-specific content as well as the crucial role played by students’ prior knowledge and experience in their learning (e.g., Bransford, Brown, & Cocking, 2000; Shulman, 2004a, 2004b). Learning occurs when “a student acknowledges and works through differences between her or his prior views and beliefs and new ideas that instructors or texts represent” (Neumann, 2014, p. 251). This process is not only cognitive but also deeply personal, given that students’ experiences and knowledge are embedded within families and communities. The conception of teaching quality in this tradition rests on teachers creating environments that provide encounters with ideas, surface students’ prior knowledge, and support students in working through the cognitive and emotional processes associated with encountering new ideas.


The two approaches, although largely distinct because of both conceptual and methodological differences, can present complementary insights about instructional quality. Findings of the present study indicate that students’ reports of their overall instructional experiences are reasonably equitably distributed across different sociodemographic groups. This does not preclude the existence of specific ways and contexts in which inequalities are perpetuated. To unearth these processes, studies of classroom interactions would be particularly valuable because it is in those interactions that personal backgrounds are likely to be particularly salient. Overcoming the chasm between the two scholarly traditions—one focusing on college impacts and the other on the process of how students learn—presents an important direction for future research. Understanding how interactions in the classroom complement or vary from the broad patterns of engagement and students’ reports of instructional quality can provide invaluable insights into both how students learn and how they experience college.


Acknowledgements

The author thanks the Spencer Foundation for supporting this research. An earlier version of this paper was presented at the Research Committee on Social Stratification and Mobility (RC28) meeting, Trento, Italy, May 2013.


Notes

1. The benefits and limitations of this approach, as well as a discussion of an alternative approach, are included in the data/methods and conclusion sections.

2. For more information on the Wabash Study, see http://www.liberalarts.wabash.edu/study-overview/.

3. In addition to ACT scores, one would ideally control for high school GPA. However, the high school GPA variable in the WNS data set is reported by students and provided only in 5 categories, with almost two thirds of the sample concentrated in the top category (representing grades of A- to A+). Given this lack of variation and precision, I include the measure of college GPA in presented analyses. Previous research indicates that high school and college GPAs are relatively highly correlated and that even net of a range of different factors, including background characteristics and college experiences, high school GPA is a strong predictor of college GPA (e.g., see Charles, Fischer, Mooney, & Massey, 2009).

4. GPA and ACT scores are correlated at r = 0.390, p < 0.01 in this sample, so if entered on their own, each variable has a stronger relationship to the quality of instruction.

5. This is not a limitation only of the WNS study but of most multi-institution longitudinal studies that do not have an adequate number of cases to disaggregate college major into detailed categories.


References

Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education.


Armstrong, E. A., & Hamilton, L. T. (2013). Paying for the party: How college maintains inequality. Cambridge, MA: Harvard University Press.


Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press.


Astin, A. W. (1993). What matters in college? Four critical years revisited. San Francisco, CA: Jossey-Bass.


Blaich, C. (2011). How do students change over four years of college? Crawfordsville, IN: Center of Inquiry in the Liberal Arts at Wabash College. Retrieved from http://www.liberalarts.wabash.edu/storage/4-year-change-summary-website.pdf


Bowen, W. G., Chingos, M. M., & McPherson, M. S. (2009). Crossing the finish line: Completing college at America's public universities. Princeton, NJ: Princeton University Press.


Bozick, R., & Lauff, E. (2007). Education Longitudinal Study of 2002 (ELS:2002): A first look at the initial postsecondary experiences of the sophomore class of 2002 (NCES 2008-308). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.


Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.


Braxton, J. M., Olsen, D., & Simmons, A. (1998). Affinity disciplines and the use of principles of good practice for undergraduate education. Research in Higher Education, 39, 299–318.


Breen, R., & Jonsson, J. O. (2000). Analyzing educational careers: A multinomial transition model. American Sociological Review, 65, 754–772.


Buchmann, C. (2009). Gender inequalities in the transition to college. Teachers College Record, 111, 2320–2346.


Buchmann, C., & DiPrete, T. A. (2006). The growing female advantage in college completion: The role of family background and academic achievement. American Sociological Review, 71, 515–541.


Burniske, J., & Meibaum, D. (2012). The use of student perceptual data as a measure of teaching effectiveness. Austin: Texas Comprehensive Center at SEDL.


Camburn, E. M. (1990). College completion among students from high schools located in large metropolitan areas. American Journal of Education, 98, 551–569.


Carbonaro, W., Ellison, B., & Covay, E. (2011). Explaining the gender gap in college entry and completion. Social Science Research, 40, 120–135.


Charles, C., Fischer, M., Mooney, M., & Massey, D. (2009). Taming the river: Negotiating the academic, financial, and social currents in selective colleges and universities. Princeton, NJ: Princeton University Press.


Chickering, A., & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39, 3–7.


Chickering, A., & Gamson, Z. (1991). Applying the seven principles for good practice in undergraduate education. San Francisco, CA: Jossey-Bass.


Cruce, T. M., Wolniak, G. C., Seifert, T. A., & Pascarella, E. T. (2006). Impacts of good practices on cognitive development, learning orientations, and graduate degree plans during the first year of college. Journal of College Student Development, 47, 365–383.


Davies, S., & Guppy, N. (1997). Fields of study, college selectivity, and student inequalities in higher education. Social Forces, 75, 1417–1438.


Deil-Amen, R., & Lopez-Turley, R. (2007). A review of the transition to college literature in sociology. Teachers College Record, 109, 2324–2366.


Deil-Amen, R., & Rosenbaum, J. (2003). The social prerequisites of success: Can college structure reduce the need for social know-how? The ANNALS of the American Academy of Political and Social Science, 586, 120–143.


Delbanco, A. (2012). College: What it was, is, and should be. Princeton, NJ: Princeton University Press.


Gamoran, A. (2001). American schooling and educational inequality: A forecast for the 21st century. Sociology of Education, 75, 135–153.


Goyette, K., & Mullen, A. (2006). Who studies the arts and sciences? Social background and the choice and consequence of undergraduate field of study. Journal of Higher Education, 77, 497–538.


Grodsky, E., & Jackson, E. (2009). Social stratification in higher education. Teachers College Record, 111(10), 2347–2384.


Hallinan, M. (1994). Tracking: From theory to practice. Sociology of Education, 67, 79–91.


Hearn, J. C. (1991). Academic and nonacademic influences on the college destinations of 1980 high school graduates. Sociology of Education, 64, 158–171.


Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.


Jencks, C., & Philips, M. (Eds.). (1998). The Black-White test score gap. Washington, DC: Brookings Institution Press.


Kao, G., & Thompson, J. (2003). Racial and ethnic stratification in educational achievement and attainment. Annual Review of Sociology, 29, 417–442.


Karen, D. (2002). Changes in access to higher education in the United States: 1980–1992. Sociology of Education, 75, 191–210.


Kelly, S. (2007). Social class and tracking within schools. In L. Weis (Ed.), The way class works (pp. 210–224). New York, NY: Routledge/Taylor Francis.


Kerckhoff, A. C. (1993). Diverging pathways: Social structure and career deflections. Cambridge, England: Cambridge University Press.


Kuh, G. D. (2001). Assessing what really matters to student learning: Inside the National Survey of Student Engagement. Change, 33, 10–17.


Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: AAC&U.


Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on college grades and persistence. Journal of Higher Education, 79, 540–563.


Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2006). What matters to student success: A review of the literature. Washington, DC: National Postsecondary Education Cooperative.


Lucas, S. (2001). Effectively maintained inequality: Education transitions, track mobility, and social background effects. American Journal of Sociology, 106, 1642–1690.


Marsh, H. W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52, 1187–1197.


Massey, D., Charles, C., Lundy, G., & Fischer, M. (2003). The source of the river: The social origins of freshmen at America’s selective colleges and universities. Princeton, NJ: Princeton University Press.


MET project. (2010). Learning about teaching: Initial findings from the Measures of Effective Teaching project (Research paper). Seattle, WA: Bill & Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/Preliminary_Findings-Research_Paper.pdf


Mood, C. (2010). Logistic regression: Why we cannot do what we think we can do, and what we can do about it. European Sociological Review, 26, 67–82.


Neumann, A. (2014). Staking a claim on learning: What we should know about learning in higher education and why. Review of Higher Education, 37, 249–267.


Pascarella, E. T., Cruce, T., Umbach, P. D., Wolniak, G. C., Kuh, G. D., Carini, R. M., . . . Zhao, C.-M. (2006). Institutional selectivity and good practices in undergraduate education: How strong is the link? Journal of Higher Education, 77(2), 251–285.


Pascarella, E., Cruce, T., Wolniak, G., & Blaich, C. (2004). Do liberal arts colleges really foster good practices in undergraduate education? Journal of College Student Development, 45, 57–74.


Pascarella, E., Salisbury, M., & Blaich, C. (2011). Exposure to effective instruction and college student persistence: A multi-institutional replication and extension. Journal of College Student Development, 52, 4–19.


Pascarella, E., & Terenzini, P. (2005). How college affects students (Vol. 2): A third decade of research. San Francisco, CA: Jossey-Bass.


Pascarella, E., Wang, J. S., Trolian, T., & Blaich, C. (2013). How the instructional and learning environments of liberal arts colleges enhance cognitive development. Higher Education, 66, 569–583.


Pascarella, E., Wolniak, G., Seifert, T., Cruce, T., & Blaich, C. (2005). Liberal arts colleges and liberal arts education: New evidence on impacts. San Francisco, CA: Jossey-Bass.


Perna, L. W., & Titus, M. A. (2004). Understanding differences in the choice of college attended: The role of state public policies. Review of Higher Education, 27, 501–525.


Robinson, K. (2011). The rise of choice in the U.S. university and college: 1910–2005. Sociological Forum, 26, 601–622.


Roksa, J., Grodsky, E., Arum, R., & Gamoran, A. (2007). Changes in higher education and social stratification in the United States. In Y. Shavit, R. Arum, & A. Gamoran (Eds.), Stratification in higher education: A comparative study (pp. 165–191). Stanford, CA: Stanford University Press.


Seifert, T., Gillig, B., Hanson, J., Pascarella, E., & Blaich, C. (2014). The conditional nature of high impact/good practices on student learning outcomes. Journal of Higher Education, 85, 531–564.


Seifert, T., Pascarella, E., Goodman, K., Salisbury, M., & Blaich, C. (2010). Liberal arts colleges and good practices in undergraduate education: Additional evidence. Journal of College Student Development, 51, 1–22.


Shulman, L. S. (2004a). Teaching as community property: Essays on higher education. San Francisco, CA: Jossey-Bass.


Shulman, L. S. (2004b). The wisdom of practice: Essays on learning, teaching, and learning to teach. San Francisco, CA: Jossey-Bass.


Smart, J. C., Feldman, K. A., & Ethington, C. A. (2000). Academic disciplines: Holland’s theory and the study of college students and faculty. Nashville, TN: Vanderbilt University Press.


Smart, J. C., & Umbach, P. D. (2007). Faculty and academic environments: Using Holland’s theory to explore differences in how faculty structure undergraduate courses. Journal of College Student Development, 48, 183–195.


Stevens, M., Armstrong, E., & Arum, R. (2008). Sieve, incubator, temple, hub: Empirical and theoretical advances in the sociology of higher education. Annual Review of Sociology, 34, 127–151.


Stuber, J. (2011). Inside the college gates: How class and culture matter in higher education. New York, NY: Lexington Books, Rowman & Littlefield.


Trolian, T., Kilgo, C., Pascarella, E., Roksa, J., Blaich, C., & Wise, K. (2014, November). Race and exposure to good teaching during college. Paper presented at the annual meeting of the Association for the Study of Higher Education, Washington, DC.


Umbach, P. D. (2007). Faculty cultures and college teaching. In R. P. Perry & J. C. Smart (Eds.), The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 263–318). New York, NY: Springer.


APPENDIX


Table A1. Items Included in the Measure of Instructional Quality (Cronbach’s alpha = 0.903)

 

How often faculty asked challenging questions in class

How often faculty asked R to show how a particular course concept could be applied to an actual problem or situation

How often faculty asked R to point out any fallacies in basic ideas, principles, or points of view presented in the course

How often faculty asked R to argue for or against a particular point of view

How often faculty challenged R’s ideas in class

How often students challenged each other’s ideas in class

Frequency that faculty gave clear explanations

Frequency that faculty made good use of examples and illustrations to explain difficult points

Frequency that faculty effectively reviewed and summarized the material

Frequency that faculty interpreted abstract ideas and theories clearly

Frequency that faculty gave assignments that helped in learning the course material

Frequency that the presentation of material was well organized

Frequency that faculty were well prepared for class

Frequency that class time was used effectively

Frequency that course goals and requirements were clearly explained

Frequency that faculty had a good command of what they were teaching

Note: All items were rated on a 5-point scale ranging from very often to never.
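The scale reliability reported in the table title (Cronbach's alpha = 0.903) summarizes how consistently the 16 items measure a single construct. As an illustration only, the sketch below shows the standard Cronbach's alpha computation in Python; the data matrix is hypothetical (it is not the WNS data), and the small example uses 4 items rather than the full 16.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of scale total)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items, each rated on a 1-5 scale
scores = np.array([
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [2, 2, 2, 3],
    [4, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 3, 4],
])
print(round(cronbach_alpha(scores), 3))
```

Because the hypothetical items above are highly correlated across respondents, the resulting alpha is high, mirroring the strong internal consistency reported for the instructional quality measure.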

 
 

Table A2. Items Included in the Academic Motivation Scale

 

I am willing to work hard in a course to learn the material even if it won’t lead to a higher grade.

When I do well on a test, it is usually because I am well prepared, not because the test is easy.

I frequently do more reading in a class than is required simply because it interests me.

I frequently talk to faculty outside of class about ideas presented during class.

Getting the best grades I can is very important to me.

I enjoy the challenge of learning complicated new material.

My academic experiences (i.e., courses, labs, studying, discussions with faculty) will be the most important part of college.

My academic experiences (i.e., courses, labs, studying, discussions with faculty) will be the most enjoyable part of college.

Note: All items were rated on a 5-point scale ranging from strongly disagree to strongly agree.



Cite This Article as: Teachers College Record Volume 118 Number 2, 2016, p. 1-28
http://www.tcrecord.org/library ID Number: 18231
