Male Program Assessment for College Excellence (M-PACE): An Initial Validation of the Instrument's Constructs
by Soua Xiong, J. Luke Wood & Frank Harris III - July 12, 2018
The Male Program Assessment for College Excellence (M-PACE) is an outcomes-based assessment tool that is designed to assess the effectiveness of programs and initiatives serving men of color in community colleges. This current investigation represents an initial validation of the non-cognitive scales within the instrument.
Over the past decade, colleges and universities have become increasingly attentive to the disproportionate outcomes experienced by men of color, particularly Black and Latino males. In community colleges, where these men are most highly concentrated, only 25.7% of Black males and 24.5% of Latino males complete their goals (e.g., earning a certificate or degree, transferring) within three years. In contrast, 30.9% of White males complete their goals in the same time frame (BPS, 2014). These data indicate that goal completion rates are low for male community college students overall; given the even lower rates for Black and Latino males, the current study focuses on providing insights specific to male students of color.
Informed by this disparate outcomes data, community colleges have implemented programming designed to enhance the success of college men of color. This programming, referred to as minority male initiatives (MMIs), often consists of retention programs that serve men of color (Wood, Reid, Harris III, & Xiong, 2016). In response to the proliferation of programs taking place around the nation, the American Association of Community Colleges (AACC) launched the Minority Male Initiative Database to serve as a clearinghouse for information on these programs (Christian, 2010).
However, the efficacy of programs serving college men of color is questionable. Specifically, Harper (2014) noted that MMIs (as a whole) have contributed minimally to success outcomes for men of color. In fact, based on his analysis of these programs over the past 15 years, he concluded that the interventions employed were "flimsy and fragmented" at best (p. 126). He argued that most programming efforts among MMIs have been employed without standards or a comprehensive strategic plan, and without collective institutional sensemaking involving all key stakeholder groups, particularly Black male students. Similarly, Wood (2011) critiqued the utility of MMI efforts, noting that needs and outcomes assessments were rarely used to guide the development of programs or to assess their effectiveness. In particular, he noted that the lack of outcomes assessment inhibited programs' ability to improve the utility of their interventions.
In response, the Minority Male Community College Collaborative (M2C3), a national research and practice center at San Diego State University, developed the Male Program Assessment for College Excellence (M-PACE). The M-PACE is an outcomes-based assessment tool designed to assess the effectiveness of programs and initiatives serving men of color in community colleges. The current investigation represents an initial examination of the construct validity of this instrument.
ABOUT THE M-PACE
The M-PACE is designed to measure 12 outcomes commonly employed by MMIs. Keflezighi, Sebahari, and Wood (2016) conducted a content analysis of programs serving men of color as identified in the AACC database. In a review of 129 community college MMIs, they identified commonly employed interventions, goals, and outcomes for these programs. They determined that MMIs most commonly sought to improve student success outcomes for men of color through leadership development activities, mentoring programs, teaching college success skills, service learning projects, and tutoring. Most relevant to the current study, they created a curriculum alignment matrix that identified how recurrent outcomes sought by MMIs aligned with interventions in the field. This matrix allowed them to delimit their initial listing to the most salient outcomes.
Their final analysis identified seven affective outcomes (i.e., sense of belonging, academic self-efficacy, locus of control, positive racial regard, personal self-confidence, self-esteem, and academic resilience) and six performance outcomes (i.e., use of academic services, engagement with faculty, persistence, achievement (GPA), graduation, and transfer). Many of the items were operationalized as latent variables, including all of the affective outcomes as well as engagement and use of services. Engagement was operationally defined as how often students interacted with faculty members about academic and non-academic matters inside and outside of the classroom. Use of services was operationally defined as how often students used campus resources. The operationalization of these variables was informed by extant scholarship on college men of color (e.g., Bush & Bush, 2010; Flowers, 2006; Hagedorn, Maxwell, & Hampton, 2001; Harris & Harper, 2008; Vasquez Urias, 2012; Wood, 2012; Wood & Essien-Wood, 2012; Wood & Harris, 2013).
These outcomes were then appropriated by Wood et al. (2016), who developed the M-PACE as an assessment tool that could standardize measurement of these outcomes across college campuses. Their instrument employed 52 individual items to assess the latent variables. The instrument was designed for online distribution to male students who are recipients of MMI programming. Colleges employing the instrument can add items and scales to assess outcomes specific to their campus programming that are not available in the current selection. Their initial validation, which focused on the content validity of the M-PACE, demonstrated strong content validity scores for almost all individual items and scale indexes. The one exception was socio-emotional intelligence, which demonstrated moderate validity. This manuscript advances the development of the instrument by focusing on construct validity results from a pilot of the instrument.
Data employed in this study were derived from the M-PACE. The M-PACE was distributed to certificate-, degree-, and transfer-seeking men at a large, urban community college in the southern United States. The sample comprised 490 male participants. The racial/ethnic composition of these men was as follows: 53.8% Black, 20.8% Latino, 3.9% White, 12.3% Asian/Pacific Islander, and 9.2% all other. Slightly more than half of the participants (54.7%) attended community college full-time, while 45.3% were enrolled less than full-time. While 45.2% of the respondents were between 18 and 24 years old, many were older, consistent with the broader community college age demographic: 17.1% were between 25 and 31 years old, 13.6% were between 32 and 38 years old, and the remainder were 39 years of age or older.
The analyses reported in this manuscript focus specifically on the non-cognitive scales in the M-PACE. Because most goals and outcomes of MMIs are non-cognitive (Keflezighi et al., 2016), the non-cognitive scales were the focus of this initial validation study. A total of 20 items were employed. These items were intended to measure academic resilience, self-esteem, locus of control, personal self-confidence, and academic self-efficacy. The items were presented to participants in the questionnaire using a six-point scale of agreement, ranging from strongly disagree to strongly agree. Data were examined using principal axis exploratory factor analysis (EFA) with Promax rotation. EFA is a multivariate technique that uses variation and covariation among items to determine underlying structures in the data (Green & Salkind, 2009). In this case, the researchers sought to identify constructs evident in the M-PACE. Promax, an oblique rotation, was employed due to the expected correlation among the intended scales. Internal consistency of the items within each construct was assessed using Cronbach's alpha coefficients.
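The extraction step of principal axis factoring can be sketched in a few lines: the diagonal of the item correlation matrix is replaced with initial communality estimates (squared multiple correlations) before eigendecomposition. The illustration below is a minimal, hypothetical sketch on synthetic data; it performs a single pass rather than the iterated procedure and omits the Promax rotation that would follow in practice.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: 490 respondents, 20 items driven by 5 latent constructs
latent = rng.normal(size=(490, 5))
X = np.repeat(latent, 4, axis=1) + 0.5 * rng.normal(size=(490, 20))

R = np.corrcoef(X, rowvar=False)                    # 20 x 20 item correlation matrix
# Principal axis factoring: replace diag(R) with squared multiple correlations
smc = 1 - 1 / np.diag(np.linalg.inv(R))
R_reduced = R - np.diag(np.diag(R)) + np.diag(smc)

eigvals, eigvecs = np.linalg.eigh(R_reduced)        # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = 5                                               # factors retained
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])    # unrotated loadings
print(loadings.shape)                               # (20, 5)
```

A full analysis would iterate the communality estimates to convergence and then apply an oblique rotation such as Promax to obtain the pattern and structure matrices reported below.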
While items for five constructs were initially developed, the researchers still explored the dimensionality of the items to determine the total number of factors for rotation. Three primary criteria were employed to determine the total number of factors: an exploration of uni-dimensionality (due to the perceived interrelationship among the pre-hypothesized heuristic groupings), the scree test, and the interpretability of the factor solution. The scree plot indicated five factors before the sharp line of descent leveled off. Similarly, five factors demonstrated eigenvalues of 1 or greater.
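Both the scree data and the eigenvalue-greater-than-one (Kaiser) criterion come directly from the eigenvalues of the item correlation matrix. The sketch below uses entirely synthetic data, constructed to mimic the study's structure (five latent factors, four items each):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical illustration: 500 respondents, 20 items, 5 latent factors
latent = rng.normal(size=(500, 5))
items = np.repeat(latent, 4, axis=1) + 0.5 * rng.normal(size=(500, 20))

R = np.corrcoef(items, rowvar=False)            # 20 x 20 item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # sorted eigenvalues (the scree data)
n_retained = int((eigvals >= 1.0).sum())        # Kaiser criterion
print(n_retained)                               # 5 for this synthetic structure
```

Plotting `eigvals` against their rank reproduces the scree plot; the "elbow" after the fifth eigenvalue corresponds to the sharp line of descent described above.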
The Kaiser-Meyer-Olkin measure of sampling adequacy was strong (.94), indicating a sufficient number of items for the factors. Bartlett's test of sphericity was significant (χ² = 76.02, p < .001). All communalities were above the predetermined threshold of .50, with a lowest initial communality of .58. After rotation, the first factor accounted for 49.8% of the item variance; the remaining variance accounted for by each factor was as follows: Factor 2, 9.6%; Factor 3, 7.1%; Factor 4, 4.9%; and Factor 5, 4.2%. Collectively, the factors accounted for 75.6% of the cumulative item variance.
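Bartlett's test statistic is a simple function of the determinant of the correlation matrix: χ² = -((n - 1) - (2p + 5)/6) ln|R| on p(p - 1)/2 degrees of freedom. The sketch below uses synthetic data, with the sample size and item count chosen to mirror the study; the numbers it produces are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 490, 20                                   # respondents and items, as in the study
latent = rng.normal(size=(n, 5))
items = np.repeat(latent, 4, axis=1) + 0.5 * rng.normal(size=(n, p))

R = np.corrcoef(items, rowvar=False)
# Bartlett's test of sphericity: H0 says R is an identity matrix (items uncorrelated)
sign, logdet = np.linalg.slogdet(R)
chi2 = -(n - 1 - (2 * p + 5) / 6) * logdet
df = p * (p - 1) / 2
p_value = stats.chi2.sf(chi2, df)
print(p_value < .001)                            # True: correlations are not spherical
```

A significant result licenses factor analysis at all; if the items were uncorrelated, there would be no common variance to factor.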
The results from the rotated pattern matrix using Promax with Kaiser normalization are presented in Table 1. Additionally, the factor correlation matrix is presented in Table 2. No items loaded on more than one factor. Pattern loading ranges for each factor and their operational names are as follows: Factor 1, Academic Resilience (.78 to .93); Factor 2, Self-Esteem (.60 to .96); Factor 3, Locus of Control (.75 to .89); Factor 4, Personal Self-Confidence (.81 to .91); and Factor 5, Academic Self-Efficacy (.56 to .97). Results from the structure matrix for loadings greater than .70 are also depicted in parentheses. These structure matrix loadings affirm the results from the pattern loadings.
Table 1. Retained Loadings From the Pattern Matrix
To determine the reliability of each identified factor, the items within each factor were examined using Cronbach's alpha. Each factor demonstrated strong reliability at .90 or higher: academic resilience, α = .94; self-esteem, α = .91; locus of control, α = .90; personal self-confidence, α = .94; and academic self-efficacy, α = .94. Given that many campuses would employ the M-PACE with racial/ethnic-specific populations, the researchers were interested in the reliability of the constructs across ethnic groups. As a result, subsequent alpha coefficients were calculated for Asian/Pacific Islander, Black, and Latino respondents. Responses from White and all other students were not included due to limited sample sizes. Table 3 depicts the coefficient alphas for each group. Across all groups examined, all Cronbach's alphas exhibited strong reliability at .85 or above.
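Cronbach's alpha is straightforward to compute from raw responses: it compares the sum of the item variances to the variance of the summed scale score. The sketch below uses synthetic data standing in for one four-item subscale; in an actual check, each subscale's response matrix would be passed in turn.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=(490, 1))                # one latent construct
scale = trait + 0.5 * rng.normal(size=(490, 4))  # four items tapping that construct
print(round(cronbach_alpha(scale), 2))
```

The subgroup reliabilities in Table 3 follow the same computation, applied after filtering the response matrix to each racial/ethnic group.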
Table 3. Coefficient Reliability for Non-Cognitive Constructs, by Racial/Ethnic Group
Exploratory factor analysis was employed to examine the non-cognitive scales within the M-PACE, and five constructs were identified: academic self-efficacy, locus of control, personal self-confidence, self-esteem, and academic resilience. Each construct consisted of four items, and the analysis indicated that there was a sufficient number of items for each non-cognitive construct. Additionally, each scale exhibited strong internal consistency for all men in the sample as well as across several racial/ethnic groups. While the non-cognitive scales were reliable and valid for Asian/Pacific Islander, Black, and Latino college men, the reliability and validity of the constructs for other racial/ethnic groups were not computed due to limited sample sizes. As such, further research should examine the validity and reliability of the M-PACE with a larger sample and greater racial/ethnic diversity. In addition, future research should employ confirmatory factor analysis to verify the factor structure of the non-cognitive constructs within the instrument. Furthermore, only the non-cognitive constructs were examined in this study; therefore, the validity and reliability of the other constructs within the M-PACE should also be examined.
Lastly, while findings from this study provide initial support for construct validity, the M-PACE should be subjected to further construct validation analysis. Future research should also seek to establish convergent and discriminant validity for the non-cognitive constructs (Messick, 1995). Convergent validity confirms that measures of theoretically related constructs are, in fact, correlated, while discriminant validity confirms that measures of theoretically unrelated constructs are not. As such, subsequent analyses should employ a multitrait-multimethod comparison in a larger sample to examine how each construct relates to other scales designed to measure similar or different non-cognitive constructs (Campbell & Fiske, 1959). These analytic procedures will be particularly important to further establish the validity and reliability of the non-cognitive scales within the M-PACE. Moreover, these procedures can also serve as general guidelines for others seeking to systematically establish the validity and reliability of their constructs and instruments.
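The logic of these convergent and discriminant checks can be illustrated with a toy simulation (all variables below are hypothetical): two measures of the same construct, obtained by different methods, should correlate highly, while measures of distinct constructs should not.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
efficacy = rng.normal(size=n)                  # latent trait 1 (e.g., self-efficacy)
esteem = rng.normal(size=n)                    # unrelated latent trait 2
# two hypothetical measures of the same trait, via different methods
eff_a = efficacy + 0.4 * rng.normal(size=n)
eff_b = efficacy + 0.4 * rng.normal(size=n)
est_a = esteem + 0.4 * rng.normal(size=n)

convergent = np.corrcoef(eff_a, eff_b)[0, 1]   # same trait: should be high
discriminant = np.corrcoef(eff_a, est_a)[0, 1] # different traits: should be low
print(convergent > 0.7, abs(discriminant) < 0.2)  # True True
```

In a full multitrait-multimethod matrix, every trait-method combination is correlated with every other, and the pattern of high same-trait and low cross-trait correlations is examined jointly.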
Nonetheless, the M-PACE has utility for outcomes assessment and is recommended as a tool to assess programs and initiatives serving college men of color. In particular, community colleges can use the instrument to inform policy, practice, and programming efforts that advance success outcomes for men of color. The M-PACE should be used to assess the effectiveness of current interventions employed on college campuses. Community colleges should first administer the M-PACE to establish a baseline understanding of their interventions and then proceed with multiple administrations of the assessment over time. Through this process, colleges will gain insight into how the interventions perform over time and be able to improve their utility. In addition to assessing the effectiveness of current interventions, community colleges could also use the survey instrument to guide the development of future interventions. With the development of the M-PACE, it is hoped that community colleges will now have a valid and reliable tool to assess the efficacy of programs and initiatives serving college men of color.
Bush, E. C., & Bush, L. (2010). Calling out the elephant: An examination of African American male achievement in the community colleges. Journal of African American Males in Education, 1(1), 40–62.
BPS (2014). Cumulative retention and attainment at first institution through 2013-14 by Attendance intensity through June 2014, Race/ethnicity (with multiple) and gender, for Sector of first institution - 10 categories 2011-12 (Public 2-year). Washington, DC: National Center for Education Statistics, Beginning Postsecondary Students Longitudinal Study.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105.
Christian, K. (2010). AACC launches minority male student success database. Washington, DC: American Association of Community Colleges. Retrieved May 12 from http://www.aacc.nche.edu/newsevents/News/articles/Pages/012220101.aspx
Flowers, L. A. (2006). Effects of attending a 2-year institution on African American males' academic and social integration in the first year of college. Teachers College Record, 108(2), 267–286.
Hagedorn, S. L., Maxwell, W., & Hampton, P. (2001). Correlates of retention for African-American males in the community college. Journal of College Student Retention, 3(3), 243–263.
Harper, S. R. (2014). (Re)setting the agenda for college men of color: Lessons learned from a 15-year movement to improve Black male student success. In R. A. Williams (Ed.), Men of color in higher education: New foundations for developing models for success (pp. 116–143). Sterling, VA: Stylus.
Harris III, F., & Harper, S. R. (2008). Masculinities go to community college: Understanding male identity socialization and gender role conflict. New Directions for Community Colleges, 142, 25–35.
Keflezighi, F., Sebahari, L., & Wood, J. L. (2016). An analysis of programs serving men of color in the community college: An examination of funding streams, interventions, and objectives. Research & Practice in Assessment, 10, 55–60.
Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice, 14(4), 5–8.
Vasquez Urias, M. (2012). The impact of institutional characteristics on Latino male graduation rates in community colleges. Annuals of the Next Generation, 3, 1–12.
Wood, J. L. (2011, August 5). Developing successful Black male initiatives. Community College Daily. Retrieved from http://www.ccdaily.com/Pages/Opinions/Developing-successful-Black-male-programs-and-initiatives.aspx
Wood, J. L. (2012). Leaving the 2-year college: Predictors of Black male collegian departure. The Journal of Black Studies, 43(3), 303–326.
Wood, J. L., & Essien-Wood, I. R. (2012). Capital identity projection: Understanding the psychosocial effects of capitalism on Black male community college students. Journal of Economic Psychology, 33(5), 984–995.
Wood, J. L., & Harris III, F. (2013). Community college survey of men: An initial validation of the instrument's noncognitive outcomes construct. Community College Journal of Research and Practice, 37(4), 333–338.
Wood, J. L., Reid Jr., D. O., Harris III, F., & Xiong, S. (2016). Male program assessment for college excellence (M-PACE): Content validation summary. Community College Journal of Research and Practice, 40(9), 802–805.