Background/Context: Educational researchers frequently study the impact of treatments or interventions on educational outcomes. A critical aspect of such investigations involves determining whether treatment effects vary across student subgroups, such as those defined by race/ethnicity, sex/gender, socioeconomic status, and disability status. However, estimates of intervention effects for subgroups defined by disability status can be misleading when researchers control for prior achievement or other measures of academic ability. Estimating intervention effects for students with disabilities is further complicated by the fact that disability status is often defined and measured by whether a student has an Individualized Education Program (IEP), masking important variation in abilities related to academic achievement and in the services students receive.
Purpose/Objective/Research Question/Focus of Study: This paper describes methodological challenges in estimating effects of educational interventions for students with disabilities, provides an applied example using data from an innovative state-level educational intervention, and concludes with implications for policy and practice.
Research Design: The analyses presented here come from a larger secondary analysis evaluating the impact of an innovative state assessment and accountability program, New Hampshire’s Performance Assessment of Competency Education (PACE) pilot program (2014-2016), on eighth-grade student academic achievement.
Findings/Conclusions/Recommendations: The estimated effects of the PACE pilot program on eighth-grade achievement for students with and without disabilities differ depending on whether prior academic achievement is included as a control variable. When we controlled for prior academic achievement, the PACE program appeared to narrow or even reverse the achievement gap between students with and without disabilities; when prior achievement was not included, the gap was attenuated but not reversed. Further investigation revealed limited overlap in the distributions of prior achievement for students with and without disabilities, which affects estimates of program effects when prior achievement is controlled. Consistent with other studies, we employed a dichotomous measure of disability (IEP vs. no IEP). However, dichotomizing students by IEP status conceals important variability in cognitive skills related to achievement and thus obscures how educational interventions affect these students. We recommend that researchers investigating the impact of large-scale educational interventions place greater emphasis on understanding how these programs affect students with disabilities. Importantly, our work underscores the problematic analytic and interpretive issues that can arise when students from all disability groups are analyzed together.