
Evaluating Students With Disabilities and Their Teachers: Use of Student Learning Objectives


by Jeanette Joyce, Judith R. Harrison & Danielle Murphy - 2016

Over the past decade, there has been a movement toward increased accountability, focusing on teacher performance, in U.S. education. The purpose of this article is to discuss student learning objectives (SLOs) as one component of high-stakes teacher evaluation systems, within the context of learners with special needs. We describe SLOs and their origin, reviewing how the current Race to the Top states (i.e., states awarded competitive funds) are using SLOs in inclusive classes where general education teachers teach both students with and without disabilities. We found substantial variation in how SLOs from different regions were designed to incorporate the progress of students with disabilities into teachers’ evaluation ratings. These variations were particularly evident in three areas where decisions must be made: the population to be targeted, the goals to be targeted, and the weight of the SLOs in teachers’ evaluations. Potential exists for these decisions to negate the positive effects of SLOs; however, in the systems that balance the stakes and the weight of the SLOs, there is potential for the SLOs to lead to improved inclusive practices and differentiated instruction.

The purpose of this article is to discuss student learning objectives (SLOs) as components of high-stakes teacher evaluation systems, within the context of learners with special needs. To provide context for the possible impact of these evaluation systems on particular students and their teachers, we begin with a review of the policies that have led to the current reliance on high-stakes evaluation models. In the ongoing effort to prepare all students for effective participation in a global economy and to address the ever-broadening achievement gap (Blank, 2011; Timar & Maxwell-Jolly, 2012), policymakers across the United States have increasingly embraced models of “accountability” that are designed to hold teachers and administrators liable for their students’ progress (Carnoy & Loeb, 2002; Hanushek & Raymond, 2003; Jacob, 2003; Peterson & West, 2003; Sanders et al., 1997). Within this model, measures of learning and instructional practices are utilized to make inferences about teaching and teachers.


Over the past decade, there has been a movement toward increased accountability in U.S. education. Beginning with No Child Left Behind (NCLB, 2002), annual standardized testing in grades 3–8 in math and English language arts was legislated. This testing was considered high stakes because the results were used to evaluate students, districts, and schools, and negative sanctions were possible if standards were not met. Subsequently, President Barack Obama and Secretary of Education Arne Duncan announced Race to the Top (RttT), a competitive grant program that awarded incentives to states that developed data-driven systems. Within RttT was the call for “outstanding teachers” (Office of the Press Secretary, 2009), and the assessment of teachers through student scores came to the forefront of evaluation systems. It should be noted that this was not the first time that judgments of teacher effectiveness had been tied to students’ test scores. In Tennessee in 1992, William Sanders developed the Education Value-Added Assessment System, a statistical method that attributed students’ progress or “growth” to their teachers. However, it was not until RttT once again brought the connection between standardized test scores and teacher quality to the front lines that the practice of rating teachers using student outcomes became widespread. RttT states were required to consider these ratings for personnel decisions by “revising teacher evaluation, compensation, and retention policies to encourage and reward effectiveness” (U.S. Department of Education, 2012, p. 2). Within these constraints, however, the 18 RttT states and the District of Columbia have tremendous flexibility in the design and implementation of their new evaluation systems. One emerging component of these new systems is the SLO, a measure meant to depict student learning over the course of the school year that can be attributed to a particular classroom and, by extension, to a particular teacher (Sanders, 1992).


In the following sections, we describe SLOs and their origin, and then we take a closer look at how the current RttT states are using SLOs in inclusive classes where general education teachers teach both students with and without disabilities. These SLOs are used either in conjunction with or in place of the standardized test scores typically used as growth measures.1 In addition, we will discuss both potential benefits and growing concerns with the use of SLOs in high-stakes accountability models, for the teachers being evaluated and for their students, including the influence SLOs may have on identification and instruction of youth with disabilities. Bearing in mind that there is little empirical research on SLOs (Kane & Staiger, 2008; Lacireno-Paquet, Morgan, & Mello, 2014; Reform Support Network, 2014; Steele, Hamilton, & Stecher, 2010; Tyler, 2011), we will raise more questions than we can answer at this time. However, our hope is that the questions will begin a worthwhile discussion concerning the viability, sustainability, and practicality of these approaches and potential benefits and risks of SLOs for students with disabilities.


BACKGROUND: ACCOUNTABILITY AND SPECIAL EDUCATION POPULATIONS


There has been an increasing emphasis on using student test data to evaluate students, schools, and, more recently, teachers (Shepard, Hannaway, & Baker, 2009). In order to understand the role of SLOs in evaluations of teachers and the possible consequences to students with special needs who may be included in these teachers’ classrooms, we will briefly examine how accountability models were developed and have evolved with the special education population.


Changes in federal legislation (e.g., Individuals with Disabilities Education Improvement Act [IDEIA], 2004; No Child Left Behind [NCLB], 2001) mandated that schools be accountable for the results of all students (Erickson et al., 1998; Yell & Shriner, 1997). To achieve the goal of proficiency for all students in reading and math, states developed standards, and students were then tested for proficiency on those standards. Scores were disaggregated into several categories (i.e., economically disadvantaged status, students with disabilities, English language learners), and schools were required to demonstrate adequate yearly progress (AYP) for students in each category, including those with disabilities. Failure to do so would result in consequences for schools and states, such as loss of funding and control (Linn, Baker, & Betebenner, 2002).


Beginning in 2001 with NCLB, all students, including those identified as having special needs, have been required to be included in statewide assessment to measure AYP. This is accomplished through participation in standardized assessment, often with some accommodation (Cortiella, 2006). Students with severe disabilities are allowed to meet test requirements using alternate performance standards, which include different expectations for performance and modified achievement standards. However, there are stipulations: “There is no limit on the number of students who can take the [alternative assessments], but the number of proficient and advanced scores that exceed the cap [of 1%] will be counted . . . as if they were not proficient” (Collaboration to Promote Self-Determination, 2013, p. 1). That is to say, if a large number of students are given alternative assessments and score above the line, not all of those scores will count toward AYP. In some cases, states are granted a waiver allowing an additional 1% increase to the cap (Yell, Shriner, & Katsiyannis, 2005). The bottom line is that a majority of students must participate in standardized assessment as appropriate. Although these are the guidelines, IDEIA did not offer specific guidance on how to enact the policy, allowing for idiosyncratic interpretations of guidelines for accommodations and inclusion in testing that have continued and can now be found in the use of SLOs.


In this article, we focus on the experiences of general education teachers who teach students receiving special education services for some portion of the day and are solely responsible for the students’ progress in the subject taught (i.e., inclusive general education settings). According to recent data from the National Center for Educational Statistics, more than 80% of students identified with special needs spend at least 40% of their time in general education settings (U.S. Department of Education, 2013). Therefore, it is important to understand how this majority of students with disabilities and their teachers are impacted by evaluations for teacher accountability.


Additionally, many of the classes that include students with disabilities have no standardized test data with which to calculate the academic growth of the students (i.e., grades and subjects not tested) as a component of teacher effectiveness. Most states administer standardized tests to students in grades 3–8 in math and English language arts (ELA); therefore, upwards of 70%–80% of teachers (e.g., those in remaining grades and electives) will not have a standardized growth measure available for inclusion in their evaluation (e.g., KY DOE, 2014; NJ DOE, 2013). This circumstance makes it necessary to construct some type of alternative assessment to determine the academic growth of the students in their classes throughout the year, per the requirements of RttT. Because of the magnitude of the number of teachers in this category (Prince et al., 2009; Tyler, 2011), states have considered at least two options to supply scores comparable to standardized testing. First, a school-wide aggregated score, or “sharing” a growth score with a tested-subject teacher, is sometimes used (e.g., North Carolina, Pennsylvania, Tennessee). That is, in addition to the observation scores that all states use for every teacher, non-tested-subject teachers have the average student growth score (typically a Value Added Measure [VAM] or Student Growth Percentile [SGP]) from the school or grade level associated with their overall effectiveness rating. Within this model, it is not clear whether scores from students with disabilities will be included or excluded in the rating. Second, some states use SLOs.


STUDENT LEARNING OBJECTIVES

HISTORICAL CONTEXT


SLOs2 originated as a component of performance-based compensation and are operationalized as measurable learning goals often developed by the teacher in concert with an administrator. They involve the establishment of a learning objective for a group of students and measures to evaluate the extent to which students have achieved this objective as one way to measure individual teacher impact on student learning (Lacireno-Paquet, Morgan, & Mello, 2014). In many cases, SLOs involve some type of pre- and post-measure in order to make inferences about growth (Reform Support Network, 2012).  


Figure 1. Anatomy of a SLO.


Note: This schematic shows the basic elements of a student learning objective.


SLOs were initially used in K–12 in pilot studies in the Denver, Colorado (CTAC, 2008) and Charlotte-Mecklenburg, North Carolina (CTAC, 2013) school districts, which were looking for new types of growth measures to determine pay-for-performance. In the Denver study conducted in 1999 and the Charlotte-Mecklenburg study in 2007–2008, the researchers worked with teachers to formulate measurable SLOs. They found that teachers who framed and achieved what the study defined as “quality” goals for their students, as defined by a four-trait rubric “of learning content, completeness, cohesion, and expectations” (Slotnik & Smith, 2004, p. 33), had better overall achievement on standardized measures than teachers who did not utilize SLOs. In contrast, there was no significant difference between groups who merely set goals and met them (Slotnik & Smith, 2004). Although most teachers were able to meet the objectives they had designed, it was the quality of the SLOs, as described above, that was the better predictor of student achievement as measured by standardized testing. From these pilots, SLOs gained popularity and are now found in most RttT state evaluation systems.


STRENGTHS OF SLOS


In addition to addressing the issue of teachers who teach classes that are not tested with standardized assessments, SLOs address the criticism of using national or state standardized measures as a component of teacher evaluation (McCaffrey, Lockwood, Koretz, & Hamilton, 2003). Some contend that standardized test scores are not situated within local contexts and therefore do not account for local goals or classroom demographics. With SLOs, a teacher can choose targets that are better aligned with district or classroom priorities, and make tiered goals for his or her students, subgrouping them to allow for different starting points, or set a growth goal (increase by percent or points) that is more individualized. This potential benefit of the SLO is well expressed in the Colorado state definition of SLOs as,


“A participatory method of setting measurable goals, or objectives for a specific assignment to class, in a manner aligned with the subject matter taught, and in a manner that allows for the evaluation of the baseline performance of students and the measureable gains in student performance during the course of instruction.” (Colorado Department of Education, 2013, p. 3)


Here we have the potential for a teacher with students with special needs in his or her class to set progress goals aligned with both the classroom objectives and individual student needs.


LIMITATIONS OF SLOS


The limitations of SLOs lie in the development and implementation path (Crouse, Gitomer, & Joyce, 2016; Slotnik & Smith, 2004). When training is not provided to teachers and administrators, objectives tend to be of poor quality and implemented carelessly. In the Denver study (CTAC, 2008) described above, there was a marked improvement over the four years in the quality of SLOs, which indicates that there is a learning curve to successful implementation.

Also, using SLO results as a growth measure to assess teaching is problematic (Crouse et al., 2016). Although SLOs appear to have formative potential, there are validity risks when using a “homegrown” measure for high-stakes purposes. As components in high-stakes teacher evaluation systems, SLOs may inform personnel decisions such as tenure, retention, or compensation. In a study of RttT teaching evaluation systems, Crouse et al. (2016) found that although SLOs may have some value as an indicator of teaching, there are concerns about the validity, reliability, and accuracy of SLO scores, especially given variability in current levels of evaluator training, score calibration, assessment design, and quality controls. Significant questions were raised about the ability to consistently and reliably implement such systems without state guidance and monitoring. If these controls are in place, then there is potential for SLOs to be a valuable measure within teacher evaluation models; however, without these controls the effect on teachers could be detrimental.


THE RACE TO THE TOP PROJECT


In our work, we sought to gain further insight into SLO implementation for the general education teachers who have students with special needs on their class rosters, using data gathered from a broader study of the 19 Race to the Top [RttT] teacher evaluation systems (Crouse, Joyce, & Gitomer, 2014). As part of an earlier study, Crouse et al. (2016) evaluated methods used by RttT states (and Washington, DC) to design their SLOs and the variation in implementation (see Table 1). The data from that study and the additional data discussed here were gathered by examining policy documents; reviewing publicly available implementation plans, guidebooks, and technical documents; and interviewing state department of education officers to verify interpretation of their systems.


Table 1. Summary Data for SLO Requirements by Region as of 2014

State | Number Required | Designed by | Approved by | Monitored by
AZ | None | teacher and evaluator | evaluator | none specified
CO | Tested—at least one; Non-tested—at least two | teacher and evaluator | evaluator | none specified
DC | All—at least one | teacher | school administration | none specified
DE | Tested—one; Non-tested—four | state-approved vendor list; educator designed, state approved if none of the above are available | state | state audits
FL | Not specified | district | district | none specified
GA | Tested—none; Non-tested—one | state; district if no state-approved measure available | state | district and state
HI | All—one | teacher and evaluator | principal | none specified
IL | Tested—one; Non-tested—two | teacher and evaluator | evaluator | district
KY | Tested—one; Non-tested—two | teacher | district | district
LA | All—two | teacher/evaluator/district | district | state audits
MA | Tested—one; Non-tested—two | district | district | district and state
MD | All—two to four | teacher | evaluator | district
NC | Tested—none; Non-tested—one | vendor | state | none specified
NJ | All—one or two | teacher | principal | none specified
NY | Non-tested—one | district determined | district determined | district
OH | Non-tested—two to four | teacher from approved list | evaluation committee | school
PA | District determined | teacher from state-approved list | district determined | none specified
RI | All—two to four | teacher | evaluator | none specified
TN | Non-tested—one | teacher from state-approved list | evaluator | none specified


Within the current comparative case study framework (Yin, 2013), we utilized data from the earlier study (Crouse et al., 2014) and followed two additional procedures. First, we designed a framework to code information pertaining to design and implementation criteria for SLOs and their use with students with disabilities (SWD) served in inclusive settings. This included criteria such as which students were to be included in SLOs as well as how SLO results were translated into teacher ratings. Second, we made comparisons across systems and evaluated key areas that differentiated the use of SLOs with SWD across states. Results are described below and presented in Table 2.


Table 2. Target Population, Goals, and Weights of SLOs by Region in 2014

State | Target Population | Goals | Weight of SLOs in Teacher’s Rating
AZ | One SLO is aimed at closing the achievement gap or bringing at-risk students, including those identified with special needs, up to grade level through an RTI model; the other is for all students. | Tiered | 13%–30%
CO | All students | May be tiered or individualized for students with disabilities | up to 25%
DC | All students | Based on average achievement of the entire class | 5% of the teacher’s evaluation is based on whether the IEP was met in a timely manner; 15% from SLOs (called TAS)
DE | Teachers may be required to have up to four measures. One must be a vendor assessment if available, but the teacher may select 10 students out of the class and use only their scores. For non-standardized measures, all students must be included. | Growth score | Teachers need an “exceeds” rating overall in order to get a highly proficient rating; conjunctive model4
FL | Wide range of variability among districts | Wide range of variability among districts | can be up to 40%–50%
GA | All students | Tiered growth measure | conjunctive model
HI | Can be all students or a subgroup | Teacher is rated by percentage of students who achieve cut score | 25%–45%
IL | The state requires districts to “discuss how student characteristics (e.g., special education placement and English language learners [ELLs]) are used in the measurement model” (ISBE guidance, p. 6); districts decide whether all students are included. | Districts decide if tiered targets are allowed | up to 50% (districts may decide to use conjunctive model)
KY | Must include all students | Growth score; low growth lowers overall rating one category | conjunctive model
LA | Districts may allow teachers to write the SLO for all students, for a specific subgroup of students that merits extra attention, or even as individualized goals for students. | Holistic rating based on evidence of student growth | Up to 50%; if the SLO score is very low, it becomes the teacher’s entire rating
MA | Must include all students | Uses a growth model, so all students must show a specified amount of growth | Growth category is the convergence of two measures; if no convergence, moderate is assumed unless there is a “compelling” reason; conjunctive model
MD | Districts may allow teachers to write the SLO for all students or for a specific subgroup of students that merits extra attention. | Holistic scoring based on evidence of student growth | 15%–50%
NC | All students (district option to target select subgroup) | State sets | conjunctive model
NJ | Must incorporate a “significant number” of students | Tiered; differentiated learning goals set for students based on their starting points | 15%
NY | District decides if all students or subgroup | Individualized, minimum proficiency for whole class, or tiered | 15%–20%
OH | All students | Teacher sets a cut score and is scored by percentage of students reaching goal | up to 50%
PA | District decides if all students or subgroup | District decision; may be aligned to IEP | 20%–30%
RI | All students | Tiered | conjunctive model
TN | All students | Aggregated score from state-approved measure | 15%


SLOS AND SPECIAL EDUCATION


We noted differences and a lack of clarity among the 18 states and the District of Columbia in their models for the use of SLOs for students with disabilities. We found that in response to IDEIA’s (2004) stipulation that all students be included in school accountability, and RttT’s extension of this accountability to individual classrooms, each state has developed specific business rules (i.e., how to include scores from students with disabilities). These rules vary and are typically expressed by student–teacher linkages. For example, one state may stipulate that any student who is enrolled in a given teacher’s class by October 1 must be included in the teacher’s evaluation (e.g., Louisiana), while another state may indicate that the student must be enrolled in the classroom 85% of the time before being included (e.g., Delaware). Still other states leave the linkage rules to the discretion of individual districts (e.g., Massachusetts).


We found that, although there is substantial variation in how SLOs incorporate the progress of students with disabilities in general education teachers’ evaluation ratings, three organizing themes emerged around decisions that must be made at the state level (or at least delegated to districts by states) within the regions studied. These include the identification strategy of the target population in the SLOs; the goals set for the students, which become the criteria for teacher effectiveness ratings; and the weight the SLO score has in teachers’ total evaluation ratings.  


TARGET POPULATION OF SLOS


One decision point across all states is the selection of the targeted population included in the SLO (see Table 2). Half of the states (n = 9) call for all students to be represented in SLOs (CO, DC, GA, KY, MA, NC, OH, RI, TN); however, others (n = 4) allow for an additional or alternate subgroup of students to be selected for inclusion and assessment with SLOs (AZ, DE, HI, NJ). That is, the teacher does not need to include all of his or her students in the target or results. Often, but not always, the included subgroup is considered “at risk”; however, it is unclear from the available documents exactly which students are being referenced, as no specific criterion of “at risk” is given. Other states (n = 6) leave the final decision about inclusion to individual districts (FL, IL, LA, MD, NY, PA). For example, in Arizona, teachers in non-tested subjects may have two SLOs—one goal for all students, and a second goal for “at risk” students. Although this language suggests that states are referring to students with special needs, only two states directly mention SWD in their inclusion guidance for SLOs: Illinois and Rhode Island. Illinois requires districts to be mindful of including learners with special needs when designing SLOs, and Rhode Island provides very clear guidelines for all students (described later in the article as an example). This looseness in defining “at risk” or acknowledging learners with special needs leaves room for teachers concerned about their own outcomes to potentially exclude students who might “bring down” their score. However, the burden of accounting for each and every student is not costless. Teachers who are overwhelmed with accountability paperwork and testing may have less opportunity to teach (Ladson-Billings, 2009).


TARGETED GOAL


Another decision point found within state guidance for SLOs relating to students with special needs is the targeted goal, or the process by which student performance is converted to a teacher rating. This is an area in which a majority of states specifically acknowledge students with special needs and have developed scoring procedures that take into consideration students’ individual baselines and growth rates. Some states (n = 3) adhere to a strict growth model (DE, KY, MA), meaning that all students are expected to improve by a specified number of points or percentage from their baseline score, using a pre-test, post-test model. Others (n = 5) have developed cut scores with teachers’ ratings based on the percentage of students who meet or surpass a prescribed level (DC, HI, NC, OH, TN), while still others (n = 4) may allow for a tiered approach (AZ, GA, NJ, RI). Within a tiered approach, teachers create subgroups among their students (with three subgroups being the most commonly observed) and then establish differentiated goals for these groups. Some states (n = 2; CO, NY) allow for individualized goals in which a teacher establishes separate goals for improvement for each of his or her students, including learners with special needs. Several states (n = 3) defer to districts to make decisions about the target levels (FL, IL, PA). In a more distinctive approach, taken by both Louisiana and Maryland, ratings are assigned more holistically, in conference with the teachers being rated, and with support from classroom evidence, including SLOs. In this case, the teacher could potentially explain why a student (or students) did not meet the established goal, and have his or her evaluation rating adjusted as needed.


WEIGHT

  

A third decision point is the weight given to SLOs within the teacher’s overall evaluation rating. In some cases (n = 13), states weight and add component scores (typically growth measures and observation scores), while other states use a model in which components are entered into a decision matrix.3 SLO scores are one component of these systems and may range from 15% (e.g., DC, NJ, TN) to 40% (e.g., IL, MD) of the teacher’s total evaluation score (see Table 2).
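The practical difference between the two combination methods can be sketched with a toy calculation. The functions, rating scale, weights, and capping rule below are invented for illustration only and do not reflect any particular state's actual formula:

```python
def weighted_sum_rating(observation, slo, slo_weight=0.20):
    """Weighted-additive model (hypothetical): the SLO contributes a
    fixed share of the composite, analogous to the 15%-40% weights
    reported across states. Ratings here are on an invented 0-4 scale."""
    return (1 - slo_weight) * observation + slo_weight * slo

def conjunctive_rating(observation, slo):
    """Conjunctive (decision-matrix) model (hypothetical): component
    ratings are considered jointly rather than averaged. The simplified
    rule here caps the overall rating at one band above the SLO band."""
    return min(observation, slo + 1)

# A teacher with a strong observation rating (3.5) but a weak SLO result (2.0):
composite = weighted_sum_rating(3.5, 2.0, slo_weight=0.20)
# 0.8 * 3.5 + 0.2 * 2.0 = 3.2 -- the weak SLO nudges the score down a little.

capped = conjunctive_rating(3.5, 2.0)
# min(3.5, 3.0) = 3.0 -- the weak SLO acts as a hard ceiling instead.
```

The contrast illustrates why the weighting decision matters: under an additive model a low SLO score dilutes the rating proportionally to its weight, whereas under a conjunctive model (as in Kentucky, where low growth lowers the overall rating by a category) a single low component can bound the entire evaluation.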


CONCLUSIONS: STUDENT LEARNING OBJECTIVES AND IMPLICATIONS FOR PRACTICE


A wide variation in the role of SLOs in the teacher evaluation process exists. One important distinction across states has to do with three decisions states must make regarding the use of SLOs. The first decision is a determination of the population of students included in the SLOs (target population). Many states (n = 9) stipulate that all students must be included in SLOs. That is, the teacher must set a goal that covers all students taught. By contrast, a few states (n = 4) allow teachers and administrators to select a subgroup of students that the teacher instructs. In these states only a percentage of the students are incorporated into the system, sometimes by criteria indicated by the state (e.g., “at risk”) or by the choice of the teacher (e.g., one period of history taught). In others (n = 6), the system allows much more flexibility, permitting districts to decide which students will be included in each teacher’s SLOs. These selection models, intended perhaps to limit the burden to the teacher or to focus the teacher on students who may need extra attention, do have the potential downside of allowing for the exclusion of “problem” students.


The second decision regards the methodology for scoring the targeted goals. It appears that three procedures are used by a majority of studied states: growth models, cut score models, and tiered models, which may even extend to an individualized goal for each student. The third decision, likely to be perceived as the most important by many key stakeholders, is the weight given to the SLO in the teacher evaluation system.  Findings here suggest that the weight ranges from 15% to 40% across states. In the following sections, we discuss the implications of all of our findings for practice and policy development.  


We know that the consequences of any evaluation system that relies significantly on a single score can result in problematic outcomes. As warned by Campbell in 1976, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” For our teachers, because the outcomes may be so consequential in terms of job security, income, or status, there may emerge a tendency for them to set lower goals in order to be sure all students achieve them, “corrupting” the overarching objective of providing appropriate learning challenges for all students (Nichols & Berliner, 2007). This suggests a number of important implications for practice.


First, given the stressors that are inherent in a high-stakes system (consequences for tenure, promotion, and job security; burdensome paperwork), it is possible that teachers may feel pressure to formulate easily attainable target goals or to exclude students, given the choice, who may not achieve the desired growth or surpass the cut score. Thus, the possible benefits of NCLB for students with disabilities, including higher standards than before, access to the general education curriculum, and inclusion in general education settings with their typically developing peers, could be negated.


The teacher actions in response to target population have implications for the education of learners with special needs. Students not included in the SLO, in states where subgroups are permitted, may be “off the radar” of the teacher, and their growth and learning not a priority for a teacher whose evaluation rating relates to the success of other students in the classroom. Just as Menken (2009) and Wright (2002) found with English language learners, there is the potential for “creating a disincentive for schools to serve these students” (Menken, 2009, p. 106) in order to meet accountability pressures. In a study of how teachers made sense of NCLB in special education, Russell and Bray (2013) stated that “some districts in our study responded to these outcome pressures by implementing policies that may not have served the educational needs of students, such as moving students away from neighborhood schools to avoid subgroup concentrations” (p. 18). While this is perhaps an extreme example, there is a threat to students with disabilities’ educational needs when a teacher’s best interest (job security) is perceived to be in opposition to the student’s (challenging instructional goals).  


Additionally, if the weight of SLOs is inconsequential in the overall model (e.g., MA, NJ), teachers may not utilize the formative potential of SLOs, particularly for students with disabilities, seeing them instead as a meaningless paperwork task. If a teacher is not invested in formulating and implementing high-quality SLOs, because the weight given within his or her evaluation does not match the amount of work required, a student’s learning needs may not be identified and addressed. This concern may be more salient for a student with a Section 504 plan than for a student whose learning needs are addressed more explicitly by an IEP.


More optimistically, in the systems that manage to balance the stakes and the weight of the SLO, there is tremendous potential for the SLO to lead to inclusion and differentiated instruction when needed. When this occurs, the learner with special needs finds his or her instructional needs targeted within the context of the general education classroom—the least restrictive educational environment for many students with special needs.


Very few of the state documents provided specific guidance for including learners with special needs in SLOs; many did not even acknowledge the presence of these learners in a teacher’s classroom. Instead, most leave linkage rules and system design to the discretion of district administrators, who may lack the expertise needed to accommodate all learners in a single design. Administrators are often poorly trained in developing educational goals, setting baselines, and evaluating outcomes; yet Crouse et al. (2016) found that in the 19 RttT systems studied, these administrators are most often the personnel responsible for evaluating the quality of SLOs. The most likely outcome of this mismatch between training and expectations is that SLOs are judged on timely completion and “face validity” rather than on the appropriateness of their content and expectations. In a policy and research brief, Goe and Holdheide (2011) examined available data on how states were evaluating teachers of “non-tested” subjects. As with the SLOs in the RttT systems studied, these authors found tremendous variability and a lack of valid evidence. Goe and Holdheide conclude with the recommendation that assessments of student growth be designed and monitored at the state level to ensure standardization. Without this oversight, districts may “simplify” teacher evaluation by excluding some students from the model.


Before we move to a discussion of policy implications, an analysis of one state’s implementation of SLOs may prove useful. States often post sample SLOs on their evaluation websites; we have selected one from the Rhode Island Model Teacher Evaluation and Support System because it specifically references both the inclusion of SWD in general education classes and the differentiation of instruction. In this way, it demonstrates how a well-crafted SLO could benefit this population and their teachers.


Figure 2. Sample SLO from Rhode Island Department of Education.



NOTE: This example of an SLO indicates how the needs of students with disabilities can be included in a productive way. Retrieved from ride.ri.gov.


Within the Rhode Island Department of Education (RIDE) evaluation model, teachers develop SLOs with the help and guidance of the administration. RIDE requires teachers to consider the needs of students with disabilities by (a) ensuring all students are included, (b) allowing goals to be tiered to take into account current levels of performance, and (c) providing guidance on different teaching contexts such as co-teaching, resource rooms, and self-contained classrooms. Figure 2 is a sample provided by RIDE (ride.ri.gov). In this sample, there is an identified measure, differentiated starting levels, and differentiated target outcomes. Perhaps more importantly, there is a plan for implementation that includes support for teachers who need it. This is an example of how an SLO could assist general education teachers in their work with students who have been identified with special needs. The teachers are asked to consider an important skill for the entire class, thus fostering the inclusion of all students, and then to use baseline data to set achievable goals for subgroups. The Rhode Island model does not allow teachers to copy IEPs as SLOs, but instead encourages general education teachers to work with special educators so that IEPs inform SLOs (Rhode Island Department of Education, 2013). That is, if a student’s IEP focuses on improving reading comprehension to a certain level on a certain measure, the SLO might focus on a class assessment of reading comprehension and set the specific student’s target at a comparable level. Finally, the sample SLO asks the teacher to develop the methods and resources needed to achieve the target. This model is both inclusive and differentiated, which are key elements in quality instruction for all students (Lipsky & Gartner, 2013; Smith, Polloway, Patton, Dowdy, & Doughty, 2015). However, this is posted as an exemplar, and SLOs of this quality may not be found in every classroom. Nor is the consideration given to the needs of SWD seen in all state models. We now turn to the implications of less successful models of SLOs for overall policy.


IMPLICATIONS FOR POLICY DEVELOPMENT


Several implications for policy development emerge from our examination of SLO implementation with learners with special needs. Results indicate that states respond in various ways to the issue of SWD within SLO design and implementation, and these decisions have implications for both students and teachers. It is clear that further policy guidance is needed. Teachers need state-specified guidelines, training, and follow-up in order to create high-quality SLOs and to use them to guide instructional practice (Crouse et al., 2016). Without this governance, there is potential for marginalization, as has been found with other special populations (Menken, 2009; Russell & Bray, 2013; Wright, 2002), or for “gaming” of the system (Nichols & Berliner, 2007). In short, policies must support teachers in efforts to develop challenging but attainable targets for all students. This can be accomplished through policy initiatives, professional development, and opportunities for growth in practice without fear of the dire consequences currently associated with high-stakes decisions.


The Council for Exceptional Children (CEC) has issued substantive and comprehensive guidelines for how teachers of SWD should be evaluated. The CEC states that teacher evaluation should be applicable to all teachers, identify areas for professional development, include multiple measures, keep the teacher being evaluated informed, strive for ongoing improvement, be linked to new research, protect teachers’ confidentiality, and be adequately funded. Evaluation procedures should take into consideration the intricacies of special education by recognizing the teacher’s role and responsibilities, by setting expectations for performance based on standards, by involving trained personnel, and by acknowledging the needs of students in special education. Effective evaluations are based on many forms of measurement, not just student growth; they result from a model that is supported by research, valid, and predicated on evidence-based practices. In our opinion, decisions about a teacher’s career should be based on more measures than student growth alone, and teacher evaluation should not be based on students’ progress toward their IEP goals. Teachers are professionals; therefore, they need professional development, involvement in the evaluation process, a supportive environment, a manageable workload, and fair compensation. Evaluations are most effective when updated regularly based on findings from new research (Council for Exceptional Children, 2012).


It is clear from the existing literature as well as from the RttT project data (Crouse et al., 2016) that SLOs have great potential to meet many of the guidelines outlined by the CEC. Specifically, they can be applicable to all classrooms and facilitate teachers’ use of student data. The use of SLOs allows teachers to participate in their own evaluation process and allows administrators to recommend appropriate and targeted professional development. In addition, the use of SLOs can sensitize general education teachers to the needs of all learners and to the concept of differentiated or individualized instructional goals.

However, without clear policy guidelines, there is much potential for misuse of SLOs. In a high-stakes system, teachers face pressure to set low-bar, achievable goals, to relegate learners with special needs to the lowest tier, or to exclude students from SLOs altogether. Then there is the question of over-inclusion: if teachers are permitted to exclude “at risk” students from their subgroups or to set lower goals for them, there is an incentive to over-identify students as having special needs in order to improve the outcome for the teacher.


This article begins a needed conversation concerning the increasingly widespread use of SLOs in the evaluation process in general education classrooms with identified students. While potential for benefit exists, at this point in time the variability and lack of clarity in implementation inhibit that benefit. Clearly, more scrutiny and research are needed on SLOs in general and on their potential, both good and bad, in classrooms with learners with special needs. Only then can SLOs become instruments of inclusion and differentiated instruction for students with disabilities and their teachers.


Notes


1.

The two most common methods for calculating growth measures include Value Added Modeling (VAM), a statistical method used to link teaching to student achievement scores, and Student Growth Percentiles (SGP), a method in which student percentiles are ranked and the median score assigned to the teacher.

2.

Also called student growth objectives, student learning outcomes, student learning targets, district determined measures, or teacher assessed student data.

3.

In a conjunctive model, the state provides a table on two dimensions, practice and student growth. The intersection of the two independent ratings becomes the teacher’s overall score. For more information, see Crouse, Joyce, and Gitomer (2014).

4.

Conjunctive models use nominal categories and a decision matrix. For further explanation, see Crouse, Joyce, and Gitomer (2014).


References


Arizona Department of Education [AZ DE]. (2014). Teacher evaluation: Implications for special educators leading change 2014. Arizona Department of Education. Retrieved from http://www.azed.gov/teacherprincipal-evaluation/student-learning-objectives/.


Blank, R. K. (2011). Closing the achievement gap for economically disadvantaged students? Analyzing change since No Child Left Behind using state assessments and the National Assessment of Educational Progress. Washington, DC: Council of Chief State School Officers.


Campbell, D. T. (1976). Assessing the impact of planned social change. Hanover, NH: The Public Affairs Center, Dartmouth College.


Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A cross-state analysis. Educational Evaluation and Policy Analysis, 24(4), 305–331.


Collaboration to Promote Self-Determination. (2013). CPSD’s policy priorities for education reform in the 113th Congress. Washington, DC: CPSD. Retrieved from http://thecpsd.org/wp-content/uploads/2014/02/CPSD-Education-Reform-Policy-Priorities-113-1.pdf


Colorado Department of Education [CO DOE]. (2013). Student learning outcomes. Denver, CO: Author.


Community Training and Assistance Center [CTAC]. (2008). Tying earning to learning: The link between teacher compensation and student learning objectives. Boston, MA: Author.


Community Training and Assistance Center [CTAC]. (2013). It’s more than money: Teacher incentive fund leadership for educators’ advanced performance, Charlotte-Mecklenburg schools. Boston, MA: Author.


Cortiella, C. (2006). NCLB and IDEA: What parents of students with disabilities need to know and do. Minneapolis, MN: National Center on Educational Outcomes, University of Minnesota.


Council for Exceptional Children. (2012). Council for Exceptional Children 2012 policy manual. Arlington, VA: Author.


Crouse, K., Gitomer, D. H., & Joyce, J. (2016). An analysis of the meaning and use of student learning objectives. In A. Amrein-Beardsley & K. Kappler Hewitt (Eds.), Student growth measures: Where policy meets practice. New York, NY: Palgrave Macmillan.


Crouse, K., Joyce, J., & Gitomer, D. H. (2014, April). Comparative analysis of the design and implementation of Race to the Top teacher evaluation systems. Paper presented at the AERA Annual Meeting, Philadelphia, PA.


Erickson, R., Ysseldyke, J., Thurlow, M., & Elliott, J. (1998). Inclusive assessments and accountability systems. Teaching Exceptional Children, 31(2), 4.


Georgia Department of Education [GA DOE]. (2013). Teacher keys effectiveness system. Georgia Department of Education. Retrieved from https://www.gadoe.org/School-Improvement/Teacher-and-Leader-Effectiveness/Pages/Teacher-Keys-Effectiveness-System.aspx


Goe, L., & Holdheide, L. (2011). Measuring teachers’ contributions to student learning growth for nontested grades and subjects. Washington, DC: National Comprehensive Center for Teacher Quality.


Hanushek, E. A., & Raymond, M. E. (2003). Improving educational quality: How best to evaluate our schools? In Y. Kodrzycki (Ed.), Education in the 21st century: Meeting the challenges of a changing world (pp. 193–224). Boston, MA: Federal Reserve Bank of Boston.


Illinois Performance Advisory Council. (2013). Guidebook on student learning objectives for Type III assessments. Chicago, IL: Illinois State Board of Education. Retrieved from http://www.isbe.net/peac/pdf/guidance/13-4-te-guidebook-slo.pdf


Jacob, B. (2003). High stakes in Chicago: Did Chicago’s rising test scores reflect genuine academic improvement? Education Next, 3(1), 66–72.


Kane, T. J., & Staiger, D. O. (2008). Estimating teacher impacts on student achievement: An experimental evaluation (NBER Working Paper No. 14607). Cambridge, MA: National Bureau of Economic Research.


Kentucky Department of Education [KY DOE]. (2014, April 11). Certified evaluation plan 4.0. Retrieved from http://education.ky.gov/teachers/pges/geninfo/documents/cep%20model%204-0%205-13-14%20-%20final.pdf


Lacireno-Paquet, N., Morgan, C., & Mello, D. (2014). How states use student learning objectives in teacher evaluation systems: A review of state websites. Washington, DC: US Department of Education, Institute of Education Sciences.


Ladson-Billings, G. (2009). Opportunity to teach. In D. H. Gitomer (Ed.), Measurement issues and assessment for teaching quality (pp. 206–222). Thousand Oaks, CA: Sage.


Linn, R. L., Baker, E. L., & Bettebenner, D. W. (2002). Accountability systems: Implications of requirements of the No Child Left Behind Act of 2001. Educational Researcher, 31(6), 3–16. Retrieved from http://search.proquest.com/docview/216900142?accountid=13626


Lipsky, D. K., & Gartner, A. (2013). Inclusive education: A requirement of a democratic society. World Yearbook of Education 1999: Inclusive Education, 12–23.


Massachusetts Department of Elementary and Secondary Education [MA DESE]. (2012). Massachusetts model system for educator evaluation part VII: Rating educator impact on student learning using district-determined measures of student learning, growth and achievement. Malden, MA: Massachusetts Department of Elementary and Secondary Education. Retrieved from http://www.doe.mass.edu/edeval/model/PartVII.pdf


McCaffrey, D. F., Lockwood, J. R., Koretz, D. M., & Hamilton, L. F. (2003). Evaluating value-added models for teacher accountability. Santa Monica, CA: RAND.


Menken, K. (2009). No Child Left Behind and its effects on language policy. Annual Review of Applied Linguistics, 29, 103–117.


New Jersey Department of Education [NJ DOE]. (2013). AchieveNJ: Educator evaluation and support in New Jersey. Trenton, NJ: New Jersey Department of Education. Retrieved from http://www.nj.gov/education/AchieveNJ/


New York State Education Department. (2014). Guidance on the New York State district-wide growth goal-setting process for teachers: Student learning objectives. Albany, NY: Author.


Nichols, S. L., & Berliner, D. C. (2007). Collateral damage: How high-stakes testing corrupts America’s schools. Cambridge, MA: Harvard Education Press.


Office of the Press Secretary. (2009, July 24). Remarks by the President at the Department of Education. Retrieved August 7, 2016, from White House Speeches and Remarks: https://www.whitehouse.gov/the-press-office/remarks-president-department-education


Peterson, P. E., & West, M. R. (Eds.). (2003). No child left behind? The politics and practice of accountability. Washington, DC: Brookings.


Prince, C. D., Schuermann, P. J., Guthrie, J. W., Witham, P. J., Milanowski, A. T., & Thorn, C. A. (2009). The other 69 percent: Fairly rewarding the performance of educators of nontested subjects and grades. Washington, DC: Center for Educator Compensation Reform.


Reform Support Network. (2014). A toolkit for implementing high-quality student learning objectives, 2.0. Washington, DC: U.S. Department of Education. Retrieved from https://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/toolkit-implementing-learning-objectives-2-0.pdf


Rhode Island Department of Education. (2013). Measures of Student Learning-Revised. Retrieved August 7, 2016, from RIDE: http://www.ride.ri.gov/TeachersAdministrators/EducatorEvaluation/StudentLearningOutcomesObjectives.aspx


Rhode Island Department of Education. (2013). Addendum to the Rhode Island Model Teacher Evaluation & Support System. Providence, RI: Author.


Rhode Island Department of Education. (2014). Measures of student learning. Providence, RI: Author.


Russell, J. L., & Bray, L. E. (2013). Crafting coherence from complex policy messages: Educators’ perceptions of special education and standards-based accountability policies. Education Policy Analysis Archives, 21(12), 1–21.


Sanders, W. L., Wright, S. P., & Horn, S. P. (1997). Teacher and classroom context effects on student achievement: Implications for teacher evaluation. Journal of Personnel Evaluation in Education, 11(1), 57–67.


Shepard, L., Hannaway, J., & Baker, E. (2009). Standards, assessments, and accountability. National Academy of Education. Retrieved from http://www.naeducation.org/cs/groups/naedsite/documents/webpage/naed_080866.pdf


Slotnik, W., & Smith, M. (2004). Catalyst for change: Pay for performance in Denver. Final Report. Boston, MA: Community Training and Assistance Center.


Smith, T. E., Polloway, E. A., Patton, J. R., Dowdy, C. A., & Doughty, T. T. (2015). Teaching students with special needs in inclusive settings. Upper Saddle River, NJ: Pearson.

Steele, J. L., Hamilton, L. S., & Stecher, B. M. (2010). Incorporating student performance measures into teacher evaluation systems (Technical report). Santa Monica, CA: RAND Corporation.


Timar, T. B., & Maxwell-Jolly, J. (2012). Narrowing the achievement gap: Perspectives and strategies for challenging times. Cambridge, MA: Harvard Education Press.


Tyler, J. (2011). Designing high quality evaluation systems for high school teachers: Challenges and potential solutions. Washington, DC: Center for American Progress. Retrieved from http://eric.ed.gov/?id=ED535653


U.S. Department of Education. (2012). ESEA flexibility. Washington, DC: U.S. Department of Education. Retrieved from https://www.whitehouse.gov/sites/default/files/RTT_factsheet.pdf


U.S. Department of Education, National Center for Education Statistics. (2013). Digest of Education Statistics, 2012 (NCES 2014-015), Chapter 2. Retrieved from https://nces.ed.gov/fastfacts/display.asp?id=59


Wright, W. E. (2002). The effects of high-stakes testing on an inner-city elementary school: The curriculum, the teachers, and the English language learners. Current Issues in Education, 5(5). Retrieved from http://cie.asu.edu/volume5/number5


Yell, M. L., & Shriner, J. G. (1997). The IDEA amendments of 1997: Implications for special and general education teachers, administrators, and teacher trainers. Focus on Exceptional Children, 30(1), 1–19.  


Yell, M. L., Shriner, J. G., & Katsiyannis, A. (2006). Individuals with Disabilities Education Improvement Act of 2004 and IDEA regulations of 2006: Implications for educators, administrators and teacher trainers. Focus on Exceptional Children, 39(1), 1–24.


Yin, R. K. (2013). Case study research: Design and methods. Thousand Oaks, CA: Sage.




Cite This Article as: Teachers College Record, Volume 118, Number 14, 2016, pp. 1–22. http://www.tcrecord.org ID Number: 21542.

About the Author
  • Jeanette Joyce
    Rutgers University
    JEANETTE JOYCE is currently at Rutgers University researching teaching quality and assessment. Her work with Crouse and Gitomer on SLOs has appeared in the National Academies Press’s An Evaluation of the Public Schools of the District of Columbia: Reform in a Changing Landscape (2015), and in the upcoming Student Growth Measures: Where Policy Meets Practice, edited by A. Amrein-Beardsley and K. Kappler Hewitt.
  • Judith Harrison
    Rutgers University
    JUDITH R. HARRISON is an Assistant Professor of Educational Psychology in the special education program at Rutgers University. Her research focuses on teacher implementation and classroom-based services for youth with emotional and behavioral disorders, including attention deficit hyperactivity disorder. In addition, Harrison studies teacher acceptability and the feasibility of implementation of evidence-based interventions in classroom-based services. She is the co-author of one book, The Energetic Brain, and numerous peer-reviewed publications.
  • Danielle Murphy
    Rutgers University
    DANIELLE MURPHY has recently completed her Master’s in Special Education at Rutgers.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS