
What If Only What Can Be Counted Will Count? A Critical Examination of Making Educational Practice “Scientific”


by Jennifer C. Ng, Donald D. Stull & Rebecca S. Martinez - 2019

Background/Context: In recent decades, federal policymakers have pushed for education to be a more “scientific” endeavor. While scholars have considered the implications of this orientation for educational researchers, less attention has been given to its impact on educational practitioners.

Purpose/Focus of the Study: By focusing on the local interpretation and implementation of a multi-tiered system of supports (MTSS) model in one Midwestern school district, this study documents the translation of a comprehensive reform initiative meant to make educational practice more data-driven and scientific. With particular attention to interactions between district and building administrators, classroom teachers, and a group of outside consultants, we also consider the consequential effects of principal–agent relations in determining how learners (should) learn and teachers (should) teach.

Research Design: Using ethnographic methods over a period of five months, this study emerged from a larger project examining the work of educators in a rural district that includes 18 schools and serves approximately 7,600 students from racially, culturally, and linguistically diverse backgrounds. With MTSS as the unifying agenda across multiple interactions that involved a cross-section of the district’s staff, administrative leaders, and outside consultants, we analyzed fieldnotes generated from participant observation during MTSS-specific meetings and semi-structured, individual interviews conducted with key implementation principals and agents. Other fieldnotes and interviews provided confirmation of our primary analysis, as well as supplementary perspectives from building and classroom contexts.

Findings: Through our analysis, we found that implementation leaders presumed the infallibility of the MTSS model; relied exclusively on certain forms of quantitative data; standardized the individual needs of learners, processes of learning, and roles of teachers; and insisted on fidelity of intervention as an end in itself.

Conclusions/Recommendations: Implementation leaders invoking research to inform practice can sometimes silence practitioners rather than foster their substantive involvement and understanding. This marginalizes certain types of knowledge that can contribute to understanding students’ needs, and it forces practitioners to be data-deferent rather than data-driven. The concept of implementation fidelity also needs to be reconsidered—not as an absolute good but with the necessary flexibility afforded to practitioners who are (1) educated in the essential components of available interventions, (2) able to become fluent through practice, and (3) allowed to exercise their professional expertise and judgment as appropriate.



“No conclusion of scientific research can be converted into an immediate rule of educational art.”

John Dewey


Since the late 1990s, federal policymakers have pushed for greater assurance about what works in educational practice with mandates to make the research that informs it more “scientific” (Shavelson & Towne, 2002). This has resulted in the close association of educational research with the methods and standards of the so-called hard sciences (Phillips, 2014). It has also been accompanied by the expectation that student learning outcomes will be more predictably improved as educational practice becomes increasingly derived from research-based evidence (Eisenhart & Towne, 2003).


Within the academy, critics of what has arguably become a dominant “scientific” orientation to education have questioned whether inquiry can—or should—be so rigidly conceived. After all, educational researchers do their work in varied local settings, which are shaped by ever-changing social interactions at particular historical moments (Berliner, 2002). Additionally, critics have expressed concerns about marginalizing important but perhaps not narrowly scientific ways of understanding educational phenomena through historical, philosophical, and cultural perspectives or discounting ways of knowing that arise from interpretivist, critical, postmodern, or broadly qualitative approaches (Erickson & Gutierrez, 2002; Howe, 2009; St. Pierre, 2002).


Educational practitioners are implicated in this “double transformation” of the field as well (Biesta, 2007). Although the circumstances of teaching are similarly dynamic, local, and varied, less attention has been given to what impact the imposed logic of science has had on communities of educational practice. Just as the contemporary tenor of policy mandates has rendered certain modes of inquiry inert in the “lively science” of humans studying humans in their social worlds (Agar, 2013), it seems important to critically examine how the role and work of educators have also been altered. By focusing on the local interpretation and implementation of a multi-tiered system of supports (MTSS) model in one Midwestern school district, this study documents the translation of an initiative meant to make educational practice more data-driven and scientific. With particular attention to interactions between district and building administrators, classroom teachers, and a group of outside consultants, we also consider the consequential effects of principal–agent relations that determine such matters as how learners learn and teachers teach.

MTSS: BACKGROUND CONTEXT

Born of a public health approach, an MTSS model is a population-based, system-wide conceptualization of a given problem that aims simultaneously to prevent and remedy it. In addressing the academic needs of all students, MTSS is typically known as response to intervention (RTI) (e.g., Martinez, 2014). In addressing the behavioral needs of all students, MTSS is known as positive behavioral interventions and supports (PBIS) (e.g., Reinke, Herman, & Stormont, 2013). Most recently, MTSS has also been endorsed for addressing students’ social and emotional learning and is recognized in the Every Student Succeeds Act (ESSA) as a best practice framework for all students in all schools (Sink & Ockerman, 2016).


MTSS is neither a specific curriculum nor a protocol or intervention du jour. Rather, it is a system-wide model that emphasizes a process of monitoring and adjusting a treatment (e.g., a reading intervention) until the desired outcome (i.e., reading achievement) is reached. Data derived from the monitoring process are central to ongoing problem solving in MTSS. For example, the results of universal screening tools allow decision-makers to ascertain what percentage of a predefined population does not meet a particular criterion for success and therefore might benefit from additional support (Martinez & Nellis, 2008). The continuous interpretation of data generated by frequent, formative assessments of individuals and small groups receiving additional support is also vital to the MTSS process (Albers & Martinez, 2015; Daly, Hofstadter, Martinez, & Anderson, 2010).


In Garden City, Kansas, where our study was conducted, the state MTSS website similarly explains: “MTSS is a coherent continuum of evidence based, system-wide practices to support a rapid response to academic and behavioral needs, with frequent data-based monitoring for instructional decision-making to empower each Kansas student to achieve high standards” (www.kansasmtss.org/overview.htm). According to this definition, MTSS is intended to be a broad, coordinated reform effort for addressing students’ academic and behavioral needs. The approach is visually depicted in Figure 1, which hung prominently in USD 457’s district central office boardroom and was included in many professional development presentation notes for reference.


Figure 1. From www.kansasmtss.org



At the core of the MTSS model is a stratified triangle: the largest swath across the bottom represents “all” students receiving regular, Tier 1, or “core” instruction; the midsection represents “some” students, identified through testing, who receive Tier 2 early interventions; and the peak represents a “few” students who are provided additional, intensive Tier 3 interventions according to their continued assessments. Curriculum, instruction, and assessment form an equivalent, three-part circle encompassing the focal triangle, highlighting the need for careful alignment and constant consideration of these components as they function together to provide students with “rigorous and research-based curriculum,” “effective and relentless teaching,” and interventions “at the earliest identification of need.” The outermost arcs summarize the expectations that “every educator will continuously gain knowledge and develop expertise to build capacity and sustain effective practice”; “every leader will be responsible for planning, implementing and evaluating learning”; and “an empowering culture will be enhanced/developed that creates collective responsibility for student success” (www.kansasmtss.org/overview.html).


In comparison to past practices for at-risk children that were unsystematic or too slow to respond to students’ varied needs (Fuchs, Mock, Morgan, & Young, 2003; McCook, 2006; Sailor, 2009), MTSS is laudable for its proactive, coordinated, and fluid intent. The implementation of MTSS in Garden City schools was also consistent with a “scientific” logic of striving always to base educational practice on objective data and established research knowledge. Although the district’s focus on reading and math achievement would, in existing educational scholarship and practice, more accurately be labeled RTI, the district’s use of MTSS as synonymous with RTI is consistent with the terminology adopted by other districts and practitioners throughout the state. We adopt a similar usage in our examination of the local interpretation and implementation process of this initiative in USD 457.

THE SIGNIFICANCE OF PRINCIPAL–AGENT RELATIONS IN IMPLEMENTING COMPREHENSIVE SCHOOL REFORM

Like other models of comprehensive school reform, MTSS emphasizes the importance of rooting educational practice in scientific research and shifting targeted student assistance to coordinated, school-wide supports. Such a shift can be especially impactful in schools serving high percentages of economically disadvantaged students. While research about these initiatives has focused primarily on the largest, most widely disseminated models of school change, scholars stress that no single model of school-wide reform provides the best solution for school improvement (Borman, Hewes, Overman, & Brown, 2004; Cross, 2004). Many models of comprehensive school reform exist, and these models function as “blueprints” for school-wide change that both specify the particular features of schooling that will be restructured and delineate how the objectives of reform will be realized (Rowan, Barnes, & Camburn, 2004).


The adoption of any change model must also be accompanied by corresponding action to put the design into practice. And because comprehensive school reform efforts are so multifaceted, Rowan, Barnes, and Camburn (2004) explain, “School improvement results from a confluence of circumstances that must and can be orchestrated by external change agents, district and school leaders, and teachers and students working in cooperation with one another” (p. 2). The particular role of external agents in such a configuration is conceptually unique, as they can serve as temporary “linking agents” who


help local educators learn more about and make wise selections of research-based practices to implement, provide on-site technical assistance throughout the change process (including problem definition, needs assessment, planning and evaluating change efforts), provide direct training in and support of new practices, and provide feedback from local schools to design teams. (Rowan, Barnes, & Camburn, 2004, p. 24)


While researchers have found that the effective contribution of external agents is highly predictive of implementation success, they have also found that the quality and consistency of the support external agents provide vary from one school site to another (Berends, Bodilly, & Kirby, 2002; Desimone, 2002).


Indeed, much of the writing about comprehensive school reform has been normative in nature, emphasizing how things should proceed according to a theory of change (Kretlow & Helf, 2013; Rowan, Barnes, & Camburn, 2004). Or, analogous to Consumer Reports guides, the literature has advised prospective buyers about the attributes of different reform models, the level of support provided by developers, the costs associated with implementation, and ratings of the research that supports the program’s design (Stringfield, 2000). Studies of actual reform implementation highlight some of the critical challenges schools face in adequately planning, building capacity, securing staff commitment, and evaluating indicators of change on such a large scale (Borman, Carter, Aladjem, & LeFloch, 2004; Molloy, Moore, Trail, Van Epps, & Hopfer, 2013; Ross & Gil, 2004). The nature and implications of these departures from design to application in school contexts are important to explore further.


Informed by a principal–agent framework, our study examines the interactions of relevant actors in one particular school district that adopted MTSS to improve instructional practice. A principal–agent framework highlights the reality that within any organization, interpersonal relations can be viewed as a “web of contracts” whereby one party—the principal—relies on the efforts of another—the agent—to achieve a desired end (Wohlstetter, Datnow, & Park, 2008, p. 240). American public education consists of a series of principal–agent relationships: federal to state governments; citizens to state officials; state agencies to local school districts; school boards to central office staff; and central office to schools, for example (Ferris, 1992). We attend particularly to the last of these relationships in our analyses of local reform.


“At the heart of the principal–agent theory,” Wohlstetter, Datnow, and Park (2008) point out, “is a contract specifying decision rights—what the agent should do and what the principals must do in return” (p. 241). The appropriate exercise of these rights necessitates mechanisms for control. As a result, principal–agent relationships can be plagued implicitly and explicitly by the imbalance of decision-making power; uneven distribution of information and proximity to action; contested or contradictory goals; weak incentives for compliance; and ill-equipped or reluctant entry of parties into an agreement.


In the prevailing call for educators to rely on research, Roderick (2012) asserts, “The problem is not that teachers and school administrators don’t want to improve” (p. 6). What is more common, especially in recent years, is


watching principals and teachers struggle to make sense of the deluge of information and “data” they face daily: incomprehensible performance management decks, data dashboards, packaged test and survey reports all in three colors with beautiful graphs but little guidance, and school report cards filled with trends on 20 different indicators that don’t seem to provide any insight beyond whether a school is red, yellow, or green. (p. 4)


STUDYING DATA USE IN THE WILD


For all the emphasis policymakers and educational reformers have put on making education a scientific and evidence-based practice, there is “shockingly little research on what happens when individuals interact with data in their workplace settings” (Coburn & Turner, 2012, p. 99). Existing studies reveal a broad view of data with varied accounts of how educators engage with standardized test scores, graduation and dropout rates, attendance figures, progress examinations, classroom assessments, and student work (Honig & Venkateswaran, 2012). What has typified this emerging literature are reports of data outcomes associated with specific educational initiatives; descriptions of activities to manage or promote data-use initiatives; and normative rather than analytical accounts of how transformative data may or may not be, without accompanying information to warrant these claims.


Coburn and Turner (2012) argue that most studies of data use have failed to seriously consider issues of its use in practice. Recognizing the interactive nature of data use and “investigating how data enters into streams of ongoing action and interaction as they unfold at the classroom, school, or district level” is a vital start (p. 102). This further requires understanding the multiple contexts within which data use occurs and, in turn, how certain practices, frames of reference, shared language, and norms become meaningful in principal–agent exchanges. It also means capitalizing on research methods that can provide more insight than surveys or formal interviews alone afford about “data use in the wild,” a necessarily complex phenomenon that unfolds over time (p. 103).


The rich circumstances Coburn and Turner (2012) describe situate this study, which emerged from a larger ethnographic project about the work of public school educators in Garden City, Kansas—a rural town notable for its demographic, linguistic, and cultural transformation over four decades’ time, driven by the area beefpacking industry (Stull & Ng, 2016). During five months of fieldwork in the district’s 18 schools, we observed MTSS serve as the unifying agenda across student assessments, academic screening, and progress monitoring; data-sharing meetings; professional-development training; school board discussions; building staff meetings; and classroom use of select instructional protocols. These organizational routines (Spillane, 2012) involved a cross-section of the district’s classroom teachers, paraprofessionals, instructional coaches, interventionists, and building principals. They also featured key central office administrators and state consultants who dictated the process of MTSS implementation and, to a very large extent, its outcomes.


Most studies of research-based decision making and data use take individual schools as their unit of analysis, but such an approach can overlook how schools and central offices might (1) be similarly accountable to outside entities for data use and reporting, (2) work in cooperation or potentially at odds with each other as a result of district mandates, and (3) share needed resources and expertise (Honig & Venkateswaran, 2012). Additionally, the implementation of an initiative always involves people interacting, interpreting, negotiating, and carrying out associated tasks in particular contexts—processes that inevitably influence what is actually achieved (Coburn & Turner, 2012; Tyack & Cuban, 1997). Our study adds not only to the general need for research about data use in practice, but also to the related scholarship of other researchers who have raised concerns about MTSS as a universally sound prescription for teaching and learning (Artiles & Kozleski, 2010; Klinger & Edwards, 2006).


METHODS


STUDY CONTEXT


This analysis emerged from a larger ethnographic study of educators in Garden City, Kansas, who work in a remote, rural district with 18 schools that serve approximately 7,600 students. Seventy-five percent of the district’s students are Hispanic and other children of color, 70% qualify for free or reduced-price meals, 50% are native speakers of 21 languages other than English, 6% are classified as migrant, 3% are refugees, and 4% are homeless (Stull & Ng, 2016). Our goals were both to develop a broad perspective from people in the district and to elicit in-depth understanding from purposefully selected key informants.


With financial support from the Spencer Foundation and University of Kansas General Research Fund, as well as sabbatical leaves and assistance from a graduate research assistant, the first two authors made weeklong visits to Garden City during the 2012–2013 academic year to establish essential contacts, conduct initial observations and informal interviews, and obtain feedback and support for our proposed research in the district. In July 2013, we took up residence in the community and conducted five months of continuous participant observation through the end of the fall term in late December.


DATA SOURCES


To generate insights into the milieu within which Garden City’s educators live and work, we conducted fieldwork at every available opportunity. We observed teachers in their classrooms and accompanied them on field trips. We attended meetings of the school board, local teachers’ union, parent–teacher organizations, superintendent’s advisory council, and school staff. We went to family math and literacy nights, school celebrations, extracurricular activities, student performances, and fundraisers. We joined educators at administrative retreats, professional development workshops, instructional coaching sessions, migrant and refugee program regional workshops, and student disciplinary hearings. We also interacted with local early childhood, community college, faith-based, city government, and social service agency staff, and we attended many of their functions. As we became better acquainted with individuals in the community, people invited us to socialize in their homes and area establishments, worship at area churches, and participate in various recreational activities as well.


Sanjek (2014) likens the eclectic range of such research activities afforded by time in the field to the “gift” of ethnographic presence. Indeed, it was through a long-term process of sensitization and “listening to speech in action, learning how to ask, arranging dialogic exchanges, conducting interviews, requesting specific pieces of information, observing behavior in predetermined times and places and among combinations of actors, and, especially in the early stages of fieldwork, seeing and hearing in a wide-ranging and open manner” (pp. 95–96) that our focus on MTSS ultimately emerged. More specifically, we began deliberately examining how educators interpreted and engaged the expressed logic of MTSS and its related characterizations of how learners (should) learn and how teachers (should) teach.


We augmented our written fieldnotes with curricular materials provided by district staff, and we recorded 85 semi-structured interviews with 90 individuals. Most of these interviews lasted one to two hours, though some were longer and conducted over multiple sessions. “Opening the locks” through early questions helped acquaint us with an initiative that was previously unfamiliar to us, and following a “river and channel” questioning technique created flexible opportunities to pursue individual experiences and meanings at length (Rubin & Rubin, 2012). Comparative, hypothetical, and devil’s-advocate question strategies (Merriam, 2009) during our interviews were particularly important for eliciting contrasting points of view between respondent groups and identifying apparent tensions, incongruities, or conflicts evident in certain observations. Rather than approaching the contradictions we encountered as impolite or threatening to discuss, it became essential to acknowledge and advance the “beginning of a better question, a signpost pointing to a more sensitive understanding” (Agar, 2008, p. 99).


DATA ANALYSES, VALIDITY, AND OTHER ETHICAL CONSIDERATIONS


As is often the case in ethnographic work, our analyses began while we were still in the field (Merriam, 2002), as we were introduced to the model, terminology, and implementation process associated with MTSS. Shifting back and forth between participant observation and interviewing, we confirmed our emerging interpretations across people in varied roles and through continued interactions. Our analyses continued long after we returned home, as we consulted academic literatures that were new to us and discussed our emerging ideas with select colleagues who have expertise in MTSS and related topics. In fact, we invited the third author to contribute formally to this project given the knowledge and input she offered during several of these conversations following fieldwork.


The first two authors transcribed all interviews in full and worked abductively from the resulting transcripts and our fieldnotes to develop the analysis that follows (Agar, 2008). Within sets of data collected during MTSS-specific meetings and interviews with key implementation principals and agents, we paid deliberate attention to frequently expressed ideas, indigenous typologies of meaning, similarities and differences in social interactions, and sentiments that seemed curiously absent in our data—such as reference to any forms of “data” that were not exclusively numerical (Ryan & Bernard, 2003). Other fieldnotes and interview transcripts provided confirmation for the themes that emerged from our primary analysis, as well as more building- and classroom-level contexts and supplementary perspectives.


To ensure the validity of our account, we submitted a written report of our study to 12 individuals variously situated in the district in the summer of 2015 and requested their feedback. We received a few edits and many replies that our analysis of MTSS implementation resonated with the pressures people were experiencing in the district to make educational practice more data-driven and scientific. Select district leaders also expressed appreciation for the report’s critical examination and relationship to such challenges as teacher morale and attrition. As the individuals were administrators and teachers involved with the MTSS initiative in different buildings serving early childhood through 12th-grade students, their reactions provided support for the accuracy and credibility of our findings overall. We presented our final study report officially to the district in the fall of 2015.


The unique nature of doing ethnography warrants a few final comments. As Fluehr-Lobban (1994) points out, the voluntary and informed consent of participants in such long-term and expansive work requires more than a single moment’s willingness. Especially in the early stages of our research, we used official district distribution lists, verbal announcements to groups, and face-to-face introductions with individuals to detail our roles and study purposes. Over time, people grew accustomed to our presence and facilitated our entry into additional settings and relationships, wherein we repeated the cycle of information sharing and getting acquainted. We only observed building-level events with the clear approval of building-level administrators, and we only observed classrooms with the specific approval of classroom teachers. Consistent with the stipulations of our university’s IRB approval, signed consent was obtained only from those we formally interviewed.


In ethnographic research that is context-specific and produces personal accounts of experience “near” rather than “distant” (Geertz, 1976), special attention to seemingly conventional practices, such as the use of pseudonyms to ensure confidentiality, is also necessary (Fluehr-Lobban, 1994). Because of the purposeful selection of our research site and vital details that make it readily identifiable, we opted at the outset of our project to identify both Garden City and its only public school district by name. At the individual level, however, we promised confidentiality by omitting specific identifiers that could reveal a person’s identity. In the sections that follow, study participants may be referenced generally (as teachers and administrators) or distinguished by categorical descriptors when those descriptors are helpful to understanding (elementary rather than special education teacher or building instead of district administrator, for example). While four external consultants from the state office led the district’s MTSS initiative, professional development sessions, and data-sharing meetings, we represent them almost interchangeably to prevent their individual identification. Owing to their reliance on scripted presentation protocols with multiple, differently assembled groups across five months of participant observation, our composite characterization of these consultants reflects the remarkable consistency of their manners and messaging over the course of MTSS implementation.


STUDY RESULTS


How does educational research inform educational practice? And how do principal–agent relations influence the way educational practice becomes scientific? Our study of MTSS implementation in one district provides situated insight into these questions. In the sections that follow, we describe (1) the presumed infallibility expressed about the MTSS model, (2) the exclusive quantification of what counts as data, (3) the standardization of learners’ needs, learning processes, and teachers’ roles, and (4) the insistence on fidelity of treatment as an absolute good for improving students’ educational outcomes. Because the merits of an idea in theory are different from the realization of an idea in practice, our focus on MTSS is importantly about the latter and its particular translation across time and multiple interactions.


THE PRESUMED INFALLIBILITY OF MTSS


The MTSS initiative provided practitioners a coordinated and elaborate system of data management that included standardized assessments, multiple displays of assessment results, color-coded classifications of students, and accompanying interventions. At meetings led by state consultants as well as select district administrators in the central office boardroom, building representatives were regularly scheduled to share these data and discuss how they were being used to inform decision-making and action at all levels of the district. Usually, some combination of the school principal, instructional coach(es), and interventionist(s) representing each school attended the meetings, though membership varied from building to building and meeting to meeting.


Of primary concern during these district meetings was whether practitioners were adhering to the MTSS model they were introduced to the year before, and how building personnel were using students’ AIMSweb reading and math test scores to determine their placement on the “Rainbow Report.” Students were assigned to one of three color-coded tiers: Green for Tier 1 students meeting expectations and thus not receiving interventions in addition to the core curriculum; Yellow for some students identified at Tier 2 as needing the core curriculum as well as early and more frequent intervention support; and Red for select Tier 3 students needing the most intensive intervention. As the consultant explained to staff from one elementary school, “We only look at what we have data for. It is only about what AIMSweb tells us. This whole process is about moving kids out of intervention as fast as we can so we can get the scores we want.” For the consultants, AIMSweb results were the sole determinants of student placement. Teachers should provide students only those interventions specified as appropriate for each tier.


Any time building representatives suggested alternative sources of student assessment or instructional activities, the consultant steered them back to the strict interpretation of the MTSS model. In one school where teachers provided students in Tier 1 additional intervention support because there happened to be extra time and space for them in Tier 2 groups, the consultant emphasized: “The Rainbow Report is all we use to make assignments: Red, Yellow, Green. Everybody in Red gets the core and only Tier 3. Everyone in Yellow gets the core and only Tier 2. Everyone in Green gets only the core.” According to the consultant, there were enough safety nets built into the MTSS model—what she called its “self-correcting feedback loop” of curriculum, assessment, and instruction. “If we’re not doing it right,” she explained, “we’ll catch it in time and adjust the protocols.”


For the consultants, MTSS was complete in itself and the only valid model for determining students’ educational placement. Consequently, staff concerns about the appropriateness or effectiveness of different aspects of the system for their students were met with the same limited range of possibilities, the most common being to make sure students were matched to the correct interventions. For example, when a school using the intervention program Read 180 reported its students were not making progress as expected, the state consultant insisted: “The tests are accurate. You are just using the wrong intervention if things aren’t going right.”


Midway through the semester, school representatives were asked to identify particular students who seemed unresponsive to their interventions, or whose test scores defied easy explanation, for further discussion with the state consultant. Some of the students selected were performing better than their teachers anticipated; others scored lower than indicators would have suggested. For one student whose upward trend line was likely to fall short of target outcomes at the completion of his intervention, the consultant advised: “Make sure the right intervention is being used, and make sure the intervention is being done with fidelity. If those two things are being done and the kid is still not progressing satisfactorily, then intensify instruction.” But no additional information was offered—nor was any sought—as to how intensification of this sort could be achieved.


When staff members described a sixth-grade student who passed the fifth-grade reading assessment but was still in Tier 3 according to AIMSweb, and another student who tested as a non-reader but actually read above grade level according to his teacher’s classroom observations, the consultant replied that such anomalies should not be happening and that she had never encountered them before. This situation prompted one last option: “Retest if you think the test is invalid,” she said. A district administrator echoed the same constrained reasoning to illustrate just how adaptable MTSS could be for its users:


If a teacher believes that a test isn’t accurate, then they have the ability to retest, to try to get a more accurate number if it’s not consistent with what they’re seeing in their class work. . . . If it doesn’t make sense, then they can question it, and they can retest.


THE QUANTIFICATION OF WHAT COUNTS AS DATA


When asked if other kinds of data entered into the decision-making process of the district or whether only quantitative data mattered, one district administrator replied: “I want to say that there’s more to it, but I’m struggling to think of an example because I think you’re right. Predominantly, it has been quantitative.” By our observations, public characterizations of data were exclusively numerical—carefully distinguished from, and superior to, other forms of knowledge. For instance, a state consultant asked interventionists to keep intervention logs with detailed information about children that could help explain their AIMSweb scores. Some reported they were doing so already. Whether paper-based or electronic, the consultant explained, the log should serve as a record of “anecdotal” information, such as whether the child was present but not engaged or had frequent absences during the intervention period. This distinction between data and anecdotes was widely shared, as evident in the way school staff and administrators spoke as well.


Public suggestions that anything but numbers derived from a standardized assessment (i.e., AIMSweb scores) might be worth considering were ridiculed. When an instructional coach said she had “gone by the data” to group kids for walk-to interventions at her school but several teachers had questions about the basis for these groups, the consultant leading that discussion scoffed, “Feelings? Am I hearing feelings?” The principal reinforced the merits of his staff’s concerns, explaining they were based on careful classroom interactions and knowledge of students’ personal circumstances as recent immigrants and English language learners (ELLs). But the consultant dismissed this observation without any further comment. In the awkward quiet that followed, the instructional coach quickly assured everyone nearby, “I don’t do feelings, either. If teachers come to me with feelings, I tell them to suck it up.”


Teachers worried both when students were placed in intervention groups below what they believed appropriate to students’ abilities and when students seemed too quickly removed from groups providing interventions that still seemed necessary. An administrator explained, “It’s [practitioners’] funny inner feelings that tell them kids are not ready, but they have to let it go.” And according to one state consultant, kids may not be ready, but if that was the case, their scores would eventually come back down and they could be returned to a group for remediation. This logic proved to be a recurrent theme during a two-day period of districtwide data sharing, and having to accept it literally brought some instructional coaches and interventionists to tears. From the perspective of MTSS implementation leaders, a test score was the only measure of whether a child was ready to move up—or down. Impassioned teachers were characterized as only acting on what their hearts were telling them, not on what one consultant regularly referred to as “true data.” “Yes, we have a history as a profession of leading with the heart, which can be a good thing,” a district administrator acknowledged, “but at the same time, [teacher] feelings aren’t enough to base important decisions on.”


“So technically,” reasoned a teacher with bewilderment during an interview:


My conversations with [a student] have no meaning except between [the two of us] about pacing, about how to find details in an article, how to find what came first, what came second. Those are just conversations. If you can’t do it on a piece of paper, it doesn’t count.


THE STANDARDIZATION OF LEARNERS, LEARNING, AND THE ROLE OF TEACHERS


The individuality of students was prominently displayed at the center of the MTSS model and the triangle’s differentiated tiers. However, the implementation of the model revealed standardized assumptions about the nature of students’ differences, the processes of student learning, and the means by which student mastery can be adequately demonstrated and understood. As teachers’ roles were increasingly reduced to proctoring AIMSweb tests and then dispensing prescribed educational treatments, the relevance of their actual expertise, effort, and professional judgment also became obscured.


That one MTSS consultant viewed teaching as synonymous with the mere use of a prescribed curricular program became apparent when he asked a group of principals at a districtwide meeting what their schools were doing for intervention with their students. School leaders seemed confused at first, but, prodded with some program names, they eventually generated a list the consultant wrote on the whiteboard:


Orchard Trees (suitable for K–6, used daily and in interventions)

Do the Math (for multiplication in Grades 5 and 6)

FASTT Math (all for daily fluency)

Study Island (until last year when it expired. “It’s gone . . .” someone said)

IXL (fifth and sixth grades)

Muggins Math (K–4, fifth and sixth grades also)

Number Worlds (K—but not used in Garden City)

Mountain Math

Focus Math (schools don’t have resources to use it this year in grades 1–4)


The exercise revealed that different programs were being used at different schools, with little shared understanding of what was happening from school to school. But, importantly, one exasperated principal finally asserted, “It’s not the program. It’s the wonderful person we have [in our math specialist].” An elementary school principal added in agreement, “How you’re using the term ‘intervention’ is confusing to me.”


Learners and learning were also characterized in highly reductionist and predictable ways. This was especially evident in the way consultants treated the sociocultural contexts of students’ lives as irrelevant so that what differences remained between individuals were universal, human differences. Reading almost verbatim from a canned PowerPoint presentation during a professional development training on working with ELLs, one consultant reported: From 1998–1999 to 2008–2009, there was a 51% increase in ELLs. In Kansas, the growth was almost 200%, from 5,000 to 20,000. Most of the nation’s immigrants have originated from Mexico (13.5%), China (8.2%), India (6.5%), the Dominican Republic, Cuba, Vietnam, Colombia, Korea, and Haiti. As a result, some of the languages common now include Burmese, Arabic, Korean, and Somali. “There’s no magic formula or magic bullet for working with these second language learners,” she explained.


ELLs may have come from unsafe environments and now many reside in rural locations, she went on to say. But when an attendee asked why Illinois and North Carolina had such big increases, the consultant said she had no idea. Don Stull called out an explanation from the back of the room: “For North Carolina, it is hogs and chickens.” For Garden City, it was cows and the similarly associated low-wage, low-skilled jobs created as part of a broad restructuring of the U.S. economy and redeployment of capital to cheaper production sites (Sassen, 1990) that had transformed the town demographically, culturally, and linguistically for almost four decades. The consultant stared out into the audience blankly and kept talking. It was evident that such information was not of interest, nor was unsolicited audience participation.


The consultant continued: There may be significant gaps in students’ educational backgrounds, including late entry into U.S. schools. ELLs may have limited proficiency in their native languages, uneducated parents, and limited background knowledge overall. Additionally, they may be malnourished and may be fairly transient. In contrast, some ELLs come from safe home environments, have strong academic backgrounds, and are proficient in their native languages. Literacy and skills can transfer between first and new languages, but it can’t be assumed that all components have been introduced in the first language. Written language includes meaning-based, pictographic, and logographic forms. Oral language has to be a component of all language learning. And certain sociocultural factors can affect literacy success—support for education, educating girls, and so on. Regardless, the consultant concluded, these sociocultural factors do not matter as long as certain structures are in place.


The consultant’s scripted presentation was organized around first presenting myths about ELLs. After each one she said, “And here is the truth.” A teacher asked about the consultant’s claim that sociocultural considerations do not matter, and her question seemed to resonate with several in the audience who nodded in quiet support. The presenter stuck tightly to her notes, however, and simply referred everyone back to the bulleted point on her slide. Then, she reiterated the point. According to the training, the only structures that matter include instructional plans, instructional content, instructional assessments, and professional development.


FIDELITY OR FAIL


Given the presumed infallibility of MTSS and its related diagnostics, “fidelity” to the specified instructional intervention was believed to be essential for students’ success. As one district administrator explained:


When I look at Read 180 and fidelity to Read 180, the research on that program is very strong—shown to have very good results with all students, English-speaking students, English language learners, special education students, but only if the program is adhered to directly. The research is done on 90 minutes a day, five days a week, running a program from whole group to small group exactly as they’ve written it in their manuals. If you’re to vary any of those things, then you don’t have fidelity in the program, and you may not get the results that they’ve shown in their data, in their results.


Fidelity was a concern not only at the classroom level but also at the building level and across the district. For math instruction, one of the state consultants suggested school representatives “decide on a short list of interventions you want to look at,” and then have a committee identify just one for approved use. This prompted a critical discussion of available programs. One person said Number Worlds was very expensive, to which the consultant replied, “Yeah, most of them are.” Some teachers did not like Do the Math because they thought it was too immature for fifth and sixth graders. Certain schools had already purchased costly materials with grant funds that other schools did not have. Ultimately, consensus formed around the following request: Could the subcommittee charged to study the matter first get information about program costs, see whether programs could be purchased in sections, and inquire about the experiences of other schools that had actually used particular programs?


Though having an approved list of “research-based” and “proven” interventions for reading and math was the approach of this district, an administrator acknowledged:


There are other ways to do it. We could develop something that wasn’t a canned package. We’re not required to do it this way, but it’s a trade-off. You get something that is research-based, that is already prepared, and that you can hand to someone and they know what to do with it. Or, the time, and the effort, and the money, and the resources that go into developing something that would be of comparable quality—the research that I’ve looked at, it doesn’t show a real benefit one way or the other. They both can be successful. This is a recommendation from that state [MTSS] team, or at least a range of options that they’re saying any of these could work.


Classroom-based educators expressed a great deal of frustration. One vented:


I got a piece of paper that told me what to do, that it worked at Belleview High School, at Monterey Middle School, at Atchison Elementary School. Yeah, it works in all those cool places. Has anybody tried it here before you gave it to me to do? . . . Don’t just say, ‘Research says this is good, so let’s do it.’ That’s where we’re at right now. We’re trying to do anything that is proven to be successful.


Other teachers and building staff lamented the loss of serendipitous “teachable moments” and opportunities to engage their students meaningfully—by generating static electricity with a red balloon someone brought to class or listening to themselves talk on a handheld recorder to improve their pronunciation of certain English word sounds, for example. Educators who worked closely with newcomers and culturally and linguistically diverse students further despaired at being unable to address such things as cross-cultural understanding and socialization or provide instruction specific to vocabulary building and reading comprehension. These teachers were frustrated not only because they wanted to offer more engaging instruction. They were deeply conflicted because they viewed teaching as moral work and a matter of helping their students realize their future life opportunities.


Fidelity—or what one teacher called “the new F-word”—was as insulting to her professional preparation and judgment as it was to her students’ intelligence. Indeed, after the district review committee announced its approval of Do the Math for schools to use the following fall, one of the MTSS consultants touted it as “pretty scripted, so you won’t need professional development to implement it—any teacher can use it.” For its purported ease of use across individuals and settings, however, implementing MTSS districtwide provoked a number of real challenges.


Simply scheduling interventions as prescribed for both reading and math seemed an impossible task given the number of minutes required each day for each program, in addition to the time already committed to other subject lessons, lunch, recess, and teacher collaboration and planning. The state consultant acknowledged that was “a quandary every school has gone through,” but cited one school he knew of that had found a way to accommodate both. And as if the scheduling problem were merely a matter of will, the consultant added, “There’s research out there now that says early math skills are predictors for reading skills.”


Early on, an intermediate school principal lobbied that as a district they needed to “get a vision for what this looks like” and also provide consistent support to K–4 grades. “Amen!” someone called out. There needs to be a transition curriculum, access to the same materials, and a common vision. Some elementary buildings have math coaches, but others do not. Of the two district administrators in the room, one had no comment to offer and the other shrugged off the issue as a matter of limited federal funds “so there’s only so much we can do, and we have to make choices.” An elementary principal said she hired one math and one literacy coach over more interventionists because coaches support the whole building while interventionists only work with the most at-risk students. “Unfortunately, I don’t have license to rent money,” the consultant joked, lacking any more assistance to offer building leaders who, nonetheless, were still tasked with implementing MTSS districtwide with fidelity in reading and math.


DISCUSSION AND IMPLICATIONS


Referring to the relationship between educational research and educational practice, Labaree (2004) acknowledges, “It is not very helpful if researchers answer every important question in the field by saying, ‘It all depends’” (p. 75). Yet, when current educational discourse is dominated by “scientific” prescriptions for practice, Donmoyer (2014) questions:


Will people who now expect relatively simple and unqualified knowledge about ‘what works’ in schools be able to sustain interest in a deliberative process that inevitably will require an appreciation of the complexity of educational issues and the need for many qualifications and caveats? (p. 14)


Even the chance to begin such a process of inquiry is dubious if practitioners’ professional expertise is continually marginalized and other ways of knowing summarily dismissed.


As it was enacted, MTSS dictated the actual terms of practitioners’ conversations and compelled alignment of their instructional efforts throughout the district’s organizational ranks. Only select state and district leaders publicly claimed familiarity with what “research says” about MTSS or what constitutes “best practice.” Their invocations created distance and silenced questioning from the general population of school staff more often than they fostered further discussion and understanding. Against the ideological backdrop of “scientifically based” research and the purportedly infallible merits of MTSS for guiding educational practice, it seemed no legitimate challenge could be imagined or entertained for improving teaching and learning.


Making educational practice “data-driven” was emphasized throughout MTSS implementation as well. Though staff interactions about student data were frequent, the prescriptive logic of the MTSS system, its reductionist measures of learning, and its exclusively quantified notion of data left many practitioners disenfranchised. Their professional training, relationships with students, and perspectives from the classroom were critical contributions, yet not harnessed to enrich the process. Instead of being informed and engaged in carefully reasoned action as the mantra of being “data-driven” suggests, educators were forced to be data-deferent and consider only one set of data or way of knowing what their students needed and how best to provide it to them. Teachers’ capacity for pedagogical data literacy must be actively developed so they can transform data into instructional action (Mandinach & Gummer, 2013).


Because numbers—specifically AIMSweb scores—reigned supreme, building leaders were especially challenged to represent the pressing needs, concerns, and circumstances of their schools during district meetings on the one hand, and then relay district expectations back to their increasingly demoralized staff on the other. A few school leaders admitted to exercising a quiet resistance in the contexts of their own buildings. Other school leaders described an ambivalence that required deliberately setting aside or unknowing the occupational values that inspired them to become educators in the first place (Lortie, 1975; Santoro, 2011). Some reasoned that as professionals they should be dispassionate and compliant. Still others talked with clear emotion about retiring early, taking positions in other districts not using MTSS, or pursuing entirely different careers.


Central to both the MTSS model and USD 457’s efforts to put the model into practice was the stated importance of recognizing and responding to students’ diverse needs. Ironically, this was achieved through the standardized evaluation of students as individual test-takers. Students’ subsequent characterization and treatment “with fidelity” through the MTSS process, however, limited any further acknowledgment of their actual differences. Students might learn at varied rates and require varied supports, but their individual test scores led to placement in intervention groups that received uniform instruction. The substitution of a group classification for an individual student was apparent throughout people’s conversations, as state and district experts would ask for a child’s AIMSweb scores at data-sharing meetings and then offer recommendations on the basis that Tier 2 students have categorically different needs than students in Tiers 1 and 3.


That incongruities between a student’s AIMSweb score and performance as gauged by teacher observation or other means of assessment were so regularly met by MTSS consultants with the response that such a child could not exist is a powerful illustration of this dismissal. The complete erasure of larger social and cultural forms of difference as educationally relevant also reveals how selectively “diversity” was—and could be—conceptualized or addressed. Exercising power such as this marginalized the array of struggles so many students in the district faced as a result of cultural and linguistic adjustment, individual and family trauma, limited prior opportunities and interrupted schooling, and dire poverty.


The district’s emphasis on fidelity was also consequential, though “fidelity” is an ill-defined and poorly understood term. In medical studies, fidelity is primarily focused on the highly discrete behavior of whether a medication was used as prescribed, and it is an important means by which researchers demonstrate internal validity so their study outcomes can be related to particular variables rather than others. In education, treatments and interventions tend to be multidimensional and highly context dependent. This reality requires balancing fidelity with flexibility in an approach that does not promote a false dichotomy between fidelity’s structural dimensions—program adherence, time allocation, intervention completion—and its process dimensions—the nature and quality of teacher–student interactions during interventions (Harn, Parisi, & Stoolmiller, 2013).


Fidelity is not an absolute good or end in itself. Instead, fidelity in its practical application is essentially a proxy for quality instruction and assumed to be a means by which student learning outcomes will improve. It is a dynamic construct for which a single measure or categorical determination across time and implementation process is inadequate. Near-perfect fidelity of implementation may not only be unrealistic; it may also be counterproductive if increasing student learning outcomes is the goal (Harn, Parisi, & Stoolmiller, 2013). As Simmons et al. (2007) demonstrate, there is a point of diminishing returns where even higher levels of structural fidelity demonstrated by a general interventionist do not compensate for a less faithful but savvy teacher’s understanding of a lesson’s purpose, responsiveness to her students, and sound judgment in assessing learning.


A more thoughtful balance needs to be achieved so teachers can become knowledgeable about the essential components of available interventions, have opportunities to become fluent through practice, and leverage their professional expertise and judgment as appropriate throughout the process of educating students. In turn, teachers can “more actively engage students to maximize learning even at the cost of, or perhaps because of, decreased adherence” (italics added; Simmons et al., 2007, p. 188). The literally equivalent treatment of students ignores consequential differences and cannot serve as an adequate response to the needs of all students, even with the best of intentions.


To critically examine the implementation of MTSS as a case of data use in practice as we have done is not to suggest educators should be anti-scientific. Rather, our goal has been to consider how making educational practice “scientific” according to certain narrow notions of science has implications for all involved in the education endeavor. As Mandinach’s (2012) conceptual theory of data-driven decision-making makes clear, data exist in raw states without meaning. Data become informative only when given meaning in particular contexts, and the accumulation of such information as knowledge depends on data being deemed useful for guiding action (p. 77).


After four decades of perpetual change, Garden City’s experience exemplifies Erickson’s (2014) point that educational practice occurs in contexts where “the future continues to be original, the local refuses to hold still” (p. 3). Educators appreciate and seek multiple forms of data to inform their work (Honig & Venkateswaran, 2012), and those data should be thoughtfully conceived and considered in order to be put to good use. If only what can be counted through a few officially sanctioned measures will count, then it is necessary to question what has been dis-counted through processes of school-wide reform meant to improve educational practice.


Acknowledgement


Funding for this research was provided in part by grants from the Spencer Foundation and the University of Kansas General Research Fund. We are also grateful to participants in the 2014 Hall Center for the Humanities Fall Faculty Colloquium on Decolonizing Knowledge at the University of Kansas for their helpful comments during the early development of this article.


References


Agar, M. (2008). The professional stranger: An informal introduction to ethnography. Bingley, UK: Emerald Group Publishing.


Agar, M. (2013). The lively science: Remodeling human social research. Minneapolis, MN: Mill City Press, Inc.


Albers, C. A., & Martínez, R. S. (2015). Promoting academic success with English language learners: Best practices for RTI. New York, NY: Guilford Publications.


Artiles, A. J., & Kozleski, E. B. (2010). What counts as response and intervention in RTI? A sociocultural analysis. Psicothema, 22, 949–954.


Berends, M., Bodilly, S. J., & Kirby, S. N. (2002). Facing the challenges of whole-school reform: New American Schools after a decade. Santa Monica, CA: RAND.


Berliner, D. C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18–20.


Biesta, G. (2007). Why “what works” won’t work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22.


Borman, K. M., Carter, K., Aladjem, D. K., & LeFloch, K. C. (2004). Challenges for the future of comprehensive school reform. Washington, DC: The National Clearinghouse for Comprehensive School Reform.


Borman, G. D., Hewes, G. M., Overman, L. T., & Brown, S. (2003). Comprehensive school reform and achievement: A meta-analysis. Review of Educational Research, 73(2), 125–230.


Coburn, C. E., & Turner, E. O. (2012). The practice of data use: An introduction. American Journal of Education, 118(2), 99–111.


Cross, C. T. (2004). Political education: National policy comes of age. New York, NY: Teachers College Press.


Daly, E. J., Hofstadter, K. I., Martinez, R. S., & Andersen, M. (2010). Selecting academic interventions for individual students. In G. G. Peacock, R. Ervin, & E. J. Daly (Eds.), Practical handbook of school psychology: Effective practices for the 21st century (pp. 115–132). New York, NY: Guilford Publications.


Desimone, L. (2002). How can comprehensive school reform models be implemented? Review of Educational Research, 72(3), 433–480.


Donmoyer, R. (2014). What if educational inquiry were neither a social science nor a humanities field? Revisiting Joseph Schwab’s “The Practical” in the aftermath of the science wars. Education Policy Analysis Archives, 22(8).


Eisenhart, M., & Towne, L. (2003). Contestation and change in national policy on “scientifically based” education research. Educational Researcher, 32(7), 31–38.


Erickson, F. (2014). Scaling down: A modest proposal for practice-based policy research in teaching. Education Policy Analysis Archives, 22(9). http://dx.doi.org/10.14507/epaa.v22n9.2014


Erickson, F., & Gutierrez, K. (2002). Culture, rigor, and science in educational research. Educational Researcher, 31(8), 21–24.


Ferris, J. M. (1992). School-based decision making: A principal-agent perspective. Educational Evaluation and Policy Analysis, 14(4), 333–346.


Fluehr-Lobban, C. (1994). Informed consent in anthropological research: We are not exempt. Human Organization, 53(1), 1–10.


Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness to intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research and Practice, 18(3), 157–171.


Geertz, C. (1976). From the native’s point of view: On the nature of anthropological understanding. In K. Basso & H. Selby (Eds.), Meaning in anthropology (pp. 221–237). Albuquerque, NM: University of New Mexico Press.


Harn, B., Parisi, D., & Stoolmiller, M. (2013). Balancing fidelity with flexibility and fit: What do we really know about fidelity of implementation in schools? Exceptional Children, 79, 181–193.


Honig, M. I., & Venkateswaran, N. (2012). School-central office relationships in evidence use: Understanding evidence use as a systems problem. American Journal of Education, 118(2), 199–222.


Howe, K. R. (2009). Isolating science from the humanities: The third dogma of educational research. Qualitative Inquiry, 15(4), 766–784.


Kansas MTSS. (2008). Overview. Retrieved from www.kansasmtss.org/overview.htm


Klingner, J. K., & Edwards, P. A. (2006). Cultural considerations with response to intervention models. Reading Research Quarterly, 41(1), 108–117.


Kretlow, A. G., & Helf, S. S. (2013). Teacher implementation of evidence-based practices in Tier 1: A national survey. Teacher Education and Special Education, 36(3), 167–185.


Labaree, D. (2004). The trouble with ed schools. New Haven, CT: Yale University Press.


Lortie, D. (1975/2002). Schoolteacher: A sociological study. Chicago, IL: University of Chicago Press.


Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform practice. Educational Psychologist, 47(2), 71–85.


Mandinach, E. B., & Gummer, E. S. (2013). A systemic view of implementing data literacy in educator preparation. Educational Researcher, 42(1), 30–37.


Martínez, R. S. (2014). Best practices in instructional strategies for reading in general education. In A. Thomas & P. Harrison (Eds.), Best practices in school psychology VI. Bethesda, MD: National Association of School Psychologists.


Martínez, R. S., & Nellis, L. (2008). A school-wide approach for promoting academic wellness for all students. In B. Doll & J. Cummings (Eds.), Transforming school mental health services (pp. 143–164). Thousand Oaks, CA: Corwin Press.


McCook, J. E. (2006). The RTI guide: Developing and implementing a model in your schools. Horsham, PA: LRP Publications.


Merriam, S. B. (2002). Qualitative research in practice: Examples for discussion and analysis. San Francisco, CA: Jossey-Bass.


Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.


Molloy, L. E., Moore, J. E., Trail, J., Van Epps, J. J., & Hopfer, S. (2013). Understanding real-world implementation quality and “active ingredients” of PBIS. Prevention Science, 14, 593–605. doi:10.1007/s11121-012-0343-9


Phillips, D. C. (2014). Research in the hard sciences, and in very hard “softer” domains. Educational Researcher, 43(1), 9–11.


Reinke, W., Herman, K. C., & Stormont, M. (2013). Classroom-level behavior supports in schools implementing SW-PBIS: Identifying areas of enhancement. Journal of Positive Behavior Interventions, 15(1), 39–50.


Roderick, M. (2012). Drowning in data but thirsty for analysis. Teachers College Record, 114, 1–9.


Ross, S. M., & Gil, L. (2004). The past and future of comprehensive school reform: Perspectives from a researcher and practitioner. Washington, DC: The National Clearinghouse for Comprehensive School Reform.


Rowan, B., Barnes, C., & Camburn, E. (2004). Benefiting from comprehensive school reform: A review of research on CSR implementation. Washington, DC: The National Clearinghouse for Comprehensive School Reform.


Rubin, H. J., & Rubin, I. S. (2012). Qualitative interviewing: The art of hearing data. Los Angeles, CA: SAGE.


Ryan, G. W., & Bernard, H. R. (2003). Techniques to identify themes. Field Methods, 15(1), 85–109.


Sailor, W. (2009). Making RTI work: How smart schools are reforming education through schoolwide response-to-intervention. San Francisco, CA: John Wiley & Sons.


Sanjek, R. (2014). Ethnography in today’s world: Color full before color blind. Philadelphia, PA: University of Pennsylvania Press.


Santoro, D. A. (2011). Good teaching in difficult times: Demoralization in the pursuit of good work. American Journal of Education, 118(1), 1–23.


Sassen, S. (1990). Economic restructuring and the American city. Annual Review of Sociology, 16, 465–490.


Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Washington, DC: National Academy Press.


Simmons, D. C., Kame’enui, E. J., Harn, B., Coyne, M. D., Stoolmiller, M., Santoro, L. E., & Kaufman, N. K. (2007). Attributes of effective and efficient kindergarten intervention: An examination of instructional time and design specificity. Journal of Learning Disabilities, 40, 331–347.


Sink, C. A., & Ockerman, M. S. (2016). School counselors and the multi-tiered system of supports: Cultivating systemic change and equitable outcomes. The Professional Counselor, 6(3), v–ix. doi:10.15241/csmo.6.3.v


Spillane, J. P. (2012). Data in practice: Conceptualizing the data-based decision-making phenomena. American Journal of Education, 118(2), 113–141.


St. Pierre, E. (2002). “Science” rejects post-modernism. Educational Researcher, 31(8), 25–27.


Stringfield, S. (2000). Choosing success. Washington, DC: American Federation of Teachers.


Stull, D., & Ng, J. (2016). Majority educators in a United States minority/immigrant public school district: The case of Garden City, Kansas. Human Organization, 75(2), 181–191.


Tyack, D., & Cuban, L. (1997). Tinkering toward utopia: A century of public school reform. Cambridge, MA: Harvard University Press.


Wohlstetter, P., Datnow, A., & Park, V. (2008). Creating a system for data-driven decision-making: Applying the principal-agent framework. School Effectiveness and School Improvement, 19(3), 239–259.




Cite This Article as: Teachers College Record Volume 121 Number 1, 2019, p. 1-26
https://www.tcrecord.org ID Number: 22476

About the Author
  • Jennifer Ng
    University of Kansas
    JENNIFER NG is an Associate Professor in the Department of Educational Leadership and Policy Studies at the University of Kansas. Her research focuses on understanding the work of educators in varied social contexts, especially with race, class, and other equity concerns as primary features.
  • Donald Stull
    University of Kansas
    DON STULL is Professor Emeritus of Anthropology at the University of Kansas, where he taught from 1975 to 2015. His research and writing focus on the meat and poultry industry in North America, rural industrialization and rapid growth communities, and industrial agriculture’s impact on farmers, processing workers, and rural communities.
  • Rebecca Martinez
    Indiana University Bloomington
    REBECCA MARTINEZ is an Associate Professor in the Department of Counseling and Educational Psychology at Indiana University Bloomington. She is a former classroom teacher and school psychologist. The ecological framework guides all of her research, and implementation science—translating best practices for use by real schools, with real teachers, and real children and families—is what inspires her most.
 