Classroom, School, and District Impacts on Diverse Student Literacy Achievement

by Kristen Campbell Wilcox, Hal A. Lawson & Janet Angelis, 2015

Background/Context: Prior research has investigated the literacy achievement gap with particular focus on ethnically and linguistically diverse students’ performance. This study extends that research by examining the relationships among classroom instructional practices, school priorities, and district policies in higher performing schools.

Purpose/Research Questions: The purpose of the study was to identify differences between schools with typical literacy performance among diverse students and schools where diverse students exceeded predicted performance. Primary research questions were: What qualities of literacy instruction are characteristic of elementary schools with higher literacy achievement among ethnically and linguistically diverse students? Compared to schools with average literacy achievement among diverse students, what are the proximal and distal factors that describe and explain significantly different diverse student literacy achievement outcomes?

Setting: Fifteen elementary schools in New York State provided the sample. All serve ethnically and/or linguistically diverse students. Ten of these schools were classified as higher performing based on three years of state assessment data for diverse students; five were classified as average performing based on the same assessments.

Research Design: This study was a comparative multiple case study using mixed methods.

Data Collection/Analysis: Two researchers visited each school for two days interviewing 12–15 teachers and administrators using a semistructured interview protocol. They also collected documents (e.g., district goals, curriculum, lesson plans), and constructed interpretive memos. All interviews and memos were coded using HyperResearch software and documents were used to triangulate findings. Axial coding and matrices were used to identify salient proximal and distal contrasts between cases.

Findings: Practices between the two sets of schools differed at three levels: classrooms, schools, and district office. Differences included the extent and effectiveness of differentiated and technology-enriched literacy instruction and how coherently school and district policies and practices supported and sustained effective classroom practices. Higher performing schools showed evidence of the use of at least 90-minute balanced literacy blocks that embedded support (e.g., ESL and Special Education). Also in these schools, teachers reported relevant professional development and supports for collaborative work and instructional coaching.

Conclusions: Factors and forces in the classrooms, schools, and district offices, and especially in their relations, help to account for differences between the two sets of schools. These forces and factors are malleable and actionable, i.e., ones that school and district leaders can do something about in their quest to improve the literacy achievement levels of diverse students.

Wide achievement gaps between white native English speakers and students identified as African-American, Hispanic, Latino, and English learner (EL) persist in many schools across the United States (Vanneman, Hamilton, Baldwin Anderson, & Rahman, 2009). This problem, widely known as “the achievement gap,” is particularly important to address in today’s Race-to-the-Top environment because literacy achievement at the elementary level is associated with future academic success (Foster & Miller, 2007; Han, 2008; Suárez-Orozco et al., 2010). Toward this end, federal legislation has provided incentives for educators in U.S. public elementary schools to improve ethnically and linguistically diverse students’ literacy performance. Incentives specifically designed to address this achievement gap include the No Child Left Behind Act of 2001 as well as today's Race-to-the-Top agenda.

A number of studies have offered important contributions to this significant policy and practice problem. For example, some studies have identified and described salient student characteristics related to achievement gaps, while others have attended to macro-level contextual factors. Some such contextual factors are external to schools, especially high density poverty in surrounding communities. Such place-based poverty is associated with multiple outcome disparities, including suboptimal school outcomes for schools serving substantial numbers of poverty-challenged, diverse students (Bell & Lee, 2011; Elias & Haynes, 2008; Haynes, 2002; Holme & Rangel, 2012; Liew, Chen, & Hughes, 2009; Wilson, 2011).

Fortunately, some researchers have resisted the temptation to attribute all relevant achievement differences to individual student characteristics or to external forces such as poverty. These other groups of researchers have looked for and found identifiable classroom, school and district factors that relate to students’ academic performance (Clewell, Campbell, & Perlman, 2007; Datnow, Lasky, Stringfield, & Teddlie, 2005). For example, several studies have suggested that classroom instruction characterized as differentiated and culturally responsive has positive effects on the literacy achievement of ethnically and linguistically diverse students (August & Shanahan, 2006; Dixon et al., 2012; Santamaria, 2009; Slavin, Lake, Chambers, Cheung, & Davis, 2009; Walpole, Justice, & Invernizzi, 2004). Other studies have attended to organizational forces and factors. For example, several studies suggest that school and district-level practices and policies impact teachers’ work, especially the quality of their classroom literacy instruction (e.g., McGee, 2004).

Significantly, the majority of the above-referenced studies attend to just one or two of three important units of analysis. The units are classrooms, especially teachers’ pedagogical practices; schools; and district offices, especially district-level priorities, mandates, and district-wide school and classroom alignment mechanisms. Research is needed that attends to all three units of analysis in the same studies. These studies need to be designed with particular attention to the relations and interactions among classroom practices, school improvement priorities, and district-level priorities, mandates, and alignment mechanisms.

This comparative case study of 15 elementary schools was designed in response to this need. It was structured to identify and describe relevant classroom, school, and district factors associated with the comparatively higher levels of diverse elementary school students’ literacy achievement. Comparisons were of interest and made possible because targeted “achievement gap students” in 10 of the 15 schools demonstrated higher than predicted performance on literacy assessments. The aim for this research was to find out how and why these 10 schools had comparatively higher performances than the five average-performing comparison schools. Special interest resided in malleable and actionable factors, i.e., ones that school and district improvement leaders can target in their improvement plans in order to close the literacy achievement gap. In contrast to studies that focus on a few such malleable, actionable factors involving just one unit of analysis (e.g., classrooms only), this study proceeded with the search for relationships among the several malleable factors in classrooms, entire schools, and the district office.

In alignment with this focus, we emphasized in the study’s design a view of classroom instruction as interdependent and coevolving with organizational structures, processes, and practices at three interacting levels—classroom, school, and district. In other words, our study was predicated on twin assumptions. School-wide and district-level factors influence classroom climates and teachers’ pedagogy; and reciprocally, teachers’ pedagogy and their classroom climates influence the host schools and districts as organizations. To reiterate: Three units of analysis and their relations are implicated here—classrooms, schools, and district offices—and they are mutually constitutive in our theoretical framework.


As with all manner of case study research, one of this study’s primary aims was to articulate theory (Yin, 2014). Socioecological theory framed this study, informing research design and guiding our interpretation of the findings. This theory highlights an important aspect of the achievement gap problem—namely, how classroom instruction relates to the school and district support systems that surround it. This theory conceptualizes social systems as containing interdependent, nested levels, namely, micro-, meso-, and macro- (Bronfenbrenner, 1993; Wardle, 1996).

The micro-level (e.g., the classroom) is the most proximal system. It is where direct interactions occur, especially acts of teaching and learning involving teachers and students. The meso- and macro-levels implicate organizational forces and factors and may be understood as progressively more distal. In educational systems, these are the levels where decisions are made about goal setting, curriculum development and revision, opportunities for collaboration and professional development, resource allocation, staffing, and data monitoring and use. Figure 1 provides a depiction.

Figure 1. An Overview of the Major Components of the Socioecological Framework


In contrast to categorical theories, which delimit the phenomena of interest, with little or no attention to their relationships and interactions, socioecological theory emphasizes relationships and interactions among micro-, meso-, and macro-levels, both in the here and now and also over time. Dynamic interactions among salient micro-, meso-, and macro-level factors are viewed as primary mechanisms for social construction and constitution of reality (Berger & Luckmann, 1967) in classrooms, schools and district offices.

Socioecological theory is especially salient to classroom researchers in search of organizational and policy influences on classrooms, pedagogy, and teachers’ work. Granting this potential, this theory poses challenges because it is inherently abstract and complex. One solution is to differentiate between proximal and distal forces, factors, and actions. In this application of socioecological theory, classrooms are proximal units of analysis. In this view, life in classrooms influences and is influenced by distal forces, factors, and actions, especially organizational structures and processes as well as by national and state policy mandates.

Recent research, increasingly conducted by teams, may not proceed with an explicit focus on socioecological theory and the multi-level interactions it emphasizes, but some of it recommends and justifies attention to both proximal and distal factors and their interactions. For example, Bryk, Sebring, Allensworth, Luppescu, and Easton (2010) identified and described five ecological-contextual factors that accounted for the effectiveness of elementary schools located in Chicago’s high poverty neighborhoods. With principal leadership as the key driver, the other four were instructional guidance, a student-centered learning climate, professional capacity, and strong parent-community ties: These factors cross micro-level (proximal) and meso-levels (distal). Consistent with socioecological theory, Bryk et al. (2010) emphasized the interactions among these five factors. These researchers’ interpretation of their findings recommends socioecological theory for studies such as ours:

Schools are complex organizations consisting of multiple interacting subsystems. Each subsystem involves a mix of human and social factors that shape the actual activities that occur and the meaning that individuals attribute to these events. These social interactions are bounded by various rules, roles, and prevailing practices that, in combination with technical resources, constitute schools as formal organizations. In a simple sense, almost everything interacts with everything else (p. 45).

More recently, Knapp, Copland, Honig, Plecki, and Portin (2014) have emphasized classroom, school, and district level relationships and interactions. Their findings from research on several districts and their constituent schools indicate that achieving desirable results is a never-ending adaptive challenge involving relations and alignment mechanisms among classrooms, schools, and districts. They coined the phrase “learning-focused leadership in action” as the overall strategy for improving instruction in order to achieve better outcomes. Such was the theoretical grounding and initial intent for our study.


The current study derived from a larger study of ethnically and linguistically diverse and special needs elementary student achievement (Wilcox, 2011). Our review of the literature for this study, together with the findings of our previous research, led us to expect that a variety of district, school, and classroom level factors would relate to diverse student literacy achievement (Slavin, 2008; Slavin et al., 2009; Wilcox, 2005, 2012). These factors included approaches to curriculum and academic goals; staff selection, leadership, and capacity building; instructional programs and practices; monitoring and use of data; interventions and adjustments; beliefs about teaching and learning, relationships, and resource allocation (Levine & Lezotte, 1990).

To inform the study reported here, we searched for literature that would help us to explain how proximal factors (e.g., what occurs in classrooms between students, and between students and teachers) and distal factors (e.g., what occurs in district and school offices between educators) relate to diverse student literacy achievement. In this search, we found a number of studies that investigated practices related to ethnically and linguistically diverse elementary student literacy achievement. Specific findings from some of the most salient studies are discussed in the following section.


Several researchers have conducted studies of balanced and differentiated literacy instruction and their impacts on student literacy achievement. Balanced literacy in these studies is described as an approach that blends holistic, literature-based reading and writing experiences with explicit skills instruction. It also includes differentiation techniques: providing scaffolding as needed and removing scaffolds as students gain independence with a particular skill or understanding of a concept.

Fitzgerald and Noblit (2000) focused their research on a balanced literacy program. In this naturalistic study of linguistically and ethnically diverse first-grade students, they inquired into how students learn about reading in the program and whether a balanced literacy program can be successful in raising literacy achievement outcomes. Findings suggested that students increased global and local understandings of reading, improved performance in word recognition and phonics, and showed evidence of enhanced understandings of reading and writing as a means for understanding and communication.

Bitter, O’Day, Gubbins, and Socias’s (2009) two-year study analyzed data from 101 elementary classrooms in nine high-poverty schools using a literacy observation tool that had been developed and tested by the Center for the Improvement of Early Reading Achievement (CIERA) in a set of comparable classrooms. Comparing their own data to those from the CIERA studies, Bitter and colleagues found that teachers in San Diego were embedding higher level discussion techniques within their balanced literacy practices at more than three times the rate of the teachers in the earlier CIERA studies. However, achievement was inconsistent depending on the qualities of the student population in the classroom. For example, if there were increases in the numbers of ELs in the class, modifications to the size of groups were needed to maintain the effectiveness of balanced literacy strategies.

Group size was found relevant to literacy outcomes in Kamps et al.’s (2007) quasi-experimental study as well. This study investigated outcomes for first- and second-grade ELs in six elementary schools in response to a three-tier model of intervention. Results suggested that students who received small-group (3–7 students) direct instruction achieved better outcomes on standardized tests (Woodcock and DIBELS) than those receiving instruction in larger groups (6–15 students).

O’Day’s (2009) study compared reading comprehension results for ELs and native English speakers using data from a three-year study of the implementation as well as qualitative data gathered from 133 teachers. She discovered an emerging awareness among teachers and administrators, not only of the need for differentiated instruction, but also of the need to tailor differentiated activities to meet the unique needs of ELs. For example, increased opportunities for student talk and consistent monitoring of ELs’ language skill development to inform classroom activities were found to be important to improving literacy outcomes.

This body of research highlights the value of the skilled use of balanced and differentiated instructional techniques on culturally and linguistically diverse students’ literacy achievement.


In other, yet related lines of inquiry, scholars have looked to identify unique characteristics of instruction that supports literacy development among culturally and linguistically diverse students specifically. In these studies, culturally relevant pedagogy (a pedagogy predicated on the idea that classroom cultures should be congruent with students’ home and community cultures) has been utilized. Culturally responsive pedagogy is characterized by drawing on students’ cultural and linguistic knowledge, prior experiences, and performance styles in adapting instruction and curricula. This approach has been proposed as a necessary alternative to what has been called “a diversity-blind differentiated instructional approach” (Banks, 1993; Ladson-Billings, 1994, 1995; Lee, 1999).

In one study, Santamaria (2009) examined the potential for increasing diverse student achievement by combining differentiated instruction with culturally responsive pedagogy. In this five-year case study of two culturally and linguistically diverse, high-achieving elementary schools, Santamaria found that schools that had been identified for higher academic achievement and closing achievement gaps were using culturally responsive teaching strategies as well as differentiated instructional methods.

Other researchers drawing on second language acquisition (SLA) research, have focused their inquiry on the kinds of adaptations to language that best serve English learners’ literacy development. Research on the Sheltered Instructional Observation Protocol (SIOP), for example, focuses on the impacts of instructional protocols that encourage teachers to develop ELs’ language skills while learning content. The SIOP approach includes techniques such as tapping prior knowledge and scaffolding with close attention to complexity of language form. Several studies of the SIOP (e.g., Cajkler & Hall, 2012a, 2012b; Dixon et al., 2012; Echevarria, Richards-Tutor, Canges, & Francis, 2011; Echevarria, Short, & Powers, 2006; Echevarria, Vogt, & Short, 2004; Short, Fidelman, & Louguit, 2012) indicate that its use positively impacts EL achievement outcomes.


Few studies have identified impacts of the uses of technology in diverse school settings, and even fewer have focused on the interactions of students with the technology. Key findings in those studies that have examined interactive technologies such as computer-assisted reading programs suggest that these technologies afford teachers the opportunity to differentiate literacy instruction for students and thereby target individual learning needs (Blachowicz et al., 2009; Chambers et al., 2008; Macaruso & Rodman, 2011; Macaruso & Walker, 2008; Tracey & Young, 2007).

For example, Blachowicz et al. (2009) found that in classrooms where each student had access to a computer-assisted reading program designed to engage first graders in developing early reading skills, student performance improved significantly on standardized reading tests. The program involved learning stations through which small groups of students rotated, with individuals advancing at the pace of their own progress. The researchers, who observed the implementation from its inception, interviewed teachers and students, and collected performance data, concluded that this use of technology facilitated targeted literacy instruction and more flexible grouping of students, which was associated with higher student performance. The study was conducted in 18 first-grade classrooms in 11 low-performing urban elementary schools with high numbers of African-American and Latino students.

In the same vein, Tracey and Young (2007) found that kindergarten students at risk for reading failure in an urban elementary school with high numbers of African-American and Hispanic students made greater gains on tests of key reading readiness indicators such as letter recognition, vocabulary, and comprehension than students in the control group. Their study involved eight experimental and seven nonintervention classrooms averaging 20 students each and using the same commercial literacy program for instruction. In addition, students in the experimental classrooms received instruction in targeted literacy skills using interactive software for approximately 15 minutes per day. Based on pre- and post-scores on two early reading assessments, students in the experimental group made greater gains than the control group, although gains on a test of auditory conceptualization (i.e., phonemic processing) were statistically insignificant.

Moreover, comparative studies conducted by Macaruso and Rodman (2011) and Macaruso and Walker (2008) of a computer-assisted instructional program have also indicated that technology used as a supplement to direct literacy instruction and supported through school-wide literacy initiatives was related to improved literacy achievement for struggling and low-performing readers. Similarly, Chambers et al. (2008) found that a group of first-graders in two high-diversity urban schools made gains in reading scores following literacy instruction that included computer-assisted tutoring aligned with the reading curriculum.


As mentioned earlier, part of the impetus of the current study was to articulate more clearly how practices, processes, and procedures across classrooms, schools, and districts relate to different literacy outcomes among diverse elementary students. Of the literature found relevant to the current study, few studies focused on what occurs beyond the classroom and in diverse school and district settings. Walpole et al.’s (2004) case study of a literacy initiative in a high-poverty elementary school was an exception. This study revealed the potential of school-wide coordinated literacy initiatives, characterized by administrator knowledge and support of differentiated, small-group, evidence-based instruction, to improve student literacy outcomes.

Knapp et al.’s (2014) summary evaluation of past research and their assessment of the priorities for future research round out our literature review. They concluded from their review that past research has emphasized distributed instructional leadership, school-district alignment mechanisms with particular reference to life in classrooms, and strategic resource allocations in service of equitable knowledge access and learning opportunities for students. Granting these contributions, they prioritized multi-faceted and multi-level research designs so that valuable knowledge can be gained about leadership for high-quality instruction.

What is missing from this body of work to date are systematic attempts to develop the conceptual footing of this generative idea and at the same time to explore it empirically, at multiple levels (e.g., both school and district level), across roles (not just the province of a single position, however central), with a focus on actual practice (not just role definitions, structures, policies for the “infrastructure” of leadership), and with a close eye to how leaders’ actions and interactions produce learning—for both young learners and adults. (Knapp et al., 2014, p. 9, emphasis in the original)

This methodological recommendation and theoretical rationale have informed our study.

With literacy instruction in real world classrooms as the primary focus, overall the research we have reviewed is delimited and restricted. It attends to the proximal features of classrooms, oftentimes without a research design that contextualizes classrooms and instruction in organizational contexts. Although some studies explore the broader contexts surrounding classrooms, and our review identified researchers’ recommendations for multi-level and multi-faceted research designs (e.g., Bryk et al., 2010; Knapp et al., 2014), we found no studies focused specifically on the consequential relationships between proximal and distal factors, i.e., relationships that serve to explain elementary schools’ comparatively higher literacy achievement. Such is the context for our study’s potential methodological and empirical contributions.


Drawing on the literature reviewed above and with the aforementioned socioecological theoretical framework guiding the research design, the current study was structured to address two primary research questions:

1. What qualities of literacy instruction are characteristic of elementary schools with higher literacy achievement among ethnically and linguistically diverse students?

2. Compared to schools with average literacy achievement among diverse students, what are the proximal and distal factors that describe and explain significantly different diverse student literacy achievement outcomes?


This multiple case study utilized quantitative methods for identification of the sample and qualitative methods for the collection and analysis of data.


One of the overarching goals of this study was to contrast the two kinds of schools, i.e., those with higher diverse student literacy achievement versus those with average achievement. It was guided by the question: What practices, processes, and procedures were prevalent in the schools with higher literacy performance among ethnically and linguistically diverse students?

The larger study, which encompasses this one, took place in New York State. This state has one of the highest proportions of ethnically and linguistically diverse student populations in the nation (US Bureau of the Census, 2010) and is also unusually diverse in terms of geography, urbanicity, and population density. In addition, New York provides a context of consistent standards and assessments as well as a large number of diverse schools from which to sample. As described below, from this sample we selected a set of 10 higher performing elementary schools representing a range of sizes and locales. We used measures of students’ performance on the required Grades 3–6 State English Language Arts (ELA) Assessment.1 The ELA assessments measure reading, listening, and writing performance using multiple choice, short answer, and paragraph-length writing tasks (New York State Education Department, 2010). Our definition of literacy for the purposes of this study is aligned with these competencies of reading, listening, and writing (although we recognize the limitations of this definition).

The case study schools were selected via regression analysis as part of an explanatory participant selection design (see Creswell & Plano Clark, 2007). The selection process began with a download of the publicly available ELA assessment performance data from the New York State Education Department website for the years 2007, 2008, and 2009 in Grades 3 through 6 (the grades for which assessment data at the elementary level are available). Relative performance on each assessment was defined as the percent of students meeting or exceeding the proficiency standard (by obtaining a score of three or four). In order to adjust relative performance scores for demographic characteristics known to relate to achievement, least-squares regressions were run, regressing relative performance on six background variables: percent African American, percent Hispanic, percent limited English proficiency (or EL), percent stability (percent of students in the highest grade who were also enrolled the previous year), percent of students eligible for free or reduced-price lunch, and number of students at the target grade level. From each regression, a standardized residual was obtained reflecting the over- or underperformance of each subgroup relative to the performance of similar students in other schools in the state.
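The selection logic described above can be sketched computationally as follows. This is a minimal illustration, not the authors’ analysis code: the data are fabricated, and two stand-in background variables are used in place of the study’s six.

```python
import numpy as np

def standardized_residuals(performance, background):
    """Regress relative performance (percent proficient) on background
    variables via least squares and return standardized residuals.

    A positive residual indicates a school's subgroup outperformed
    demographically similar schools; a negative one, underperformance.
    """
    # Add an intercept column to the predictor matrix.
    X = np.column_stack([np.ones(len(performance)), background])
    coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
    resid = performance - X @ coef
    # Standardize by the residual standard deviation (adjusting
    # degrees of freedom for the number of fitted parameters).
    return resid / resid.std(ddof=X.shape[1])

# Illustrative (fabricated) data: 8 schools and two background
# variables standing in for the study's six (e.g., percent EL,
# percent free/reduced-price lunch eligibility).
rng = np.random.default_rng(0)
bg = rng.uniform(0, 100, size=(8, 2))
perf = 80 - 0.2 * bg[:, 1] + rng.normal(0, 3, size=8)

z = standardized_residuals(perf, bg)
# Schools with residuals more than one standard deviation above the
# mean would enter the "higher performing" pool under the study's design.
higher = z > 1.0
```

In the study itself, this step was repeated per assessment, year, and subgroup, with means and standard deviations of the residuals then computed across years and subgroups for each school.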

It is important to note that New York State reports subgroup performance only for subgroups of five or more students. With that restriction, Ns for the regressions ranged from a high of 1973 schools (for schools with economically disadvantaged students) to a low of 432 (for schools with English learners). In addition, to measure relative performance over time, means and standard deviations of the standardized residuals were calculated across years and subgroups for each school.

Our choice of a sample distribution of 10 higher performing and five average-performing schools was informed by two concerns. First, because one of the main purposes of the project was to make higher performing case studies publicly available to serve as models for school improvement efforts, we sought to overweight the sample of higher performing schools as much as possible. Second, based on previous multiple case studies conducted by this research team, five average-performing cases were found to provide sufficient redundancy in the data. This redundancy criterion is typically used to determine the extent of sampling appropriate in multiple case study designs (Merriam, 1997; Patton, 2002).

From the initial pool of 259 schools whose relative performance across time and subgroups was at least one standard deviation above the mean for the state, we identified a subsample of 10 “higher performing” schools that met the following criteria in addition to what has already been discussed: These particular schools had open-enrollment policies, were funded at or near the state average as measured by district per pupil expenditures, were distributed geographically across the state, and were of varying sizes so as to be representative of the diversity of schools in the state. This higher performing sample was then matched as closely as possible (based on the same criteria as explained above) with five “average-performing” schools whose ethnically and linguistically diverse students’ achievement in ELA was at or near the mean for the state.

Within the sample we sought comparability of schools by size of school, poverty (as indicated by free or reduced-price lunch eligibility), and linguistic and ethnic diversity. While we could not match every characteristic in each school to another, within the sample we could contrast schools with larger versus smaller enrollments, schools with higher or lower poverty, and schools with higher or lower diversity (see Appendix A for demographic characteristics). In the aggregate, the 10 higher performing schools chosen for the study had higher mean Z scores (+1.86) than average-performing schools (–0.03). The higher performing school sample also had higher percentages of English learners (17% compared to 12%) and higher percentages of African-American and Hispanic/Latino students (38% and 36% respectively) compared to the average-performing school sample (17% African-American and 33% Hispanic/Latino students). While not every individual school had higher percentages of every group than the average for the state, each higher performing school had higher percentages of at least one subgroup than the state average. Finally, the higher and average-performing schools in the final sample were similar with regard to teacher characteristics (e.g., numbers of years of experience, graduate study).

Although our sample was identified based on higher achievement among the subgroups of interest, this study has an unavoidable, inherent limitation. We acknowledge that subgroup identifications are problematic because they tend to reduce and essentialize, in simplistic ways, the de facto complex characteristics of individuals and their communities (see discussion of this issue in Abedi, 2004). We also must emphasize that the “higher performing” schools in our study are designated as such only in relation to our comparison schools; they are not the highest achieving schools in the state. Predictably, many of the highest-achieving schools are whiter, enroll more native English speakers, and are in wealthier districts.


Once schools were chosen and site visits arranged, a team of two field researchers visited each school. All field teams (leads and assistants) completed a certification course on human subjects research offered by the University’s Institutional Review Board and participated in a half-day orientation to the study that included an overview of the research literature, the study design, instruments, and procedures. Each lead researcher held an advanced degree in education and had at least 10 years of experience in administration or teaching. Each lead accompanied the Principal Investigator (PI) on a training visit before conducting her own site visit, and an assistant with a master’s or doctorate in education accompanied each lead on each site visit.

Before conducting the site visit, these teams mined the school’s website for documentary evidence and were provided a schedule by the project coordinator. A typical schedule included interviews with the Superintendent or an Assistant Superintendent of Curriculum and Instruction; District Director of Assessment, Human Resources, or Professional Development; a District Director or Coordinator of Special Education; a District Director or Coordinator for ESL; the School Principal or Assistant Principal; one to two 1st–6th grade ESL teachers; one to two 1st–6th grade Special Education or Inclusion teachers; three to four Mainstream classroom teachers; and the School Psychologist, School Counselor, or Social Worker.

Over the two days the research team conducted 12 to 15 interviews per site. Each lead researcher used a semistructured protocol (see Appendix B for the mainstream teacher protocol), and each interview lasted about an hour. Data collected in this study included 211 interviews (see Table 1), 104 of which were with teachers and the remainder with school and district administrators and instructional coaches.

Table 1. Data Sources (interviews by grade, Grades 1–6, in higher performing and average-performing schools)

We also collected documentary evidence in the form of district curriculum, lesson plans, procedures for the collection of student performance data, etc., totaling 135 pieces (samples of these documents are available on the project website). Finally, 15 interpretive memos were developed during these visits (one per visit); these included prompts for field teams to summarize what evidence had been gathered with regard to each of the research questions and what evidence was still sought, in addition to initial interpretations of the data.


Phase 1

Since the field researchers were aware of the higher or average-performing status of schools, we reduced the likelihood of bias affecting the interpretation of data by using methods typical in qualitative case study designs, in particular member checking and source triangulation (see Glesne & Peshkin, 1992; Lincoln & Guba, 1985). First, to organize the data for source triangulation, each case data set was coded with a priori codes in the qualitative software program HyperResearch (Researchware, 2009) (see Appendix C). We developed these codes in alignment with the research questions (i.e., context, beliefs, values; instructional programs and practices; curriculum and academic goals; leadership and capacity building; monitoring and compilation of data; and interventions and adjustments); each code was defined, and both the PI and one assistant worked to reach interrater reliability of .70 in applying these codes to the data. Also during this phase, each field team crafted a 10–12 page case study that was then edited by the project director and forwarded to school and district personnel to be member-checked for accuracy and modified as necessary.
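The interrater reliability check described above can be illustrated with a chance-corrected agreement statistic such as Cohen's kappa. This sketch is not from the study itself; the code labels and excerpt assignments below are hypothetical, and the study does not specify which reliability coefficient was used.

```python
# Illustrative sketch (hypothetical data): Cohen's kappa for two coders
# applying a priori codes to interview excerpts, the kind of interrater
# check reported in the text.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two sequences of code labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Proportion of excerpts on which the two coders agree exactly.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Agreement expected by chance, given each coder's marginal frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical labels assigned to ten interview excerpts by two coders.
a = ["instruction", "leadership", "instruction", "curriculum", "instruction",
     "monitoring", "curriculum", "instruction", "leadership", "monitoring"]
b = ["instruction", "leadership", "curriculum", "curriculum", "instruction",
     "monitoring", "curriculum", "instruction", "monitoring", "monitoring"]

print(round(cohens_kappa(a, b), 2))
```

A kappa of .70 or above, the threshold the authors report, is conventionally read as substantial agreement beyond chance.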

Phase 2

In the second phase of analysis we searched for patterns by generating code reports using HyperResearch—one for each of the higher performing and average-performing school sets and one for each of the major categories informing the research questions (e.g., instructional programs and practices). As we analyzed these code reports in ongoing reference to the individual school case studies and interpretive memos, we developed categories and dimensions. We recorded these categories and dimensions in a matrix that allowed us to compare across cases (see Appendix C) (Stake, 2008; Yin, 2005). For example, the dimension of “differentiated literacy instruction” saturated the data in the higher performing schools, as evident in coded examples such as this one from a Columbus Elementary School teacher discussing instruction for students identified with special needs: “[Special needs] children are pulled during enrichment period so they are getting the double weighted academic services that they need. During the day they are in their own room receiving reading instruction – with the teacher monitoring, doing guided reading, balanced literacy, modifying work in the classroom and differentiated learning.” If a pattern was evident in multiple sources within a particular case and was also exemplified in the documentary evidence and case study, a note was made in the matrix indicating evidence of a finding.

Once all of the data were analyzed in this way, two features of practice in the higher performing schools related to literacy achievement were identified as salient: (1) intensive and differentiated literacy instruction (all 10 of the higher performing schools, in comparison with one of the five average-performing schools, showed evidence of saturation in this category) and (2) technology-enriched literacy instruction (7 of the 10 higher performing schools, in comparison with none of the average-performing schools, showed evidence of saturation in this category). Even though each category was not saturated in every higher performing school (School 19, Lincoln, and Davison Avenue did not show evidence of technology use comparable to the other higher performing schools), and not every average-performing school lacked evidence in every category found in higher performing schools (Washington, for example, showed evidence of intensive and differentiated literacy instruction like the higher performing schools), taken together the categories marked salient contrasts overall.

Phase 3

In the final phase of analysis we relied upon theoretical propositions to identify relationships between factors at the classroom, school, and district levels, in alignment with the socioecological framework that informed our research design. The results were mapped onto the theoretical frame by identifying which practices were proximal and which were distal; this mapping is displayed in Figure 2. This graphic display is a strategy recommended by Strauss and Corbin (1998) to articulate the relationships between categories and dimensions as viewed through the theoretical lens informing the study.

Figure 2. Proximal and Distal Factors Related to Minority Student Literacy Achievement



A socioecological framework emphasizes both proximal factors (direct interactions between students and educators) and distal factors (e.g., interactions between teachers, noninstructional staff, and school and district administrators). These two kinds of factors are expected to interact and coevolve, and as they do, student performance is impacted. The theory also posits that distal influences are expected to have strong impacts in contexts with greater challenges (e.g., higher poverty) (Bronfenbrenner, 1993). Although distal influences include those at the macro level that reside outside school systems, as has been noted in other research, some distal factors that may also impact student achievement are malleable and within the purview of educators. Such influences include school and district organizational structures and norms, as well as beliefs about responsibilities and expectations for student achievement that are then enacted in processes, procedures, and practices (Bryk et al., 2010; Elmore, 2004; Knapp et al., 2014; Wilcox, 2013). Mirroring the current study design, other researchers have found these distal factors to be particularly important to ethnically and linguistically diverse student achievement outcomes (Coburn & Russell, 2008; Holme & Rangel, 2012).

In the following sections we describe what patterns we identified in educators’ approaches toward literacy and then show how higher and average-performing schools in this study contrasted with each other in terms of proximal and distal factors.


Our data suggest that a combination of balanced, intensive, and differentiated literacy instruction provided the foundation of the literacy programs in the higher performing schools in this study; the average-performing schools contrasted on each of these dimensions.

Balanced literacy was the model of instruction most often cited as guiding practice in both higher and average-performing elementary schools and was defined as the integration of phonics with literature/trade book study and the use of small, guided, and independent group reading. Although teachers in both higher and average-performing schools described using balanced literacy approaches, what distinguished the two groups was how much balanced literacy instruction was offered and how effectively that instruction was differentiated to meet diverse students’ needs.

Intensive, Dedicated Literacy Blocks

At higher performing Lincoln Elementary School, a teacher explained that on a typical day a second grade child had three periods of English Language Arts: one period focused on “shared and guided reading,” another on “skills, hands-on activity, fantasy,” and another on “practice and review in differentiated groups.” As at Lincoln, in the other higher performing schools studied, blocks of intensive literacy instruction of the kind described by this teacher lasted at least 90 minutes. An important distinction between higher and average-performing schools was that in the higher performing schools these blocks were not interrupted for interventions. Rather, literacy time was seen as “sacred” (in the words of a John F. Kennedy Elementary School teacher), and specialists (e.g., special education, ESL, literacy coaches) were often reported as heavily utilized within the mainstream classroom during these intensive literacy blocks. As evidence of this pattern, a Centennial Avenue Elementary School teacher explained how literacy instruction was approached in her school:

The dedicated 90-minute literacy block with a structured mini lesson, reader’s workshop, learning centers, guided reading, and writing opportunities has been a key reform. I think it has benefited our students the most and strengthened our ELA instructional program.

In contrast, teachers and administrators in the average-performing schools spoke about their need to better protect the literacy block from intrusions. As evidence of this pattern, the principal at Washington Avenue Elementary told of her struggle: “Literacy support to . . . targeted students . . . should not be during literacy instruction time. It must be in addition to it.” As at Washington Avenue, administrators at Duncan Elementary had come to recognize the importance of dedicated and uninterrupted literacy blocks and were moving to implement the practice but had not quite achieved this aim at the time of the study. A district administrator described her initial attempts to push services into the literacy block and to “create a team to look at student performance and begin to differentiate.” A district administrator at Jahn Meadows admitted that the amount of time spent on literacy from classroom to classroom was unclear: “[The blocks are] hopefully for 90 minutes.” While many teachers in average-performing schools recognized the value of uninterrupted intensive literacy instruction, and some did offer 90-minute blocks of literacy instruction, inflexible school schedules and problems coordinating the use of resource staff during these blocks were notable obstacles in average-performing schools, ones that higher performing schools had surmounted.

A Coherent Program With Differentiated Instruction at the Center

A second and related distinguishing characteristic between higher and average-performing schools was how effectively the instruction in literacy blocks was delivered. For example, in higher performing Columbus Elementary School, instruction was guided by the philosophy that everyone can learn, that instruction needs to be individualized and differentiated, that scaffolding is essential, and that this work is a whole school effort. These aspects of practice are evidenced in the following statements, the first from a Columbus teacher and the second from the principal: “Even if something works, we try to find something better. If it works for 80% [of students], we try to find something that works for 100%.” “Whatever we need to do to help a student become literate, we will do.”

Through ongoing assessment, teachers stated that their focus was on determining what students knew (or didn’t), breaking procedures down and building students’ knowledge and skills up step by step. Every Columbus teacher had been trained in the Sheltered Instruction Observation Protocol (SIOP), and those interviewed attested to using it. All lessons based on the SIOP model included a content objective and a language objective, content-obligatory and content-compatible language, and an assessment. The lessons were purposively linked to background knowledge and past learning. Reading, writing, speaking, and listening were integrated, and lessons were required to provide modeling, guided practice, and group practice. Teachers indicated that students were involved in hands-on, meaningful activities that promoted engagement, and they described using workshop models in both their literacy block and the enrichment block (a block used as a resource for phonics/decoding skills, guided reading, word study/vocabulary, and building fluency). During the daily 90-minute literacy block in Grades 3, 4, and 5, ESL teachers co-taught with the mainstream teachers to assist in differentiating. In addition, Academic Intervention Services (AIS) for reading were provided during the 60-minute reading enrichment block (Walshhampton, 2011).

Some teachers in average-performing schools indicated that differentiation was important. However, many of them also stated that they did not feel capable of effectively delivering a differentiated lesson. Some attributed this gap to their own limitations in pedagogical knowledge, while others pointed to the lack of a coherent literacy program or the absence of literacy or other specialists able to embed in the mainstream classroom.

Some administrators in average-performing schools attributed teachers’ self-reported lack of competency to differentiate literacy instruction to the absence of a school or district vision for it. As the Teal River principal lamented, “There is nobody in the district that can say this is what it [differentiation] is. This is what it looks like, here is what we are doing, here is how you structure your time and here is why.” In another context (average-performing Munsee Elementary), the literacy program was seen as “in flux,” according to an administrator, and this contributed to inconsistencies in the quality of instruction from teacher to teacher and grade to grade: “All are expected to use the same programs, but how they bring it alive varies.” For Munsee English learners, a combination of pull-out and push-in models was being used, yet when pushing in, the ESL teachers expressed a desire to “be more involved in the whole class,” to “feel like I’m part of it.” A coherent differentiated approach and the integration of specialist expertise into mainstream classrooms to facilitate that differentiation, as evidenced in Columbus, were lacking or uneven in the average-performing schools.

Overall, teachers in higher performing schools attributed their self-reported competency to deliver high-quality differentiated instruction to professional development, collaborative work around curricula, and the support of literacy coaches and other specialists (ESL, special education, AIS). Their responses implicated both proximal and distal factors. Inter-school differences regarding these proximal and distal factors are displayed in Table 2.

Table 2. Proximal and Distal Factors Related to Intensive and Differentiated Literacy Instruction

Classroom level (proximal factors)

Average-performing schools: Inconsistent teacher competencies to differentiate literacy instruction.

Higher performing schools: High and pervasive teacher competencies to differentiate literacy instruction to ethnic and linguistic minority student needs.

School and district levels (distal factors)

Average-performing schools: No or ineffective school- or district-wide initiatives to support consistency in literacy programs; differentiated instructional approaches not supported across the school through professional development; literacy and other specialists not deployed strategically based on student performance; interrupted and/or shorter periods for literacy instruction in the school schedule.

Higher performing schools: High levels of consistency in instructional approaches from teacher to teacher and grade level to grade level due to school- and district-supported professional development opportunities; coherent literacy programs; ongoing collaboration around the curriculum and in-classroom support from literacy coaches and other specialists; uninterrupted literacy blocks planned in the school schedule and protected.

In sum, higher performing schools were distinctively different from their average-performing counterparts in important ways. Two proximal factors are salient: (1) higher performers offered comparatively more uninterrupted literacy instruction; and (2) in these same schools instruction was effectively differentiated.

Distal factors also were evident. Extended literacy blocks built into the school schedule, literacy coaches and specialists embedding support within classrooms, and opportunities to engage in professional development focused on differentiation techniques are all generated at the school and district levels, yet all relate to the quality of proximal interactions between students and teachers within the classroom.


Technology use provided another major contrast between higher and average-performing schools. Although educators in the majority of the schools studied were not satisfied with the amount of technology available to them and/or the adequacy of training to use it effectively in their classrooms, how technologies were used for literacy instruction and for ongoing review of student performance emerged as a salient contrast between the higher and average-performing schools.

Instructional Technology

Differences between higher and average-performing schools in their uses of instructional technology were consequential in two respects: (1) the alignment of instructional technologies with broader goals for developing literacy and (2) how “hands-on” technology was for students. In the average-performing schools, technology was typically used as a supplement, for its entertainment value, or to display information at the front of the classroom. By contrast, in several of the higher performing schools (Centennial, Columbus, Forest Road, John F. Kennedy, Martin Luther King Jr., and Pakanasink), teachers described technology as closely aligned with the literacy program and used it to give students control of pathways for learning, including how quickly and in what way they moved through tasks that would provide evidence of meeting learning objectives. This pattern is exemplified in a Columbus Elementary teacher’s response to a question about promising practices that she felt related to students’ literacy outcomes:

One promising practice is the program that they started with this year for reading; it’s one-to-one on the computer. It’s just incredible to see how some of these students have grown and how now they have the background knowledge for it. They can connect it. It’s right where they need to be. It’s engaging. The children love it. It’s been amazing, [for example], for teaching the easy ending. It has little songs for the little skills. Since they’ve done it on the program, they feel ‘I know this’.

The effective use of such technology in higher performing schools was accompanied by aligned planning and coordination at the school and district levels. In brief, the interaction between these distal factors and proximal classroom factors was important here. For example, the principal at the John F. Kennedy Elementary School described the significance of a school-wide commitment to supporting technology-enriched instruction to “differentiate ways for children to learn.” Whether iPads or one-to-one computers on a cart or in a lab, technologies in higher performing schools were generally used strategically, with the teacher having a clear understanding of how they would relate to broader school and district goals for literacy development. The connection at the district-level was evidenced in such messages as (this one from a Centennial Avenue Elementary district administrator): “I want to see the teacher using technology and the students engaged—not straight lecture” and “students learn by doing.”

One factor educators in average-performing schools cited as affecting their technology use was a lack of, or inconsistent, funding for technology, along with outdated equipment. The Duncan Elementary principal expressed her frustration:

We’re so far behind. I have three smart boards in the entire building [a K-8 building of more than 500 students]. We have some computers in the classrooms that are horrible -- some Apple IIs! I have some from [a corporate sponsor], but they came without screens. We are very far behind with technology.

As in the case of Duncan, some schools had benefited from corporate sponsorship or grant funding, but when the sponsor stopped upgrading equipment or the grant was over, practices related to the use of the technology were abandoned. One example of this came from Teal River, where the principal reported that after finally embracing some of the technology-related practices required by a federal Reading First grant, “As soon as it was done the majority [of teachers] dropped those practices.” In addition, lack of funding affected administrators’ abilities to provide professional development and embedded coaching to support effective instructional uses of technology. A Duncan School teacher explained, “The tech piece is one that needs to be as much a part of any conversation as textbooks, strategies and other pieces of the learning puzzle.”

Consistent with earlier findings (e.g., Blachowicz et al., 2009; Macaruso & Rodman, 2011; Macaruso & Walker, 2008; Chambers et al., 2008; Tracey & Young, 2007), performance of ethnic and linguistic minority students was higher in the schools where technology supplemented and complemented the literacy program. Another important element of technology use, also consistent with earlier studies, is that it was “in the hands of students.” Once given control, students were able to advance at their own pace.

Technology for Monitoring Performance

Most average- and higher performing schools also differed distinctly from each other in how technologies were used to monitor literacy performance. In higher performing schools, a variety of computer programs were used to collect data on students’ growth on literacy measures, and these data were used by school administrators to inform changes to instruction and the allocation of resources. For example, in the New Rochelle District, home of Columbus Elementary, an administrator described the principal as “extremely savvy on data analysis… so he knows what each individual student needs.” While similar data were available to school administrators in average-performing schools, higher and average-performing schools differed in how and how often teachers and school administrators discussed data and used them to inform curricular and instructional changes, resource allocation decisions, interventions, and plans for professional development.

Higher performing Forest Road Elementary provided an example of this practice. Educators in both the school and district offices emphasized using a variety of evidence not only to monitor the progress of individual students but also to measure the effectiveness of interventions. Performance data were the starting point for thinking about and discussing student growth in a systematic way, and then for measuring the impact of interventions on academic achievement. The process worked as follows: At the beginning of the school year, the central office presented district-wide data to all staff for all buildings and all cohorts, including all the subgroup categories (e.g., special education, English learner). This, according to district administrators, gave everyone a sense of “where we were better and where we could improve.” When teachers left that presentation, they found folders with data on their own students’ performance in their mailboxes. It was with these data in hand that principals dialogued with teachers about their goals for the year. A month later, the principal and teachers met again, looking this time only at the data for ESL and special education students, asking, “What/where can we push?” (Principal interview), and setting action plans. Using second grade as an example, data were produced using a universal screening in fall, winter, and spring. This screening was done on the computer. Students also took a reading benchmark assessment in fall and winter. “All teachers must give the unit tests,” said district administrators, and “any kid who scores below 80% gets a double dose,” that is, the teacher retaught that material to those students. The phonics program also had a unit test, and, again, teachers double dosed to make sure that students had learned targeted content: “Kids in the intervention are watched closely” (Principal interview).
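The "double dose" rule described by Forest Road administrators amounts to a simple flagging threshold. The sketch below is illustrative only: the student names and scores are hypothetical, and the district's actual screening software is not specified in the text.

```python
# Illustrative sketch (hypothetical data): the decision rule Forest Road
# educators described, where any student scoring below 80% on a unit
# test is flagged for a "double dose" of reteaching.
unit_test_scores = {
    "student_a": 92, "student_b": 76, "student_c": 81,
    "student_d": 64, "student_e": 88,
}

DOUBLE_DOSE_CUTOFF = 80  # percent, per the district rule quoted in the text

def flag_for_reteach(scores, cutoff=DOUBLE_DOSE_CUTOFF):
    """Return, in sorted order, the students whose scores fall below the cutoff."""
    return sorted(name for name, score in scores.items() if score < cutoff)

print(flag_for_reteach(unit_test_scores))
```

The point of the rule is its consistency: because every teacher gives the same unit tests, the same cutoff identifies students for reteaching across classrooms, which is what allows principals to monitor intervention students closely.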

In two of the average-performing schools educators reported using—or having once used—technologies to monitor student literacy development, but these were not necessarily integrated with their overall literacy program. Of those schools, Teal River had stopped using a commercial literacy survey because they had learned that it was unreliable and not well matched to their program; but a replacement had not yet been found. Riverview Landing used a variety of instruments that focused on word recognition or fluency but not comprehension. Similar to the way instructional approaches were inconsistent across teachers, the measures of student progress did not contribute to a coherent literacy program in the average-performing schools.

Overall, educators in the higher performing schools were more strategic in how they deployed technology. In addition to being an effective instructional tool for teachers and students, it enabled teachers and administrators to carefully monitor and respond to individual student needs. Teachers used it for that purpose, and administrators used results to help teachers focus on students most in need of further or differentiated instruction or other support. How the proximal and distal factors differed in average- and higher performing schools with regard to these dimensions is displayed in Table 3.

Table 3. Proximal and Distal Factors Related to the Use of Technology

Classroom level (proximal factors)

Average-performing schools: Technology controlled by teachers and used primarily for its entertainment value.

Higher performing schools: Technology oftentimes at the fingertips of students to allow for individual pacing and aligned with literacy programs.

School and district levels (distal factors)

Average-performing schools: Technologies bought and used not necessarily aligned to the literacy program; not used effectively for curriculum and instructional changes based on performance data, resource allocation, interventions, and professional development decisions.

Higher performing schools: Technology connected directly to the literacy program, providing ongoing performance data to inform curriculum and instruction, resource allocation, interventions, and professional development needs.

In sum, how technology was used to scaffold and appropriately pace instruction, how students were able to use it directly, and how it was utilized to monitor performance distinguished the higher from the average-performing schools. The distal resources, namely technologies directly connected to district-wide literacy programs and continual, systematic procedures for using technology-generated achievement data, informed curriculum, instruction, resource allocation, interventions, and professional development. These distal factors contributed to the quality of proximal interactions between students and teachers within the classroom.


Our findings suggest that higher literacy outcomes among elementary-level African-American, Hispanic/Latino, and English learner students were in part attributable to the relationships and interactions involving proximal factors in classrooms and aligned distal factors in schools and district offices. Significantly, these factors are malleable and actionable; that is, educators and improvement specialists can do something about them in order to make a positive difference in the literacy experiences and achievement levels of diverse students.

As noted above, at the classroom (microsystem, or proximal) level, we found that in the higher performing schools teachers’ abilities to provide differentiated literacy instruction were pervasive. We also found that their instruction was characterized by adaptations based on the unique characteristics of students within individual schools. This general ability, on close inspection, involves several related competencies, and these were developed collaboratively with specialists. At the same time, these practices were supported by school-wide and district-wide professional development focused on providing research-based differentiated practice. These findings indicate that higher literacy performance by African-American, Hispanic/Latino, and EL students in elementary school is related in part to appropriate affective and cognitive supports provided within the classroom and supported by policies and practices at the school and district levels as well. This finding aligns with other recent research that has found links between improved literacy performance and relevant, differentiated instruction (e.g., Lee, Deaktor, Hart, Cuevas, & Enders, 2005).

Another key instructional practice related to diverse students’ literacy achievement was the integration of technology, especially when it was aligned with the literacy program. Furthermore, technology was used in ways that maximized students’ hands-on use of it for self-pacing and to facilitate ongoing collection and analysis of data on literacy development, which then was used as evidence to inform instruction. These results echo those of other studies with regard to the use of technology-mediated instruction (e.g. Blachowicz et al., 2009; Chapelle, 2009) and the use of computer-generated achievement data to monitor progress (e.g., Coburn, Toure, & Yamashita, 2009).

This study also revealed that distal practices have important impacts on students’ immediate learning contexts and performance. Consistent with Knapp et al.’s (2014) framework, the higher performing elementary schools exhibited an important combination of leadership and organizational development and improvement. Although administrators in higher performing schools did not indicate being any more present in individual classrooms than their average-performing school counterparts, they paid close attention to achievement data and were successful at nurturing a climate in which improving literacy outcomes was understood as a top and shared priority. They also showed evidence of efficiency in directing efforts to improve student achievement outcomes through the use of data, embedding specialists in classrooms, and providing time and opportunities for teachers to engage in curriculum revision work. These findings also underscore the importance of adaptive leadership and structures at the school level (Knapp et al., 2014). Examples include school schedules and deployment of specialists in classrooms.

District-level distal factors also are consequential. Examples start with iterative decision making around how curriculum and literacy programs might be adapted and implemented and the ways in which resources might be allocated to schools. Overall, district leaders in higher performing school districts apparently optimize the conditions for high quality instruction, emphasizing district-wide and school-specific clarity and coherence of practice, while aligning resource allocations to instructional priorities.

In the same vein, our findings indicate that literacy achievement among African-American, Hispanic/Latino, and English learner students is related to distal supports for ongoing professional development on culturally responsive, differentiated, and technology-enriched techniques for literacy instruction. Given the sharp difference between the higher and average-performing schools in terms of teachers’ self-reported efficacy in differentiating instruction, providing professional development in the what, why, and how of differentiation is of utmost importance in schools striving to improve diverse students’ literacy performance. This professional development is most effective when embedded in a coherent literacy program that is school- and district-wide.

Another implication of our study and consistent with previous studies (e.g., Han, 2008; Slavin et al., 2009; Wilcox, 2007) is that district- and school-wide professional development focused on building teacher competencies for using technology for literacy development and monitoring performance is also most effective when supplemented by ongoing, classroom-embedded literacy specialist coaching and systematic procedures for the shared analysis of performance data.

The above findings are constrained by a manifest limitation, one illuminated by a robust application of socioecological theory. This study was intentionally delimited to investigate what occurs in school systems as bounded ecological units, with special interest in the factors available within the educational system to positively impact ethnically and linguistically diverse student achievement. Granting the importance of this study’s findings, they do not discount the influence of social and economic disadvantage, especially place-based, high-density poverty, on the literacy achievement of individual students or the overall performance of schools serving high-poverty populations (Sampson, 2012; Tate, 2012). Nor did this study attend to possible literacy achievement facilitators (e.g., parent engagement strategies, mental health services, health services), especially their contributions to learners’ readiness and engagement in the 10 higher performing schools. These and other manifest limitations provide opportunities for future research.


In contrast to studies that examine just one or two of the three levels of classrooms, schools, and districts, this study looked for and discovered relationships among them, with particular interest in the forces, factors, and actions that help to explain comparatively higher literacy performance among diverse youth. This study found that relationships and interactions involving district officials, school leaders, and teachers are consequential for teachers’ instructional practices and, in turn, for student and school literacy outcomes. Differences between higher performing and average-performing schools can be traced back to these interactions: specifically, to what these educators do and how they do it. Three descriptors serve to explain this special relationship among district office, schools, and classrooms: instructional practices are clear, coherent, and aligned. As a result, district officials, school leaders, and teachers are able to work together to achieve common purposes. Diverse students benefit.

By identifying the proximal and distal factors that are related to higher literacy achievement among diverse students and identifying the relationships between them, our findings highlight an important dialectical relationship. It is the relationship between proximal classroom practice and the distal structures, practices, and processes that co-evolve with classroom instruction.

Furthermore, this study’s findings suggest that these proximal (micro) and distal (meso) factors in higher performing schools differ in some important ways from those in average-performing schools, especially with regard to outcomes for diverse children. Specifically, in higher performers, literacy practices proceed with adaptive stances toward ethnic and linguistic diversity and the other unique characteristics children bring with them to school (Knapp et al., 2014). In the same vein, these practices include flexible approaches toward the use of time, space, and human resources.

The image of an organic human ecology of schools offers enlightening frames and discourses in this regard. Just as the functioning of the human body cannot be explained by anatomy alone and requires physiology, so too does the functioning of what might be called “a higher performing school” depend on descriptions and explanations that proceed beyond organizational structure. With this metaphor in mind, we have begun to frame and name these special instructional interactions and practices as constituting “the soft tissue” of classroom, school, and district ecologies because they give life to practice.

What is more, this soft tissue gives life to the structures of schools and districts and provides coherence across proximal and distal levels. In other words, school-wide instructional clarity, coherence, and alignment in classrooms do not result automatically from district and school structural mandates and incentives. They must be endorsed, internalized, and enacted by teachers, who make them a living system.

Once these “soft tissue” components are recognized as requiring concerted attention at three levels—classroom, school, and district—socioecological theory gains new import for future research because it promises more nuanced theory and more detailed, “actionable” descriptions of and explanations for practice. This study has emphasized the interactions between proximal and distal factors in inherently complex social ecologies (Figure 2). Far from the last word on the topics at hand, its findings hold promise for extant theories for instruction, school improvement, and school-district relations and alignment, including the effects of top-down improvement policies implemented with aspirations for district- and school-wide instructional coherence and effectiveness.

This study identified the proximal and distal factors related to higher African-American, Hispanic/Latino, and English learner student literacy achievement outcomes in a diverse sample of elementary schools. We assert that this line of inquiry, which views schools as organic ecologies wherein individuals (educators and students) are supported in their efforts within and across the interdependent and co-evolving systems comprised of classrooms, schools, and districts, is generative and greatly needed in the current context characterized by increasing demands for individual educator accountability for student achievement.

Together our findings frame an important question: When desirable literacy outcomes are not achieved at scale, particularly for so-called “achievement gap students,” what should teachers, other school leaders, and district officials prioritize and do? This study provides one set of answers. It has identified malleable, actionable proximal and distal factors that educators can do something about. Perhaps above all, it implicates a bottom-up approach to school and district policy and leadership practice, an approach grounded in instructional coherence and alignment in classrooms that serve diverse youth. Far from the final word on the subject, this study has achieved its primary aim if it provides both bottom-up and top-down policy and practice guidance while informing future research on this important topic.


1. Although the larger study used measures of students’ performance on the English language arts (ELA) and math assessments required by the State, the math and ELA Pearson correlation was high (.88), so a combined score was used. Our analysis in the current study focuses only on what was occurring with regard to literacy. We refer readers to the discussion of results related to math in the full report (Wilcox, 2013).


Abedi, J. (2004). The No Child Left Behind Act and English language learners: Assessment and accountability issues. Educational Researcher, 33(1), 4–14. doi:10.3102/0013189X033001004

August, D., & Shanahan, T. (Eds.). (2006). Developing literacy in second-language learners: Report of the National Literacy Panel on Language-Minority Children and Youth. Mahwah, NJ: Lawrence Erlbaum Associates.

Banks, J. A. (1993). The canon debate, knowledge construction, and multicultural education. Educational Researcher, 22(5), 4–14. doi:10.3102/0013189X022005004

Bell, J., & Lee, M. (2011). Why place and race matter. Oakland, CA: Policylink & The California Endowment. Retrieved from


Berger, P. L., & Luckmann, T. (1967). The social construction of reality: A treatise in the sociology of knowledge. Garden City, NY: Anchor Books.

Bitter, C., O'Day, J., Gubbins, P., & Socias, M. (2009). What works to improve student literacy achievement? An examination of instructional practices in a balanced literacy approach. Journal of Education for Students Placed at Risk, 14(1), 17–44. doi:10.1080/10824660802715403

Blachowicz, C. L. Z., Bates, A., Berne, J., Bridgman, T., Chaney, J., & Perney, J. (2009). Technology and at-risk young readers and their classrooms. Reading Psychology, 30(5), 387–411. doi:10.1080/02702710902733576

Bronfenbrenner, U. (1993). The ecology of cognitive development: Research models and fugitive findings. In R. H. Wozniak & K. W. Fischer (Eds.), Development in context: Acting and thinking in specific environments (pp. 3–44). Hillsdale, NJ: Erlbaum.

Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago: University of Chicago Press.

Cajkler, W., & Hall, B. (2012a). Languages in primary classrooms: A study of new teacher capability and practice. Language Awareness, 21(1-2), 15–32.

Cajkler, W., & Hall, B. (2012b). Multilingual primary classrooms: An investigation of first year teachers' learning and responsive teaching. European Journal of Teacher Education, 35(2), 213–228.

Chambers, B., Slavin, R. E., Madden, N. A., Abrami, P. C., Tucker, B. J., Cheung, A., & Gifford, R. (2008). Technology infusion in Success for All: Reading outcomes for first graders. Elementary School Journal, 109(1), 1–15.

Chapelle, C. A. (2009). The relationship between second language acquisition theory and computer-assisted language learning. Modern Language Journal, 93(s1), 741–753.

Clewell, B. C., Campbell, P. B., & Perlman, L. (2007). Good schools in poor neighborhoods: Defying demographics, achieving success. Washington, DC: The Urban Institute Press.

Coburn, C. E., & Russell, J. L. (2008). District policy and teachers’ social networks. Educational Evaluation and Policy Analysis, 30(3), 203–235. doi:10.3102/0162373708321829

Coburn, C. E., Toure, J., & Yamashita, M. (2009). Evidence, interpretation, and persuasion: Instructional decision making at the district central office. Teachers College Record, 111(4), 1115–1161.

Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed method research. Thousand Oaks, CA: Sage.

Datnow, A., Lasky, S. G., Stringfield, S. C., & Teddlie, C. (2005). Systemic integration for educational reform in racially and linguistically diverse contexts: A summary of the evidence. Journal of Education for Students Placed at Risk, 10(4), 441–453. doi:10.1207/s15327671espr1004_6

Dixon, L. Q., Zhao, J., Shin, J. Y., Wu, S., Su, J.H., Burgess-Brigham, R., Gezer, M. U., & Snow, C. (2012). What we know about second language acquisition: A synthesis from four perspectives. Review of Educational Research, 82(1), 5–60. doi:10.3102/0034654311433587

Echevarria, J., Richards-Tutor, C., Canges, R., & Francis, D. (2011). Using the SIOP Model to promote the acquisition of language and science concepts with English learners. Bilingual Research Journal, 34(3), 334–351.

Echevarria, J., Short, D., & Powers, K. (2006). School reform and standards-based education: A model for English language learners. Journal of Educational Research, 99(4), 195–210.

Echevarria, J., Vogt, M. E., & Short, D. J. (2004). Making content comprehensible for English learners: The SIOP model. Boston: Pearson Education, Inc.

Elias, M. J., & Haynes, N. M. (2008). Social competence, social support, and academic achievement in minority, low-income, urban elementary school children. School Psychology Quarterly, 23(4), 474–495. doi:10.1037/1045-3830.23.4.474

Elmore, R. F. (2004). Agency, reciprocity and accountability in democratic education. Philadelphia: Consortium for Policy Research in Education.

Fitzgerald, J., & Noblit, G. (2000). Balance in the making: Learning to read in an ethnically diverse first-grade classroom. Journal of Educational Psychology, 92(1), 3–22. doi:10.1037//0022-0663.92.1.3

Foster, W. A., & Miller, M. (2007). Development of the literacy achievement gap: A longitudinal study of kindergarten through third grade. Language, Speech & Hearing Services in Schools, 38(3), 173–181.

Glesne, C., & Peshkin, A. (1992). Becoming qualitative researchers: An introduction. White Plains, NY: Longman.

Han, W. J. (2008). The academic trajectories of children of immigrants and their school environments. Developmental Psychology, 44(6), 1572–1590.

Haynes, N. M. (2002). Addressing students' social and emotional needs: The role of mental health teams in schools. Journal of Health & Social Policy, 16(1/2), 109–123.

Holme, J. J., & Rangel, V. S. (2012). Putting school reform in its place. American Educational Research Journal, 49(2), 257–283. doi:10.3102/0002831211423316

Kamps, D., Abbott, M., Greenwood, C., Arreaga-Mayer, C., Wills, H., Longstaff, J., Culpepper, M., & Walton, C. (2007). Use of evidence-based, small group reading instruction for English language learners in elementary grades: Secondary-tier intervention. Learning Disability Quarterly, 30(3), 153–168.

Knapp, M. S., Copland, M. A., Honig, M. I., Plecki, M. L., & Portin, B. S. (2014). Practicing and supporting learning-focused leadership in schools and districts. New York: Routledge.

Ladson-Billings, G. (1994). The dreamkeepers: Successful teachers of African-American children. San Francisco: Jossey-Bass Publishers.

Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465–491.

Lee, C. D. (1999, April). Supporting the development of interpretive communities through metacognitive instructional conversations in culturally diverse classrooms. Paper presented at the Annual Meeting of the American Educational Research Association, Montreal, Canada.

Lee, O., Deaktor, R. A., Hart, J. E., Cuevas, P., & Enders, C. (2005). An instructional intervention's impact on the science and literacy achievement of culturally and linguistically diverse elementary students. Journal of Research in Science Teaching, 42(8), 857–887. doi:10.1002/tea.20071

Levine, D. U., & Lezotte, L. W. (1990). Unusually effective schools: A review and analysis of research and practice. Madison, WI: National Center for Effective Schools Research and Development.

Liew, J., Chen, Q., & Hughes, J. N. (2009). Child effortful control, teacher-student relationships, and achievement in academically at-risk children: Additive and interactive effects. Early Childhood Research Quarterly, 25(1), 51–64.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications, Inc.

Macaruso, P., & Rodman, A. (2011). Efficacy of computer-assisted instruction for the development of early literacy skills in young children. Reading Psychology, 32(2), 172–196. doi:10.1080/02702711003608071

Macaruso, P., & Walker, A. (2008). The efficacy of computer-assisted instruction for advancing literacy skills in kindergarten children. Reading Psychology, 29(3), 266–287. doi:10.1080/02702710801982019

McGee, G. W. (2004). Closing the achievement gap: Lessons from Illinois' Golden Spike high-poverty high-performing schools. Journal of Education for Students Placed at Risk, 9(2), 97–125. doi:10.1207/s15327671espr0902_2

Merriam, S. (1997). Qualitative research and case study applications in education. San Francisco: Jossey Bass.

No Child Left Behind Act of 2001, Pub. L. No. 107–110, Section 601 (2002).

New York State Education Department, Office of State Assessment. (2010). English language arts tests standard and performance indicator map with answer key: Grade 3. Retrieved from

O'Day, J. (2009). Good instruction is good for everyone—or is it? English language learners in a balanced literacy approach. Journal of Education for Students Placed at Risk, 14(1), 97–119. doi:10.1080/10824660802715502

Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage.

Race to the Top, S. 844--112th Congress (2011). Retrieved from

HyperResearch (Version 2.8.3). [Computer Software]. Nashville, TN: Researchware Inc.

Sampson, R. J. (2012). Great American city: Chicago and the enduring neighborhood effect. Chicago: University of Chicago Press.

Santamaria, L. (2009). Culturally responsive differentiated instruction: Narrowing gaps between best pedagogical practices benefiting all learners. Teachers College Record, 111(1), 214–247.

Short, D. J., Fidelman, C. G., & Louguit, M. (2012). Developing academic language in English language learners through Sheltered Instruction. TESOL Quarterly, 46(2), 334–361.

Slavin, R. E. (2008). Education reform requires teachers to apply research-proven methods. Education Journal, 110, 7–10.

Slavin, R. E., Lake, C., Chambers, B., Cheung, A., & Davis, S. (2009). Effective reading programs for the elementary grades: A best-evidence synthesis. Review of Educational Research, 79(4), 1391–1466. doi:10.3102/0034654309341374

Stake, R. E. (2008). Qualitative case studies. In Norman K. Denzin & Y. S. Lincoln (Eds.), Strategies of qualitative inquiry (3rd ed., pp. 119–149). Los Angeles: Sage.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Thousand Oaks, CA: Sage.

Suárez-Orozco, C., Gaytán, F. X., Bang, H. J., Pakes, J., O'Connor, E., & Rhodes, J. (2010). Academic trajectories of newcomer immigrant youth. Developmental Psychology, 46(3), 602–618. doi:10.1037/a0018201

Tate, W. F. (2012). Research on schools, neighborhoods, and communities: Toward civic responsibility. Lanham, MD: Rowman & Littlefield & American Educational Research Association.

Tracey, D. H., & Young, J. W. (2007). Technology and early literacy: The impact of an integrated learning system on high-risk kindergartners' achievement. Reading Psychology, 28(5), 443–467. doi:10.1080/02702710701568488

U.S. Department of Commerce, United States Census Bureau. (2010). Current Population Reports. Retrieved from

Vanneman, A., Hamilton, L., Baldwin Anderson, J., & Rahman, T. (2009). Achievement gaps: How Black and White students in public schools perform in mathematics and reading on the National Assessment of Educational Progress (NCES 2009-455). Retrieved from National Center for Education Statistics, Institute of Education Sciences website:

Walpole, S., Justice, L. M., & Invernizzi, M. A. (2004). Closing the gap between research and practice: Case study of school-wide literacy reform. Reading & Writing Quarterly, 20(3), 261–283. doi:10.1080/10573560490429078

Walshhampton, D. (2011). Columbus Elementary School: Meeting critical needs at the elementary level. Albany: State University of New York.

Wardle, F. (1996). Proposal: An anti-bias and ecological model for multicultural education. Childhood Education, 72(3), 152–156.

Wilcox, K. C. (2005). What makes elementary schools work. Albany, NY: State University of New York.

Wilcox, K. C. (with Angelis, J. I.). (2011). What works for diverse and special needs students: Best practices from higher-performing elementary schools. Albany, NY: State University of New York.

Wilcox, K. C. (2012). Diversity as strength: How higher performing schools embrace diversity and thrive. In A. Cohan & A. Honigsfeld (Eds.), Breaking the mold of education for culturally and linguistically diverse students: Innovative and successful practices for the 21st century (pp. 47–51). New York: Rowman & Littlefield.

Wilcox, K. C. (2013). A socioecological view of higher performing diverse elementary schools. Journal of Education for Students Placed at Risk, 11(2), 101–127.

Wilson, W. J. (2011). Being poor, Black, and American: The impact of political, economic, and cultural forces. American Educator, 35(1), 10–23.

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks, CA: Sage.


School Demographics and Z Scores

Columns: Grade Span; Total Enrollment; % F/R Lunch Eligible; % African-American; % Hispanic/Latino; % Other; Z Scores.

Higher performing:
 Dr. Charles T. Lunsford (K–6)
 Dr. Martin Luther King, Jr. (K–5)
 [name not shown] (K–5)
 John F. Kennedy (K–5)
 Centennial Avenue (K–5)
 [name not shown] (K–6)
 [name not shown] (K–5)
 [name not shown] (K–5)
 Davison Avenue (K–4)
 Forest Road (K–6)
 Mean higher performing

Average performing:
 [name not shown] (K–8)
 Jahn Meadows (K–5)
 Riverview Landing (K–6)
 Teal River (K–8)
 [name not shown] (3–6)
 Mean average-performing

Mean New York State

[Numeric cell values for this table were not preserved.]

All demographic data are from 2009–10. Z scores are based on 2007, 2008, and 2009 ELA assessment data.

Names of the higher performing schools are actual; leaders of those schools agreed to have their schools identified. Names of the average-performing schools are pseudonyms.


English Learner


Mainstream Teacher Interview


Please restate your name and your position.


How long have you been working as the <insert job title>?
What attracted you to this district?


To what do you attribute your school’s level of success with ethnically and/or linguistically diverse students?

How does your school define success for these students? How would you describe your success with these students as a teacher?


Please describe this school’s climate.

What are the key priorities for your school?

What are the primary challenges facing your school?


Describe opportunities you have to collaborate with ESL teachers.

How is this collaboration supported and sustained?

What is the typical content and outcome of these collaborations?


Can you tell me about your philosophy of teaching?

What, in your opinion, “works”? What doesn’t?

What do you believe is the role of the teacher, role of the student?

How do you see supporting all students’ learning in your classroom?

In a typical week, could you tell me about the modes of lesson delivery you use including ways you assess student learning? (lecture, small group, large group; projects, tests/quizzes)


What sorts of things have you done in your classroom that have enabled the academic success of ethnically and/or linguistically diverse students?


Describe how you monitor your students’ performance.

What assessment data do you use? How are those data used and by whom?

Can you give an example of what you have done to help a struggling student?


Did you learn about teaching ethnically and/or linguistically diverse students in your educational training?

 If so—how adequate was that training?

What qualities do you think your colleagues saw in you that contributed to your hiring at this school?


Have you learned about teaching ethnically and/or linguistically diverse students since you started teaching in this school?

Describe your experiences—how helpful have they been?

What kinds of things did you learn?


What kinds of challenges do you face in teaching ethnically and/or linguistically diverse students?

Can you provide an example of a way you have attempted to meet these challenges?


If you had to choose the one promising practice that positively impacts ethnically and/or linguistically diverse students in your school, what would it be?
Please describe and provide an example.


Is there any additional information about your experience teaching ethnically and/or linguistically diverse students you would like to share?


Data Analysis Procedures

Second phase of analysis: Matrix for comparing patterns


Columns: Higher Performing Schools; Average-performing Schools (individual schools abbreviated per the note below).
Rows: Intensive and differentiated literacy instruction; Technology-enriched literacy instruction.
[Cell entries (X marks indicating triangulated evidence) were not preserved.]

* Schools are abbreviated. Higher performing schools: CN – Centennial Avenue Elementary School; CO – Columbus Elementary School; DA – Davison Avenue Elementary School; FR – Forest Road Elementary School; JFK – John F. Kennedy Elementary School; LI – Lincoln Elementary School; MLK – Martin Luther King Jr. Elementary School; MA – Maybrook Elementary School; PA – Pakanasink Elementary School; S19 – Dr. Charles T. Lunsford School 19. Average-performing schools: DS – Duncan Road Elementary School; JS – Jahn Meadows Elementary School; RL – Riverview Landing Elementary School; TR – Teal River Elementary School; WA – Washington Ave. Elementary School

** X = evidence triangulated between interview, documentary evidence, and individual school case studies

Cite This Article as: Teachers College Record, Volume 117, Number 9, 2015, pp. 1–38.
