
Ready for School? Assessing America’s Kindergarteners

by Michael Little & Lora Cohen-Vogel - May 03, 2017

In this research note, the authors provide nationally representative evidence on the prevalence of school readiness assessments, how schools report using data collected from readiness assessments, and how the uses of these data have changed over time.


Ever since the National Education Goals Panel (NEGP) declared as its first goal that “all children will start school ready to learn” (Lewit & Baker, 1995, p. 125), the concept of school readiness has played a prominent role in early education discourse. Interest in school readiness has grown further in recent years, in light of increasing accountability pressure in the early grades (Bassok, Latham, & Rorem, 2016; Little & Cohen-Vogel, 2016; Pianta, Cox, & Snow, 2007) and mounting empirical evidence on the importance of children’s skills at school entry for later school success (Duncan et al., 2007; Duncan & Magnuson, 2011).

It is within this context that many schools have sought methods to gather information about the readiness of their incoming kindergarten students (Prakash, West, & Denton, 2003). As of 2014, at least 34 states had some form of school readiness assessment statute or regulation (National Conference of State Legislatures, 2014). In part, this reflects the Race to the Top Early Learning Challenge (RTT-ELC), a federal competition that has awarded roughly one billion dollars to 20 states since 2011. The RTT-ELC included a core funding priority that states develop “a credible plan to administer a kindergarten entry assessment, aligned with the Early Learning and Development standards, to all children who are entering a public school kindergarten by the start of the 2014–2015 school year” (USDOE, n.d.). The push to expand readiness assessment is motivated by a desire to understand the skills with which children enter school, both to look back and evaluate the efficacy of early childhood investments and to look forward and help schools understand and serve their incoming students (Howard, 2011).

As readiness assessments become commonplace, it is essential to examine how they are being used. The purpose of this research brief is twofold. First, through analysis of nationally representative data, we provide information on the prevalence of school readiness assessments as well as how schools report using the data generated from them. Second, we examine changes in the prevalence of readiness assessments and the use of data from them by comparing the two cohorts of the ECLS-K: one from the 1998–99 academic year and one from the 2010–11 academic year.

Conceptualizing School Readiness



Snow (2006) provides a conceptual overview of the school readiness construct and begins by noting that, “[t]here are few specific definitions of child school readiness, although there are numerous variations on these few themes. Simply put, school readiness refers to the state of child competencies at the time of school entry that are important for later success” (p. 9). Snow (2006) details three key theoretical perspectives that have shaped conceptions of school readiness over time. These include (a) maturational theory, (b) transactional or ecological theory, and (c) evolutionary developmental theory.

First, until recent decades, the maturational theory of school readiness prevailed: a child’s readiness for school was defined solely as a function of chronological age (Gesell, 1925; Snow, 2006). Even today, most states have kindergarten entry policies that are based on children’s chronological age (Education Commission of the States, 2014). The assumption implicit in these age-based entrance policies is that by a certain age (generally five years old) children will have reached the level of maturity necessary for school readiness.

Second, the transactional or ecological perspective offers another lens on the concept of school readiness (Pianta, Rimm-Kaufman, & Cox, 1999; Snow, 2006). This perspective recognizes that individual children exist and develop within a contextual system that shapes developmental outcomes. Brown and Lan note that an interactionist perspective “frames readiness as a ‘bidirectional concept’ that is co-constructed ‘from the child’s contributions to schooling and the school’s contribution to the child’” (Brown & Lan, 2015, p. 2). As Snow (2006) articulates, this perspective has led policymakers to consider children’s early school environments and the alignment of support services during early childhood. For example, the National Governors Association (NGA) Task Force on School Readiness called for elementary schools to support children’s transition into kindergarten by pursuing alignment with prekindergarten programs (NGA, 2005). Brown and Lan (2015) argue that this perspective is the most widely accepted in the early childhood education field.

The third key perspective highlighted by Snow (2006) is evolutionary developmental theory. Rooted in evolutionary developmental psychology (Geary, 2002), this perspective views schools as artificial settings where humans are specifically instructed in the skills and competencies required by adults in the community. Through evolution, a certain base set of abilities arises in all children when reared in typical or good enough environments (Scarr, 1992; Snow, 2006). A set of higher-level cognitive abilities is then developed through environmental experiences. Snow (2006) notes that, “when these experiences can be programmed to maximize the foundation laid by biologically primary cognitive abilities through designed school environments, the potential for the development of the secondary abilities is more fully exploited, and presumably children would be more successful in school (and throughout life)” (p. 14).

Despite the varying theoretical perspectives concerning school readiness, a consistent framework of school readiness has emerged in practice. First conceptualized by the NEGP, school readiness encompasses five different dimensions:

1.     Physical well-being and motor development.

2.     Social and emotional development.

3.     Approaches to learning.

4.     Language development (including early literacy).

5.     Cognition and general knowledge.

In addition to these five dimensions of children’s readiness for school, the NEGP also posited two additional dimensions of school readiness. These include a school’s readiness for children and the non-school support services that facilitate school readiness (Lewit & Baker, 1995).

Subsequently, the National Research Council adopted the NEGP framework to guide its study of developmental outcomes and the appropriate assessment of young children (Snow & Van Hemel, 2008). Most recently, the Obama Administration’s RTT-ELC required that applicants ensure that their plans covered the Essential Domains of School Readiness, which it defined using the five dimensions of the NEGP framework (USDOE, 2011). As a result of the RTT-ELC, this framework for conceptualizing school readiness has cascaded to numerous states across the nation and represents the current definition of school readiness in practice. Having detailed the theoretical and definitional issues regarding school readiness, we now turn to review the existing literature on the assessment of school readiness.

Research on School Readiness Assessment


Very little academic research has examined school readiness assessments directly; much of the existing work focuses on the technical details entailed in assessing young children (Epstein, Schweinhart, DeBruin-Parecki, & Robin, 2004; Scott-Little & Niemeyer, 2001). To date, we are unaware of any peer-reviewed research on the prevalence of school readiness assessments or on how they are used. There is, however, evidence on the matter from policy organizations, including the National Conference of State Legislatures (NCSL) and the Center on Enhancing Early Learning Outcomes (CEELO). For example, a recent NCSL report states that as of 2014 “at least 34 states and the District of Columbia have some form of school readiness assessment statute or regulation” (NCSL, 2014, p. 1). The report further notes that funding for early childhood assessment initiatives has increased since the RTT-ELC, covering both kindergarten entry assessments and more comprehensive early learning formative assessment systems that span kindergarten to third grade. While these reports describe the use of readiness assessments in terms of policy, we dig deeper by examining nationally representative survey data collected from school administrators themselves.

In 2008, the National Research Council (NRC) published a report highlighting the need to develop high-quality early childhood assessment procedures that capture children’s readiness on the five dimensions advocated by the NEGP. The report did not focus on readiness assessments alone; rather, it provided guidance for assessing young children in a number of different formats and contexts. The report cautioned against using assessments for high-stakes decisions such as evaluating teachers or delaying children’s entry into kindergarten (Snow & Van Hemel, 2008). Instead, the NRC stressed that assessment data should be used to give teachers the information necessary to individualize instruction and to drive program improvement (Snow & Van Hemel, 2008). Despite this caution, accounts suggest that data from early assessments are driving high-stakes decisions about programs and children (Little, Cohen-Vogel, & Curran, 2016).

The Present Study


Increasingly, the issue of school readiness is at the forefront of educational discussions. As schools respond to this increased focus, they are adopting tools, such as school readiness assessments, to facilitate their work. As these assessments become commonplace, it is critical to understand how they are being used, since they may serve constructive or harmful purposes. As the review of the existing literature has shown, there is a dearth of evidence on this matter. This research note begins to fill this gap by asking the following questions about school readiness assessments:

1. How prevalent are school readiness assessments in public schools?

2. In what ways do schools use the data from these assessments? Which uses are most prevalent?

3. In what ways are schools similar in their uses of data from readiness assessments?

4. How did the prevalence of readiness assessments and the use of data from them change between 1998–99 and 2010–11?

Method


Data for the present study come from the School Administrator Survey of the 1998–99 and 2010–11 cohorts of the ECLS-K. As part of this survey, elementary school administrators answered questions regarding their schools’ use of kindergarten readiness assessments. In the following sections, we detail the ECLS-K data, our approach for estimating nationally representative descriptive statistics, how we examine the extent to which changes occurred between the two cohorts, and our use of Latent Class Analysis (LCA) to reveal how schools cluster based upon their use of readiness assessment data.


Descriptive Statistics

This study leverages data from the 1998–99 and 2010–11 kindergarten cohorts of the ECLS-K, sponsored by the National Center for Education Statistics within the U.S. Department of Education. In this analysis, we focus solely on school-level data from the administrator survey collected in the base year (kindergarten). For both the 1998–99 and 2010–11 cohorts, the sample of schools is nationally representative in the base year (Mulligan, Hastedt, & McCarroll, 2012; Tourangeau, Nord, Lê, Sorongon, & Najarian, 2009). We used multiple imputation to address the bias introduced by missing data (Royston, 2004; Rubin, 2004). Multiple imputation is a technique that replaces missing values in a dataset with a range of possible values that represent the uncertainty regarding the correct value to impute. We imputed a total of 20 datasets (Young & Johnson, 2010). Our analytic samples included 630 public schools from the 1998–99 cohort and 708 public schools from the 2010–11 cohort.
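To make the imputation step concrete, the following is a minimal Python sketch of the multiple-imputation logic. It is our own illustration, not the authors' actual procedure: the toy data and the simple hot-deck draws stand in for a real imputation model, which would condition on covariates.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_once(y, rng):
    """Replace missing values with random draws from the observed
    values, so each completed dataset reflects imputation uncertainty."""
    y = y.copy()
    observed = y[~np.isnan(y)]
    missing = np.isnan(y)
    y[missing] = rng.choice(observed, size=missing.sum(), replace=True)
    return y

# Toy binary indicator with missing entries (np.nan).
y = np.array([1.0, 0.0, 1.0, np.nan, 1.0, np.nan, 0.0, 1.0])

# Create m = 20 completed datasets, mirroring the 20 imputations in the text.
m = 20
estimates = [impute_once(y, rng).mean() for _ in range(m)]

# Rubin's rules pool the analysis across completed datasets; the pooled
# point estimate is the mean of the per-dataset estimates.
pooled_mean = float(np.mean(estimates))
```

Analyzing each completed dataset and then pooling, rather than imputing once, is what keeps the standard errors honest about the missing data.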

We utilized Stata’s SVY survey data command set to conduct our descriptive analysis. While use of a single sample weight would yield accurate point estimates, the SVY feature in Stata uses the jackknife replication method to yield accurate standard errors for estimates. Without accounting for the clustering in the survey design, Stata would assume data were collected using a simple random sample and produce standard errors that are smaller than they should be. For each cohort, we utilized the school-level sample weight along with the corresponding replicate weights and used JK2 as the method of replication (Cameron & Trivedi, 2010).
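The replicate-weight logic can be sketched as follows. This Python example is illustrative only: the data, weights, and replicate perturbations are invented, and a real JK2 scheme builds each replicate from sampled PSU pairs rather than random perturbations.

```python
import numpy as np

# Toy data: a binary school-level indicator with a full-sample weight
# and simulated replicate weights (the ECLS-K supplies its own).
rng = np.random.default_rng(1)
y = np.array([1, 0, 1, 1, 0, 1, 1, 0], dtype=float)
w_full = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7])

n_reps = 40
# In JK2, each replicate zeroes one member of a sampled pair and doubles
# its partner's weight; here generic perturbed weights stand in.
rep_weights = w_full * rng.uniform(0.5, 1.5, size=(n_reps, y.size))

def wmean(y, w):
    """Design-weighted mean."""
    return float(np.sum(w * y) / np.sum(w))

theta = wmean(y, w_full)
theta_reps = np.array([wmean(y, w) for w in rep_weights])

# JK2 variance: sum of squared deviations of the replicate estimates
# from the full-sample estimate (multiplier 1 for the paired jackknife).
se_jk2 = float(np.sqrt(np.sum((theta_reps - theta) ** 2)))
```

The point is that the spread of the replicate estimates, not a simple-random-sample formula, supplies the standard error, which is why ignoring the design understates uncertainty.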

Each respondent was asked whether his or her school administers an assessment. If so, the respondent described whether or not the assessment data are used for each of the following six purposes: (a) to determine eligibility for enrollment when a child is below the cut-off age for kindergarten, (b) to determine children’s class placements, (c) to identify children who may need additional testing (e.g., for a learning problem), (d) to help teachers individualize instruction, (e) to support a recommendation that a child delay entry, and (f) other.

Our analysis began by estimating means and standard errors for each of the readiness assessment measures by cohort. To compare the cohorts, we calculated independent-samples t tests to ascertain whether observed differences were statistically significant. Having described the prevalence of readiness assessments, the use of data generated from them, and the changes between the two ECLS-K cohorts, we then conducted a Latent Class Analysis (LCA) to examine school profiles of data use with regard to readiness assessments.
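The cohort comparison reduces to an independent-samples t test on two design-weighted estimates. A minimal Python sketch, using illustrative numbers rather than the actual ECLS-K estimates and standard errors:

```python
import math

# Illustrative figures (not the actual ECLS-K values): the share of
# schools using data for class placement in each cohort, with assumed
# design-based standard errors.
p1, se1 = 0.30, 0.03   # 1998-99 cohort
p2, se2 = 0.42, 0.03   # 2010-11 cohort

# t statistic for the difference in weighted proportions, combining
# the survey-adjusted standard errors of the two independent samples.
diff = p2 - p1
se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
t_stat = diff / se_diff

# With large samples, |t| > 1.96 indicates significance at the 5% level.
significant = abs(t_stat) > 1.96
```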

Latent Class Analysis

LCA is a statistical method for identifying latent (unmeasured) class membership among individuals (here, schools) using a set of categorical variables (Halpin & Kieffer, 2015; McCutcheon, 1987). We use LCA to identify latent classes of schools that cluster together in how they report using the data generated from readiness assessments. We used Mplus, a latent variable modeling program, to conduct this analysis (Muthen & Muthen, 2014). The first step in an LCA is to determine the appropriate number of latent classes (Cn). To do this, we examined a number of alternative fit statistics for C1–8 class solutions. Specifically, we considered the log-likelihood, Bayesian Information Criterion (BIC), Sample-Size Adjusted BIC (ABIC), Akaike Information Criterion (AIC), and entropy for each candidate number of classes (Lubke & Neale, 2006, 2007). Once the appropriate number of classes was determined, we interpreted the profiles by examining response patterns across the latent classes, which we detail in the results section.
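The information criteria used to choose the number of classes are simple functions of a model's log-likelihood, parameter count, and sample size. The following Python sketch uses hypothetical log-likelihood values (the actual models were fit in Mplus); only the formulas and parameter counts are standard.

```python
import math

def fit_stats(loglik, n_params, n_obs):
    """Return (AIC, BIC, ABIC) for a fitted model; lower values
    indicate better fit relative to competing models."""
    aic = -2 * loglik + 2 * n_params
    bic = -2 * loglik + n_params * math.log(n_obs)
    # The sample-size-adjusted BIC replaces n with (n + 2) / 24.
    abic = -2 * loglik + n_params * math.log((n_obs + 2) / 24)
    return aic, bic, abic

# Hypothetical log-likelihoods for one- to three-class solutions on
# n = 538 schools with six binary indicators; a C-class model estimates
# 6C conditional probabilities plus C - 1 class proportions.
n_obs = 538
loglik = {1: -1900.0, 2: -1750.0, 3: -1745.0}
stats = {c: fit_stats(ll, 6 * c + (c - 1), n_obs) for c, ll in loglik.items()}

# Under these illustrative values, the two-class model minimizes the BIC.
best = min(stats, key=lambda c: stats[c][1])
```

The BIC penalizes extra parameters more heavily than the AIC as n grows, which is why adding a third class can raise the BIC even when the log-likelihood improves slightly.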


Results
Table 1 presents descriptive statistics for the assessment-related survey items, separated by cohort year. Leaders in approximately seven of 10 public schools report administering an entry assessment at the beginning of kindergarten. Between the two administrations of the ECLS-K, there was no statistically significant increase in the percentage of schools that did so. Perhaps this is not surprising, since federal incentives for developing and administering assessment systems were not announced until mid-2011, nearly a year after the school administrator survey was first fielded.

Table 1. Summary Statistics of Kindergarten Readiness Assessment Use, by ECLS-K Cohort


Note: All estimates are weighted to adjust for the ECLS-K complex survey design.

*p<0.05. **p<0.01. ***p<0.001

Table 2. Fit Statistics for LCA Solutions with One to Eight Classes


Figure 1.  Fit statistics for LCA solutions with one to eight classes


However, there was a notable shift in reported uses of these assessment data. Between 1998–99 and 2010–11, there was an increase in the percentage of schools that report using readiness assessment data to determine children’s class placement (30% to 42%), to help teachers individualize instruction (86% to 93%), and for other uses (11% to 34%). Conversely, there was a reduction in the percentage of schools that reported using these data to identify children who may be in need of additional testing (80% to 65%) and to support a recommendation that a child delay school entry (35% to 25%).


The LCA was conducted using the six survey items in Table 1 that capture how schools reported using the data from readiness assessments in 2010–11 (e.g., for classroom placements; to individualize instruction). Using the information criteria presented in Table 2 and Figure 1, a model with two classes (C2) was found to have the best fit. Increasing the number of classes to three resulted in a poorer fit based upon the AIC, BIC, and ABIC, and fit continued to worsen from C3–8. For the two-class solution, 171 schools were placed into class one and 367 schools into class two.

Figure 2. Probability of school’s use of readiness assessment data, by class membership


Figure 2 presents the probability that a school reported using readiness assessment data for each of the six possible applications, given its class membership. For example, with respect to using readiness assessment data to determine children's classroom placements, schools in class one have a 59% probability of reporting doing so, versus 37% for schools in class two. The largest discrepancy between the two classes concerns using assessment data to support a recommendation that a child delay entry into kindergarten: schools in class one are nearly 90% likely to do so, while schools in class two are less than 10% likely. The smallest discrepancy concerns using assessment data to individualize instruction, with class one slightly below 90% and class two slightly above 90%.

A key distinguishing factor between these two classes appears to be the likelihood that a school reports engaging in arguably high-stakes applications of readiness assessment data. As such, we label class one high-stakes assessors and class two non-high-stakes assessors. Three of the four largest differences between the classes involve arguably high-stakes uses (determining eligibility for a child near the cutoff age, determining classroom placements, and supporting a recommendation that a child delay entry into kindergarten). Moreover, the two classes are most similar on one of the lowest-stakes applications of the data, namely helping teachers individualize instruction. While these relationships are certainly not deterministic, there is evidence that schools cluster together based on their propensity to leverage readiness assessment data for high-stakes purposes. It is also important to note that class one (high-stakes assessors) represents a minority (32%) of the schools that reported using readiness assessments.

Discussion

We find mixed results when examining the uses of kindergarten readiness assessment data against the recommendations of the NRC report and concerns about inappropriate use. The two most common practices, using data to individualize instruction and to identify students who may need additional testing, conform to the guidelines for proper use. Yet, significant proportions of schools also report using data to determine children's class placements and, against NRC guidance, to support recommendations to delay kindergarten entry.

The use of data for determining class placements rose significantly between 1998–99 and 2010–11 (30% to 42%). This finding merits immediate attention. Research is needed to examine exactly how schools may be sorting children based on kindergarten assessment results. Are they using assessment scores to group students homogeneously into kindergarten classrooms or to ensure that each class has a similar mix of readiness competencies?

Finding significant shifts in the uses of readiness assessment data before the introduction of strong federal incentives was somewhat unexpected. More research is needed both to ascertain the reasons for the changes prior to 2011 and to examine usage patterns since RTT-ELC. We posit that potential explanations for pre-RTT-ELC shifts include an uptake in data-driven decision making in K–12 schools generally and an increased focus on academic content in kindergarten particularly (Bassok et al., 2016; Little & Cohen-Vogel, 2016).

Given that nearly all survey respondents report that teachers use kindergarten entry assessment data to individualize instruction, research should also investigate how teachers make sense of these data and alter their instructional practices as a consequence. Recent evidence from McLendon, Cohen-Vogel, and Wachen (2015) suggests that even when schools develop a culture of data use and teachers use data to identify where their students need help, teachers struggle to translate the findings into instructional modifications. As readiness assessments are built into a broader school improvement framework for the early grades, attention must turn to how these efforts shape instruction in pursuit of improved academic opportunities and outcomes for all students.


In sum, we argue that increased attention to the topic of early assessment is warranted. Given the high prevalence of readiness assessments and evidence that these data are being used for at least some high-stakes decisions, we must work to understand the effects of these practices, both intended and otherwise. As the NRC’s report noted, “assessments can make crucial contributions to the improvement of children’s well-being, but only if they are well designed, implemented effectively, developed in the context of systematic planning, and are interpreted and used appropriately” (Snow & Van Hemel, 2008, p. 7).

References

Bassok, D., Latham, S., & Rorem, A. (2016). Is kindergarten the new first grade? AERA Open, 2(1).

Brown, C. P., & Lan, Y. C. (2015). A qualitative metasynthesis comparing U.S. teachers’ conceptions of school readiness prior to and after the implementation of NCLB. Teaching and Teacher Education, 45, 1–13.

Cameron, A. C., & Trivedi, P. K. (2010). Microeconometrics using stata (Vol. 2). College Station, TX: Stata Press.

Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., Klebanov, P., & Sexton, H. (2007). School readiness and later achievement. Developmental Psychology, 43(6), 1428–1446.

Duncan, G. J., & Magnuson, K. (2011). The nature and impact of early achievement skills, attention skills, and behavior problems. In G. J. Duncan & R. J. Murnane (Eds.), Whither opportunity? Rising inequality, schools, and children's life chances (pp. 47–69). New York, NY: Russell Sage.

Education Commission of the States. (2014). 50-state comparison: Kindergarten entrance age. Denver, CO: Author. Retrieved from http://ecs.force.com/mbdata/mbquestRT?rep=Kq1402

Epstein, A. S., Schweinhart, L. J., DeBruin-Parecki, A., & Robin, K. B. (2004). Preschool assessment: A guide to developing a balanced approach (NIEER Preschool Policy Brief, Issue 7). New Brunswick, NJ: National Institute for Early Education Research.

Geary, D. C. (2002). Principles of evolutionary educational psychology. Learning and Individual Differences, 12(2002), 317–345.

Gesell, A. (1925). The mental growth of the preschool child. New York, NY: Macmillan.

Halpin, P. F., & Kieffer, M. J. (2015). Describing profiles of instructional practice: A new approach to analyzing classroom observation data. Educational Researcher, 44(5), 263–277.

Howard, E. C. (2011). Moving forward with kindergarten readiness assessment efforts: A position paper of the Early Childhood Education State Collaborative on Assessment and Student Standards. Washington, DC: Council of Chief State School Officers. Retrieved from http://eric.ed.gov/?id=ED543310

Lewit, E. M., & Baker, L. S. (1995). School readiness. Future of Children, 5(2), 128–139.

Little, M. H., & Cohen-Vogel, L. (2016). Too much too soon? An analysis of the discourses used by policy advocates in the debate over kindergarten. Education Policy Analysis Archives, 24(106).

Little, M. H., Cohen-Vogel, L., & Curran, F. C. (2016). Facilitating the transition to kindergarten: What ECLS-K data tell us about school practices then and now. AERA Open, 2(3), 2332858416655766.

Lubke, G. H., & Neale, M. C. (2006). Distinguishing between latent classes and continuous factors: Resolution by maximum likelihood? Multivariate Behavioral Research, 41, 499–532.

McCutcheon, A. L. (1987). Latent class analysis (No. 64). Thousand Oaks, CA: Sage.

McLendon, M. K., Cohen-Vogel, L., & Wachen, J. (2015). Understanding education policymaking and policy change in the American states: Learning from contemporary policy theory. In B. S. Cooper, J. G. Cibulka, & L. D. Fusarelli (Eds.), Handbook of education politics and policy (pp. 86–117). New York, NY: Routledge.

Miller, L. C., Bassok, D., Johnson, A. J., & Galdo, E. (2015). Accountability comes to preschool: Florida’s approach to evaluating pre-kindergarten programs based on their graduates’ kindergarten assessments (EdPolicyWorks Report). Charlottesville, VA: University of Virginia.

Mulligan, G. M., Hastedt, S., & McCarroll, J. C. (2012). First-time kindergartners in 2010–11: First findings from the kindergarten rounds of the Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011) (NCES 2012-049). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

National Conference of State Legislatures. (2014). State approaches to school readiness. 2014 Update. Denver, CO: Author. Retrieved from http://www.ncsl.org/Portals/1/Documents/educ/NCSL_Readiness_Assessment_2014_Update_Report_Chart.pdf

National Governors Association. (2005). Final report of the task force on school readiness: Building the foundation for bright futures. Washington, DC: Author.

Pianta, R. C., Rimm-Kaufman, S. E., & Cox, M. J. (1999). Introduction: An ecological approach to kindergarten transition. In R. C. Pianta & M. J. Cox (Eds.), The transition to kindergarten (pp. 3–12). Baltimore, MD: Paul H. Brookes Publishing.

Pianta, R. C., Cox, M. J., & Snow, K. L. (2007). School readiness and the transition to kindergarten in the era of accountability. Baltimore, MD: Paul H. Brookes Publishing.

Prakash, N., West, J., & Denton, K. (2003). Schools' use of assessments for kindergarten entrance and placement: 1998-99. Education Statistics Quarterly, 5(1), 37–41. Retrieved from https://eric.ed.gov/?id=EJ678478

Royston, P. (2004). Multiple imputation of missing values. The Stata Journal, 4(3), 227–241.

Rubin, D. B. (2004). Multiple imputation for nonresponse in surveys (2nd ed.). New York, NY: John Wiley & Sons.

Scarr, S. (1992). Developmental theories for the 1990s: Development and individual differences. Child Development, 63, 1–19.

Snow, C. E., & Van Hemel, S. B. (Eds.) (2008). Early childhood assessment: Why, what, and how. Washington, DC: National Academies Press.

Snow, K. L. (2006). Measuring school readiness: Conceptual and practical considerations. Early Education and Development, 17(1), 7–41.

Tourangeau, K., Nord, C., Lê, T., Sorongon, A. G., & Najarian, M. (2009). Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K), Combined user’s manual for the ECLS-K eighth-grade and K–8 full sample data files and electronic codebooks (NCES 2009–004). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC.

United States Department of Education. (n.d.). Priority 1: Using early learning and development standards and kindergarten entry assessments to promote school readiness. Washington, DC: Author. Retrieved from http://www.ed.gov/early-learning/elc-draft-summary/priority-1

Young, R., & Johnson, D. R. (2010). Imputing the missing Y's: Implications for survey producers and survey users. Paper presented at the 64th Annual Conference of the American Association for Public Opinion Research, May 13–16, Chicago, IL.

Appendix A

Excerpt from interview protocol relevant to the current study


Using the Data

Now I would like to ask you about the ways that you use the data from the KEA or other kindergarten readiness assessments.


Do you use the data from the readiness assessment for any of the following purposes:

a.     To determine eligibility for enrollment when a child is below the cut-off age for kindergarten,

b.     To determine children’s class placements,

c.     To identify children who may need additional testing (i.e. for a learning problem),

d.     To help teachers individualize instruction,

e.     To support a recommendation that a child delay entry, and

f.      Other purposes?


Is the data from the KEA helpful to you?


In what ways could the KEA data be more helpful to you?

Cite This Article as: Teachers College Record, Date Published: May 03, 2017. https://www.tcrecord.org, ID Number: 21959.

About the Author
  • Michael Little
    The University of North Carolina at Chapel Hill
    MICHAEL LITTLE is a Royster Fellow and doctoral student at The University of North Carolina at Chapel Hill. His research focuses on early childhood education policy, with a particular focus on alignment between Pre–K and elementary school.
  • Lora Cohen-Vogel
    The University of North Carolina at Chapel Hill
    LORA COHEN-VOGEL is the Robena and Walter E. Hussman Jr. Distinguished Professor of Policy and Education Reform in the School of Education at the University of North Carolina at Chapel Hill. Her research focuses on education politics and policy, teacher quality reforms, continuous improvement research, and bringing to scale processes for school system improvement.