How Can Schools of Education Help to Build Educators’ Capacity to Use Data? A Systemic View of the Issue
by Ellen B. Mandinach, Jeremy M. Friedman & Edith Gummer - 2015
Background: Despite the growing emphasis on educators using data to inform their practice, little has been done to consider the means by which educators can acquire the requisite data literacy skills. This article provides a context for why schools of education can and must play an important role in preparing teachers to use data.
Purpose: This article sought to understand if and how schools of education are preparing teacher candidates to use data effectively or responsibly. The study examined the extent to which schools of education teach stand-alone courses on data-driven decision making or integrate data use concepts into existing courses. It also examined state licensure and certification requirements to determine if and how data use is included in documentation.
Population: A stratified randomized sample of schools of education was drawn with 208 institutions responding, representing a 25.7% response rate.
Research Design: The survey portion of the study consisted of a stratified randomized sample of all schools or departments of education in the United States. The syllabus review was a voluntary part of the survey. The licensure review was a descriptive analysis of every state’s documentation for teacher licensure and certification.
Findings/Results: The survey results indicated that a vast majority of the schools of education reported that they offered a stand-alone data course, and even more integrated data use into existing courses. The syllabus review provided a deeper dive into the course offerings and indicated that the courses were more about assessment literacy than data literacy. The licensure review yielded a plethora of skills and knowledge related to data that are included in state requirements. However, there was wide variation across states in their requirements.
Conclusions: Even though schools of education reported that they are teaching about data-driven decision making in their teacher preparation programs, the results indicate that the content is more about assessment literacy than data literacy. This finding is consistent with the often observed conflation of the two constructs. Licensure requirements include both data literacy and assessment literacy, but the emphasis is more on assessment than data. With the increasing emphasis by policy makers on the importance of educators using data, it is essential that schools of education begin to incorporate data concepts into their curricula and that states make explicit the data-related skills and knowledge required for teachers for licensure and certification.
THE CONTEXT OF DATA LITERACY
This article documents the work of a multicomponent research effort. The objective of the project is to understand what schools of education are doing to prepare teachers to use data in their practice. The issue is multifaceted, complex, and systemic. Schools of education do not suddenly introduce courses on data-driven decision making into their curricula on a whim or because policy makers say that data literacy among educators is important. Schools of education must come to realize on their own that building the capacity of their teacher candidates to use data responds to needs from the field, stimulated in part by policy makers' rhetoric that education must become an evidence-based field. The study of which this report is a part contains three distinct but interconnected components that, in combination, provide a depiction of the landscape of teacher preparation and data literacy. The components include a survey of schools of education, a review of selected syllabi, and an analysis of state licensure documents and requirements.
Until recently, the field has lacked a comprehensive definition of data literacy. A definition is necessary for stakeholders to have a shared understanding of what it means to be data literate. Some definitions have emerged, including one from the Data Quality Campaign Data Literacy Working Group1 (2013), whose goal was to lay out the basic tenets of data use for all educators, to promote data literacy to policy makers, professional organizations, and other interested stakeholders, and to define what a data-literate educator is (Data Quality Campaign, 2014): "Data-literate educators continuously, effectively, and ethically access, interpret, act on, and communicate multiple types of data from state, local, classroom, and other sources to improve outcomes for students in a manner appropriate to educators' professional roles and responsibilities" (p. 1). The North Carolina Department of Public Instruction (2013) has a web page where it outlines aspects of data literacy and poses a brief definition. Based on the research reported here, we posit the following comprehensive definition:
Data literacy for teaching is the ability to transform information into actionable instructional knowledge and practices by collecting, analyzing, and interpreting all types of data (assessment, school climate, behavioral, snapshot, longitudinal, moment-to-moment, etc.) to help determine instructional steps. It combines an understanding of data with standards, disciplinary knowledge and practices, curricular knowledge, pedagogical content knowledge, and an understanding of how children learn.
This definition also provides the foundation for the conceptual framework that we have developed (Gummer & Mandinach, 2015, this issue).
RELEVANT FINDINGS FROM POLICIES AND RESEARCH
Much attention from policy makers has been given to the importance of teachers using data. Secretary of Education Arne Duncan (2009a, 2009b, 2009c, 2010a, 2010b, 2012) has spoken widely about the need for teachers to use data and the importance of such evidence-driven practice. In fact, at a national conference sponsored by the Data Quality Campaign (DQC), Duncan (2012) publicly challenged schools of education to step up and begin to train educators. Further, data use is one of the four education pillars in the American Recovery and Reinvestment Act (ARRA, 2009) and in the Race to the Top program (U.S. Department of Education, 2009).
Professional organizations such as the National Council for Accreditation of Teacher Education (NCATE, now CAEP [Council for the Accreditation of Educator Preparation]) and the Council of Chief State School Officers (CCSSO) have included data literacy or the capacity to use data among their recommendations and standards. The National Board for Professional Teaching Standards (NBPTS) also has been a strong proponent of improving teachers' capacity to use data (Aguerrebere, 2009). A Blue Ribbon Panel (2010) report released by NCATE and endorsed by Duncan (2010b) recommended that teacher candidates know how to make data-driven decisions. It further recommends that teacher candidates be able to analyze student learning needs and make instructional adjustments by using student performance data and other sources of data to inform their practice. It stresses the need for teacher preparation programs to incorporate data-driven practices into their curricula.
CCSSO (2011) released the InTASC standards for teaching that laid out 10 recommendations, each with knowledge, dispositions, and performance skills that are required of teachers. The document identifies using data to support learning as one of the cross-cutting themes. It further specifies that the data theme occurs in 43 of the knowledge, dispositions, and performance components. We analyzed the document further and noted an additional 24 components. A revised set of InTASC standards was released in 2013 (CCSSO, 2013) in which data was the theme in 38 components. Again, we noted that this was an underestimate, having identified another 48 components where data literacy is relevant.
In some ways, policy makers are further along in their thinking about data literacy among educators than are researchers. Policy makers and researchers in the area of data-driven decision making have focused on teachers in a number of ways but have rarely addressed teacher preparation. Massell (2001) and Choppin (2002) noted over a decade ago that data literacy was not a skill set schools of education had tackled. Yet many articles and studies have noted the importance of teachers knowing how to use data effectively to inform their practice and the need to build educators' capacity to use data (Baker, 2003; Choppin, 2002; Feldman & Tung, 2001; Hamilton et al., 2009; Ikemoto & Marsh, 2007; Mandinach, 2009, 2012; Mandinach & Honey, 2008; Mason, 2002; Miller, 2009). There have been numerous calls for high-quality and sustained professional development to facilitate data literacy (Baker, 2003; Mandinach, Rivas, Light, & Heinze, 2006; Means, Padilla, & Gallagher, 2010; Schafer & Lissitz, 1987; Wise, Lukin, & Roos, 1991). Yet preparing educators to use data only goes so far. Good professional development is important, but there also is a pressing need for the infrastructure to support the infusion of data use into schools and districts (Marsh, Pane, & Hamilton, 2006).
A REACTION TO THE LANDSCAPE
Whereas the existing literature focuses on the current cohort of teachers, in-service training, and professional development, little, if any, attention has been devoted to teacher preparation and preservice (Choppin, 2002; Mandinach & Gummer, 2013b; Massell, 2001). Mandinach and Gummer (2013b) noted that involving schools of education is a highly complex and systemic issue.
Recognizing the dearth of knowledge about the role of teacher preparation in developing data literacy, early in 2011 we convened a meeting of key stakeholders to discuss what schools of education can do to prepare educators to use data.2 The outcome of that meeting was a white paper (Mandinach & Gummer, 2011) and a call to action that appeared in Educational Researcher (Mandinach & Gummer, 2013b). The white paper reported on the varying perspectives of different stakeholder groups and presented a clear picture of many of the challenges. It outlined a research agenda needed to inform the field, including a comprehensive survey of schools of education to better understand the landscape of course offerings. The journal article laid out the systemic nature of the problem, noting that professional development providers can go only so far in training some of the current cohort of teachers.
Hence, the intent of this study was to determine the pervasiveness of courses on data-driven decision making or the integration of data use into existing suites of courses. We also realized that data literacy is often conflated with assessment literacy (Mandinach & Gummer, 2012, 2013a). This reflects a major problem, especially when policy makers confuse the concepts; they typically talk about data literacy but focus only on the use of assessment data to inform teaching (see CCSSO Task Force, 2012; Greenberg & Walsh, 2012), which is a view that lacks comprehensiveness. Data literacy requires an understanding of more than just assessment data (Mandinach & Gummer, 2011, 2012, 2013a). Because of this confusion, we sought to examine what really is being taught in a subset of the courses by examining syllabi. The final component was to determine if and how data literacy is reflected in state licensure documents. The assumption here is that schools of education prepare teacher candidates with the licensure requirements in mind.
The study reported here contains three components that, in combination, are intended to provide a comprehensive picture of what schools of education are doing to prepare teachers to use data. The components are interdependent, with each part providing essential clarifying information about the other parts. The first component is a survey of schools of education to ascertain whether they have stand-alone courses on data-related concepts or integrate data use into existing suites of courses. The second component is a review of syllabi. The examination provides a more in-depth analysis of what is actually being taught in some of the courses, what books and resources are being used, and the depth and breadth of the courses. The review provides a validity check for the survey. The third component is an analysis of state licensure requirements, recognizing that schools of education must be responsive, to some degree, to the requirements for the licensure of their graduates. Thus, the study's components answer the following questions: What do schools of education say they are doing in terms of course offerings? What really is being taught in the courses? What should be addressed in courses offered by the schools of education? What do states require in terms of data literacy for teacher candidates?
In total, faculty or staff members from 808 different schools of education from across the United States and territories were invited to participate in the survey. Schools invited to participate were identified in one of two ways: (1) a stratified randomized sample of schools of education created by a survey expert at the University of Michigan; and (2) land grant and state schools in the 50 states, the District of Columbia, and territories that were not part of the original sample.3 The stratified sample created by the survey expert's team used the most recent Integrated Postsecondary Education Data System (IPEDS) file, which identified 1,474 postsecondary schools in the 51 states that offered a baccalaureate or higher degree in education.
To select a stratified sample, the institutions were sorted by level of degree offered (baccalaureate, master's, doctorate) and type of institution (public; private, nonprofit; private, for profit). Schools were sorted by level of degree offered because institutions offering different degrees vary in their involvement in educational research and the empirical methods courses associated with educational research and thus should be examined analytically. The sample was sorted by type of control or ownership of each institution because in national studies of higher education, institutions are often grouped into public institutions, private nonprofit institutions, and private for-profit institutions, and it was decided that this was also a worthy distinction. A probability-proportionate-to-size (PPS) sample was run of the 1,474 institutions to match the characteristics outlined above. The combination of these two dimensions created a 12-cell sampling matrix or design.
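A PPS draw of this kind can be illustrated in code. The following is a minimal sketch, not the survey team's actual procedure; the institution names, the enrollment sizes, and the `pps_sample` helper are all hypothetical.

```python
# Hypothetical sketch of a probability-proportionate-to-size (PPS) draw, in
# which larger institutions have a proportionally higher chance of selection.
import random

def pps_sample(institutions, k, seed=0):
    """Draw k distinct institutions with probability proportional to size."""
    rng = random.Random(seed)
    names = [inst["name"] for inst in institutions]
    sizes = [inst["size"] for inst in institutions]
    drawn = []
    # Repeated weighted draws, discarding duplicates, approximate a
    # without-replacement PPS selection when k is small relative to the frame.
    while len(drawn) < k:
        pick = rng.choices(names, weights=sizes, k=1)[0]
        if pick not in drawn:
            drawn.append(pick)
    return drawn

institutions = [  # invented frame; "size" stands in for a size measure
    {"name": "State U", "size": 900},
    {"name": "Private College", "size": 120},
    {"name": "Regional U", "size": 450},
    {"name": "Liberal Arts College", "size": 60},
]
print(pps_sample(institutions, 2))
```

In the study itself, the frame was further stratified into the 12-cell matrix of degree level by institutional control; a fuller implementation would run a draw like this within each cell.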
A number of these institutions have multiple campuses but only one dean or chief academic officer; in short, there were multiple listings for what the study deemed one institution. However, this was not realized until after the first sampling. After removing the duplicate entries from the population, the universe of institutions dropped to 1,428. With the originally sampled schools already contacted, it was impractical and costly to redraw the entire sample. Instead, we opted to randomly add schools to the sample where necessary. Our initial sample resulted in 503 institutions being selected (see Table 1); however, we were ultimately unable to obtain a valid email address for pertinent faculty or staff at two of the institutions, so staff members from 501 institutions were contacted as part of the first sample. This final sample roughly maintained the original PPS design.
Table 1. Allocation of Sample of Institutions by Level of Education, Degrees Offered, and Control
The first sample, which consisted of 45.5% private schools, paid no attention to program size and included a large number of institutions with relatively small teacher preparation programs. The second group was added after reviewing the stratified random sample and noting the preponderance of small, religious, technical, and otherwise unrelated institutions that had been drawn into the sample and would have prevented us from obtaining the desired information from the survey.
One person from each institution was identified as the individual best suited to receive the survey. In most instances, this was the dean or chair of the department of education or the chair of the teacher preparation program. Survey invitations were sent out over the first two weeks of March 2013, with reminder emails and phone calls utilized during the months the survey was open. Although there was a concern that such a robust, detailed survey might lower the response rate, it was decided that a detailed survey was necessary to truly understand what schools of education report are and are not components of their teacher preparation programs. The level of depth in the survey allowed us to garner meaningful results about the participating institutions.
THE SYLLABUS REVIEW
A total of 80 syllabi were submitted by a subset of the schools of education that responded to the survey. The submission of syllabi was voluntary, and 48 schools of education provided at least one syllabus. Two of the syllabi came from a school of education that did not complete the survey and were therefore excluded from this analysis. One school submitted a single syllabus for a course intended for educational administrators; it, too, was excluded. Ten of the schools of education provided multiple syllabi. All the multiple syllabi from a given school of education were examined, and the one that most directly addressed the issue of data literacy was included in this analysis. For instance, some schools of education included a course for administrators among their syllabi, and the target audience precluded the use of the syllabus for this analysis. Similarly, if a course focused on general topics of educational psychology and no focus on data could be ascertained, it was not included in this analysis. A total of 44 syllabi were used for the initial analysis, which looked holistically at the course content and type.
Two additional analyses were conducted for this study. One examined the nature of the assignments required of students in the course to ascertain the extent to which they supported the development of data literacy. Only complete syllabi that included a description of the course objectives, required textbooks and readings, lectures or discussion threads, and assignments were used for the analysis of the assignments. A total of 35 syllabi included sufficient information for that analysis.
The second additional analysis examined the nature of the content that was addressed during the sessions of the course as seen in the topics in a course calendar. The course week was the unit of analysis to measure what the teacher or teacher candidate studied during a week in the course. If the syllabus did not include a weekly schedule, it was not included in this analysis. A total of 35 syllabi provided such a calendar and were used for this final analysis.
A modification of the analysis methodology used by Hess and Kelly (2007) was used to analyze the syllabi in this study. The coding scheme included the type of course, the focus of the activities of the course, the nature of the data examined in the assignments or readings, the types of assignments, the extent to which statistics were addressed, the type of required textbook and readings, and the extent to which the course activities focused on decision-making with respect to instruction. The syllabi were also analyzed to determine the amount of time that was associated with the following topics: assessment or measurement, data topics including systems and tools, analytic processes, inquiry into data, data management, specific uses of data, and statistics. These categories were used to determine the focus of each course week, which was considered the unit of analysis for those courses that had sufficiently detailed syllabi.
ANALYSIS OF THE LICENSURE DOCUMENTS
The methods for this component of the study not only include the licensure review performed specifically for the current study but also incorporate data from a prior study on defining the term data literacy (Mandinach & Gummer, 2011, 2013a).
Data Literacy Study
In the study of data literacy, over 55 experts, including researchers, professional development providers of data-driven decision making and formative assessment, professors, funders, and policy makers were convened and asked to define data literacy. This request was part of a larger study. No constraints were placed on the definitions. The request was generic to all educators; that is, it did not specify if the definition was to pertain to only teachers or administrators. Each expert submitted a definition. In two cases, colleagues submitted a joint definition. The resulting definitions were analyzed to identify particular skills and knowledge that the experts reported educators must have to use data effectively. Transcripts from the two-day conference also were analyzed for evidence of specific skills and knowledge. The analyses did not focus on the frequency of particular skills being mentioned, only the existence of them among the definitions. The process yielded nearly 100 skills that were initially categorized into: problem focus; data focus; process focus; and disciplinary, topical, dispositional, and other knowledge.
This component of the study entailed an intensive examination and analysis of the skills and knowledge related to data literacy contained in state licensure documents. Project staff searched the websites of state departments of education and their licensing agencies for any regulations, requirements, and documents that relate to teacher licensure. In many instances, these documents were relatively easy to locate. In some instances, they were not readily apparent. When locating the appropriate websites or documents was problematic, the project sought the assistance of the executive director of the National Association of State Directors of Teacher Education and Certification (NASDTEC) in providing either a link or a contact person. In all but one instance, the project was able to identify the needed documents. The exception is Wyoming, which does not appear to have relevant documentation.
For some states, multiple documents were identified. For other states, a single document was located. The documents ranged from a paragraph to hundreds of pages. A search procedure using key words was used because it would have been too labor intensive to read every page. The key search terms were data and assessment. Although this search method was not as comprehensive as reading every page in every document, it is estimated that it yielded the vast majority of the relevant passages. The location strategy was applied and compared with a careful reading of a subset of the shorter documents. In those documents, the search strategy worked nearly perfectly.
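The key-word location strategy can be sketched in a few lines: scan each document for the target terms and log the passages (here, sentences) that contain them. The `find_relevant_passages` helper and the sample text are invented for illustration and are not drawn from any state's actual licensure language.

```python
# Illustrative sketch of the key-word search described above: flag every
# sentence containing "data" or "assessment", case-insensitively.
import re

KEYWORDS = ("data", "assessment")

def find_relevant_passages(text):
    """Return the sentences of text that contain any keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(k in s.lower() for k in KEYWORDS)]

doc = ("Candidates must analyze student data to plan instruction. "
       "Programs must be accredited. "
       "Teachers interpret assessment results for families.")
print(find_relevant_passages(doc))
```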
A log of all relevant passages was made for each state. Each statement was examined and the specific skills identified. Additionally, the analyses yielded information about the purpose of the data use, what is referred to as the outcome. A state-by-skill matrix was developed as well as a state-by-outcome matrix. Another matrix of general characteristics also was developed. This matrix focused on general categories such as the amount of information contained in the document, if the term data was mentioned, if specific skills were identified, the level of specificity and detail of the skill description, and if assessment was mentioned. Additionally, it was noted if the state used specific frameworks or standards, if the document included a specific data standard, if it contained a developmental continuum, and a determination of whether the document pertained to data literacy or assessment literacy.
Some states have developed documents that specify the skills and knowledge for teachers of particular grades and content domains (for example, high school mathematics and statistics, high school science, or middle school social studies). The project decided to exclude certain skills that might appear to be data oriented if they were specific to the discipline. For example, if in high school mathematics the documents specified that the teacher must know certain statistical procedures, or in science that they must know how to read graphical displays of experimental results, those skills were excluded. However, if those skills appeared generically, they were included.
Respondents from 208 of the 808 schools completed the survey, for a response rate of 25.7%. The responding schools employed between 5,581 and 10,776 full-time and 4,245 and 10,749 part-time faculty members involved in the preparation of between 51,840 and 96,543 preservice teacher candidates at a time. The institutions were diverse with respect to size and geographic location. The oversampling of state schools helped to increase the number of larger colleges and universities that responded to the survey, which led to a sound representation of all school sizes. For instance, nearly 23% of responding schools enrolled 500 or more preservice teacher candidates each year (see Figure 1). Furthermore, respondents represented institutions in 47 states, the District of Columbia, and the Virgin Islands. California was the most highly represented state, with 18 responding colleges and universities.
Figure 1. Number of preservice teacher candidates enrolled per year
There was an overrepresentation of public schools, which can be attributed to the second sample of state schools. Over two thirds (68%) of responding colleges and universities identified as public institutions. Under one third were private institutions (2.9% for-profit and 28.6% not-for-profit). Institutions that identified as traditional state teachers colleges comprised 8.3% of the sample. The sample also consisted of 2.9% sectarian institutions.
Data Use Course Offerings
The main focus of the survey was to poll respondents about whether their institution offered a stand-alone course or courses on data use for educational decisions or whether the topic was integrated into an existing set of courses. If the respondent indicated that his or her school did have stand-alone and/or integrated course(s) on the use of data for educational decisions, the respondent was asked a series of follow-up questions about which data and assessment topics, tools, systems, and processes were taught and to what extent they were a focus of the class. Almost two thirds (62.4%) of the respondents claimed that their school offered at least one stand-alone course on data use, while 92.0% reported that data use for educational decisions was integrated into at least one existing course. Public colleges and universities were slightly more likely to offer at least one stand-alone course than were other types of institutions (65.7% vs. 55.2%) and almost as likely to offer at least one course in which data use for educational decisions was subsumed or integrated (91.3% vs. 92%).
If a respondent's institution offered more than one stand-alone course on data use, he or she was asked to answer only for the one course with the strongest emphasis on using data to inform educational decisions. We found that the typical stand-alone data use course:
is a requirement for a teaching degree (80% of the time);
is intended for the target audience of preservice teacher candidates (84.6% of the time);
is taught at the undergraduate level (71.6% of the time);
is delivered in a face-to-face setting (83.8% of the time);
has a tenure-track professor as the instructor of record (58.1% of the time);
includes a component in which students may access and examine authentic data from K–12 students for whom they can make educational decisions (72.4% of the time); and
includes a component in which the student may access and examine simulated data (78.3% of the time).
Data Use Course Content
The respondents from institutions with stand-alone and/or integrated data use classes were asked a series of questions about how prominently different data and assessment topics, tools, systems, and processes were addressed in the relevant class or classes. Some identical questions were asked in both the stand-alone course and integrated courses sections. These questions asked the respondent to rate how prominently a specific topic, tool, system, or process was integrated into the class. The options were "A prominent part of the course," "Addressed only peripherally," "Not addressed at all," and "Don't know." Skills and knowledge areas covered for both stand-alone and integrated courses were: kinds of data; data topics; assessments and assessment topics; measurement topics; data systems; data tools; teacher processes important for data use; and teacher action and decision-making processes important for data use. These questions were asked to gauge the depth and breadth of data use taught in the classes.
Tables 2 and 3 illustrate the aggregated totals for each of the questions in which the respondent was asked to rate the prominence of a specific skill. Table 2 presents the results for the stand-alone courses, and Table 3 displays the results for the integrated courses. In both instances, respondents identified data topics (e.g., how different kinds of data are collected, data quality) and assessments and assessment topics (e.g., summative assessment process, diagnostic assessment process) as most commonly being a prominent part of the course. Conversely, the more modern data systems (e.g., data warehouses, student information systems) and data tools (e.g., student dashboards, behavioral tracking) were, in both instances, most frequently not addressed at all. Therefore, it is no surprise that data tools and data systems were also the categories with the highest rates of "Don't know" responses. It is possible that a respondent actually did not know whether a specific skill was being addressed if it was not covered in a particular course or as part of the curriculum.
Table 2. Data and Assessment Topics, Tools, Systems, and Processes in Stand-alone Courses
Table 3. Data and Assessment Topics, Tools, Systems, and Processes in Integrated Courses
As mentioned earlier, there is confusion among educators, researchers, and policy makers regarding the difference between data literacy and assessment literacy. Most of the aforementioned questions ask about assessment and nonassessment components of the topics, tools, systems, and processes being addressed. To examine how prominently assessment items are addressed compared with nonassessment items, an analysis was conducted of the levels at which each category was reported. For each question about the content covered in the stand-alone course (Questions 21–28; see Table 2 for a list of the categories for those questions), the item responses were broken down into two categories: assessment items and nonassessment items. Assessment items are items where assessment is part of the description (e.g., benchmark or interim assessment data, diagnostic assessment data), and nonassessment items are the rest (e.g., attendance data, behavioral data, student information systems).4 On average, assessment items were substantially more likely than similarly categorized nonassessment items to be reported as a prominent part of the course. It is clear from the results that the responding schools focus more on assessment data than on the other types of data. Tables 4–8 show that in every instance in which a question included both assessment and nonassessment items, the assessment items were more frequently a prominent part of the course, often at double the rate of the nonassessment items.
Table 4. Assessment Versus Nonassessment Kinds of Data
Table 5. Assessment Versus Nonassessment Assessments and Assessment Topics
a There is only one nonassessment item in this question. Interpret with caution.
Table 6. Assessment Versus Nonassessment Data Systems
a There is only one assessment item in this question. Interpret with caution.
Table 7. Assessment Versus Nonassessment Data Tools
Table 8. Assessment Versus Nonassessment Teacher Processes Important for Data Use
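The assessment-versus-nonassessment comparison just described can be sketched in a few lines. Everything in this snippet is hypothetical: the item names echo the survey categories, but the shares are invented solely to mirror the reported pattern of assessment items being rated prominent at roughly double the rate of nonassessment items.

```python
# Hypothetical illustration of the assessment vs. nonassessment comparison.
# Each entry maps an item to the (invented) share of respondents who rated
# it a prominent part of the stand-alone course.
prominent_share = {
    "benchmark or interim assessment data": 0.62,
    "diagnostic assessment data": 0.58,
    "attendance data": 0.24,
    "behavioral data": 0.21,
    "student information systems": 0.18,
}

def mean_share(items):
    """Average prominence share across a set of items."""
    vals = [prominent_share[k] for k in items]
    return sum(vals) / len(vals)

# Items are classified by whether "assessment" appears in the description,
# as in the study's coding rule.
assessment_items = [k for k in prominent_share if "assessment" in k]
nonassessment_items = [k for k in prominent_share if "assessment" not in k]

assessment_mean = mean_share(assessment_items)
nonassessment_mean = mean_share(nonassessment_items)
ratio = assessment_mean / nonassessment_mean  # roughly 2x with these numbers
```

With real survey data, the same comparison would be run separately for each of Questions 21–28, as in Tables 4–8.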
External Factors on Data Use Courses
There is confusion among respondents as to whether their state has licensure or certification requirements regarding teachers' knowledge and skills to use data to inform educational decisions. One question asked, "Does your state have licensure or certification requirements regarding teachers' knowledge and skills to use data to inform educational decisions?" In 16 states, at least one respondent answered yes, their state has such requirements, while at least one other respondent in the same state answered no, it does not. Each respondent in a state should have given the same answer. It is clear that communication on the subject needs to be improved.
It is unclear, though, to what extent external factors influence data use course offerings in the respondents' teacher preparation programs. It appears that outside sources may have less impact on a school's decision to change than its own opinion of what is important. Under half (45.7%) of the respondents with knowledge about their institution's future course plans stated that their school planned to develop and implement a new course or courses on the use of data. A crosstab analysis of whether a respondent's school planned such a course against the respondent's opinion about the influence of the federal emphasis on data use shows a connection between the influence felt and whether a school plans to develop a course or courses around data use. The majority of respondents believe the federal emphasis on the use of data has just about the right influence. Yet those who believe the federal emphasis has too small an influence were nearly twice as likely to belong to an institution planning a new course or courses on data use as those who believed it has too great an influence (67% vs. 37%). Though the sample size is small, this suggests that institutions may be more likely to change when they believe something to be important, a subject that future research could explore further. For change to occur, the schools must want to change, and for that to happen, the conversation about teacher preparation should be one of encouragement rather than judgment.
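A crosstab of this kind can be sketched as follows. The cell counts here are invented, since the article reports only the 67% and 37% contrast; the structure simply shows how plan rates would be computed within each opinion group.

```python
# Hypothetical crosstab: opinion about the federal emphasis on data use
# (rows) vs. whether the institution plans a new data course (cells).
# All counts are invented; only the 67% vs. 37% contrast mirrors the text.
crosstab = {
    "too small an influence": {"plans course": 8, "no plans": 4},
    "about right":            {"plans course": 20, "no plans": 24},
    "too great an influence": {"plans course": 3, "no plans": 5},
}

def plan_rate(opinion):
    """Share of institutions in an opinion group that plan a new data course."""
    cell = crosstab[opinion]
    return cell["plans course"] / (cell["plans course"] + cell["no plans"])
```

With the invented counts above, `plan_rate("too small an influence")` is about twice `plan_rate("too great an influence")`, echoing the reported pattern.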
THE SYLLABUS REVIEW
The schools of education that provided syllabi are a subset of those that responded to the survey. As indicated in Table 9, they are a largely representative subset across all the categories by which the schools of education were classified in the original survey, with the exception that no sectarian colleges or universities were represented in the syllabus analysis. The syllabi represented courses taught at both the undergraduate and graduate levels. The undergraduate status of the courses and the fact that they were taught in the morning or early afternoon indicated that 23 (50%) of the courses were designed for preservice teachers. It was not clear whether the other courses were specifically designed for in-service teachers or whether they represented a mix of preservice and in-service contexts.
Table 9. Comparison of Institution Type in Survey and Syllabus Analysis
Holistic Focus of the Courses
A holistic analysis of the focus of the courses is based on the coding of the title of the course and the overall topics of the course assignments and activities. Of the 46 course syllabi that had sufficient information for analysis, 35 (76%) were coded as having a major focus on the design, implementation, and analysis of types of assessments. The texts for these courses typically address the knowledge and practices that classroom teachers need to know in order to identify or design items to be used in classroom quizzes and tests, and tasks that could be used in performance assessments.
A secondary focus of these courses was on formative assessment, state assessments, or assessment policy issues. The courses ranged in terms of the rigor with which assessment was addressed. Three courses with a strong focus on educational measurement included a significant emphasis on statistics, whereas the majority of the courses addressed little if any statistics and focused on assessment more globally.
Only one course had a specific focus on data-driven decision making in the title, included texts that specifically addressed the development of data literacy alongside assessment texts, and had a specific emphasis on the use of data beyond just achievement or affective data. This course focused on classroom management. Three of the assessment courses had a stronger focus on data literacy. One was a course that integrated assessment with a broader emphasis on data, associated with class sessions held in a school; that course is discussed in more detail below. Two special education courses focused on the assessment of exceptional students also included a deeper emphasis on the use of data that extended beyond student achievement.
The remaining 10 courses addressed a range of foci, including general educational psychology, learning in the content area, methods, teaching practica, and teacher as researcher. The focus on data literacy in these courses was quite limited. These courses were included in the analysis because all had at least one week of the course focused on assessment or one assignment that had students collecting and analyzing data from individual students, small groups, or whole classrooms.
The coding of the nature of the assignments analyzed what was explicitly required of the students. These assignments ranged from short-term weekly journal entries, reflections, and quizzes to long-term projects, papers, capstone projects, portfolios, field-based assignments connected to projects, and assessments and final exams. A total of 36 syllabi included adequate descriptions of the assignments. Of the 132 separate assignments analyzed in this study, two types were most frequently indicated: some sort of lesson or unit plan with assessments, and the design of assessment items or tasks. Twenty-three of the 36 courses (66%) had either a single lesson or a full unit plan for which an assessment task or tasks was identified or designed, implemented, and analyzed. Eight of these courses (23%) required the student to choose only one lesson, which might be the focus of an observation in a cooperating teacher's classroom. Others were full unit plans with assessments and item analysis or design, identified in 15 (43%) of the syllabi.
Unit plans frequently included a requirement that the data from the assessments be analyzed with a focus on how the results informed instruction. However, it was not possible to determine whether that information was to be used during the course of instruction, with immediate consequences for students' opportunities to learn, or to plan for teaching the unit again in a subsequent year. Most of the unit plans addressed a range of assessments, from formative assessments meant to be used in teaching to quizzes and unit tests. In two cases, the assessments were added onto a unit plan that was the assignment in another course. The 15 courses that had the analysis or design of assessment items typically also had the student practice writing a range of assessment types, including selected- and open-response items and portfolio design.
Almost half of the courses (16; 44%) included an assignment that had students designing assessment items and tasks. On several occasions, the students were required to analyze items that they found in textbooks or that were used in their teaching placements. Students practiced a range of item types and formats, and some of the courses spent a majority of the course time focusing on item design.
Other types of assignments were observed less frequently. Almost a third of the courses included an assignment that had the students developing or using a rubric to evaluate student performances. In 11 (31%) of the courses, the rubric design was specified either as part of the unit assessment system or as a stand-alone assignment. These rubrics were most frequently associated with the design and implementation of a particular performance task, such as a writing assignment or a laboratory report. An assignment that was focused on the design and use of a summative assessment was included in 16 (44%) of the syllabi. These assignments included both quizzes that the students designed for use in their classrooms and tests that were included with the assessment system for the plan of a unit of instruction. Fourteen (39%) of the courses had an assignment with a specific focus on the analysis of data. These assignments were typically based on the classroom summative assessments that were part of the lesson or unit analysis. They also included assignments in which the student teaching candidate collected data from individual students who had been given a battery of assessments. The focus of the data analysis was to either provide a description of student learning or to articulate some future teaching that might be needed based on the performance of the children. Five of the courses had explicit language around the use of the data from assessments for instructional decision making. Three of these course assignments indicated that the data analysis would be conducted in class and focus on the use of state test data, but the specifics of how this would happen were not clear.
Ten (28%) of the courses had an assignment that was focused specifically on formative assessment. These assignments ranged from the design and implementation of a formative assessment with a class or a group of students to the analysis of a reading focused on formative assessment practices. The ways in which some form of statistics was addressed in the courses varied considerably. Only nine (25%) of the courses had assignments that focused on statistical analyses. These might be based on data that were provided in class or that the students needed to obtain from their school placements. The nature of the data that were the focus of these assignments was difficult to ascertain from the description of the assignment in the syllabi. Seven (19%) of the courses had an assignment that was focused on the case study of a student or a small group of students. These assignments occurred in courses that were intended for special education or reading teachers. Seven (19%) of the courses included an assignment that addressed portfolio assessment in some form. In four of these courses, students were required to reflect on an article about portfolio assessment rather than design and implement elements of such a portfolio.
Table 10 presents a summary of these findings of the major assignment types.
Table 10. Distribution of Major Assignment Types
Table 11 indicates the frequency of assignments that occurred in less than 15% of the courses. These assignments included grading plans or philosophies, observational assessments, or literature reviews.
Table 11. Remaining Assignment Types
Course Week Analysis
A total of 35 syllabi had sufficient information about the topics addressed in the course. The courses analyzed ranged in length from 6 weeks, for a practicum course, to 16 weeks for some of the semester-long courses. Several of the courses met more than once a week, and the topic addressed across the multiple sessions in a week was assigned to only one category. A total of 419 course sessions were coded.
Assessment and measurement was the category most frequently identified in the course sessions, assigned to 270 (64%) of the course weeks. This category included the purposes and uses of assessments, types of assessment, and multiple other assessment and measurement topics. Some of the assessment courses had all but the introductory and final sessions focused on the development of one or more types of assessment items or tasks. Other assessment and measurement topics included introductions to state and federal assessment, such as the PARCC and Smarter Balanced assessment systems, assuming that the states in question participate in one or the other. The representation and communication of assessment information was the focus of another five of the course weeks. Measurement topics included concepts such as reliability and validity, as well as issues of differences in reporting levels from the perspectives of cohorts, courses, and grades.
The second most frequent category of course week topic was the "other" category, which included classroom activities and discussions on such topics as lesson planning, standards, disciplinary content, student learning, behavior, motivation, educational philosophy, and additional pedagogical topics. A total of 63 (15%) of the sessions were coded into the "other" category. Given that many of the courses in this analysis were provided by the schools of education to show how data literacy was incorporated into multiple courses, this preponderance of "other" topics as the focus of a course week makes sense.
The remaining categories of the topics of the course week were relatively infrequently addressed. Analytic processes, statistics, data topics, data collection methods beyond assessments, data-driven decision making, inquiry around data, and data tools and systems were each addressed in less than 5% of the course weeks.
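The distribution of coded course sessions can be reproduced as a simple tally. The assessment (270) and "other" (63) counts come from the text above; the remaining 86 sessions, which the analysis splits across several infrequent categories, are lumped into one residual bucket here for illustration.

```python
# Tallying the 419 coded course sessions into topic categories.
# The first two counts are reported in the article; the residual bucket
# stands in for the several categories each under 5% of course weeks.
session_counts = {
    "assessment and measurement": 270,
    "other": 63,
    "all remaining categories": 419 - 270 - 63,
}

total = sum(session_counts.values())
# Percentage of course weeks per topic, rounded to whole percents.
percentages = {topic: round(100 * n / total) for topic, n in session_counts.items()}
```

The rounded shares match those reported: 64% for assessment and measurement and 15% for the "other" category.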
In only one course were the students provided with the opportunity for exploring the nature and use of data systems, and that was a course that included half of the class sessions in a local high school. In this course, the objectives of two course sessions included:
to identify the basic structure and philosophy of the data-driven decision making systems;
to implement common and state-of-the-art technologies used by schools to collect, analyze, and report on student outcomes;
to understand how to use technological systems and information technologies;
to collect, retrieve, and analyze student learning outcomes; and
to realize the structure and function of relational databases.
This course included assignments that required that students develop a unit plan with assessments, implement a formative assessment with a group of students, and analyze and write assessment items. It is one of only three courses that required students to write a grading plan or philosophy.
Table 12 provides the overall distribution of topics by course weeks. Overall, the analysis of the content addressed during the course weeks affirms the findings of the holistic examination of course focus and the analysis based on the assignments. These courses, provided by the schools of education in response to a request for experiences students had with data literacy, largely address assessment literacy.
Table 12. Percentage of Course Weeks That Addressed Data Literacy Topics
Note: A total of 63 course weeks were assigned to the "other" category.
The findings across the three types of analysis (holistic focus of the course, analysis of the student assignments, and identification of course content through the topics of the course weeks) provide a picture of how the schools of education appear to be focused on assessment rather than on the broader picture of data literacy that has emerged as a crucial part of what teachers should know and be able to do in school. The holistic analysis indicates that the majority of the courses are focused on assessment. The assignments required of students predominantly involve the design and review of assessment items or the assessment plans for lessons or units of study. The course weeks assigned to the discussion of assessment and measurement design issues constitute the majority of the time spent in these courses.
ANALYSIS OF THE LICENSURE DOCUMENTS
We were able to locate documents for all states but Wyoming, which has no identifiable regulations.5 Therefore, our complete data set included 49 states and the District of Columbia.6 Seven states (AR, AZ, DE, NV, ND, OR, and SC) make use of the InTASC standards (CCSSO, 2011, 2013), either as their standards or incorporated into other standards, and one state (SD) uses the Danielson framework (2013). All other states (except WY) have some sort of documentation that contains more or less detail and addresses what knowledge and skills teacher candidates need to be licensed. These documents may or may not be informed by relevant research.
The documents were rated on several variables reflecting the integration of data use concepts. A first rating focused on the extent to which a state explicitly addressed data. This is important to determine because of the existing confusion, noted earlier, between data literacy and assessment literacy among policy makers, practitioners, and some researchers (Mandinach & Gummer, 2011, 2013a). Twenty states do not mention data, or mention data only in a limited manner, in their documentation, whereas 31 states explicitly deal with the concept (see Appendix A). In contrast, only two states (MT, WY) do not mention assessment in their regulations. All but seven states describe specific skills that teacher candidates must exhibit. However, there is a wide range in the degree of specificity of the knowledge and skills listed. Some states, such as Ohio, South Carolina, and Virginia, include highly specific statements about requisite skills, whereas several states (MS, MT, WI, MA, WA, and NE) lack specificity.
The amount of information about data and assessment also varied. Some states include only limited information of relevance (NE, WA, MT, MS, and WI), whereas others contain substantial information (OH, VA, and the InTASC states). Seven states (MI, NM, WV, SD, KY, OH, GA) have developmental continua that outline how the skills differ from novice to expert teacher. Eight states have a specific data standard, but even these vary widely. For example, the District of Columbia has a standard entitled "Data and Assessment," yet the standard clearly is about assessments. In contrast, North Carolina has an entire document devoted to describing data-driven decision making and data literacy (North Carolina Department of Public Instruction, 2013): "Data literacy refers to one's level of understanding of how to find, evaluate, and use data to inform instruction" (p. 1). It further states, "A data literate person possesses the knowledge to gather, analyze, and graphically convey information and data to support decision-making" (p. 1).
The documents were rated to determine if they deal with data literacy and assessment literacy. Under half (23) address data literacy as it has been defined. However, states like North Carolina, Ohio, South Carolina, Utah, Virginia, and Alabama outline with great specificity the skills that comprise data literacy. More states (37) articulate criteria or standards around assessment literacy, with five (SC, UT, OH, HI, IL) specifically focusing on the construct. It is not unexpected that more states deal with assessment literacy than data literacy because of the nature of the data with which teachers typically deal. States like Alabama, North Carolina, Ohio, and Tennessee have a relatively heavy emphasis on data literacy rather than assessment literacy.
Identifying the Specific Skills
The experts' definitional study yielded statements of nearly 100 skills and knowledge topics around data use (Mandinach & Gummer, 2012, 2013a). That study did not constrain the definition to teachers only; it pertained to educators more broadly defined. As noted, the statements were categorized into: (a) problem focus, (b) data focus, (c) process focus, and (d) disciplinary, topical, dispositional, and other knowledge focus. Some experts recognized that, for teachers, there is a need to integrate pedagogy and content knowledge with data analytics. Yet content knowledge is not a component found among other frameworks for data use (Mandinach & Jackson, 2012); these frameworks are entirely devoid of the content area in which data are to be used. As we touch on later and discuss at length elsewhere (Gummer & Mandinach, 2015, this issue), our conceptual framework, which we refer to as data literacy for teaching, consists of data use for teaching (the typical analytic aspects) integrated with content knowledge and pedagogical content knowledge. The latter two domains anchor data literacy to the specific context of teaching and instruction.
Appendix B provides a list of the 70 skills or elements that remain from a synthesis of the experts' definitions. Note that some of the statements are complex, having combined similar skills. Also of interest, though not reflected in the list, is that some experts recognized a developmental continuum for data use more generally, rather than for specific skills. They discussed how basic data literacy might look in contrast to full fluency or expertise in data literacy. It isn't clear whether these are the same skills applied with increasing sophistication or different subsets of skills. This is similar to the developmental continua found in a limited number of the licensure documents and to the learning progressions in the InTASC standards (CCSSO, 2013). The extent to which the skills differ or evolve based on one's role, level of experience, and expertise is not apparent.
The licensure analyses enabled us to identify statements about specific skills or elements for individual states to see if there was evidence of inclusion of data literacy in licensure requirements. Assessment literacy was also included in the analyses because it is seen as a component of data literacy (Mandinach & Gummer, 2011, 2013a).
States varied tremendously in the number of elements found in their documentation (see Table 13), ranging from 0 (WY) and 3 (AK) to 52 (ND and SC). The states with the most elements were those that use the InTASC standards, although for some of these states, additional skills beyond those specified in InTASC were identified. The average number of elements per state was 21.76, with a standard deviation of 14.39.
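The descriptive statistics reported above can be computed straightforwardly. The mapping below is a hypothetical subset: only the WY, AK, ND, and SC counts appear in the text, and the Illinois figure is taken from the next paragraph; with the full data across states, the article reports a mean of 21.76 and a standard deviation of 14.39.

```python
# Elements-per-state counts. Only a handful of values are reported in the
# article; the rest of the distribution is not reproduced here, so these
# statistics will not match the published mean and SD.
import statistics

elements_per_state = {"WY": 0, "AK": 3, "ND": 52, "SC": 52, "IL": 36}

counts = list(elements_per_state.values())
mean = statistics.mean(counts)
sd = statistics.stdev(counts)  # sample SD; the article does not say which formula was used
```

The same two lines, applied to the complete state-by-state table (Table 13), would reproduce the reported figures.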
Table 13. Number of Skills Per State
The seven InTASC states had the most elements, followed by Illinois (36), Alabama (35), Kansas (34), and New Hampshire (34).
The number of states that identified specific elements also varied greatly, from 1 state (for "manipulate") to a high of 43 (for "assess"). The average number of states per element was 18.86, with a standard deviation of 10.78. Elements noted by more than 20 states included:
Some elements that the literature and experts acknowledge as being important to effective data use were noted by fewer states. They include:
Topics such as the use of research (19 states) and the ethical use of data (18 states) are considered emerging areas of importance for data-driven decision making.
When we examined the documents for outcomes, a somewhat expected pattern emerged. Of the 20 outcomes identified, the two most frequent uses of data were for assessment (48) and instruction (46), followed by student progress (41) and student learning (40). Other uses included to inform performance (34), curricular goals (28), to meet student needs (32), to increase growth (20), and to inform teaching (22). The least frequent uses were for program effectiveness (6), accountability (7), learning gains (7), guidance (8), readiness (9), and patterns and trends (9). The average number of states per outcome was 13.97, with a standard deviation of 21.8.
Alignment and Component Identification
The statements of skills and knowledge generated from the two studies were combined into one list. The origin of each skill was noted; that is, whether it came from the experts' definitions, the licensure documents, or both. The experts generated 50 skills, and the licensure documents yielded 44 skills or elements. An attempt was made to reduce redundancy by combining similar elements. The result was a list of 59 skills or elements, including nine components (see Appendix C). It is important to note that further analysis conducted for the conceptual framework article (Gummer & Mandinach, 2015, this issue) has yielded slightly different terminology and groupings because of the continued refinement of the definitional process. For example, nine components were noted in the empirical study, and six have emerged from the conceptual work. In this instance, the empirical work reported here has informed the conceptual work and is leading to further refinement, reflecting the evolving nature of the work.
Of the 59 elements, 35 were noted in both sources. The nine elements that were unique to the licensure documents pertained to the application of data to the instructional process. For example, planning, designing and guiding instruction, individualizing, and differentiating definitely are part of pedagogy and instruction. Some were considered procedural, such as reflecting on data, reviewing, and documenting. Elements unique to the experts list pertained to general data-driven decision making. For example, licensure does not address the need to identify problems of practice, how to frame questions, understanding context, causality, testing assumptions, and generating and testing hypotheses. Also excluded from the state documents are some issues around data quality, using qualitative data, and considering the consequences of a decision.
The next step in the analytic process was to categorize and name related components and processes, an essential step in the development of our conceptual framework. Three overarching processes were identified:7 (a) engaging in an inquiry cycle; (b) transforming data to information to actionable knowledge; and (c) communication. Data-driven decision making is seen as a cycle or process of inquiry that pervades all that goes on during decision making. The process also is a continuum of transforming data into actionable knowledge that teachers can apply in classrooms to an educational problem, issue, or question. Communication is required, whether talking to colleagues, students, parents, or other stakeholders. Nine components were identified: (a) inquiry processes; (b) habits of mind; (c) general data use; (d) data quality; (e) data properties; (f) data use procedural skills; (g) transformation of data to information; (h) transformation of data to implementation; and (i) communication.
Inquiry processes. Elements that are part of the inquiry process focus on aspects of identifying a problem, an issue, or a question; thinking critically; and understanding the context of the problem. Eleven such elements were identified. Teachers should be able to:
Identify a problem of practice
Frame a question
Understand the context
Involve stakeholders, including students
Probe for causality
Evaluate outcomes, scrutinize results, and engage a feedback loop
Determine next steps
Engage in inquiry cycle
Habits of mind. Habits of mind are general strategies, dispositions, or tools that surround and influence the use of data. Three elements were identified:
Belief in data
General data use. Two elements were categorized under general data use because of their global nature.
Ethics of data use
Data quality. Data quality is a topic on which the experts focused more because it is a general principle of data-driven decision making and not necessarily something that is readily taught in teacher preparation programs. Three elements were identified:
Understand data quality: accuracy, completeness
Knowledge of assessments and their psychometric properties
Data properties. The properties of the data refer to the alignment of the data to the purpose of the inquiry; that is, making sure that the appropriate data are being used to solve the problem. More specific to assessment, it is the knowledge of what measures yield what kinds of data and their alignment to the question at hand. Eight elements were identified:
Understand the purpose of different data
Identify and evaluate the right data
Use qualitative data
Use quantitative data
Understand what data are not applicable
Use formative and summative assessments
Use multiple measures/sources of data
Use research evidence
Data use procedural skills. The procedural skills are the actions taken to attain a decision. These elements are the fundamentals of data literacy. Fifteen elements were identified:
Use technologies to support data use
Find, locate, access, and retrieve data
Collect, gather, and store data
Reflect on data
Review and document
Develop sound assessments
Drill down, aggregate, disaggregate strand items
Transformation of data to information. The transformation of data into information is the first step along the continuum toward actionable knowledge that informs practice. The elements that comprise this component focus on the examination and analysis of data in ways that lead toward interpretable results.
Analyze data and use statistics
Understand data displays and representations
Synthesize diverse data
Summarize data and explain
Draw inferences and conclusions from the information analyzed during the inquiry process
Consider impact and consequences, intended and unintended
Assess patterns and trends
Transform data to information to actionable knowledge
Transformation of data to implementation. The transformation of data to implementation is essentially about taking data and using them to inform instruction and classroom practice. Some have called this instructional decision making (Means, Chen, DeBarger, & Padilla, 2011) or pedagogical data literacy (Mandinach, 2012). The experts in the data literacy study were less focused on the transformation of data to instructional actions and therefore mentioned this set of elements less often than they appeared in the licensure documents.
Pedagogical data literacy, instructional adjustments, pedagogical content knowledge
Diagnose and monitor
Plan, design, and guide
Adapt, modify, individualize, and differentiate
Adjust and apply
Transform data into an implemented decision
Communication. The final category is communication. Teachers need to be able to communicate about data and results to other educators, students, parents, and other stakeholders.
Our analyses yield several key findings. First, schools of education report that they are teaching stand-alone courses on data-driven decision making. They also report that they are integrating data use concepts into existing courses. Additionally, schools of education expect to introduce courses on data use in the future. The survey results also indicate that there is a clear emphasis on assessment and assessment data, with much less focus on nonassessment data. The clear message from the survey is that schools of education conflate data literacy with assessment literacy because their focus really is on assessment data rather than a broad range of data.
The syllabus analyses provide a deeper examination of what actually is being taught in a subset of the courses. These analyses also can confirm or disconfirm the survey results. The examination of the syllabi again provides a depiction of the emphasis on assessment to the detriment of other data. Few, if any, courses actually provide a comprehensive examination of data-driven decision making, as indicated by the required readings and the structure of the class. The syllabus analyses align with the survey results: they confirm that the courses primarily are about assessment but disconfirm that the courses should be considered offerings about data-driven decision making.
The licensure analyses provide a perspective on how education may view data literacy. The results indicate that defining data literacy and its component skills and elements is a complex and challenging process. Many skills have been identified both in the licensure documents and by the experts, appearing as statements of needed knowledge and skills. It is important to note, however, that not all skills were recognized by all experts or appeared in all the licensure documents. There is substantial overlap, yet the experts focused on the fundamental principles of data-driven decision making, whereas the licensure documents included skills that transform data into instructional action. These emphases make sense given the difference between conducting educational research on data literacy and writing standards for teacher licensure. From a high-level perspective, the several global constructs can help to define data literacy. In contrast, the specific skills can serve as essential information about what needs to be included in courses that pertain to data use. They can serve as a roadmap for the development of courses on data-driven decision making or for the integration of data concepts into existing preservice or graduate education courses.
The simple inclusion of data concepts in the licensure documents is no assurance that data-driven decision making is being taught in schools of education. Conversely, having a school of education include a so-called data course does not guarantee that the course actually is about data-driven decision making. We strongly suspect that although schools of education say they are teaching data literacy, they really are not. The licensure documents serve as a guide for the skills and knowledge that should be addressed. The question remains if and how those standards or statements are actually being translated into action in courses within each state. Obviously there are differences across states. Some states include few data-related statements, whereas others address data in a more comprehensive manner. Further, for states that use other frameworks, such as InTASC or Danielson, the question arises as to whether they are actually using the framework and implementing the required skills. For some states, the answer appears to be yes, but for a subset of states, the answer is no. It is impossible to determine at this stage the relationship between the extent to which data literacy is addressed in licensure documents and the extent to which a focus on data is part of the teacher preparation programs in colleges and universities. There may be states with comprehensive standards whose schools of education offer no courses related to data. There may be states with little or no data standards, yet whose institutions have data courses. Schools of education may have nascent efforts at data courses or integration. Other programs may think they are teaching data literacy but are not actually covering the salient topics, or are simply offering assessment literacy courses. It is quite likely that in some of the states with substantial documentation, activity around data literacy may not be well established in the programs.
The key here is to ascertain in a reliable and valid manner the extent to which the licensure requirements are actually being carried out.
There is one form of validation for our coding scheme: Action 9 of the DQC's (2012, 2013b, 2013c) annual survey. Action 9 asks states8 whether they have implemented policies and are promoting practices, including professional development and credentialing, to ensure that educators know how to access, analyze, and use data appropriately. But there are measurement problems with this index. The survey is self-report and may or may not have been completed by an individual knowledgeable about what occurs in teacher preparation programs within each state. Even if the survey is completed accurately to reflect the licensure documentation, there still is no guarantee of actual application in practice.
Results from the 2013 survey partially confirm some of our analyses. They also raise some questions. The survey results report that 29 states include data literacy as a requirement for certification/licensure or data literacy training as a requirement for state program approval, an improvement over the prior survey. Also according to the survey, 19 states9 indicate that they have licensure requirements for teachers around data literacy. There is an alignment with the DQC's 19 states reporting a licensure requirement, yet our results might have included more states, given the prominence of data literacy in the reviewed documents. Among the 19 states, six (AR, DE, KS, NY, OH, and SC) were highlighted for creating high-quality licensure policy for data literacy. Our analyses would concur with five of the six, with New York rating slightly lower than the other states. Eleven states10 report that they have no data literacy policies. Among these 11 states are three InTASC states (AZ, NV, and ND). It is clear that the adoption of the InTASC standards does not necessarily translate into formal licensure actions. This is further confirmed by an interaction in one of those states, where staff at the state department of education's licensure window did not even know what InTASC was when the first author approached them with a direct query. Although the state board of education had adopted InTASC, the adoption obviously has not been actionable, has not had an impact, and was not widely known.11 Three states12 report that they have a data literacy assessment. One of these states, Missouri, did not appear strong in our analyses, yet it reports in the DQC survey that it assesses data literacy. There is a bit of a disconnect here. The alignment of our work with the DQC survey yields a number of questions worth deeper examination.
Another issue is the level at which the licensure documentation might be written to have maximum impact on the course offerings in schools of education. The language found in the licensure documents is necessarily quite general. The documents provide a high-level view of the skills and knowledge teacher candidates should demonstrate, and the range of generality was quite large. Getting down into the weeds and noting requisite skills with great specificity may be too challenging programmatically for a school of education to accommodate in its courses. However, specificity can help professors in preservice programs, professional development providers, and technical assistance programs to articulate the relationships between the licensure standards and curricula and to incorporate the myriad skills, or a subset of them, into courses and training. It can also potentially stimulate the development of comprehensive course content or training materials that can be used in a variety of teacher preparation contexts. Questions remain as to whether the licensure criteria can and should be written more specifically, and whether there is a difference between standards and the curricula in programs. Certainly there needs to be a translation process between standards and curricula. Additional questions remain about the awareness and the impact of licensure documents on implementation practice. Just because something is written in the licensure documents does not guarantee implementation and impact on schools of education.
The definition of data literacy for teaching and the identification of the component skills and knowledge are two steps in the arduous process of helping teachers become facile with data. The definition can help schools of education understand what it means for an educator to be data literate. This is especially important given that these institutions report that they are teaching data literacy when, in many cases, that may not be so.
Also of relevance is the accessibility of the licensure documents, not just for research use but, more importantly, for the practice audience. As the project attempted to locate the licensure documents on department of education websites, some were easy to locate and others nearly impossible, requiring assistance from NASDTEC officials and departmental staff. Moreover, in some states, licensure resides in agencies other than the state department of education (examples are KY and OR). In Oregon, for example, there is information on the Oregon Department of Education website and also on the Teacher Standards and Practices Commission website. Without such knowledge, locating the appropriate documents would have been difficult. Having disparate venues for licensure may create an impediment for educators seeking information within and across states as they try to obtain their licenses.
Perhaps a more major issue is effecting change in institutions of higher education. Including data literacy for teaching in schools of education is a complex and systemic issue (Mandinach & Gummer, 2013b). Consider the key variables in the equation, their roles, and the constraints placed on them. Schools of education are at the center of the issue, and the question is how to facilitate their inclusion of data-driven concepts in existing courses or in stand-alone data courses. The situation is very much like the old joke: How many psychologists does it take to change a light bulb? Only one, but the light bulb must want to change. Like the light bulb, schools of education must want to change; that is, they must want to take on the role of helping to build educators' capacity to use data.
Many deans have told us that they want to include data courses in their curricula but have encountered several obstacles. First, they have no flexibility in their curriculum to offer a data course. To that problem, we suggest integrating data into existing courses. Second, they have no one capable of teaching data courses. One solution may be the development of virtual materials, designed by experts, that professors can integrate into their courses. Western Oregon University exemplifies the integrated approach to data literacy in its preparation of teachers and in its institutional creation of a data culture of continuous improvement. The institutional and instructional philosophy that resonates throughout its curricula is using data to link teaching and learning. Third, data use is not considered sufficiently important to allocate a faculty slot. The virtual solution again might be viable. Bringing in a local district's data expert as an adjunct might be possible. Contracting with some of the known professional development or technical assistance providers might be another option. School districts typically use a turnkey model in which principals are trained and then train their schools' teachers and staff. Using this model, it may be possible to bring in an expert to help schools of education faculty recognize where data concepts can be integrated into existing courses or how courses can be modified to accommodate the topic. It is worth repeating that the schools of education must want to change. We must recognize, however, that institutions of higher education are notorious for resisting change, and change comes slowly.
Further, schools of education do not function in isolation; there are many other influences on them. Currently, there is increasing pressure for the evaluation of teachers once they have left their preparation programs. Teachers' subsequent performance (or their students' performance) is seen as a reflection of the preparation programs. This trend brings at least three other variables into the equation: the credentialing or licensure agencies, testing organizations, and local school districts. Before we describe the roles of these entities, let us present a metaphor developed by the DQC (2013a), which espouses that data should be used as a flashlight to guide continuous improvement rather than as a hammer for accountability and compliance. Herein lies the challenge for two of the agencies. The licensure agencies can tighten their requirements, and schools of education can decide whether to respond. The more stringent the requirements become, the more the pendulum swings toward the hammer. As testing organizations, such as Educational Testing Service, decide to include data literacy on the Praxis, the hammer becomes a sledgehammer, one that will come down hard on candidates who fail to exhibit data skills and on the programs that produce those candidates. Local school districts can also apply pressure for change to schools of education by requiring that applicants for teaching positions demonstrate their capacity to use data. This is happening in some locations with candidates for principal positions (Long, Rivas, Light, & Mandinach, 2008). If candidates are rejected, that reflects back on the school of education. This too is a hammer.
What is needed is the flashlight to illuminate the path to change and continuous improvement. First, the schools of education must recognize that they need to integrate data into course content and practica. Second, they must develop a strategy for how the change will occur. They know the distal goal but must take proximal steps toward achieving the objective. Third, they must consult with a variety of sources that can help them in this change process. This might include institutions that have already introduced data into courses and practice. It might also include bringing in experts such as researchers in the field, professional developers, in-service providers from local school districts, and others who might provide guidance and models for successful integration and course content. Fourth, they might consider working with the credentialing and licensure agency in their state to align the evolving content with the requirements. Finally, they might consult with test developers at Educational Testing Service about their intent to include data literacy in Praxis to better understand how they are conceptualizing the construct. That knowledge can be reflected back on the design of courses in the school of education. Virtual courses and resources and massive open online courses (MOOCs) might be another viable solution. The provision of such a broad-based course offering could provide data-driven courses to many schools of education and professional development opportunities to districts throughout the country.
LIMITATIONS AND CAVEATS
The empirical and theoretical work on defining data literacy continues to evolve. The work presented here represents a view of the construct that will become more refined over time. Parts of the construct are knowledge and skills, described here as statements about these elements; beliefs make up another part of the construct. As can be seen in our conceptual article (Gummer & Mandinach, 2015, this issue), categorizing the elements or skills within the components of the inquiry process is a complicated enterprise. As the work continues, some elements may drop out, whereas others may be combined to reduce redundancy. For example, licensure documents and experts use terms such as adapt, adjust, and modify. We need to conduct a more in-depth analysis of how these terms are being used to determine whether they are indeed synonymous or different. The differences may be nuanced and subtle but important if they reflect essential points of discrimination.
Some limitations of the study may prevent us from achieving definitive results. For the survey, a major limitation is the depressed response rate. Although the 25.7% response rate is not atypical for online surveys, it still limits the extent to which we can make definitive statements, despite the large number of teacher candidates produced by the participating institutions. The respondents may or may not be representative of the population of schools of education.
Through extensive conversations with deans as we sought to increase the responses, we learned that schools of education assumed that the survey was in some way associated with the National Council on Teacher Quality (NCTQ). NCTQ had just completed a rather divisive and controversial rating of schools of education, and the deans were understandably reluctant to cooperate. One dean actually used the terms "witch hunt" and "gotcha." Many deans explicitly declared that they would not participate if we had anything to do with NCTQ. Other deans reported that they would block any follow-up requests to participate in the survey. And these were the individuals who provided us with feedback. We can surmise that this issue, in addition to deans typically receiving many requests for survey responses, decreased the response rate despite the endorsements from AACTE, NCATE/CAEP, NBPTS, and NASDTEC.
Another limitation of the survey is that we did not ask about both teacher and administrator preparation. Our original intent was to include both, but our funder requested that we focus only on teachers. It was our contention, based on interactions with leading professors, that courses for administrators were more pervasive, particularly stand-alone courses. The survey results would have either confirmed or disconfirmed our anecdotal impressions. The question about relative prevalence remains unanswered.
A limitation of the syllabus review is that the analyses are based on a relatively small number of usable documents. These syllabi may or may not be representative of the universe of syllabi being used across the country. Another limitation is that the courses were volunteered by the schools of education, relying on the knowledge of the programs held by the person(s) responding to the survey. There is the possibility that the request for the course was relayed from person to person until someone responded with a course that he or she felt would fit the request. For this analysis, only one course per institution was used, typically the course that appeared to most strongly address the use of data. It is possible that the course chosen for the syllabus analysis is not the course that was at the center of the respondent's consideration when he or she filled out the survey. It is also possible that the analysis of a suite of courses would provide a better picture of how a focus on data was developed across a number of courses, but the initial analysis of the multiple-course sets did not support such a hypothesis.
Three limitations pertain to the licensure review. First, some of the documents may be dated and may not have been revised to reflect the emerging emphasis on data literacy. It was often impossible to locate a publication date for the regulations, so there was no way of determining how current they were. One might assume that the more current document may be more likely to include data literacy. Second, for many states, locating the right documents was a monumental task. It is possible that other documents may exist in licensure agencies parallel to the state departments of education. We believe we have the right documents for most states, but it is possible that we may have missed a few. Third, our search procedure captured the majority of the relevant materials but likely missed a small amount. It would have been a huge undertaking to read every word of every document, probably several thousand pages. We feel confident that our search procedure identified more than 95% of the relevant information.
A final caveat pertains to the entire subject: the confusion between data literacy and assessment literacy. As we have noted, we see data literacy as a broad construct of which assessment is an important component. Assessment literacy refers to the ability to use assessment results appropriately and to understand the basic psychometric properties of the measures. Data literacy goes beyond assessment results to include other sources of data. We have posited a comprehensive definition of data literacy for teachers, and the DQC has developed a more streamlined definition intended to inform policy makers and professional organizations. Until definitions like these are acknowledged and used, the conflation of the two constructs will continue. We suspect that this confusion may have affected this study.
This study has tried to determine the extent to which schools of education provide courses, or parts of courses, that pertain to data-driven decision making. The fact that institutions report that they have such courses, yet the syllabi indicate that the course content is cursory at best and more aligned to assessment literacy than data literacy, is an important finding and one that can generate constructive and informative discussions. There is an obvious disjuncture between what the schools of education report and think they are teaching and what content actually appears in the syllabi. Correspondingly, the fact that assessment literacy looms as large or larger in the licensure documents also provides a focus for discussion. Most certainly, assessments are a major source of data for teachers, but other data are also important. We would expect assessment literacy to loom large in courses, syllabi, and licensure requirements, but not to the exclusion of data literacy.
We see the findings as an opportunity and a challenge. This is an opportunity for a teachable moment for the field in that we can use these results to begin conversations with policy makers, professional organizations, institutions of higher education, licensure agencies, and even testing organizations to help them recognize the nuances of the two constructs. These conversations have already begun. The challenge lies in fomenting actual change in thinking and practice. Awareness is a first step, done through a process of educative presentations, discussions, and meetings. But this must be broad based, not isolated. It must be systemic. Stimulating real reform is a much longer and more complex process, one that will require the acceptance and willingness of many different stakeholder groups. Change comes slowly, but this change is important if educators are to keep pace with other professionals in terms of their ability to use evidence and data to inform their practice.
This study was funded by the Michael and Susan Dell Foundation. The data literacy study was funded by the Bill & Melinda Gates Foundation. The survey was endorsed by AACTE, NCATE/CAEP, NBPTS, and NASDTEC.
1. The first author was the leader of that group.
2. This work was sponsored by the Spencer Foundation.
3. For the purposes of this report, the District of Columbia will henceforth be considered a state.
4. Other is excluded from this analysis.
5. Project staff communicated with the Wyoming Department of Education to determine if relevant documentation exists. No such documentation was uncovered.
6. Hereafter referred to as a state.
7. Note that this categorization has been streamlined in the conceptual framework article (Gummer & Mandinach, this issue, 2015).
8. The survey is sent to the governor's office and asks that knowledgeable individuals complete relevant portions of it.
9. AR, CT, DE, FL, KS, KY, LA, MD, MT, NH, NY, NC, OH, RI, SC, TN, UT, VA, and WI.
10. AZ, IL, NE, NV, NJ, ND, OK, SD, TX, WA, and WV.
11. In 2014, this state (AZ) created a rubric for data literacy skills against which they will hold their schools of education accountable.
12. LA, MD, and MO.
Aguerrebere, J. (2009, August). Remarks made at the Teachers Use of Data to Impact Teaching and Learning conference, Washington, DC.
American Recovery and Reinvestment Act of 2009, Pub. L. No. 111-5 (2009).
Baker, E. L. (2003). From unusable to useful assessment knowledge: A design problem (CSE Technical Report 612). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Blue Ribbon Panel on Clinical Preparation and Partnerships for Improved Student Learning. (2010). Transforming teacher education through clinical practice: A national strategy to prepare effective teachers. Washington, DC: National Council for Accreditation of Teacher Education.
Choppin, J. (2002, April). Data use in practice: Examples from the school level. Paper presented at the meeting of the American Educational Research Association, New Orleans, LA.
Council of Chief State School Officers, Interstate Teacher Assessment and Support Consortium. (2011). InTASC model core teaching standards: A resource for state dialogue. Retrieved from http://www.ccsso.org/Documents/2011/InTASC_Model_Core_Teaching_Standards_2011.pdf
Council of Chief State School Officers. (2013, April). InTASC model core teaching standards and learning progressions for teachers 1.0. Washington, DC: CCSSO's Interstate Teacher Assessment and Support Consortium.
CCSSO Task Force on Teacher Preparation and Entry into the Profession. (2012). Our responsibility, our promise: Transforming teacher preparation and entry into the profession. Washington, DC: Council of Chief State School Officers.
Danielson, C. (2013). The framework for teaching. Available at http://www.danielsongroup.org
Data Quality Campaign. (2012). State action 9: Educator capacity 2011 state analysis. Retrieved from http://www.dataqualitycampaign.org/stateanalysis/actions/9/.
Data Quality Campaign. (2013a, April). Changing the ground game: Focus on people to improve data use at the local level. Presentation at the Changing the Ground Game conference, Crystal City, VA.
Data Quality Campaign. (2013b). Right questions, right data, right answers: Data for action 2013. Washington, DC: Author.
Data Quality Campaign. (2013c). State analysis by state action: State action 9. Retrieved from http://dataqualitycampaign.org/your-states-progress/10-state-actions?action=nine
Data Quality Campaign. (2014, February). Teacher data literacy: It's about time. Washington, DC: Author.
Data Quality Campaign Data Literacy Working Group. (2013). Definition of what it means to be a data literate educator. Washington, DC: Author.
Duncan, A. (2009a, March 10). Federal leadership to support state longitudinal data systems. Comments made at the Data Quality Campaign Conference, Leveraging the Power of Data to Improve Education, Washington, DC.
Duncan, A. (2009b, June 8). Secretary Arne Duncan addresses the Fourth Annual IES Research Conference. Speech made at the Fourth Annual IES Research Conference, Washington, DC. Retrieved from http://www.ed.gov/news/speeches/2009/06/06-82009.html
Duncan, A. (2009c, May 20). Testimony before the House Education and Labor Committee. Retrieved from http://www.ed.gov/print/news/speeches/2009/05/05202009.html
Duncan, A. (2010a, July 28). Data: The truth teller in education reform. Keynote address at Educate with Data: STATS-DC 2010, Bethesda, MD.
Duncan, A. (2010b, November 16). Secretary Arne Duncan's remarks to the National Council for Accreditation of Teacher Education. Retrieved from http://www.ed.gov/news/speeches/secretary-arne-duncans-remarks-national-council-accreditation-teacher-education
Duncan, A. (2012, January 18). Leading education into the information age: Improving student outcomes with data. Roundtable discussion at the Data Quality Campaign National Data Summit, Washington, DC.
Feldman, J., & Tung, R. (2001). Using data-based inquiry and decision making to improve instruction. ERS Spectrum, 19(Summer), 10–19.
Greenberg, J., & Walsh, K. (2012). What teacher preparation programs teach about K–12 assessment. Washington, DC: National Council on Teacher Quality.
Gummer, E. S., & Mandinach, E. B. (2015). Building a conceptual framework for data literacy. Teachers College Record, 117(4).
Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/practice_guides/dddm_pg_092909.pdf
Hess, F. M., & Kelly, A. P. (2007). Learning to lead: What gets taught in principal-preparation programs. Teachers College Record, 109(1), 244–274.
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the data-driven mantra: Different conceptions of data-driven decision making. In P. A. Moss (Ed.), Evidence and decision making: 106th yearbook of the National Society for the Study of Education: Part I (pp. 104–131). Malden, MA: Blackwell.
Long, L., Rivas, L., Light, D., & Mandinach, E. B. (2008). The evolution of a homegrown data warehouse: TUSDStats. In E. B. Mandinach & M. Honey (Eds.), Data-driven school improvement: Linking data and learning (pp. 209–232). New York, NY: Teachers College Press.
Mandinach, E. B. (2009, October). Data use: What we know about school-level use. Presentation at the Special Education Data Accountability Center Retreat, Rockville, MD.
Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform practice. Educational Psychologist, 47(2), 71–85.
Mandinach, E. B., & Gummer, E. S. (2011). The complexities of integrating data-driven decision making into professional preparation in schools of education: It's harder than you think. Alexandria, VA, Portland, OR, and Washington, DC: CNA Education, Education Northwest, and WestEd.
Mandinach, E. B., & Gummer, E. S. (2012). Navigating the landscape of data literacy: It IS complex. Washington, DC, and Portland, OR: WestEd and Education Northwest.
Mandinach, E. B., & Gummer, E. S. (2013a). Defining data literacy: A report on a convening of experts. Journal of Educational Research and Policy Studies, 13(2), 6–28. Retrieved from http://normessasweb.uark.edu/jerps/JERPS_april.pdf
Mandinach, E. B., & Gummer, E. S. (2013b). A systemic view of implementing data literacy into educator preparation. Educational Researcher, 42(1), 30–37.
Mandinach, E. B., & Honey, M. (Eds.). (2008). Data-driven school improvement: Linking data and learning. New York, NY: Teachers College Press.
Mandinach, E. B., & Jackson, S. (2012). Transforming teaching and learning through data-driven decision making. Thousand Oaks, CA: Corwin Press.
Mandinach, E. B., Rivas, L., Light, D., & Heinze, C. (2006, April). The impact of data-driven decision making tools on educational practice: A systems analysis of six school districts. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA.
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education. Santa Monica, CA: RAND Education.
Mason, S. (2002, April). Turning data into knowledge: Lessons from six Milwaukee public schools. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Massell, D. (2001). The theory and practice of using data to build capacity: State and local strategies and their effects. In S. H. Fuhrman (Ed.), From the capitol to the classroom: Standards-based reform in the states (pp. 148169). Chicago, IL: University of Chicago Press.
Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers ability to use data to inform instruction: Challenges and supports. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.
Means, B., Padilla, C., & Gallagher, L. (2010). Use of education data at the local level: From accountability to instructional improvement. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development.
Miller, M. (2009). Achieving a wealth of riches: Delivering on the promise of data to transform teaching and learning (Policy Brief). Washington, DC: Alliance for Excellent Education.
North Carolina Department of Public Instruction. (2013). Data literacy. Retrieved from http://ites.ncdpi.wikispaces.net/Data+Literacy
Schafer, W. D., & Lissitz, R. W. (1987). Measurement training for school personnel: Recommendations and reality. Journal of Teacher Education, 38(3), 57–63.
U.S. Department of Education. (2009). Race to the Top program: Executive summary. Washington, DC: Author.
Wise, S. L., Lukin, L. E., & Roos, L. L. (1991). Teacher beliefs about training in testing and measurement. Journal of Teacher Education, 42(1), 37–42.
Characteristics by State
N.B. X denotes existence; / denotes less strong emphasis; XX or additional Xs indicate strength of emphasis.
List of Skills and Knowledge from Experts' Definitions
Identify problems of practice
Synthesize diverse data/information/assimilate
Engage in inquiry cycle
Identify the right data/select data/qualitative and quantitative/evaluate applicability/understanding utility of data
Understand data quality and limitations (accuracy, completeness)
Understand how to use the findings
Understand the purposes of different data
Use multiple sources of data
Use formative and summative assessments/knowing which assessments/benchmarks
Knowing where to find the right data/data location
Ability to assess patterns/trends
Know the research that can inform the issue
Ability to use technology to support data use
Analyze different levels of data (cohort, course, grade)
Drill down to different layers of data (aggregate, disaggregate, strand, item)
Rethink with new information
Instructional decision making/pedagogical data literacy/instructional adjustments
Data comprehension [data displays and representations/reports/presentations/longitudinal, cross-sectional]
Ability to solve problems
Make meaning/identify and critique meaning
Marshal facts/support or refute
Use of statistics
Use of quantitative evidence
Use qualitative data
Knowledge of assessments/psychometrics
Application of interpretations
Understand how to involve stakeholders
Evaluate outcomes/scrutinizing results
Action that leads to change
Transform data into information and knowledge and actionable decisions
Translate knowledge into appropriate responses/implement decision/draw conclusions
Ability to access data
Ability to evaluate success of proposed solution/feedback loop
Ability to understand the data-driven process
Understand the context in which decision is being made
Probe for causality
Using habits of mind
Consider impact, consequences (intended or unintended)
Pedagogical content knowledge
Determine additional next steps
Understand what data are not actionable
Troubleshoot problematic data
Ethics of data use
Understand scaled scores, percentiles, performance levels
Belief in data
Develop a sound design of assessments
Metacognitive aspects/knowing or not knowing about data
Instructional data literacy = statistics literacy + assessment literacy + analysis literacy
Content [what] and Performance Components [how much knowledge and skills]
Understanding the difference along a continuum from basic skills in data literacy to complete fluency (the level of ability necessary to be considered data literate)
Awareness of knowledge/skill → Application of knowledge/skill → Mastery of knowledge/skill
α General data-driven decision making—should be applicable.
β Licensure documents don’t deal with qualitative data.
γ Really about PDL (pedagogical data literacy); experts did not cover this.