Toward a Typology of Technology-Using Teachers in the “New Digital Divide”: A Latent Class Analysis (LCA) of the NCES Fast Response Survey System Teachers’ Use of Educational Technology in U.S. Public Schools, 2009 (FRSS 95)


by Kenneth E. Graves & Alex J. Bowers - 2018

Background: Recently, policy makers and school leaders have heavily invested in the promise that educational technology could catalyze systemic school change. Yet some critics note that the conversation surrounding technology in schools is a red herring that has not produced clear, definitive, and equitable results across different school settings. In order to address this concern, prior research has mainly focused on understanding how and why teachers use technology. Still, we argue that an understudied third perspective—examining what types of technology-using teachers exist—could provide innovative and impactful insights to shape research, policy, and practice in instructional technology.

Purpose of the Study: We investigate the extent to which there is a typology of teachers who use technology, as well as to what extent school- and teacher-level variables predict membership in the different subgroups in the typology, by analyzing a nationally generalizable sample (2,764 teachers) from the Teachers' Use of Educational Technology in U.S. Public Schools, 2009 Fast Response Survey System dataset, collected by the National Center for Education Statistics.

Research Design: We used a three-step, one-level latent class analysis (LCA) with nationally generalizable data to identify significantly different types of technology-using teachers, as well as the covariates that predict membership in the identified subgroups.

Findings: We find that there are four statistically significant subgroups of technology-using teachers: Dexterous (24.4%), Evaders (22.2%), Assessors (28.4%), and Presenters (24.8%). We also find that several covariates, such as student socioeconomic status, school type, enrollment, years of teacher experience, and total number of school computers, predicted teachers' membership in these four subgroups of technology-using teachers.

Conclusions: Our findings reiterate the notion that technology-using teachers are neither a monolithic group nor randomly distributed across school settings, finding that low-income schools are more likely to have teachers who use technology in less meaningful ways. As a quantitative phenomenology, this study provides one of the first empirically based, nationally generalizable depictions of technology use in schools, which could inform school leaders and policy makers as they evaluate new digital tools, design professional learning for teachers, and tackle inequalities in technology access, teacher knowledge, and technology-mediated learning experiences and outcomes for students.

Keywords: Educational Technology, Teachers, Social Justice, Digital Divide, Technology Leadership, Latent Class Analysis, Mixture Modeling, Survey Research, NCES



INTRODUCTION


The purpose of this study is to examine the extent to which there is a typology of technology-using teachers using a nationally generalizable dataset, the Fast Response Survey System Teachers’ Use of Educational Technology in U.S. Public Schools, 2009 (FRSS 95) from the National Center for Education Statistics (NCES). Many educators and policy makers believe that technology is the key to richer, highly personalized, and collaborative learning experiences for all students (Collins & Halverson, 2009; U.S. Department of Education, 2016). This excitement about the transformative potential of instructional technology has impacted the inner workings of American schooling in three key ways. First, national reports show that teacher technology use has increased steadily over the past five years (Bill and Melinda Gates Foundation, 2015; Purcell, Heaps, Buchanan, & Friedrich, 2013). Second, school technology budgets continue to increase, reaching a cumulative, national all-time high of more than one billion dollars in 2014 (Winters & McNally, 2014). And third, over the last decade, there has been a promising and significant body of research that investigates the extent of technology integration efforts and their impact on teacher pedagogy and larger school change efforts (Becker, 2000; Lemke, Coughlin, & Reifsneider, 2009; Lesgold, 2003; Wenglinsky, 1998), particularly in historically underserved communities (Warschauer, 2000). Indeed, teachers, school leaders, policy makers, and researchers are looking to use the power of technology as a catalyst to improve schooling for the next generation.


However, several researchers have noted that the conversation surrounding technology in schools is a red herring in school reform that has yet to produce clear and definitive results. Larry Cuban (2015a) wrote, “The evidence thus far that increased access and use of these technological tools has, indeed, solved any of the problems is distressingly missing.” In fact, in one of the first experimental design studies by Mathematica and SRI International on the effectiveness of reading and mathematics software across 132 U.S. schools, Dynarski et al. (2007) found that there were no observed effects of the educational software on student test scores in the treatment group.


Additionally, social justice issues of the “new digital divide” have also created significant challenges in technology adoption in schools. A significant body of research (Becker, 2007; Harris, 2015; Warschauer, Knobel, & Stone, 2004) has asserted that teachers’ technology use is inherently moderated by an unjust system of inequitable access to digital tools and instructional resources for historically underserved communities. Although schools are continually pushed to adopt new technologies year after year, a “cyclic amnesia” of the relationship between teachers and technology continues to fester (Zhao, Zhang, Lei, & Qiu, 2016). Therefore, as research in the field of educational technology continues to grow, examining teacher technology use from a different lens could enhance institutional efforts to support wide-scale technology adoption efforts.


Research on teacher technology use is extensive, and recent peer-reviewed and practitioner-focused literature has described teacher technology use from three perspectives: how teachers use technology, why teachers use technology, and which types of teachers use technology. First, a significant body of research (Becker, Wong, & Ravitz, 1998; Koehler & Mishra, 2009; McKnight et al., 2016; Mishra & Koehler, 2006; O'Dwyer, Russell, & Bebell, 2004, 2005; Rowand, 2000; Russell, Bebell, O'Dwyer, & O'Connor, 2003) has argued for a more multifaceted conception of how teachers use technology, rather than measuring it through a single construct. In the second perspective of examining why certain teachers use technology more than others, Heitink, Voogt, Verplanken, Braak, and Fisser (2016) found that teachers tend to adopt technology to simply engage students and to support learning goals and activities, while Hew and Brush (2007) identified 123 additional external and internal moderating factors. However, in terms of the third perspective, there are very few examples in the literature that investigate which types of teachers use technology. For example, Cuban (2015b) spotlighted one diagram in which a teacher used a pencil as a metaphor to describe several different types of teacher technology users. In this diagram, the pencil tip was termed the technology “leaders,” while the “erasers” were out to “undo the work of the leaders” (Cuban, 2015b). In the peer-reviewed literature, there are also two qualitative studies (Donnelly, McGarr, & O'Reilly, 2011; Mama & Hennessy, 2013) that described at least four subgroups of technology-using teachers in schools in Cyprus and in Ireland. Nonetheless, there are no studies to date that test the veracity of these hypotheses around teacher technology types with empirical, nationally representative data.


Thus, the motivation of this study is to extend the research on teacher technology use and to investigate the extent to which there is a typology of teachers who use technology in their classrooms using a nationally generalizable dataset, the NCES Fast Response Survey System Teachers’ Use of Educational Technology in U.S. Public Schools, 2009. By using recent innovations in person-centered statistics and typology subgroup analysis, namely Latent Class Analysis (LCA), we find that there are four significantly different subgroups of technology-using teachers: Dexterous (24.4%), Evaders (22.2%), Assessors (28.4%), and Presenters (24.8%). We also find that contextual variables, such as student socioeconomic status, school type, enrollment, years of teaching experience, and total number of computers, significantly predict the odds of a teacher belonging to the Evaders, Assessors, and Presenters groups. We argue that the implications of a nationally generalizable typology of teacher technology types could be a critical piece of the reform puzzle as school districts design evidence-based interventions that address the needs of teachers and school leaders on the ground level of implementation (Culp, Honey, & Mandinach, 2003), while also reiterating the prevailing reality that teachers in low-income communities still struggle to access adequate resources to adopt and meaningfully use new technologies in instruction (Harris, 2015; Valadez & Duran, 2007; Warschauer, 2003). Our goal is that this person-centered conception of teacher technology use can provide a clearer picture of the challenges facing teachers and school leaders as they seek to leverage the power of information and communication technologies (ICTs) to create better learning experiences for all students.


LITERATURE REVIEW


Instructional technology literature contains a significant and growing body of teacher technology use research. This literature can be divided into three key perspectives: how teachers use technology, why teachers use technology, and which types of teachers use technology.


First, it is difficult to concretely describe how teachers use technology because our understanding of this question has evolved over time. Early survey research in the late 1990s reported that teachers were using technology only to prepare for instruction (Market Data Retrieval, 1999). However, subsequent research findings showed that teachers’ use of digital tools was much more multidimensional than once thought. These researchers criticized early surveys for confounding the indicators of teacher technology use into a single generic construct (Bebell, Russell, & O'Dwyer, 2004; Rowand, 2000). Instead of describing how teachers use technology as a single action, several studies using confirmatory factor analysis with large-scale surveys (Russell, O'Dwyer, Bebell, & Miranda, 2004) found that the construct of how teachers use technology is characterized by seven positively correlated indicators: (1) teachers’ use of technology for class preparation; (2) teachers’ professional email use; (3) teachers’ use of technology for delivering instruction; (4) teachers’ use of technology for accommodation; (5) teacher-directed student use of technology during class time; (6) teacher-directed student use of technology to create products; and, (7) teachers’ use of technology for grading (Russell, Bebell, et al., 2003; Russell et al., 2004). Likewise, in their study on teachers’ use of educational technology in seven states, McKnight et al. (2016) found that teachers use technology for communication, direct instruction of content, accommodations, collaboration, research, and assessment. This multifaceted understanding of how teachers use technology over time was, and continues to be, a critical part of how researchers capture a more complex snapshot of how teachers use digital tools across different school settings and attempt to build a grounded theory of teacher technology use (Bebell et al., 2004; Koehler & Mishra, 2009; Mishra & Koehler, 2006).


Although we understand that teachers use technology in a variety of ways, there is still a significant problem in understanding how teachers use technology within this new multidimensional construct (Bebell et al., 2004; Russell, Bebell, et al., 2003). This problem persists for three main reasons. First, despite evidence that shows that teachers have varied technology use habits (Rowand, 2000; Russell et al., 2004), policy makers and school leaders continue to perpetuate a broad and superficial definition of how teachers use technology in the professional learning and evaluation of teachers, focusing on whether a teacher can use digital tools rather than how he or she is using them (Bebell et al., 2004; Cuban, 2001, 2015a; Lawless & Pellegrino, 2007; Russell, Bebell, et al., 2003). Second, our current measures of teacher technology use do not include a wide array of individual and contextual factors that may influence how a teacher integrates technology in instruction (Lesgold, 2003; Wenglinsky, 2005). Third, issues of equity and access have complicated our understanding of how teachers use technology in diverse school environments. Warschauer (2000) noted:


In analyzing [the] integration of technology into instruction, Cuban (1993) proclaimed that “computer meets classroom: classroom wins” (p. 185) . . .  the computer “beats” the classroom, it doesn’t necessarily beat the system. [Technology in schools] can all leave intact or even reinforce patterns by which schools channel students into different social systems. (p. 18)


In other words, as more technology enters learning spaces with a diverse set of needs and challenges, issues of social justice influence how teachers envision technology integration (McLeod, Bathon, & Richardson, 2011; Natriello, 2001; Valadez & Duran, 2007). Consequently, new research inquiries emerged to investigate the individual and school-level barriers on why certain teachers use technology in order to address these prevailing challenges.


Ertmer (1999) described two types of barriers that influence why certain teachers use technology more than others, referred to as first-order barriers and second-order barriers. First-order barriers are defined as “obstacles that are extrinsic to teachers [such as] the types of resources (e.g., equipment, time, training, support) that are either missing or inadequately provided in teachers’ implementation environments” (Ertmer, 1999, p. 50). When first-order barriers exist, there are fewer opportunities for teachers to integrate technology in instruction (Cuban, Kirkpatrick, & Peck, 2001; Ertmer, 1999; Hew & Brush, 2007; Mumtaz, 2000). While policy talk tends to focus on first-order barriers, second-order barriers present a more difficult challenge, where action is “rooted in teachers’ underlying beliefs about teaching and learning and may not be immediately apparent to others or even to the teachers themselves” (Ertmer, 1999, p. 51). A significant body of research (Cho & Littenberg-Tobias, 2016; Ertmer, 2005; Ertmer & Ottenbreit-Leftwich, 2010; Ertmer, Ottenbreit-Leftwich, Sadik, Sendurur, & Sendurur, 2012; Hsu, 2016; Levin & Wadmany, 2006; Palak & Walls, 2009) has found that teacher beliefs, attitudes, and enacted values with technology and instruction are closely associated with why certain teachers choose to integrate technology in their classroom practice. Although we understand many of the barriers, there are still lingering questions about how this knowledge translates into practice.


There is still no clear picture of the relationship between first- and second-order barriers and how this relationship influences different types of teachers who use technology (Ertmer, 1999; Hew & Brush, 2007). Ertmer (1999) asked two critical questions about these barriers and their relationship with teachers:


Do teachers at higher levels of use encounter relatively fewer first- and second-order barriers? In what ways are barriers that are encountered by teachers at higher levels of technology use similar or dissimilar to those encountered by teachers at lower levels of use? (p. 52)


Ertmer (1999) opened up the possibility that teachers with similar technology usage patterns could have different experiences with certain first- and second-order barriers. Likewise, as these barriers exist within the sociocultural context of a school (Sherman & Howard, 2012; Windschitl & Sahl, 2002), comparing these subgroups of technology-using teachers could provide additional insight into how action, intent, and context are interrelated when teachers use technology in their classrooms. As such, another perspective emerges to further explain teacher technology use—whether or not different types of technology-using teachers exist.


The third perspective, which types of teachers use technology, forms a growing subset of research in the area of teacher technology use. The Rogers (1962) innovation adopter categories were arguably the first typology to describe how users adopt technology. Rogers (1962) theorized that users who adopt technology can be divided into five user segments: innovators, early adopters, early majority, late majority, and laggards. While the first three are characterized by quick adoption of new tools, the last two are more reluctant to learn new technologies and integrate them in their practice (Rogers, 1962). The Rogers (1962) adopter categories spawned several other typologies of technology-using teachers in the literature.


There are two small-scale qualitative studies that classify different types of teachers who use technology. After interviewing thirteen (n = 13) Irish science educators about their use of a computer program, Donnelly et al. (2011) theorized four subgroups of teachers who integrate technology into their instructional practices: the contented traditionalist, the selective adopter, the inadvertent user, and the creative adapter. While creative adapters and selective adopters are intrinsically empowered and motivated to integrate technology into their pedagogical practice and open to new types of teaching tools and methods, the contented traditionalist and inadvertent user tend to adopt technology only by force or pressure from their colleagues.


Likewise, Mama and Hennessy (2013) also conducted a multi-case study on the technology use habits and beliefs of 11 (n = 11) teachers in Cyprus and argued for four distinct subgroups of teachers. Teachers were generically labeled (Groups A–D) and identified by attitudinal characteristics. For example, Group A, the high user group, consists of teachers who are both integrational and diversifying, meaning their use of ICTs aligns with lesson objectives and their beliefs center on technology as a tool for differentiation. In contrast, Group D, the low-use group, consisted of a teacher described as more inimical and subversive (Mama & Hennessy, 2013).


Table 1. Summary of Existing Typologies of Technology-Using Teachers

Mama and Hennessy (2013); peer-reviewed article

Sample size: 11 elementary school teachers (N = 11)

Research analytical method: Qualitative; multi-case study

Summary of findings: Four (4) subgroups:

Group A (n = 2): Moderate to high usage, constructivist-oriented purpose, encourages autonomous learning, use related to lesson objectives

Group B (n = 3): Low to moderate usage, engagement-oriented purpose, encourages student motivation, use related to student technical knowledge

Group C (n = 5): High usage, administrative-oriented purpose, encourages research skills and information gathering, use related to improving teacher efficiency

Group D (n = 1): Moderate to low usage, necessity-oriented purpose, views technology as distracting to students, lack of use related to fear and threat to authority

Donnelly et al. (2011); peer-reviewed article

Sample size: 13 science teachers and other education stakeholders (N = 13)

Research analytical method: Qualitative; semi-structured interviews

Summary of findings: Four (4) subgroups:

Contented traditionalist: focus on assessment, fatalistic, low technological pedagogical content knowledge

Selective adopter: focus on assessment, teacher-centered but willing to change, high technological pedagogical content knowledge but only when preparing for assessment

Inadvertent user: use from external pressure, student-centered but unaware of classroom implications, lack of ownership, low technological pedagogical content knowledge

Creative adapter: strong student-centered approach, adaptable pedagogy, strong sense of purpose and empowerment, high and varied technological pedagogical content knowledge


Collectively, this research on typologies of technology-using teachers (Donnelly et al., 2011; Mama & Hennessy, 2013) highlighted three common themes. First, these studies underscored the need for more person-centered approaches in describing technology-using teachers. Arguably, the qualitative findings from these studies provide the first robust, person-centered descriptions of the technology-using teacher. Second, in both typologies, frequency of use (low, middle, and high use) appears to be an organizing characteristic in designating subgroups of technology-using teachers. Third, the findings from both of these studies describe at least four mutually exclusive subgroups of technology-using teacher types.


Although these studies provide some of the first descriptions of which types of teachers use technology, there are still prevailing concerns pertaining to the validity, generalizability, and sociocultural implications of the findings. As mentioned earlier, there is a clear consensus in the literature that teacher technology use should be measured in a more multidimensional fashion (Bebell et al., 2004). However, each of the two aforementioned studies investigated teacher technology use from the perspective of only one of the seven significant correlates of teacher technology use (Bebell et al., 2004). In addition, although large-scale data on teacher technology use are scarce, the small sample size, the absence of statistically significant groupings, and the lack of actual membership proportions per subgroup raise additional questions about the generalizability of the findings to all technology-using teachers. Finally, research suggests that there is a “digital divide” between high- and low-income schools in access to digital tools, content, and teacher resources for technology-focused curriculum and instruction (Natriello, 2001; Valadez & Duran, 2007; Wenglinsky, 1998). Mama and Hennessy (2013) and Donnelly et al. (2011) failed to describe differences in access because the researchers examined teacher technology use within one school setting. Understanding how teacher types may differ across school settings should be the next frontier for teacher technology user typology studies. Thus, the motivation of our study is to address these issues described above by using latent class analysis (LCA), a mixture modeling approach that statistically tests the extent to which there are subgroups of similar individuals within a nationally representative dataset of teachers across schools in the United States.


FRAMEWORK OF THE STUDY


Identifying subgroups of teachers in schools with national data has become an emerging trend in educational research. Typology subgroup studies typically use methods such as cluster analysis (Antonenko, Toy, & Niederhauser, 2012) to develop profiles of students and teachers. Although cluster analysis produces meaningful groupings, the method does not embed a hypothesis test in the analysis, leaving measures of best fit up to the interpretation of the researcher (Vermunt & Magidson, 2002). This study utilizes latent class analysis (LCA) to statistically determine the extent to which there are homogeneous groups of individuals within a heterogeneous dataset (Asparouhov & Muthén, 2008; Henry & Muthén, 2010; Jung & Wickrama, 2008).
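To make the contrast with cluster analysis concrete, the following minimal Python sketch (not the authors’ estimation procedure) fits latent class models with one to three classes to simulated binary indicators via the EM algorithm and compares them with BIC, the kind of embedded fit statistic that allows competing class solutions to be tested rather than left to researcher interpretation. All parameter values and the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lca(X, n_classes, n_iter=200, seed=0):
    """Fit a latent class model to binary indicators X (n x J) via EM.

    Returns class weights pi, item-endorsement probabilities theta (C x J),
    and the maximized log-likelihood.
    """
    init = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = init.uniform(0.25, 0.75, size=(n_classes, J))
    for _ in range(n_iter):
        # E-step: log P(x_i, class = c) for each respondent and class
        log_p = (np.log(pi)[None, :]
                 + X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T)
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        resp = np.exp(log_p - log_norm)        # posterior class memberships
        # M-step: update mixing weights and item probabilities
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta, log_norm.sum()

def bic(loglik, n_classes, J, n):
    n_params = (n_classes - 1) + n_classes * J  # weights + item probabilities
    return -2 * loglik + n_params * np.log(n)

# Simulated data: two latent classes with distinct endorsement profiles
true_theta = np.array([[0.9, 0.9, 0.1, 0.1], [0.1, 0.1, 0.9, 0.9]])
z = rng.integers(0, 2, size=1000)
X = (rng.uniform(size=(1000, 4)) < true_theta[z]).astype(float)

# Compare 1- through 3-class solutions; the lowest BIC indicates best fit
for C in (1, 2, 3):
    _, _, ll = fit_lca(X, C)
    print(C, round(bic(ll, C, X.shape[1], X.shape[0]), 1))
```

On these simulated data, the two-class solution should yield the lowest BIC, mirroring the logic of enumerating and testing class solutions in LCA.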


There is a wealth of other education research that uses LCA to explore typologies of different teachers, leaders, and schools. For example, Boyce and Bowers (2016) identified two significantly different types of principals who exit their schools. The first group, Satisfied principals, reported higher satisfaction with their job performance, salary, attitude, and self-perception of their leadership than the second group, the Disaffected. In another example, Brasseur-Hock, Hock, Biancarosa, Kieffer, and Deshler (2011) identified four significantly different subgroups of students based on their reading comprehension levels. Finally, in validating the Comprehensive Assessment of Leadership for Learning (CALL) survey in a two-level LCA, Bowers, Blitz, Modeste, Salisbury, and Halverson (2017) argued for three significantly different groups of teachers in three types of schools that enact leadership for learning behaviors, using the ordinal grouping of low, moderate, and high leadership for learning teachers and schools.


Yet, despite interest in this type of analysis in education research and the wealth of literature on teacher technology use in schools, little is known about the extent to which there are different groups of teachers who share similar technology usage habits. While the majority of research in the domain of educational technology focuses on how and why teachers use technology to describe and to generalize teacher technology integration efforts, the goal of this study is to explore whether there are underlying subgroups of similar teacher technology users within nationally generalizable data, while also exploring various teacher- and school-level covariates that could predict membership in the subgroups. Thus, the research questions for this study are:


1) Using a nationally representative dataset, to what extent are there different types of teachers who use technology?

2) To what extent are other contextual factors, such as urbanicity, percentage of free/reduced lunch, total number of classroom computers, school type, years of teaching experience, and enrollment, associated with membership in these subgroups of technology-using teachers?


METHODS

DATA


This study is a secondary analysis of the public use data from the Fast Response Survey System – Teachers’ Use of Educational Technology in U.S. Public Schools, 2009 (FRSS 95). This survey was originally collected in 2009 by the National Center for Education Statistics and had a representative sample size of 3,159 teachers from public schools across the United States. Weights were provided through a complex probabilistic weighting strategy so that findings can be generalized to all 2.39 million public school teachers in the United States in 2009 (National Center for Education Statistics, 2009). The data on teachers’ use of educational technology include information on the use of computers and Internet access, teacher responses on students’ use of educational technology, teacher professional development, and availability of technology resources (Gray, Thomas, Lewis, & Tice, 2010).


The FRSS 95 provides a unique opportunity to explore teacher technology types with national data. As such, we selected these data for four reasons. First, although the dataset contains variables directly related to educational technology, research that uses the FRSS 95 dataset to describe teachers’ use of educational technology is virtually nonexistent. Second, it directly relates to factors that influence teacher technology use in schools (Gray et al., 2010), a clear application to the research questions at hand. Third, with the statistical weights applied, FRSS 95 is nationally generalizable (National Center for Education Statistics, 2009), and findings from this study could contribute to current research, practice, and policy in the field of educational technology. Fourth, the FRSS 95, which surveyed teachers in 2009, provides the most recent nationwide data on teacher technology use available at the time of analysis.


The sample for this study relies on a subset of the full FRSS 95 dataset. Given the related literature and research questions for the study, we selected teachers based on their frequency of technology use. Teachers who responded “rarely,” “sometimes,” or “often” (i.e., some degree of technology usage) to question Q2A, “How frequently do you or your students use computers during instructional time in your classroom?”, were used in the final analysis. All other responses (“never” or “not applicable”) were excluded. Of 3,159 teachers in the full sample, we examined a subset of n = 2,764 teachers who indicated that they use technology in their classrooms.


We also applied the final sampling weights (TFWT) from FRSS 95 to the data so that the results of the LCA could be generalized to a national population of technology-using teachers in the United States in 2009.
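As an illustration only, the subsetting-and-weighting steps described above might look like the following in pandas. The column names Q2A and TFWT come from the survey documentation cited in the text, but the response labels and weight values below are assumptions, not actual FRSS 95 records.

```python
import pandas as pd

# Hypothetical teacher records; Q2A = frequency of classroom computer use,
# TFWT = final sampling weight (values here are invented for illustration)
frss = pd.DataFrame({
    "Q2A":  ["often", "sometimes", "rarely", "never", "often", "not applicable"],
    "TFWT": [850.0, 1200.0, 640.0, 900.0, 710.0, 500.0],
})

# Keep only teachers reporting some degree of classroom technology use
users = frss[frss["Q2A"].isin(["rarely", "sometimes", "often"])].copy()

# Weighted share of each response among technology-using teachers,
# so estimates generalize to the weighted population rather than the raw sample
weighted_share = users.groupby("Q2A")["TFWT"].sum() / users["TFWT"].sum()
print(weighted_share.round(3))
```

The design choice mirrored here is that analyses are run on the weighted subsample, so reported proportions reflect the national population of technology-using teachers rather than the unweighted respondents.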


VARIABLES INCLUDED IN THE ANALYSIS


We drew on the literature and theory on teacher technology use to guide our inclusion of variables for our analysis. Our indicator variables focused on the measures of the teacher technology use construct as outlined in the Bebell et al. (2004) study. Our covariates were selected based on teacher and school factors that previous literature identified as being associated with teacher technology use.


Teacher Use of Technology for Instruction


The 2009 FRSS 95 included 15 questions related to how teachers use technology in their preparation for direct instruction. We decided not to include all 15 questions in the study, omitting questions that asked about specialized software (e.g., photo editing software), word processing programs, and Internet browsers, or resources that are typically censored in public schools (e.g., social media). These questions were omitted based on prior research that suggests that certain technologies have been institutionalized by teachers as they prepare for instruction and do not add a significant contribution to understanding how teachers use technology in schools (Adams, 2006; Kuiper & de Pater-Sneep, 2014; Russell, Goldberg, & O'Connor, 2003). Furthermore, maintaining a parsimonious model closely related to the relevant literature is helpful to maintaining the appropriate level of statistical power (Boyce & Bowers, 2016; Dziak, Lanza, & Tan, 2014). Thus, the following uses were included in the statistical model: making presentations; administering computer-based tests; and using drill, practice, and tutorial software programs. Teachers were asked to rate their frequency of use on a 4-point scale. For this study, responses were dichotomized into high to moderate (1 = “often/sometimes”) and low to none (0 = “rarely/never”) usage of technology for direct instruction. Specifics on the survey questions used, response coding schema, and the descriptive statistics for these and other variables can be found in Appendix 1-A.
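A minimal sketch of the dichotomization described above, using hypothetical responses (the actual FRSS 95 value labels and storage format may differ):

```python
import pandas as pd

# Hypothetical 4-point frequency responses for one technology-use item
responses = pd.Series(["often", "sometimes", "rarely", "never", "sometimes"])

# Collapse into a binary indicator: high/moderate use = 1, low/no use = 0
recode = {"often": 1, "sometimes": 1, "rarely": 0, "never": 0}
indicator = responses.map(recode)
print(indicator.tolist())  # [1, 1, 0, 0, 1]
```

The same recoding pattern applies to the other 4-point items dichotomized for the model.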


Preparation to Use Technology


The 2009 FRSS 95 included six questions that asked teachers about their preparation to use educational technology in their school, three of which were used for this study. Activities, such as professional learning activities (Brinkerhoff, 2006), training from technology staff (Ausband, 2006), and independent learning (Yan & Piper, 2003), were included in the model based on relevant literature. Using a 4-point scale, teachers were asked about the extent to which these activities had prepared them to use technology. Responses were dichotomized into not at all (0 = “not at all”) and to some extent (1 = “minor/moderate/major extent”).


Disposition Toward Professional Learning


The 2009 FRSS 95 included one question that asked teachers if their professional learning in technology met their needs and goals. We included this variable in response to the Vannatta and Fordham (2004) findings that a teacher’s willingness to change and their effort to participate in professional learning predicts classroom technology use. On a 4-point scale, teachers indicated how negatively or positively they viewed whether technology professional development met their goals. Responses were dichotomized into either positive (1 = “somewhat agree/strongly agree”) or negative (0 = “somewhat disagree/strongly disagree”).


Use of Technology for Productivity


The 2009 FRSS 95 included 12 questions that asked teachers how often they use technology for certain productivity tasks. Questions about email to students and parents, as well as student record management, were included based on the Bebell et al. (2004) study that listed these two specific scales (i.e., email and grading) to be associated with the teacher technology use construct. Teachers were asked to rate their frequency of usage on a 4-point scale. For the analysis, responses were dichotomized into high to moderate (1 = “often/sometimes”) and low to none (0 = “rarely/never”).


Teacher-directed Student Use of Technology for Discrete and Hands-on Skills


The 2009 FRSS 95 included 13 questions that asked about teacher-directed student use of technology, seven of which were used in the analysis. Many questions were omitted because the majority of respondents answered “not applicable.” The questions for this indicator were divided into (1) teacher-directed student use of technology to learn discrete skills and (2) teacher-directed student use of technology to perform hands-on tasks, based on extensive research on how certain classroom activities benefit from the integration of technology and lead to increased student transfer and understanding of content (Bransford, Brown, & Cocking, 2000). Activities involving discrete skills include preparing written text, learning and practicing basic skills, conducting research, and solving problems with data and calculations. Teacher-directed student uses of technology for hands-on skills include developing multimedia (Neo & Neo, 2009), making art and other creative media (e.g., music, movies, and webcasts) (Greenhow, Robelia, & Hughes, 2009), and conducting experiments (Newman et al., 2012). Similar to the analysis of using technology for productivity, responses were dichotomized into high to moderate (1 = “often/sometimes”) and low to none (0 = “rarely/never”).


Across the variables, missingness ranged from 0% to 25%. Following recommendations for handling missing data in samples of this type (Strayhorn, 2009), we addressed missing data using Full Information Maximum Likelihood (FIML) estimation, as recommended in the literature (Asparouhov & Muthén, 2013; Enders, 2010; Vermunt & Magidson, 2007).
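Unlike listwise deletion, FIML estimates the model from each case's observed values, so incomplete cases still contribute information. A toy sketch of the contrast (the data here are illustrative, not from FRSS 95):

```python
# Toy illustration (not FRSS 95 data) of why listwise deletion discards
# information that FIML retains: FIML builds each case's likelihood
# contribution from whatever indicators that case actually observed.
rows = [
    {"a": 1,    "b": 0,    "c": 1},
    {"a": 1,    "b": None, "c": 0},     # missing on b
    {"a": None, "b": 1,    "c": 1},     # missing on a
    {"a": 0,    "b": 0,    "c": None},  # missing on c
]

complete_cases = [r for r in rows if None not in r.values()]
observed_values = sum(v is not None for r in rows for v in r.values())

print(len(complete_cases))   # listwise deletion keeps only 1 of 4 cases
print(observed_values)       # FIML-style estimation still uses all 9 observed values
```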


Covariates


Hew and Brush (2007) have identified 123 extrinsic and intrinsic teacher- and school-related factors that influence how teachers integrate technology into the classroom. Because of the rapid data collection strategy employed by the Fast Response Survey System program (National Center for Education Statistics, 2009), most of these factors are not included in the survey questionnaire. Still, we included some teacher-level demographic factors, such as years of teaching experience, as well as school-level demographic factors, such as urbanicity, percentage of students on free and reduced lunch, school type, enrollment, and total number of computers, in the model. There were no missing data on the covariates. Descriptive statistics, variable recodes, and the survey questions used for the covariates can be found in Appendix 1-B.


ANALYTIC MODEL


We used latent class analysis (LCA) for this study to determine if there were significantly different types of teachers who use technology in schools. In general, LCA is a subset of mixture modeling, which is useful in determining the extent to which there is one or more than one subgroup of responders within a dataset (Jung & Wickrama, 2008; Masyn, 2013; Muthén, 2002, 2004; Muthén & Asparouhov, 2002; Samuelsen & Raczynski, 2013; Vermunt & Magidson, 2004). LCA was selected as the analytic technique because LCA evaluates how groups of individuals differ or relate to one another, or simply put, the method is person-centric (Boyce & Bowers, 2016; Jung & Wickrama, 2008). In contrast to previous studies on teachers’ technology use that focus on how different technology use indicators relate to one another (Bebell et al., 2004; Ertmer, Ottenbreit-Leftwich, & York, 2006), the research questions here are centered on the teachers, and as such, LCA was the most suitable analytic model. As standard in these types of analyses, Figure 1 is the structural equation model that we tested for the study.


Figure 1. Structural and Conceptual Equation Model for 1-Level Latent Class Analysis (LCA) of Teachers Who Use Technology

[39_22277.htm_g/00002.jpg]



Labeled as “Latent Classes C,” the different subgroups of technology-using teachers are determined based on the six indicator domains described above: use of technology for direct instruction, preparation to use technology, disposition toward professional learning, use of technology for productivity, teacher-directed student use of technology for discrete skills, and teacher-directed student use of technology for hands-on tasks. We then added six covariates (identified on the left side of the figure as urbanicity, percentage of free/reduced lunch, total number of classroom computers, school type, years of teaching experience, and enrollment) as control variables.


All statistical procedures were performed in Mplus, version 7.4 (Muthén & Muthén, 2012). The Mplus code used for the analysis is included in Appendix 1-C. Following the latent class analysis literature, this study uses a 3-step LCA structural equation modeling framework (Asparouhov & Muthén, 2013; Kim, Vermunt, Bakk, Jaki, & Horn, in press; Lanza, Tan, & Bray, 2013; Vermunt, 2010). First, as suggested in the literature (Jung & Wickrama, 2008; Nylund, Asparouhov, & Muthén, 2007; Nylund-Gibson, Grimm, Quirk, & Furlong, 2014), we performed an initial LCA using the indicator variables to determine the number of statistically different types of technology-using teachers through hypothesis testing. This initial step includes only the indicator variables to ensure that no other variable would confound how the groups are identified. Each respondent is then assigned to the most likely class.
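The modal assignment at the end of this first step can be sketched as follows (the posterior probabilities here are hypothetical):

```python
# Sketch of modal class assignment in step 1 of the 3-step approach:
# each respondent is assigned to the latent class with the highest
# estimated posterior probability.

def modal_class(posteriors: list[float]) -> int:
    """Return the 1-indexed class with the largest posterior probability."""
    return max(range(len(posteriors)), key=lambda k: posteriors[k]) + 1

# Hypothetical 4-class posteriors for one teacher
teacher_posteriors = [0.05, 0.70, 0.15, 0.10]
print(modal_class(teacher_posteriors))  # -> 2
```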


In this step, LCA uses an iterative process, estimating models with different numbers of classes in order to determine model fit. However, no single method in the literature is considered the best way to identify the proper number of classes in the model (Bakk & Vermunt, 2016; Dziak et al., 2014; Jung & Wickrama, 2008; Lo, Mendell, & Rubin, 2001; Muthén & Asparouhov, 2002; Tofighi & Enders, 2008; Vermunt & Magidson, 2004). There are two conventions described in the literature. In the first method, some research suggests using the Bayesian information criterion (BIC), a statistic that compares the BIC of the current model, k, with the BIC from the k-1 class model (Magidson & Vermunt, 2004; Muthén & Asparouhov, 2002; Nylund et al., 2007). In other words, when performing the analysis, models with a specific number of classes are estimated one at a time, progressively increasing in number of classes until the specified model has a larger BIC value than the previous one (Jung & Wickrama, 2008; Nylund et al., 2007). When this occurs, the previously selected model is the best fit. In contrast, the Lo-Mendell-Rubin (LMR) adjusted likelihood ratio test can also be used to determine model fit, using a hypothesis test of whether the current model, k, is a statistically significant improvement in fit over the previously estimated k-1 class model (Lo, 2005; Lo et al., 2001). Again, models with a varying number of classes are specified one at a time until the p-value of the test is not significant. When this occurs, the previously selected model is the best fit. We considered the BIC and LMR statistics, as well as an a priori number of different subgroups (n = 4) based on previous literature, when we selected the proper number of groups in the data.


Next, using the auxiliary command (R3STEP) with the six covariates (Kim et al., in press), we performed another LCA with a post-hoc multinomial logistic regression to estimate the odds of an individual belonging to a group based on the covariates. The last step of the 3-step sequence, a chi-square testing procedure to produce distal outcomes, was omitted due to a lack of appropriate follow-up data to test in the FRSS dataset.


RESULTS


We now describe four different types of technology-using teachers, along with the covariates that predict membership in these groups. To find the best model fit, we performed the LCA on a two-class model, running subsequent models that increased in the total number of specified classes until both the BIC and LMR statistics indicated the best model fit (Jung & Wickrama, 2008; Masyn, 2013; Muthén, 2002). A seven-class model was the preliminary result of the initial analysis. Based on the literature on using the LMR test (Lo et al., 2001), the five-class model had the first non-significant p-value (p = 0.732), demonstrating that the previous, four-class model was the best fit for the data using this statistic. The four-class LCA model fit the data well, with fit statistics of AIC = 42204.407, BIC = 42625.041, -Log likelihood = 21031.203, LMR p < 0.001, and entropy = 0.674. Table 2 shows the classification probabilities for latent class membership, that is, the probability that an individual assigned to a particular group when fitting the model actually belongs to that group. Examining the probabilities in the diagonal, as well as in the off-diagonal cells, shows that the model fit the data well for all four groups.


Table 2. Classification Probabilities for the Most Likely Latent Class Membership (Column) by Latent Class (Row)

Latent Class            Class 1 (Evaders)   Class 2 (Dexterous)   Class 3 (Assessors)   Class 4 (Presenters)
Class 1 (Evaders)       0.831               0.001                 0.113                 0.056
Class 2 (Dexterous)     0.001               0.879                 0.044                 0.077
Class 3 (Assessors)     0.088               0.052                 0.794                 0.070
Class 4 (Presenters)    0.050               0.082                 0.076                 0.789
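Reading Table 2: each diagonal entry gives the probability that a teacher generated by a latent class is assigned to that same class, so a well-separated model has diagonal entries that dominate each row. A quick check using the values reported in the table:

```python
# Classification probabilities from Table 2 (rows: latent class,
# columns: most likely latent class membership).
table2 = [
    [0.831, 0.001, 0.113, 0.056],  # Class 1 (Evaders)
    [0.001, 0.879, 0.044, 0.077],  # Class 2 (Dexterous)
    [0.088, 0.052, 0.794, 0.070],  # Class 3 (Assessors)
    [0.050, 0.082, 0.076, 0.789],  # Class 4 (Presenters)
]

for i, row in enumerate(table2):
    assert row[i] == max(row)             # diagonal dominates its row
    assert abs(sum(row) - 1.0) < 0.01     # each row is (approximately) a distribution

print(min(table2[i][i] for i in range(4)))  # worst-case diagonal: 0.789
```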


In addition, we also considered the BIC to determine the best model fit (Jung & Wickrama, 2008; Muthén, 2002). With this analysis, the first positive change in the BIC fit statistic occurred between the six-class model (BIC = 42513.032) and the seven-class model (BIC = 42521.286), indicating that the six-class model is the best model fit according to the BIC. However, we chose the more conservative four-class model as the best fit with a significant LMR, as the LMR fit statistic has been identified in the literature as the more conservative of the measures, erring on the side of a more parsimonious model fit to avoid issues of model over-interpretation (Tofighi & Enders, 2008). As such, although up to six classes could fit the data, we argue for and interpret the four-class model. Table 3 presents the estimated model fit statistics for each of the models, from two through seven classes.


Table 3. LCA Results and Fit Statistics for Teachers Who Use Technology in Their Classrooms

Model           AIC         BIC         -Log Likelihood   LMR Test for k-1 classes   p        Entropy
Two classes     43232.732   43442.087   21582.366         3177.214                   <0.001   0.729
Three classes   42676.497   42990.492   21285.249         590.098                    0.003    0.707
Four classes    42204.407   42625.041   21031.203         504.553                    <0.001   0.674
Five classes    42006.373   42533.648   20914.187         232.404                    0.732    0.681
Six classes     41879.118   42513.032   20832.559         162.119                    0.798    0.699
Seven classes   41780.732   42521.286   20765.366         133.450                    0.763    0.710

Note: AIC = Akaike information criteria; BIC = Bayesian information criteria; LMR = Lo-Mendell-Rubin adjusted likelihood ratio test

Note: n = 2,764
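The BIC stopping rule described in the analytic model section can be applied directly to the BIC column of Table 3; a short sketch:

```python
# BIC values from Table 3, for the two- through seven-class models.
bic = {2: 43442.087, 3: 42990.492, 4: 42625.041,
       5: 42533.648, 6: 42513.032, 7: 42521.286}

def best_k_by_bic(bic_by_k: dict[int, float]) -> int:
    """Return the class count just before the first increase in BIC."""
    ks = sorted(bic_by_k)
    for prev, curr in zip(ks, ks[1:]):
        if bic_by_k[curr] > bic_by_k[prev]:
            return prev          # BIC rose, so the previous model wins
    return ks[-1]                # BIC never rose: keep the largest model

print(best_k_by_bic(bic))  # -> 6, matching the six-class model favored by the BIC
```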


We identified four significantly different groups of teachers who use technology in their classrooms. We named these four subgroups Dexterous, Presenters, Assessors, and Evaders. For purposes of comparison, Figure 2 details an indicator plot for the proportions of the indicator variables per subgroup. The LCA model identified two groups in the typical high use–low use hierarchy. The high users, or Dexterous teachers, made up approximately 24.4% of the sampled teachers. This group had a high proportion of flexible teachers who indicated using technology in a variety of functions, including for themselves to prepare for classroom activities and for directing students to use technology with hands-on and discrete tasks, such as preparing written texts, conducting research, developing multimedia presentations, and conducting experiments. Dexterous teachers also had the highest satisfaction with technology professional learning, with 92.1% of these teachers indicating that the professional learning experiences with technology met their goals.


Figure 2. Statistical indicator plot of Latent Class Analysis results showing four subgroups of technology-using teachers. The Dexterous teachers (24.4%) are the highest and most flexible users of classroom technology, while the Evaders (22.2%) have the lowest usage across the indicators. The Assessors (28.4%) and Presenters (24.8%) are the two largest groups and use technology for specific pedagogical techniques.


[39_22277.htm_g/00004.jpg]


In contrast, 22.2% of teachers were in the Evaders group. These teachers indicated that they neither directed students to use technology for discrete or hands-on tasks nor used technology to administer tests (17.9%) or drill and practice software (19.5%) with students. In fact, Evaders also indicated the lowest technology use for productivity tasks, such as student record management (64.5%), email with parents (41.8%), and email with students (8.4%). Although teachers in all four groups had high levels of engagement in learning about technology through professional development, only 69.2% of Evaders, the lowest proportion of the four groups, reported that these experiences met their professional goals.


Interestingly, the LCA model also identified the two highest proportions of teachers in groups that use technology for specific pedagogies and teaching styles. Comprising approximately 24.8% of the sample, the majority of teachers in the Presenters group reported using technology for their own classroom presentations (82.6%) and directing their students to use technology for presentations (70.5%). In looking at student use within this subgroup, the Presenters group also has the second-highest proportion of teachers (after the Dexterous group) who have students use technology to prepare written texts (92.0% for Dexterous, 82.0% for Presenters) and to conduct research (95.0% for Dexterous, 89.4% for Presenters). Yet, at the second-lowest proportion after the Evaders group, Presenters also indicated that they rarely use technology to prepare drill and practice instruction for students (27.4%), to lead students in solving problems and analyzing data (26.5%), to create visual or digital media (36.3%), or to conduct experiments (14.4%).


The largest proportion of technology-using teachers is the Assessors, who make up 28.4% of the sample. Individuals in this group indicated that they direct their students to use technology when practicing basic skills (94.0%) and that they prepare for instruction with drill and practice software (77.3%). The Assessors group shares with the Dexterous group the inclination to use technology to practice basic skills; however, with the second-lowest usage pattern after the Evaders, Assessors indicated that they infrequently have their students create presentations (8.6%) or use creative media (9.9%), and less than half of respondents have students produce written texts (47.8%).

We present the covariates that were examined to estimate the odds of a teacher belonging to a particular group in Table 4. Dexterous teachers were used as the reference category to assist with interpretation, and relative effect sizes are reported based on significant differences. Results show that when a school has more than 50% of students eligible for free and reduced lunch, teachers are 1.36 times more likely to be in the Assessors group than the Dexterous group (p = 0.056) and more than two times less likely to be a Presenter than Dexterous (p < 0.001). In comparing small (fewer than 300 students) to medium (300–999 students) schools, teachers in small schools are 1.48 times more likely to be an Evader than a Dexterous technology-using teacher (p = 0.086). Likewise, compared to secondary teachers, teachers in elementary schools are 1.65 times more likely to be in the Evaders group than the Dexterous group (p = 0.006) and more than three times more likely to be an Assessor than Dexterous (p < 0.001). Elementary school teachers are also 2.22 times less likely to be a Presenter than a Dexterous teacher who uses technology (p < 0.001). Years of teaching experience also predicted the odds of teachers belonging to a technology user group: for every one-unit increase in teaching experience, technology-using teachers are 1.02 times less likely to be an Evader than Dexterous (p = 0.027) and 1.28 times less likely to be a Presenter than Dexterous (p = 0.002). Finally, in looking at first-order barriers for technology (Ertmer, 1999), for every one-unit increase in the total number of classroom computers, teachers are 1.29 times less likely to be in the Evaders subgroup (p < 0.001), 1.07 times less likely to be in the Assessors subgroup (p < 0.001), and 1.05 times less likely to be in the Presenters subgroup than to be Dexterous (p < 0.001).
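The "X times less likely" phrasings above are inverse odds ratios (1/OR, for odds ratios below 1); a quick check against the values reported in Table 4:

```python
# Multinomial logit coefficients exponentiate to odds ratios; an OR below 1
# is reported in the text as "1/OR times less likely." Values from Table 4.
or_presenter_frl = 0.38   # >50% free/reduced lunch, Presenters vs. Dexterous
or_presenter_exp = 0.78   # years of experience, Presenters vs. Dexterous
or_evader_exp = 0.98      # years of experience, Evaders vs. Dexterous

print(round(1 / or_presenter_frl, 2))  # -> 2.63: "more than two times less likely"
print(round(1 / or_presenter_exp, 2))  # -> 1.28: "1.28 times less likely"
print(round(1 / or_evader_exp, 2))     # -> 1.02: "1.02 times less likely"
```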


Table 4. Means and Odds Ratios for Covariates with Dexterous Teachers Who Use Technology as the Reference Group

                               Dexterous (24.4%)   Evaders (22.2%)             Assessors (28.4%)           Presenters (24.8%)
Variable                       Mean                Mean    OR        p         Mean    OR        p         Mean    OR        p
School urbanicity:
  City                         0.20                0.23              0.467     0.23              0.435     0.20              0.421
  Town                         0.15                0.13              0.609     0.14              0.745     0.15              0.280
  Rural                        0.31                0.30              0.970     0.30              0.560     0.30              0.836
>50% free and reduced lunch    0.45                0.47              0.610     0.53    1.36~     0.056     0.29    0.38***   <0.001
School type:
  Elementary                   0.54                0.65    1.65**    0.006     0.76    3.27***   <0.001    0.40    0.45***   <0.001
Enrollment:
  Small (<300)                 0.11                0.15    1.48~     0.086     0.14              0.522     0.10              0.887
  Large (>1000)                0.29                0.23              0.952     0.17              0.388     0.35              0.835
Years of teaching experience   14.36               13.18   0.98*     0.027     14.56             0.859     13.13   0.78**    0.002
Number of total computers      6.52                2.81    0.77***   <0.001    3.96    0.93***   <0.001    4.55    0.95***   <0.001

Note: ~p < .10; *p ≤ .05; **p ≤ .01; ***p ≤ .001


DISCUSSION


This study informs the literature on teacher technology use by using a nationally generalizable dataset to examine technology-using teachers within the current multidimensional measures of teacher technology use. Using latent class analysis (LCA) to explore six domains of teacher technology use variables, we identified four significantly different groups of technology-using teachers: Dexterous, Presenters, Assessors, and Evaders. We derived the names for our four-subgroup typology of technology-using teachers based on our narrative interpretation of the survey response data. Our hope is that our labels provide a clear and concise portrayal of how teachers describe their technology usage habits in schools in 2009. Here, we briefly describe our typology again with a few concrete example characteristics of the subgroups.


Dexterous teachers are flexible and wide-ranging users who integrate technology for different modes and purposes. Affectionately known as the “innovators” and the “early adopters” (Rogers, 1962), Dexterous teachers report that they are comfortable with any type of technology and ready to learn more through professional development opportunities. In sharp contrast, Evaders are resistant to using technology in every way, including sending emails to students and taking daily attendance. Presenters are teachers who prefer using technology to aid with lectures and interactive whiteboard activities, while guiding students to use technology to produce written texts and presentations. Finally, Assessors are most comfortable using technology as drill and practice software, directing students to use this technology to practice basic skills in content areas such as mathematics or literacy.


The findings from the study add to the teacher technology use literature in three ways. First, this study is the first to use national data to examine the assumption that there are different types of technology-using teachers. Second, our findings break out of the traditional ordinal scale of low, medium, and high frequency of technology use as presented in past research findings (Donnelly et al., 2011; Mama & Hennessy, 2013). Rather, the results describe the differences between these teachers in their pedagogical uses of technology, their beliefs and dispositions toward technology, their personal use of technology for productivity, and how they direct students to use technology in various tasks, the first time this has been done within the same statistical model. Third, with the weights applied, conducting an LCA on nationally representative data allows the findings to be generalizable to the entire population of more than 2 million public school teachers in 2009.


Our findings are aligned to the qualitative work of Mama and Hennessy (2013). In their study, they uncovered four different types of technology-using teachers, while also finding that teacher beliefs and attitudes are important indicators for the different types of technology-using teachers. However, our present study differs in two key ways. First, the scope of Mama and Hennessy (2013) focused only on using attitudes and beliefs to create their typology of teachers who use information and communication technologies (ICTs). Our use of the full LCA model, including not only teacher dispositions toward technology but also teacher-centered use of technology, teacher directed student uses of technology, and preparation to use technology, provided the opportunity to find four statistically significant groups of technology-using teachers. This allowed us to understand differences across all four groups in more dynamic detail than just their attitudes and beliefs.


The second difference between the present study and the Mama and Hennessy (2013) study is our understanding of the proportion of teachers belonging to each subgroup. Our present study shows a nationally representative ratio of technology-using teachers: Assessors (28.4%), Dexterous (24.4%), Presenters (24.8%), and Evaders (22.2%). While the Mama and Hennessy (2013) study provides rich descriptions and conveys the essence of the lived experience of eleven technology-using teachers within one school, the ratios of teachers in each subgroup are not generalizable on a larger scale. As such, our study extends this work on technology-using teachers to the entire population of U.S. public school teachers, identifying a concrete ratio of different types of teacher users. This again raises important considerations in using a larger sample size (n = 2,764) from a national-level dataset.


Our findings are also congruent with the four-user typology in the Donnelly et al. (2011) study. In their work, the authors describe four different subgroups of teachers divided by their assessment practices. A significant body of research (Ertmer, 2005; Ertmer & Ottenbreit-Leftwich, 2010; Ravitz, Becker, & Wong, 2000; Zhao, Pugh, Sheldon, & Byers, 2002) points out that effective technology-using teachers tend to have a more student-centered approach toward learning, as opposed to an assessment-driven, teacher-centered approach. Like the findings in Donnelly et al. (2011), our findings show that the largest variance among the four significantly different subgroups of teachers lies in how they use technology for themselves and for their students (see Figure 2). While the Dexterous and Presenter teacher types reported higher usage of student-centered approaches toward technology (i.e., making presentations, conducting research, developing multimedia, creating art and webcasts), the Evader and Assessor teacher types tended to use more teacher-centered approaches (i.e., administering tests, drill and practice programs, solving problems). While the Donnelly et al. (2011) study presented rich descriptions of the technology types, again, the present study tests this hypothesis with a larger sample of teachers and provides a nationally generalizable proportion of teachers who belong to each of the subgroups with a particular propensity for certain pedagogical approaches with technology.


This study also sheds light on the critical issue of examining teacher technology use within a social justice framework. As described earlier, the results of our study show that school socioeconomic status (see Table 4) significantly predicted a teacher’s membership in two of the subgroups in national data. We found that technology-using teachers in schools with more than half of the students on free or reduced lunch were 1.36 times more likely to be an Assessor than a Dexterous teacher. Likewise, in these same schools, teachers were less likely to be a Presenter than a Dexterous teacher. In light of these seemingly contradictory findings, we revisit this notion of a “new digital divide” that perpetuates gross inequities “in [the] differential ability” (Warschauer & Matuchniak, 2010, p. 213) to effectively use technology in teaching, learning, and leading in certain types of schools (Valadez & Duran, 2007; Warschauer, 2003; Warschauer et al., 2004). While it is outside of the scope of this study to explain why teachers in low-income schools have higher odds of belonging to these two groups, our findings push the concerted effort nationwide to close the “new digital divide” through teacher professional development that focuses on more hands-on, project-based applications of technology that encourage critical thinking and deeper understanding of content (Ertmer et al., 2012; Mostmans, Vleugels, & Bannier, 2012; Ravitz et al., 2000; Vannatta & Beyerbach, 2000). Through emphasizing more student-centered approaches, how teachers use new technologies could align with high-impact instructional best practices that use social justice pedagogies to affirm, validate, and celebrate all students’ personal identities and life experiences (O'Hara, Pitta, Pritchard, & Webb, 2016). 
In all, we encourage the development of more robust, nationally representative survey instruments on teacher and school leader use of educational technology in schools as research looks to use nationally representative, quantitative data to address prevailing questions about equity, teaching, technology, and school change.


LIMITATIONS


While we argue that the results of our study are significant, we recognize that our study is limited in five key ways. First, the data on teachers’ use of educational technology were collected in early 2009. Given that how teachers use digital tools is constantly shifting and evolving, we recognize that data collected in one given year might not fully represent how teachers are using technology in the classroom at any time before or after 2009. However, we used the FRSS 95 dataset because it was the most recent, nationally generalizable survey available from the National Center for Education Statistics that provides information on teacher computer use, number of technology resources, and teachers’ perspectives on technology-based professional learning. We encourage the collection of additional nationally representative data in order to capture more current trends in school technology implementation efforts.


Second, the sample size of the study (N = 2,764), while one of the largest used to date in considering subgroups of technology use, is relatively small due to the limited nature of the FRSS 95 sampling procedures. Looking forward, alternative national datasets with larger samples should be analyzed to continue identifying subgroups of technology-using teachers with a higher degree of statistical power to detect small to moderate effect size differences between these groups. Third, the results of the LCA yielded a strong model fit of at least four significantly different groups of technology-using teachers. However, in considering both fit statistics identified in the literature for determining the best model fit (Jung & Wickrama, 2008; Lo, 2005; Lo et al., 2001; Masyn, 2013; Muthén, 2002), as well as the entropy (0.674) of the four-class model, up to six different groups could be identified in the data. Still, we are confident in our decision to interpret the more parsimonious, four-class model due to the more conservative estimation of the LMR test and to avoid over-interpretation of the model. Fourth, robust variable selection within national datasets could provide a more complete picture of the types of teachers who use technology. Our hope is that subsequent national surveys on teacher technology use will consider more research-based constructs when developing future instruments. Finally, although our findings are robust, we cannot address the question of why certain teachers belonged to certain groups or why certain external variables predicted membership in these subgroups. We encourage future research to address these critical questions about teacher technology users through other descriptive studies.


CONCLUSIONS AND IMPLICATIONS


Our study reiterates the fact that technology-using teachers are not a monolithic group. We can identify four statistically distinct groups of technology-using teachers that are generalizable to a population of U.S. public school teachers in 2009. Also, we find that the subgroups are not randomly distributed across school contexts, as low-income schools are more likely to have teachers who use technology in less meaningful ways. This propels the movement to advance a social justice oriented theory for technology integration in schools that works to address digital inequity not only with what tools educators have but also what policies are developed to ensure that teachers and leaders are provided with the best professional supports and learning opportunities to learn how to use technology in ways that promote critical thinking, empower students’ identities, and validate students’ voice and perspectives as part of the learning process (Jenkins, Ito, & boyd, 2016; Livingstone, 2004).


As such, our study has several implications for actionable improvement in research, policy, and practice in educational technology. First, our new approach to exploring technology user typologies has considerable implications for the development of educational technology products, as well as for how schools select which technologies they purchase for teachers. Our findings are aligned with past research that problematized the widely accepted belief that technology tools are socially neutral entities that can be used with one approach across time, contexts, and individuals (Biraimah, 1993; Furr, Ragsdale, & Horton, 2005; Gorski, 2009). Knowing this, remaining fixated on describing technology as “just a tool” becomes difficult to justify. Zhao, Alvarez-Torres, Smith, and Tan (2004) noted that when educators only envision “technology [as] just a tool, a means to an end” (p. 1), this belief can have detrimental implications for educational practice. Promoting the technology-as-tool argument in schools “gives teachers a false sense of empowerment, as well as a feeling of guilt when they do not achieve their intended goals . . . [technology] comes with shapes and expectations” (Zhao et al., 2004, p. 1). As technology leaders in industry develop new innovations and as school leaders make decisions to purchase and promote certain classroom technologies, it is imperative to understand that the tools themselves propagate expectations for teacher usage. Seeing that two of the four groups of technology-using teachers in our study (Presenters and Assessors) use technology for distinct pedagogical purposes, we implore school leaders to circumspectly select new technologies that align to the vision for teaching and learning they expect to see in classrooms, particularly in schools that have been historically marginalized.
For example, does purchasing and installing stationary, interactive whiteboards actually encourage teachers to use technology in active, student-centered ways? Do one-to-one laptop programs promote teacher growth in a school of Evaders or Presenters? Looking forward, our study provides an empirically based typology framework for additional research and evaluation studies investigating which types of digital tools district leaders purchase and how those choices might influence what types of technology-using teachers exist at the individual, school, or district levels.


The typology described throughout this paper also reiterates the need for data-driven professional learning experiences for technology-using teachers that are situated to address the needs in their school contexts (Bauer & Kenton, 2005; Lawless & Pellegrino, 2007; Meier, 2005; Mouza, 2009). Prior research argues that technology professional development cannot assume homogeneity of teachers’ skill levels and competencies with technology (Brinkerhoff, 2006; Hughes & Ooms, 2004; Mouza & Wong, 2009; Phillips, Desimone, & Smith, 2011; Swan et al., 2002). As districts design new personalized, professional learning opportunities for teachers to address this reality, there is a renewed call for school leaders to use data for evidence-based improvement in professional learning (Bowers, Shoho, & Barnett, 2014). Our hope is that the use of latent class analysis in our study could provide a useful and innovative methodological model toward using quantitative data to create evidence-based technology professional development that focuses on building the capacity and skills of the teacher starting from his or her current practice. We imagine that district data leaders can utilize latent class analysis as a means to help identify sustained opportunities for professional development for teachers and encourage teachers in the same subgroup, or even different groups, to participate in highly customized, evidence-based professional learning communities.   
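To make this methodological suggestion concrete, the following is a minimal, hypothetical sketch of a latent class model for binary technology-use indicators, fit with the expectation-maximization (EM) algorithm in Python. This is an unweighted, simplified illustration on simulated data, not the Mplus three-step procedure with sampling weights used in the study; all function names and values here are invented for the example.

```python
import numpy as np

def fit_lca(X, n_classes, n_iter=200, tol=1e-6, seed=0):
    """EM for a latent class model with binary indicators.

    X: (n, J) array of 0/1 responses (e.g., teacher technology-use items).
    Returns class weights pi (K,), item-endorsement probabilities p (K, J),
    posterior class memberships resp (n, K), and the model's BIC.
    """
    rng = np.random.default_rng(seed)
    n, J = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    p = rng.uniform(0.25, 0.75, size=(n_classes, J))  # random start values
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: log P(class k, x_i) under local (conditional) independence
        logp = (X @ np.log(p).T) + ((1 - X) @ np.log(1 - p).T) + np.log(pi)
        ll = np.logaddexp.reduce(logp, axis=1).sum()
        resp = np.exp(logp - np.logaddexp.reduce(logp, axis=1, keepdims=True))
        # M-step: update class weights and item-endorsement probabilities
        pi = resp.mean(axis=0)
        p = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
        if ll - ll_old < tol:  # log-likelihood has stopped improving
            break
        ll_old = ll
    n_params = (n_classes - 1) + n_classes * J
    bic = -2 * ll + n_params * np.log(n)
    return pi, p, resp, bic
```

In practice, a district data team would fit models with one through, say, six classes, compare fit indices such as BIC (lower is better) before interpreting class profiles, and only then map professional learning offerings onto the resulting subgroups.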


Our study also has strong implications for research and practice in school technology leadership. Given that effective technology leadership continues to be the largest predictor of positive technology-related outcomes in schools (Anderson & Dexter, 2005), the results of the present study help in two ways. First, as modern conceptions of educational leadership (Boyce, 2015; Murphy, Elliott, Goldring, & Porter, 2007; Spillane, Halverson, & Diamond, 2001) posit that leadership is not a function of one individual but rather a series of leadership roles, this typology of technology-using teachers can help school leaders identify characteristics of individuals who can act as teacher leaders based on their own vision for technology integration in their schools. Second, because our findings are nationally generalizable, leaders can use this typology as a starting point for helping teachers close the gap between what they were taught to do and what they actually do in the classroom with technology (Pope, Hare, & Howard, 2002). Because the full LCA model accounts for multiple indicators of teacher technology use, the findings reveal critical gaps in how certain teachers use technology, allowing leaders to target the specific knowledge and skills teachers need to grow. We encourage further multilevel latent class analysis (Urick & Bowers, 2014) that nests these subgroups of teachers within schools with certain types of leaders to further explore the impact of technology leadership on teacher technology types.
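As a small illustration of how leaders or analysts might summarize the output of such a model, the following hypothetical helpers compute modal class assignments and the relative entropy statistic commonly reported alongside LCA solutions (values near 1 indicate crisp classification of teachers into subgroups). The posterior matrix `resp` is assumed to come from an already-fitted latent class model; these are illustrative names, not part of the study's actual analysis code.

```python
import numpy as np

def modal_assignment(resp):
    """Most likely latent class for each teacher, from posterior probabilities."""
    return resp.argmax(axis=1)

def relative_entropy(resp):
    """Relative entropy of an LCA solution: ~1 = crisp classification, ~0 = chance."""
    n, k = resp.shape
    eps = 1e-12  # guard against log(0) for posteriors of exactly zero
    entropy = -np.sum(resp * np.log(resp + eps))
    return 1.0 - entropy / (n * np.log(k))
```

A low relative entropy would caution leaders against treating modal subgroup labels (e.g., "Evader") as firm diagnoses for individual teachers.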


In examining teacher technology use as a multifaceted construct (Bebell et al., 2004), this study presents a clearer picture of teacher technology use and has several implications for the development of future policy interventions. In examining district-level policy for teacher technology use, Culp et al. (2003) found that policy makers tend to use three predetermined rationales to warrant increased investment in instructional technology: envisioning technology as a tool for addressing challenges with teaching and learning, using technology as a change agent for instructional practice, and promoting technology as a central force in economic competitiveness (Culp et al., 2003, pp. 5–6). While these rationales are notable in light of the increased emphasis on digital age learning (U.S. Department of Education, 2016), schools continue to suffer from an implementation problem when addressing certain barriers to sustaining technology integration efforts (Ertmer, 1999; Hsu, 2016; Kopcha, 2012). Many of the policies concerning educational technology are implemented using an ineffective “forward mapping approach,” by which policies are created and then implemented by policy makers without the input of individuals on the ground level in planning and execution (Elmore, 1980). Forward-mapped policy implementation can result in confusion, error, and obscurity on the ground level (Elmore, 1980). Hence, our nationally generalizable typology of four significantly different subgroups of technology-using teachers, along with the usage indicators of each type and predictive variables, can provide policy makers with a starting point as they create more person-centered, grassroots instructional technology policy interventions that evolve based on the characteristics and needs of the lowest level of implementation: teachers in schools.
Our findings, along with this “backward mapping” approach to policy (Elmore, 1980), could help transform district-level decision making as leaders develop policy instruments pertaining to resource allocation, teacher evaluation, teacher professional development, and teacher and principal preparation programs that better support the growth of various types of technology-using teachers and leaders.


Finally, this study is part of an emerging subset of research called “quantitative phenomenology” (Bowers et al., 2017), in which researchers use empirical, national-level data to explore the shared experiences of students, teachers, and school leaders. While we acknowledge that our study shows what types of technology-using teachers exist, rather than how or why these variables interact to influence teacher technology use, we maintain that it is the start of many important contributions to the field of educational technology. It is one of the only studies to quantitatively examine teacher technology user types with nationally representative data while also building on prior qualitative research that poignantly addresses the existing questions and complexities of teacher technology use in schools.


Acknowledgments


This research was supported by a grant from the American Educational Research Association, which receives funds for its "AERA Grants Program" from the National Science Foundation under Grant #DRL-0941014. Opinions reflect those of the authors and do not necessarily reflect those of the granting agencies.


References


Adams, C. (2006). PowerPoint, habits of mind, and classroom culture. Journal of Curriculum Studies, 38(4), 389–411.


Anderson, R. E., & Dexter, S. L. (2005). School technology leadership: An empirical investigation of prevalence and effect. Educational Administration Quarterly, 41, 49–82.


Antonenko, P. D., Toy, S., & Niederhauser, D. S. (2012). Using cluster analysis for data mining in educational technology research. Educational Technology Research and Development, 60(3), 383–398.


Asparouhov, T., & Muthén, B. (2008). Multilevel mixture models. In G. R. Hancock & K. M. Samuelsen (Eds.), Advances in latent variable mixture models (pp. 27–51). Charlotte, NC: Information Age.


Asparouhov, T., & Muthén, B. (2013). Auxiliary variables in mixture modeling: 3-step approaches using Mplus. Mplus Web Notes, 15, 1–24.


Ausband, L. T. (2006). Instructional technology specialists and curriculum work. Journal of Research in Technology in Education, 39(1), 1–21.


Bakk, Z., & Vermunt, J. K. (2016). Robustness of stepwise latent class analysis with continuous distal outcomes. Structural Equation Modeling: A Multidisciplinary Journal, 23(1), 20–31.


Bauer, J., & Kenton, J. (2005). Toward technology integration in the schools: Why it isn’t happening. Journal of Technology and Teacher Education, 13(4), 519–546.


Bebell, D., Russell, M., & O'Dwyer, L. M. (2004). Measuring teachers' technology use: Why multiple measures are more revealing. Journal of Research on Technology in Education, 37(1), 45–63.


Becker, H. J. (2000). Findings from the teaching, learning, and computing survey: Is Larry Cuban right? Education Policy Analysis Archives, 8(51).


Becker, H. J., Wong, Y., & Ravitz, J. L. (1998). Teaching, learning, and computing: A national survey of schools and teachers describing their best practices, teaching philosophies, and uses of technology. Irvine, CA: Author.


Becker, J. D. (2007). Digital equity in education: A multilevel examination of differences in and relationships between computer access, computer use, and state-level technology policies. Education Policy Analysis Archives, 15(3), 1–38.


Bill and Melinda Gates Foundation. (2015). Teachers know best: What educators want from digital instructional tools 2.0. Seattle, WA: Author.


Biraimah, K. (1993). The non-neutrality of educational computer software. Computers & Education, 20(4), 283–290.


Bowers, A. J., Blitz, M., Modeste, M., Salisbury, J., & Halverson, R. (2017). How leaders agree with teachers in schools on measures of leadership practice: A two-level latent class analysis of the Comprehensive Assessment of Leadership for Learning. Teachers College Record, 119(4).


Bowers, A. J., Shoho, A. R., & Barnett, B. G. (2014). Considering the use of data by school leaders for decision making: An introduction. In A. J. Bowers, A. R. Shoho, & B. G. Barnett (Eds.), Using data in schools to inform leadership and decision making (pp. 1–16). Charlotte, NC: Information Age.


Boyce, J. L. (2015). Commitment and leadership: What we know from the School and Staffing Survey (SASS). Doctoral dissertation.


Boyce, J., & Bowers, A. J. (2016). Principal turnover: Are there different types of principals who move from or leave their schools? A latent class analysis of the 2007–08 Schools and Staffing Survey and the 2008–09 Principal Follow-Up Survey. Leadership and Policy in Schools, 15(3), 237–272.


Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.


Brasseur-Hock, I. F., Hock, M. F., Biancarosa, G., Kieffer, M. J., & Deshler, D. D. (2011). Adolescent struggling readers in urban schools: Results of a latent class analysis. Learning and Individual Differences, 21(4), 438–452.


Brinkerhoff, J. (2006). Effects of a long-duration, professional development academy on technology skills, computer self-efficacy, and technology integration beliefs and practices. Journal of Research in Technology in Education, 39(1), 22–43.


Cho, V., & Littenberg-Tobias, J. (2016). Digital devices and teaching the whole student: Developing and validating an instrument to measure educators' attitudes and beliefs. Educational Technology Research and Development, 64(4), 643–659.


Cho, V., & Wayman, J. C. (2014). Districts' efforts for data use and computer data systems: The role of sensemaking in system use and implementation. Teachers College Record, 116(2).


Collins, A., & Halverson, R. (2009). Rethinking education in the age of technology: The digital revolution and schooling in America. New York, NY: Teachers College Press.


Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.


Cuban, L. (2015a). Kludge: A metaphor for technology use in schools.  Retrieved from https://larrycuban.wordpress.com/2016/01/30/kludge-a-metaphor-for-technology-use-in-schools/


Cuban, L. (2015b). Technology evangelists, skeptics, and those in the middle.  Retrieved from https://larrycuban.wordpress.com/2015/11/12/technology-evangelists-skeptics-and-those-in-the-middle/


Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38(4), 813–834.


Culp, K. M., Honey, M., & Mandinach, E. (2003). A retrospective on twenty years of education technology policy. Washington, DC: Author.


Donnelly, D., McGarr, O., & O'Reilly, J. (2011). A framework for teachers' integration of ICT into their classroom practice. Computers & Education, 57, 1469–1483.


Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., . . . Sussex, W. (2007). Effectiveness of reading and mathematics software products: Findings from the first student cohort. (NCEE 2007–4005). Washington, DC: U.S. Department of Education.


Dziak, J. J., Lanza, S. T., & Tan, X. (2014). Effect size, statistical power, and sample size requirements for the bootstrap likelihood ratio test in latent class analysis. Structural Equation Modeling: A Multidisciplinary Journal, 21(4), 534–552.


Elmore, R. F. (1980). Backward mapping: Implementation research and policy decisions. Political Science Quarterly, 94(4), 601–616.


Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.


Ertmer, P. A. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47–61.


Ertmer, P. A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology integration? Educational Technology Research and Development, 53(4), 25–39.


Ertmer, P. A., & Ottenbreit-Leftwich, A. T. (2010). Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42(3), 255–284.


Ertmer, P. A., Ottenbreit-Leftwich, A. T., Sadik, O., Sendurur, E., & Sendurur, P. (2012). Teacher beliefs and technology integration practices: A critical relationship. Computers & Education, 59, 423–435.


Ertmer, P. A., Ottenbreit-Leftwich, A. T., & York, C. S. (2006). Exemplary technology-using teachers: Perceptions of factors influencing success. Journal of Computing in Teacher Education, 23(2), 55–61.


Furr, P. F., Ragsdale, R., & Horton, S. G. (2005). Technology's non-neutrality: Past lessons can help guide today's classrooms. Education and Information Technologies, 10(3), 277–287.


Gorski, P. C. (2009). Insisting on digital equity: Reframing the dominant discourse on multicultural education and technology. Urban Education, 44(2), 348–364.


Gray, L., Thomas, N., Lewis, L., & Tice, P. (2010). Teachers' use of educational technology in U.S. public schools, 2009: First look (NCES 2010-040). Washington, DC: National Center for Education Statistics.


Greenhow, C., Robelia, B., & Hughes, J. E. (2009). Learning, teaching and scholarship in a digital age Web 2.0 and classroom research: What path should we take now? Educational Researcher, 38(4), 246–259.


Harris, M. (2015). The educational digital divide: A research synthesis of digital inequity in education. Unpublished manuscript.


Heitink, M., Voogt, J., Verplanken, L., Braak, J. v., & Fisser, P. (2016). Teachers' professional reasoning about their pedagogical use of technology. Computers & Education, 101, 70–83.


Henry, K. L., & Muthén, B. (2010). Multilevel latent class analysis: An application of adolescent smoking typologies with individual and contextual predictors. Structural Equation Modeling: A Multidisciplinary Journal, 17(2), 193–215.


Hew, K. F., & Brush, T. (2007). Integrating technology into K–12 teaching and learning: Current knowledge gaps and recommendations for future research. Education Technology Research Development, 55, 223–252.


Hsu, P.-S. (2016). Examining current beliefs, practices, and barriers about technology integration: A case study. TechTrends, 60(1), 30–40.


Hughes, J. E., & Ooms, A. (2004). Content-focused technology inquiry groups: Preparing urban teachers to integrate technology to transform student learning. Journal of Research on Technology in Education, 36(4), 397–411.


Jenkins, H., Ito, M., & boyd, d. (2016). Participatory culture in a networked era. Cambridge, U.K.: Polity Press.


Jung, T., & Wickrama, K. A. S. (2008). An introduction to latent class growth analysis and growth mixture modeling. Social and Personality Psychology Compass, 2(1), 302–317.


Kim, M., Vermunt, J., Bakk, Z., Jaki, T., & Van Horn, M. L. (in press). Modeling predictors of latent classes in regression mixture models. Structural Equation Modeling: A Multidisciplinary Journal.


Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology and Teacher Education, 9(1), 60–70.


Kopcha, T. J. (2012). Teachers’ perceptions of the barriers to technology integration and practices with technology under situated professional development. Computers & Education, 59, 1109–1121.


Kuiper, E., & de Pater-Sneep, M. (2014). Student perceptions of drill-and-practice mathematics software in primary education. Mathematics Education Research Journal, 26(2), 215–237.


Lanza, S. T., Tan, X., & Bray, B. C. (2013). Latent class analysis with distal outcomes: A flexible model-based approach. Structural Equation Modeling: A Multidisciplinary Journal, 20(1), 1–26.


Lawless, K. A., & Pellegrino, J. W. (2007). Professional development in integrating technology into teaching and learning: Knowns, unknowns, and ways to pursue better questions and answers. Review of Educational Research, 77(4), 575–614.


Lemke, C., Coughlin, E., & Reifsneider, D. (2009). Technology in schools: What the research says. Culver City, CA: Author.


Lesgold, A. (2003). Detecting technology's effects in complex school environments. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 38–74). New York, NY: Teachers College Press.


Levin, T., & Wadmany, R. (2006). Teachers' beliefs and practices in technology-based classrooms: A developmental view. Journal of Research on Technology in Education, 39(2), 157–181.


Livingstone, S. (2004). Media literacy and the challenge of new information and communication technologies. The Communication Review, 7(1), 3–14.


Lo, Y. (2005). Likelihood ratio tests of the number of components in a normal mixture with unequal variances. Statistics & Probability Letters, 71, 225–235.


Lo, Y., Mendell, N. R., & Rubin, D. B. (2001). Testing the number of components in a normal mixture. Biometrika, 88(3), 767–778.


Magidson, J., & Vermunt, J. K. (2004). Latent class models. In D. Kaplan (Ed.), The Sage handbook of quantitative methodology for the social sciences (pp. 175–198). Thousand Oaks, CA: Sage.


Mama, M., & Hennessy, S. (2013). Developing a typology of teacher beliefs and practices concerning classroom use of ICT. Computers & Education, 68, 380–387.


Market Data Retrieval. (1999). Technology in education 1999. Retrieved from Shelton, CT: Author.


Masyn, K. E. (2013). Latent class analysis and finite mixture modeling. In T. D. Little (Ed.), The Oxford handbook of quantitative methods in psychology (Vol. 2, pp. 551–611). New York, NY: Oxford University Press.


McKnight, K., O'Malley, K., Ruzic, R., Horsley, M. K., Franey, J. J., & Bassett, K. (2016). Teaching in a digital age: How educators use technology to improve student learning. Journal of Research on Technology in Education, 48(3), 194–211.


McLeod, S., Bathon, J. M., & Richardson, J. W. (2011). Studies of technology tool usage are not enough: A response to the articles in this special issue. Journal of Research in Leadership Education, 6(5), 288–297.


Meier, E. B. (2005). Situating professional development in urban schools. Journal of Educational Computing Research, 32(4), 395–407.


Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.


Mostmans, L., Vleugels, C., & Bannier, S. (2012). Raise your hands or hands-on? The role of computer-supported collaborative learning in stimulating intercreativity in education. Journal of Educational Technology & Society, 15(4), 104–115.


Mouza, C. (2009). Does research-based professional development make a difference? A longitundinal investigation of teacher learning in technology integration. Teachers College Record, 111(5), 1195–1241.


Mouza, C., & Wong, W. (2009). Studying classroom practice: Case development for professional learning in technology integration. Journal of Technology and Teacher Education, 17(2), 175–202.


Mumtaz, S. (2000). Factors affecting teachers' use of information and communications technology: A review of the literature. Journal of Information Technology for Teacher Education, 9(3), 319–341.


Murphy, J., Elliott, S. N., Goldring, E., & Porter, A. C. (2007). Leadership for learning: A research-based model and taxonomy of behaviors. School Leadership and Management, 27(2), 179–201.


Muthén, B. (2002). Beyond SEM: General latent variable modeling. Behaviormetrika, 21(51), 81–118.


Muthén, B. (2004). Latent variable analysis. In D. Kaplan (Ed.), The Sage handbook for quantitative methodology for the social sciences (pp. 345–368). Thousand Oaks, CA: Sage.


Muthén, B., & Asparouhov, T. (2002). Latent variable analysis with categorical outcomes: Multiple-group and growth modeling in Mplus. Mplus Web Notes, 4(5), 1–22.


Muthén, L., & Muthén, B. (2012). Mplus user's guide (7th ed.). Los Angeles, CA: Muthén & Muthén.


National Center for Education Statistics, U.S. Department of Education. (2009). Technical notes - Teachers' use of educational technology in U.S. public schools, 2009. Washington, DC: Author.


Natriello, G. (2001). Bridging the second digital divide: What can sociologists of education contribute? Sociology of Education, 74(3), 260–265.


Neo, M., & Neo, T.-K. (2009). Engaging students in multimedia-mediated constructivist learning—Students' perceptions. Educational Technology and Society, 12(2), 254–266.


Newman, G., Wiggins, A., Crall, A., Graham, E., Newman, S., & Crowston, K. (2012). The future of citizen science: Emerging technologies and shifting paradigms. Frontiers in Ecology and the Environment, 10(6), 298–304.


Nylund, K. L., Asparouhov, T., & Muthén, B. O. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling: A Multidisciplinary Journal, 14(4), 535–569.


Nylund-Gibson, K., Grimm, R., Quirk, M., & Furlong, M. (2014). A latent transition mixture model using the three-step specification. Structural Equation Modeling: A Multidisciplinary Journal, 21(3), 439–454.


O'Dwyer, L., Russell, M., & Bebell, D. (2004). Identifying teacher, school, and district characteristics associated with elementary teachers' use of technology: A multilevel perspective. Education Policy Analysis Archives, 12(48), 1–33.


O'Dwyer, L., Russell, M., & Bebell, D. (2005). Identifying teacher, school, and district characteristics associated with middle and high school teachers' use of technology: A multilevel perspective. Journal of Educational Computing Research, 33(4), 369–393.


O'Hara, S. P., Pitta, D. A., Pritchard, R. H., & Webb, J. M. (2016). Implementing new technologies to support social justice pedagogy. In R. Papa, D. M. Eadens, & D. W. Eadens (Eds.), Social justice instruction: Empowerment on the chalkboard (pp. 103–114). New York, NY: Springer.


Palak, D., & Walls, R. R. (2009). Teachers' beliefs and technology practices: A mixed-methods approach. Journal of Research on Technology in Education, 41(4), 417–441.


Phillips, K. J. R., Desimone, L., & Smith, T. M. (2011). Teacher participation in content-focused professional development & the role of the state policy. Teachers College Record, 113(11), 2586–2621.


Pope, M., Hare, D., & Howard, E. (2002). Technology integration: Closing the gap between what preservice teachers are taught to do and what they can do. Journal of Technology and Teacher Education, 10(2), 191–203.


Purcell, K., Heaps, A., Buchanan, J., & Friedrich, L. (2013). How teachers are using technology at home and in their classrooms. Washington, DC: Author.


Ravitz, J. L., Becker, H. J., & Wong, Y. (2000). Constructivist-compatible beliefs and practices among U.S. teachers.


Rogers, E. M. (1962). Diffusion of innovations. New York, NY: The Free Press.


Rowand, C. (2000). Teacher use of computers and the Internet in public schools: Stats in brief. Washington, DC: National Center for Education Statistics.


Russell, M., Bebell, D., O'Dwyer, L., & O'Connor, K. (2003). Examining teacher technology use: Implications for preservice and inservice teacher preparation. Journal of Teacher Education, 54(4), 297–310.


Russell, M., Goldberg, A., & O'Connor, K. (2003). Computer-based testing and validity: A look back into the future. Assessment in Education, 10(3), 279–293.


Russell, M., O'Dwyer, L., Bebell, D., & Miranda, H. (2004). Technical report for the USEIT study. Boston, MA.


Samuelsen, K. M., & Raczynski, K. (2013). Latent class/profile analysis. In Y. Petscher, C. Schatschneider, & D. Compton (Eds.), Applied quantitative analysis in education and the social sciences (pp. 304–328). New York, NY: Routledge.


Sherman, K., & Howard, S. K. (2012). Teachers' beliefs about first- and second-order barriers to ICT integration: Preliminary findings from a South African study. Paper presented at the Society for Information Technology and Teacher Education.


Spillane, J. P., Halverson, R., & Diamond, J. B. (2001). Investigating school leadership practice: A distributed perspective. Educational Researcher, 30(3), 23–28.


Strayhorn, T. (2009). Accessing and analyzing national databases. In T. Kowalski & T. Lasley (Eds.), Handbook of data-based decision making in education. New York, NY: Routledge.


Swan, K., Holmes, A., Vargas, J., Jennings, S., Meier, E., & Rubenfield, L. (2002). Situated professional development and technology integration: The Capital Area Technology and Inquiry in Education (CATIE) mentoring program. Journal of Technology and Teacher Education, 10(2), 169–190.


Tofighi, D., & Enders, C. K. (2008). Identifying the correct number of classes in growth mixture models. In G. R. Hancock & K. M. Samuelsen (Eds.), Advances in latent variable mixture models (pp. 317-341). Charlotte, NC: Information Age.


U.S. Department of Education. (2016). Future ready learning: Reimagining the role of technology in education. Washington, DC: Author.


Urick, A., & Bowers, A. J. (2014). What are the different types of principals across the United States? A latent class analysis of principal perception of leadership. Educational Administration Quarterly, 50(1), 96–134.


Valadez, J. R., & Duran, R. (2007). Redefining the digital divide: Beyond access to computers and the Internet. The High School Journal, 90(3), 31–44.


Vannatta, R. A., & Beyerbach, B. (2000). Facilitating a constructivist vision for technology integration among education faculty and preservice teachers. Journal of Research on Computing in Education, 33(2), 132–148.


Vannatta, R. A., & Fordham, N. (2004). Teacher dispositions as predictors of classroom technology use. Journal of Research on Technology in Education, 36(3), 253–271.


Vermunt, J. K. (2010). Latent class modeling with covariates: Two improved three-step approaches. Political Analysis, 18(4), 450–469.


Vermunt, J. K., & Magidson, J. (2002). Latent class cluster analysis. In J. A. Hagenaars & A. L. McCutcheon (Eds.), Applied latent class analysis. New York, NY: Cambridge University Press.


Vermunt, J. K., & Magidson, J. (2004). Latent class analysis. In The Sage encyclopedia of social science research methods (pp. 549–553). Thousand Oaks, CA: Sage.


Vermunt, J. K., & Magidson, J. (2007). Latent class analysis with sampling weights: A maximum-likelihood approach. Sociological Methods & Research, 36(1), 87–111.


Warschauer, M. (2000). Technology and school reform: A view from both sides of the tracks. Education Policy Analysis Archives, 8(4), 1–22.


Warschauer, M. (2003). Demystifying the digital divide. Scientific American, 289(2), 43–47.


Warschauer, M., Knobel, M., & Stone, L. (2004). Technology and equity in schooling: Deconstructing the digital divide. Educational Policy, 18(4), 562–588.


Warschauer, M., & Matuchniak, T. (2010). New technology and digital worlds: Analyzing evidence of equity in access, use, and outcomes. Review of Research in Education, 34(1), 179–225.


Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student achievement in mathematics. Princeton, NJ: Educational Testing Service.


Wenglinsky, H. (2005). Using technology wisely: The keys to success in schools. New York, NY: Teachers College Press.


Windschitl, M., & Sahl, K. (2002). Tracing teachers' use of technology in a laptop computer school: The interplay of teacher beliefs, social dynamics, and institutional culture. American Educational Research Journal, 39(1), 165–205.


Winters, M., & McNally, T. (2014). 2014 U.S. edtech funding hits $1.36B. edSurge. Retrieved from https://www.edsurge.com/news/2014-12-23-2014-us-edtech-funding-hits-1-36b


Yan, W., & Piper, D. M. (2003). The relationship between leadership, self-efficacy, computer experience, attitudes, and teachers' implementation of computers in the classroom. Paper presented at the Society for Information Technology and Teacher Education International Conference 2003, Chesapeake, VA.


Zhao, Y., Alvarez-Torres, M. J., Smith, B., & Tan, H. S. (2004). The non-neutrality of technology: A theoretical analysis and empirical study of computer mediated communication technologies. Journal of Educational Computing Research, 30(1/2), 23–55.


Zhao, Y., Pugh, K., Sheldon, S., & Byers, J. L. (2002). Conditions for classroom technology innovations. Teachers College Record, 104(3), 482–515.


Zhao, Y., Zhang, G., Lei, J., & Qiu, W. (2016). Never send a human to do a machine's job: Correcting the top 5 edtech mistakes. Thousand Oaks, CA: Corwin.



APPENDIX 1-A


Descriptive Statistics of Indicator Variables for Teachers Who Use Technology in Their Classrooms

Variable Name                                       N      Min  Max  Mean   SD     FRSS 95 Variable
Use of technology for instruction
  Making presentations                              2,764  0    1    0.664  0.472  Q6G; 0=Never/Rarely, 1=Sometimes/Always
  Administering tests                               2,764  0    1    0.455  0.498  Q6H; 0=Never/Rarely, 1=Sometimes/Always
  Drill, practice programs, tutorials               2,764  0    1    0.525  0.499  Q6J; 0=Never/Rarely, 1=Sometimes/Always
Preparation to use technology
  Professional learning activities                  2,675  0    1    0.951  0.215  Q9C; 0=Not at all, 1=To some extent
  Training from technology staff                    2,675  0    1    0.943  0.232  Q9D; 0=Not at all, 1=To some extent
  Independent learning                              2,705  0    1    0.979  0.142  Q9E; 0=Not at all, 1=To some extent
Disposition toward professional learning
  Technology PD met goals                           2,446  0    1    0.824  0.381  Q11A; 1=Agree or strongly agree
Use of technology for productivity
  Email or listserv with parents                    2,764  0    1    0.599  0.490  Q8A1; 0=Never/Rarely, 1=Sometimes/Always
  Email or listserv with students                   2,764  0    1    0.252  0.434  Q8A2; 0=Never/Rarely, 1=Sometimes/Always
  Student record management                         2,764  0    1    0.807  0.395  Q6D; 0=Never/Rarely, 1=Sometimes/Always
Teacher-directed student use of technology for discrete skills
  Preparing written text                            2,498  0    1    0.633  0.482  Q7A; 0=Never/Rarely, 1=Sometimes/Always
  Learning/practicing basic skills                  2,573  0    1    0.699  0.459  Q7C; 0=Never/Rarely, 1=Sometimes/Always
  Conducting research                               2,545  0    1    0.688  0.464  Q7D; 0=Never/Rarely, 1=Sometimes/Always
  Solving problems, analyzing data,
    performing calculations                         2,237  0    1    0.466  0.499  Q7H; 0=Never/Rarely, 1=Sometimes/Always
Teacher-directed student use of technology for hands-on tasks
  Developing and presenting multimedia
    presentations                                   2,348  0    1    0.450  0.498  Q7J; 0=Never/Rarely, 1=Sometimes/Always
  Creating art, music, movies, or webcasts          2,159  0    1    0.266  0.442  Q7K; 0=Never/Rarely, 1=Sometimes/Always
  Conduct experiments or perform measurements       2,067  0    1    0.266  0.442  Q7I; 0=Never/Rarely, 1=Sometimes/Always

Sub-Sample n = 2,764

     

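As an illustrative aside (not the authors' code), the binary indicators in Appendix 1-A collapse the FRSS 95 frequency scale into Never/Rarely = 0 versus Sometimes/Always = 1. A minimal sketch of that recoding, assuming a hypothetical 4-point raw coding (1=Never, 2=Rarely, 3=Sometimes, 4=Always):

```python
def dichotomize(response):
    """Recode an assumed 4-point frequency response:
    Never/Rarely (1-2) -> 0, Sometimes/Always (3-4) -> 1."""
    if response is None:
        return None  # preserve missingness (coded 9999 in the Mplus setup)
    return 1 if response >= 3 else 0


def binary_mean(values):
    """Mean of a binary indicator, skipping missing values --
    how the per-item means in the appendix table are computed."""
    valid = [v for v in values if v is not None]
    return sum(valid) / len(valid)
```

For example, `dichotomize(2)` returns 0 and `dichotomize(4)` returns 1; because the items are dichotomous, each mean in the table is simply the proportion of teachers endorsing the item at least "sometimes."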


APPENDIX 1-B


Descriptive Statistics of Covariates for Teachers Who Use Technology in Their Classrooms

Variable                                          N      Min  Max  Mean   SD     FRSS 95 Variable

School urbanicity:
  City                                            2,764  0    1    0.216  0.412  URBAN; 1=City
  Town                                            2,764  0    1    0.145  0.350  URBAN; 1=Town
  Rural                                           2,764  0    1    0.302  0.459  URBAN; 1=Rural

School type:
  Elementary                                      2,764  0    1    0.588  0.492  LEVEL; 1=Elementary school

Enrollment:
  Small (less than 300)                           2,764  0    1    0.124  0.329  SIZE; 1=Less than 300
  Large (more than 1,000)                         2,764  0    1    0.258  0.438  SIZE; 1=More than 1,000

More than 50% of students eligible
  for free or reduced-price lunch                 2,764  0    1    0.434  0.496  POVST; 1=More than 50%

Number of computers in classroom                  2,764  0    33   4.47   5.819  Q1A1_TOP

Years of teaching experience                      2,764  1    41   13.83  9.797  Q15_TOP

Sub-Sample n                                      2,764

     


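A quick consistency check on the table above: for a 0/1 indicator with mean p, the population standard deviation is sqrt(p(1 - p)), so the reported SDs should track the reported means. A minimal sketch (small discrepancies, e.g. for Town, are expected because the published SDs reflect sample, and possibly weighted, estimates):

```python
import math


def bernoulli_sd(p):
    """Population SD of a 0/1 indicator with mean p: sqrt(p * (1 - p))."""
    return math.sqrt(p * (1.0 - p))


# Spot checks against Appendix 1-B:
#   City:       mean 0.216 -> SD ~0.412 (reported 0.412)
#   Rural:      mean 0.302 -> SD ~0.459 (reported 0.459)
#   Elementary: mean 0.588 -> SD ~0.492 (reported 0.492)
```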

APPENDIX 1-C


Mplus Code


TITLE: Teacher Technology Use LCA, FRSS 95 2009

DATA: FILE = "C:/Users/keg2132/Documents/InUseDataFiles/FRSS95_CLEAN.dat" ;

VARIABLE:
 NAMES = ID Q6G_R Q6H_R Q6J_R Q9C_R
  Q9D_R Q9E_R Q11A_R Q8A1_R Q8A2_R Q6D_R
  Q7A_R Q7C_R Q7D_R Q7H_R Q7J_R Q7K_R
  Q7I_R CITY TOWN RURAL POVST_R ELEM
  SEC SMALL MEDIUM YRSEXP COMPS TFWT ;
 MISSING = ALL(9999) ;
 WEIGHT = TFWT ;
 IDVARIABLE = ID ;
 USEVARIABLES = Q6G_R Q6H_R Q6J_R Q9C_R
  Q9D_R Q9E_R Q11A_R Q8A1_R Q8A2_R
  Q6D_R Q7A_R Q7C_R Q7D_R Q7H_R Q7J_R
  Q7K_R Q7I_R ;
 CATEGORICAL = Q6G_R Q6H_R Q6J_R Q9C_R
  Q9D_R Q9E_R Q11A_R Q8A1_R Q8A2_R
  Q6D_R Q7A_R Q7C_R Q7D_R Q7H_R Q7J_R
  Q7K_R Q7I_R ;
 CLASSES = c(4) ;
 AUXILIARY =
  (R3STEP) CITY TOWN RURAL POVST_R ELEM
  SEC SMALL MEDIUM YRSEXP COMPS ;

ANALYSIS:
 TYPE = MIXTURE ;
 PROCESSORS = 8 (STARTS) ;
 MITERATIONS = 5000 ;
 STARTS = 25000 250 ;
 STITERATIONS = 100 ;

OUTPUT:
 SAMPSTAT STANDARDIZED TECH11 ;

PLOT:
 TYPE = PLOT3 ;
 SERIES = Q6G_R Q6H_R Q6J_R Q9C_R
  Q9D_R Q9E_R Q11A_R Q8A1_R Q8A2_R
  Q6D_R Q7A_R Q7C_R Q7D_R Q7H_R Q7J_R
  Q7K_R Q7I_R (*) ;

SAVEDATA:
 FILE = CPROBS-KEG-001.DAT ;
 SAVE = CPROBABILITIES ;
 FORMAT = FREE ;
 ESTIMATES = MIXEST-001.DAT ;

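As an illustrative aside, the measurement model behind the Mplus syntax above (TYPE = MIXTURE with binary CATEGORICAL indicators) is a finite mixture of independent Bernoulli items, which can be estimated with a short expectation-maximization (EM) routine. The sketch below is a toy stand-in for the Mplus estimator, not the authors' analysis; `fit_lca` and the synthetic data in the usage note are hypothetical:

```python
import random


def fit_lca(data, n_classes, n_iter=200, seed=0):
    """Toy EM estimator for a latent class model with binary indicators.
    data: list of 0/1 response vectors (no missing values, unlike the real FRSS data).
    Returns (class_weights, item_probs), where item_probs[k][j] is the
    estimated probability that a member of class k endorses item j."""
    rng = random.Random(seed)
    n, j = len(data), len(data[0])
    weights = [1.0 / n_classes] * n_classes
    # Random start values break the symmetry between classes
    probs = [[rng.uniform(0.25, 0.75) for _ in range(j)] for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior class probabilities for each respondent
        post = []
        for y in data:
            lik = []
            for k in range(n_classes):
                p = weights[k]
                for item, resp in enumerate(y):
                    p *= probs[k][item] if resp == 1 else 1.0 - probs[k][item]
                lik.append(p)
            total = sum(lik) or 1e-300
            post.append([p / total for p in lik])
        # M-step: update class weights and item endorsement probabilities
        for k in range(n_classes):
            wk = sum(post[i][k] for i in range(n))
            weights[k] = wk / n
            for item in range(j):
                num = sum(post[i][k] * data[i][item] for i in range(n))
                probs[k][item] = min(max(num / (wk or 1e-300), 1e-6), 1 - 1e-6)
    return weights, probs
```

On synthetic data with two well-separated classes (one endorsing each item with probability 0.9, the other 0.1), the routine recovers two classes with high and low endorsement profiles; the production analysis additionally handles sampling weights, missing data, many random starts, and the R3STEP covariate step, all of which this sketch omits.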
 







Cite This Article as: Teachers College Record Volume 120 Number 8, 2018, p. 1-42
https://www.tcrecord.org ID Number: 22277
