
What Is a Culture of Evidence? How Do You Get One? And . . . Should You Want One?


by Charles A. Peck & Morva A. McDonald - 2014

Background/Context: Contemporary state and national policy rhetoric reflects increased press for “evidence-based” decision making within programs of teacher education, including admonitions that programs develop a “culture of evidence” in making decisions regarding policy and practice. Recent case study reports suggest that evidence-based decision making in teacher education involves far more than access to data—including a complex interplay of motivational, technical, and organizational factors.

Purpose: In this paper we use a framework derived from Cultural Historical Activity Theory to describe changes in organizational practice within two teacher education programs as they began to use new sources of outcome data to make decisions about program design, curriculum and instruction.

Research Design: We use a retrospective case study approach, drawing on interviews, observations and documents collected in two university programs undergoing evidence-based renewal.

Conclusions: We argue for the value of a CHAT perspective as a tool for clarifying linkages between the highly abstract and rhetorically charged concept of a “culture of evidence” and concrete organizational practices in teacher education. We conclude that the meaning of a “culture of evidence” depends in large measure on the motivations underlying its development.

Contemporary accountability pressures have led both policymakers and practitioners in many fields of professional practice to embrace theories of action related to a variety of ideas about the use of evidence in decision making. Whether enacted within a rhetorical discourse emphasizing reliance on “what works” as in the UK and other European countries (Young, Ashby, Boaz, & Grayson, 2002), or that of “scientifically based research” as in the United States (National Research Council, 2002), the logic and ideology underlying this policy movement reflect the pervasive belief that critical questions of social policy can, and should, be resolved through the application of the methods of social science research. In teacher education, one of the most vigorous and visible commitments to this approach has been the Teachers for a New Era (TNE) initiative sponsored by the Carnegie Corporation of New York. The first design principle used to articulate the TNE theory of action for achieving significant reform in teacher education through this initiative was that “decisions in programs of teacher education should be guided by a respect for evidence” (Fallon, 2006). Perhaps most significantly, the logic of evidence-based program improvement has been embedded in national standards for program accreditation (NCATE, n.d., Standard IIc; retrieved December 15, 2013, from http://www.ncate.org), as well as in standards for program review and approval in an increasing number of states. It seems that the idea that evidence about program outcomes should be placed center stage in decisions about policy and practice has become evident “everywhere” in teacher education (Cochran-Smith, 2005).


Of course, the general proposal that research and other forms of evidence should be routinely used to make decisions about policy and practice is hardly a radical idea. Initiatives grounded in closely related arguments about the value of empirical approaches to programmatic decision making have been undertaken in several fields, variously conceptualized as efforts to promote “research utilization,” “knowledge transfer,” “organizational learning,” and “diffusion and adoption of innovation” (Achenbach, 2005; Estabrooks, 2007; Honig & Coburn, 2008; Miles & Huberman, 1984; van Kammen, deSavigny, & Sewankambo, 2006). In the context of the contemporary wave of interest in evidence-based decision making, a relatively new construct that has received considerable attention in higher education, and other fields, is the notion that institutions and programs should strive to create a “culture of evidence” (Dwyer, Millett, & Payne, 2006; Fallon, 2006). We interpret the idea of “culture” as used in this context to suggest that developing and regularizing data-based decision practices might not simply be a question of collecting and analyzing data, but might extend to broader dimensions of organizational change, including collective values, institutional policies, and the nature of conceptual and material tools through which decision practices are enacted (Brown & Duguid, 2002; Cochran-Smith & the Boston College Evidence Team [BCET], 2009; Moss & Piety, 2007). Two recent empirical accounts of evidence-based program renewal processes in teacher education support this view (Cochran-Smith & BCET, 2009; Peck, Gallucci, & Sloan, 2010). For example, Cochran-Smith and her colleagues described the process of evidence-based change they undertook as part of the Teachers for a New Era project at Boston College. In this report, the authors underscored the multidimensional nature of the change process, including not only the development of new outcome measures, but also the clarification of the values and beliefs to guide choices about outcome measures and the renegotiation of organizational practices related to using data for decision making. The complex and multidimensional nature of the relationships between evidence, organizational culture, and change is equally evident in another recent study of evidence-based change in teacher education (Peck et al., 2009). In this report, the authors describe how one program approached implementation of a new state-mandated preservice teacher performance assessment through a strategic process of organizational renewal and change centered on the use of the new source of outcome data for program evaluation and improvement. While the contexts in which these examples of evidence-based programmatic change in teacher education were undertaken were different in many ways, their findings converge in identifying the complex interplay of organizational values, motives, policies, and practices that must be considered in developing a concrete and practical response to the question: What is a culture of evidence in teacher education?


In this paper, we hope to contribute to the ongoing conversation about this question in a way that is relevant to current and intensifying policy pressures in teacher education. Drawing on contemporary socio-cultural theories, our analysis is based on the assumption that learning, decision making, and change processes within organizations are negotiated through participation in concrete organizational practices and social relationships, and mediated through use of specific conceptual and material tools (Boreham & Morgan, 2004; Nicolini, Gherardi, & Yanow, 2003). Using a conceptual framework developed by Yrjo Engestrom (1987, 2001, 2008), we describe cultural changes we have documented in two university teacher education programs, each of which has been engaged in substantial efforts to create evidence-based decision-making policies and practices. We compare practices in these two programs prior to the initiation of efforts to introduce and use new sources of evidence in decision making with those observed after each program had worked to develop and implement evidence-based decision practices over a period of several years. We also describe examples of actions undertaken by faculty and program leaders in these programs as part of strategic efforts to build programmatic capacity to collect, analyze, and use data for organizational learning and change. Finally, we take up the provocative question of whether a “culture of evidence” is something we should seek to create in teacher education (Cochran-Smith & BCET, 2009).


CULTURAL-HISTORICAL ACTIVITY THEORY AND ORGANIZATIONAL CHANGE


Our approach to unpacking the idea of a “culture of evidence” focuses on social and material aspects of organizational policy and practice, and may be distinguished from a variety of other approaches to understanding learning, decision making, and change processes, including those drawn from cognitive science (Simon, 1991), social psychology (Argyris & Schon, 1996), sociology (Meyer & Rowan, 1977), and political science (McDonnell & Elmore, 1987). In electing this socio-cultural approach, we attend particularly to the ways in which complex human activity is situated in webs of (often conflicting) individual and collective goals, and mediated by a variety of conceptual and material tools (Brown & Duguid, 1991; Nicolini et al., 2003; Orlikowski, 2002; Wenger, 1998). A guiding assumption here is that an organizational “culture” is defined by the organization’s ways of doing things, as individuals participate in collective activities that are shaped by the goals of the organization, the tools at its disposal, and its history (Schein, 1993).


An extensive line of research and development work based on this general theoretical orientation has been conducted over the last two decades by Engestrom and his colleagues (Engestrom, 1987, 2001, 2008; Engestrom & Sannino, 2010). Extending foundational theorizing by Vygotsky (1978) and Leontiev (1978), Engestrom developed a conceptual framework that parses complex systems of human activity (such as a teacher education program) in terms of six interacting parameters of practice. His analysis began by locating a subject, or point of view from which the activity is considered. This point of view may be that of a specific individual or a group of individuals participating in the activity in question. For purposes of the present analysis, we have found it useful to think of the subject as the group of faculty/staff involved in delivering a program of teacher education. Within the Engestrom framework, the object of activity may be understood as its goal, or objective—to be distinguished from its outcome, which may or may not reflect attainment of the goal. The general object of activity for a teacher education program, of course, is the preparation of competent and effective teachers, although other objects/goals can and do exist for every program. A third dimension of the activity system consists of the instruments or tools used by participants to achieve the object. These tools may be conceptual, linguistic, or material in nature. Material tools commonly used in teacher education include syllabi, observation protocols, and assessment tools; conceptual tools might include ideas such as social justice, culturally responsive teaching, or evidence-based practice.


To this basic subject–object–instruments triad, Engestrom added three dimensions of social context. The first consists of rules, including both the formal policies and informal regularities and expectations for practice that influence the activity in question. In teacher education, these are myriad and include local and state policies, expectations for faculty participation and performance, and rules for faculty/staff compensation. Community refers to the characteristics, and particularly values, beliefs, and ideologies of the group undertaking the activity in question. For our present analysis, this would include the faculty, staff, students, and school partners involved in a teacher education program. Finally, Engestrom identified division of labor as a key feature of complex activity systems. This refers to formal and de facto differentiation of roles and responsibilities for those participating in the activity. A major issue for every teacher education program has to do with the way participation in the work of the program is structured by roles that (typically) differentiate between coursework and fieldwork, or between university and public-school-based teacher educators, or between part-time and tenure-line faculty. Figure 1 depicts the Engestrom Activity Theory framework and illustrates the lines of interacting influence among the dimensions of the system.


Figure 1. Engestrom Activity Theory framework





In the context of teacher education, significant tensions arise due to contradictions that exist between elements of the activity system. For example, traditional divisions of labor in university-based teacher education often differentiate among faculty roles emphasizing coursework on campus, supervisor roles emphasizing observation and evaluation of teacher candidates in the field, and cooperating teacher roles emphasizing collaboration with candidates in the classroom. These highly differentiated roles often lead to significant differences in how participants understand the “object” of activity, that is, what it means to be a competent and well-prepared beginning teacher (Valencia, Grossman, Place, & Martin, 2009). Policies (rules) for compensation and decision authority are tied to each of these roles, which often lead to a fractionated and divisive sense of status, power, and inclusion in program decision making (community). The communication, collaboration, and coordination among all of these participants can be significantly affected by the qualities of tools (e.g., curriculum, instruction, and candidate assessments) used to carry out the work of the program. In short, the Engestrom framework offers an analytic tool that can be used to make the idea of “culture” as applied to a teacher education program more concrete and visible as the product of a highly dynamic interplay among multiple dimensions of program policy and practice. In the following section, we use this framework to describe changes in program culture drawn from our experiences in two teacher education programs undergoing significant efforts to use new kinds of candidate performance data in decision making. Our representation of the Engestrom framework, including the underlying theoretical constructs of cultural–historical activity theory from which it is derived, is necessarily limited here. More comprehensive explications of the framework, its underlying social theory, and the many contexts (and critiques) of its use in a variety of human service contexts may be found in recent papers by Daniels (2010); Ellis, Edwards and Smagorinsky (2010); Engestrom and Sannino (2010); Langemeyer and Roth (2006); and Roth and Lee (2007).


TWO CASE EXAMPLES


The case examples we present here may be understood as “instrumental” in the sense that we explore them as a means of “providing insight into an issue” (Stake, 2005, p. 437). The data we present in describing these cases are drawn from documents collected over the course of several years of our participation in two graduate-level teacher education programs situated at research-intensive universities identified by pseudonyms: Point Break University (PBU) and Olympic Vista University (OVU). During the time period from which these descriptions are drawn, both programs were undergoing significant processes of change and renewal related to issues of evidence and decision making. Our documentation of changes in these programs is based on a variety of data sources, including minutes and related artifacts of program planning meetings, interviews with faculty, changes in job descriptions and responsibilities, and changes in curriculum—both coursework and fieldwork aspects of the programs.1 In many cases, the examples we have used are drawn from data originally collected in previous studies (Duggan et al., 2008; Peck, Gallucci, & Sloan, 2010; Peck et al., 2009; Peck, Muzzo, & Sexton, 2012). We use examples of changes in policy and practice within these two programs not as an empirical argument, but as a heuristic strategy for addressing the question of what is a culture of evidence, taking note of ways in which the answer to this question may be similar and different across programs. We claim no warrant for identifying the practices observed in these programs as “exemplary.” We offer them only as concrete examples of how two programs have taken up some of the challenges around evidence and decision making that are such a salient part of the contemporary policy landscape in teacher education.


POINT BREAK UNIVERSITY


The Masters in Teaching program at PBU annually graduates approximately 100 teacher candidates, distributed equally between elementary and secondary education. The change process in this program was largely driven by state policy mandates requiring implementation of a new performance assessment as a program outcome measure. In the context of these new mandates, PBU participated with a consortium of public and private universities engaged in the development and implementation of the Performance Assessment for California Teachers (PACT) (see Pecheone & Chung, 2006 for a comprehensive description of this instrument and the history of its development). Our descriptions of organizational practices in PBU are drawn from several sources of data collected throughout the two-year period in which we studied the implementation process.


Narrative fieldnotes were made during observations of faculty meetings, small group work meetings, classes, and student work groups.

A series of two to three semistructured interviews were conducted over the course of the study with key informants.

Short “free-writes” were conducted with faculty at key meetings to gather data on their views and experiences of the implementation process.

Focus-group interviews were conducted with student groups to gather information about their experiences with PACT.


During this time, faculty analyzed new state policy mandates and developed new policies and practices related to collecting and using PACT outcome data (Peck et al., 2009, 2010).


OLYMPIC VISTA UNIVERSITY


The teacher education program at OVU is a Masters in Teaching program, annually producing about 130 teachers in elementary and secondary education. OVU was one of the institutions participating in the Teachers for a New Era (TNE) initiative sponsored by the Carnegie Corporation of New York. A primary goal of this project was the redesign of programs of teacher education based on “a respect for evidence” (Fallon, 2006). As part of a broader effort to develop policies and practices that incorporated several streams of evidence related to program outcomes (McDiarmid & Peck, 2012; Nolen, Horn, Ward, & Childers, 2011), the OVU program also adopted PACT (modified slightly to fit state program standards, and later transposed into the national Teacher Performance Assessment [TPA]) as one of its primary outcome measures. Our description of organizational practices at OVU is drawn from the following data sources (collected as part of the evaluation of the five-year TNE project):


Semistructured faculty interviews were conducted. (Two rounds of interviews were conducted with 11 faculty members who taught regularly in the teacher education program.)

Fieldnotes were collected during program renewal and planning meetings undertaken in the context of the TNE project. These meetings were documented over the course of four years.

Informal surveys of faculty experiences and viewpoints regarding the program renewal and planning process were performed. These surveys, which often consisted of brief written responses soliciting input in the context of decisions to be made in the renewal process, were conducted over the four years of program renewal and planning activity.

Artifacts of the renewal work, including meeting agendas and minutes, documents such as program policy proposals, and “design principles” for program changes were collected.


EMERGENT CULTURES OF EVIDENCE IN TWO PROGRAMS


In this section, we apply the activity theory framework to describe organizational practices at PBU and OVU and how they changed over time. We present this description and analysis in the three tables that follow, each of which is organized around the activity system parameters indicated in Figure 1. The descriptions of these programs and the changes they have undergone are intended to serve a heuristic function only—that is, as a “sacrificial draft” response to the question “what is a culture of evidence?” Our analysis anticipates, but does not itself constitute, a rigorous empirical investigation of the ways programs of teacher education may change as they strive to become more evidence-based.


PBU


Organizational practices at PBU have historically reflected many of the tensions that are well known in teacher education, including those that shape and sustain the problem of fragmentation among courses and between coursework and fieldwork placements. Program faculty and staff identified strongly with their roles within the program and were highly committed to the integrity of their individual practice within these roles. It was not unusual for program members to undertake significant efforts to evaluate their personal practice, and some faculty and supervisors also participated in informal “communities of practice” focused on coordination, evaluation, and improvement of their work. A significant feature of these collaborative activities was that they did not transcend role definitions: course instructors had little direct collaboration with field supervisors or with cooperating teachers. Individuals and small communities of practice often developed or revised tools to support specific types of work, but these tools were often not known, and very seldom used, across courses, much less across coursework and fieldwork in the program.


Introduction of a new tool (PACT) as a source of program-wide outcome data made information about candidate performance in the classroom more visible and available to all program faculty and staff (Peck et al., 2010). These new data raised serious concerns about what candidates were actually taking up from their coursework and implementing in their own teaching practice. Program faculty and staff were highly motivated to deal with the issues raised by these data, and undertook a variety of changes in their individual practice. At the same time, the expanded view of candidate learning outcomes made it clear that some issues could not be addressed within the constraints of coursework or field experiences alone. This led to an expansion of collaborative efforts, including the development of new tools for aligning coursework and fieldwork and for addressing specific areas of concern about the adequacy of candidate preparation. Other changes in organizational practice were undertaken as program members strove to create supports for new kinds of collaboration and for the ongoing use of PACT as a regular source of data about what candidates were learning. These included changing job descriptions and responsibilities, developing new protocols for program meetings, and revising compensation policies to support the work of data collection, analysis, and interpretation required by the new performance assessment. A significant effect of these changes was the emergence of more inclusive and broadly shared understanding and participation in the collective work of the program. As one faculty member put it,


Previously, we had a handful of people that had sort of—were powerful forces in dictating, you know, how the program was and where it might go, because they had the knowledge about the program, about the students, and those kinds of things. But now we have far more people with that same kind of knowledge, far more people participating in the process. And we get comments like we're more of a program. (faculty interview)


In Table 1, we describe practices at PBU and how they changed over time as faculty and staff began using a new source of evidence (PACT) to evaluate program outcomes.


Table 1. Organizational Practices at PBU

Subject: The identity of the program as a “collectivity”—the perspective from which program members see the work of the program
Historical practices: Four main groups of people worked in the program: career teacher educators (many of whom hold the doctorate), supervisors (generally experienced teachers, often retirees), tenure-line faculty, and graduate students. These groups maintained a distanced collegiality, but worked largely independently of one another.
Emerging practices: These groups still exist, but share a more concrete and unified view of their own work within the context of the program and a stronger sense of affiliation to the collective.

Object: The purpose of the work, as understood by program faculty, staff, and field supervisors
Historical practices: The objective of the program was preparation of competent teachers—but this was understood in a rather abstract way, and differently by each of the individuals and, to some extent, by each of the groups. For example, supervisors often defined the object of their work in terms of successful participation in the public school classroom; faculty often saw the object of their work in terms of candidate performance in their courses.
Emerging practices: The objective is preparation of teachers who can do specific things (as documented by PACT), in addition to broader dispositions and habits of practice. This constitutes a sharper and more concrete view of the “object” with clearer and more specific features, which is shared more broadly among faculty, supervisors, and cooperating teachers.

Instruments: The conceptual and material tools used to carry out the work of the program—e.g., curriculum frameworks, syllabi, observation protocols, and assessments
Historical practices: A variety of outcome measures were used, including a Candidate Performance Record (a checklist of candidate dispositions and skills completed in student teaching), credential portfolio (a collection of artifacts from coursework and fieldwork selected by candidates), course assignments, and program satisfaction surveys. Different tools were used by different groups working in the program.
Emerging practices: PACT is used as an overall program outcome measure, with resulting changes in the Candidate Performance Record, credential portfolio, and coursework assignments. All program working groups (e.g., faculty, field supervisors, and administrators) actively use PACT data in addition to other data sources to evaluate their individual and collective practice.

Rules: The explicit and implicit regulations, norms, and conventions that afford and constrain practices within the program
Historical practices: Compensation policies for supervisors focused on fieldwork only; program meetings focused on management and coordination of program processes (rather than analysis of outcomes).
Emerging practices: Regularized expectations for program-wide evaluation conversations focused on PACT data; supervisor job descriptions were rewritten to include participation in PACT scoring, data analysis, and decision making; faculty are expected to participate in PACT training and scoring.

Community: The values, beliefs, and ideologies held by program participants
Historical practices: Program members highly valued the integrity and effectiveness of their individual practice and strongly identified with the quality of the program; faculty often placed higher value on abstract/theoretical stances (e.g., social justice and constructivism); supervisors often placed higher value on successful participation in the classroom.
Emerging practices: Individuals and groups place value in the collective in more concrete ways (e.g., understand what others are doing and are concerned with how their individual practice relates to the whole). The idea of “evidence” is part of the lexicon, and using data to make decisions is identified as an important quality of the program. “That’s what we do here: we analyze data and make decisions.”

Division of labor: The differentiation of roles and responsibilities for doing the work of the program
Historical practices: Faculty/instructors taught and evaluated their courses individually; supervisors focused on “the fieldwork piece” and evaluated candidates within a “community of practice” that operated relatively independently of the faculty.
Emerging practices: “Everyone scores”—faculty and supervisors share responsibility for preparing candidates for PACT and for data analysis, decision making, and program outcomes as measured by PACT.



OVU


Historically, the characteristics and organizational practices of the OVU program were much like those at PBU and other university programs. Faculty, field supervisors, and teaching assistants strongly identified with the integrity and effectiveness of their own practice, and had historically engaged in systematic efforts to improve the work they were doing (Parker & Hess, 2001). At the same time, the connections between the practice of individuals, and between those in varying roles within the program, were tenuous, much as they were at PBU. A series of studies sponsored by the TNE project were undertaken to examine candidate learning outcomes across elements of the program (e.g., Nolen et al., 2011). Much as the PACT data did at PBU, these data served to challenge faculty assumptions about the collective efficacy of their work. A wide set of structural and curricular changes to the program ensued. One of these involved adopting PACT (with some minor modifications related to state policy standards) as a regular means of collecting candidate outcome data. As at PBU, additional changes were made to facilitate and support use of these new data sources in program meetings and decision processes. Participation in these meetings was broadened to more regularly include field supervisors and cooperating teachers as it became clear that many of the issues raised by the new outcome data implied the need for stronger collaboration across the activity contexts of the program. A general increase in attention to connections between coursework and fieldwork aspects of the program was reflected in minutes of faculty meetings and faculty interviews, as the struggles candidates were experiencing in re-contextualizing what they were learning in courses for use in their own teaching practice became more visible to the faculty (Horn, Nolen, Ward, & Campbell, 2008).


In Table 2, we present an analysis of the activity system for the teacher education program (TEP) at OVU. We describe changes in practices that were observed as the program participants used new kinds of outcome evidence collected through the TNE project (including PACT data), to make decisions about program structure, curriculum, and instruction.


Table 2. Organizational Practices at OVU

Subject: The identity of the program as a “collectivity”—the perspective from which program members see the work of the program
Historical practices: Loosely affiliated groups of faculty and staff were involved in TEP; faculty and field supervisors identified with their own work and operated relatively independently.
Emerging practices: Faculty, supervisors, and cooperating teachers share a stronger collective perspective on the program and work more collaboratively to achieve the goals of TEP.

Object: The purpose of the work, as understood by program faculty, staff, and field supervisors
Historical practices: Faculty viewed the object of their work in the program through the lens of course assignments and grades; supervisors used a separate framework, the “Goals and Targets,” for evaluating fieldwork. Neither group had a concrete understanding of the goals of the other.
Emerging practices: Course assignments are aligned with PACT; supervisors incorporate guiding questions for fieldwork observation and evaluation aligned with PACT.

Instruments: The conceptual and material tools used to carry out the work of the program—curriculum frameworks, syllabi, observation protocols, assessments
Historical practices: Candidates were evaluated via “silos” of course assignments and grades, and fieldwork observations; outcomes were measured through satisfaction surveys and observation measures with unknown reliability; a separate “Goals and Targets” framework was used to evaluate candidates in fieldwork, but not in the rest of the program.
Emerging practices: More reliable and valid measures of program-wide outcomes are used, including PACT and faculty studies (e.g., Nolen et al., 2011; Windschitl et al., 2010); course assignments and fieldwork evaluations are adapted to align with PACT; the separate “Goals and Targets” framework used in fieldwork is phased out; new program practices are developed to address problems identified through outcome measures (e.g., field-based “Studio Days” embedded in coursework).

Rules: The explicit and implicit regulations, norms, and conventions that afford and constrain practices within the program
Historical practices: Supervisors were paid for fieldwork only and met separately from faculty. Faculty accountability was based on coursework outcomes only; supervisor responsibilities focused on the “Goals and Targets” framework used only by them; faculty meetings were generally attended by faculty only and largely focused on program management issues.
Emerging practices: Field supervisors are paid to attend TEP division meetings and participate in program renewal deliberations; supervisors and faculty are both expected to incorporate a common framework (PACT) in their practice, aligning coursework assignments, projects, and activities with a common object in mind; cooperating teachers are regularly invited to faculty meetings. All groups participate in presentation, discussion, and decision making based on program outcome data.

Community: Values, beliefs, and ideologies held by program participants
Historical practices: Program values reflected cultural privilege of coursework over fieldwork, “theory” over practice; faculty interpreted their “authority” over the curriculum in the context of university governance policy; relatively little attention and value were placed on communication between supervisors and course instructors; strong identification with research and inquiry in general.
Emerging practices: Increased focus and value are placed on candidate practice in the classroom; increased identification with curriculum authority is based on data-grounded decisions; stronger valuation of field-based supervisors and cooperating teachers is based on participation in events such as TEP program meetings.

Division of labor: The differentiation of roles and responsibilities for doing the work of the program
Historical practices: Supervisors and faculty divided responsibilities for instruction and candidate assessment into fieldwork and coursework components; cooperating teachers were not directly involved in any aspect of curriculum decision making; outcome survey data were examined by program administration and state oversight committees only.
Emerging practices: Faculty and supervisors share instructional responsibility for preparing and supporting candidates for PACT; supervisors, course TAs, and faculty (to a limited extent) share data analysis work; faculty, supervisors, TAs, and some cooperating teachers jointly participate in interpretation of the data and related decisions about curriculum and instruction.


“WHAT IS A CULTURE OF EVIDENCE?” A CROSS-CASE ANALYSIS


In Table 3, we offer a more abstract description of the changes observed across the two programs, as well as an interpretation of some of the general impacts of these changes. While we ascribe certain kinds of impacts on the program to changes in specific dimensions of the activity system, it is important to recognize a key implication of Engestrom’s visual representation of the activity system (refer to Figure 1): causal relations are assumed to be multidirectional and transactional in nature. For example, while introducing new tools (e.g., TNE studies and PACT data) into each of these programs made new kinds of learning about program outcomes possible for faculty and staff, it was clear that program rules, community values, and divisions of labor strongly influenced the extent to which these opportunities for learning and development were taken up by program participants. A significant outcome of the learning process itself was the initiation of change, not only changes in the structure and curriculum of the program itself, but changes in the organizational practices that were used to carry out the work of the program. A notable example of this had to do with the program policies around the kinds of work expected of, and compensated for, field supervisors. As new kinds of outcome data made the disconnections between fieldwork and coursework expectations more visible in both programs, it became clear that it was critical to change role expectations and compensation policies for supervisors (and faculty) in ways that allowed them to collaborate more extensively. In both programs, this meant revising job descriptions rather than simply adding new responsibilities. In general, changes in roles and expectations reflected a new understanding of the work of the program focused around a more concrete and shared set of outcomes. Values and ideologies reflected in these programs as communities, both of which tended to privilege the knowledge of course-based faculty over that of field supervisors (Zeichner & Payne, 2011), also began to shift as the importance of field-based outcome data and deeper collaboration between fieldwork and coursework became more salient to both faculty and supervisors.


Table 3. General Changes in Organizational Practice and Related Impacts

Subject: The identity of the program as a “collectivity”—the perspective from which program members see the work of the program
Prior characteristics: Loosely affiliated groups of people are involved in TEP, each of which operates relatively independently; identification with individual practice.
Emerging characteristics: Stronger understanding of and identification with collective practice.
Impacts and trajectories of change: Increased sense of membership in the program as a whole; increased sense of responsibility for coordinating individual and collective practice.

Object: The purpose of the work, as understood by program faculty, staff, and field supervisors
Prior characteristics: Multiple definitions of program goals; little common understanding.
Emerging characteristics: Increased common and concrete understanding of the goals (object) of the program.
Impacts and trajectories of change: Increased alignment of individual work with collective goals.

Instruments: The conceptual and material tools used to carry out the work of the program
Prior characteristics: Course assignments and grades, field observation protocols, program satisfaction surveys, and other tools are situated in “silos” of individual practice.
Emerging characteristics: Increased use of common tools to document and evaluate candidate performance and effectiveness in the classroom.
Impacts and trajectories of change: The use of common tools creates a common language through which program outcomes are made more visible to faculty and staff. This motivates and facilitates increased attention to connections across courses and between fieldwork and coursework.

Rules: The explicit and implicit regulations, norms, and conventions that afford and constrain practices within the program
Prior characteristics: Faculty are expected to be responsible for individual coursework only; supervisors are expected to focus on a separate set of goals in the field; expectations are limited for participation in common meetings.
Emerging characteristics: Field supervisors are paid to attend faculty meetings and participate in program deliberations; supervisors and faculty are both expected to attend to common objectives in their practice; faculty meetings are focused on presentation and interpretation of common outcome data.
Impacts and trajectories of change: Expectations, structures, and supports for joint participation create opportunities for supervisors/faculty/staff to analyze common outcome data, develop common language, and jointly negotiate program changes.

Community: Values and beliefs of program participants
Prior characteristics: TEP values reflect privilege of coursework over fieldwork, “theory” over practice; strong identification with integrity of individual practice; strong identification with research and inquiry in general (although this was not typically extended to the context of the program as a whole).
Emerging characteristics: Increased value placed on candidate practice in the classroom; increased identification with data-grounded decision making in the program and with the program as an object of research; increased concern with collective effectiveness of the program.
Impacts and trajectories of change: Stronger commitment to collective, program-level action and achievement of program-level outcomes; stronger focus on linking course assignments and projects directly to candidate participation in the classroom; broader and more vigorous participation in program planning, coordination, and collaboration.

Division of labor: The differentiation of roles and responsibilities for doing the work of the program
Prior characteristics: Supervisors and faculty divide responsibilities for instruction and candidate assessment into fieldwork and coursework components; program outcome data (satisfaction surveys) are examined primarily by program administration and state oversight committees.
Emerging characteristics: Supervisors, faculty, and TAs share the work of data analysis and interpretation and jointly make decisions; faculty and supervisors share instructional responsibility for preparing candidates to achieve program objectives.
Impacts and trajectories of change: Increased common understanding of the assessment rubrics; shared understanding of program strengths and weaknesses and their implications for individual and collective action; increased alignment between fieldwork and coursework; more inclusive and democratic decision making.


A CULTURE OF EVIDENCE—HOW DO YOU GET ONE? SOME FEATURES OF THE ORGANIZATIONAL CHANGE PROCESS IN THE TWO PROGRAMS


Our observations about the organizational change processes we have described above do not represent the findings of an experimental evaluation. With that caveat in mind, we offer several comments about changes we observed in these programs that may be of value to others. First, in both programs, faculty were initially quite skeptical about the idea that new kinds of data would show the need for significant programmatic change. Both programs received generally favorable ratings on satisfaction surveys conducted with graduates and employers. So, the context of the early work in both programs was not (initially) one of needing to remediate perceived program deficits, but one of inquiry related to a general interest in the possibility of program improvement (Peck et al., 2010). Second, in each case, the data emerging from early pilot studies with relatively small numbers of candidates significantly challenged faculty and staff assumptions about what candidates were actually taking up from courses and implementing in their classroom practice. These experiences constituted what Engestrom (1987, 2001) has characterized as a “disturbance” in the activity system. In both programs, these early data were pivotal in engaging program members more vigorously with the challenges of implementing and using new sources of evidence in evaluating program outcomes. Third, we observed that program faculty and staff, whether their primary role in the program was focused on coursework or fieldwork, were highly responsive to the need for change in their individual and collective practice, once that need was visible to them in terms of concrete descriptions of candidate practice. While this observation is inconsistent with the widely held view that higher education faculty are resistant to change, it is nevertheless quite congruent with studies of other practitioners (including those as varied as fighter pilots and physicians) that show that practitioners are often highly responsive to data they view as valid and relevant to the contexts of their practice (Lipshitz & Popper, 2000; Naot, Lipshitz, & Popper, 2004; Popper & Lipshitz, 1998). Perhaps most fundamentally, our observations in these two programs suggest that creating a “culture of evidence” is not simply about collecting and analyzing data. In each of these cases, the use of new data sources to inform decisions and actions about the program required making substantive changes in a variety of organizational practices, all of which contributed to a sense of change in “culture.”


LIMITATIONS OF THE CASES


Several features of these two programs warrant explicit comment with regard to what we might learn from them. First, each was situated in the relatively rich intellectual and human resource context of a public research university. Among other things, this meant that strong traditions and values around inquiry were present within both programs. Second, the changes we observed in both programs took place in the context of significant external pressures. At PBU, these were constituted by new state policy mandates. At OVU, the pressures included the challenging obligations of the TNE project, which included strong press for making “evidence-based decisions” about program renewal. These and other contextual dimensions of the cases suggest caution in generalizing the specifics of our analyses—we do not assume that the same conditions obtain in other programs, nor that the changes we observed would necessarily unfold in the same way in another context. Rather, our suggestion here is that the analytic tools we have used to define and investigate the cultural features of these programs may have value in understanding the conceptual and pragmatic challenges involved in answering, within programs of teacher education, the question: “What is a culture of evidence?”


AND… SHOULD YOU WANT ONE?


The presumption that using data to make decisions will improve practice and program outcomes has strong appeal within the rational/technical worldview of most policymakers, the public, and many education researchers. And, as we noted in the introductory sections of our paper, the general idea that social policy can and should be based on empirical evaluation of “what works” has swept across the international landscape of many human service disciplines over the last decade, becoming a centerpiece of policy and practice in the fields of medicine, nursing, social work, clinical psychology, and even library science (David, 2002). However, it is worth noting that critiques of concepts and practices related to ideas like “evidence-based decision making,” “cultures of evidence,” and the “learning organization” have also emerged in each of these fields (Fenwick, 2001; Stevens, 2007), as well as in the field of teacher education (Cochran-Smith & BCET, 2009). Several issues are thematic to these critiques. One line of criticism focuses on the normative quality of the discourse in which these ideas are generally advanced: the assumption that decisions based on evidence are somehow inevitably better than those made on other grounds (for example, those made through democratic deliberation of moral or political values). The field is clearly in need of studies that link evidence-based decision making in teacher education programs to measures of teacher candidate effectiveness—that is, direct investigation of the hypothesis that better evidence leads to better programs. A second and related set of concerns has to do with the limitations of measurement methodologies, particularly the argument that what can be measured most expeditiously is not necessarily what is most important. A third set of concerns has to do with the displacement of local, richly contextualized professional judgment by generalized decision rules—the tension between what is generally the best treatment/policy, and what might be the best decision in this particular situation (Kravitz, Duan, & Braslow, 2004; Putnam, Twohig, Burge, Jackson, & Coix, 2002; Sidman, 1960). Finally, perhaps the most pervasive concern raised by critics of contemporary policy zeitgeists urging creation of “cultures of evidence,” “evidence-based decision making practices,” and “learning organizations” has to do with issues of power and voice (Parding & Abrahamsson, 2010; Vince, 2001). At issue are questions about whose values and interests are reflected in choices about the evidence to be admitted in policy debates. Whose interests are served by policy decisions made on the basis of those data? Within a “culture of evidence,” what concerns about policy and practice are likely to be systematically marginalized or silenced entirely?


We believe these are substantive questions and appropriate issues for serious and ongoing deliberation. Our own resolution of the tensions underlying them has less to do with decision practices themselves than with the motivational contexts in which they are enacted. Returning here to one of the foundational ideas underlying cultural historical activity theory (including the Engestrom framework we have used in our present analysis), we note that the meaning of an action or an organizational practice can only be interpreted within the motivational context in which it is embedded (Leontiev, 1978). Our tentative conclusion is that whether a “culture of evidence” is a good thing or a bad thing has much to do with whose point of view is taken in raising the question, and what the object/goals are for its use. In the programs we have described here, we believe faculty and staff have engaged the practical challenges of learning to use evidence in decision making from an “inquiry stance” (Cochran-Smith & Lytle, 1993; Peck et al., 2010), successfully appropriating new sources of information about program outcomes (that is, new tools), and using them to make substantive changes in their individual and collective practice. That is, faculty, staff, and administrative leaders in these programs have developed a set of organizational practices that more closely approximate what we might imagine as a “culture of evidence” in an attempt to improve the program in which they worked. However, we can imagine a quite different response to the kinds of pressures these programs experienced, and a very different trajectory of change. As we have noted elsewhere (Peck et al., 2010), it is entirely possible for a program to implement new data collection systems, hold meetings to review program outcome data, and produce regular program audit reports with the objective of complying with policy mandates, while at the same time avoiding substantive programmatic change. For example, Kornfeld, Grady, Marker, and Ruddell (2007) reported a case in which at least some of the organizational practices we have described in the present paper were implemented with the expressed objective of resisting what faculty interpreted as an unwarranted and destructive intrusion of the state into matters of program integrity. Drawing again on the Engestrom framework, the sense we make of this apparently paradoxical response is that it underscores the pivotal function the object of activity may play in determining whether or not the creation of a “culture of evidence” turns out to be a salubrious goal for programs of teacher education.


Acknowledgments


The authors express appreciation to Chrysan Gallucci, Tine Sloan, Mary Clevenger-Bright, and G. Williamson McDiarmid for their contributions to many of the ideas developed in this article.

This work was supported in part by Spencer Foundation Grant #201200045.


Notes


1. See Appendix A for a more detailed description of our data sources, methods of analysis, and findings for each of the cases.


References


Achenbach, T. M. (2005). Advancing assessment of children and adolescents: Commentary on evidence-based assessment of child and adolescent disorders. Journal of Clinical Child and Adolescent Psychology, 34(3), 541–547.


Argyris, C., & Schon, D. (1996). Organizational learning II: Theory, method and practice. Reading, MA: Addison-Wesley.


Bier, M., Horn, L., Campbell, S., Kazemi, E., Hintz, A., Kelley-Peterson, M., . . . Peck, C. (in press). Designs for simultaneous renewal in university–public school partnerships: Hitting the “sweet spot.” Teacher Education Quarterly.


Boreham, N., & Morgan, C. (2004). A socio-cultural analysis of organizational learning. Oxford Review of Education, 30(3), 307–325.


Brown, J., & Duguid, P. (2002). The social life of information. Cambridge, MA: Harvard Business School Press.


Brown, J. S., & Duguid, P. (1991). Organizational learning and communities of practice: Toward a unified view of working, learning, and innovation. Organization Science, 2, 40–57.


Cochran-Smith, M. (2005). Teacher education and the outcomes trap. Journal of Teacher Education, 56, 411–417.


Cochran-Smith, M., & Lytle, S. L. (1993). Inside/outside: Teacher research and knowledge. New York: Teachers College Press.


Cochran-Smith, M., & the Boston College Evidence Team. (2009). “Re-culturing” teacher education: Inquiry, evidence and action. Journal of Teacher Education, 60(5), 458–468.


Conant, J. (1963). The education of American teachers. New York, NY: McGraw-Hill.


Daniels, H. (2010). The mutual shaping of human action and institutional settings: A study of the transformation of children’s services and professional work. British Journal of Sociology of Education, 31(4), 377–393.


David, M. (2002). Introduction: Themed section on evidence-based policy as a concept for modernizing government and social science research. Social Policy and Society, 1(3), 213–214.


Duggan, T., Harris, K., Hintz, A., Li, M., Luttrell-Montes, S., Peck, C., . . . VonBeck, M. (2008, March). Organizational design for program renewal: Early career problems of practice as feedback for teacher educators. Paper presented at the annual meeting of the American Educational Research Association, New York, NY.


Dwyer, C., Millett, C., & Payne, D. (2006). A culture of evidence: Postsecondary assessment and learning outcomes. Princeton, NJ: Educational Testing Service.


Ellis, V., Edwards, A., & Smagorinsky, P. (Eds.). (2010). Cultural-historical perspectives on teacher education and development. New York, NY: Routledge.


Engestrom, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki: Orienta–Konsultit.


Engestrom, Y. (2001). Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work, 14, 133–156.


Engestrom, Y. (2008). From teams to knots: Activity–theoretical studies of collaboration and learning at work. New York, NY: Cambridge University Press.


Engestrom, Y., & Sannino, A. (2010). Studies of expansive learning: Foundations, findings and future challenges. Educational Research Review, 5, 1–24.


Estabrooks, C. (2007). A program of research on knowledge translation. Nursing Research, 56(4), 4–6.


Fallon, D. (2006, January). Improving teacher education through a culture of evidence. Paper presented at the sixth annual meeting of the Teacher Education Accreditation Council, Washington, DC. Retrieved from http://www.teac.org/membership/meetings/Fallon%20remarks.pdf


Fenwick, T. (2001). Questioning the concept of the learning organization. In C. Parrie, M. Preedy, & D. Scott (Eds.), Knowledge, power and learning (pp. 74–88). London, UK: Paul Chapman/Sage.


Horn, I. S., Nolen, S. B., Ward, C. J., & Campbell, S. S. (2008). Developing practices in multiple worlds: The role of identity in learning to teach. Teacher Education Quarterly, 35(3), 61–72.


Kornfeld, J., Grady, K., Marker, P., & Ruddell, M. (2007). Caught in the current: A self-study of state-mandated compliance in a teacher education program. Teachers College Record, 109(8), 1902–1930.


Kravitz, R., Duan, N., & Braslow, J. (2004). Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. The Milbank Quarterly, 82(4), 661–687.


Langemeyer, I., & Roth, W-M. (2006). Is cultural–historical activity theory threatened to fall short of its own principles and possibilities as a dialectical social science? Outlines, 8(2), 20–42.


Leontiev, A. N. (1978). Activity, consciousness, and personality. (M. J. Hall, Trans.). Englewood Cliffs, NJ: Prentice Hall. (Original work published 1975)


Lipshitz, R., & Popper, M. (2000). Organizational learning in a hospital. Journal of Applied Behavioral Science, 36(3), 345–361.


McDiarmid, W., & Peck, C. (2012, April). Understanding change in teacher education programs. Paper presented at the annual meeting of the American Educational Research Association, Vancouver, BC, Canada.


McDonnell, L., & Elmore, R. (1987). Getting the job done: Alternative policy instruments. Educational Evaluation and Policy Analysis, 9, 133–152.


Meyer, J., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony. American Journal of Sociology, 83, 340–363.


Miles, A., & Huberman, M. (1984). Innovation up close: How school improvement works. New York, NY: Plenum.


Moss, P., & Piety, P. (2007). Introduction: Evidence and decision making. In P. Moss (Ed.), Evidence-based decision making: 106th yearbook of the National Society for the Study of Education (pp. 1–14). Malden, MA: Blackwell.


Naot, Y., Lipshitz, R., & Popper, M. (2004). Discerning the quality of organizational learning. Management Learning, 35(4), 451–472.


National Research Council. (2002). Scientific research in education. Washington, DC: National Academy Press.


Nicolini, D., Gherardi, S., & Yanow, D. (2003). Knowing in organizations: A practice-based approach. New York, NY: Sharpe.


Nolen, S. B., Horn, I. S., Ward, C. J., & Childers, S. (2011). Assessment tools as boundary objects in novice teachers’ learning. Cognition and Instruction, 29(1), 88–122.


Orlikowski, W. (2002). Knowing in practice: Enacting a collective capability in distributed organizing. Organization Science, 13(3), 249–273.


Parding, K., & Abrahamsson, L. (2010). Learning gaps in a learning organization: Professionals’ values versus management values. Journal of Workplace Learning, 22(5), 292–305.


Parker, W., & Hess, D. (2001). Teaching with and for discussion. Teaching and Teacher Education, 17, 273–289.


Pecheone, R., & Chung, R. (2006). Evidence in teacher education: The Performance Assessment for California Teachers. Journal of Teacher Education, 57(1), 22–36.


Peck, C., Gallucci, C., & Sloan, T. (2010). Negotiating implementation of high-stakes performance assessment policies in teacher education: From compliance to inquiry. Journal of Teacher Education, 61(5), 451–463.


Peck, C., Gallucci, C., Sloan, T., & Lippincott, A. (2009). Organizational learning and program renewal in teacher education: A socio-cultural theory of learning, innovation and change. Educational Research Review, 4, 16–25.


Peck, C., Muzzo, M., & Sexton, P. (2012). Program implementation of the edTPA in Washington State. Unpublished manuscript, University of Washington.


Podgursky, M. (2004). Improving academic performance in U.S. public schools: Why teacher licensure is (almost) irrelevant. In F. Hess, A. Rotherham, & K. Walsh (Eds.), A qualified teacher in every classroom? Appraising old answers and new ideas. Cambridge, MA: Harvard University Press.


Popper, M., & Lipshitz, R. (1998). Organizational learning mechanisms: A structural and cultural approach to organizational learning. Journal of Applied Behavioral Science, 34(2), 161–179.


Putnam, W., Twohig, P., Burge, F., Jackson, L., & Cox, J. (2002). A qualitative study of evidence in primary care: What practitioners are saying. Canadian Medical Association Journal, 166(12), 1525–1530.


Roth, W-M., & Lee, Y-J. (2007). “Vygotsky’s neglected legacy”: Cultural-historical activity theory. Review of Educational Research, 77(2), 186–232.


Schein, E. H. (1993). On dialogue, culture, and organizational learning. Organizational Dynamics 22(2), 40–51.


Sidman, M. (1960). The tactics of scientific research. New York, NY: Basic Books.


Simon, H. (1991). Bounded rationality and organizational learning. Organization Science, 2(1), 125–134.


Stake, R. (2005). Case study research. In N. Denzin & Y. Lincoln (Eds.), The Sage handbook of qualitative research. Thousand Oaks, CA: Sage Publishing.


Stevens, A. (2007). Survival of ideas that fit: An evolutionary analogy for the use of evidence in policy. Social Policy & Society, 6(1), 25–35.


Teram, E. (2010). Integrating independent case studies. In A. Mills, G. Durepos, & E. Wiebe (Eds.), Encyclopedia of case study research (pp. 476–480). Thousand Oaks, CA: Sage Publishing.


Valencia, S. W., Martin, S. D., Place, N. A., & Grossman, P. (2009). Complex interactions in student teaching: Lost opportunities for learning. Journal of Teacher Education, 60(3), 304–322.


van Kammen, J., deSavigny, D., & Sewankambo, N. (2006). Using knowledge brokering to promote evidence-based policy-making: The need for support structures. Bulletin of the World Health Organization, 84, 608–612.


Vince, R. (2001). Power and emotion in organizational learning. Human Relations, 54(10), 1325–1351.


Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.


Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge, UK: Cambridge University Press.


Windschitl, M., Thompson, J., Braaten, M., Stroupe, D., Chew, C., & Wright, B. (2010). The beginner’s repertoire: A set of core practices for teacher preparation. Paper presented at the annual meeting of the American Educational Research Association, Denver, CO.


Young, K., Ashby, D., Boaz, A., & Grayson, L. (2002). Social science and the evidence-based policy movement. Social Policy and Society, 1(3), 215–224.


Zeichner, K., & Payne, K. (2011). Democratizing knowledge in teacher education through practice-based methods teaching and mediated field experience in schools and communities. Paper presented at the annual meeting of the American Educational Research Association, Denver, CO.


APPENDIX A: CASE STUDY METHODOLOGY


We conceptualized our methodological approach in this study as an example of what Stake (2005) has described as a “collective instrumental case study”:


I call it instrumental case study if a particular case is examined to provide insight into an issue, or to redraw a generalization. The case is of secondary interest, it plays a supporting role, and it facilitates our understanding of something else. . . With even less interest in one particular case, a researcher may jointly study a number of cases in order to investigate a phenomenon, population, or general condition. It is instrumental case study extended to several cases. They may be similar or dissimilar, redundancy and variety each important. They are chosen because it is believed that understanding them will lead to better understanding, perhaps better theorizing, about a still larger set of cases. (p. 437)


Teram (2010), writing in the Encyclopedia of Case Study Research, added this rationale for ex post facto integration of data from previous studies:


The integration of independent case studies within the same analytical framework is a way to expand understanding of a particular phenomenon. This integration is useful when independent studies have examined the same issue at different times or within different contexts. The rationale for integrating independent case studies is similar to that of comparative case studies. The compared studies, however, are not part of a predesigned research strategy, and the comparison emerges from an ex post facto realization by independent researchers that the insights derived from their studies can be enriched by comparative analyses. (p. 476)


In our study, we drew on a variety of data sources to construct two such “instrumental cases”—which together allowed us to develop a more concrete interpretation of the meaning of the “culture of evidence” construct in the context of practices we were able to document in two programs of teacher education. In each case, the data we drew upon for the present analysis had been collected in the context of previous studies (e.g., Bier et al., in press; Nolen et al., 2011; Peck et al., 2010; Peck et al., 2009) and program evaluations (McDiarmid & Peck, 2012; Peck, Muzzo, & Sexton, 2012) carried out as part of evidence-based program renewal work undertaken in each of the programs we studied. In reviewing these data, we relied on previous coding schemes where they were relevant to our current analysis. For example, in our earlier coding of faculty interviews, we had identified text segments referring to “course goals.” These data were examined in the present study because of their relevance to the “object of activity” as conceptualized by Engestrom. In other cases, we reviewed documents we had collected, but not previously coded, identifying segments related to issues such as staffing policies (Rules), job descriptions (Division of Labor), and other variables related to the Engestrom framework.
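For readers who want a more concrete picture of this re-mapping step, the minimal sketch below (illustrative only, and not part of the original analysis) shows one way previously coded text segments might be grouped under the dimensions of Engestrom’s framework; the codes, segment texts, and code-to-dimension mapping shown here are hypothetical and are not drawn from our data.

# Illustrative sketch only: hypothetical codes and segments, grouped by CHAT dimension.
from collections import defaultdict

# Hypothetical mapping from earlier coding categories to Engestrom's dimensions.
CODE_TO_DIMENSION = {
    "course_goals": "Object",
    "staffing_policies": "Rules",
    "job_descriptions": "Division of Labor",
    "faculty_perspectives": "Subject",
    "assessment_tools": "Instruments",
    "shared_values": "Community",
}

# Hypothetical coded segments: (code, text excerpt).
segments = [
    ("course_goals", "Faculty describe re-aligning course goals with the TPA."),
    ("staffing_policies", "Memo revises supervisor staffing to support data collection."),
    ("job_descriptions", "Supervisor job description adds data analysis duties."),
]

def group_by_dimension(coded_segments):
    """Group coded text segments under the CHAT dimension each code maps to."""
    grouped = defaultdict(list)
    for code, text in coded_segments:
        grouped[CODE_TO_DIMENSION.get(code, "Uncategorized")].append(text)
    return grouped

for dimension, texts in group_by_dimension(segments).items():
    print(dimension, texts)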


In the tables below, we describe the data sources, general analytic procedures, and examples of the data we used to construct our interpretations of each case.


Table A-1. Data Sources, Analysis, and Sample Findings at PBU

Subject: The identity of the program as a “collectivity”—the perspective from which program members see the work of the program
Data sources: Faculty interviews collected over an 18-month period (see Peck et al., 2009); follow-up faculty interviews collected 2 and 4 years after the initial study.
Analysis: Data collected in previous studies were reviewed to identify text segments describing faculty perspectives on the work they were doing in the program, and how these changed over time.
Example of what we found: “Previously we had sort of a handful of people that had sort of—were powerful forces in dictating, you know, how the program was and where it might go, because they had the knowledge about the program, about the students, and those kinds of things. But now we have far more people with that same kind of knowledge, far more people participating in the process. And we get comments like we're more of a program . . . because they themselves understand the program better.”

Object: The purpose of the work, as understood by program faculty, staff, and field supervisors
Data sources: Faculty interviews (as above); written documents produced by faculty during program development activities (e.g., program goal statements, and design and evaluation criteria).
Analysis: Interviews were analyzed to identify changes in faculty goals over time; written documents were reviewed to identify program goals expressed by faculty, and how these changed over time.
Example of what we found: “My course was perceived by the students, and probably by the supervisors and instructors, as . . . kind of outside (the program). But last summer, I sat down to re-conceptualize how the course would be taught with specific reference to the TPAs. I made these changes with greater integration of that course into TEP in mind.”

Instruments: The conceptual and material tools used to carry out the work of the program
Data sources: Course syllabi, assessment instruments, course assignments, and fieldwork observation protocols collected over the course of 18 months.
Analysis: Documents were compared over time to identify changes in tools used in coursework and fieldwork.
Example of what we found: A new program-wide lesson plan framework was developed as new data sources revealed the need to build stronger connections across courses and between coursework and fieldwork.

Rules: The explicit and implicit regulations, norms, and conventions that afford and constrain practices within the program
Data sources: Written program policy statements, meeting agendas and minutes, job descriptions, and compensation policies collected over an 18-month period.
Analysis: Program documents were reviewed and compared over time to identify relevant changes in program policies.
Example of what we found: Changes were made in job descriptions and compensation policies for field supervisors to support their participation in collecting and using new types of outcome data.

Community: Values and beliefs of program participants
Data sources: Interviews with faculty and supervisors in which they described what they viewed as important about their work and the work of others in the program.
Analysis: Data collected and analyzed in previous studies (Peck et al., 2009, 2010) were reviewed to identify text segments describing initial values and beliefs, and how these changed over time.
Example of what we found: “And then it's also just become a part of the culture of what we do. So anybody new who comes into the program comes into this culture. This is what we do here. We talk. We support. We look at data.”

Division of labor: The differentiation of roles and responsibilities for doing the work of the program
Data sources: Interviews with faculty and staff; minutes of program planning meetings; formal job descriptions.
Analysis: Interviews, meeting minutes, and job descriptions were analyzed to identify changes in roles and responsibilities that took place over time.
Example of what we found: Minutes of program meetings showed increased attendance and participation of supervisors and cooperating teachers in program deliberations and decision making over time.



Table A-2. Data Sources, Analysis, and Sample Findings at OVU

Subject: The identity of the program as a “collectivity”—the perspective from which program members see the work of the program
Data sources: Faculty interviews (n = 11); fieldnotes from program meetings.
Analysis: Data collected and analyzed for program evaluation and renewal during the TNE project were reviewed to identify text segments describing faculty perspectives on the work they were doing in the program.
Example of what we found: Excerpt from early faculty interviews: “I had no idea what was being handled in other courses.” Excerpt from an interview three years later: “We continue to talk with each other about how the courses that we teach relate to the TPA and we continue to, as a group, think about what our collective impact is on what students learn.”

Object: The purpose of the work, as understood by program faculty, staff, and field supervisors
Data sources: Faculty interviews (as above); meeting minutes and observation notes; documents produced by faculty and staff during program development activities.
Analysis: Interviews were analyzed as described above; written documents collected during the TNE program evaluation and redesign process were reviewed to identify program goals identified by faculty and compared for changes over time.
Example of what we found: A “Design Principles for Program Renewal” document produced by faculty recommends “changes in program structure and practice to emphasize integration of what students are learning.” Faculty interview excerpt: “The scores are sort of helpful, but mostly what we want to see is what do students really do. And so we need to look at the work, we need to look at the videos, we need to look at the writing that they do in order to really have information that’s useful for our teaching.”

Instruments: The conceptual and material tools used to carry out the work of the program
Data sources: Course syllabi, assessment instruments, course assignments, and fieldwork observation protocols collected over the course of 36 months.
Analysis: These documents were compared over time to identify changes in tools used in coursework and fieldwork.
Example of what we found: New induction-year seminars were developed and used for collecting outcome data from program graduates in their first years of teaching (Duggan et al., 2008); the Teacher Performance Assessment (PACT/TPA) was introduced as a classroom-based program assessment tool.

Rules: The explicit and implicit regulations, norms, and conventions that afford and constrain practices within the program
Data sources: Written program policy statements, meeting agendas and minutes, job descriptions, and compensation policies.
Analysis: Program policies were reviewed and compared over time to identify relevant changes.
Example of what we found: A TEP staffing plan memo showed changes in position descriptions and responsibilities to support data collection and analysis.

Community: Values and beliefs of program participants
Data sources: Interviews with faculty and supervisors in which they described what they viewed as important about their work and the work of others in the program.
Analysis: Faculty interviews, program philosophy statements, and program policies were reviewed to identify text segments describing initial values and beliefs, and how these changed over time.
Example of what we found: A revised program design document recommends “greater supports for students to integrate what they are learning in the context of complex problems of practice.”

Division of labor: The differentiation of roles and responsibilities for doing the work of the program
Data sources: Interviews with faculty and staff; minutes of program planning meetings; formal job descriptions.
Analysis: Interviews, meeting minutes, and job descriptions were analyzed to identify changes in roles and responsibilities that took place over time.
Example of what we found: Minutes and records of attendance at program meetings over three years showed increased participation of supervisors and cooperating teachers in program deliberations and decision making.









About the Author
  • Charles Peck
    University of Washington
    CHARLES A. PECK is a professor of special education and teacher education at the University of Washington. His research interests include policy implementation in teacher education and special education. His recent publications include the following articles: Peck, C., Gallucci, C., & Sloan, T. (2010). Negotiating implementation of high-stakes performance assessment policies in teacher education: From compliance to inquiry. Journal of Teacher Education, 61(5), 451–463; and Peck, C., Gallucci, C., Sloan, T., & Lippincott, A. (2009). Organizational learning and program renewal in teacher education: A socio-cultural perspective on learning, innovation and change. Educational Research Review, 4, 16–25.
  • Morva McDonald
    University of Washington
    MORVA MCDONALD is an associate professor of teacher education at the University of Washington. Her research interests are organizational issues in teacher education. Her recent publications include the following articles: McDonald, M. (2011). Social justice teacher education and the case for enacting high leverage practices. Teacher Education & Practice, 23(4) (Special Issue: A Social Justice Imperative for Teacher Preparation and Practice). McDonald, M., Tyson, K., Brayko, K., Bowman, M., Delport, J., & Shimomura, F. (2010). Innovation and impact in teacher education: Community-based organizations as field placements for preservice teachers. Teachers College Record.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS