Challenges of Assessing Value Added to PK-12 Education by Teacher Education


by Robert F. McNergney & Scott R. Imig - October 26, 2006

This paper situates one university’s efforts to respond to challenges presented by the Teachers for a New Era program within the larger discussion about teacher education and value-added assessment. Teachers for a New Era (TNE) is a multi-million dollar, multi-year program designed to explore the value added to PK-12 education by colleges and universities by way of their teacher education programs. The Carnegie Corporation of New York and both the Ford and Annenberg foundations fund the program (with matching funds from participating institutions). TNE supports 11 institutions of higher education and their school partners in efforts to determine what, if anything, teacher educators do to help teachers help students learn.

The authors discuss a model of value-added assessment in terms of its provenance and its unique application at one institution. They present four critical issues investigators must address as they seek and interpret evidence on program effectiveness—those of accuracy, utility, feasibility, and propriety. Finally, the authors explicate these issues in terms of particular challenges that have arisen at their institution. The intent is twofold: to encourage others to scrutinize their work and to offer guidance to teacher education researchers elsewhere who are in similar circumstances.

Like it or not, teacher educators must face the specter of being judged irrelevant if they cannot demonstrate that they add value to the PK-12 educational enterprise. Teachers for a New Era (TNE) is a multi-million dollar, multi-year program intended to shed light on the value added to PK-12 education by colleges and universities via the preparation of teachers.  The Carnegie Corporation of New York and both the Ford and Annenberg foundations fund the program (with dollar-for-dollar matching from participating institutions). TNE supports 11 institutions of higher education and their school partners in efforts to determine what, if anything, teacher educators do to help teachers help students learn.1


We use this opportunity, first, to situate our own TNE efforts at the University of Virginia within the larger discussion about teacher education and value-added assessment now under way.  We discuss one model of value-added assessment—its provenance and its unique application at our own institution.  Second, we present four critical issues teacher education researchers must address as they engage in value-added assessment. These emanate from within and outside the profession and vary in their prominence in stakeholders’ minds. Third, we explicate these issues in terms of particular challenges that have arisen at our own institution—challenges that are as diverse as they are vexing.  In doing so, we hope both to encourage others to scrutinize our work and to offer guidance to teacher education researchers elsewhere who are in similar circumstances.


The research challenge is contained explicitly in Design Principle “A” of the TNE program, which prescribes that decisions about teaching, learning, and teacher preparation must be driven by “evidence.” Dan Fallon, chair of the Education Division of the Carnegie Corporation, and his colleagues have been reluctant to prescribe exactly how evidence should be defined.  Yet it is clear that the metric of PK-12 student academic learning, as represented by student test scores, must drive exploration. That is not to say, as we shall note below, that other metrics are out of bounds or irrelevant.  Principle “A” is meant to foster the search for knowledge of links between teachers and students, and for that knowledge to guide the assessment and practice of teacher education. All schools, colleges, and departments of education, not just TNE institutions, face increasing pressure to attend to evidence for program legitimacy.


THE CONTEXT OF VALUE-ADDED ASSESSMENT AND TEACHER EDUCATION


In 2002, then-Secretary of Education Rod Paige criticized teacher education and emphasized the need to document the value that it adds to PK-12 public schools. Paige declared education courses to be the “Achilles heel of the certification system.”  He argued that mandated pedagogical courses “scare off talented individuals while adding little value” (Paige, 2002, p. 40). The invidious comparison catches one’s eye, but the key phrase in Paige’s comments was “adding little value.”  In the No Child Left Behind legislation, the value of teaching was to be measured in terms of evidence produced by “scientifically based research.”  Education was to become an “evidence-based field.”  If teachers had students who demonstrated academic achievement, then value would be added.  


The question of whether teacher effectiveness was a function of participation in a teacher education program loomed large.  As the AERA Panel on Research and Teacher Education has demonstrated, teacher education research has not been a particularly fruitful endeavor (Cochran-Smith & Zeichner, 2005).  Through the years, neither the funding nor the results achieved by such work have been particularly notable.  Yet teachers—who they are and how they are prepared—are routinely characterized as one of the most influential forces in children’s lives. With TNE came an unusual opportunity to verify this belief.


A small group of faculty at the University of Virginia worked in the summer of 2002 to construct a conceptual model to guide research on teacher education within the context of value-added assessment (McNergney, Cohen, Hallahan, Kneedler & Luftig, 2002). Mitzel (1960) first envisioned the basic dimensions of the model in Figure 1 nearly a half century ago, and Dunkin and Biddle (1974) elaborated Mitzel’s ideas.  Figure 1 depicts a set of relationships between and among variables meant to represent the world of research on teacher education, teaching, and learning.  Although we describe the steps in the model in terms of our own school of education, they are intended to represent most any university-based teacher education program.




Figure 1.  A Model for Teacher Education Research and Evaluation.


Step 1: University students enter the School of Education in their second year from the College of Arts and Sciences and by transferring from other institutions of higher education.  The first assessment step is to describe their demographic characteristics such as age, ethnicity, gender, scholastic aptitude, and beliefs about teaching and learning.  Effort is devoted to describing students’ dispositions, feelings of efficacy, implicit philosophies of teaching and learning, knowledge of disciplines, technology skills, and their reasons for seeking careers in teaching, particularly when they are “late deciders.”  Descriptions of the programs in which university students engage, and their performances in these programs, constitute what might be thought of as their training characteristics.  Profiles of arriving students permit comparisons to students at other institutions of higher education, to students in other programs at the university, and to one another over time. These profiles form the basis on which the university can track its students longitudinally.


Step 2: Once in education school programs, students demonstrate their abilities to acquire and apply pedagogical content knowledge in their chosen fields. Clinical applications of pedagogical content knowledge, in both live and simulated or case-based settings, provide indications of teaching competence or the effects of teacher education programs.  University students’ performances yield information on professional teaching processes or teaching performance in controlled settings that is analogous to the kind of information derived from clinical practice in medical education.  Here, program faculty can examine how students plan their lessons, how they deliver instruction, and how they evaluate the results of their efforts.  These investigations are conducted in different contexts (subject matters, grade levels, schools, and communities).


Step 3: Program faculty study effects that students exert on PK-12 student learning in different contexts. A longitudinal study of our students—following them step by step through their years of preparation and in their initial years as teachers—allows investigators to describe, over time, who students are, how they perform, and what PK-12 students learn. It is also possible to estimate university students' effectiveness using cross-sectional approaches that compare people at different points in their careers.  This strategy permits estimates of the relative influence of teachers' characteristics, of their teacher education programs, and of the schools in which they are employed based on PK-12 students’ academic test scores.
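To make this concrete, a cross-sectional estimate of the kind described above can be sketched as a covariate-adjusted regression in which end-of-year scores are modeled as a function of prior scores, preparation program, and employing school. The sketch below is a minimal illustration only: the column names, the synthetic data, and the simple ordinary least squares specification are our assumptions for exposition, not the project’s actual analysis, which would demand richer controls and, likely, multilevel models.

```python
# A minimal sketch of a covariate-adjusted "value-added" contrast.
# All column names and the data are hypothetical; real analyses would use
# richer controls and multilevel (hierarchical) models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "pretest": rng.normal(50, 10, n),                      # prior achievement
    "program": rng.choice(["prog_A", "prog_B"], n),        # preparation program
    "school": rng.choice(["sch_1", "sch_2", "sch_3"], n),  # employing school
})
# Synthetic posttest: prior achievement plus small program and school effects.
df["posttest"] = (
    df["pretest"]
    + np.where(df["program"] == "prog_A", 2.0, 0.0)
    + df["school"].map({"sch_1": 0.0, "sch_2": 1.0, "sch_3": -1.0})
    + rng.normal(0, 5, n)
)

# Regressing end-of-year scores on prior scores, program, and school yields
# a crude program contrast: the C(program) coefficient.
model = smf.ols("posttest ~ pretest + C(program) + C(school)", data=df).fit()
print(model.params)
```

The point of the sketch is the partitioning of influence, not the particular estimator: teacher characteristics, preparation program, and school enter as separate terms whose relative contributions can be compared.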


The bi-directional arrows on the model suggest that the steps are interactive in the sense that they exert reciprocal effects on one another.  The arrows also suggest that it is both possible and desirable to retrace steps—from Step 3, to Step 2, to Step 1—to create a programmatic feedback loop.  Evidence on PK-12 student learning can be used to inform teaching performance.  Investigations of teaching performance can, in turn, influence selection and preparation of teacher candidates. As Figure 1 suggests, effects between factors may be moderated by teacher and student thinking.


RESEARCH ISSUES AND CHALLENGES


The limitations of conceptualizing and conducting research as we describe it here—that is, in the process-product tradition—were well documented some years ago (Shulman, 1986).  Even though studies were conducted in natural settings, critics questioned the external validity of the work.  Others drew attention to the raw empirical nature of the approach—the tendency to dwell on teaching behaviors at a molecular level—which tended to diminish the potential explanatory power of theory.  Still others noted what they considered to be an over-reliance on standardized test scores for measuring student success.


Nonetheless, as Shulman observed, the process-product paradigm was remarkably durable.  Process-product researchers advanced the conversation on the factors that contributed to school achievement.  The approach was consistent with research in behavioristic psychology.  Researchers concentrated on teaching and learning in natural settings, not in laboratories.  People often perceived the implications for policy and practice as clear and straightforward. And in the end, Shulman argues, process-product research worked—that is, researchers met the aims they set out for themselves.


It is also important to note that while a process-product view suggests a set of critical questions about teacher education, teaching, and learning, the model does not address all the critical issues.  Cochran-Smith (2006) and Erickson (2005) note many issues related to teacher preparation that no single strategy (e.g., randomized clinical trials or process-product research) will address.  These include, but are not limited to, the following:


Are there variations in teacher preparation associated with teachers’ retention in hard-to-staff and other schools?  What experiences do teacher candidates of color have in mostly White teacher education programs and institutions?  Is this important?…What are the school conditions that make it possible for new teachers to take advantage of the resources available to them?  How do teachers know if their pupils are learning, and how do they use that information to alter curriculum and instruction? (Cochran-Smith, 2006, p. 10)


Our own work has been guided by a set of four critical issues that impinge on the quality of the evidence for teacher education program legitimacy (Joint Committee on Standards for Educational Evaluation, 1981).  These must be resolved over time if research and evaluation are to shed light on the value teacher education adds to the educational enterprise.  We pose them here as questions:  (1) Does teacher education research and evaluation produce accurate information?  (2) Can research and evaluation produce useful results—results that will guide improvement of the educational enterprise?  (3) Is it feasible to conduct teacher education research and evaluation within and across settings?  (4) Is it possible to do the kind of teacher education research and evaluation that is deemed informative and also operate within the bounds of propriety? As we explain below, we have only begun to address these issues in our own work.


Does Teacher Education Research and Evaluation Produce Accurate Information?


Our colleague Eric Bredo (in press) has observed that sometimes research and evaluation suffer from the prevalence of Type III Errors.  These are failures at the outset to ask the right questions, or those questions that are most germane.  Some would argue that accuracy in teacher education program evaluation has been plagued by Type III Errors, leading researchers to seek the wrong kind of information when judging the value of teacher education.


Accuracy means in part that we must consider the value of teacher education in terms of PK-12 student learning. This concern has been reinforced across the nation by the No Child Left Behind legislation and efforts to judge the value of education in terms of PK-12 student learning.  The measures of student learning most often of interest, however, do not lend themselves to the assessment of pre-service teacher education programs.  To put it another way, the use of students’ scores on standardized achievement tests leads us to ask the wrong questions when investigating program efficacy.  Why?  Pre-service programs exercise no immediate influence on these scores.  PK-12 students’ standardized achievement test scores may be useful in follow-up studies of teacher education program graduates—or studies of teaching and learning once teachers leave universities and assume full-time employment.  But even then, results may be confounded by induction programs, by other efforts aimed at in-service teacher development, and/or by the influences of teacher socialization.


We have tried instead to develop highly focused measures of short-term academic learning, in particular, PK-12 students’ mastery of concepts pertinent to data representation and interpretation (statistics), and to use them during the course of pre-service programs.  Despite the difficulties of creating and using such measures, they appear to offer reasonable options for assessing the effects of pre-service teachers’ training and behavior on student learning.
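As one illustration of how such a focused, short-term measure might be scored, the sketch below computes a Hake-style normalized gain on a hypothetical 10-item inventory covering data representation and interpretation. The metric choice and the item count are our assumptions for exposition, not features of the instruments themselves.

```python
# A sketch of scoring a short pre/post concept measure. The 10-item length
# and the normalized-gain metric are illustrative assumptions.
def normalized_gain(pre_correct: int, post_correct: int, n_items: int = 10) -> float:
    """Hake-style normalized gain: the share of possible improvement realized."""
    if pre_correct == n_items:       # already at ceiling; gain is undefined
        return float("nan")
    return (post_correct - pre_correct) / (n_items - pre_correct)

# Example: a pupil moves from 4 to 7 correct out of 10.
print(normalized_gain(4, 7))  # 0.5, i.e., half the possible improvement
```

A gain-based score of this kind credits improvement relative to where each pupil started, which matters when pre-service teachers work with classes of very different entering ability.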


If PK-12 student learning is the coin of the realm, it is not the only currency in the marketplace. Teacher educators must be judged on factors within their control that will ultimately contribute to student learning.  As Figure 1 suggests, there are various ways university teacher education programs influence PK-12 schools, from the moment a student enters the university until he or she eventually assumes a teaching position.  Universities attract people with certain qualities into teaching.  Teacher education programs provide content knowledge and pedagogical skills to these would-be teachers, who in turn develop a repertoire of strategies and behaviors that enable them to plan, teach, and evaluate their students.  Teachers apply what they have learned to engage and motivate students, and ultimately, to enhance students’ academic learning.  All along the way, this chain of events provides opportunities to gather evidence on the value added to PK-12 education by teacher educators.


To acquire an accurate picture of the value added by teacher education, we must seek answers to a host of questions:


What are the intellectual, physical, emotional, and social capacities of pre-service teachers that are likely to influence their teaching behavior? How well have they mastered the content they will teach? Do pre-service teachers have the proclivity to continue learning?  

How do teachers in preparation demonstrate teaching behavior that has been shown to relate to or cause PK-12 student learning?  Can they plan instruction, implement these plans, and assess their work in ways that are likely to influence students’ performances?

How do pre-service teachers’ students think about teaching and learning—are they motivated, challenged, and reinforced by teachers’ actions?  What do pre-service teachers’ students learn?


Can Research and Evaluation Produce Useful Results?


A teacher education program involves a number of audiences whose perspectives on what constitutes “useful” often vary. The audiences include the university student participants and their parents who pay tuition; the PK-12 students these university students teach in field experiences; the parents, teachers, and principals of the schools where the pre-service teachers work; and the teacher educators and university administrators responsible for program operations.  State and local education authorities also have a stake in the operation of teacher education programs.  As such, these audiences can be expected to influence the kind of questions asked and the type of evidence collected by researchers.


Figure 1 suggests that teacher characteristics (including educational background) affect teaching behavior; teaching behavior, in turn, influences PK-12 student learning. As noted above, the steps exert reciprocal influences on one another, and evidence can flow backward—from Step 3, to Step 2, to Step 1—in a programmatic feedback loop.  Regardless of how one views the flow of influence in the full model, Step 2 is the linchpin in the process: effective teaching behavior influences or adds value to PK-12 student learning, and effective teacher education yields teachers who behave in ways that influence or add value to student learning.


To acquire useful information on teaching behavior at our own institution, we have relied primarily on two classroom observation systems:  the Teaching Performance Record (TPR) and the Classroom Assessment Scoring System (CLASS).  CLASS is a set of 10 observation rating scales that have been used in classrooms nationally (Pianta, La Paro, Payne, Cox, & Bradley, 2002). The CLASS captures data on the social, instructional, and managerial elements of teachers’ interactions with their students. It is designed to address questions of educational and teaching quality.  CLASS provides a common metric, vocabulary, and descriptive base for teaching and classroom processes, primarily in pre-K through grade 5.  The intent is to measure variation in both instructional and social dimensions of the classroom environment through observation.  In so doing, information from the CLASS provides a mechanism by which classroom practices can be gauged and improved.


The CLASS was developed based on the scales used in multiple large-scale classroom observation studies, including the NICHD Study of Early Child Care. There are four major components of the CLASS: emotional climate, management, instructional support, and instruction in content areas.  Within each of these components, there are three or four seven-point scales used to assess various dimensions of behavior.
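To make the structure concrete, the sketch below aggregates hypothetical ratings into component means. The four component names come from the description above; the dimension labels and the mean-based summary are placeholder assumptions, not the published CLASS scoring procedure.

```python
# A sketch of aggregating CLASS-style ratings. Component names follow the
# text; the dimension keys are placeholders, not actual CLASS dimensions.
from statistics import mean

observation = {
    "emotional_climate":     {"dim_1": 5, "dim_2": 6, "dim_3": 4},
    "management":            {"dim_1": 6, "dim_2": 5, "dim_3": 6},
    "instructional_support": {"dim_1": 3, "dim_2": 4, "dim_3": 4, "dim_4": 3},
    "content_instruction":   {"dim_1": 5, "dim_2": 4, "dim_3": 5},
}

# Each dimension is rated on a seven-point scale; here a component score is
# summarized as the mean of its dimension ratings.
for component, ratings in observation.items():
    assert all(1 <= r <= 7 for r in ratings.values())
    print(f"{component}: {mean(ratings.values()):.2f}")
```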


The Teaching Performance Record (TPR) is a system that has observers record “signs” of behaviors within defined time periods (University of Virginia, 2005; McNergney, 2006).  Observers record the presence or absence of some 105 items of teaching behavior; they do not judge or rate their observations. The records are later scored in various ways.  For instance, patterns of behavior can be identified that represent a teacher’s attention to constructs that are relevant to PK-12 student learning.  These include Opportunity to Learn, Goal Setting, Scaffolding Pupil Engagement, Achievement Expectations, Application and Practice, and the like (Brophy & Good, 1986; Brophy, 1999; McNergney, 1988). The records can also be scored by reorganizing teaching behaviors according to programmatic goals and state teaching standards required for certification.
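The scoring logic can be sketched as follows. The construct names are those listed above; the mapping of items to constructs and the proportion-based score are hypothetical stand-ins for the TPR’s actual scoring keys.

```python
# A sketch of TPR-style scoring: observers record the presence (1) or
# absence (0) of behavior "signs," and records are later scored by grouping
# items into constructs. The item-to-construct mapping is hypothetical.
N_ITEMS = 105  # the TPR records some 105 items of teaching behavior

constructs = {
    "opportunity_to_learn":         [0, 1, 2, 17],
    "goal_setting":                 [3, 4, 22],
    "scaffolding_pupil_engagement": [5, 6, 7, 30],
}

def score_record(record: list[int]) -> dict[str, float]:
    """Return, per construct, the share of its signs observed."""
    assert len(record) == N_ITEMS and set(record) <= {0, 1}
    return {
        name: sum(record[i] for i in items) / len(items)
        for name, items in constructs.items()
    }

example = [0] * N_ITEMS
for i in (0, 2, 3, 5, 6):  # signs noted during one observation window
    example[i] = 1
print(score_record(example))
```

Because observers record rather than rate, the same raw record can be re-scored against different keys, which is what allows the behaviors to be reorganized around programmatic goals or state teaching standards.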


The TPR also offers opportunities to collect data on the PK-12 students in the classroom. Observers note signs of student involvement in lessons and their attention to classroom rules of behavior.


The TPR has laptop, hand-held, and paper-and-pencil versions.  TPR training involves the use of a video database containing examples of teaching behaviors of interest as they are demonstrated in various classroom contexts—that is, in classrooms that vary by grade, academic level, subject area, number of students, and cognitive demands of the lesson.


Both CLASS and TPR have growing bodies of validity evidence.  Clinical supervisors use both instruments to provide feedback on teaching performance in clinical settings and online.  The instruments are being used at various points in the pre-service program to improve teaching practice.


Is It Feasible to Conduct Teacher Education Program Evaluations Within and Across Settings?  


The feasibility of teacher education program evaluation depends on a number of factors.  Within-group, within-program, or within-setting program evaluations require the necessary data.  Evaluations across settings, programs, or groups require multiple sites with comparable data. An evaluation plan must be realistic, prudent, and implemented diplomatically if it is to succeed.  This means placing a premium on practicality and minimizing disruption in on-going programmatic activities. The evaluation needs to produce information from various sources and of sufficient value to justify the resources expended—cooperation and frugality count.


One way we have begun to address the feasibility issue with regard to the collection of university student data is via the establishment of a research pool similar to that long used in psychology departments. Our School of Education instituted a student research requirement and formed a research pool of student participants. The requirement stipulates that newly enrolled teacher education students either participate in five hours of research studies or complete substitute assignments each year for the life of their programs.


TNE, in cooperation with the office of teacher education, creates opportunities for students to participate in research studies designed to address various aspects of the model in exchange for credit toward their research requirement. Of the 165 students entering the education school in the first year of the requirement, 153, or 93 percent, joined the pool. The remaining students will pursue research studies independently, or they will complete five hours’ worth of equivalent work. Year Two had a 95 percent participation rate.


To participate in the pool, students must consent to the use of specific core data on themselves and participate in a set of common studies. The core data includes records of high school and college courses taken, grades received, and SAT, PRAXIS, and GRE test results. Pool participants also agree to complete a collection of common studies intended to capture a range of information on each student.   Common studies include: (1) an In-Take Survey designed to gather data on students’ attitudes, abilities, and beliefs about education; (2) The Interactive Teaching Assessment that simulates real-time interactive teaching in a middle-school classroom to gauge students’ pedagogical content knowledge; (3) a Q-Sort Exercise that uses a card-sort activity to measure the qualities, attributes, and beliefs of teacher candidates; and (4) the Personal Experiences Inventory that measures various aspects of students’ lives that may influence their future as teachers.  These measures of characteristics, in turn, can be examined in relation to teaching behavior (as measured by CLASS and TPR).
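One might represent a pool participant’s core record along the following lines. The field names and example values are hypothetical; only the data elements themselves (courses, grades, SAT/PRAXIS/GRE scores, and common-study results) come from the description above.

```python
# A sketch of a pool participant's core record. Field names and values are
# hypothetical; the data elements mirror those described in the text.
from dataclasses import dataclass, field

@dataclass
class PoolParticipant:
    participant_id: str
    consented: bool
    courses: list[str] = field(default_factory=list)       # high school and college courses
    grades: dict[str, str] = field(default_factory=dict)   # course -> grade received
    test_scores: dict[str, int] = field(default_factory=dict)     # SAT, PRAXIS, GRE
    common_studies: dict[str, dict] = field(default_factory=dict)  # study -> results

p = PoolParticipant("uva_0001", consented=True,
                    test_scores={"SAT": 1280, "PRAXIS": 178})
p.common_studies["in_take_survey"] = {"efficacy": 4.2}  # hypothetical result
print(p.participant_id, p.test_scores)
```

Keeping characteristics, common-study results, and later observation scores (CLASS, TPR) keyed to a single participant record is what makes the longitudinal comparisons suggested by Figure 1 possible.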


Can Teacher Education Research and Evaluation Operate Within the Bounds of Propriety?


What is proper, right, and ethical in the conduct of teacher education research and evaluation is everyone’s concern, but formal university attention to this issue falls under the purview of the Institutional Review Board (IRB).  The IRB sanctions and monitors all research performed under the aegis of TNE, as it would any research in any college or university setting.  The board comprises 17 faculty members from across the university.  The primary function of the IRB is to ensure compliance with federal and university regulations for non-medical, behavioral research involving human participants. The IRB standard operating procedures regarding the review of human participant research ensure that the risks of such research are compatible with the expected benefits, and that participants always have a free and fully informed choice before giving their consent to participate.


One of the first issues on which we consulted the IRB was the formation of the education school participant pool. Our communications revolved around the board’s concerns about the possible threat of coercion of university students.  To address IRB concerns, we developed the following operating procedures:  (1) The idea of the pool was to be introduced to the students by a faculty member from the College of Arts and Sciences, and the actual five-hour research expectation was to become a program requirement established by the Office of Teacher Education; (2) Students who did not wish to participate in the pool had to be given alternative assignments that would allow them to meet the education school’s new research requirement; (3) For every pool study, whether administered on-line or face-to-face, the investigator would have to seek and receive IRB approval; (4) Once concluded, all investigators had to offer a debriefing for participants; and lastly, (5) Every study had to be of educational value for its participants.


We also sought IRB approval to use the TPR and CLASS with all student teachers in order to track their teaching behavior over time. Successful implementation of this plan required approval of both the Institutional Review Board and the teacher education faculty.  The IRB was concerned primarily with the quality of supervision the student teachers would receive; that is, the board wanted to ensure that the normal duties of supervisors to provide relevant feedback to students would not be disrupted by their efforts to collect data that would be used for research purposes. This possible threat to the quality of supervision was mitigated by faculty acceptance of the systems as natural parts of their programs.  In other words, supervisors would begin to use these systems as a natural consequence of their jobs, providing feedback based on results from the use of the systems.  To learn the systems, all supervisors, who are also doctoral students, now take a course on evaluation of teaching.  The course offers training in the use of the CLASS and the TPR, as well as other methods of observation and feedback. We have taken one step toward making systematic observation with common elements across programs the rule, not the exception.


DISCUSSION


As Cochran-Smith (2006) has observed, there are many positive aspects to the emerging evidence focus of teacher education.  The movement holds the potential to transform both research and practice in the field. The concomitant danger is that, if researchers and policy makers take too narrow a perspective on what constitutes evidence, progress will be threatened. The federal government’s reliance on randomized clinical trials as the pre-eminent strategy for investigating teaching and teacher education drives Cochran-Smith’s concern. Others worry about the potential of policy makers to misuse William Sanders’s (1998) work on value-added modeling (VAM).  “Misuse” has been characterized as over-reliance on PK-12 student academic achievement as the metric of teacher education success, while simultaneously deemphasizing the importance of teacher preparation (Lasley, Siedentop, & Yinger, 2006).


Nonetheless, in the end, teacher education must be and will be defined at least in part in terms of PK-12 student learning. Student learning will “pull” the system to attend to children’s needs. Knowledge of student learning may be expected to guide how we select for and develop pre-service teachers’ characteristics, how we encourage teachers to plan, teach, and evaluate, how we use PK-12 students’ thinking about facing academic challenges, and ultimately, how teacher educators judge their efforts. But student learning can and must be expected to exert reciprocal influence in order to improve teacher education practice.  Knowledge of student learning must “push” the system back the other way, driving teachers to think about their actions in terms of the concomitant influences on students and shaping programs in terms of the behaviors of their students. Like other professions, teacher education will train to achieve particular ends and use knowledge of results to shape training.


Skeptics will be convinced of the value of teacher education when researchers and evaluators produce evidence of effectiveness.  Such evidence will come in a variety of forms at a variety of points during and after training.  To obtain evidence will require careful attention to the accuracy, utility, feasibility, and propriety of research and evaluation. Investigators must ask the right questions in order to be able to offer directions for shaping program processes that enhance participants’ teaching abilities. Investigators need to concentrate not only on the desirability of evidence but on the likelihood that it can be collected and interpreted within budget and on time. And all those involved in examining teacher education programs will be duty bound to protect the integrity of their investigations and the rights of those involved, or risk debasing the entire process. As we address these challenges in our own communities, we will build models of professional practice that will help us judge our own and each other’s efforts.


This manuscript was made possible in part by a grant from Carnegie Corporation of New York, the Ford Foundation, and the Annenberg Foundation.  The statements and views expressed are solely the responsibility of the authors.  An earlier version was presented at the annual meeting of the American Educational Research Association, San Francisco, CA, 2006.


Notes


1. Bank Street College of Education, Boston College, California State University, Northridge, Florida A&M University, Michigan State University, Stanford University, University of Connecticut, University of Texas at El Paso, University of Virginia, University of Washington, University of Wisconsin–Milwaukee.



References


Bredo, E. (in press). Conceptual confusion and educational psychology.  In P. Winne, P. Alexander, & G. Phye (Eds.), Handbook of educational psychology. Fairfax, VA: Techbooks.


Brophy, J. (1999). Teaching (Educational Practices Series No. 1). Geneva: International Bureau of Education [On-line].  Available: http://www.ibe.unesco.org/International/Publications/EducationalPractices/prachome.htm/


Brophy, J., & Good, T.L. (1986). Teacher behavior and student achievement. In M.C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 340-370). New York: Macmillan.


Cochran-Smith, M. (2006, January/February). Taking stock in 2006: Evidence, evidence everywhere. Journal of Teacher Education, 57(1), 6-12.


Cochran-Smith, M., & Zeichner, K.M. (Eds.) (2005). Studying teacher education: The report of the AERA Panel on Research and Teacher Education. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.


Dunkin, M.J. & Biddle, B.J. (1974). The study of teaching. New York: Holt, Rinehart and Winston, Inc.


Erickson, F. (2005). Arts, humanities and sciences in educational research and social engineering in federal education policy. Teachers College Record, 107, 4-9.


Joint Committee on Standards for Educational Evaluation (1981). Standards for evaluations of educational programs, projects, and materials. New York: McGraw-Hill Book Company.


Lasley, T., Siedentop, D., & Yinger, R. (2006). A systemic approach to enhancing teacher quality: The Ohio model. Journal of Teacher Education, 57(1), 13-21.


McNergney, R.F. (Ed.) (1988). Guide to classroom teaching. Boston: Allyn & Bacon.


McNergney, R.F. (2006, April 8). Judging the value of teacher education. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.


McNergney, R.F., Cohen, S., Hallahan, D., Kneedler, R., & Luftig, V. (2002). The teaching assessment initiative of the Teachers for a New Era. Unpublished manuscript, University of Virginia, Charlottesville, VA.


Mitzel, H.E. (1960). Teacher effectiveness. In C.W. Harris (Ed.), Encyclopedia of educational research (3rd ed., pp. 1481-1486). New York: Macmillan.


Paige, R. (2002). Meeting the highly qualified teachers challenge: The secretary’s annual report on teacher quality. Washington, DC: U.S. Department of Education, Office of Postsecondary Education.


Pianta, R. C., La Paro, K. M., Payne, C., Cox, M. J., & Bradley, R.  (2002). The relation of kindergarten classroom environment to teacher, family, and school characteristics and child outcomes. The Elementary School Journal, 102(3), 225-238.


Pianta, R. C. (2003, March). Professional development and observations of classroom process. Paper presented at the SEED Symposium on Early Childhood Professional Development, Washington, DC.


Sanders, W.L. (1998). Value-added assessment. The School Administrator, 55(11), 24-27.


Shulman, L.S. (1986). Paradigms and research programs in the study of teaching: A contemporary perspective.  In M.C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 3-36). New York: Macmillan Publishing Company.


University of Virginia (2005).  Teaching Performance Record. Charlottesville, VA: University of Virginia.





About the Authors
  • Robert McNergney
    University of Virginia
    ROBERT MCNERGNEY is a professor in the Curry School of Education at the University of Virginia. He is a researcher on the Teachers for a New Era project at UVA. His research and teaching interests include teacher development, case-method teaching and learning, and classroom observation. He is co-author of “Nouvelles directions dans la formation des enseignants: recherche et pratique” in Technologies de communication et formation des enseignants: Vers de nouvelles modalites de professionnalization?, published by the National Institute for Pedagogical Research (2006), and co-author of “Case method and intercultural education in the digital age” in the World Yearbook of Education 2004: Digital Technology Communication & Education (2004).
  • Scott Imig
    University of North Carolina–Wilmington
    SCOTT IMIG is an assistant professor of education at the University of North Carolina–Wilmington. He previously served as director of the Teaching Assessment Initiative—the research component of the University of Virginia’s Teachers for a New Era grant. His research interests include the effects of high-stakes testing on elementary school students, characteristics of effective classrooms, and classroom observation. He is co-author of “The Teacher Effectiveness Movement: How 80 Years of Essentialist Control Have Shaped the Teacher Education Profession” in the Journal of Teacher Education, 2006 and “The Learned Report on Teacher Education: A Vision Delayed” in Change, 2005.
 