
For The Record: Topics for The New Educational Psychology


by Lyn Corno - 1998

The questions, topics, and methods of interest to contemporary educational psychologists.

Teachers College Record has been an occasional host for interpretive scholarship written by educational psychologists, but educational psychology’s empirical work has tended to appear in other outlets, such as the Journal of Educational Psychology, Educational Psychologist, and Contemporary Educational Psychology. Despite a long association with Teachers College as a faculty member, I am only just now publishing in Teachers College Record. The Record has been changing recently, however; it has a new look and even a Web page! As part of this new look, editor Gary Natriello has sought to reignite readers’ interest by publishing empirical and conceptual work in fields that appeal to the Record’s broad audience.


The present issue, accordingly, presents “Topics for The New Educational Psychology.” Our hope is that both practitioners and researchers will find this issue informative. It illustrates the kinds of questions and topics of interest to educational psychologists today, as well as some of the modern research methods being tried. To explain why I have selected this material and these authors in particular, some historical background is in order.

SOME RECENT HISTORY IN EDUCATIONAL PSYCHOLOGY


Gary Natriello and I were both once doctoral students at Stanford University’s School of Education. Back in the 1970s, “Psychological Studies in Education,” as it is still called, was a program headed by N. L. Gage, with senior faculty such as R. C. Calfee, Lee J. Cronbach, R. D. Hess, and R. E. Snow. I was a doctoral student in this program, while Gary was in the Sociology of Education. Neither of us knew what the future had in store for us, but we felt privileged to be there then. Some important things I learned in Psych Studies that remain with me today lie behind my selection of the scholarship that constitutes this special issue. They pertain to the presentation, design, and interpretation of research; data analysis and reporting; and the role of context in educational findings of all sorts. These aspects of an educational psychologist’s work play into the conceptions presented here. At the same time, knowing where we were in the 1970s should help to illustrate how much we have shifted direction in the past twenty years.

PRESENTATION AND INTERPRETATION OF RESEARCH


One of the first things I learned in Psych Studies was how to review research critically. In my first doctoral year, I was informed that it was insufficient to write a paragraph-size review of one research study after another that opened with the study’s purposes and closed with its conclusions, saying little to nothing about what was done to reach them or their respective warrants. I was told to pick and choose the most meaningful studies to review, to relate them to historical trends in some conceptual framework, and to tell the reader what the researchers did in sufficient detail to point the direction for someone else. I was also taught to read the tables in a research study carefully (not just the author’s conclusions), and to tinker with data in ways that sometimes recast results contrary to the author’s own claims. Finally, I learned that a critical review finds some weaknesses—perhaps some alternative explanation for results, or some problems with methodology. A paper I wrote for my first course with Professor Gage was a review absent this kind of criticism. Gage gave it an “A,” but said that if I wrote the same paper for him next year, he would give it a “B.” He implied that there is always something to criticize about any research study, always some things one could have done better.


Several articles in the present issue provide fine examples of the particular value of a solid, critical review in an era of information explosion.

DESIGN OF RESEARCH


Another thing I learned in Psych Studies was how to design research of my own. In those days, we designed quantitative studies because the field had previously been replete with anecdotes, and Campbell and Stanley (1963) had charted new paths. We had the champions of quantitative methods on the Stanford faculty. I was a research assistant for Gage’s Program on Teaching Effectiveness, one of the important “teaching process-student product” programs in the country (see Shulman, 1986), and my dissertation was directed by R. E. Snow. When Gary and I were in graduate school, we were not studying qualitative methods.


Good research designs in educational psychology then were “true” experiments; everything else met a somewhat lower standard. If students could not be assigned randomly to groups, then at least groups could be assigned randomly to “treatments,” and pretest measures could ensure their comparability on variables thought to matter in relation to outcomes. A treatment was another name for some carefully structured approach to education—some teaching style, or curricular/instructional strategy. It was to be “delivered” by a teacher, a computer, or a textbook, and used to “gain control” over the environment. There was little concern then about control over other factors, such as students’ attitudes or perceptions of the treatment. And, no one seemed to question the appropriateness of this medical-model language for the business of educational psychology. A treatment was generally paired with some “control group(s)” that received some alternative, “innocuous treatment,” one that did not compete with what was delivered in the experimental group but helped to avoid a Hawthorne effect. We might, for example, send an observer into all the “control” classrooms, but not otherwise intervene in them.
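For readers who think in code, the group-level randomization just described amounts to a few lines. The following is a minimal sketch, with hypothetical classroom IDs, of assigning intact classrooms at random to treatment and control when students themselves cannot be randomized:

```python
import random

# Hypothetical classroom IDs; in a real study these would come from the sample.
classrooms = ["C01", "C02", "C03", "C04", "C05", "C06", "C07", "C08"]

random.seed(1977)           # fixed seed so the assignment is reproducible
random.shuffle(classrooms)  # randomize the order of the intact groups

half = len(classrooms) // 2
assignment = {room: "treatment" for room in classrooms[:half]}
assignment.update({room: "control" for room in classrooms[half:]})

for room, condition in sorted(assignment.items()):
    print(room, condition)
```

Pretest measures would then be compared across the two sets of classrooms to check comparability, as described above.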


Our measures of variables were multiple, and these multiple measures were given at multiple points in time (at pretest, during treatment, and, following treatment, at post-test, some of which were delayed). I was taught that it is best to obtain more than one measure of a given construct (e.g., two measures of anxiety or two measures of ability) to help ensure construct validity. And, the “during treatment” measures that our professors used to insist on (classroom observations, student and teacher interviews, written reports from parents, and collections of student notes and work samples) were only just beginning to inform the kinds of process analyses that later became the core of modern cognitive-instructional psychology. Although our process measures remained shallowly quantitative (all of these were coded numerically and correlated with everything else), opening up the “black box” of both treatment and control groups provided insights into some of our more puzzling findings.
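As one small illustration of the multiple-measures lesson, here is a sketch, using fabricated toy scores (purely hypothetical, not data from any study mentioned here), of the convergent-validity check implied above: two measures of the same construct should correlate highly.

```python
import numpy as np

# Hypothetical scores: two separate measures of the same construct
# (say, two anxiety scales) for the same ten students.
measure_a = np.array([12, 15, 9, 20, 14, 11, 18, 13, 16, 10])
measure_b = np.array([14, 16, 10, 19, 15, 10, 20, 12, 17, 11])

# A high correlation between the two measures is one piece of convergent
# evidence that both tap the intended construct.
r = np.corrcoef(measure_a, measure_b)[0, 1]
print(f"convergent correlation: r = {r:.2f}")
```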


“No significant differences between groups,” for example, could be explained when control group teachers, on their own, did many of the same things asked of treated teachers. If control teachers used the same curriculum, their different teaching strategies were often insufficiently powerful to cause meaningful differences in student achievement. And student achievement, as measured by standardized tests, was still the outcome we sought most to explain in the 1970s. Motivation, affect, and outcomes such as argumentative reasoning were only beginning to be considered important to study as well.


Often, it turned out that our main effects had to be qualified. The critical result was not “which treatment is best on average,” but “which treatment is best for which students being served.” This latter, more conditional, result is a question of “aptitude-by-treatment interaction,” or ATI. ATI was the technical term used by Cronbach and Snow (1977) in documenting the extensive literature showing that characteristics of students often moderate the effects of a given instructional method. Nowadays, moderating effects are taken for granted, as some articles in this issue show.
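To make the logic of ATI concrete for readers who work with statistical software, here is a minimal sketch using fabricated toy data and the statsmodels formula interface; the variable names and effect sizes are invented for illustration. The interaction coefficient asks whether the effect of a treatment depends on students’ aptitude.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Fabricated data: aptitude scores, random assignment to two methods,
# and an outcome constructed so the treatment effect grows with aptitude.
aptitude = rng.normal(0, 1, n)
treatment = rng.integers(0, 2, n)  # 0 = one method, 1 = the other
outcome = (50 + 3 * aptitude + 2 * treatment
           + 4 * aptitude * treatment + rng.normal(0, 5, n))

df = pd.DataFrame({"aptitude": aptitude, "treatment": treatment,
                   "outcome": outcome})

# The aptitude:treatment coefficient is the ATI: a reliably nonzero value
# means "which treatment is best" depends on which students are served.
model = smf.ols("outcome ~ aptitude * treatment", data=df).fit()
print(model.params)
print(model.pvalues["aptitude:treatment"])
```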


Opening up the test score box, just as we did treatment and control groups, began to address the reasons for ATI. To take one example, the Stanford Aptitude Research Project studied the different kinds of information processing used by students who did well and poorly on scholastic ability tests. Snow (1992) reported that higher-ability students appeared more strategic, managed their time better, and were better at handling complexity in problems than their counterparts who scored lower on these tests. Translate this to classroom instruction, and one hypothesis becomes: As teaching more strongly requires learners to puzzle things out for themselves, or to meaningfully integrate their understanding of a topic, many lower-ability students are likely to fall behind. Confirmation of such a hypothesis would explain the relative success of innovations like “discovery learning,” for example, which have favored high-ability students historically (Cronbach & Snow, 1977). Alternatively, a similar explanation for the observed benefits of “direct instruction” with low-ability students is that it relieves the burden of difficult reading, inferencing, or analysis of complex concepts. Many forms of direct instruction function to compensate for or circumvent students’ cognitive information processing weaknesses.


Although explanations such as these consider psychological processes directly, the educational implications of such findings remain unclear. Less able students will not benefit from direct instruction over the long haul if they never learn to cope with educational arrangements where structure is unavailable. So, directed work on academic learning and reading skills, study habits, and self-management may be important for students who score relatively low on standardized ability tests. Perhaps readers can now get some sense of how the present generation of process-theoretic research, and the particular examples contained in this volume, began to unfold.

DATA ANALYSIS AND REPORTING


In Psych Studies at Stanford, I also learned how to collect data from teachers and students and to collate them for computer entry. I learned to use computer software for data analysis, thereby avoiding the errors of hand calculation. In those days, we still used punch cards and dumb terminals for entry and printout. The IBM 360 took up half of the computer building at Stanford! And to think, there is more power in today's laptops than the old 360 had back in the seventies.


But data still have to be entered and checked for accuracy before they are analyzed. And, statistical analyses still have to fit the form of data they address; data have to satisfy all assumptions before multivariate analysis can be used to address complex hypotheses. I learned this the hard way when once, at around 2 A.M. in the computer center, a few graduate students came close to a collective nervous breakdown when our polynomials for a class assignment were not orthogonal.
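For the curious, the orthogonality at issue in that late-night anecdote can be checked in a few lines. The sketch below uses the textbook polynomial contrast coefficients for four equally spaced groups (the details of our actual class assignment are not recorded here):

```python
import numpy as np

# Standard orthogonal polynomial contrasts for four equally spaced groups
# with equal sample sizes.
linear    = np.array([-3, -1,  1,  3])
quadratic = np.array([ 1, -1, -1,  1])
cubic     = np.array([-1,  3, -3,  1])

# Orthogonality means every pairwise dot product is zero, so the trend
# components partition the between-group variation without overlap.
print(linear @ quadratic, linear @ cubic, quadratic @ cubic)  # 0 0 0

# With unequal group sizes the same coefficients stop being orthogonal:
# the products must be weighted by n, and they no longer sum to zero.
n = np.array([10, 12, 14, 16])
print((n * linear) @ quadratic)  # nonzero -> the assumption is violated
```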


I also learned that correlations lower than ±0.20 probably are not worth reporting, and that one needs to calculate the number of correlations that would be significant by chance before making a big deal out of those over 0.20. I learned that correlation is not possible without variation, and that many sources of variation contribute to test scores, including unreliability of measures. Once, when a study we conducted turned up a pre-post correlation of 0.98 on student reading comprehension, I lamented to Dick Snow that there was only 4 percent of the variance left to be accounted for by anything other than the pretest. How could we possibly expect the instruction to make a difference in achievement? His response was, “Now you know why I study aptitude.”
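For readers who want the arithmetic behind those two lessons, both are one-line calculations; the count of one hundred correlations below is an invented example, not a figure from any study cited here.

```latex
% Expected number of chance "significances" among k independent tests at
% level alpha: with k = 100 correlations and alpha = .05, about five will
% pass by chance alone.
\[
  E[\text{false positives}] = \alpha k = .05 \times 100 = 5 .
\]

% Variance left unexplained by a pre-post correlation of r = .98:
\[
  1 - r^{2} = 1 - (.98)^{2} = 1 - .9604 \approx .04 ,
\]
% that is, only about 4 percent of post-test variance remains for
% instruction (or anything else) to explain.
```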


I shall refrain from detailing how I also labored to improve my writing for publication in those days. Suffice it to say that it ran the gamut from grasping the value of thinking “topic: of section, of subsection, of paragraph, up front” to the absolute avoidance of split infinitives. In this I labor still, and I have Lee J. Cronbach to thank for it, as recently as the day of this writing. The point is that the kinds of data educational psychologists collect and analyze today, and the ways we present them, vary considerably from the strictly traditional, quantitative treatment, as readers will see in the pages that follow.

THE ROLE OF CONTEXT IN EDUCATIONAL FINDINGS


Finally, during those days in Psych Studies, I learned about the importance of social factors and cultural influences on educational achievements of all sorts. Professor Hess taught us that social processes and cultural differences carry a great deal of motivational weight with students, for good or ill. But socialization into the melting pot has proven to be inhospitable to diversity. In today’s understanding of social context, educational psychologists celebrate diversity within communities as one way to make the whole community richer. Another part of what today’s teaching reforms are about is gaining sufficient knowledge of students’ skills and interests outside school to develop teaching activities that put these strengths to use inside school. Today’s educational psychology uses ecological description to give form to such ideas about teaching, and encourages teachers to develop and adapt these ideas for different children and over time. Again, articles contained in this special issue illustrate this point.


Psych Studies at Stanford during the seventies may not have been representative of educational psychology programs at other schools of education then; in some ways the program remains an “outlier” today. But many of the things I learned in Psych Studies are both deeply embedded and extended in the educational psychology pursued in 1998.

THE ARTICLES IN THIS ISSUE


The sampling of work in the present issue ought to be viewed with all of these teachings from Psych Studies in mind. It ought to be viewed with an eye toward quality standards in reviewing research and in writing. It ought to be viewed in light of new developments in research and measurement methodology as well as data analysis. And it certainly ought to be informed by the kinds of educational topics—social, contextual, or psychological—currently thought to be worth pursuing. I would like nothing better than to say that the sampling in this issue of Teachers College Record is representative of the best educational psychology today, but I know too much about sampling to make that claim.


So let me close with a word about the authors in this issue and why their work, in particular, is represented here. I invited several prominent educational psychologists whose work I had admired over the years to contribute. My invitation included the suggestion that a current or former student (or colleague) be selected as co-author, with no particular order of authorship dictated. The team could decide for the senior author to be listed first, or reverse this, or the junior author could write the article alone with only editorial support and comments by the senior. The idea was to introduce some new and upcoming educational psychologists to the educational community at large, and to encourage them to write about their research for a broad audience. All but one of the individuals I asked found the timeline workable, and so we present seven full-length articles in the issue.


As is indicative of the state of the field today, four of the seven articles are examples of interpretive scholarship. They are critical reviews of research, conceptualizations of important new issues and topics for research, or documents of particular value for educational policymakers. I say this is indicative of the state of the field today because, more and more, educational psychologists are entering the public forum with their work, trying hard to make results reach a wide audience of practitioners, policymakers, and citizens more generally. Interpretive scholarship thus plays an increasingly important role.


The examples of interpretive scholarship included are articles by Salomon and Almog, Calfee and Norman, McCaslin and Infanti, and Gallagher. Gavriel Salomon, dean at the University of Haifa’s Faculty of Education, and his colleague, Tamar Almog, capitalize on Salomon’s long experience as a leader in the field of instructional technology and characterize the relationship between education and technology as a duet. They argue that each affords important questions and opportunities for the other, but that only with the right kind of attention to the details of this exchange will psychologists ultimately understand the real value of these opportunities.


Robert Calfee, the dean of the College of Education at the University of California at Riverside, worked with a former student, Kimberly Norman, to consider the contributions of psychological research to the understanding and improvement of early reading instruction. Calfee has long been a leader in the field of psychological research on reading. In their article, these authors use current work on the concept of phonological awareness in early reading to illustrate the challenges of translating research into practice.


Educational and adolescent psychologist Mary McCaslin and her student, Helen Infanti, both of the University of Arizona, chose to write about “parenting” as an important, and frequently misconstrued, societal construct. They build on Erik Erikson’s life-span theory to consider how America might better think about its cultural responsibilities to parents and children.


The final interpretive article presented is written by Ann Gallagher in consultation with Ellen Mandinach. Both are research scientists at Educational Testing Service, working at the forefront of new research on assessment. Gallagher offers a penetrating analysis of factors that underlie gender differences in mathematics achievement, using a psychobiological model of cognitive development. Her article also considers what educators might do to alter recurring evidence of gender bias in testing.


The issue’s remaining three articles present examples of modern empirical research in educational psychology. These depart from traditional empirical studies for each of the educational psychologists who conducted them. Each breaks new ground in some way, reflecting advances (in topics undertaken, methodology, or both) in directions that would not, twenty years ago, have been viewed as typical of educational psychology. These are the articles by Chinn and Anderson, Nichols and Good, and Xu and myself.


Clark Chinn is a former student of cognitive-instructional psychologist Richard Anderson of the University of Illinois. Chinn and Anderson worked together under the auspices of the Illinois Center for Research on Reading, which Anderson directed for many years. Currently a faculty member at Rutgers University, Chinn here produced with Anderson an analysis of the structure of classroom discourse in discussions whose aim is to promote students’ argumentative reasoning. The authors’ data-collection methods, as well as the types of conclusions drawn, offer important new developments for the field of research on teaching, developments that combine the best of existing process and ecological context research models.


As chair of the Department of Educational Psychology at the University of Arizona, Thomas Good assisted student Sharon Nichols in her study of the ways that students think about “fairness” in school, and how this might impact their motivated behavior and achievement. In the present article, Nichols and Good offer a gender analysis of this understudied concept of fairness, and present their initial data to address related research questions.


Finally, I worked with my former student Jianzhong Xu to prepare a manuscript-length version of his dissertation research on the socioemotional dynamics of beginning homework. We present this qualitative/interpretive research to illustrate how educational psychologists might profitably document the role of affective processes, goal setting, and volitional follow-through in this and other assisted-study conditions. Interestingly, of all the topics I have addressed in my own career as an educational psychologist, this work on homework has received the most attention from parents, teachers, students, and the media.


In keeping with the current format of the Record, I have also included in this special issue a book review section containing critical reviews by educational psychologists of three different kinds of books of potential interest to readers. I selected these books in consultation with Michael Pressley, the current editor of the Journal of Educational Psychology. The intent was to review a few recent books that would be of interest to readers who wish to learn more about educational psychology as a field, as well as books that educational psychologists ought to know about and comment on publicly.


Accordingly, the first book is the recently published and first Handbook of Educational Psychology (edited by David Berliner and Robert Calfee), reviewed by Anita Woolfolk Hoy. Woolfolk Hoy is the author of a popular textbook on educational psychology, published by Allyn & Bacon. She is also a past president of the American Psychological Association’s Division 15 (Educational Psychology). Her description of the content offered in this volume is both cogent and comprehensive, suggesting that it ought to be required reading for all who enter the field at this point in history.


The second book review is a critical examination by Bruce Biddle of Lawrence Steinberg’s Beyond the Classroom: Why School Reform Has Failed and What Parents Need to Do. Biddle is also co-author with David Berliner of the American Educational Research Association’s award-winning The Manufactured Crisis: Myths, Fraud, and the Attack on America’s Public Schools, and has recently served as co-editor of the International Handbook of Research on Teaching. He is known as someone who is not afraid to take a position and defend it meticulously. Although I would not want to steal any of Biddle’s thunder, suffice it to say that this review upholds Biddle’s reputation as a tough critic of those who would undermine America’s public schools.


The third book, written by Ellen Winner and entitled Gifted Children: Myths and Realities, is reviewed by Heidi Doellinger, a graduate student in educational psychology at the University of Iowa, who was recommended by David Lohman. Exceptionality has not traditionally been the domain of educational psychology; rather, this topic has been addressed by special education. Doellinger’s review nonetheless demonstrates the value of an educational-psychological perspective on the study of giftedness (and, by implication, other aspects of exceptionality as well).


I hope that these articles, along with my long-winded story or two from my own developmental experiences in the field of educational psychology, and our list of suggested readings, will help the audience of Teachers College Record to better understand what it is that we do. Perhaps readers will even see some relevance for their own work.

REFERENCES


Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171-246). Chicago: Rand McNally.


Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook of research on interactions. New York: Irvington/Naiburg.


Shulman, L. S. (1986). Paradigms and research programs in the study of teaching: A contemporary perspective. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 3-36). New York: Macmillan.


Snow, R. E. (1992). Aptitude theory: Yesterday, today, and tomorrow. Educational Psychologist, 27, 5-32.




Cite This Article as: Teachers College Record, Volume 100, Number 2, 1998, pp. 213-221.

About the Author
  • Lyn Corno
    Teachers College, Columbia University
    Lyn Corno is adjunct professor of education and psychology, Teachers College, Columbia University, and Board Chair of the National Society for the Study of Education. She is co-author, with Judi Randi, of "Teachers as Innovators," International Handbook of Teachers and Teaching, Vol. II (Kluwer, 1997).
 