Opening Classrooms and Improving Teaching: Lessons from School Inspections in England


by W. Norton Grubb - 2000

Classroom observation is one mechanism for making teaching visible and enhancing instruction. In Great Britain, different methods of school inspection based on observations have been in place since 1839, and they provide information about how such instruments of school reform could work. This paper examines English school inspection prior to 1993 reforms, inspection since 1993, the observation procedures that a few individual schools have adopted, and those of further education colleges (like our community colleges)—all quite different in their procedures and consequences. In particular, the balance of accountability (or control) and support for improvement varies depending on the details and culture of inspection. In the United States some experiments with inspection are now taking place, and several school reforms depend on the quality of teaching. In these cases the lessons from England can help in developing appropriate methods of observation.



The American classroom is normally a secret space. Even though it is full of people, what goes on is rarely reported, and even less often is it used to discuss what teaching is or might be. As generations of reformers have lamented, anything can happen when the teacher closes the door, and so the most carefully constructed reforms may be undone when teachers revert to old and familiar practices. And the price of privacy falls on teachers too, as their lifework is so rarely discussed and their own perspectives are ignored in public debate. Improvement in teaching is not now a routine element of American schools and colleges: it is, sometimes and fitfully, the subject of reform efforts punctuating business as usual, but it is not the stuff of daily conversation in most schools.


Yet this need not be the case. In Great Britain, whose schools and reform efforts share many similarities with our own, methods of inspection—in which teams of outside inspectors observe classrooms and other dimensions of schools and colleges—have been around for over 150 years. In some forms they have succeeded in opening up classrooms, and in converting teaching from a hidden activity to one about which public discussion takes place, where different approaches and improvements become the subject of more routine conversations. When inspections work well, they can serve as vehicles for a broad and continuous conversation about teaching, providing ideas for teachers and fostering expertise among teachers, administrators who participate in inspection, and inspectors themselves. These processes have the potential to generate much richer information about how schools perform than do simple assessments like standardized tests. This is teacher improvement writ much larger than the episodic staff development days and summer workshops that take place in the United States.


But, like any assessment procedure, inspection can be used for regulation and control as well as for discussion and improvement. As it has been implemented in English elementary and secondary schools since 1993, inspection has become stressful and punitive. Its benefits, only grudgingly admitted by teachers and administrators, are hardly worth the costs, and the conversation about teaching it has engendered is limited and awkward, constrained rather than facilitated by the specific form it has taken. An important story of English inspection is how a system, fondly remembered from the olden days before 1993, has been so quickly transformed into a widely detested mechanism that teachers view as an agent of Conservative control over education.


And that’s part of my point: Inspection isn’t a single system, but a multitude of approaches to classroom observation. It can therefore be modified to fit different institutions and different perceptions of what schools and colleges need; the practices that create so much controversy and resistance in elementary-secondary schools in England can be readily changed. In this essay I will describe four approaches to inspection in England: inspection in the pre-1993 days (Section I), only briefly described; the current inspection system in elementary-secondary education (Section II), which has been the most controversial and the best researched; inspections initiated by schools themselves (Section III); and inspection in further education (FE) colleges, remarkably similar to our community colleges (Section IV). My information on elementary-secondary inspection includes observations in two schools undergoing inspection, in one of which I attended all team meetings; interviews in two schools that had recently undergone inspection; and interviews with other heads, former Her (or His) Majesty’s Inspectors (HMIs), researchers, and Office for Standards in Education (OFSTED) personnel. The section on FE colleges is based on visits to several colleges, interviews with Further Education Funding Council (FEFC) personnel, FEFC reports, and discussions with several individuals who provide technical assistance to FE colleges. In addition, there is a substantial literature on inspection including Cullingford (1999); Gray and Wilcox (1996); a special issue of the Cambridge Journal of Education, Vol. 25(1), 1995, “Inspection at the Crossroads: Time for Review?”; and articles in the British Journal of Educational Studies, Vol. 45(1), March 1997. The Times Educational Supplement is also a wonderful source of information about the fray over inspection, particularly the frequent acid contributions by E. C. (Ted) Wragg (Wragg and Brighouse, 1995).
There’s much less research about FE inspections than there is about elementary-secondary inspections, though Melia (1995), the chief inspector for FEFC, has written an informative article; Spours and Lucas (1996) include some observations; and the Times Educational Supplement special section, “FE Focus,” follows current developments. My observations and interviews took place in Fall 1996, before the election of a Labour Government under Tony Blair. However, elementary-secondary inspections have changed relatively little, and the Chief Inspector, the much-loathed Chris Woodhead, has been retained. FE inspections have changed somewhat more, as I describe in Section IV.


Within the United States, there is fledgling interest in inspection mechanisms, as I note in the concluding section. As Americans grapple with ways of improving schools, some method of observing classrooms would be complementary to other reform movements, particularly those that are trying to change teaching and to create more continuous and teacher-directed forms of staff improvement. When inspection works well, the benefits can be substantial—to teachers, to students, to schools with a sense of common purpose. But inspection can also be politicized and shaped to fit a particular agenda—in the case of Great Britain, the Tories’ conservative educational agenda. When I consider in the final section how inspection might be adapted in this country, the task is to identify those approaches that have been supportive of teachers and conducive to broad discussions about instruction, rather than those that have simply scared teachers to death.

I. INSPECTION BEFORE 1993: THE MODEL OF CONNOISSEURSHIP


Inspection began in 1839, soon after initial government funding was instituted to support schools for the poor. A group of HMIs conducted periodic observations to determine whether public money was well spent, and whether the central office of education could help schools improve (see Wilson, 1996, on the pre-1993 system of inspections). The principle of observation best accomplished by external examiners was therefore established early in the British educational system, and the inspection process was always one in which classroom observations by experts with long experience in the classroom were central. In addition, the dual and possibly conflicting roles of inspection—as both a mechanism of accountability and a process of school improvement—were part of inspection from the start. The longevity of this process helps explain the widespread acceptance of inspection: Virtually all English educators accept the idea that educational effectiveness requires both professional development and external assessment, in contrast to the United States where the privacy of the classroom is sacred.


As inspection developed, a cadre of HMIs carried out periodic inspections of schools, but these were rare (unless a school was failing) and unsystematic. Prior to 1993 there were about 500 HMIs in a country of about 25,000 schools; inspections were infrequent, occurring perhaps once every ten years or so. According to those who went through an inspection, HMIs are almost universally remembered as wise and helpful individuals; they were men (almost always) of great experience in the classroom, and of quiet and thoughtful temperament, who could come into a classroom and understand quickly what was going on. Their expertise was that of practitioners, not of researchers or evaluators, and they are remembered both for understanding teaching and for being able to help teachers in ways that others could not. A common comment from teachers was that “they saw everything,” or “you couldn’t fool an HMI.” An HMI inspection was “tough,” one teacher mentioned, but the compensation was that it provided advice about how to improve teaching. HMIs often created continuing relationships with schools and would visit periodically, providing information about specific problems, and being available for consultations. The inspectorate also published “red books” for specific areas of the curriculum, entitled Curriculum Matters, detailing good practice in specific subjects so that their accumulated expertise could be made available to others.


Prior to 1993, there were no published principles or guidelines for schools to follow. Because inspections were uncodified, resting only on the experience and wisdom of HMIs, the approach has been widely described as a model of “connoisseurship,” conducted by individuals with a certain sense of what teaching should be. The process of selecting HMIs, along with their particular training (including a two-year apprenticeship with a senior HMI), may have contributed to a certain uniformity in tastes; however, when inspections were published beginning in 1983 there were many complaints about inconsistencies in judgment (Gray & Hannon, 1995). Even though inspection prior to 1993 has been viewed as improving education, it was not consistently seen this way. It was undertaken, as one administrator mentioned, with “great vision but with no management,” and was carried out too infrequently and unsystematically to affect all schools.


In addition to HMI inspections, local education authorities (LEAs) also carried out inspections of their own schools (Hargreaves, 1990). This process had the advantage of the institution carrying out inspections also being responsible for “picking up the pieces” and improving the schools found wanting. These inspections varied enormously, and some paid less attention to classroom observation than to other ways of gathering information; as one ex-inspector recalled, “We were making it up as we went along.” But taken together with HMI efforts, LEA inspections contributed to a system whereby classroom observations were a routine and accepted part of school improvement.

II. ELEMENTARY-SECONDARY INSPECTION: THE REIGN OF OFSTED


The process of inspection in elementary-secondary education was markedly changed by the Education (Schools) Act of 1992. Mounting evidence of shortcomings in British education—particularly in international comparisons—led to a National Curriculum. Ideological conflict over pedagogical methods, political conflict between the Conservative government and the more liberal (and highly organized) teaching profession, and the continued privatization of public education all added to the tensions. Inspection was reconstituted principally to enforce the National Curriculum. As a mechanism of enforcement, it was important for inspection to be more regular; therefore a schedule was established where each school would be inspected every five years. Rather than expand the number of HMIs, OFSTED carries out its work by contracting inspections to teams that bid to perform several inspections in areas of their expertise—secondary schools, for example, or nursery schools. Each team, led by a registered inspector (or “Reggie”) who bears legal responsibility, includes individuals with expertise in every area covered by the National Curriculum—the conventional academic subjects like math and English, various performing arts, technology, religious studies—as well as a lay inspector to provide a different perspective. In theory, teams are selected based on quality. In practice, as long as the quality of the team appears adequate, OFSTED appears to choose on the basis of cost since it has neither the information nor the resources to make more detailed judgments about quality. (At least this is the prevailing opinion, among team members as well as teachers and administrators.) This contracting process and the squeeze on costs proves to be one factor influencing the information available from inspection; as one school head noted, “it’s necessarily a cut-rate production” compared to prior HMI inspections.


Because inspection now has a specific purpose—the enforcement of the National Curriculum—the old “connoisseurship” model is inappropriate, and OFSTED therefore has codified the new criteria for evaluating schools in guidelines, e.g., Guidance on the Inspection of Secondary Schools (OFSTED, 1995). Inspectors are required to report on the quality of education; the educational standards achieved; the management of financial resources; and the “spiritual, moral, social, and cultural development of pupils” at each school. Most teachers and administrators find these guidelines helpful, particularly compared to the absence of guidelines in the pre-1993 era. They may be a little bland, some have commented—they certainly do not illustrate what inspired teaching might look like, for example, and they forego endorsing any particular approach to teaching—but as statements of unexceptionable elements of good teaching, they strike most teachers and administrators as solid, judicious, and helpful. These guidelines have been widely used in schools conducting their own inspections (see section III), even those that see OFSTED inspections as destructive. Thus the very process of establishing a framework for inspections has its own educational value, in the effort—necessarily incomplete—to distill what good teaching should be.


The guidelines also include prescriptions for school inspections themselves. The majority of time is spent observing classrooms, not, as is typical in U.S. accreditation visits, in interviewing administrators or counting resources; by OFSTED regulation, 60 percent of inspectors’ time should be spent observing classrooms, sampling students’ work, and talking with students. Inspectors are supposed to observe each teacher at least once, and they sometimes visit two or three times, particularly if a subject or teacher appears to be weak. In addition, the inspection process gathers a great deal of other information: schools complete forms providing information about such things as enrollments, student backgrounds, and classes offered; the results of national exams are available; inspectors interview groups of parents, students, and governors (i.e., school board members). Parents are notified about the inspection and invited to submit comments or to speak with inspectors. Thus the inspection process is designed to elicit as much information as possible.


Furthermore, elaborate procedures record this information. Every class observed is rated on a sheet that requires a score of one (excellent) to seven (very poor) on teaching, student response, attainment, and progress, with a paragraph of support required for each score; other forms are completed for interviews with administrators, parents, and governors; and still other forms are completed for observations of nonclass activities like assemblies, playgrounds, cafeterias, conduct while passing among classes, and physical facilities. Indeed, the process of conducting an inspection is partly a paperwork blizzard, with forms accumulating as the week progresses; offhand comments about observations are likely to be met with the response that “we need a sheet for that, for OFSTED.” During the inspection week, around Thursday afternoon, the inspection team summarizes its results in “star charts,” with numbers (the ever-present 7-point Likert scale) for each department on a number of dimensions; the “star chart” is then summarized to the school administrators verbally and translated into writing for a public report. (The numbers on the “star chart” are never given directly to the school, though they are fed into the OFSTED maw as part of its large data base.) So there is a concerted effort to record the information from inspections—though, as I will argue, the procedure is still one in which a great deal of information is lost.


Once the inspection is over, the results are written up and presented to the school; administrators can challenge reports on factual grounds, though this appears to be difficult because the underlying “facts” have been generated by the inspection team itself.1 Then a final report is published for all to see: parents, prospective parents, governors, and taxpayers. These reports do not make exciting reading to outsiders, however, because they are written in a peculiar language, “OFSTED-speak,” described below, and because they have boiled down a vast amount of observation into brief sentences. For example, one school’s report included the criticisms that “ineffective use is made of teachers’ training and experience,” and “the quality of teaching is poorly targeted in years 7 to 9”; while such comments certainly are based on more specific observations, it is hard to know how schools can respond to such generalizations. Schools must prepare a plan to respond to criticisms, though the implementation of such plans is left to the schools themselves (and their governors). In failing schools, however, OFSTED may conduct periodic reinspections, and schools that fail to progress may be taken over by the secretary of state.


The procedures devised by OFSTED are, in many ways, substantial improvements over the casual pre-1993 procedures. All schools are inspected on a regular basis; the basis for inspection is clarified in the Guidelines; the conduct of the inspection is standardized, so that differences from team to team are minimized; the information from the inspection is carefully collected; and there are mechanisms for using this information to improve the quality of schools. From my perspective, the crucial dimension in the process is that classroom observations form the heart of the inspection process; teaching is made visible, the subject of discussion, and then improvement.


But, as in so many areas of education, God is in the details. What happens in practice is powerfully influenced by the specifics of an inspection, the context surrounding the inspection, and the way information is conveyed. These details, so different in the inspections schools have created for themselves and in FE colleges, shape the value of inspections for good and evil.

THE CONDUCT OF AN INSPECTION


An inspection is enormously stressful for all involved. Schools know at least a year in advance that they will be inspected, and they typically begin preparing—not, as one might imagine, by scripting lessons for the inspection week, but by completing the enormous amount of paperwork required by the National Curriculum, particularly schemes of work (lesson plans) for each lesson, since inspectors may ask for these plans. Teachers are divided about the value of this preparation: some have commented that “it keeps you sharp,” or “it makes you do what you should have done,” but a majority bitterly resent it because the paperwork is carried out for OFSTED’s purposes, not for the benefit of students. In addition, schools sometimes bring in outsiders to provide specific help, or to conduct mock inspections during the months preceding the inspection itself. This period is unanimously described as one of enormous time pressures, with teachers working nights and weekends for many months. This starts to build stress; as one teacher mentioned, “I don’t like the way it’s done; I had eight months of hell.” Apparently some people crack: the head teacher of one school I visited resigned during this period, and there are many stories of teachers taking health-related leaves or falling sick just prior to inspection week. How many of these stories are true is impossible to determine, but the legend surrounding inspection conveys its stress.


Then comes the inspection week itself. The inspectors—perhaps 12 to 15 for a secondary school of 1,500–2,000 students—are given free rein. They enter classes at will, preventing teachers from scripting particular lessons. Often, teachers prepare little slips of paper or pages describing the intent of the class, to provide the inspector some context—since an inspector typically visits a class for 20 minutes or so before moving on, perhaps coming back to a particular teacher for one or two other segments, especially if the teacher appears weak. Teachers are under pressure the entire week, therefore, and there’s a sense of constantly looking over one’s shoulder to see whether an inspector is entering the classroom, the “OFSTED twitch,” as one teacher described the vigilance for a turning doorknob. Inspectors also observe playgrounds, assemblies, lunchrooms, hallways, and every other space, always with clipboards, always filling out sheets with 7-point Likert scales. A cartoon in the satiric magazine Private Eye, with inspectors filling out charts at the OFSTEDdy bears’ picnic, accurately conveys the sense of inspectors snooping everywhere.


When inspectors visit a class, they typically sit in a corner filling out required forms; they may also chat with students quietly, though most students seem indifferent to them (as to all visitors) and don’t appear to be overly prepped. There’s substantial variation in what inspectors say to teachers, however: many, under the pressure of time, simply leave for the next class without a word, while others will take time—5 to 10 minutes at most—to comment on what they saw, what improvements are possible, what practice in other schools looks like. Inspection teams vary in their philosophies about providing advice; some Reggies encourage this feedback, others discourage it. Teachers overwhelmingly value this information and opportunity for discussion; it reminds them of the pre-1993 process, where advice was the central purpose. But OFSTED itself has discouraged the giving of advice, portraying inspection as a form of monitoring rather than advisement; and the pressure of time makes such discussion fleeting at best.


Despite the attention to classroom observation, the observation turns out to be artificial in several ways. The 20-minute segments are too short; teachers complain that these “snapshots” do not allow inspectors to see a full lesson, never mind development over a year. Teachers and administrators complain that inspectors behave according to the motto “If we didn’t see it, it didn’t happen”; it is therefore difficult to convey to inspectors where a particular class stands in a semester’s teaching.2 Because inspectors visit for such a short period of time, teachers often feel they must teach to the plan, not “on the hoof”; “teachable moments” and other spontaneous possibilities have to be ignored in favor of presenting a “typical” lesson. A representative comment from teachers is, “Your focus is OFSTED, not the students”; one teacher noted, “It’s strange—you have to pack a whole year into a week.”


In addition, many teachers resent being observed without the opportunity to talk with inspectors because it means they are unable to put the lesson in context, to explain how it fits into the year’s progress, to interpret the difficulties of particular students. As one noted, “When someone has been in your lesson, you want to know what went right or wrong. You want to know, because I know if it’s Jimmy [who responded inadequately], I understand that; if I’ve yelled at a student in a particular way, [there may be a reason].” Implicitly, teachers are asking for a discussion around teaching, but the inspection process in elementary-secondary education is not a discussion. As one head noted, it is an asymmetric process in which OFSTED and its inspectors have all the power, in which suggestions to teachers are not formally allowed and praise is hard to give, in which teacher corrections of inspectors’ perceptions are almost impossible to make.


Finally, teachers widely perceive inspection as a process where OFSTED and its minions are looking for bad teaching, rather than trying to improve the quality of teaching overall. As far as many teachers are concerned, the director of OFSTED, Chris Woodhead, has single-handedly done considerable damage to the inspection process: his public presentations have been consistently demeaning to educators and have stressed the large numbers of incompetent teachers. Recently OFSTED devised a process for identifying failing teachers—those with overall scores of 6 or 7—thereby harnessing the inspection process to rooting out incompetent teachers. In turn, teachers see Woodhead as one-sided in his appraisal of teachers. As one mentioned, “He’s doing the Government’s dirty work,” and stories abound (again impossible to verify) of his doctoring reports to make schools and teachers look bad. In this testy atmosphere, teachers play a defensive game: thinking that inspectors are looking for bad teaching, they try to teach by the book so as to avoid the dreaded 6s and 7s. All this is well-known to inspectors, of course; one Reggie complained that “teachers misunderstand the brief of OFSTED” as teacher-bashing and school-bashing, and this misunderstanding distorts their teaching and limits the value of the process in improving schools.


The end result is that most teachers experience inspection, like its preparation, as enormously stressful. Of course, teachers vary in their response to inspection, and a substantial number consider inspection helpful in “sharpening up” their teaching; these individuals typically look forward to feedback. But many more are crushed by the experience—particularly new teachers, often conscientious teachers, those who are insecure in their teaching and dread being found out, and those who are timid or introverted. These individuals routinely describe inspection as “the most stressful process I’ve ever been through”; “it’s not as bad as bereavement or divorce, but it’s third,” commented another. It is so stressful that many teachers are reluctant to acknowledge that it can have any value—and they may see the process as so illegitimate that they are unwilling to respond to the recommendations that come from it. Teachers and administrators alike describe a letdown after inspection week, a six-month period in which it is impossible to get teachers to do much of anything. And so some question its overall value, since it is such an intrusion: with a long period of preparation and another of recuperation, it may take a year and a half out of the life of a school.


From the perspective of most (but not all) teachers, inspection is not a particularly supportive process. But the details that make it so stressful can be changed readily, as subsequent sections on self-initiated inspections and FE inspections illustrate. The problem is less with the idea of classroom observations as the basis of school improvement than with the specific way OFSTED and Chris Woodhead have implemented this process.

THE TRUNCATION OF INFORMATION AND EXPERTISE


A remarkable feature of inspection is the amount of information and expertise about teaching that it generates. The inspectors themselves are typically highly experienced: many are former HMIs or LEA advisors who used to provide advice in specific curricular areas to local schools. The process of inspection increases their expertise, as they are able to observe practices in a great many schools. Indeed, two Reggies on two different teams described inspection in much the same terms: “the inspection process is a gift,” one said, while another called it “a privilege and a joy” because of the opportunity to observe. (Teachers would be incredulous to hear inspection described in these terms, particularly in the first school I visited where there were many distraught and weeping teachers.) While, unavoidably, a few inspectors are incompetent (and generate a large number of negative stories), the majority I observed are experienced, dedicated to education, highly informed, and therefore—when they have the time and inclination—quite effective in suggesting alternatives for teachers to consider. Their advice is based on practices they have observed rather than their own views of teaching; they are able to provide recommendations about specific teaching problems since they can comment on a particular lesson—rather than providing only general advice abstracted from the details of a school, class, and subject. The real promise of inspection, then, is that it can move between the specific and the general settings of education: inspectors observe specific classes and can provide advice tailored to those situations, while their wide familiarity with other schools enables them to understand a much broader range of practices.


Unfortunately, an enormous amount of information collected in the inspection process is simply thrown away, and the tremendous expertise of an inspection team is made largely irrelevant. The truncation of information starts with each classroom observation: while some inspectors provide feedback on the spot, this is limited by time schedules, by the overwhelming burden of paperwork, by the philosophies of certain teams, and by OFSTED policy discouraging such feedback. Then four aspects of the class (teaching, response, attainment, and progress) are reduced to numbers (one to seven) with supporting paragraphs of 100–200 words each (the observation sheet provides about 1½ inches of space for each category)—but the paragraphs are typically general, rather than referring to specific details. The immediacy of the classroom experience is lost forever.


Results are then summarized for specific departments for the “star sheets” in a pressured Thursday afternoon meeting, causing further loss of information for the individual teachers in a department. In the rating session I observed, there was a tendency to start with a 4 (the mid-point of the Likert scale) and then demand evidence for any departure from 4: 3s and 5s required some evidence—“well, it might be 3-ish, but I’m not sure”—but 1s and 2s and 6s and 7s required much more evidence. Reportedly many inspectors are reluctant to give 6s and 7s because of the enormous effect this can have on teachers and schools. (Indeed, some Reggies have resigned rather than participate in what can feel like a search for failing teachers and failing schools.) This procedure presses the final scores toward the middle and eliminates high and especially low scores, but it also throws away the rich information about what happened in specific classrooms.


The results of the star sheets are not given to the schools;3 instead the results are summarized in a few written paragraphs. Any writing for school personnel, as well as the draft reports and published reports, translates the numbers from the star sheets into bureaucratic language (“below average for this department,” “well above average for schools of this type”), without any detail about the teaching practices observed or specific strengths and weaknesses. Broad proportions are often used—“four out of five classrooms were satisfactory or better”—further reducing the usefulness of the results. Many teachers and administrators refer to this as “OFSTED-speak” since it is so standardized and conveys so little information. Thus rich and generally informed classroom observations by experienced inspectors are converted into numbers with supporting paragraphs, summarized across teachers into yet other numbers, and reported back to the school in impersonal language. In the process, the information that would be most useful to teachers is lost unless an inspector has managed to have a quick conversation on the run.


Along the way, the expertise of the inspectors themselves is wasted. Their experience may enable them to carry out inspections more smoothly, to come more quickly to a summary judgment about a class or department, or to write a supporting paragraph more precisely; as one inspector responded when I asked what difference her background made to her observations, she said, “it helps me write the paragraph.” However, the form in which information is reported from inspections limits what can be said. If inspectors see systemic problems in a region, or in a particular curriculum area—for example, a lack of imaginative practice in technology that one inspector reported to me, or a general inability to educate handicapped students appropriately—there is no forum for their observations. If there are trends in the regions where they practice, for good or ill, they have no official way to record this information. The inspection process may be a “gift” to inspectors but, unlike the continuing exchanges in gift-giving societies (Hyde, 1983), the gift falls into a black hole, with no way to continue to enrich the community at large.

THE EFFECTS OF INSPECTION: REGULATION VERSUS SCHOOL IMPROVEMENT


The most important question to ask is whether inspection can improve the quality of education. This is a complex issue because it is entangled with perceptions of the inspection process itself, and some participants are reluctant to acknowledge that it could ever help; one teacher complained, “you’ve pulled it out of me,” when I finally got her to acknowledge some benefit. But there are few clear benefits, and—more to the point—some negative consequences where changes in the inspection process could make it much more effective.


One consequence, often forgotten in the turmoil over the process itself, is that inspection has served to reinforce the National Curriculum. Because the precepts of the National Curriculum are the basis for observations, a school that departs from this standardized curriculum will fare badly in its inspections, and then be subjected to various corrective pressures. This is generally true, although some inspectors have their own conceptions of good teaching, and they look for practices—for example, student initiation, genuine discussions, particular approaches to problem-solving whether in theater or math classes—that are nowhere mentioned in the Guidelines. But despite these departures from orthodoxy, in the main inspectors are required to judge teaching against the standards of the National Curriculum. Further complicating matters, the four criteria by which each class is judged may not be equally weighted: one inspector reported that attainment of national standards is the most important criterion, followed by progress; the response of students—a category reflecting motivation, engagement, and interactions within the classroom—came third, and teaching was last among equals. The stress on performance levels throughout the British system is so powerful that a school with low performance—because of immigration, language backgrounds, student mobility, or other aspects of family background all too familiar from the U.S. experience—is likely to get a low rating regardless of how inspired the teaching is.


Unfortunately, following the National Curriculum constrains the inspection process in obvious ways. For example, I observed a math class for 16-year-olds, solving a perimeter problem (“if a field contains 10,000 square feet and is square, how many feet of fencing are required . . .”). By American standards this is an 8th or 9th grade activity; the teaching was extremely mediocre, with formulaic initiation, response, evaluation (IRE) questions and desultory responses. But the students did understand the lesson and made progress. The inspector admitted that in the pre-1993 inspection system he could have critiqued the low content level, the pedestrian teaching, and the reluctant student participation, but he couldn’t find much fault according to the National Curriculum. It is difficult, then, for a highly constrained inspection process to be any better than the guidelines it follows.


Beyond its enforcement of the National Curriculum, the effects of inspection are widely debated. Even those teachers most resentful of inspection admitted that it had forced them to think more about their teaching and to complete lesson plans they should have developed anyway; many received some constructive feedback, and a few teachers received a great deal, and found the entire process worthwhile. But the difficult question, almost always posed by teachers themselves in economic terms, is whether these benefits are worth the costs of the process. Most teachers and administrators admit, albeit grudgingly, that inspection has some benefits, but maintain that they are not worth the costs. My view is that the alternative inspection procedures that schools have created on their own, and the process in FE colleges, are modifications that generate greater improvements with much smaller costs overall, as I will argue in Sections III and IV. However, one problem with this simple economic formulation is that costs take different forms: the real costs of elementary-secondary inspection come in the form of stress, unpaid work by teachers, and the long interruption in a school’s life, none of which are monetized; the more careful and lengthy procedures developed for FE colleges undoubtedly cost more in monetary terms but much less in intangible expenses. For those concerned only with public budgets, therefore, the FE inspections may not look like they are worth the higher costs—but they certainly are from the perspective of instructors.


From the vantage point of administrators and schools, a different calculus about the effects of inspections is typical. Often, of course, inspection finds problems in schools that administrators and heads of departments have known about; some observers feel that LEAs, not wanting confrontation, wait for OFSTED’s reports to close or reconstitute failing schools. In such cases, inspection reinforces the efforts of administrators to reform schools. But when administrators disagree with the results of inspections, or when inspectors raise points about the quality of management itself, then different problems arise. Several administrators reported that if the results are seen to be illegitimate, then schools are much more likely to resist changes, not surprisingly. In contrast to the old inspection system, where HMIs were viewed almost with awe and widely thought to be concerned only with the well-being of students, there are now many more ways to reject any negative results: educators can fault the qualifications of the inspection team, the thoroughness of the inspection process, or the political motives underlying inspection—and therefore can reject the conclusions of almost any inspection. As a result there is general consensus that inspection works better as a mechanism of school improvement with good schools than it does with mediocre or failing schools (see also Hargreaves, 1990).


Certainly, if the purpose of inspection is to improve the quality of education by closing failing schools, the process itself has been a failure. Between 1993 and 1996, perhaps twenty or thirty schools were closed, a tiny fraction of the nation’s 25,000 schools—hardly enough to improve the quality of education in Britain. It is hard to close poor schools, even when external observers and LEA administrators agree.


As a mechanism of school improvement, therefore, inspections in elementary-secondary education haven’t been particularly effective. They may “sharpen up” some teaching, and provide some feedback to teachers who are particularly receptive, but that advice costs a great deal in terms of anxiety, perhaps pointless paperwork, and the inevitable post-inspection letdown. The results may help some schools improve, particularly when inspectors ratify what administrators already know, but they are less effective with precisely those schools that need the most help.


To be sure, official policy is that inspection is a mechanism of regulation, not advice and improvement. But calling inspection a mechanism of regulation rather than school improvement gives away the real potential of an observation-based system. Indeed, the most distressing aspect of the 1993 changes under OFSTED and Chris Woodhead is that a widely respected (though unstandardized) system of inspection, providing widely respected advice, has been converted into a much-dreaded process with checkered results. As I will argue in subsequent sections, other systems of inspection have been much more successful.

THE FOCUS OF INSPECTION: THE TEACHER, THE SCHOOL, AND THE SYSTEM


Inspection collects information both about individual practice—the activities of teachers in classrooms—and a school’s policies, through interviews with administrators (though not teachers). In theory, this allows the inspection team to link the two, to understand the ways in which individual teaching results from school policies, rather than viewing teaching as individual and idiosyncratic (as is common in this country).


Indeed, many inspection teams appear to stress the institutional origins of good and bad teaching. In one team I observed, there was general consensus that eliminating bad teachers is the responsibility of the school head and other administrators, that they should have a system in place to monitor the quality of teaching, to provide help to weak teachers, but then to begin the process of dismissing teachers if efforts at improvement fail (“It’s difficult, but it can be done,” was one comment I heard). From their perspective, an OFSTED inspection should not be the mechanism for dismissing incompetent teachers, which would be a sign of a failing system. Similarly, this particular school suffered from an awkward physical plant, which might be seen as being beyond the control of administrators; however, the inspection team faulted the head for not going to his board of governors for improvements and for failing to make better use of space, including changes that they had seen work successfully in other schools with poor facilities. There was general concern as well, in this school, that career guidance was not carefully coordinated—“things are hit and miss, and students miss out if their tutors are not interested”—and that there were no systemic practices providing all students with consistent guidance. While the inspection process focused on individual classrooms and teachers, the central concerns of the team were institutional and administrative.


However, OFSTED has not been clear about this kind of institutional responsibility, and many of its activities have reinforced a view of teaching as individual and idiosyncratic. Woodhead’s constant harping on incompetent teachers and his campaign to rid the schools of failing teachers reinforces the tendency to see poor teaching as an individual characteristic, rather than as the fault of a system with relatively low pay, poor conditions in many urban schools, and professional strains caused by Woodhead himself and his fellow Conservative officials. OFSTED’s requirement of certificates for outstanding teachers (with 1s and 2s) and failing teachers (with 6s and 7s) returns the focus to individuals, and requires inspection teams to focus more carefully on individual ratings than on institutional strengths and weaknesses.


Of course, a reconciliation is possible: we could admit that the improvement of teaching has certain institutional dimensions, but that these sometimes fail and that individual teachers should be dismissed. The strength of inspection is that it can provide information necessary for both of these possibilities. It can identify where schools are failing to provide necessary support for teaching, and can document these patterns by systematic evidence from classrooms. It can also identify “outliers” where, for example, a department (or school) has strong teachers along with exceptions who have failed to respond to attempts at improvement; in these cases the external authority of inspectors can add to the evidence internal to the school. But the balance between the two matters to the success of either: if inspection is viewed principally as a form of teacher bashing, then it may lose its legitimacy as a mechanism for either institutional improvement or individual dismissal.


In one other dimension, the inspection process in England has failed to take advantage of its potential. Inspection is only one mechanism of improving teaching, after all: teacher training, in-service education, administrator preparation, the National Curriculum itself, the structure of salaries and patterns of shortages (or unqualified teachers), the schedules and demands placed on teachers, and the overall morale of the teaching force are other factors influencing the quality of instruction. But the inspection mechanism doesn’t provide any evidence about these alternative influences, partly because it doesn’t interview teachers and partly because it emphasizes the National Curriculum. If, for example, there are systematic problems in teaching that should be addressed in preservice education, or if there is a preponderance of “failing” teachers in urban schools because shortages of trained teachers are covered by substitutes, or if there are problems in salary scales or teaching conditions that cause the best teachers to leave, there is no way for the inspection process to accumulate this information across schools. The process of contracting out inspections to different teams means that, even though individual inspectors may develop tremendous experience, results are institutionally fragmented in reports on individual schools.


By contrast, inspection was initially created in order to make recommendations to the Crown about schools, and in the days before 1993 its expertise and authority made it a respected voice in policy deliberations. But this aspect of the Inspectorate has changed, because of the inspection process itself as well as the Conservative distrust of educators and the destruction of expertise among HMIs. The irony is that a mechanism created to improve the quality of British education can no longer address the most pressing issues of the system.

THE INSPECTION TEAM AND “ROLE STRAIN”


Finally, what of the inspectors themselves? In England, there is very little concern with inspectors: teachers and administrators usually view them as agents of OFSTED, with a mixture of dread and hostility; there are many stories about their incompetence that circulate among teachers, the educational equivalent of urban legends, without any countervailing positive stories. OFSTED seems to view them merely as vendors carrying out contracts, no more special than fishmongers or parts suppliers, even though OFSTED often points to inspectors as being responsible for outcomes. A great deal of national discussion seems to describe inspectors merely as money-grubbers, getting rich off inspection contracts. The contrast with HMIs of the olden days, revered and respected, is stark.


Inspectors deserve much more sympathy than they have gotten, however, in my view. In my contact with them, they are usually experienced individuals, deeply committed to improving schools, who have been forced out of earlier positions (as HMIs, LEA staff, and department heads) by the relentless pressures of Conservative policy. They often participate in inspections as a way of staying in education and continuing to contribute to the good of schools. While they have not sunk into poverty, the competitive pressures in contracting make it implausible that anyone could get rich through inspection; indeed, teams with retired individuals are said to have a competitive advantage because they can charge less than teams whose members still have children to support and mortgages to pay. They may not be as wise as the former HMIs, but as a body of individuals they represent an expertise about teaching unmatched by anything else in England or this country.


But they have an impossible task, and I detect signs of “role strain” that has not been widely recognized in England.4 On the one hand, many inspectors are committed to education; on the other, they work for an agency, OFSTED, that spends a great deal of energy on school- and teacher-bashing. Inspectors accumulate a great deal of information about teaching practices, but they are constrained to report it in Likert scales and OFSTED-speak that strip the life out of what they have seen. They want to improve teaching, but despite their large experience they are unable to provide much direct feedback; for the short period that they inhabit a school they spend their time as snoops and snitches, recording everything they see on their omnipresent clipboards and scaring all the teachers they come across. As a former LEA inspector noted about current inspectors, “There are lots of conflicted people around.”


The strain of these incompatible roles emerges in small ways. In the work sessions I observed, there was a great deal of gallows humor, including many comments about the “bureaucracy” (OFSTED) and its requirements. In referring to computer-based paperwork, one inspector noted that “if you cut off the OFSTED logo, the computer likes it much better.” “Don’t we all?” another replied. There were many complaints about the pace of work and the amount of fatigue at the end of a stressful week, and there was much grumbling about a procedure that requires the enormously complex task of teaching to be summarized and reduced to a single number. The team was trying its best to act responsibly, to provide some accurate assessment of the school’s strengths and weaknesses that might help it to improve, while still following the guidelines required by OFSTED, but these were difficult to carry out simultaneously.


On another team, the Reggie, the individual who considered inspection a “gift” of insight to her and her team, forthrightly acknowledged that the inspection process, her current life work, has been a failure. Its purpose, in her view, was to give schools themselves insight into their own strengths and weaknesses, to operate, like good teaching itself, by building on the strengths of individuals rather than by disparaging their weaknesses. But despite the enormous promise of inspection, she admitted (to a complete stranger) that it had failed because the time demands were so severe and because teachers misunderstood the purpose to be teacher-bashing rather than instructional improvement. Others admitted that the process worked under some conditions, particularly in schools that were already quite competent, but not in the schools most needing advice. These admissions are heartrending, since it is hard for people to admit their work is ineffective; and it is almost unbearable to hear well-intentioned educators confess their complicity in an ineffective and often destructive system. But the process has put teachers in an impossible position.


And what of the remaining HMIs, now employed by OFSTED? For the most part, they administer contracts—though a few of them carry out inspections of failing schools, or of schools with low attendance or high rates of excluding students—and therefore still serve in their old roles. But the old Inspectorate has scattered to the winds, no longer the repository of experience it once was. Some HMIs retired; some of them still carry out inspections but in new and constrained forms; and some turned into bureaucrats.


And so the saddest story of the current inspection system in elementary-secondary education is that it has converted a process that provided substantial help to teachers, albeit in an unsystematic way, into one that thwarts such help, constrains the development of expertise, and imposes enormous costs in the process. Fortunately there are many other ways to carry out inspections.

III. SELF-INITIATED INSPECTIONS


One consequence of OFSTED inspections is that many schools have created their own inspection procedures, to prepare teachers for the official one. Often, administrators visit classrooms, using the OFSTED procedures and requiring their teachers to complete the paperwork necessary for OFSTED. Sometimes the schools hire outsiders, particularly individuals who work for inspection teams, or members of LEA staff, to provide an outside view and some “inside” advice about how an inspection will take place. As a result, there is considerably more routine observation of classes than takes place in American schools. Such efforts are generally focused on preparing for the inspection itself, and therefore end when the inspection is over.


However, in a few cases, schools (and some FE colleges) have set up their own permanent inspections.5 One such school began its effort after an OFSTED inspection: the administration first created a process in which its own teachers and administrators, joined by members of the board of governors, carried out classroom observations with OFSTED’s procedures. However, the school found that it was unable to judge standards with only internal observers. The next year, therefore, it added one “outsider”—a member of the LEA staff who participated in inspections—along with the insiders; since then they have added an external inspector for each curriculum area.


While this inspection process follows OFSTED’s guidelines, the atmosphere surrounding it is completely different. It was not imposed on teachers, since the administration consulted with teachers on its design; intended to provide support and encouragement, “it will be positive and developmental,” declared the administrator in charge of it. No individual scores were reported, eliminating the sense of trying to root out incompetent teachers; instead, the faculty as a whole examined scores for groups of teachers (grade levels and curriculum areas) and collectively proposed how to improve them. Finally, while the OFSTED process is a snapshot taking place in a single week, the internal inspection process for a department takes place over several weeks, allowing observers to see teaching over a longer span of time and eliminating one serious flaw of OFSTED inspections, the maxim that “if we didn’t see it, it didn’t happen.” Teacher interviews are part of the process, and therefore teachers have the opportunity to explain to observers the special characteristics of their approach and their students; the resentment at being observed without being able to interpret their teaching to outsiders is thereby avoided.


Unless I was systematically fooled, the teachers in this school generally supported the internal inspection process. Certainly there is some apprehension about being observed; but the process provides teachers with helpful comments on their teaching, both from their peers and from curriculum experts outside the school, and it enables observers to see the range of teaching within the school. Further, there is more discussion about teaching as a result, because there’s more shared information. The head declared that teachers change when they participate in such an inspection system: because the OFSTED forms require justification for any particular rating, teachers begin looking more closely at the details of classroom interactions rather than simply reacting on the basis of their feelings about the gestalt of the class, and thereby become more sensitive to interactions within their own classrooms. Finally, the internal inspection process incorporates the school’s governors, making them aware of educational issues in a way that had not been possible before.


In this self-initiated inspection, one school has modified the OFSTED process in relatively small ways, but the results are quite different because of it. The issues—the atmosphere and purposes surrounding the inspection process, the balance of insiders and outsiders, the period of time over which observations take place—all prove to be crucial to FE inspections as well.

IV. INSPECTIONS IN FE COLLEGES


Inspections in FE colleges are also descended from the old HMI system, though they have developed in very different directions.6 As OFSTED had, the FEFC created a schedule for inspections and a manual to guide them (FEFC, 1993), in place of irregular inspections based on unknown guidelines of the “connoisseurship” approach. Inspection also had a regulatory purpose: in 1992 FE colleges were required to incorporate as autonomous institutions free of LEA control, allowing them local control of their finances and programs. Inspection was then instituted as a form of external regulation, to prevent autonomous institutions from watering down the content of their programs.


From the outset the FE inspection process has been structured in subtly different ways. Self-assessment plays a more important role: each college must complete a self-assessment report prior to the formal inspection, in which it clarifies its own views of its strengths and weaknesses and formulates a process of improvement. FE colleges often undertake their own inspections to prepare for the official inspection. (In one case, heads of colleges switched places, with each inspecting the other’s college.) Recently FEFC has placed even more emphasis on self-assessment; while there are still external visitors, their role is much more to validate a college’s self-assessment than to create an assessment from scratch.


In addition, the FE inspection teams are differently constituted. A full-time inspector from the FEFC Office of the Inspectorate puts together the team in consultation with the college head. Team members come from a pool of about 60 full-time inspectors and 600 part-time inspectors, who are usually instructors in FE colleges or employers in specific areas. The lead inspector therefore provides continuity among inspections because he or she is in charge of all inspections within a region7—in contrast to the OFSTED process where each inspection is a separate event. The team then includes subject area specialists from other colleges and a “nominee”—an individual from the college being inspected. While there is only one such nominee on a team of perhaps 15, he or she plays a critical role: the nominee can interpret the school to the inspection team, and in turn serves to convey the conclusions of the team back to the college. Information from the inspection comes not only in the form of impersonal reports, but also in direct discussion with the nominee, the lead inspector, and subject specialists.


The inspection process takes place in a more extended fashion. The inspectors responsible for a particular curriculum area are likely to return to the college two or three times, observing classes at different points during a semester, avoiding the “snapshot” problem of OFSTED inspections. After all the curriculum areas have been inspected, the cross-college inspection takes place, examining college-wide functions (career education, registration, extra-curricular activities, and the like) as well as administrative procedures intended to improve teaching; thus the cross-college inspection has all the information available to it from the classroom observations. Because subsequent visits can be scheduled after some information about teaching conditions has been developed, the inspection team can concentrate on areas of weakness, or areas that the college would like to improve; the necessity for observing every teacher that makes OFSTED inspections so pressed is thereby avoided. Of course, inspection teams may still miss some weak areas, as one nominee complained; in this particular case the administration was hoping that the inspection would strengthen its own initiatives in a particular department. But there are more opportunities to observe over longer periods of time, and consultation with administrators and the nominee can help prevent errors of omission. Also, there is much less chance of committing the other kind of error, declaring a competent teacher to be deficient, based on a misinterpretation of a class or a too-short period of observation, because there are many more opportunities to discuss observations and interpretations.


Much more clearly than OFSTED, FEFC has articulated a “corporate” view of inspections: that improving the quality of teaching is an institutional responsibility, not an individual issue. Particularly through the cross-college examination, inspectors evaluate institutional practices related to teaching much more consistently, and the recommendations of inspections are directed at college administrators rather than individual teachers. It helps that FEFC has avoided strident teacher-bashing and constant references to incompetent teachers; while FE inspections can certainly identify individuals in need of improvement or dismissal, the aura surrounding FE inspections is much less charged. Most FE personnel view the FEFC Inspectorate as supportive of colleges, in contrast to the situation in elementary-secondary education where the government is widely viewed as being hostile to the education establishment.


In addition, FE inspection is much more thorough in providing information to instructors and colleges. The role of the nominee is crucial to this process, and the chief inspector discusses findings with the college head throughout the inspection. After an inspection is complete, the college prepares a plan of improvement, and the chief inspector continues to work with the administration as it carries out this plan, providing more continuity in advice and consultation than the OFSTED process would allow. The expertise developed among FE inspectors is also used in more consistent ways: subject area specialists are available for consultation to other colleges, and the Inspectorate offers a series of booklets it has published (e.g., Engineering, 1996, or Humanities, 1996) describing overall findings in particular curriculum areas and providing recommendations for good practice. The annual report of the chief inspector, Quality and Standards in Further Education in England, is also an informative document, judicious and balanced in its identification of strengths and weaknesses. The FEFC seems to have learned from experience: one senior inspector noted that early reports were “terrible”—poorly justified and badly written—and that the process changed in response.


Finally, the inspection process forces administrators to think about issues of teaching and learning. A corporate view would be especially helpful in the United States where most community colleges are led by administrators who spend almost no time in the classroom and are poorly informed about what instructors do; faculty refer to them as “bean counters,” and the level of hostility toward them is dreadful to see (Grubb and Associates, 1999, chap. 8). But administrators in England at least know what happens in classrooms. They may choose to be bean counters, and the competition engendered by Conservative governments has definitely pushed them in this direction, but the inspection system at least provides school heads with the information necessary to improve the quality of instruction.


Of course, no one thinks that FE inspections are fun. There remain complaints about the vagueness of requirements, about the amount of documentation necessary, and about tight timetables. However, on all these dimensions FE inspections are a vast improvement over elementary-secondary inspections. Furthermore, FE administrators and instructors generally accept the need for improving the quality of teaching, and largely approve of the FE process. When inspection is carried out in ways that maintain the dignity of instructors and the integrity of institutions, and that maximize the advice given to instructors, then it need not generate the controversy and resistance typical in elementary-secondary education.

V. TRANSLATING INSPECTION TO THE UNITED STATES


One obvious lesson from England’s experience is that the idea of inspection can be interpreted in many different ways, at very different scales. The specific composition of the inspection team, the balance of insiders and outsiders, the period of time over which an inspection takes place, and the ways information is reported back to instructors and administrators are all procedural issues that can be endlessly varied. The atmosphere surrounding inspection, the balance of institutional versus individual conceptions of responsibility for teaching, and the relative emphasis on advice and improvement versus regulation are less tangible aspects of inspection that matter a great deal to its effectiveness. Individual schools or colleges can develop their own inspection mechanisms—just as a very few community colleges now have their own programs of observation (Grubb and Associates, 1999)—though such procedures would probably become more acceptable if larger numbers of institutions, in coalitions of reforming schools, or school districts, or entire states, adopted inspection.


This may be a good moment to introduce the practice of inspection into the United States. Some efforts are already underway. Thomas Wilson has been working to incorporate classroom observations, or “school visits,” into the procedures used by the New England Association of Schools and Colleges, Rhode Island, Illinois, and Chicago, and has developed a handbook to help schools structure such observations (Wilson, 1999); based on his work, Rhode Island has developed School Accountability for Learning and Teaching (SALT) as a statewide procedure including a four-day school visit (Olson, 1999). David Green, a former HMI inspector, has worked with schools in Chicago on the School Change and Inquiry Program and in New York State on the School Quality Review Initiative.8 In California, often a bellwether state, the former governor proposed creating an Office of the Chief Inspector, modeled on British practice; the current governor, Gray Davis, has enacted peer review and started a debate over the propriety and efficacy of observations in that state (e.g., Archer, 1999).


Furthermore, inspection complements some practices in the current reform movement. Certain reforms—for example, those of the Coalition of Essential Schools, the effort to develop Accelerated Schools, and efforts to institute career Academies and other forms of “education through occupations”—depend for their power on changes in teaching, and inspection provides a mechanism to give teachers trying new approaches feedback on their efforts. Other reforms place great emphasis on teachers upgrading their skills continuously, such as the efforts to develop teacher academies and professional practice schools. Still others envision communities of instructors engaged in more continuous and self-directed forms of staff development (e.g., Lieberman, 1995; Darling-Hammond & McLaughlin, 1995); for these reforms, inspection can provide both assistance in the process of change and information about whether change is taking place. Other reforms, including some state efforts and certain nationwide initiatives like that of the National Council of Teachers of Mathematics, have developed standards or guidelines for different subjects; sympathetic observers could help teachers in their efforts to adopt such standards. In our observations in community colleges (Grubb & Associates, 1999), many instructors welcomed observations in order to move toward the ideal of community colleges as “teaching institutions.” Finally, there have been many proposals for external forms of accountability, through examination, assessment, and “standards”; but without knowing what is happening in classrooms, these are likely to remain weak methods of improving schools. The current moment is therefore conducive to practices like inspection, even though the idea is not in general circulation.9


However, the English experience provides several warnings about how best to institute inspection. One is that any observation system requires a climate of trust, a sense that teachers, administrators, and policy makers are joined in a common enterprise to improve schools. Where this trust does not exist—for example, where policy makers are trying to impose an unwelcome agenda, as in English elementary-secondary education, where teacher unions and administrators are antagonistic, or where personal relations within schools are antagonistic (Payne, 1997)—the inspection process itself may be undermined as teachers engage in defensive teaching, and any recommendations are likely to be ignored as teachers (and often administrators) see them as illegitimate. While inspection can be used either for school improvement or for accountability, the English experience suggests that it may be more effective when the emphasis is placed on improvement.


Second, inspection requires that there be some generally accepted standard of what teaching should be. The model of connoisseurship, although it may work well with certain individuals, is likely to be too unsystematic for widespread use, and the lack of any guidelines makes it difficult for teachers to know how they should improve their teaching. Therefore observation-based procedures are best viewed as complementary to other reforms taking place, and would be most effective in conjunction with other reforms that have clarified what practice should look like. Further, groups of schools engaged in the same kind of reform—Coalition schools, for example, or schools adopting career-oriented Academies and majors, or even schools within a state like Kentucky that has adopted a consistent approach to reform—might also see themselves as part of a community of practice and therefore be open to inspection teams from other similar schools.


Obviously, the details of inspection procedures matter a great deal to the quality of information generated by inspection. The participation of insiders as well as external inspectors, observation of classes over some period of time, and opportunities for teachers to discuss their teaching with inspectors are all critical to the success of inspection. And some way must be found to capitalize on the expertise developed by inspectors themselves, to make this available to teachers and administrators, for example, through individual consultation during inspections, or through subsequent workshops and publications. Under these conditions observations generate expertise around teaching, grounded in practice and experience, that can continue to improve schools and colleges.


Finally, any system of inspection or classroom observation requires some kind of institutional structure. Even if a system is set up for a single school, it is important to establish expectations, regulate the conduct of the inspections themselves, and establish procedures for informing teachers and administrators about their performance. In this country, it is tempting to suggest that classroom observations simply be added to the activities of accrediting agencies, which are already established to monitor the quality of schools and colleges. However, accreditation has largely focused on minimum standards, on facilities and health and safety issues, and on the adequacy of resources (Portner, 1997), and it is difficult to imagine using these agencies for classroom observation and school improvement. There is really nothing in most states comparable to inspection, and so a new institutional structure might be necessary. But that is more opportunity than challenge, since it would enable a district, state, or coalition of schools to devise its own observational process from the ground up, incorporating teachers in the process of design, so that it can serve their interests as well as those of school improvement.


Of course, it is difficult to borrow practices from other countries. The longevity of inspection in Great Britain has made the idea of external observers widely acceptable there, just as our history has led to the privacy of the classroom. But the secrecy surrounding teaching does not serve anyone well—not students, nor advocates for reform; not administrators who have little idea what is happening in their classrooms, nor teachers who often find themselves isolated and unsupported. If teaching and learning are to be central to our educational institutions, then inspection provides a way of learning what happens in the classroom and generating the expertise necessary for improvement.


I visited schools, conducted interviews, and collected materials for this article while on sabbatical at Cambridge University during Fall 1996. I particularly wish to thank Paul Ryan, King’s College, and the Faculties of Economics and Education at Cambridge University for their hospitality, and the many teachers, administrators, and researchers I interviewed for their candor. Neal Finkelstein, John Gray, Paul Ryan, and Thomas Wilson made helpful comments on an earlier draft, as did several participants at an AERA presentation.

REFERENCES


Archer, W. (1999, February 17). California bill rekindles debates over teacher peer review. Education Week 18(6), 1, 31.


Cuttance, P. (1995). An evaluation of quality management and quality assurance systems for schools. Cambridge Journal of Education 25(1), 97–108.


Darling-Hammond, L., & McLaughlin, M. (1995, April). Policies that support professional development in an era of reform. Phi Delta Kappan 76(8), 597–604.


Finkelstein, N., & Grubb, W. N. (1998, April). Making Sense of Educational Markets: Lessons from England. Paper presented at the American Educational Research Association. Berkeley: School of Education, University of California.


Further Education Funding Council. (1993, September 22). Assessing Achievement. Circular 93028. Coventry: FEFC.


Further Education Funding Council. (1996, June). Humanities. Coventry: FEFC.


Gardiner, J. (1998, March 6). Schools to get inspection referee. Times Educational Supplement, p. 1.


Gray, J., & Hannon, V. (1995). HMI’s interpretations of schools’ examination results. Journal of Educational Policy 1(1), 23–33.


Gray, J., & Wilcox, B. (1996). Inspecting Schools: Holding Schools to Account and Helping Schools to Improve. Buckingham and Philadelphia: Open University Press.


Grubb, W. N., & Associates. (1999). Honored but Invisible: An Inside Look at Teaching in Community Colleges. New York and London: Routledge.


Hargreaves, D. (1990). Accountability and school improvement in the work of LEA inspectors: The rhetoric and beyond. Journal of Education Policy 5(3), 230–239.


Hyde, L. (1983). The Gift: Imagination and the Erotic Life of Property. New York: Vintage Books.


Jeffrey, B., & Woods, P. (1996). Feeling deprofessionalized: The social construction of emotions during an OFSTED inspection. Cambridge Journal of Education 26(3), 325–343.


Lieberman, A. (1995, April). Policies that support professional development: Transforming conceptions of professional learning. Phi Delta Kappan 76(8), 591–597.


Little, J. W., & Bird, T. (1987). Instructional leadership “close to the classroom” in secondary schools. In W. Greenfield, ed., Instructional Leadership: Concepts, Issues, and Controversies. Pp. 118–138. Boston: Allyn & Bacon.


Melia, T. (1995). Quality and its assurance in further education. Cambridge Journal of Education 25(1), 35–44.


Merton, R. (1957). Social Theory and Social Structure. Glencoe, IL: Free Press.


Office for Standards in Education. (1995). Guidance on the Inspection of Secondary Schools. London: HMSO.


Olson, L. (1994, May 4). Critical friends. Education Week 13(32), 20–27.


Olson, L. (1999, January 11). Moving beyond test scores. Education Week: Quality Counts ’99 18(17), 67–73.


Payne, C. (1997). “I don’t want your nasty pot of gold”: Urban school climate and public policy. Working Paper WP–97–9. Evanston, IL: Institute for Policy Research, Northwestern University.


Portner, J. (1997, March 26). Once status symbol for schools, accreditation becomes rote drill. Education Week, pp. 1, 30–31.


Spours, K., & Lucas, N. (1996, July). The Formation of a National Sector of Incorporated Colleges: Beyond the FEFC Model. Working Paper No. 19. London: Post–16 Education Centre, University of London.


Wilson, T. (1995). Notes on the American fascination with the English tradition of school inspection. Cambridge Journal of Education 25(1), 89–97.


Wilson, T. (1996). Reaching for a Better Standard: English School Inspection and the Dilemma of Accountability for American Schools. New York: Teachers College Press.


Wilson, T. (1999, March). Foundations of the Catalpa School Visit. Third Edition. Providence, RI: Catalpa.


Wragg, E. C., & Brighouse, T. (1995). A New Model of School Inspection. Occasional Paper. Exeter, UK: School of Education, Exeter University.


W. NORTON GRUBB is the David Gardner Chair in Higher Education at the School of Education, the University of California, Berkeley. He is interested inter alia in the potential links between educational policy (including funding) and the quality of teaching. He is the author most recently of Honored but Invisible: An Inside Look at Teaching in Community Colleges (Routledge, 1999), about institutional effects on instruction within these self-styled “teaching institutions.”




Cite This Article as: Teachers College Record, Volume 102, Number 4, 2000, pp. 696–723.
