
An Evaluation System for Curriculum Innovation


by Garlie A. Forehand - 1971


A review of studies designed to evaluate curriculum innovation is likely to yield discouraging results. Abramson, for example, cites a "continuing paucity of studies which can serve as models for curriculum research."1 The problem is not so much a lack of technical competence. Technical standards -- for example, those pertaining to objectivity, reliability, and relevance to program and institutional objectives -- have not been easy to define, although some progress is discernible.2 A more perplexing problem is comprehensiveness. Goals that might reasonably be studied are so wide ranging as to overshadow the resources of any particular project. A study that purports to evaluate a curriculum, for example, is doomed to be judged inadequate by comparison with all of the relevant studies that might have been done. Moreover, the absence of an analysis of the different meanings and purposes of evaluation leaves the researcher without clear guidelines. Wessel has recognized this problem and suggested that the development of a "taxonomy of evaluation" is an essential task in the improvement of curriculum evaluation.3

The first purpose of this paper is to examine a range of types of evaluation studies, the circumstances in which they are appropriate, and the information and techniques needed. For convenience, we consider these questions in the particular context of evaluating curricula in a college or university. This will allow us to assume that curriculum development and evaluation occur within a single organization with colleagues working toward shared objectives. While examples and some proposals will be specific to this context, the underlying ideas should also be applicable to evaluation of primary and secondary curricula.

The range of activities to be considered defines an organizational effort more elaborate and expensive than any now in existence. There are two values in proposing such an idealized program. First, even if an institution does not develop a comprehensive program, its evaluation efforts can be made more effectively by selection of emphases and by a comparison of what it is doing with what it might be doing. Second, an institution's commitment to improve its educational program will be increased in effectiveness by the opportunity to appraise the success of its efforts. A greatly increased allocation of effort and resources to evaluation may well be wise economy.

The second purpose of this paper, then, is to propose that programs along lines suggested below be instituted by educational institutions. The suggestions are concerned with the capabilities of an evaluation system, rather than its particular form. The particulars of a plan -- allocation of emphasis, mechanisms for interaction among members of the organization, and amount of resources invested -- will vary from institution to institution and from occasion to occasion.

The Search for a Model

Curriculum evaluation, like other developing scholarly activities, looks to older, related disciplines for models and techniques. Since evaluation studies generally involve studies of behavioral outcomes, psychological research has provided the models for most formal studies.4 In the absence of analysis of the full range of evaluation problems, however, there is danger that borrowed methods may be applied on the basis of inexact analogies; and, indeed, in curriculum evaluation there are instances of good tools being put to wrong jobs. Standard test construction methodology, for example, has been used to design evaluation instruments, despite the fact that its emphasis on precision of individual scores and discriminability within groups produces tests which are relatively insensitive to environmental change and unresponsive to program objectives.5 Similarly, the adoption of principles of experimental design as a model for setting up a comparison of programs can easily produce uninformative studies. A study based on the experimental design model requires careful control of the experimental "treatment" and statistical comparison with alternative well-defined treatments. To define one configuration of interacting variables as an "experimental program" and another as a "control program" is arbitrary; a variation in one element of either treatment renders the experiment uninformative. Moreover, without analysis of the contributing elements, the effective contributors to variance are invisible, and may well vary unobserved the next time the courses are taught. The experimental design paradigm may also have the unfortunate effect of freezing a plan at some given point so that it may be handled as an experimental treatment, thus blocking the opportunity to vary program elements in response to feedback.

An evaluation system responsive to the demands of curriculum development must be based on a more elaborate and flexible model than can be drawn from the repertoires of the psychometrician or experimental psychologist. The activity of such a system would more nearly resemble an operations research program than a psychological laboratory. It would coordinate the efforts of many individuals and parts of an organization and provide technical facilities; it would collect information systematically from all parts of the enterprise; and it would be capable of handling questions about the likely or actual outcome of an action and provide multivariate descriptions that would be available for subjective evaluation, for analysis in suggesting modifications in a course of action, and for guidance in the design of more controlled experiments. The program would be capable of applying when relevant the principles of controlled experimentation and of test construction. Psychometric and experimental design techniques developed and adapted especially for educational research would be an integral component.6 But it would also contain a variety of capabilities and techniques borrowed from other fields or developed specifically for it.

In the sections below we will consider classes of questions that might reasonably be put to a curriculum evaluating system, and suggest some methods to answer them. The suggestions raise in turn a host of organizational and technical questions. Some may prove unworkable, and all will have to be adapted to a particular organization and its problems. Throughout this paper we will use the term "educational program" to refer to any planned set of procedures designed to accomplish educational goals. The term may thus refer to courses, sets of courses, automated instructional systems, counseling and tutorial plans, and so on.

Explicitness

First, evaluation is not assumed to consist of a single study or series of studies; it is an ongoing activity designed to provide answers to questions which might arise from a number of sources and perspectives.

Second, an evaluation study does not itself constitute the evaluation; it is a compilation of data, perhaps with resulting recommendations, which must then be interpreted and acted upon by appropriate representatives of the organization.

Third, the number and types of studies conducted at a given time depend upon the institution's needs and the resources devoted to evaluation. The specific recommendations to be developed describe a general facility to be used selectively. Selection of appropriate studies is an important part of the work of an evaluation program.

Finally, curriculum evaluation requires the participation of persons from all parts of the institution, performing roles that are suggested below. Also required is a central evaluation program staff, which will consist of professionals with appropriate backgrounds and research interests—such as psychometrics, educational psychology, and sociology—along with research and technical assistants.

Two Levels of Evaluation

Evaluation implies the collection of information about both the nature and the outcome of an educational procedure with a view toward assessing the quality of the procedure. This assessment presumably enters into decisions regarding the adoption, continuation, or modification of the procedure. A question arises at this point, a question that is usually not explicitly put, and for this reason is frequently a source of contention and controversy: From whose point of view is evaluative data collected? The question underlies Forehand's distinction between extrinsic and intrinsic objectives and that of Campbell and Stanley between internal and external validity.7

The process of curriculum development contains several elements that are now reasonably well agreed upon. A team consisting of subject-matter specialists, teachers, and occasionally behavioral scientists develops a statement of objectives that is as explicit and as closely related to obtainable outcome measures as possible. They develop procedures (which may embrace parts of courses, courses, or series of courses) designed to accomplish these objectives and based on a stated or unstated pedagogical theory. They conduct tryouts of the procedure, gather feedback from any available source, and assess the effectiveness of elements (units, techniques, materials, etc.) for accomplishing the objectives. From the point of view of the development team, the function of evaluation data is to provide feedback concerning the attainment of objectives for immediate use in modifying procedures. This evaluation takes place within the process of curriculum development. We may term this kind of activity project evaluation. Cronbach has argued that project evaluation is the essential function of the evaluation program.8

There are other questions that are frequently and legitimately posed by persons outside the development team, for example, administrators, suppliers of funds, accrediting committees, or critics of the team. These questions arise from the fact that the team's objectives seldom exhaust the relevant possible objectives, and its pedagogical theory is never the only defensible one. Objectives, theory, or both may not be shared by others with a role in curriculum decision-making. Thus evaluation must also take place in an institutional context that is broader than the development team. Evaluation at this level may ask: Do the objectives underlying the program adequately reflect the institution's goals? Are the institution's objectives satisfactorily met? If not, should the institution demand modification of the program, or should it modify its goals? Does the benefit achieved by the program justify the expenditure of resources? In short, is the program acceptable to the institution? These are practical, hard-headed questions. They provide less opportunity for theory and theory-related research than do project evaluation questions, and consequently, may be less fun for course developers and researchers. They are nevertheless important questions that an institution is to some degree obligated to ask. We may term evaluation functions responsive to such questions institutional evaluation.

Project evaluation and institutional evaluation are not incompatible. Data collected for one may be applicable to the other. Institutional evaluation should always include the objectives and rationale of program developers, and developers should be aware of the institution's needs. At times, institutional objectives and project objectives will blend and coincide. The differing perspectives, however, will usually produce important differences in methodology, and ignoring the distinction will often result in misleading conclusions. For example, the measurement of student achievement in evaluating an educational program is a source of dilemma. A measure of achievement ought to be related intrinsically to the objectives of the educational program; otherwise, traditional definitions of achievement and the demands of graduate schools or employers will come to dominate the goals of the institution and hence stifle innovation. Yet if programs are evaluated only from the frame of reference of their designers, there is a risk that idiosyncratic objectives will place limits on student achievement that would be unacceptable to the institution as a whole. Thus there are two kinds of evaluative questions relevant to the achievement of students as a measure of program success. Does the program accomplish its objectives? And does student achievement meet an acceptable standard? Study of program outcome in relation to program objectives is best considered a function of project evaluation. The major institutional function should be the establishment of standards.

A system designed to provide information relevant to evaluative questions will be responsive to problems of both project and institutional evaluation. For institutional evaluation it can provide information about the outcomes of educational programs and about the nature and quality of the programs themselves. For project evaluation it can provide consultation to help designers obtain relevant feedback and be a means for developing basic research on teaching.

Institutional Evaluation

Evaluation from the perspective of an institution will generally focus upon the program as a whole rather than upon its constituents. Program developers and institutions view a program from different perspectives. What may seem to the program initiators a whole will more likely be to the institution only a subgoal or part. Institutional evaluation considers the achievements of any particular program in relation to a network of other programs and goals, a function that cannot be performed at the project level. Moreover, since curriculum development and revision are ongoing activities, program developers can profit from "external" appraisal of the program. New goals and new perspectives on method and content can be put to use in continued development work.

In seeking to assess programs in relation to institutional goals, curriculum evaluation functions as a diagnostic system: It can identify regions of the overall program where attention is needed. It does not itself provide the remedy, which must be sought in further curriculum development and experimentation at the project level. To provide the relevant information, institutional evaluation must focus on the outcomes of a program—the achievements of its students and the reaction that it engenders—and upon the nature of the program itself.

Information about Outcomes

The perennial opening question in curriculum evaluation is: "What are the objectives?" This question, difficult enough for a relatively compact team to answer, is compounded in difficulty when asked with respect to an institution. Institutional evaluation involves the study of the institution's objectives as well as the ways to assess their attainment. Since objectives are embedded within the institution's programs, their articulation is an empirical as well as a policy-setting problem. The suggestions set forth below represent something of a "bootstrap" operation; the existence of a program is the stimulus for formulating the institution's expectations of it, and reactions of faculty and students to a program provide information about both the degree of the program's acceptance and the criteria upon which the reactions are based.

Standards of Student Achievement

It has been suggested that the role of institutional evaluation with respect to student achievement is to establish standards and assess the degree to which those standards are achieved. The nature and function of achievement standards must be carefully defined. If standards are rigid, or if they are considered the sole evaluative criterion, they can be counter to educational progress. There are a number of common pitfalls that an institution can and must avoid. A constructive system of university standards will reflect the following premises:

  1. Standards are subject to continuous revision. They must be sensitive to changes within fields of knowledge and in educational philosophy.
  2. Standards result from a process of analysis which includes input from a wide range of opinions and perspectives within an institution.
  3. Standards are developed in cognizance of, though not necessarily in agreement with, the objectives reflected in the design of relevant programs.

The mechanism suggested for building institutional standards is the constitution of standards committees. The particular standards committees in a given institution would depend on the nature of the institution's programs. There might be disciplinary standards committees, for example, in mathematics, literature, history, and social science. To explore the role of such committees, let us take as an example a Standards Committee in Mathematics, which will focus its attention on a new freshman course for liberal arts students.

It is proposed that such a committee be composed of members of the mathematics department, members of departments in which mathematics training is needed for course work (e.g., physics, economics, statistics, psychology), and other knowledgeable faculty members serving "at large." The committee would work with a member of the evaluation program staff competent in testing and data analysis. Working over a period of time (perhaps simultaneously with the development of the course), this committee would:

(a) Study the objectives and procedures of the course being developed, consulting with a member of the development team (who may be a member of the committee).

(b) Examine criteria which define what a liberal arts sophomore ought to be able to do in mathematics; such criteria might include opinions of appropriate faculty members, opinions of relevant professionals outside the organization, specifications developed by professional associations, and requirements of upper-level courses for which mathematics background is required (e.g., in mathematics, physics, or economics).

(c) Develop a statement of institutional standards in two forms:

  1. A conceptual statement, naming concepts and skills felt by the committee to represent important facets of the desired achievement.
  2. An operational statement, consisting of an examination embodying the conceptual standards and prepared in consultation with the committee by the evaluation program staff.

(d) Determine a tentative standard of achievement on the examination. Such a standard may take the form of a score or set of scores on different parts of the examination. Procedures for determining such standards are themselves subject to development and modification. An initial approach for an inexperienced committee might be a study of student performances on an actual or simulated examination, discussion of opinions of committee members regarding acceptability of performances, and development of a consensual standard.

The outcome of the standards committee's work will be a procedure for gathering data in the form of a Standards Examination. The institution will be in a position to determine and report the proportion of students meeting the standards defined by the committee. Several caveats are in order with regard to the nature and purpose of standards examinations. Such an examination is not a course examination with its emphasis on evaluating the performance of individual students. In particular, an attempt to set a standard performance level implicitly or explicitly relative to a distribution of scores would be inappropriate. Such a performance level should be a criterion-referenced rather than a norm-referenced measure. Published achievement tests would also be inappropriate substitutes, since a critical function of the standards committee is to set standards in relation to a specific university's goals and practices, although carefully selected items from such tests, if available, might be useful resources.
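
To make the criterion-referenced character of such reporting concrete, the sketch below (in Python, with wholly hypothetical committee judgments and student scores) shows one simple way a consensual cut score might be derived from committee opinions and the proportion of students meeting it computed. The article prescribes no particular computational procedure; this is an illustrative assumption only.

    # Illustrative sketch only: a criterion-referenced summary of a standards
    # examination. Committee judgments and student scores are hypothetical.
    from statistics import median

    # Each committee member's judgment of the minimum acceptable total score
    # on the standards examination (hypothetical values).
    committee_judgments = [62, 58, 65, 60, 64]

    # One simple consensual standard: the median of the judgments.
    cut_score = median(committee_judgments)

    # Total scores earned by students completing the course (hypothetical).
    student_scores = [71, 55, 68, 62, 80, 59, 66, 73, 61, 58]

    # Criterion-referenced reporting: the proportion of students meeting the
    # standard, not each student's standing relative to the group.
    proportion_meeting = sum(s >= cut_score for s in student_scores) / len(student_scores)

    print(f"Consensual cut score: {cut_score}")
    print(f"Proportion meeting the standard: {proportion_meeting:.2f}")

Nothing in this summary depends on the distribution of scores; the standard comes from the committee's judgment, which is the point of distinguishing criterion-referenced from norm-referenced measures.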

Attitudes toward Programs

From an institutional point of view, affective reactions of students and faculty serve to indicate the degree of a program's acceptance. The information required for institutional evaluation may be relatively global in scope. The methodology of opinion polling provides a convenient and relevant approach to this problem.

There are several appropriate target populations whose opinions are relevant to institutional evaluation, including: (a) students who have participated in the program, (b) students scheduled to participate in the program, (c) other students, (d) faculty of departments relevant to the program, and (e) other faculty. One of the first tasks in such a survey is thus to decide which target populations to include. In many instances, it will be possible to survey a 100 percent sample of the target population. It will also be possible to follow up non-respondents, since the campus community is sufficiently small and accessible.

The specific questions asked will depend on the circumstances of the institution and the program under study. The goals of institutional evaluation, as we have outlined them, suggest that questions initially be general and directed; as questions concerning bases for opinions arise, they can be studied in more detail by similar methods. The following classes of questions may be suggested:

  1. Degree of familiarity with the program under study.
  2. Opinion regarding the objectives of the program (perhaps with a scale ranging from "fully disagree" to "fully agree").
  3. Opinion regarding the effectiveness of the program in meeting its goals.
  4. Opinion regarding continuation of the program.

The results of an opinion study of this sort will provide information relevant to decisions, but will not define alternative actions. Negative opinions of a particular group might indicate that the group inappropriately rejects the goals of the program; in this case, the proper action would aim toward increasing acceptance of the program rather than changing the program itself. Decisions concerning programs must be made by people, whether or not attitudinal data are available. The function served by such studies is to substitute comparatively carefully gathered and analyzed data for the impressions that frequently influence decisions.
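
As one illustration of how such a poll might be summarized, the sketch below (Python, with hypothetical respondents and a hypothetical question on the "fully disagree" to "fully agree" scale) tabulates opinions separately for each target population, keeping the summary global rather than individual. The groups, question wording, and data are assumptions for illustration, not part of the proposal.

    # Illustrative sketch only: tabulating hypothetical opinion-survey
    # responses by target population.
    from collections import defaultdict
    from statistics import mean

    # Each record: (target population, response to "The program should be
    # continued", on a 1-5 scale from "fully disagree" to "fully agree").
    responses = [
        ("participating students", 4), ("participating students", 5),
        ("participating students", 3), ("scheduled students", 4),
        ("scheduled students", 2), ("faculty of relevant departments", 5),
        ("faculty of relevant departments", 4), ("other faculty", 3),
    ]

    by_group = defaultdict(list)
    for group, rating in responses:
        by_group[group].append(rating)

    # Report each group's mean rating and the share of clearly favorable
    # responses (4 or 5).
    for group, ratings in by_group.items():
        favorable = sum(r >= 4 for r in ratings) / len(ratings)
        print(f"{group}: n={len(ratings)}, mean={mean(ratings):.1f}, "
              f"favorable={favorable:.0%}")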

Reporting

Discussions of variables in curriculum evaluation have placed almost exclusive emphasis on outcome. The arguments for this emphasis are that input variables—methods and materials going into the program—are the province of the development team; that so long as a program achieves acceptable student performance and acceptance by students and faculty the program has accomplished its purpose; and that attempts to assess the methods and materials themselves would imply that content and approach are being externally controlled. The unquestionably valid principle underlying these arguments is that curriculum development is a professional activity, the responsibility for which rests with the developer. There are, however, important functions to be served in obtaining information about the nature and quality of program materials. If a curriculum is to outlive the project that gave it birth and if continued improvement is envisioned, content and method must be subjected to professional appraisal. Likewise, information about required resources is necessary to make intelligent administrative decisions. A crucial problem in an institutional evaluation system is the development of methods for collecting and disseminating information which will not interfere with the professional freedom of the development team.

The norm appropriate to evaluation of programs is analogous to that which has evolved in research. A researcher is expected to describe in detail his variables, methods, and theoretical orientation, and to recognize opposing viewpoints. Critical appraisal of the work is expected, even though it does not constitute its ultimate evaluation; rebuttal and reappraisal are available as checks on the evaluation process. Institutions and professions have learned—with incomplete but not inconsiderable success—to evaluate the quality of research with respect to its methodological and analytical soundness without restricting freedom to develop ideas. The elements of this process—complete public information, debate, and suspension of final judgment—are likewise applicable to curriculum development.

As curriculum development comes to be recognized as a professional field, it is likely that standards for the description of programs, comparable to those used by research journals, will develop. Until then, an institution can facilitate communication about programs by developing its own format for reporting — and perhaps, in the process, contribute to more widespread professional standards. Like journal standards, such a format should outline the general information needed, striving to include enough detail so that the program can be reconstructed by persons other than the developers. It should also permit the writers their own prose and point of view. An institutional evaluation system might profitably call for reports at two stages. Upon initiation of a program, the development team would prepare a program description, including such material as course outlines, reading lists, and organizing concepts and techniques. After, say, a year, a more complete report would be issued, including a presentation of the evaluation studies done to that point.

The existence of curriculum development reports will facilitate professional evaluation. Within an institution, teachers can raise questions about the desirability of elements of the program, question the logic of the arguments relating content and method to outcome, or propose additional evaluation studies—and developers can reply. As materials are disseminated, such colloquy can extend beyond the particular campus. To a degree, this dialogue is likely to grow of its own accord as published reports provide both the stimulus and the subject for debate. There are several things that institutions can do, however, to facilitate such discussion and make it constructive. A relatively formal forum, such as a periodic symposium, would offer three advantages: it would focus attention on the projects, stimulate communication across fields of study, and provide a setting in which criticism would be expected to satisfy professional standards.

One further type of information collection should be mentioned for completeness. An institution's planning and decision-making require knowledge of costs of programs and their required resources. These are administrative matters, however, and more amenable to analysis by already available management techniques than the other problems we have discussed. The collection of such information, nevertheless, should be considered as part of an overall curriculum evaluation system, and should be coordinated with its other parts.

Integration of Information

The information gathered for institutional evaluation will be useful to the extent that it contributes to the institution's appraisal of its own effectiveness and improves its efforts to increase its effectiveness. For this purpose the information must be integrated and examined configurally in relation to institutional objectives. As experience accumulates, it should be possible to develop techniques to facilitate this integration and analysis. At any stage, however, judgment by responsible organizational members is required. An important component of a curriculum evaluation system, therefore, is a mechanism for summarizing evidence of the institution's progress and directing future organizational steps. This is best accomplished through participation of persons from all parts of the organization. Therefore, we suggest that an integral component of the system be an institution-wide Curriculum Review Committee. This committee would include senior administrative officers and faculty and administration representatives of the various organizational units. It would study reports from the evaluation program staff, request new studies, assess progress, recommend new programs, and summarize these proceedings in a periodic report.

Evaluation within Projects

The problems of evaluation within curriculum development projects have received considerable attention in the literature.9 We shall not discuss these problems in detail here. Instead, we may consider briefly some intransigent problems in project evaluation and the contributions that might be made by a curriculum evaluation system.

The most chronic problems in curriculum evaluation have resulted from a conceptual gap between two spheres of thought, one focusing upon content and one upon behavior. This gap is manifested in the difficulty encountered in attempting to formulate behavioral objectives that are consonant with conceptual ones. Another manifestation is the conceptual gap between the pedagogical procedures used and the outcomes desired. The basic assumption of curriculum development (indeed, of teaching) is that a relationship between procedure and outcome exists, and it is in this respect that curriculum development most resembles a psychological experiment. It is an experiment, however, in which the learner is truly a "black box." From the point of view of both the psychologist and the curriculum developer, the process by which stimulus produces response is difficult to see. In curriculum development, it often seems that the procedure is described in terms of subject matter and the outcome in terms of behavior, and that the only readily available rationale for expecting a relationship between them is the faith that the new program is better, and hence must produce better outcomes.

A formal evaluation system is neither a necessary nor sufficient resource for the solution to these problems. There are, however, two important contributions that it can make. It can provide consultation to help curriculum development projects obtain valid feedback concerning the results of their efforts, and it can provide a home for research that is sensitive to the interaction of subject matter, procedure, and behavior.

The Consultation Function

We may assume that curriculum development projects vary in comprehensiveness, budgets, and personnel. Some may have professional evaluation personnel; others may have resources for extra-institutional consultation; while still others may have no access to professional advice on evaluation. A minimum role that an evaluation program staff can play is coordination of evaluation efforts. Many such efforts suffer from their ad hoc character. The evaluation instruments devised may be so special that they offer little communicative value over and above the program materials themselves. They may also be selected from a parochial point of view. Suggestions regarding additional types of information might help curriculum developers in selecting the feedback to seek. Evaluation efforts across different parts of an institution would be aided by having a common vocabulary and a common set of expectations regarding the assessment of outcomes. A proposed device for this purpose is a basic "evaluation package." The package would include:

  • A summary of procedures for defining educational objectives;
  • An outline of common objectives, with examples of evaluative items, and suggestions for building evaluation instruments;
  • Brief instruments relating to common institutional goals in major subject matter areas;
  • Brief instruments tapping attitudinal objectives which may be applicable across subject matter areas;
  • A set of suggested methods of analysis—simple and outlined in terms of the analytical goals they accomplish—together with computer programs and instructions for using the methods;
  • An outline of a suggested basic evaluation report.

With such a package, curriculum development projects can perform basic evaluation studies with a minimum of technical assistance. Consultants from the evaluation staff can use it as a guide in developing evaluation programs suited to the needs of particular projects. As in all cases of project evaluation, decisions regarding what elements of the package to use rest with the curriculum development team.
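
The kind of simple, goal-oriented analysis such a package might bundle is suggested by the sketch below (Python, with hypothetical objectives and proportion-correct scores): a per-objective pretest-posttest gain summary that a development team could run with minimal technical help. The objective labels, data, and choice of statistic are illustrative assumptions, not part of the proposal.

    # Illustrative sketch only: a per-objective pretest/posttest summary of
    # the sort an "evaluation package" might include. Data are hypothetical.
    from statistics import mean

    # Proportion-correct scores by stated objective, before and after the unit.
    pretest = {"sets and logic": [0.40, 0.55, 0.35], "functions": [0.50, 0.45, 0.60]}
    posttest = {"sets and logic": [0.70, 0.80, 0.65], "functions": [0.55, 0.50, 0.65]}

    # Report mean gain per objective, so developers can see which objectives
    # the unit served well and which may need revision.
    for objective in pretest:
        gain = mean(posttest[objective]) - mean(pretest[objective])
        print(f"{objective}: mean gain = {gain:+.2f}")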

The Research Function

The question of behavioral research has been obscured by the multiple demands of the evaluation function. Research usually takes place within an ongoing curriculum development project. For this reason variables are usually expressed in terms of content and method rather than psychological constructs, and the curriculum that to the curricular designer is an esthetically pleasing Gestalt is to the researcher an impenetrable ball of wax. On the other hand, an attempt to do such research in a psychological laboratory, removed from actual curriculum work, carries the parallel danger that the variables will be only tangentially related to realistic educational situations. To bridge this gap, there is need for a locus for research which is not limited to variables defined in curriculum development, but is responsive to educational problems. A recognized basic research function within a curriculum development system can help to fill this need.

Curriculum development begins with the premise that invention and modification of materials and teaching techniques can significantly improve the quality of education. That premise has prompted such questions as: What do we want students to have when they finish a course in chemistry (or English or mathematics) that they did not have when they began? What concepts of the field are especially related to these objectives? What materials and methods are especially pertinent to these concepts? A different approach, focusing on behavioral processes from the point of view of psychologists and sociologists, might produce such questions as: What behavioral variables are related to desirable educational outcomes? What processes are likely to produce such behavior? What environmental conditions are likely to affect these processes? Both sets of questions can contribute to the development of an integrated system of teaching activities -- in other words, to a curriculum.

The proposed research function for an evaluation system would make a place for behavioral research and encourage its interaction with other curriculum activity. This research will probably differ from traditional educational research in three ways. First, it would be forced to focus upon variables operating within systems rather than in isolation. This will call not only for the kind of multivariate conceptualization of the teaching-learning process that Siegel and Siegel10 have presented, but also for the development of models that permit investigation of various modes of interaction among variables, including the effects of timing and sequencing. Second, behavioral curriculum research would be freed of the demand to evaluate predefined educational practices, particularly such administrative variables as class size, lecture versus discussion, and televised teaching techniques that have preoccupied much of psychological research on teaching.11 It would have the opportunity to construct hypotheses from theoretical analyses of learning and thought processes and translate them into methods and systems of methods for solving educational problems. Finally, behavioral curriculum research would interact with content-based curriculum development in the attempt to define operational applications of the proposed methods relevant to educational objectives in a live educational setting.
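
One way to study variables operating within systems rather than in isolation is to let them interact in a single model. The sketch below (Python with NumPy, entirely simulated data and hypothetical variable names) fits an ordinary least-squares model containing an interaction term between two instructional variables. The article proposes no specific statistical model, so this stands only as an illustration of the general idea.

    # Illustrative sketch only: fitting a model in which two instructional
    # variables interact, using simulated data and ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    feedback_rate = rng.uniform(0, 1, n)   # e.g., frequency of feedback to students
    sequencing = rng.uniform(0, 1, n)      # e.g., degree of planned sequencing

    # Simulated outcome in which the variables interact: feedback helps more
    # when the material is well sequenced.
    outcome = (0.2 * feedback_rate + 0.3 * sequencing
               + 0.5 * feedback_rate * sequencing + rng.normal(0, 0.1, n))

    # Design matrix with an interaction term; least-squares fit.
    X = np.column_stack([np.ones(n), feedback_rate, sequencing,
                         feedback_rate * sequencing])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print("intercept, feedback, sequencing, interaction:", np.round(coef, 2))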

Conclusion

We have suggested that curriculum evaluation is best thought of as a system of interlocking functions. How do the various functions become coordinated with one another and with curriculum development, and how do they remain sensitive to an institution's evaluation problems? Coordination requires a degree of organizational interaction with which few institutions have experience. The development of technical procedures depends upon identification of problems based on experience. For these reasons the development of an effective system of evaluation functions may best be considered an evolutionary process. We thus anticipate stages in the growth of a curriculum evaluation system.

The first step is the definition of evaluation as an organizational function, and the focusing of organizational effort upon the problem. This will facilitate recognition of problems that transcend discrete projects and development of concepts and procedures that are applicable across projects.

The second step will be planning evaluation activities in relation to the process of curriculum development. Stake has presented an abstract plan, based on a taxonomy of evaluation functions and data, which provides an excellent model.12 The attempt to operationalize such a planning model in a particular organizational setting will encourage greater integration of evaluation functions.

The final step in building an evaluation system is the integration of curriculum development and curriculum evaluation into a unified process. Regular assessment of progress will lead to new plans, which in turn will be evaluated.

The conception of curriculum evaluation implicit in this discussion is broader than that usually operationalized. Several implications of the broader conception deserve mention.

First, articulation of the purpose of a study may facilitate intelligent design. Many curriculum evaluation projects appear to attempt project and institutional evaluation simultaneously, with the result that their designs are not well adapted to either. Recognition that a given study may serve one or several -- but not all -- of the functions of curriculum evaluation should lead to a considered selection of emphases, and a more focused choice of methods.

Second, from an institution's point of view, coordination of its evaluation efforts into an interacting system offers advantages. An organizational activity that views evaluation from a wide perspective can identify ways to use resources more efficiently, ways to unify fractionated projects, and problems that "fall between" existing projects and hence go unnoticed. It is not implied that institutions should be conducting all types of projects at all times. A curriculum evaluation system of the sort proposed here helps to direct attention toward objectives and studies most relevant to objectives.

Third, the system outlined here will be most necessary and most effective when curriculum development is an ongoing activity in the institution. By providing feedback and stimulating questions, such a system might contribute significantly to the professionalization of curriculum development.

1 D. A. Abramson, "Curriculum Research and Evaluation," Review of Educational Research, Vol. 36, 1966, pp. 388-395.

2 L. J. Cronbach, "Evaluation for Course Improvement," Teachers College Record, Vol. 64, 1963, pp. 672-683; G. A. Forehand, "The Role of the Evaluator in Curriculum Research," Journal of Educational Measurement, Vol. 3, 1966, pp. 199-204; E. J. Furst, "Tasks of Evaluation in an Experimental Economics Course," Journal of Educational Measurement, Vol. 3, 1966, pp. 213-218; R. E. Stake, "The Countenance of Educational Evaluation," Teachers College Record, Vol. 68, 1967, pp. 523-540.

3 N. Y. Wessel, "Innovation and Evaluation: In Whose Hands?", Proceedings of the 1966 Invitational Conference on Testing Problems. Educational Testing Service, 1967.

4 J. I. Goodlad, The Changing School Curriculum. New York: Fund for the Advancement of Education, 1966.

5 Cronbach, op. cit.; R. Glaser, "Instructional Technology and the Measurement of Learning Outcomes: Some Questions," American Psychologist, Vol. 18, 1963, pp. 519-521.

6 D. T. Campbell and J. C. Stanley, "Experimental and Quasi-Experimental Designs for Research on Teaching," N. L. Gage, ed. Handbook of Research on Teaching. Chicago: Rand McNally, 1963; C. W. Harris, ed. Problems in Measuring Change. Madison: University of Wisconsin Press, 1963.

7 Forehand, op. cit.; Campbell and Stanley, op. cit.

8 Cronbach, op. cit.

9 Abramson, op. cit.; Cronbach, op. cit.; Forehand, op. cit.; Furst, op. cit.; Stake, op. cit.; H. Grobman, "The Place of Evaluation in the Biological Sciences Curriculum Study," Journal of Educational Measurement, Vol. 3, 1966, pp. 205-212; J. T. Hastings, "Curriculum Evaluation: The Why of the Outcomes," Journal of Educational Measurement, Vol. 3, 1966, pp. 27-32.

10 L. Siegel and Lila C. Siegel, "A Multivariate Paradigm for Educational Research," Psychological Bulletin, Vol. 68, 1967, pp. 306-326.

11 W. J. McKeachie, "Research in Teaching: The Gap Between Theory and Practice," C. B. T. Lee, ed. Improving College Teaching. Washington, D.C.: American Council on Education, 1967.

12 Stake, op. cit.



Cite This Article as: Teachers College Record, Volume 72, Number 4, 1971, pp. 577-593.

About the Author

Professor Garlie A. Forehand heads the department of psychology at Carnegie Mellon University, Pittsburgh.
 
Member Center
In Print
This Month's Issue

Submit
EMAIL

Twitter

RSS