Title I Federal Evaluation: The First Five Years


by Kathryn A. Hecht - 1973

It is the opinion of this author that much of the active interest and progress in evaluation in the last seven years was generated, either directly or indirectly, by the seemingly simplistic Congressional evaluation requirements of Title I. Hopefully, the progress of Title I federal program evaluation and development of an evaluation profession will continue to proceed on a mutually supportive basis.

Kathryn A. Hecht received her doctorate in evaluation and research from the University of Massachusetts in September, 1972. This article is a revised version of a paper presented to the Symposium on Data Analysis of the 1970 Elementary Survey of Compensatory Education, at the Annual Meeting of the American Educational Research Association, Chicago, Illinois, April, 1972. The author wishes to express her appreciation to Dr. Jimmie C. Fortune and Dr. Richard M. Jaeger for their helpful criticisms and suggestions but assumes total responsibility for the information and opinions expressed in this report.


The 1965 Elementary and Secondary Education Act (ESEA) was landmark educational legislation. Title I of that act had two unique features which, in combination, have produced much controversy and, hopefully, something more.


ESEA was the first large-scale federal aid to elementary and secondary education. Given the long history of attempts to pass a school aid law, it has been called a child of compromise. The most notable stumbling blocks which had to be overcome included church/state, civil rights, and urban/rural issues. In many minds, the first priority of this act was simply to produce federal aid legislation, and only secondly to begin the reform of the educational system.1 To emphasize the relative size of funds involved, Title I of that act contained more money than the total Office of Education direct appropriations for the year 1964.2


The second unique feature, which gains in importance because of the first and the attention it commanded, was the evaluation requirement included in Title I. It was the first time that federal education legislation included an evaluation requirement within the law itself. It was the concern of Senator Robert Kennedy, main proponent of the evaluation requirement, to prevent schools which had failed in the past from doing more of the same with Title I funds.3 Though the requirements were not very specific and the know-how to carry them out was soon found to be lacking, the lawmakers were quite enthusiastic in their support of the evaluation mandate.4


The law stated that for a local project to be approved:


. . . effective procedures, including provision for appropriate objective measurements of educational achievement, will be adopted for evaluation at least annually of the effectiveness of the programs in meeting the special educational needs of educationally deprived children.5


The local educational agency was required to report this and other information annually to the state educational agency; the state, in turn, was required to report this and other information required by the Commissioner to the Office of Education. The Commissioner's evaluation responsibilities were quite vague but included his duties to establish basic criteria and receive such reports "as may be reasonably necessary to enable the Commissioner to perform his duties under this title."6 Though the Commissioner's reporting requirements were left ambiguous, it was understood that yearly reporting to Congress would be necessary, and such information required by the law would of course be part of this report. The law also created a National Advisory Council on the Education of Disadvantaged Children.


The framework of ESEA was dictated by political viability rather than administrative ease. Parts of the act that might cause conflict were purposely left more ambiguous than others.7 Former Commissioner of Education Harold Howe has said:


E.S.E.A. was the only type of federal activity in education which was likely to be politically viable in 1965. ... I doubt that anyone could have dreamed up a series of education programs more difficult to administer and less likely to avoid problems in the course of their administration, but E.S.E.A. was not designed with that in mind.8


Early implementation of Title I evaluation provisions was fraught with difficulties, many stemming not only from the legislative background but also from its timing. The law was vague and the administration was unprepared, for no planning time had been allotted. The Office of Education was reorganized in July, 1965, and the reorganization included the establishment of the administrative structure for ESEA. Final appropriations were not made until September, yet the law required that money was to be spent in that first year, school year 1965-66, which was already underway. In this setting, it is hardly surprising that little work had been done on the evaluation requirements prior to the implementation of the Title I program. Guidelines for state and local expenditures were not available until early 1966. Given this delay, some states had approved projects and issued guidelines before the Office of Education had prepared theirs.


The magnitude of the act, the administrative restructuring, and the new outlook it required of the Office of Education were further complicated by severe staffing problems. The Division of Compensatory Education, responsible at that time for Title I program development and evaluation, had no qualified full-time evaluator on its staff for its first six months of operation. Recruitment was difficult and usually unsuccessful. Local, state, and federal levels were competing for a very limited number of qualified people; the Office of Education had long had a poor image among researchers, and few were willing to take a risk with new legislation which was little understood.9


In addition, the timing of the legislation and the need to get programs underway and money spent precluded sufficient planning and groundwork and led to many misconceptions, including those concerning the evaluation requirements. Staff in the Division of Compensatory Education realized the need to assure the local and state administrators of Title I that the evaluation requirement would not be used as the basis for refunding. In that first year, they emphasized the importance of evaluation to the local agency as a tool in effective planning, and played down its difficulty. In an interview intended to assist schoolmen in administering Title I, Commissioner Howe was quoted as saying:


Maybe too much fuss is being made about evaluation. Evaluation procedures are already built into Title I projects; now, you simply carry them out. ... I don't think it's unreasonable to expect schoolmen to do a careful job on evaluation. We're really not asking them to do any more than they normally would do on their own. . . . No one is going to be "penalized" as a result of the evaluation.10


To the local administrators, the requirement to evaluate Title I programs was something tacked on to the massive paperwork they already had to do, and a bothersome requirement at that. By the time the Office of Education had guidelines on program evaluation, the local school districts already had their programs underway. One can summarize the phase of early implementation of Title I evaluation by stating that what no one was sure how to do had to be done in a hurry.


The evaluation methodology used for federal reporting purposes in the five years, fiscal years 1966-1970, can be broken down into two phases. The first phase included evaluative reports for the years 1966 and 1967, which were done through a "compilation" methodology. They followed a route from local to state to federal reports; information was passed on and compiled from the lowest level to the highest. Additionally, some spot studies on special problem areas were funded through salary and expense funds, as there had been no funds specifically granted for evaluation purposes. An example of such a study was the one done by Boston College on non-public school involvement.11


The base of the compilation methodology was the state report, which included project and district data submitted from the local agency. In the early years of Title I reporting, the types of data thought to yield important information were attendance, dropout, and testing data, but problems were soon evident. Data collected from the states were not comparable, and not always available.12 Also, the relevance of such data was soon questioned, since the emphasis of Title I was clearly focused on the early elementary grades. Further, the variability of projects was so immense as to make comparisons on such data questionable.

EARLY REPORTS


The first evaluation report produced by the Office of Education was called "The States Report: The First Year of Title I, Elementary and Secondary Education Act of 1965."13 Like most other things during that first year, it was done speedily and, as a first try at a national evaluation report, was not severely faulted. It contained information on the number of children served, expenditure per pupil, types of instructional and service programs, and grade levels of participants.14 More interesting and promising were the narrative comments from the state reports that were included, for example:


The image of the "poverty-school culture" is changing from one of inferiority to one of progressing professionalism.15


Early informal observation and evaluation by educators most intimately associated with programs indicate that this impact will be unquestionable and dramatic.16


In a short period of time we have passed through the embryonic stage of a revolutionary education venture.17


The report itself admitted its shortcomings:


Because of time limitations, lack of established evaluation procedures and techniques, failure to use achievement measuring systems, and the lack of trained evaluators, the report lacks some of the specifics of a technical evaluation report.18


The report for fiscal year 1967, called "Title I/Year II,"19 was similar to the first in that it too used the compilation methodology and contained similar types of information plus early reports and some results from the contracted problem studies mentioned above. It appeared to be more of a publicity publication than an attempt at a technical report. A flashier publication and the only one in five years to contain photographs, it contained more theory than fact and did not show much improvement in data collection procedures. The following statement was typical:


Title I has been responsible for a better understanding of the handicaps that beset poor children as they strive to match their affluent classmates in the arena of academic achievement where school success or failure is usually measured.20


As a second-year report, not having the excuse of hurriedness and lack of experience of the first, it was received more critically by those hoping for an effective assessment. Dentler21 described the conclusion of the second-year report as "elegant, encouraging, yet empirically not precise." He chastised the researchers involved for faulty manipulation of the limited data available. Rather than supporting the success orientation of the document, his conclusion was that at best Title I programs seemed to be of indeterminate educational value. He concluded:


This review of the federal report is not intended to argue against future Title I allocations or expenditures. These funds are genuinely needed and they can be used to improve the quality of education. It is intended to advance a plea on behalf of effective evaluation research in keeping with the spirit and the letter of the 1965 Elementary and Secondary Education Act.


About the time the first two reports had been completed, a new evaluation outlook began to emerge. The Office of Education now had less need to be defensive in its reporting; public interest had lessened and opposition had moderated. The church/state and anti-federal forces had caused less trouble than had been expected. The concepts of Title I and an understanding of its basic premises had been generally well-disseminated and accepted, for the most part, among schoolmen. There was thus less need for publicity and more for hard data. After two years of compilation methodology using the local-state-federal route, Office of Education officials and Congressional legislators seemed convinced that hard data would not come from this model. The next stage was, therefore, intended to move the evaluation activity towards an evaluative-questioning mode, in order to provide data for decision making as opposed to publicity.

UNIFORM DATA


The second phase of federal Title I evaluation can be characterized by the collection of uniform programmatic data directly by the Office of Education through the use of sample surveys representative of the nation. It was initiated in 1968 with a survey of elementary schools with Title I programs. The scope of the second phase was soon expanded by the creation of the Joint Federal/State Task Force on Evaluation, sometimes called the Belmont System after the location where the first formalized federal-state agreement to provide evaluative data was made. This unique agreement between the Office of Education and the chief state school officers of twenty states expanded the collection of uniform data to include most Bureau of Elementary and Secondary Education programs and some of those of the Bureau of Adult, Vocational and Technical Education.


The redirected evaluative effort was supported by Public Law 90-247, the 1967 amendments to ESEA, which specifically instructed the Commissioner of Education to report annually on program effectiveness.22 The mandate was further strengthened by appropriations for this purpose, and it signaled a significant expansion of federal evaluation responsibilities for all Office of Education programs.


The design of the federal-state evaluation system was to provide information to serve all levels of education, federal, state, and local, and to provide both funding and program data for managerial and assessment purposes.23 According to Jaeger,24 the Joint Comprehensive Evaluation was "an attempt to make rational a complex set of decision processes." In producing a more efficient system, the single data base for several programs would reduce the collection of redundant data, and allow comparable cross-program information to be collected and total impact to be studied, more in line with the purposes of the ESEA legislation. To this end, activities were to be conducted at the Bureau level, which oversaw most federally sponsored elementary and secondary education programs, rather than with the divisions managing the individual programs.


The implications of a commitment to a uniform data evaluation system geared to produce data for decision making were great, and can be only briefly summarized here. State officials were to be collecting comparable and transferable data which, given the problems of National Assessment, was a progressive move. As previously mentioned, it came to include the first active partnership of federal and state governments in the evaluation of educational programs. It provided high-level support for the use of empirically based decision making processes, for both managerial and policy decisions—not the usual procedures in the intricacies and politics of bureaucratic administration. Moreover, the system itself offered a unique methodology which was untested at this level of magnitude. The planned instrumentation was original and often most creative. The system as proposed should receive attention beyond its use for any one set of programs, as a major step towards creating a federal program evaluation methodology. (The lack of federal documentation of this stage is most unfortunate.)


As the system was actually implemented, the one major characteristic that made the next three annual evaluations noticeably different from those of the first two years was that reporting was now student-centered rather than project or district-centered. The major data source used was the PCI, or Pupil-Centered Instrument. PCI data was collected from a national sample of school districts, and within this sample from subsamples of schools, teachers, and pupils in grades two, four, and six; these data were weighted to represent national totals. (This was supplemented with the CPIR, or Consolidated Program Information Report, replacing the variety of statistical reports required for each of the titles incorporated in the Belmont agreement.) In the reports of fiscal years 196825 and 1969,26 the sample consisted of Title I schools only, and participating pupils were identified by participation in "academic compensatory education programs."
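In standard survey-sampling terms, "weighted to represent national totals" means that each sampled pupil or school carries a weight equal to the inverse of its probability of selection through the successive sampling stages. As a minimal illustration only (the PCI's actual weighting procedures are not documented in this article), the usual estimator of a national total has the form

\[
\hat{Y} \;=\; \sum_{i \in s} w_i \, y_i, \qquad w_i = \frac{1}{\pi_i},
\]

where \(y_i\) is the quantity reported for sampled unit \(i\), \(\pi_i\) is that unit's overall selection probability across the district, school, and pupil stages, and the sum runs over the sample \(s\).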


This information presented the first national data to describe students in compensatory programs. The fiscal 1968 report, the first of the second phase, provided reliable estimates of the number and kind of deprived children in the nation. The report gave socioeconomic descriptions of pupils in Title I schools, identified the types of pupils participating in compensatory education programs, and described those pupils.


However, it was not until fiscal 1970,27 that a sample representative of all schools (Title I and non-Title I) and pupils in second, fourth, and sixth grades was utilized. Also, given the addition of source-of-funding data, 1970 was the first data analysis to specifically identify Title I pupils. The second distinctive feature of the 1970 analysis was that it was not concerned with Title I only but with ten Bureau of Elementary and Secondary Education and Vocational Education programs affecting elementary and secondary education.


This system did not overcome all the problems of the previous years, however. For one thing, evaluation was still very much underfunded. In the first years of Title I, the Office of Education did not have much more money to evaluate all of its programs than the Office of Economic Opportunity had to evaluate Head Start alone. Title I funds for evaluation were approximately .05 percent of the total Title I funds in 1969.28 Therefore, proportionally, the amount of funds available was minimal. Also, Title I still lacked specific criteria upon which it could be evaluated, in that program determinations were made locally and varied tremendously.


In assessing the fiscal 1968, 1969, and 1970 reports, it can be seen that there was the beginning of a sophisticated methodology. Despite the progress that had been made from the compilation stage to the uniform data phase, there were two major weaknesses in the implementation of the system. The first was that program participation and its effectiveness were still not being measured. Parts of the proposed evaluation subsystem29 had not been implemented, including two major data collection instruments. These were the project descriptor, intended to collect information that would allow programs to be described and therefore connected with pupils and assessment, and, second, uniform criterion measures. Without these two, accurate and detailed descriptions of programs were not possible, nor could they be related to pupil achievement or other outcome variables. (In the fiscal 1970 analysis, achievement test data on over 90 percent of the pupils were not available or were not reported in usable form. A study is currently underway to help alleviate this problem by equating tests from different publishers.)30
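The equating study mentioned in the parenthesis addresses a standard psychometric problem: raw scores from different publishers' tests are not directly comparable. The study's own procedures are not described here, but as a minimal sketch, the simplest (linear) form of equating places a score \(x\) from test \(X\) on the scale of test \(Y\) by matching standardized scores:

\[
y(x) \;=\; \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X),
\]

where the means and standard deviations are taken from a common (anchor) population, so that a pupil's relative standing is preserved across instruments.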


The second major weakness of the evaluation system as it was used was that feedback was not being supplied at all levels as originally envisioned. The federal/state agreement was supposed to provide direct feedback to the Office of Education and to state and local administrators for program development and management. It was intended to reduce state and local problems with evaluation. However, it could easily be seen that this was not happening. The fiscal 1968 report was technical and its implications difficult to understand. It was not written as guidance for state or local personnel or for dissemination purposes, and was not released in time to provide the useful material that it was expected to offer. Though the 1968 report was based on data collection planned prior to the Belmont agreement, the 1969 report continued to suffer from the same problems, perhaps to a greater extent. The 1969 report still has not been disseminated.


Within the Office of Education, evaluation had been removed from the locus of program administration. Compounding the problem, different stages of the evaluation process were handled by different people, leading one to question its consistency and usefulness. There has been little indication that the evaluative information produced flowed back to those whose questions it was designed to answer.


At the local level, studies provided evidence that many local administrators, not being supplied by this source of data, were also not using the evaluation data which they were supposed to be producing for themselves.31 Many states also seemed to be less demanding. A state official was quoted as saying that since the U.S. Office of Education was no longer interested in local evaluation, he saw no need for the state to be.32 The state departments were well aware that the state reports were no longer being used as in previous years. The self-improvement aspects of evaluation, so strong in the early years of Title I, were being neglected. The Office of Education's push for local evaluation and attempts to provide technical assistance no longer seemed to be priorities. Though this author does not agree, some critics of federal evaluation had gone so far as to suggest that local expenditure for evaluation was unnecessary and wasteful.33


The lesson of the uniform data collection phase seemed to be that no one system could serve all evaluation needs at all levels. Evaluations from the third through fifth years, based upon nationally representative sample survey data, suggested a prototype which might prove useful for high-level federal policy decisions. There have already been indications that this was happening, but such reports have shown few prospects for administrative use and program monitoring, as currently employed.


In retrospect, the first five years of Title I federal evaluation appeared to lack consistent goals, comprehensive coverage, and a sufficient commitment of effort. This could be attributed in part to a general lack of knowledge about program evaluation—what it meant, whom it served, and how it was to be done. The response of the Office of Education to the first legal requirements to evaluate program effectiveness can be viewed as an attempt to grapple with these questions, and as such, was to influence greatly the developing field of program evaluation, which in 1965 was barely in its infancy.


The first federal administrators of Title I were caught in a double bind of no planning time and no previous experience upon which to base the execution of the evaluative mandate. They chose an expedient and seemingly logical procedure, that of compiling Title I data collected through the existing channels of local and state educational agencies. This was a multi-purpose approach, intended both to satisfy the law-given local and state evaluative requirements and to provide the main source of data for the Commissioner's annual report to Congress.


At the same time, the Office attempted to counter the threat perceived by many of having to report progress in terms of specific program objectives, by defining and promoting evaluation as a means of data-based program improvement. Both state and local evaluations were said to serve internal as well as external data needs, in that they provided corrective feedback for local program development and state program monitoring, as well as for federal uses.


The federal evaluation reports of Title I for the first two years of operation were widely disseminated. They consisted of promising narrative commentary but provided little useful data for any of the Title I administrative levels. Perhaps the major accomplishments of this period were that the expected anti-federal forces did not materialize and that evaluation became a familiar term for schoolmen and researchers alike. Despite the overburdening paperwork and general difficulties of launching a new program, a cooperative atmosphere prevailed for the most part among those responsible for Title I at all levels, easing the way for additional evaluative demands.

SUMMARY


The federal evaluation reports of Title I's third through fifth years followed a shift of emphasis to the collection of uniform data by a sample survey. This approach provided the framework for a more sophisticated methodology and an improved data base. The cooperation of the chief state school officers, represented by the Belmont agreement, assured the political feasibility of this approach. The first evaluative report based upon a survey and its analysis (fiscal 1968) was written mainly for a Congressional audience and gave indications of its potential as an evaluative device.


That potential, however, has largely been unrealized. Essential parts of the intended evaluative system planned for following years were not developed. Although the total evaluation and planning expenditures by the Office of Education increased, those connected with the implementation of a Title I evaluative system did not seem to receive the needed support. Congress, with whom the evaluative requirements originated, seemed to have lost either interest, hope, or both in the results of Title I evaluation, as evidenced by the lack of reaction to the 1968 report and a willingness to accept the fact that the Office of Education has not turned in an annual evaluative report for the two subsequent years.


The Office of Education also seemed to have misplaced its evaluative convictions. For the years under discussion after the survey was initiated, the Office showed little interest in local or state evaluative efforts, offered no leadership in improving such activities, and made no demands that evaluative results be used as feedback to improve programs, as was once so strongly advocated. This neglect was compounded by the lack of meaningful dissemination of the yearly survey results, which in turn served to further disillusion the cooperating state and local agencies.


The advent of the National Institute of Education (NIE) could provide an appropriate placement for the ever-shifting responsibility for Title I evaluation and an opportunity for a concentrated pool of research and evaluation talent to rethink the purposes, procedures, and priorities of federal program evaluation. The structure of NIE and its intended mechanisms for utilization of university-based researchers could facilitate the interaction of those in the field, with Title I experience and growing evaluation expertise, with program administrators. Title I has given many researchers their first contact with practical problems on the applied end of the research continuum. Most who would consider themselves part of the emerging profession of evaluation have worked on some level of Title I planning or evaluation. That many evaluators have not been involved to a greater extent in the past could be considered a neglect of professional responsibility coupled with the Office of Education's inability to make participation a practical prospect for the non-corporate bidder.


Reviewing what has been done in Title I evaluation (work which has not been adequately documented and is therefore not easily subject to objective analysis) raises a multitude of questions that the Title I experience presents or serves to illustrate. The technical problems are infinite, but the basic questions are less apparent, easier to ignore, and more difficult to deal with.


Should, for example, the goal of evaluation be program improvement? Campbell34 argues that if political systems are to attempt an experimental approach to social reform, in that the effectiveness of the program cannot be previously proven, then such programs must be allowed not only to succeed in measured amounts but also to fail. Should evaluation be used to determine the fate of individual projects and total programs, and if so, how? If a program is not known a priori to succeed, then alternative programs increase the probability of success and evaluation must operate comparatively. Can evaluation then take on the next step of causal inference? In pursuit of such questions, it is obvious that evaluators must confront not only the intricacies of research design but also the necessity of interface with political reality.


It is the opinion of this author that much of the active interest and progress in evaluation in the last seven years was generated, either directly or indirectly, by the seemingly simplistic Congressional evaluation requirements of Title I. Hopefully, the progress of Title I federal program evaluation and development of an evaluation profession will continue to proceed on a mutually supportive basis.35




1 F. M. Wirt and M. W. Kirst. The Political Web of American Schools. Boston: Little, Brown, 1972.

2 K. A. Hecht, "Congress and Research," unpublished paper, 1966.

3 Hearing before the Subcommittee on Education of the United States Senate, Eighty-Ninth Congress, First Session on S.370, "A Bill to Strengthen and Improve Educational Opportunities in the Nation's Elementary and Secondary Schools." Washington, D.C.: U.S. Government Printing Office, 1965.

4 S. K. Bailey and E. K. Mosher. ESEA—The Office of Education Administers a Law. Syracuse, N.Y.: Syracuse University Press, 1968.

5 United States Public Law 89-10, Eighty-Ninth Congress, H. R. 2362, Section 205, April 11, 1965.

6 Ibid., Section 206.

7 Bailey and Mosher, op. cit.

8 J. T. Murphy, "Title I of ESEA: The Politics of Implementing Federal Education Reform," Harvard Educational Review, Vol. 41, No. 1, 1971, p. 41.

9 E. B. Drew, "Education's Billion-Dollar Baby," The Atlantic Monthly, July 1966, pp. 37-43.

10 "A Schoolman's Guide to Federal Aid—Part III," reprinted from School Management, Vol. 10, No. 4, May 1966, p. 10.

11 V. C. Nuccio and J. J. Walsh, "A National Level Evaluation Study of the Impact of Title I of ESEA of 1965 on the Participation of Non-Public School Children: Phase I Final Report," United States Senate Subcommittee on Education, Notes and Working Papers Concerning the Administration of Programs. Washington, D.C.: U.S. Government Printing Office, December 1967.

12 J. S. Wholey, J. W. Scanlon, H. G. Duffy, J. S. Fukumoto, and L. M. Vogt. Federal Evaluation Policy. Washington, D.C.: The Urban Institute, 1970.

13 U.S. Department of Health, Education, and Welfare, Office of Education. The States Report: The First Year of Title I, Elementary and Secondary Education Act of 1965. Washington, D.C.: U.S. Government Printing Office, 1967.

14 Bailey and Mosher, op. cit.

15 The States Report, op. cit., p. 7.

16 Ibid., p. 5.

17 Ibid., p. 23.

18 Ibid., p. vi.

19 U.S. Department of Health, Education, and Welfare, Office of Education. Title I / Year II: The Second Annual Report of Title I of the Elementary and Secondary Education Act of 1965, School Year 1966-67. Washington, D.C.: U.S. Government Printing Office, 1968.

20 Ibid., p. 16.

21 R. A. Dentler, "Urban Eyewash: A Review of 'Title I/Year II'," The Urban Review, Vol. 3, No. 4, 1969, p. 33.

22 U.S. Department of Health, Education, and Welfare, Office of Education. Education of the Disadvantaged: An Evaluative Report of Title I, Elementary and Secondary Education Act of 1965, Fiscal Year 1968. Washington, D.C.: U.S. Government Printing Office, 1970.

23 Scientific Educational Systems, Inc. Joint Federal/State Task Force on Evaluation—Comprehensive Evaluation System: Current Status and Development Requirements (draft). Submitted to the U.S. Office of Education, Department of Health, Education, and Welfare, Washington, D.C., 1970.

24 R. M. Jaeger, “Evaluation of National Educational Programs: The Goals and the Instruments,” a paper presented at the Annual Meeting of the American Educational Research Association, Minneapolis, 1970, p. 1.

25 Education of the Disadvantaged, op. cit.

26 G. V. Glass, et al. Data Analysis of the 1968-69 Survey of Compensatory Education (Title I), Final Report. Boulder, Colorado: University of Colorado, 1970.

27 J. C. Fortune, et al. Data Analysis of the 1969-70 Survey of Compensatory Education (Title I), Technical Report. Amherst, Mass.: University of Massachusetts, 1972.

28 Wholey et al., op. cit.

29 Jaeger, op. cit.

30 U.S. Department of Health, Education, and Welfare, Office of Education. "A National Anchor Test Equating Study Using Reading Comprehension and Vocabulary Subtests of the SAT, MAT, ITBS, CAT, SRA, and STEP," REP Number 71-29 (awarded to Educational Testing Service; study in progress).

31 D. C. Jordan and K. A. Hecht. Compensatory Education in Massachusetts: An Evaluation with Recommendations. Amherst, Mass.: University of Massachusetts, 1970.

32 Wirt and Kirst, op. cit.

33 Wholey et al., op. cit.

34 D. T. Campbell, "Reforms as Experiments," American Psychologist, Vol. 24, No. 4, 1969, pp. 409-29.

35 The following report was not available for consideration at the time this article was developed: M. J. Wargo, et al. ESEA Title I: A Reanalysis and Synthesis of Evaluation Data from Fiscal Year 1965 Through 1970, Final Report. Palo Alto, California: American Institutes for Research, 1972.

Cite This Article as: Teachers College Record Volume 75 Number 1, 1973, p. 67-78