
Close Up and Personal: The Effect of a Research Relationship on an Educational Program Evaluation


by Robin A. Mello - 2005

This article addresses the construction of "critical friendships" within the practice of one particular program evaluation. It focuses on the evolution of relationships developed during a 2-year program evaluation study that examined a collaborative educational project. Findings show that a caring interest and mentoring rapport developed between program constituents and the evaluator (the author of the study). Further, this study suggests that these relationships were essential to how formative and summative feedback was used. The article argues that the interplay between evaluator and constituents is best practiced through what House and Howe call a methodological advocacy stance. Finally, it encourages further discourse and investigation into the ways that program evaluation is carried out in order to more thoroughly understand how relationships influence evaluation methodology and practice.

INTRODUCTION


Nearly every sizable educational initiative requires an accompanying program evaluation and report that is added to archives nationwide. These evaluations stand as evidence of our collective belief that the practice of assessment results in greater accountability and in turn leads to higher quality programming. The bulk of the research literature, however, does not indicate that these investigations automatically create better educational systems, nor are there many direct correlations linking program evaluation outcomes to program change. Shadish, Cook, and Leviton (1991), for example, found that the influence of program evaluation on project activities is rare. They also noted that change at the program level, despite rigorous evaluation, is usually exceedingly slow. McNabb, Hawkes, and Rouk (1999) concurred, calling for expanded use of evaluation findings in program implementation. Chelimsky (1998) went further by proposing that evaluators are at fault and must begin advocating for better use of findings and for stronger program implementation.


This article attempts to address these concerns by examining, from the point of view of the program evaluator, an important yet underexplored component of the evaluation process, namely, the research relationship’s development and its subsequent bearing on the use of findings and ensuing project development. The following discussion suggests that the field of educational program evaluation will benefit from an in-depth inquiry into how specific interpersonal relationships, forged as part of evaluation methodology, might influence the future sustainability of projects and programs in schools and other learning institutions. It does so by focusing on the intervention of interest discussed by Kisker and Brown (1997), along with the concept of evaluation advocacy (Chelimsky, 1998). The question is addressed particularly through an examination of the evolution of my own experience as program evaluator during a 2-year investigation designed to examine an educational collaborative program sponsored by the University of Wisconsin System.




LITERATURE REVIEW


Qualitative evaluation practice is aligned with the fields of ethnography and qualitative methodology and commonly uses a series of practitioner-based, interactive, and process-oriented approaches such as grounded theory (Strauss & Corbin, 1997), democratic evaluation (House & Howe, 2000), constituent-based inquiry (Stake, 1967), and utilization-focused evaluation practice (Patton, 2001). At the programmatic level, qualitative inquiry’s impact has been most notably examined in the areas of public health (Strauss & Corbin), arts-based education (Eisner, 1991, 1998), authentic assessment implementation (Goodwin, 1997), and teacher-oriented action research (Strauss, 1993). However, most of the current literature in the field focuses on the evolution of programs and their ability to meet potential outcomes. What is missing is a sustained dialogue pertaining to utilization practices, that is, the working details of evaluation activities as they are experienced (Patton, 2001). This includes how evaluation procedure might affect project outcomes and how evaluation methodology may influence findings and assessments.


Many practitioners engage in discussions regarding their impressions and insights concerning the practice of evaluation, but this discourse remains almost wholly within the anecdotal sphere of coffee breaks, work groups, and chats at professional conferences and is therefore largely underpublished. Connections between the evaluator’s personal perspective and evaluation goals are rarely made, with the exceptions of Chelimsky (1998), Ducan and Stassio (2001), and Perez et al. (1997). Chelimsky notably discusses the effect of evaluator attitude on the political landscape as a whole and calls for less obfuscated characterizations of the evaluator’s role.


The majority of current publications and research focus on the efficacy of specific program interventions (Kisker & Brown, 1997; Strauss & Corbin, 1997) or explore the connections between theory and practice (Chelimsky, 1998; House & Howe, 2000). Investigations concentrating on actual methods used remain thin. An even smaller portion of the literature examines the impact of human interactions on an evaluation. And there is no significant analysis regarding "the research relationship" (Maxwell, 1996) or "critical friendship" (Schon, 1983), even though both Schon and Maxwell see the interaction between program stakeholders (this includes the evaluators) as the primary source for creating a viable and rich evaluation experience. They claim that nurturing the rapport between evaluators and program constituents results in valid, reflective, practicable, and comprehensive analysis. The person-to-person interaction in qualitative practice is of seminal importance.


This position is especially important in the domain of educational programming, in which evaluation methodology can be seen as similar and connected to pedagogy. The epistemological relationship between mentor and mentee, or teacher and student, has long been understood as a primary condition for inquiry, learning, and development. Dewey (1947), for example, felt that the teacher is a primary conductor of the educative experience through interactive and inclusive explorations with learners. Palmer (1998) saw teaching as a philosophical relationship that is most meaningful when it is iterative and empathic. Leamnson (1999) added that through interpersonal exchanges with learners, the teacher participates in a lifelong, interactive process of meaning making. These connections, created in educational venues between what Hawkins (1975) called "I, thou, and it," are the essence of learning.


Program evaluation as a field, in the opinion of this evaluator-author, is similar and must begin to focus more particularly on the connections between evaluators, program ecology, methodology/pedagogy, and stakeholders (the "I, thou, it" of evaluation practice), especially if the programs being examined are educational in nature. It is necessary to acknowledge the ways that person-to-person interactions affect the evaluation itself. This is an essential position because it strikes at the core of a study’s validity: Standard qualitative practice requires that biases and assumptions be discussed and examined recursively throughout the entire process (Maxwell, 1996).


To ensure applicability and relevance, then, educational program evaluation must be integrated within the socially constructed context as part of the teaching and learning relationship. House and Howe (1998) perceived this type of "I-thou-it" interface as inevitable and necessary to educational program evaluation. They support efforts to more rigorously and explicitly define its nature and note, "Analysis and interpretation requires many judgments and decisions on the part of the evaluators . . . the evaluators unavoidably become heavily implicated in the findings . . . their intellectual fingerprints are all over the place" (p. 2). The concept of an intellectual fingerprint is akin to what many in qualitative research have begun to acknowledge and debate: the interpersonal autobiographical experience and its place in research analysis and narrative inquiry (Clandinin & Connelly, 2000; Denzin, 1997; Gubrium & Holstein, 1999; Lawrence-Lightfoot & Davis, 1997; Patton, 1999).


This article therefore attempts to address the lack of investigations in this area by focusing on the practice of program evaluation, specifically, this evaluator-author’s experience during a 2-year program evaluation, and by examining the subsequent effect of interpersonal relations on evaluation implementation and outcomes. Further, the article is predicated on the belief that explorations pertaining to the interplay among researchers, program constituents, and educational institutions provide a deeper understanding of how best to implement evaluation in order to sustain teaching and learning.




SCOPE AND CONTEXT


In 2000, a collaborative project linking faculty and students at five separate campuses across the State of Wisconsin was designed to enhance and support the professional education and licensure of school librarians. This initiative was inaugurated through the creation of the University of Wisconsin System-School Library Education Consortium (UWS-SLEC). At this time, I was invited to evaluate the program. Both the evaluation and the program were underwritten by the project’s Pre-Kindergarten to Undergraduate Level (PK-16) Initiative Grant from the University of Wisconsin System (Schroeder et al., 2001).



PROGRAM EVALUATION


UWS-SLEC combines the expertise of experienced faculty, who work together within a distance-learning program and function as a pseudo-virtual department while retaining autonomy and independence as members of their respective on-campus departments and programs. This collaborative is tied together through an interinstitutional agreement that mediates individual campuses’ perspectives and needs. UWS-SLEC is designed to serve approximately 100 students, offering them a hybrid (Garnham & Kaleta, 2002) distance-learning program in preparation for an initial or permanent Wisconsin School Library Media Specialist license.


UWS-SLEC’s program evaluation was planned and conducted to investigate the evolution and the effectiveness of the project over its 2-year preliminary grant period. Grounded in questions proposed by the program and dependent on project activities and constituent perspectives, the evaluation was naturalistic and qualitative in approach because this methodological perspective was felt to best match the expertise of the project director, faculty, and evaluator, as well as the conditions of the program’s educational and narrative contexts. Documentation and data collection were ongoing and resulted in a series of iterative data analyses, formative reports, and two year-end summative studies (Mello, 2001, 2002a).


Methodology was primarily qualitative in nature but used some quantitative measures for purposes of auxiliary corroboration. Its design was closely aligned with the perspective of Stake (1967), who suggested that program evaluation be constructed using a countenance model, one in which the researcher and program constituents interact in the natural environment of project activities and in which the evaluator acts as a participant-observer. It also incorporated Patton’s (2001) utilization approach by following the program over time through its iterations and evolutionary processes. Both perspectives were related to how the project functioned, that is, the program’s interactivity, iterative decision-making processes, and practice-based foci.


The evaluation focused on what was most useful and important to the program by using UWS-SLEC goals and outcomes as a framework. It addressed the following areas of inquiry:


1) Students’ perceptions of, and satisfaction with, course work and other program components


2) The program’s effectiveness in developing professional competencies


3) Overall level of program quality, including course delivery and instruction


4) Models that reflect the program’s evolution during its first 2 years


The program evaluation design also included (a) inclusion of stakeholders’ perspectives, questions, and activities throughout the entire evaluation process; (b) use of multiple evaluation tools; (c) triangulation of data obtained from multiple sources; (d) inclusion of multiple perspectives so that program questions and lines of inquiry could be comprehensively examined; and (e) active involvement of the evaluator as a participant-observer who also provides frequent formative and summative evaluation feedback over the life of the program (see Table 1).



Table 1. Program Evaluation Matrix/Calendar: UWS-SLEC


 

Evaluation activities (columns) by constituent group (rows):

Persons                        Observations   Phone/In-Person Interviews   Meetings & Project Activities   Artifacts   Surveys
Project Staff                  ✓              ✓                            ✓                               ✓           ✓
Teachers/Librarians            ✓              ✓                            ✓                               ✓           ✓
Participants/Students          ✓              ✓                            ✓                                           ✓
Project Director/Evaluator                    ✓                            ✓
Others                         ✓              ✓                            ✓                               ✓           ✓



DATA SOURCES


Findings were based on data from multiple sources gathered during the evaluation. These included program documents, e-mails and online conversations, questionnaires and surveys, participant-observation of class and faculty meetings, and in-depth interviews. Principal constituent sources of data were students (prospective, inactive, graduated, and current), faculty, administrators, library science professionals, university representatives, program assistants, and technology support staff. Validity was ensured through data triangulation, that is, the collection and examination of multiple sources of data from a wide variety of constituents, with an emphasis on the impressions and reactions of participants, which then informed the program and suggested further lines of inquiry.


Data used to inform this particular discussion are extracted from the data set collected during the program evaluation. However, they are used here to examine a particular phenomenon that occurred during the assessment years, that is, the evolution of the relationship(s) that developed between project constituents and myself (the principal evaluator). Therefore, the data set used and referred to in this article primarily includes interview conversations, e-surveys, communications and other procedural correspondence, and observations of project activities only. Data quoted here are representative of the entire data set and are used as illustrative examples of the whole collection, which numbers in the thousands of pages per program year.


Ecological perspective


In its naturalistic context, the evaluation was grounded in an assumption, drawn from Bronfenbrenner’s (1979) ecological developmental theory, that growth and evolution in human systems (such as the UWS-SLEC project), and in their practices, implementations, and relationships, influence at a foundational level the larger systems within which the individual participant or program functions. Ecologically, UWS-SLEC operates at a macrosystemic level within a national debate that is currently centered on the disappearance of school libraries and the critical need for school library personnel.


Data indicate that nationally, there is one school librarian for every 950 children (Everhart, 2000) and that this situation continues to worsen. Wisconsin ranks 12th in the nation, with approximately 650 students for every qualified part- or full-time school library media specialist (Everhart). A total of 92% of schools in Wisconsin report having a center dedicated solely to library/media programs and collections (National Center for Educational Statistics, 2001), but only 85% of these schools report retaining a licensed school library media specialist to staff their facility. Twelve percent of these schools have access to a part-time licensed school library media specialist for only a portion of the school year or day.1 In effect, this means that 1 of every 6 schools in Wisconsin, or 27% of all classrooms, has restricted access or no access to library-media instruction on a full-time basis. The case is worse at the elementary level.2


Data suggest that, during the years since the UWS-SLEC evaluation was executed, the situation has worsened. School library budgets continue to be cut or deleted altogether. Despite the No Child Left Behind Act of 2001, decisions to eliminate school library programs and staff are often made in response to pressure from budget shortfalls, along with the scarcity of qualified applicants. Data also show that because of the decline in library staffing, school library media specialists, in addition to their regular responsibilities, are required to act as substitute teachers, paraprofessionals in classrooms, technology coordinators, maintenance staff, and teachers of computer skills, thus reducing their ability to provide quality library instruction to school and community members (Lerner, 1998). As a result, some administrators within the State of Wisconsin system assert that declining literacy rates and media access issues are partly due to problems retaining qualified school librarians.


Another factor in the state of school libraries and how they function is the median age of a school library media specialist in Wisconsin, which is currently older than that of the average classroom teacher; just over 44% of librarians are at or above retirement age. Data suggest that school librarian retirements will continue to increase over the next 3 to 5 years, leveling off in the near future to an approximate 7%–10% yearly rate (Lauritzen, 1998; National Center for Educational Statistics, 2001). Because of the record number of retirees in the school library media specialist profession, new Wisconsin State Emergency School Library Media Licenses have increased 66% within the past 4 years, and schools are desperately seeking qualified personnel to staff the remaining library programs (Wisconsin Department of Public Instruction, 2001).


The Wisconsin Department of Public Instruction (2001) reported a crisis in the profession, showing an urgent need for full-time school library media specialists and an increase in the number of accessible and supported school libraries, especially in urban and rural areas. In light of these data and other pressures within the University of Wisconsin System itself, the UWS-SLEC was created. The underlying principle for the program’s existence is to support school library-mediaship through exemplary educational programming, ensuring that the school library and school library media specialist remain a core value-added component in 21st-century schools.


Establishing communication


At the beginning of the 2-year evaluation cycle, the project director and I (the evaluator) met and established lines of inquiry and parameters for the study. Next, in keeping with methods learned from Dr. George Hein (1990, 1998) and Sue Cohen (Cohen & Hickman, 1998), directors of the Program Evaluation and Research Group at Lesley University, where I had originally trained, an evaluation matrix was developed outlining data sources, constituency groups, timelines, and research questions. This in turn was sent back to the project director for approval. Data collection then began, and a pattern of feedback processes ultimately evolved.


Every 2 to 3 weeks, the project director and I would contact each other by phone or through face-to-face interactions. Informal discussions centered on how things were going (in general terms), insights, and emergent findings. The project director often asked for information on specific problems or events. Data show3 that the tone of these interactions was personable and supportive and that the interactions developed positively over time. They ultimately proved useful in the formation and evolution of the program.


I am so amazed at how really useful, I mean really useful, this evaluation is. I wish more people realized that this is the way to do things: when you listen, communicate, and talk about the [program] with an evaluator who is paying attention, then I begin to pay attention too. And it’s worth every penny; we have gotten so much from this.


(faculty interview)4



We have developed quite a relationship! It’s made the program better but it’s also made our work better. I don’t know what we would have done without it. I think we’ve both benefited. We have. It was brilliant to have insisted we evaluate this way now I think back, brilliant [laugh].


(faculty interview)


It quickly became apparent that these interactions constituted a key factor within the day-to-day workings of the evaluation. Therefore, two new data sets were added to evaluation plans: field notes/records of the interchanges and interactions between researcher and project director, and reflective protocols that were designed to encourage reflections pertaining to the ongoing evolution of the research relationship(s) formed during evaluation activities.


Evaluator/constituent contact


UWS-SLEC sponsors two programwide faculty retreats per grant year, one in mid-October and the other in early February, at which formative evaluation findings are reported. The interactions between the principal faculty members and me during these twice-yearly retreats best illustrate the evolution of the role that the evaluator played over the duration of the evaluation. At these meetings, I presented emergent findings, collected field notes, and kept an ongoing observational record of meeting activities. As the evaluator, I was also called on to give input and address agenda items under discussion and was generally incorporated into the program’s milieu. During meal and break times, for example, a lively and personable social interaction ensued. Anecdotes pertaining to personal life and professional interests were consistently exchanged between UWS-SLEC faculty members and myself.


Faculty 1: Ok, I just have to ask: did you wear that on purpose? It’s so unique. Is it the coat you wear when you evaluate? [Laughter] Like those hats you talk about.


Evaluator: No! [Laugh] well . . . I do wear my invisible evaluator hat today but I’ve just found this coat at the thrift store and all I did was change the buttons.


Faculty 2: I used to love doing things when I actually had time [laugh].


Faculty 3: Did you see the new Threads exhibit at the Arboretum? [Turning to evaluator] You’d love that if you like fun clothes and clothing and design.


Evaluator: I’ll check it out. Thanks.


(participant observation)



Faculty: I’m worried about Zoë, her mother is in hospital and she’s not . .


Evaluator: Yes. I think I know . . .


Faculty: Well she’s had the muck up with her administration and now this.


Evaluator: Now this takes her away from the classroom more.


Faculty: Exactly!


(faculty interview)


Additionally, based on the relationship that began to establish itself during these meetings, faculty members would contact the evaluator to check in and discuss their impressions, questions, and concerns. This happened spontaneously, and interactions grew in number and duration over the course of the evaluation process.


By the end of the 2-year evaluation cycle, faculty members not only discussed their research agendas, political views, and professional perspectives but were also confiding personal confidences, including information pertaining to health, career, and relationships.


I wanted to talk to you about the course I taught this summer because I didn’t know what to do about [student’s name] and her questions over the work. I was wondering if you had any ideas about that . . . of course I know, I do, what I want to change. I was thinking of getting the students involved in a much more ambitious project because I think the greatest mistake I made was not challenging them enough, what do you think?


(faculty interview)



We’re really interested in the idea of professional identity and what it means to the program, I mean, what it will be for the students when they work with us, and I’m very concerned about how they will interact and react to the online environment. Don’t tell anyone, but I do think that the online courses might not be working out, and then again, they might work out very well. I am anxious about it and I just wanted to talk to someone. You seemed the perfect person to talk to─ you understand. I feel better just talking about it. Thanks.


(faculty interview)


An example of the evolution within the research relationship was also observed during site visits. The ways in which I was treated as I traveled around the state observing the status of the program at differing campuses were indicative of the research relationship’s evolution. For example, at the beginning of my tenure, site visits were viewed with some trepidation by most faculty and administration. However, by the end of the first year, attitudes had shifted significantly. Advice on where to find good restaurants and hotel rooms and invitations to attend local cultural events and tour the surrounding areas became increasingly common. Although most of these requests had to be declined on ethical grounds, data indicate that they sprang from an interest in the evaluator’s welfare.


I called to tell you that there is a knitting store in the area so when you come, since you like to knit─ I remember you said so at the last meeting─ I thought I’d email you these directions just in case you want to look at it in your free time.


(e-mail communication)



Hey! When you get here you’ve got to check out the Eating Emporium. Great food! Maybe we can meet there for a sandwich too.


(e-mail communication)


As the evaluation progressed and the program evolved, it became apparent that suggestions provided through evaluation feedback were taken more seriously and were acted on more rapidly than were many ideas and changes proposed by nonparticipating faculty and administrators. Although not all suggestions and advisements were used, a significant number of the changes proposed and supported by evaluation activities and data were implemented.


I really think that one of the best parts of this whole grant is the way we work with the evaluator. It was a lot of money at first, I thought, to spend on an evaluation, but it’s worth every penny. It’s meaningful and useful.


(faculty interview)



I love these interviews when you do them because I get so much out of them. It’s the only time I get to think and stop and reflect on the program and what we and I are doing in it. It makes me understand what’s going on better and I think it’s influenced how [my institution] fits into what is going on at UWS-SLEC.


(faculty interview)


Additionally, both the evaluator and project director came to rely on their relationship as a way of connecting and supporting the program and as a form of analysis and meaning making.




FINDINGS



DEVELOPMENT OF THE RESEARCH RELATIONSHIP


The UWS-SLEC evaluation was predicated on the belief that the program constituents are a highly qualified, dedicated group of professionals, that the program is an important undertaking, and that UWS-SLEC students are capable preprofessionals. At the beginning of the evaluation, the intended goal was to establish a "critical friendship" (Schon, 1983) that would both support the program in its work and be valuable in a scholarly and academic context. Data show, however, that as the relationship developed, a more connected and interactive understanding began to evolve, one that included trust, advocacy, and caring between me and the program constituents. At the end of the project, most UWS-SLEC faculty members and students described the research relationship as trustworthy, vitally important, meaningful, helpful, and unique. Other key findings important to this discussion suggest that:


1. The research relationship became a key component of the evaluation process.


2. Relationships grew over time through ongoing dialogue and communication.


3. Rapport was founded on the concept of the evaluator as a "critical friend" (Schon, 1983) and eventually evolved into that of an empathic advocate.


Access


At the beginning of the evaluation, many core faculty were suspicious or concerned about the prospect of "being evaluated by someone who isn’t a librarian." Also, because all core faculty were involved with their own research agendas and interests, the idea of a stranger and an academic collecting data within their area of expertise was seen by a few as an encroachment. For example, data show that at the beginning of the evaluation, issues of academic freedom and privacy were brought up as objections to the presence of the evaluator or the process of evaluation activities. Early on in the process, I was confronted with many questions about the evaluation plan and methodology.


As the evaluation moved forward, however, data indicated that negative attitudes declined and that there was a general acceptance for and positive interest in evaluation findings.


I am very interested in what you will be saying in your next evaluation. I am going to read it through. When are you going to mail it out?


(faculty interview)


Additionally, the outsider/insider role I played proved beneficial. Some faculty members and many students became increasingly pleased to find that, through participation in the evaluation, I was becoming more informed and cognizant of issues regarding school librarianship. There was a general sense that UWS-SLEC members were teaching me about their world. They were proud of their accomplishments and dedicated to their profession. Therefore, by the second and final year of the evaluation, nearly all objections and original concerns had been put to rest.


Data indicate that positive dispositions toward the evaluation were manifested most clearly when individuals were contacted for interviews. For example, with few exceptions, requests for interview and observation appointments were returned in a timely and positive manner.


Remember at the beginning? When you and I talked I got so worried. I wondered where this was going. Now this kind of talking and reporting back is interesting. You’ve listened to what we [at this individual university department] say and now I see that you are fair about what we need. We’re different here and some don’t see it that way, but we’ve got very different pressures. It’s good to talk about this.


(faculty interview)


Also, participants were consistently forthright and clear as interviews were being conducted, expressing both positive and negative feedback freely and honestly. Constituents frequently inquired after the well-being of the evaluator and openly discussed their personal lives as well. In addition, interview conversations would often begin or end with lively exchanges of ideas, jokes, stories, questions, and discussions surrounding current research in the areas of online learning and school librarianship; the trust and integrity exhibited were significant.


Hey! How are you? Good to hear from you. This year has been really hellish with all the budget cuts and stuff but we are pulling through. So what is it you wanted to know about this time?


(interview)



You’ve got to tell them [the program] about the [blank] course. It was terrible! Horrible! We don’t know where to turn . . . . Mary thought we should all get together and talk to you about all of it. You can talk to the program; I mean that’s what you are there for right?


(interview)


Dialogue and rapport


Response rates to surveys and questionnaires were significantly higher than the norm (frequently over 80%), an experience unique in my evaluation practice. This level of interest may have been influenced by the professional habits of librarians, or it may be an indication that participants were invested in the program, and subsequently in the program evaluation itself. Whatever the cause, data indicated that a large number of constituents fully participated in evaluation activities throughout the 2-year evaluation cycle. Consistent interactions provided an opportunity for us to build camaraderie and collegial relationships. Data show that the frequency of responses increased, and by the second year, many participants were communicating with the program evaluator on an ad hoc basis when they had questions, compliments, or concerns. Eventually, an ongoing discourse was firmly established, one that went beyond niceties.


I really can’t tell you how much I appreciate your asking us for feedback because sometimes I feel that colleges are just out there to make money from students and that they don’t care. This tells me that you are interested in me and the quality of instruction out there. Thank you.


(interview excerpt)



You [the evaluator] know the history of this program. You are there to listen! Thanks for listening!


(survey excerpt)



It’s great to know that you [the evaluator] are out there doing all this. I want you to know how much [I] appreciate it because you really add to the program. It’s good to talk to you.


(interview excerpt)



This is a really unique experience, I think you [the evaluator] are invaluable as a resource to this program.


(interview excerpt)


Formative interactions


The bulk of evaluation feedback consisted of formative critique offered to the project director. This was provided primarily during conversations and informal meetings, which frequently lasted up to 3 hours, often over cups of tea or meals, and which took place on an ongoing basis. For example, toward the middle of the first year of program implementation, the project director was asked to reassess the formula for time spent online versus in the classroom. A detailed and in-depth look at the data set, especially the portion pertaining to student learning and developmental perceptions, was requested. Subsequently, I met with the project director and presented formative findings designed to address this specific program query. The ensuing discussion, like most others, took the form of a collaborative interchange that focused inclusively on the welfare of the program from the point of view of various constituent groups. Pros and cons of various perspectives were explored, as well as negative and divergent data trends. In the final analysis, discussions regarding findings were perceived, by both the project director and myself, as productive and supportive.


This phenomenon was also observed in interactions with other UWS-SLEC faculty members, who were highly responsive and positively oriented to most formative suggestions. This was evidenced by the frequency with which institutional or programmatic changes were made when practicable. Again, this was viewed as an indication that the evaluation was, overall, a valuable and useful endeavor.


The successful relationship that developed during the program evaluation personally affected participants, especially the project director and me. It also strengthened the role of the evaluation within the evolution of the program and led to closer scrutiny and greater use of findings and suggestions. For example, formative reports and summaries were consistently published and used by UWS-SLEC faculty in redesigning course delivery and content. Many important decisions were made after consulting with me. At the end of the second implementation year, the summative report was condensed and used as supportive evidence to acquire additional intra- and extramural funding.


Critical friendship


Through participation in the UWS-SLEC evaluation, I found myself becoming more interested in the health and well-being of school library programs in general. My early research into the state of the profession nationally, coupled with close and ongoing work with the UWS-SLEC, convinced me that there was an educative need for strong library programs within schools. My work as an evaluator with the project also led me to learn more about online teaching, seek training in Macromedia Dreamweaver and WebCT, design four online courses using WebCT and Blackboard, conduct a small research study focusing on online learning and instruction, and publish a paper (Mello, 2002b) on these findings. In addition, I found that I enjoyed working with the program and came to consider it an important part of my scholarly development.


In the year following the completion of the evaluation, I continued to meet with the project director from time to time to "rehash how things stand at this time" and to reconnect on a personal basis. Many spur-of-the-moment social events, such as going out for coffee, occurred. I moved to a new department at another university within the UW system, and the project director took on responsibilities for mentoring me through the process of resigning, negotiating contracts, and moving. The relationship continues to grow and evolve. The project director still contacts me to get feedback and keep me informed about how the UWS-SLEC is faring.




ANALYSIS



EVALUATION, ADVOCACY, AND CARE


Evaluation activities were iterative and ongoing. In the beginning, I knew almost nothing about library mediaship or its significance in school environments. I had only one previous experience evaluating school library programs. This took place during my tenure with the Program Evaluation and Research Group at Lesley University and within the context of a larger evaluation project underwritten by the National Science Foundation. Through the UWS-SLEC study, I not only became aware of the impact that school librarians have on educational institutions in general, but I also came to agree with the program’s advocacy stance in regard to shoring up and sustaining the profession statewide. Although it was not unusual for me to take an interest in the welfare of individuals or programs I had evaluated in the past, this particular investigation brought about a stronger and longer lasting interest, possibly because of the convergence of a variety of factors, namely (a) individual personalities, (b) the evaluator’s proclivity for literacy issues (such as the health of libraries), (c) the ecological and systemic conditions affecting the program that the evaluation investigated, and (d) the program’s use of formative feedback to make iterative changes and adjustments on an ongoing basis.


The entire process brought both the project director and me toward a deeper understanding regarding (a) the program’s context and ecological foundation; (b) the lives, both personal and professional, of participating faculty, students, and staff; (c) a shared scholarly interest in the disciplines and fields that grounded the program; and (d) an interest in the professional development of school library media specialists in general.


House and Howe (1998) pointed out that advocacy stances within program evaluation procedure are organic to the process and may well be necessary to provide complete and clear assessments. Through advocacy, which House and Howe define as being (1) theoretically based, (2) proactive in relation to project outcomes, and (3) comprehensive in characterizing the benefits and liabilities of the program, the evaluator creates an intuitive framework that guides the process and integrates differing perspectives: "We believe that all evaluators must embrace some conception of the public interest, of democracy, and of social justice, even if these conceptions are implicit. They cannot avoid it in the conduct of their studies" (p. 3).


This is akin to what Dymond (2001) defined as the evaluator’s "stakeholder involvement." In the case of the UWS-SLEC evaluation, the evaluator "embraced" the interest of the public, in this case, the students and educators of Wisconsin, ultimately becoming involved not only as a data collector and analyst but also as an informal advisor to the program (evidenced by the ongoing phone calls, the consistent informal meetings and conversations, and the program’s immediacy in following up on suggestions and nascent findings). Additionally, the involvement of the stakeholders in the evaluation process was iterative and included the evaluator.


Kisker and Brown (1997) maintain that "at the heart of [the] program evaluation is the . . . intervention of interest" (p. 541). Chelimsky (1998) concurs, asserting that all program evaluations ought to be predicated on underlying questions that, when posited, support the evolution of broad-based interest and advocacy for the program.5 In the case of the UWS-SLEC evaluation, an "intervention of interest" was maintained, and an "advocacy stance" evolved as a result of methodological procedures. In addition, Chelimsky’s questions, designed to address the assumption that all advocacy is proactive and supportive (which is not guaranteed), were answered affirmatively through proactive and ongoing responses to evaluation protocols and emergent findings.


In the specific case of the UWS-SLEC, the evaluation research relationship moved, on the part of the evaluator, from interest in the program’s efficacy to concern for the welfare of the program and its stakeholders. Although some could argue that this compromised the evaluation’s validity, this study suggests the opposite and challenges the belief that evaluators are actually able to be impartial. It also asks where the line should or can be drawn, especially in the matter of advocacy.


This article presents a model quite different from the prevailing idea that an evaluator’s role needs to be dispassionate and aloof. Its intention is to present a case in which the program’s ability to evolve was supported and fostered through the creation of interpersonal connections. Further, it argues that the role of "critical friend," often cited as a mainstay of behavior in program evaluation methodology, needs to be fully explored so that actual practices and cases within the field can be more thoroughly and honestly discussed. In this particular case, and this case only, the outcome of the critical friendship that developed was a thorough focus on data collection and a richer, more precise analysis of these data.



PROGRAM EVALUATION AND EDUCATIONAL PRACTICE


Educational program evaluation is primarily concerned with the practices and activities that are foundational to teaching, learning, and schooling. Therefore, as discussed previously, it has an obligation to operate, at some level, within the teaching-learning paradigm. One can argue that evaluations, which exist to support and inform educative environments, must be cognizant of this perspective. In this particular case study, a connection to teaching and learning was firmly established and examined through a mentoring relationship.


In the case of the UWS-SLEC evaluation, the evaluator role evolved from mentor toward ally. The research relationship developed through shared interactions that were predicated on interest in sustaining the program and on a willingness to change and incorporate evaluator feedback in an ongoing and formative manner. As the evaluator, I also earned a status in the program that sometimes required me to act as a mentor-teacher and to guide the program, as opposed to directing or diagnosing the project. In this way, a caretaking stance of advocacy and counsel was established.


Taking the metaphor further, trust and respect for individuals’ expertise and perceptions were built during formative sessions between the project director and me. We experienced an interaction akin to what Vygotsky (1962) described as the Zone of Proximal Development, in which all individuals involved socially construct meaning and expertise interactively and iteratively. Additionally, the close, personal, and ongoing relationship resulted not only in greater use of the evaluation overall but also in a professional bond that lasted after the evaluation was completed.



IMPLICATIONS


Although this study is limited in its approach and reports on one case only, it is hoped that it will stand within the literature as an example of how the research relationship supported the evaluation process. The experience of the UWS-SLEC program evaluation suggests that evaluators of educational programs be cognizant of teaching and learning processes and attend to the development of supportive research relationships during evaluation activities. Findings discussed here suggest that developing mentor/mentee relationships might help to ensure that the results of educational program evaluations are used proactively and thoughtfully.


For too long, the field has lived under the false assumption that it is the evaluator’s job to be impartial, objective, or dispassionate. This study’s findings suggest the opposite and propose that the field of program evaluation might benefit from the use of a teaching/learning model of advocacy care (House & Howe, 1998) as a way of supporting the creation of an optimal developmental environment. After all, isn’t that the real intent of the majority of formative program inquiry? Providing feedback, based on authentic and comprehensive data, that supports the creation of quality projects is one of the strongest arguments in favor of program evaluation (Patton, 2001).




Notes


1 Wisconsin requires all schools for grades 7–12 to have a qualified school library media specialist on site. It requires elementary schools to have a "supervised" library media center only.


2 Nationally, 71% of elementary schools have a library/media center; many of these are not staffed on a regular basis. Ninety percent of secondary schools report having a center, but only 87% of these are actually staffed by a licensed library media specialist. The case is far worse in schools in which the student population is under 200 or at Bureau of Indian Affairs schools (National Center for Educational Statistics, 2001).


3 Data excerpts are quoted throughout the text as examples of the larger data set. They are identified by their source rather than by individual. Some names and other identifying details have been changed to ensure anonymity of the informants.


4 All data quoted here are indicative of the larger data set and used here with the permission of the project. Individuals remain anonymous.


5 Chelimsky proposes a set of 12 questions used before an evaluation is undertaken. In part, these include: If the evaluation is feasible, should it be done? Is the program likely to implement changes? If so, is it politically implementable? If the results are different from what the sponsor hopes, will findings still be useful?




References


Bronfenbrenner, U. (1979). Ecology of human development. Cambridge, MA: Harvard University Press.


Chelimsky, E. (1998). The role of experience in formulating theories of evaluation practice. American Journal of Evaluation, 19, 35–56.


Clandinin, D. J., & Connelly, F. M. (2000). Narrative inquiry. San Francisco: Jossey-Bass.


Cohen, S. B., & Hickman, P. (1998, April). Statewide Implementation Program (SIP): Effective models for curriculum implementation: Center for the enhancement of science and mathematics education. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, San Diego, CA. (ERIC Document Reproduction Service No. ED419683)


Denzin, N. K. (1997). Interpretive ethnography. Thousand Oaks, CA: Sage.


Dewey, J. (1947). Education and experience. New York: Macmillan.


Ducan, K., & Stassio, M. (2001). Surveying feminist pedagogy: A measurement, an evaluation, and an affirmation. Feminist Teacher, 13, 225–239.


Dymond, S. (2001). A participatory action research approach to evaluating inclusive school programs. Focus on Autism, 16, 54–64.


Eisner, E. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York: Macmillan.


Eisner, E. (1998). The kind of schools we need. Portsmouth, NH: Heinemann.


Everhart, N. (2000). Looking for a few good librarians. School Library Journal, 46(9), 58–61.


Garnham, C., & Kaleta, R. (2002). Introduction to hybrid courses. Teaching with Technology Today, 8(6). Retrieved April 10, 2002, from http://www.uwsa.edu/ttt/articles/garnham.htm


Goodwin, A. L. (Ed.). (1997). Assessment for equity and inclusion: Embracing all our children. New York: Routledge.


Gubrium, J. F., & Holstein, J. A. (1999). At the border of narrative and ethnography. Journal of Contemporary Ethnography, 28, 561–574.


Hawkins, D. (1975). The informed vision: Essays on learning and human nature. New York: Agathon Press.


Hein, G. (Ed). (1990). The assessment of hands-on elementary science programs. Grand Forks: North Dakota Study Group on Evaluation.


Hein, G. (1998). Learning in the museum. London: Routledge.


House, E. R., & Howe, K. R. (1998). The issue of advocacy in evaluations. American Journal of Evaluation, 19, 3–12.


House, E. R., & Howe, K. R. (2000). Deliberative democratic evaluation. New Directions for Evaluation, 85, 3–12.


Kisker, E. E., & Brown, R. S. (1997). Nonexperimental designs and program evaluation. Children and Youth Services Review, 19, 541–566.


Lauritzen, P. W. (1998). Supply and demand of educational personnel for Wisconsin Public Schools 1998: An examination of data trends (Research Report, Wisconsin Department of Public Instruction). Retrieved June 17, 2002, from http://www.dpo.state.wi.us/dpi/dlsis/tel/supdem98.html


Lawrence-Lightfoot, S., & Davis, J. H. (1997). The art and science of portraiture. San Francisco: Jossey-Bass.


Leamnson, R. (1999). Thinking about teaching and learning. Sterling, VA: Stylus.


Lerner, F. A. (1998). The story of libraries: From the invention of writing to the computer age. New York: Continuum.


Maxwell, J. (1996). Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage.


McNabb, M., Hawkes, M., & Rouk, U. (1999). Critical issues in evaluating the effectiveness of technology. (Report on The Secretary’s Conference on Educational Technology, Department of Education, Washington, DC). Retrieved April 10, 2002, from http://www.ed.gov/Technology/TechConf/1999/confsum.html


Mello, R. (2001). Summative report 7/01. Unpublished research report, UWS-SLEC at the Department of Educational Foundations, University of Wisconsin, Whitewater.


Mello, R. (2002a). Summative report 7/02. Unpublished research report, UWS-SLEC at the Department of Educational Foundations, University of Wisconsin, Whitewater.


Mello, R. (2002b). "100 pounds of potatoes in a 25 pound sack": Stress, frustration, and learning in the virtual classroom. Teaching with Technology Today, 9(7). Retrieved June 10, 2002, from http://www.uwsa.edu/ttt/articles/mello.htm


National Center for Educational Statistics. (2001). Schools and staffing survey 1999–2000: Overview of the data for public, private, public charter, and Bureau of Indian Affairs elementary and secondary schools (Research Report, NCES, Washington, DC). Retrieved April 15, 2002, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2002313


No Child Left Behind Act of 2001, Pub. L. No. 107-110 (2001).


Palmer, P. (1998). The courage to teach. San Francisco: Jossey-Bass.


Patton, M. (1999). Grand Canyon celebration. Amherst, NY: Prometheus Books.


Patton, M. (2001). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage.

Perez, A. I., et al. (1997). Forms of collaboration: The flexible role of the researcher within the changing context of practice. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL. (ERIC Document Reproduction Service No. ED410290)


Schon, D. (1983). The reflective practitioner. New York: Basic Books.


Schroeder, E. E., Zarinnia, E. A., Slygh, G., Hopkins, D. M., Robbins, L., Cross, E., et al. (2001). Teaching with Technology Today, 8(3). Retrieved December 10, 2001, from http://www.uwsa.edu/ttt/articles/zarinnia.htm


Shadish, W., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Thousand Oaks, CA: Sage.


Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523–540.


Strauss, A. (1993). Continual permutations of action. Chicago: Aldine de Gruyter.


Strauss, A., & Corbin, J. (Eds.). (1997). Grounded theory in practice. Thousand Oaks, CA: Sage.


Vygotsky, L. (1962). Thought and language. Cambridge, MA: MIT Press.


Wisconsin Department of Public Instruction. (2001). Supply and Demand of Educational Personnel in Wisconsin Public Schools, 2001. Madison, WI: Author.





Cite This Article as: Teachers College Record, Volume 107, Number 10, 2005, pp. 2351–2371.
https://www.tcrecord.org ID Number: 12196

About the Author
  • Robin Mello
    University of Wisconsin-Milwaukee
    ROBIN MELLO is assistant professor of theatre, program evaluator, and coordinator for the K–12 Theatre Education Program at the Peck School of the Arts, University of Wisconsin—Milwaukee. Previously, she was a research associate at the Program Evaluation and Research Group at Lesley University, where she evaluated a wide variety of formal and informal educational programs.
 