Is Your Stuff up to Snuff? Conditions for Success in Academic Peer Review


by Michèle Lamont - November 24, 2009

Drawing on her book How Professors Think: Inside the Curious World of Academic Judgment, sociologist Michèle Lamont discusses the role of emotions, interactions, and identities in academic evaluation, including tenure review. She argues that subjectivity is essential to evaluation and that we need to be more aware of its impact rather than try to eliminate it. She also argues that we should shift our focus away from bias and toward a better understanding of evaluative cultures and evaluative practices, and of how they differ across disciplines, universities, and national contexts.

The evaluation of research and teaching is at the heart of the tenure system of American universities. Quality control is especially crucial at the point when our universities consider whether to make a life-long commitment to a particular individual. Against this background, we cannot be surprised that social scientists are obsessed with the quality of evaluation, and particularly with peer review. According to a recent report, by 2004 no fewer than 6,708 authors had published a total of 3,720 articles on peer review, appearing in 1,228 journals, and this does not include monographs, compilations, and grey literature. Clearly, we want to “get it right”! But are we looking in the right place?


Much of this immense literature focuses on bias, and much of it appears to be premised on the notion that, under ideal conditions, evaluators would stand above their social and cognitive networks to consider objectively and consistently whose “stuff is up to snuff.” This literature often posits that the non-cognitive (social networks, identities, and political commitments) corrupts the process. I believe this perspective is misguided and that we need to reorient our energies away from bias and toward understanding evaluative cultures. This is one of the arguments I develop in How Professors Think: Inside the Curious World of Academic Judgment (Harvard University Press, 2009). This book offers an in-depth analysis of grant peer review based on interviews and observations conducted in twelve multidisciplinary funding panels in the social sciences and the humanities, including panels organized by the American Council of Learned Societies, the Social Science Research Council, and the Woodrow Wilson National Fellowship Foundation. What I learned is of immediate relevance for our broader understanding of academic evaluation, including the tenure process.


To be clear, I don’t dispute that studying bias is important; we need to know whether and how members of underrepresented groups or specific types of scholarship are systematically underrated. Indeed, it is a matter of basic fairness and justice, as well as efficiency (to the extent that neglecting or underestimating talent is wasteful and leads to underperformance). But an overly exclusive emphasis on bias has been counterproductive from the perspective of improving our academic evaluation system.


My main argument is that we cannot remove the social from evaluation. Instead, we need to heighten our awareness of its impact so that we are better able to neutralize its negative effects. More specifically, the emotions of evaluators (e.g., their desire to have their expertise honored by other panelists), their identities (as social scientists, humanists, members of specific disciplines, or of other more or less relevant communities), and the dynamics of panel interactions are intrinsic to the process of evaluation. They are unavoidable if we want evaluation to be conducted not by computers but by human beings, who come with emotions, identities, and interactions. And I believe we need to continue to depend on human beings, as their evaluations are the best available: better informed, more nuanced, and more contextually specific.


There is much to be said for emotions and identities, as they often lead experts to take steps that contribute to the quality of their evaluation. For instance, experts generally spend considerable time carefully reading the tenure file of a candidate, which helps maintain the faith they have in the evaluation system and their pride and self-concept as respected and respectable academics. Interactions have many positive aspects as well. For instance, through interaction on evaluation panels, peer reviewers combine their knowledge to determine what is worth funding. It is through interacting that experts formulate arguments and reach a shared understanding of how research and researchers compare in terms of quality.


In How Professors Think, I suggest that we need to consider the social and cognitive constraints under which evaluators operate. Such a pragmatic approach may be more conducive to improving the evaluation process than a Pollyanna-ish approach that aims to abstract peer review from interaction and emotions or to “buffer” evaluators from social influences. By gaining an understanding of evaluative cultures, including those found in different types of disciplines, universities and colleges, and national settings, I believe we will do more to understand whether and how we can improve peer review. Findings from How Professors Think are suggestive in this respect. Here are some of the conclusions I drew:


What looks most “like us,” i.e., like our own research, is often what seems most exciting. Therefore, judgments of taste should generally be subordinated to judgments concerning the quality of execution. However, judgments about originality should trump judgments about competence, to the extent that there are plenty of competently executed proposals that are not worth carrying through. Encouraging panelists to be more aware of how their idiosyncratic preferences reflect their own work should increase the overall quality of the evaluation process.

Subjectivity and taste do not simply corrupt the evaluation process; they are also essential to the collective identification and construction of excellent proposals. Indeed, it is because of their taste and connoisseurship (their mastery of the refined classification schemes needed to determine what is original) that academics are asked to pronounce judgment. However, these need to be tempered by strict rules concerning conflicts of interest to limit the impact of personal networks on the evaluation process. More specifically, evaluators should be provided with a list of instances in which they are obligated to abstain due to network proximity, and this list should include not only their direct students, collaborators, and colleagues, but also those of their close collaborators.

As they go about assessing grant and fellowship proposals, evaluators find it impossible to be consistent in their standards: different proposals make different criteria of evaluation salient (originality, significance, feasibility, etc.). It is not until after having read the full set of proposals under consideration that all relevant criteria emerge. Only by revisiting all proposals after the full list of criteria has been identified is it possible to reduce the impact of our cognitive limitations on the selection process. Moreover, the list of comparands is unstable: because of these limitations, all proposals cannot be compared to one another at once. Instead, subgroups of proposals are lumped together based on a range of criteria, whether proximity in the alphabet, similarity of topic, or similarity in the characteristics of the applicants (those who teach in small liberal arts colleges, African-American women, etc.). Being aware of such limitations will improve the evaluation process.

Evaluation occurs under real-life constraints – e.g., the need to finish in time to catch a plane. Thus evaluation is not a purely cognitive activity, but one that is shaped by pragmatic considerations. Being fully aware of such constraints, especially toward the end of panel deliberations, will also help improve the overall quality of the evaluation process.


Many characteristics of the American higher education system lead us to think that peer review works better here than elsewhere – the size of our academic community, with more than 2,500 colleges and universities; autonomy in hiring across institutions, which sustains market mechanisms; the geographical size of the academic field and the spatial mobility of the faculty, which diminish the impact of local networks of clientelism; the lengthy and closely supervised nature of our graduate training, which contributes to the diffusion of shared norms and a shared culture of evaluation; and, of course, the centrality of meritocracy in our national ideals. Of course, the system is far from perfect. However, we need to remember that the privilege of academic self-government comes with many responsibilities. It behooves us to ensure the reproduction of the evaluative culture, and the accompanying set of practices, on which our autonomy rests.


Reference


Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Cambridge, MA: Harvard University Press.




Cite This Article as: Teachers College Record, Date Published: November 24, 2009. https://www.tcrecord.org ID Number: 15847


About the Author
  • Michèle Lamont, Harvard University
    MICHÈLE LAMONT is Robert I. Goldman Professor of European Studies and Professor of Sociology and African and African American Studies at Harvard University.
 