Countless decisions in academe are based on the quest for excellence. Which professors to hire and promote. Which grants to fund. Which projects to pursue. Everyone wants to promote excellence. But what if academe actually doesn’t know what excellence is?
Michèle Lamont decided to explore excellence by studying one of the primary mechanisms used by higher education to -- in theory -- reward excellence: scholarly peer review. Applying sociological and other disciplinary approaches to her study, Lamont won the right to observe peer review panels that are normally closed to all outsiders. And she was able to interview peer review panelists before and after their meetings, examine notes of reviewers before and after decision-making meetings, and gain access to information on the outcomes of these decisions.
The result is How Professors Think: Inside the Curious World of Academic Judgment (Harvard University Press), which aims to expose what goes on behind the closed doors where funds are allocated and careers can be made. For those who have always wondered why they missed out on that grant or fellowship, the book may or may not provide comfort. Lamont describes processes in which most peer reviewers take their responsibilities seriously, and devote considerable time and attention to getting it right.
She also finds plenty of flaws -- professors whose judgment on proposals is clouded by their own personal interests, deal making among panelists to ensure decisions are made in time for them to catch their planes, and uneven, somewhat unpredictable efforts by panelists to reward personal drive and determination over the qualities that a grant program says are its actual criteria.
On diversity, Lamont’s research finds that peer reviewers do factor it in (although the extent to which they do so varies by discipline). But peer reviewers are much more likely to care about diversity of research topic or institution than gender or race, she finds.
As for excellence, that quality that peer review theoretically promotes, Lamont isn’t so sure it exists. It may be invoked all the time, she said in an interview, but her examination of the process suggests no way to measure it. “I think excellence means nothing,” she said, suggesting that panels be honest about the criteria they use. “I think you have to give the criteria. Typically it's originality, feasibility, and also the social and intellectual significance.” There is nothing wrong with those definitions per se, she said, but people shouldn't pretend they equate with some scientific measure of excellence, as other criteria could be used as well.
The most common flaw she documents is a pattern of professors applying very personal interests to evaluating the work before them. “People define what is exciting as what speaks to their own personal interest, and their own research,” she said.
Even if her book doesn’t change peer review, Lamont writes that she wants to “open the Black Box of peer review” so the scholars being evaluated have a better understanding of what happens to the applications in which they have invested so much time and hope. But she does have hope for those on the panels too. “I also want the older, established scholars -- the gate keepers -- to think hard and think again about the limits of what they are doing, particularly when they define ‘what is exciting’ as ‘what most looks like me (or my work).’ ”
Lamont is no stranger to the peer review process. She has won grants and served on peer review panels, and done both from a position as an academic “insider,” she writes. She is a tenured professor at Harvard (with appointments in European studies, sociology and African and African-American studies, while also serving as a senior adviser on faculty development and diversity for the Faculty of Arts and Sciences). At the same time, she notes that she brings an outsider’s perspective to her study, as one who was French educated and French speaking and thinking until she came to the United States, and who is “not enamored with ‘insiderism.’ ”
To get inside the process, Lamont had to pledge confidentiality. While she describes in general terms some of the organizations whose peer review panels she observed, and names some of the organizations, the interviews and descriptions in the book aren’t linked to any specific competition, nor are peer reviewers or applicants named. Among the peer review processes she was permitted to study were some sponsored by the American Council of Learned Societies, the Social Science Research Council, and the Woodrow Wilson National Fellowship Foundation.
The peer review processes she studied involved grants to professors and graduate students, and all the panels involved professors from many disciplines. She writes that, as a result, the findings may suggest similar issues for multi-disciplinary committees on individual campuses -- panels that frequently play a key role in tenure reviews once a candidate has been considered at the departmental level.
One of the key findings was that professors in different disciplines take very different approaches to decision making. The gap between humanities and social sciences scholars is as large as anything C.P. Snow saw between the humanities and the hard sciences.
Many humanities professors, she writes, “rank what promises to be ‘fascinating’ above what may turn out to be ‘true.’ ” She quotes an English professor she observed explaining the value of a particular project: “My thing is, even if it doesn’t work, I think it will provoke really fascinating conversations. So I was really not interested in whether it’s true or not.”
In contrast, Lamont quotes a political scientist on what he values in proposals he reviews: “Validity is one, and you might say parsimony is another, I think that’s relatively important, but not nearly as important as validity. It’s the notion that a good theory is one that maximizes the ratio between the information that is captured in the independent variable and the information that is captured in the prediction, in the dependent variable.”
Lamont acknowledges that not all professors align with disciplinary norms, but she cites not only individual quotes, but tabulations of the words and values expressed by peer reviewers that show the strength of the patterns.
Among her findings:
The Middle of the Pack and Horse-Trading: Most peer review panels spend relatively little time on proposals that come in with broad support or little support, devoting most of their time instead to middle-of-the-pack proposals with flaws of various types. In deliberations, many panelists admit to forming alliances with like-minded scholars to back or oppose proposals, and to "strategic" voting, in which they may go along with one grant to win support for another. Many admit to "high-balling" proposals that they like, giving them ranks higher than deserved, as a means of keeping a proposal alive in the competition. But relatively few would admit to "low-balling," and there appears to be a general consensus against it.
The Luck of Timing: The most intense discussions take place proposal by proposal, and it is relatively rare to go back -- on the basis of finding more deserving proposals -- and pull out those already awarded, Lamont writes. One panelist told her of a session: "I feel that if the meeting had gone another day, and if we had been allowed to pull people out of the 'yes' list and change our minds, there might have been six or seven or eight switches." Another timing issue involves the inevitable plane to catch. One panel Lamont observed simply didn't award all the fellowships it could have because the reviewers wanted to leave for the airport.
The Power of Personal and Professional Interests: Lamont writes that most reviewers would never admit to being unfair and would never engage in explicit favoritism based on personal ties, or an applicant being a student of a friend or colleague. But when it comes to an affinity for work that is similar to their own or that reflects personal interests having nothing to do with scholarship, many applicants benefit in a significant way. In a passage that may be one of the most damning of the book, Lamont writes: "[A]n anthropologist explains her support for a proposal on songbirds by noting that she had just come back from Tucson, where she had been charmed by songbirds. An English scholar supports a proposal on the body, tying her interest to the fact that she was an elite tennis player in high school. A historian doing cross-cultural, comparative work explicitly states that he favors proposals with a similar emphasis. ... Yet another panelist ties her opposition to a proposal on Viagra to the fact that she is a lesbian: 'I will be very candid here, this is one place where I said, OK, in the way I live my life and my practices ... I'm so sick of hearing about Viagra. ... Just this focus on men, whereas women, you know, birth control is a big problem in our country. So I think that's what made me cranky.' Apparently, equating 'what looks most like you' with 'excellence' is so reflexive as to go unnoticed by some."
Morality and Character: When peer reviewers talk about excellence in their deliberations, Lamont writes, they frequently link their opinions on applicants' character to their proposals (without much link to what grant competitions claim to be about). For example, she writes that there are frequent attempts to bolster proposals from "courageous risk-takers," or to reject ideas from "lazy conformists." People also reference, in positive ways, such qualities as "determination," "humility" and "authenticity," she writes.
Diversity of Diversity Considerations: Generally, Lamont writes that peer reviewers believe that diversity in higher education is a good thing and should be encouraged. But she finds relatively little attention paid to the gender and race of applicants, and much more to diversity of topics and to "institutional affirmative action," which is only sometimes endorsed by the funding agencies whose panels practice it. "Panelists practice institutional affirmative action because they believe that private, elite, and research-focused universities are privileged in the competition process," Lamont writes. She quotes a number of panelists as becoming frustrated when they realized that they were approving grants from similar institutions, and who then looked for other proposals to support. At the same time, Lamont writes of cases where the benefit of the doubt goes to someone from a prestigious institution (again without apparent justification in terms of funding organization criteria). She quotes a reviewer on one proposal as saying: "I know that Chinese literature at Penn is very highly regarded, she can't be a dummy doing this particular kind of work. ... This is a subject that if she had been from some tiny little hole-in-the-wall college, it's not likely I don't think."
So where does this leave Lamont on peer review? She actually thinks that the meetings, even with the flaws she exposes, are important to preserve. Thoughtful discussion among panelists can lead people to move beyond personal biases and make better decisions, she said. And so, for example, she is skeptical of moves to just have panelists rank proposals online, with some sort of computation of scores without a real meeting. "There is a real reason why deliberation takes place," she said.
Having studied this subject in such detail, Lamont said, the most important recommendation for herself and others is to ask more questions ... of oneself. "When I declare that something is exciting now, I am more aware of how this relates to my own agenda," she said. "I hope that people will read the book and that we'll all be more reflective on how we do this."