Recently I received an e-mail that prompted me to think once again about commensuration -- the social process of providing meaning to measurement. The study of commensuration involves analyzing the form and circulation of information and how counting changes the way that people attend to it, as discussed in articles by Wendy Espeland and Mitchell L. Stevens and by Espeland and Michael Sauder.
The e-mail came from the editor of a special issue of an American journal in my field, concerned my contribution to the issue, and contained a recommendation based on current metrics governing the worth of ideas: "There is one thing I want to encourage you to consider doing, namely have a look at a couple of preliminary and relevant articles from other contributors to the special issue. If you acknowledge each other’s work it will clearly add to the feeling of having a special issue that is relatively well-integrated, plus add to the impact factor of each other’s work." He had dared to request out loud that we game the system, a practice generally discussed in whispers.
The editor is a particularly ambitious young man, who is bright, works hard, and wants to scale the rungs to the top of academe. There are lots of young academics who fit that description, but other non-tenured full-time faculty to whom I mentioned the e-mail were appalled. "You’re kidding," one said, as a look of disgust took over his face. A young woman to whom I forwarded the quote replied promptly: "That impact factor comment in the letter is a little depressing -- are we academics really that pathetic?"
Perhaps because I am a sociologist, that e-mail got me to thinking about the measurement of value in academe. (I had contemplated the politics of self-promotion previously, when another untenured researcher had asked me to "like" his work on Facebook.) Certainly the practice of measuring human value is not a new thing. Economists have long conflated wages with the measurement of human value, as writers from Karl Marx and Adam Smith to today’s neoliberals have clearly shown. (Smith was for conflation; Marx was against; the neoliberals don’t even know that such conflation can be challenged.)
When I was a kid in the 1950s, someone had calculated the worth of the chemicals in the human body -- $1.78. I remember being surprised that a body was worth so little instead of being shocked that someone had even performed the calculation. Today I’m not taken aback to learn on a website that someone has calculated “the lucrative uses for the roughly 130 pieces of body tissue that are extracted, sterilized, cut up, and put on the market” -- $80,000. As I age, I am becoming harder to shock. After all, there is a cadaver industry. At least three television dramas -- "Law and Order: Special Victims Unit," "The Closer" and "The Mentalist" -- have reminded me that evildoers will plot to obtain body parts and will kill to make their way up the list of people awaiting transplants.
I don’t think I am naïve. I have heard discussions of impact factors before, mostly when people evaluate their colleagues’ scholarly contributions to decide whether they deserve promotion or tenure. Usually, the term refers to a metric that supposedly summarizes the worth of a journal, calculated by the number of citations per article that it has received either in other journals (as given by either the ISI Web of Knowledge or Scopus) or in books and journals (Google Scholar). There is some variation in how a journal scores, depending on which company is reporting. Scopus emphasizes science journals; ISI includes humanities and social science journals; Google Scholar adds books. The meaning of the metric is simple: the higher the score, the better the journal. It follows that the higher the impact factor of the journals in which a candidate publishes, the worthier the candidate. Thus, a candidate for tenure whose publications are all in journals with high impact is worthier than a candidate who publishes in lesser journals, all else being equal (though of course, it never is). I once heard a biologist praise a candidate for tenure, because he had published in a journal with an impact score of 4.5, which is quite good in most branches of biology and off the charts in the social sciences.
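For readers unfamiliar with how such a score is produced: the commonly reported two-year version reduces to simple division. A minimal sketch, with invented numbers (not the actual pipeline of any citation index):

```python
def impact_factor(citations_this_year, citable_items_prior_two_years):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of those
    articles. The inputs here are hypothetical."""
    return citations_this_year / citable_items_prior_two_years

# Invented example: a journal published 40 + 35 citable articles over the
# two prior years, and those articles drew 180 citations this year.
score = impact_factor(180, 40 + 35)
print(round(score, 2))  # 2.4
```

The same arithmetic underlies the biologist's praise above: a score of 4.5 means that, on average, each recent article in that journal was cited four to five times in a year.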
Impact scores affect subfields. Just as the top molecular biology journals have higher scores than the top environmental biology journals, so too within any one discipline, some specialties score higher than others. The more people work in a subfield or a specialty within that subfield, the higher the potential impact factor. Last year when Gender & Society, the journal of Sociologists for Women in Society, had the fourth highest score of the 132 journals in sociology, the organization’s list-serv celebrated. Several people looked forward to telling the news to colleagues who had pooh-poohed the study of gender, so that those colleagues could eat their words.
Impact scores also affect whole universities. Several years ago, top administrators at the University of Chile advised some professors to help improve the institution’s international ranking by publishing in “ISI journals.” (This is also an instruction to publish in English, since the Web of Knowledge is more likely to include English-language journals in its calculations than journals in other languages.) Already one of the top ten institutions of higher education in Latin America, this public university is locked in competition with the private Pontifical Catholic University of Chile.
And, of course, impact scores affect the journals being rated. Supposedly, given the choice of two journals that might accept her work, the canny professor will submit to the journal with the higher impact score. The more articles submitted, the more rejected, the better the articles published -- or so the theory goes. Editors keep track of their journal’s score and publishers list the scores on their websites. Last year, like other members of one editorial board, I received a joyous e-mail announcing that journal’s impact factor and celebrating its relative achievement. By its fourth year, it had risen to the middle of the pack in its subfield.
To me, an agreement to cite one another’s work accepts the proposition that citations indicate the quality of an individual’s research. That theory receives concrete validation every time that the members of a promotion and tenure committee check how many citations a candidate has received. I’ve seen cases where committee members were so wedded to the measure that they could not hear that the candidate had received few citations because he was in an emerging field and also could not accept that members of such fields don’t score so well on impact measures. When enough people can attend a convention to discuss a supposedly nascent idea, Marshall McLuhan once said, that idea is no longer innovative. McLuhan might well have been discussing the circulation and impact factor of journals.
I find it worrisome that all of these uses of impact factors may shape a field. I've heard tell that after preparing a self-evaluation for a quinquennial review of his department, one social science chair urged his colleagues to publish articles rather than books. Articles garner citations more quickly. If everyone published articles, he thought, the department would collect citations more quickly and so would zoom up the national rankings of the quality of departments in its field. The chair forgot to mention that in his discipline, journals tended to publish one kind of research and books, another. Perhaps he didn’t realize that he was essentially telling his colleagues what sort of scholarship they ought to do.
Unhappily, as I think about all of this measurement, I am forced to examine my own practices. It's just too easy to audit oneself and to confuse the resulting number with some form of self-worth. When Google Scholar announced that intellectuals could have access to their citation count, as well as their scores on the h and i10 indices, I first Googled the indices. (I found, "an h-index is the largest number h such that h publications have at least h citations." An "i10-index is the number of publications with at least 10 citations.") Google Scholar was also good enough to tell me the scores of a newly promoted professor and of a potential Nobel laureate. Then I looked myself up. After several weeks I realized that by auditing my citations and indices much as I might check my weight, I had commodified myself -- my worth to both my department and my university -- every bit as much as the cadaver industry has calculated the worth of my body parts.
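The two definitions quoted above translate directly into a few lines of code, which makes plain just how mechanical this self-audit is. A minimal sketch, with a wholly invented citation record:

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

# Hypothetical scholar with five papers and these citation counts:
record = [25, 8, 5, 3, 3]
print(h_index(record))    # 3  (three papers have at least 3 citations each)
print(i10_index(record))  # 1  (only one paper has 10 or more citations)
```

Two numbers, produced in an instant, standing in for a career -- which is exactly the commensuration the essay is worrying about.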
I like to tell myself that checking my citation count is only a symbolic exercise in commensuration. After all, no one knows the exact financial worth of each citation of each scholar working at each research university, let alone for scholars in my discipline and subfields. In contrast, the cadaver industry is dealing in concrete dollars and cents. I find the discrepancy between these calculations comforting. I advise myself: since it is only symbolic, my self-audit does not yet qualify as commodification. As Marx might have put it, I have not yet paid so much attention to my product (published research) that I have confused the value of the product with the dignity of the maker. I care about that; I’m discussing my dignity.
But then I think again. My self-audit of my own citation count expresses obeisance to the accountability regime that increasingly governs higher education. (An accountability regime is a politics of surveillance, control and market management that disguises itself as value-neutral and scientific administration.) Sure, the young scholar who had sent me that e-mail advocating mutual citation felt he was advancing his career and protecting himself from failure. But I, too, have been speeding the transformation of higher education from an institution that stresses ideas to one that emphasizes measurement and marketability. I am ashamed to say that in this job market, I would feel hard-pressed to tell the young man to ignore his citations and just do his work.
Gaye Tuchman, professor emerita of sociology at the University of Connecticut, is author of Wannabe U: Inside the Corporate University and Making News: A Study in the Construction of Reality.