Publishers

Essay offers guide to those who review journal submissions

Everyone gets rejected. And it never stops being painful, no matter how successful you are or how long you have been in the business. Some of this is inevitable; not everyone is above average. But some of it isn't. I thought that I would offer some dos and don’ts for reviewers out there to improve the process and save some hurt feelings, when possible. Some are drawn from personal experience; others, more vicariously. I have done some of the "don’ts" myself, but I feel bad about it. Learn from my mistakes.

(Author's note: I'd like people to focus on the ideas in this piece, not the strong language, so I've substituted a new version with all the same points, but a few different words.)

First, and I can’t stress this enough, READ THE PAPER. It is considered impolite by authors to reject a paper by falsely accusing it of doing THE EXACT OPPOSITE of what it does. Granted, some people have less of a way with words than others and are not exactly clear in their argumentation. But if you are illiterate, you owe it to the author to tell the editors when they solicit your review. It is O.K. – there are very successful remedial programs they can recommend. Don’t be ashamed.

Second, and related to the first, remember the stakes for the author. Let us consider a hypothetical scenario. By a safe estimate, an article in a really top journal will probably merit a 2-3 percent raise for the author. Say that is somewhere around $2,000. Given that salaries (except in the University of California System) tend to either stay the same or increase, for an author who has, say, 20 years left in his/her career, getting that article accepted is worth about $40,000. And that is conservative. So you owe the paper more than a quick scan while you are on the can. It might not be good, but make sure. Do your job or don’t accept the assignment in the first place. (Sorry, I don’t usually like scatological humor, but I think this is literally the case sometimes.)

Third, the author gets to choose what he/she writes about. Not you. He/she is a big boy/girl. Do not reject papers because they should have been on a different topic, in your estimation. Find fault with the paper actually under review to justify your rejection.

Fourth, don’t be petty and whiny. Articles should be rejected based on faulty theory or fatally flawed empirics, not a collection of little cuts. Bitchy grounds include, but are not limited to: not citing you; using methods you do not understand but do not bother to learn; and lack of generalizability when theory and empirics are otherwise sound. The bitchiness of reviews should be inversely related to the audacity and originality of the manuscript. People trying to do big, new things should be given more leeway to make their case than those reinventing the wheel.

Fifth, don’t be a jerk. Keep your sarcasm to yourself. Someone worked very hard on this paper, even if he/she might not be very bright. Writing “What a surprise!”, facetiously, is not a cool move. Rejections are painful enough. You don’t have to pour salt on the wound. Show some respect.

Sixth, remember that to say anything remotely interesting in 12,000 words is ALMOST IMPOSSIBLE. Therefore the reviewer needs to be sympathetic to the possibility that the author could fix certain problems when given more space to do so. Not including a counterargument from your 1986 journal article might not be a fatal oversight; it might have just been an economic decision. If there are other things that you would need to see to accept an otherwise interesting paper, the proper decision is an R&R, not a reject. Save these complaints for your reviews of full-length book manuscripts, where they are more justifiable.

Seventh, you are not a film critic. Rejections must be accompanied by something with more intellectual merit than "the paper did not grab me" or "I do not consider this to be of sufficient importance to merit publication in a journal of this quality." This must be JUSTIFIED. You should explain your judgment, even if it is something to the effect of, "Micronesia is an extremely small place and its military reforms are not of much consequence to the fate of world politics." Even if it is that obvious, and it never is, you owe an explanation.

Brian C. Rathbun is associate professor in the School of International Relations at the University of Southern California. This essay is adapted from a blog post by Rathbun at The Duck of Minerva.

Essay on making campuses welcoming for older people

G.B. Shaw said that "youth is wasted on the young." Be that as it may, college is not wasted on the older, at least on kinder campuses. Maria Shine Stewart reflects.

Glenn McGee raises a storm in the bioethical world

Conflict of interest. Internet scrubbing. Nepotism. Allegations are flying around an influential journal's former editor.

Essay on the gaming of citation index measures

Recently I received an e-mail that prompted me to think once again about commensuration -- the social process of providing meaning to measurement.  The study of commensuration involves analyzing the form and circulation of information and how counting changes the way that people attend to it, as discussed in articles by Wendy Espeland and Mitchell L. Stevens and Espeland and Michael Sauder.

The e-mail came from the editor of a special issue of an American journal in my field, concerned my contribution to the issue, and contained a recommendation based on the current metrics governing the worth of ideas: "There is one thing I want to encourage you to consider doing, namely have a look at a couple of preliminary and relevant articles from other contributors to the special issue. If you acknowledge each other’s work it will clearly add to the feeling of having a special issue that is relatively well-integrated, plus add to the impact factor of each other’s work."  He had dared to request out loud that we game the system, a practice generally discussed in whispers.

The editor is a particularly ambitious young man, who is bright, works hard, and wants to scale the rungs to the top of academe. There are lots of young academics who fit that description, but other non-tenured full-time faculty to whom I mentioned the e-mail were appalled. "You’re kidding," one said, as a look of disgust took over his face. A young woman to whom I forwarded the quote replied promptly: "That impact factor comment in the letter is a little depressing -- are we academics really that pathetic?"

Perhaps because I am a sociologist, that e-mail got me to thinking about the measurement of value in academe. (I had contemplated the politics of self-promotion previously, when another untenured researcher had asked me to "like" his work on Facebook.)  Certainly the practice of measuring human value is not a new thing. Economists have long conflated wages with the measurement of human value, as writers from Karl Marx and Adam Smith to today’s neoliberals have clearly shown. (Smith was for conflation; Marx was against; the neoliberals don’t even know that such conflation can be challenged.)

When I was a kid in the 1950s, someone had calculated the worth of the chemicals in the human body -- $1.78.  I remember being surprised that a body was worth so little instead of being shocked that someone had even performed the calculation.  Today I’m not taken aback to learn on a website that someone has calculated “the lucrative uses for the roughly 130 pieces of body tissue that are extracted, sterilized, cut up, and put on the market” -- $80,000. As I age, I am becoming harder to shock. After all, there is a cadaver industry. At least three television dramas -- "Law and Order: Special Victims Unit," "The Closer" and "The Mentalist" -- have reminded me that evildoers will plot to obtain body parts and will kill to make their way up the list of people awaiting transplants.

I don’t think I am naïve.  I have heard discussions of impact factors before, mostly when people evaluate their colleagues’ scholarly contributions to decide whether they deserve promotion or tenure. Usually, the term refers to a metric that supposedly summarizes the worth of a journal, calculated by the number of citations per article that it has received either in other journals (as given by either the ISI Web of Knowledge or Scopus) or in books and journals (Google Scholar). There is some variation in how a journal scores, depending on which company is reporting. Scopus emphasizes science journals; ISI includes humanities and social science journals; Google Scholar adds books. The meaning of the metric is simple: the higher the score, the better the journal. It follows that the higher the impact factor of the journals in which a candidate publishes, the worthier the candidate. Thus, a candidate for tenure whose publications are all in journals with high impact factors is worthier than a candidate who publishes in lesser journals, all else being equal (though of course, it never is). I once heard a biologist praise a candidate for tenure, because he had published in a journal with an impact score of 4.5, which is quite good in most branches of biology and off the charts in the social sciences.
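
For concreteness, here is a minimal sketch in Python of how the most common version of the metric, the two-year impact factor, is computed: citations received in a given census year to items the journal published in the previous two years, divided by the number of citable items it published in those years. The function name and the journal figures below are hypothetical, for illustration only.

```python
# Sketch of the conventional two-year impact factor. All data are hypothetical.

def impact_factor(citations_by_year, articles_by_year, year):
    """Two-year impact factor of a journal for a given census year.

    citations_by_year[y] -- citations received in `year` to items published in y
    articles_by_year[y]  -- number of citable items the journal published in y
    """
    window = (year - 1, year - 2)
    cites = sum(citations_by_year.get(y, 0) for y in window)
    items = sum(articles_by_year.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical journal: 40 articles per year, cited 90 + 60 times in 2012.
print(impact_factor({2011: 90, 2010: 60}, {2011: 40, 2010: 40}, 2012))  # 1.875
```

Note that nothing in the calculation touches the content of any article; the score is a function of counts alone, which is why mutual-citation agreements like the one proposed in that e-mail can move it.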

Impact scores affect subfields. Just as the top molecular biology journals have higher scores than the top environmental biology journals, so too within any one discipline, some specialties score higher than others. The more people work in a subfield or a specialty within that subfield, the higher the potential impact factor. Last year when Gender & Society, the journal of Sociologists for Women in Society, had the fourth highest score of the 132 journals in sociology, the organization’s list-serv celebrated. Several people looked forward to telling the news to colleagues who had pooh-poohed the study of gender so that they could eat their words.

Impact scores also affect whole universities. Several years ago, top administrators at the University of Chile advised some professors to help improve the institution’s international ranking by publishing in “ISI journals.” (This is also an instruction to publish in English, since the Web of Knowledge is more likely to include English-language journals in their calculations than journals in other languages.) Already one of the top ten institutions of higher education in Latin America, this public university is locked in competition with the private Pontifical Catholic University of Chile.

And, of course, impact scores affect the journals being rated. Supposedly, given the choice of two journals that might accept her work, the canny professor will submit to the journal with the higher impact score. The more articles submitted, the more rejected, the better the articles published – or so the theory goes. Editors keep track of their journal’s score and publishers list the scores on their websites. Last year, like other members of one editorial board, I received a joyous e-mail announcing that journal’s impact factor and celebrating its relative achievement. By its fourth year, it had risen to the middle of the pack in its subfield.

To me, an agreement to cite one another’s work accepts the proposition that citations indicate the quality of an individual’s research. That theory receives concrete validation every time that the members of a promotion and tenure committee check how many citations a candidate has received.  I’ve seen cases where committee members were so wedded to the measure that they could not hear that the candidate had received few citations because he was in an emerging field and also could not accept that members of such fields don’t score so well on impact measures.  When enough people can attend a convention to discuss a supposedly nascent idea, Marshall McLuhan once said, that idea is no longer innovative.  McLuhan might well have been discussing the circulation and impact factor of journals.

I find it worrisome that all of these uses of impact factors may shape a field. I've heard tell that after preparing a self-evaluation for a quinquennial review of his department, one social science chair urged his colleagues to publish articles rather than books. Articles garner citations more quickly. If everyone published articles, he thought, the department would collect citations more quickly and so would zoom up the national rankings of the quality of departments in its field. The chair forgot to mention that in his discipline, journals tended to publish one kind of research and books, another.  Perhaps he didn’t realize that he was essentially telling his colleagues what sort of scholarship they ought to do.

Unhappily, as I think about all of this measurement, I am forced to examine my own practices. It's just too easy to audit oneself and to confuse the resulting number with some form of self-worth. When Google Scholar announced that intellectuals could have access to their citation count, as well as their scores on the h and i10 indices, I first Googled the indices. (I found, "an h-index is the largest number h such that h publications have at least h citations." An "i10-index is the number of publications with at least 10 citations.") Google Scholar was also good enough to tell me the scores of a newly promoted professor and of a potential Nobel laureate. Then I looked myself up. After several weeks I realized that by auditing my citation count and indices much as I might check my weight, I had commodified myself – my worth to both my department and my university -- every bit as much as the cadaver industry has calculated the worth of my body parts.
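
For readers who want to see just how mechanical these scores are, here is a minimal sketch in Python of the two definitions quoted above, computed from a plain list of per-publication citation counts. The function names and the counts are hypothetical, for illustration only.

```python
# Sketch of the h-index and i10-index as defined above. Counts are hypothetical.

def h_index(citations):
    """Largest h such that h publications have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

cites = [42, 18, 11, 9, 4, 3, 0]
print(h_index(cites))    # 4: four papers have at least 4 citations each
print(i10_index(cites))  # 3: three papers have at least 10 citations
```

A lifetime of scholarship reduces, on this view, to a sorted list of integers; the ease of the computation is part of what makes the self-audit so tempting.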

I like to tell myself that checking my citation count is only a symbolic exercise in commensuration. After all, no one knows the exact financial worth of each citation of each scholar working at each research university, let alone for scholars in my discipline and subfields. In contrast, the cadaver industry is dealing in concrete dollars and cents. I find the discrepancy between these calculations comforting. I advise myself: since it is only symbolic, my self-audit does not yet qualify as commodification. As Marx might have put it, I have not yet paid so much attention to my product (published research) that I have confused the value of the product with the dignity of the maker. I care about that; I’m discussing my dignity.

But then I think again. My self-audit of my own citation count expresses obeisance to the accountability regime that increasingly governs higher education. (An accountability regime is a politics of surveillance, control and market management that disguises itself as value-neutral and scientific administration.) Sure, the young scholar who had sent me that e-mail advocating mutual citation felt he was advancing his career and protecting himself from failure. But I, too, have been speeding the transformation of higher education from an institution that stresses ideas to one that emphasizes measurement and marketability.  I am ashamed to say that in this job market, I would feel hard-pressed to tell the young man to ignore his citations and just do his work. 

Gaye Tuchman, professor emerita of sociology at the University of Connecticut, is author of Wannabe U: Inside the Corporate University and Making News: A Study in the Construction of Reality.

Oxford Press will publish books that are controversial in India

Facing criticism from professors worldwide, publisher will re-issue books by scholar whose essay set off controversy in India.

Scholar continues to find flawed metadata in Google Books

Critic of Google Books' metadata finds many of the problems haven't been fixed.
