Like many academics, I’ve almost never left school. The students who occupy my mental energy most are the worst, not the best; the interactions with colleagues I recall most immediately are the harshest; the department memos that curdle in my memory are those spiced with typos. Those stories, with the whiff of gossip, earn the most laughs from my colleague-friends.
But those stories aren’t typical. They represent academe like an inset map does a region — sharpening the view of a narrow area set aside from the larger context.
When people outside academe ask me what I do, I skip those stories. Most of my students have gifted me with their intelligence; my colleagues teach and support me. I don’t want to project a naïve, Panglossian optimism about my job or about the current state of higher education, but I try to be careful and honest when I explain my work in academe to those outside it. I know the stereotypes of academe and the real threats to its future: the longstanding and growing right-wing animus against universities as liberal hives that indoctrinate unquestioning drones; the dramatic rise in the cost of higher education; the shift from tenure-protected long-term faculty to underpaid, overworked short-term faculty. (That’s not an exhaustive list.) I sympathize with resistance against academe’s entrenched problems, but when writers misrepresent those problems, I boil over.
Which brings me to the essays of Rebecca Schuman, an education columnist for Slate. She has a Ph.D. in German studies and a forthcoming book on Kafka, Wittgenstein, and Modernism. Despite those impressive credentials, she left academe and the crumbling job prospects of German studies in 2013, so she’s well-positioned to comment on the real crises in higher education. She even professes, in one essay, “I’m a higher-ed expert.”
While I sometimes agree with her, I think she crafts fundamentally anti-academic arguments, anti-academic in that they rely heavily on unsourced and unsupported generality clothed in hyperbole. While she frames her essays with her expertise and experience, she presents a funhouse image of the academic world as the norm and recreates fabulist stereotypes of the ivory tower gone mad. Ultimately, her writing most often fails to offer substantive critique of academe’s problems and instead offers empty amusement that misleads readers about the world she claims to analyze with expertise.
Her July 15, 2014 essay, “Revise and Resubmit,” exemplifies her anti-academic methods. Most prominently in that essay, Schuman revels in psychologizing straw men, scarecrows who lack not brains but hearts: “Think of your meanest high school mean girl,” she writes, “at her most gleefully, underminingly vicious. Now give her a doctorate in your discipline, and a modicum of power over your future. That’s peer review.” The peer reviewer is a type with easily decipherable, easily dismissed motives.
Yes, Schuman’s being amusingly hyperbolic — I see Regina George and the Plastics of Mean Girls with their burn book, telling lesser academics “Stop trying to make ‘synecdochic heteronormativity’ happen. It’s not going to happen. It’s problematic.” If Schuman’s writing then moved to a more complex, realistic, or data-driven exploration of peer review, I might accept one hyperbolic stereotype — but the psychological profile of straw men predominates her argument. Consider the following:
1) “A thorough vetting of a new piece of scholarship is indeed crucial — but right now, rather than being constructively critical, far too many peer reviews are just cruel for no reason.”
2) “But some articles are just bad! whine the meanies, in a panic that they’ll lose their only consequence-free opportunity to express their professional misery. And yet, you can reject an article without stating, definitively, that its author has no business in the profession. I’ve done it. It’s not even the slightest bit difficult.”
3) “Sure, in an ideal situation, peer reviewers are collegial, constructive experts in the author’s sub-field with a legitimate interest in making an article better. But most situations are far from ideal: The ‘peer’ is actually a put-upon grad student or recent Ph.D., fresh off of seven years of being kicked in the gut and just aching to do some kicking of his own.”
4) “Some journal articles are accepted on their first pass. (I wrote one that was! Will you be the fourth person to read it?) Many more are rejected with extensive commentary, and more still receive a special place in academic purgatory: a directive to ‘revise and resubmit’ according to the referee’s requirements.”
5) “So many readers’ reports can be boiled down to: ‘Why wasn’t this article exactly the one I would have written?’ (Or: ‘Why wasn’t I cited enough?’) Thus, too often, academic peer review is gatekeeping for its own sake, whose chief result is to wound the author as deeply and existentially as possible.”
6) “What we need is an idea that takes advantage of, rather than battles against, academics’ petty self-interest. Luckily, the peers who review your work may think of themselves as above you in every way....”
Each example reinforces the stereotype of the peer reviewer as a petty, miserable, unreasonably cruel, self-interested meanie looking to exact metaphorical violence, a paper villain twirling his mustache. In Schuman’s telling, academics are kicked in the gut and, in turn, devastate with their own kicks. Yet somehow Schuman escaped that vicious cycle — it’s an easy stereotype to avoid, and lo and behold, she did!
Consider a different image of the peer reviewer. The job requirements of a tenure-track faculty member make reviewing academic submissions a distant consideration. If an article submitted for peer review seems poorly written or uninformed, it’s much easier to respond, “The writing is poor” or “the author doesn’t demonstrate a sufficient knowledge of the field” than to offer a much more exhaustive and exhausting response that constructively details the changes needed to develop the essay to publication strength. If a submission (say, 8,000 words) doesn’t demonstrate adequate scholarly work, how likely is it that a single peer reviewer can make suggestions that the writer will use to significantly improve the article? Sneering at the more generous psychological profile I’ve crafted here proves harder, but I think that profile — even limited by generality as it is — more accurately reflects the peer reviewer than Schuman’s central stereotype of a thuggish villain.
One might respond that stereotypes reflect broad truths; however, do Schuman’s stereotypes do so? Based on her evidence, I’d say not. Frustratingly, she combines anecdotes with unsourced assertions of the commonality of the anecdotes she provides, claiming data without support for those claims. No doubt there are some peer reviewers who fit her stereotypes. But that’s neither representative nor reliable, failing to clear even what seems to me a low evidentiary standard.
Let’s take the most visually and substantively prominent evidence in Schuman’s essay: five tweets. (The sample tweets are larger and framed in boxes.) Schuman chose these examples as her best representative evidence. Of those five, two come from the same person, and another begins “not mine, but a friend.” So we have a limited sample — four respondents — and a thirdhand anecdote. One tweet, a respondent’s summary interpretation of the reviewers’ comments, reads, “Reviewer A: Not enough of my work here. Reviewer B: Not enough of my views here. Reviewer C: What *I* would have said is.” If we assume that the anecdote is real, a generous assumption I wouldn’t grant an undergraduate essay, can we further assume that the respondent’s interpretations are accurate without any context or evidence? We shouldn’t, but Schuman does.
Another response reads, “my fav review: called my writing ‘jejune’ because I included a beyonce reference.” That respondent is a science writer, so I can imagine contexts in which a Beyoncé reference might seem jejune. Granted, using jejune condescends. Does it exemplify cruelty, though? Given Schuman’s emphasis on the cruelty of reviewers, I’d expect examples that are forcefully cruel rather than merely condescending.
Why can’t Twitter anecdotes demonstrate Schuman’s claim that peer review is so flawed (“broken,” as Slate’s banner has it) that it should be reformed? Considering the anxiety around academic publication, the problems with peer review may be disclosed only via quiet conversations and outlets like Twitter. First, however, consider two opposite sayings: “the plural of anecdote is data,” and “the plural of anecdote is not data.” Schuman clearly prefers the former, but I prefer the latter. Consider, too, that the respondents are a subset of Schuman’s Twitter followers (she has around 5,700 as of this writing), and I’d bet that the majority of responses would veer toward the negative — not necessarily because most experiences are negative in ways that reveal systemic fatal flaws, but because negative stories entertain readers more and are more likely to appear in an essay about peer review. To borrow and butcher Leo Tolstoy, happy peer-review stories are all alike; each unhappy peer-review story is unhappy in its own way.
Second, she fails to contextualize her evidence. Not only does she treat anecdote as data, she gives no sense of what those data represent. She seems not to have done basic research on the subject. My five-minute Google search found Bo-Christer Björk and David Solomon’s “The Publishing Delay in Scholarly Peer-Reviewed Journals,” which examines the timing from submission to publication in a random sample of 2,700 papers in 135 journals across academic fields, from chemistry to arts and letters to economics. The delays, as you might imagine, vary from field to field. That reveals a problem Schuman only obliquely addresses: What is an acceptable peer-review response time? She suggests three months, which sounds great to me, but on what does she base that guess? Why would three months be optimal but not four or six? And, given the many obligations of peer reviewers and the number of submissions, is three months universally realistic?
Other crucial missing context: Do academics agree that response times are too long, as she claims? Lutz Bornmann and Hans-Dieter Daniel’s case study of the response times for a single journal cites a 2008 survey of academics in which 38 percent of respondents “were dissatisfied with peer-review times.” That percentage certainly doesn’t suggest universal frustration with the process. Even more importantly, is that dissatisfaction of 38 percent of respondents a reliable measure for how long peer review should take? I’d argue maybe not — I sometimes experience a five-minute wait in a grocery line as interminable and other times as short, depending on a number of factors: my mood, the crowd, how my other trips to the store have gone. And tenure-clock anxiety likely heightens the sense of a wait as interminable.
For what it’s worth, I also loathe the submission process, but not for the reasons she lists. I find submitting writing fundamentally frustrating because of elements inherent to that process. Finishing a draft submission offers little reward, because in this case it means “reaching a relatively acceptable but uncertain stopping point that seems to be the fullest, most complete effort I can give”; sending the work out into the world feels small because I have little or no concrete sense of what I’m competing against. Then, as soon as I’ve submitted, I realize an error or errors, major or minor, that I can’t correct for that submission. The frustration of waiting sets in intensely, and irrationally early, in the process. No alteration to peer review could ease these anxieties and the ways they distort my view of the process.
In addition to her limited anecdotes and unquestioned assumptions, she asserts vague commonalities without evidence. The following phrases from her essay reinforce Schuman’s air of mathematical certainty while undermining it: “rather than being constructively critical, far too many peer reviews are just cruel for no reason”; “most [peer-review] situations are far from ideal”; “thus, articles regularly, I’m talking almost always, languish untouched on the referee’s desk for six, 12, even 18 months” (in that quote, the italics are Schuman’s; and which is almost always — six, 12, or 18 months?); “Many more [submissions] are rejected with extensive commentary, and more still receive a special place in academic purgatory: a directive to ‘revise and resubmit’”; “Some such suggestions are even helpful — but all too often, they’re hidden amidst the venting of petty vendettas and pettier agendas.” Far too many, most, regularly, almost always, many more, more still, too often, all too often: Schuman’s emphasis on frequency is empty hyperbole, a tactic she slathers over her writing.
These anti-academic argumentative strategies demonstrate the central argumentative sin of “Revise and Resubmit” and much of Schuman’s other writing: generality. The claims of commonality without reference or specific numbers, the reliance on straw-man stereotypes, and, potentially most troubling of all, her use of “peer review” itself: these are all frustratingly general. For “peer review” isn’t a single thing. The process and timing vary from one discipline to another as well as within disciplines, depending on the focus of the journal. Anonymous review isn’t universal; double-blind reviews are more common in the humanities than in the hard sciences. But Schuman neglects these and other basic details. Worse still, she makes those claims with both the air and the position of authority for Slate’s readership, of whom likely only a small portion have a working knowledge of the peer-review process. Remember, Schuman tells us, “I’m a higher-ed expert.”
Schuman seems to have envisioned her essay as a shot across the bow of the peer-review system. On her personal website, Schuman wrote that her essay “was excoriated — my biggest WTF moment of my career, since in private every academic in the world loathes peer review.” She assumes her opinion is universally shared, and she doesn’t ask why readers resisted. I’d guess they did so in part because of her method, and also in part, ironically, because of her tone: protesting the behavior of academics, she treats them as childish, cruel meanies, driven solely by “petty self-interest.”
For the broader academic world, the problems in her writing matter because they resemble, in their method, the attacks on academe from outside. To take one example, on a recent edition of Fox News Sunday, George Will dismissed as invented the statistic that 97 percent of climate scientists agree global warming is real, a consensus drawn from thousands of peer-reviewed articles and interviews with the authors of those articles: “They pluck these things from the ether,” he said. And Will commands a large audience and an air of authority: in addition to his appearances on America’s televised Pravda, he writes for The Washington Post. His “plucked from the ether” lie (or, to be charitable, ignorance) has spread to conservative websites that deny climate change. Peer review may be flawed, but in a world where news outlets pose academics against uncredentialed bloggers in three-minute debates about global warming, we need to convey the purpose and value of peer review, even if we want to change it.
I don’t want Schuman to stop writing. She could represent a useful, important resistance to the limits of the academic world that must come from within the university because a reflexively defensive institution is more likely to stifle than encourage meaningful change. But her shoddy method on peer review isn’t anomalous. Recently, she inveighed on Slate against too-long course syllabuses. The subhead, oddly enough, declares that syllabuses reach 25 pages, but Schuman’s longest example is a 20-page syllabus. The 20-pager she cites (it’s mentioned, but not presented, in a Twitter response) seems like an outlier, yet she claims it’s common. Schuman later tells us, as usual without citation of any evidence, that “the average length of my academic friends’ syllabi is 15 pages.” That kind of assertion reminds me of a first-year student I taught who rejected the idea that public schools were still segregated because his school was diverse: only 75 percent white, by his estimate.
As with “Revise and Resubmit,” I’d move on from her hyperbole, except for her turn to why syllabus length matters: “Syllabus bloat is more than an annoyance. It’s a textual artifact of the decline and fall of American higher education. Once the symbolic one-page tickets for epistemic trips filled with wonder and possibility, course syllabi are now but exhaustive legal contracts that seek to cover every possible eventuality, from fourth instances of plagiarism to tornadoes.” In addition to the unsourced generalization familiar from her writing on peer review — syllabi were once this, but now they are this instead — skewed nostalgia radiates from her writing: when she was a student, things were great, but now they’re awful. Syllabi were once “symbolic one-page tickets for epistemic trips filled with wonder and possibility”; given her language in that phrase, I imagine poor Charlie Bucket gifted a trip into the fantastical, mythical education chocolate factory crafted in his head by his grandfather’s stories. Of course, the golden age has faded, as golden ages always seem to; course syllabi “are now but exhaustive legal contracts that seek to cover every possible eventuality.” The wonder has died; all syllabi are dull and dry, and students abhor them all.
Elsewhere, in an essay supporting President Obama’s call for a universal rating standard for universities and colleges in the United States, Schuman writes, “But here’s why I still hope Obama’s plan will work. The very fact that the ratings’ most vocal detractors are college presidents — who often rake in millions while their students crumble under debt — should tell us Obama is onto something.”
Schuman doesn’t defend the specifics of Obama’s plan — she even lays out several of its major flaws — but she writes, “I say that a system that currently survives on nearly three-quarters contingent faculty labor has more than earned a sledgehammer — or at least a thorough audit from the body that’s providing a healthy percentage of its revenue.” The latter sentence might be convincing if it didn’t appear in the same essay that acknowledges “It dismays me to see that in a federal initiative to reform higher education, there is no mention whatsoever of the labor crisis that all but defines the 21st-century American university: The ever-growing dependence on barely-paid part-time adjuncts, which makes skyrocketing tuition particularly unconscionable.”
In other words, universities need accountability regarding their employment practices. Here’s a plan that doesn’t address that. Schuman supports that plan!
I appreciate Schuman’s efforts to avoid a jargon-cloaked academic tone for a snarkier critique of academia’s entrenched, encrusted bad habits as well as disturbing trends that, in my opinion, threaten higher education. But if she wants to change academe in meaningful, constructive ways, she has to engage it, just as she asks peer reviewers to do with the submissions they receive. Instead of wielding the funhouse mirror for Slate, she can reflect a more accurate image to show us our flaws. Of course, any organization as broad, complex, diverse, and hierarchical as academia could be represented with many actual images, some of which may not overlap.
The adjunct’s academe isn’t the tenured professor’s, nor is the dean’s, the undergraduate’s, the parent’s, or the custodian’s. George Will’s academia and Rebecca Schuman’s treat stereotype and the limited individual perspective as universal truth. But when one political party wants to toss higher education into the trash, and Schuman wants to wreck it with a sledgehammer, we have an obligation to know and understand academia before it’s turned to rubble.
Charles Green teaches writing as a lecturer at Cornell University, where he asks students to do bicep curls on the first day so they can lift his syllabus.
Student retention has been in the news a lot lately, but for a long time, no one at U of All People took it too seriously, since we’ve always had the same 20 percent rate of graduation within 20 years. To supplement our data, we also rely on anecdotal evidence, such as Professor Daissa Frogg’s looking around his biology lab in 2005 and exclaiming, “Where is everybody?” As it turned out, Professor Frogg had simply got the time wrong, and most of the students were at lunch.
But recently our rates have plummeted to below 10 percent, teasing at the edges of our institutional consciousness like a Zen koan: What is the sound of a school with no students? Or, as the bursar, Shaumida Munnie, put it, “What’s a school that brings in zero tuition dollars?”
A hastily set-up committee, SSF (Stop Student Flight), came up with these findings: Students leave in droves during the summer, despite the current 24/7/12 system, under which no time slot or class space goes unfilled. But students also leave for reasons of bad grades (below a B+), drug and alcohol abuse (or insufficient quantities), and lack of financial support (in fact, we count on student dollars to support us). Also: apathy, irritation with overlong lectures, and the conviction that they could be spending their time more profitably flipping burgers at McDonald’s.
Accordingly, the SSF has met at least twice and come up with some measures that should make U of All People the only campus in the U.S., beyond a maximum-security prison, able to boast a 100 percent retention rate, if you define terms like “100,” “percent,” “retention,” and “rate” rather loosely. Here are some of the proposals:
Prescription parties, offering Abilify to Zoloft. The first dose is free, after which the drugs are distributed on an ascending scale of payment, though the cost may be waived if the student maintains a G.P.A. higher than 3.0.
Resident advisers recruited from the ranks of bar mitzvah motivators, enriching dorm life with games, loud music, and cheap party favors. Motivators will also encourage lollapalooza study sessions and romantic all-nighters.
Financial incentives. Since we can’t put everyone on scholarship, we propose to reward students who complete a minimum of 500 credit hours. Since the minimum number of hours required for graduation is 126, it’s mainly the thought that counts.
A grade-adjustment system, for any grades that students aren’t happy with. Students must fill out a form in which they explain why an A from U of All People means the world to them.
Ten-foot-high fences surrounding the campus, topped with concertina wire, and a full check of all delivery trucks going in and out.
In addition to these five programs, set to go into effect this fall, here is a set of additional ideas that, in the words of SSF chair Jess Kidden, “haven’t quite gelled yet”:
Peer pressure, including a campaign to “Sign the ‘Don’t drop out!’ pledge.” Posters, prizes.
Mandatory, undeletable phone app that buzzes maddeningly whenever the phone is away from campus for more than a week.
Free lunch every Monday, the cost built into every student’s activity fees.
Perfect-attendance certificates, suitable for framing or posting on Instagram (with special certificate filter).
Nightly head-count in the dorms.
Distribution of “We ♡ Our Students” T-shirts to faculty.
Note: The SSF did include a student representative on the committee, but by the second time the committee met, she had already withdrawn from school.
David Galef directs the creative writing program at Montclair State University. His latest book is the short story collection My Date With Neanderthal Woman (Dzanc Books).
Responsible academics have long attempted to discredit the positivistic data generated by IQ tests, variously demonstrating that such instruments favor certain socioeconomic groups under the guise of objectivity, reduce the many types of intelligence into a single rating, and imply a stable position for qualities that are far more variable, even volatile. The resulting bell curves, some scholars have demonstrated, may function as handcuffs for groups that don’t tend to do well. Yet analogs to the oversimplified and unyielding judgments of ability generated by those IQ tests are alive and well in the academy itself today. Too often, in situations ranging from a tenure decision to our expressed or internalized responses to a student paper, we impose firm and final rankings on academic aptitude rather than making a nuanced or provisional evaluation.
Can we generalize about situations ranging from marking a sophomore’s paper in the privacy of one’s office to participating in a meeting on a tenure decision? Clearly, the issues, stakes, and political implications may differ. The recurrence of certain problems and practices in situations across that spectrum, however, permits — even encourages — certain broad generalizations. At the same time, since some of these issues are field-specific, I am addressing the humanities, and particularly my own discipline, literary and cultural studies. And since the issue of how racial and gendered prejudices can contaminate judgments on intellect has been discussed extensively elsewhere, this essay devotes comparatively less attention to those issues.
Obviously, many types of judgment are necessary and valuable in such fields and in our universities as a whole; I have repeatedly — though by no means invariably — been impressed with the dedication, expertise, and care colleagues have brought to these responsibilities. And I am not now nor have I ever been a member of the parties opposing tenure, not least because I do not think that move would resolve the disgraceful reliance on adjuncts. But we need to acknowledge and negotiate the problems attending the way we evaluate academic ability.
One such problem is premature judgment. For example, deciding on the basis of a single paper that someone is not likely to be a good student throughout the semester or throughout his or her career is problematic for many reasons. In general the teacher should try to suspend that judgment, or, if it must be made, both bracket it with caveats and gradually buttress or modify it with additional evidence. As the literary historian Avrom Fleishman effectively argues in The Condition of English: Literary Studies in a Changing Culture, evaluations that may be appropriate for a particular example of or even a body of work all too often slide into more definitive overall judgments on the person creating it. Often a firm evaluation of the quality of the work at hand may well be entirely sound; a prognostication of future work, feasible though risky; and a judgment on immutable qualities of mind, deleterious.
The issue Fleishman identifies is especially risky when judgments are made on whether something or someone is “smart.” As Jeffrey Williams persuasively demonstrated in the minnesota review, the replacement of “solid” with “smart” as a term of praise marks an increasing delight in the startling or counterintuitive argument. The ability to generate such points in a single piece of work may indeed demonstrate the intelligence of its author from some perspectives. But again, doing so leaves open the question of whether those abilities will be sustained and whether they are adequate predictions in themselves of strong scholarship or criticism.
Moreover, should one privilege one version of intelligence over others? The emphasis on multiple types of intelligence in the work of the cognitive scientist Howard Gardner is an important caveat to making judgments of intellectual ability.
I vividly remember that after one of my early IQ tests I heard that I had puzzled teachers because I had done very well elsewhere but missed an apparently simple question. I still remember struggling with it: given a picture of a doll and gloves in three different sizes, we were asked in so many words which gloves would fit “this little doll.” I knew that one set of gloves looked right for the doll, but hearing the word “little” made me erroneously decide that the gloves that were best described as “little” were the correct answer. This mistake prefigured both the unusual verbal skills and indifferent visual and spatial abilities that have characterized my cognitive performances to this day — but since it was simply counted as an error, it also demonstrates the problems of measuring intelligence as a monolithic category.
Problems in the concept of “smart” as well as in other criteria for professional judgments are crystallized by the lecture-style presentation that is so important in hiring at many institutions. What are we measuring, and how effectively? Teaching abilities, some would assert. But such presentations at best reveal only a few of the many skills involved in effective teaching and in fact often serve as an excuse for not assessing other skills, especially at the sort of institution that gives only lip service to the importance of undergraduate education. Are we judging research through these presentations? Yes, and up to a point fair enough. But we risk devoting undue weight to impressions generated by job talks: a careful and protracted assessment of written material is typically both more time consuming (sometimes unfeasibly so) and more valuable.
Yet even faculty members who have reviewed that material sometimes allow their prior judgments on it to be subsumed or virtually forgotten, giving undue weight to the lecture that should instead be evaluated in close conjunction with earlier reading. What all that suggests is that often we are above all judging perceived smartness — or the performance of it — through job talks, and even judging if the candidate displays (flaunts?) precisely the putative markers of smartness we have ourselves, or to which we may aspire. The Q&A, itself unduly weighted in many decisions, also reflects performance and polish — and at its worst invites judgments based on whether one approves of the answer to one’s own question.
Even if one does decide that smartness in its customary senses of rapidly producing a startling insight is the sine qua non for and best measure of academic ability, or if one assigns that role to other dimensions of intelligence, we certainly risk not measuring them accurately, whether in job talks or many other situations. As noted above, the academy has recognized although not invariably curtailed the impact of racial, ethnic, and gendered stereotypes on judgments of academic ability, but many other prejudices may come into play as well. One of the top graduate students I ever taught told me that she had worked sedulously to discard her Southern accent, correctly perceiving that listeners in other regions might be less likely to take her seriously.
For all the consciousness of class and social status in literary and cultural criticism, in our own personnel decisions we too often interpret as signs of mental prowess mannerisms and behaviors that may well result instead from upper-middle-class breeding. Both verbal facility and refined social assurance, frequently though of course not invariably encouraged more in families from the more elite socioeconomic groups, may convey an impression of smartness. (Notice that “smart” is the very term used for elegant clothing.)
More broadly, some members of the profession will be less likely to identify intelligence in someone with an unpolished social manner — though on the other hand others are more likely to expect smartness there. (Another race in which I have a horse, though one emphatically not ready to be put out to pasture: aren’t colleagues more likely to describe people their own age, rather than significantly older, through these and related positive epithets?) As these instances suggest, both judgments on “smartness” as well as other monolithic overall evaluations may screen other, less savory evaluations, whether or not the person making them is aware of that.
Moreover, as the attacks on IQ tests also revealed, intelligence is far from the “ever-fixèd mark” that Shakespeare associates with love in one of his sonnets (116.5). Pressures of all types may temporarily block its components, notably memory; shortly after my father’s unexpected death, I repeatedly had trouble remembering the number for my ATM card, which I readily recalled before and after that event. People in the humanities may well grow and develop in many ways, not only at the stages of their undergraduate and graduate work but often considerably later in their careers. Often switching to a more congenial specialty or critical methodology produces such growth; its predecessor, less compatible with the interests and abilities of the person in question, may well have been encouraged or even dictated by a mentor or the perceived direction of the field. For such reasons, many people who composed an indifferent first or even second book do much better work later on; those who evaluate them throughout their careers on the basis of their early work, followed by a cursory familiarity with later writing or none at all, risk making unfair judgments.
Even if we do calibrate our scales to arrive at more accurate measures for academic aptitude and abilities, those categories may downplay one characteristic necessary for success: the drive that encourages intense and sustained work. Indeed, certain conceptions of intelligence dismiss that type of work as plodding, instead celebrating explicitly or implicitly a concept related to the Renaissance belief in sprezzatura: according to this model, the truly gifted will, as it were, rapidly and effortlessly turn out impressive academic work with their left hand, the right hand perhaps holding a crystal glass of, say, Meursault or another premier French burgundy (reminding us again of the implicit role of class in some judgments). But in fact, as anyone who has followed the careers of graduate students over the years knows, the difference between a strong career and a disappointed and disappointing one typically involves not only talent and a sadly and increasingly large component of sheer luck but also that capacity for sustained effort. The recently publicized work by Angela Duckworth, a psychologist at the University of Pennsylvania, has demonstrated the effectiveness of what she terms "grit," a conclusion that may serve variously to reinforce and to temper judgments made on other grounds.
The prices paid for the mistakes chronicled above are all too evident. Even if the teacher attempts to be tactful, both undergraduate and graduate students sense such judgments; whether or not their perceptions are completely correct, thinking one has been classified as second-rate can too readily become a self-fulfilling prophecy. Above all, when the pie is as small as it is in the academy today, we must work to distribute it as fairly and judiciously as possible.
How, then, can we avoid such errors, given that academic judgments are so often necessary and even desirable? We need to remain vigilant about the likelihood of mistakes, remembering, for example, that much as opponents of straw votes point out that such votes tend to solidify what should be tentative positions, the same danger shadows preliminary judgments on a student or colleague. We need to examine why we ourselves may be tempted into deceived and deceiving judgments. In particular, might we find it hard to challenge standards and procedures of judgment that have aided our own professional advancement?
Heather Dubrow is the John D. Boyd SJ Chair in the Poetic Imagination at Fordham University and taught previously at several other institutions. Among her publications are six single-authored monographs, a co-edited collection of essays, an edition of As You Like It, and a volume of her own poetry.