Most of my faculty colleagues agree that Writing Across the Curriculum (WAC), in which the task of teaching writing is assigned to all professors, not just those who teach English or composition, is an important academic concept. If we had a WAC playbook, it would sound something like this: students need to write clear, organized, persuasive prose, not only in the liberal arts, but in the sciences and professional disciplines as well. Conventional wisdom and practical experience tell us that students’ ability to secure jobs and advance in their careers depends, to a great extent, on their communication skills, including polished, professional writing.
Writing is thinking made manifest. If students cannot think clearly, they will not write well. So in this respect, writing is tangible evidence of critical thinking — or the lack of it — and is a helpful indicator of how students construct knowledge out of information.
The WAC playbook recognizes that writing can take many forms: research papers, journals, in-class papers, reports, reviews, reflections, summaries, essay exams, creative writing, business plans, letters, etc. It also affirms that writing is not separate from content in our courses, but can be used as a practical tool to apply and reinforce learning.
More controversial — and not in everyone’s playbook — is the idea that teaching writing skills cannot be delegated to a few courses, e.g., first-year composition courses, literature courses, and designated “W” (writing-intensive) courses. Many faculty agree with the proposition that writing should be embedded throughout the curriculum in order to broaden, deepen and reinforce writing skills, but many also take the “not in my back yard” approach to WAC.
We often hear the following refrains when faculty discuss students and writing. Together they compose a familiar song (sung as the blues):
1. “I’m not an English teacher; I can’t be expected to correct spelling and grammar.”
2. “I don’t have time in class to teach writing — I barely have enough time to teach content.”
3. “Why should students be penalized for bad writing if they get the correct answer?”
4. “Mine isn’t supposed to be a ‘W’ course, so I’ll leave the writing to others.”
5. “There is no way to work writing into the subject matter of my course.”
6. “They hate to read and write and won’t take the time to revise their work.”
7. “I don’t have a teaching assistant and don’t want to do a lot of extra correcting—I have enough to do.”
8. “Our students come to college with such poor writing skills that we can’t make up for years of bad writing.”
9. “They never make the corrections I suggest; I see the same mistakes over and over again, so why bother?”
10. “They’re seniors, and they still can’t write!”
Much has been written about WAC, and I add my voice to the multitudes because I recently came to a realization, watching my students texting before class began: students spend hours every day reading and practicing writing — bad writing. How many hours are spent sending and reading tweets, texts and other messages in fractured language? It made me wonder: is it even possible to swim against this unstoppable tide of bad writing? One of my colleagues argues that students cannot write well because they don’t read. I think that students do read, but what they spend their time reading is not helpful in learning how to write. (That, however, is a discussion for another day.)
I’m not sure that all students can be taught to improve their writing, but I am sure that it is one of the most important things we can attempt to teach. What difference does it make if students know their subject matter and have excellent ideas if no one can get past their sloppy and disorganized writing?
Let us consider (with annoying optimism) those sad faculty refrains.
“I’m not an English teacher; I can’t be expected to correct spelling and grammar.”
But we are college professors; we know more about writing than our students do. What you could do, if you don’t want to make corrections yourself or are stymied by the magnitude of a particular writing problem (where to begin?), is circle areas for revision and require the student to submit the work to the tutoring or writing center before a grade will be given. (You can even allow several opportunities for revision, depending on your tolerance for pain.) You can designate a certain number of points in your rubric to writing mechanics, letting students know that their grades will be affected by their writing; human nature being what it is, students pay more attention when they know they will be graded.
Most important, we can all emphasize that writing is important in our disciplines and that students will be judged in the workplace on the basis of their writing skills. We can all convey the message that polished prose matters to us and to professionals in our field — so much so that we are taking points off for sloppy work.
“I don’t have time in class to teach writing — I barely have enough time to teach content.”
Do you have time to assign minute papers at the beginning or end of each class, asking students to summarize three things they learned, or pose a question related to the day’s work, or answer one question based on the previous reading assignment? These papers are short and easily graded; they help students internalize and reinforce content. They each can be worth a few points, based on quality. If assigned on a regular or irregular basis (like a pop quiz), you may even get students to keep up with the reading and pay more attention in class. Minute papers encourage students to organize their thoughts; I discovered that students who could not speak coherently in class sometimes produced thoughtful short essays. Writing can be used in many ways to learn content and improve fluency and writing proficiency.
“Why should students be penalized for bad writing if they get the correct answer?”
Bcuz omg in the workplace they will be penalized for it. Ignoring student errors is like ignoring the piece of spinach in someone’s teeth; it may seem kind not to say anything, but no one really benefits. We can assign more writing in our courses, but if it is never graded, it may improve fluency but not accuracy — and confirm bad writing habits. Take a guess: over four years, what percentage of written assignments at your institution is graded for writing mechanics as well as content?
“Mine isn’t supposed to be a ‘W’ course, so I’ll leave the writing to others.”
Leaving WAC to others is like leaving voting to others. If WAC is viewed as an institutional playbook, it implies that everyone is part of the team and plays a position. All courses should be writing courses with a small w if not a big W; that is the only way to convey the message that what students learn in Composition 101 is relevant to success in their upper-level psychology course or business minor. Furthermore, since each discipline has its own rhetoric, it is particularly important for students to practice the specific types of writing they will be asked to produce in their careers. They will not be exposed to professional writing in their first-year seminars and English composition courses.
“There is no way to work writing into the subject matter of my course.”
Physicists, pathologists, geologists, mathematicians, dentists, lab technicians, engineers, architects, web designers, curators, forensic anthropologists and others have to explain things in writing; in an algebra course, for example, students could explain their reasoning on a given problem. No matter what the field, the ability to organize information in writing is a key professional asset, whether writing is used in a patient history, business contract or gallery brochure. We can invent ways to bring theory into practice by creating opportunities for students to write in the language of their careers.
“They hate to read and write and won’t take the time to revise their work.”
Yes, for many of our students, academic reading and writing seem to be unnatural acts. Some students, for example, seem much more themselves, much more authentic and engaged, on the soccer or football field.
One day in late autumn, on a perfect, still, golden afternoon, I stopped to watch the football team practice. The camaraderie, the sense of purpose, the sheer joy were poignant, as I pictured these young men paying mortgages and sitting in cubicles. Our job is to coach them safely into their futures, into different green pastures. Part of the playbook for that is to insist that they improve their writing skills so that their writing does not undercut their potential — even if they are not there yet, not fully ready to commit to academic work.
My other thought that afternoon was, can we make learning as engaging and authentic as sport? We each have to answer this question in our own way. In my law classes, for example, I ask students to write legal memorandums using the IRAC method: “You are a junior associate in the firm of Flake, Moss and Marbles, and your senior partner wants you to research and write a memo on the case of Madame X, who… .” The IRAC method not only structures the memo for students (they summarize the facts of the case, Identify the legal issues, cite the relevant Rules of law, Analyze the problem based on the facts and law, and draw a Conclusion on the likely outcome of the case), but allows them to role-play a real-world situation. They complete a series of these short writing exercises, with a rubric to guide them, and have several opportunities to revise their work.
For a formal or high-stakes writing assignment, scaffolding is essential; students will perform better when the structure of the writing assignment is broken down into components, which, when assembled, produce a coherent whole. The IRAC method has a built-in scaffold, but other writing assignments can be structured into a series of elements or steps. It is a mistake to assume that students know how to organize a paper or report; let them know what you are looking for, break down the structure into elements, and if you have a good sample of what you expect, hand it out. (Save your students’ work for this purpose.)
In my mediation class, students are asked to draft an agreement based on a mediation role-play they have participated in. The agreements follow a structured blueprint. They are peer-edited, revised by the student (with a writing tutor, if necessary) and then corrected by me. Students are given model agreements from past years and have three opportunities to revise their work prior to grading. Last semester, 18 of 19 students revised their work and received As on the agreements. The agreements were polished and professional and reinforced the content taught in the course.
I believe that we can devise meaningful and engaging ways for students to write in all courses; the challenge is to explain to students why they are doing it. Writing should be like driver’s ed in students’ minds — a practical skill that is essential to their future success. Without that connection, writing will seem more like juggling: nice if you can do it, but not an essential life skill.
“I don’t have a teaching assistant and don’t want to do a lot of extra correcting — I have enough to do.”
Most of us don’t have teaching assistants, but we do have students for peer editing, and writing or tutoring centers with support staff. Some degree programs have upper-class peer mentors who can help students with writing in the discipline. Consider ways to form a writing partnership, using the resources available to you. Personally, I prefer that students take responsibility for their revisions by seeking out support services. Somehow, it doesn’t seem kosher to make all these corrections, have students incorporate them into their next draft, and then grade my own language, saying “good word choice,” “nicely written,” or “well organized!” I like to circle areas for improvement, making general comments, not specific corrections.
“Our students come to college with such poor writing skills that we can’t make up for years of bad writing.”
Some students will make little progress in improving their writing, for a variety of reasons. But if we accept students into our institutions, we should provide opportunities for them to improve their writing skills, even if some students are the proverbial horses who won’t drink. If students practice and are graded on their writing in only a few courses, they learn: 1) that in most courses they can get a decent grade without decent writing, and 2) that writing is relevant only in a few contexts. If we insist that career preparation includes the process of writing and revision, and we all assign meaningful writing exercises that students can revise and improve, the rest is up to them.
“They never make the corrections I suggest; I see the same mistakes over and over again, so why bother?”
When students start losing points, they tend to sit up and take notice. I’ve found that many mistakes are careless ones — what I call a document dump, turning in a first draft with no proofreading. If you hand back a draft and deduct points for writing errors, you will see more effort to correct those mistakes. Why should students devote time to an ungraded exercise when they can spend their time on something that will affect their grades? If sloppy writing has no impact on their grades, it makes sense for students not to internalize your corrections or prioritize revisions.
“They’re seniors, and they still can’t write!”
If we can agree about the value of a WAC playbook, not just in theory but in our daily practice; find ways to weave writing into all of our courses, not as busywork but as a meaningful part of the content we teach; assess student writing and promote it as an essential career skill; and allow students to revise their work, since revision is the heart and soul of the writing process, we are less likely to encounter seniors who have not practiced or improved their writing skills over four years. Our playbook should read that all courses, from now on, are writing courses with a small w.
Ellen Goldberger is director of the Honor Scholars Program and teaches law, leadership and conflict resolution courses at Mount Ida College.
Bryan Cranston’s recitation of “Ozymandias” in last year’s memorable video clip for the final season of Breaking Bad may have elided some of the finer points of Shelley’s poem. But it did the job it was meant to do — evoking the swagger of a grandiose ego, as well as time’s shattering disregard for even the most awe-inspiring claim to fame, whether by an ancient emperor or a meth kingpin of the American Southwest.
But time has, in a way, been generous to the figure Shelley calls Ozymandias, who was not a purely fictional character, like Walter White, but rather the pharaoh Ramses II, also called User-maat-re Setep-en-re. (The poet knew of him through a less exact, albeit more euphonious, transcription of the name.) He ruled about one generation before the period that Eric H. Cline, a professor of classics and archeology at George Washington University, recounts in 1177 B.C.: The Year Civilization Collapsed (Princeton University Press).
Today the average person is reasonably likely to know that Ramses was the name of an Egyptian ruler. But very few people will have the faintest idea that anything of interest happened in 1177 B.C. It wasn’t one of the 5,000 “essential names, phrases, dates, and concepts” constituting the “shared knowledge of literate American culture” that E. D. Hirsch identified in his best-seller Cultural Literacy (1987), nor did it make it onto the revised edition Hirsch issued in 2002. Just over 3,000 years ago, a series of catastrophic events demolished whole cities, destroying the commercial and diplomatic connections among distinct societies that had linked up to form an emerging world order. It seems like this would come up in conversation from time to time. I suspect it may do so more often in the future.
So what happened in 1177 B.C.? Well, if the account attributed to Ramses III is reliable, that was the date of a final, juggernaut-like offensive by what he called the Sea Peoples. By then, skirmishes between Egypt and the seafaring barbarians had been under way, off and on, for some 30 years. But 1177 was the climactic year when, in the pharaoh’s words, “They laid their hands upon the lands as far as the circuit of the earth, their hearts confident….” The six tribes of Sea Peoples came from what Ramses vaguely calls “the islands.” Cline indicates that one group, the Peleset, are “generally accepted” by contemporary scholars “as the Philistines, who are identified in the Bible as coming from Crete.” The origins of the other five remain in question. Their rampage did not literally take the Sea Peoples around “the circuit of the earth,” but it was an ambitious military campaign by any standard.
They attacked cities throughout the Mediterranean, in places now called Syria, Turkey, and Lebanon, among others. About one metropolis, Ramses says, the Sea Peoples “desolated” the population, “and its land was like that which has never come into being.”
Cline reproduces an inscription that shows the Sea Peoples invading Egypt by boat. You need a magnifying glass to see the details, but the battle scene is astounding even without one. Imagine D-Day depicted exclusively with two-dimensional figures. The images are flat, but they swarm with such density that the effect is claustrophobic. It evokes a sense of terrifying chaos, of mayhem pressing in on all sides, so thick that nobody can push through it. Some interpretations of the battle scene, Cline notes, contend that it shows an Egyptian ambush of the would-be occupiers.
Given that the Egyptians ultimately prevailed over the Sea Peoples, it seems plausible: they would have had reason to record and celebrate such a maneuver. Ramses himself boasts of leading combat so effectively that the Sea Peoples who weren't killed or enslaved went home wishing they’d never even heard of Egypt: “When they pronounce my name in their land, then they are burned up.”
Other societies were not so fortunate. One of them, the Hittite empire, at its peak covered much of Turkey and Syria. (If the name seems mildly familiar, that may be because the Hittites, like the Philistines, make a number of appearances in the Bible.) One zone under Hittite control was the harbor city of Ugarit, a mercantile center for the entire region. You name it, Ugarit had it, or at least someone there could order it for you: linen garments, alabaster jars, wine, wheat, olive oil, anything in metal…. In exchange for paying tribute, a vassal city like Ugarit enjoyed the protection of the Hittite armed forces. Four hundred years before the Sea Peoples came on the scene, the king of the Hittites could march troops into Mesopotamia, burn down the city, then march them back home — a thousand miles each way — without bothering to occupy the country, “thus,” writes Cline, “effectively conducting the longest drive-by shooting in history.”
But by the early 12th century, Ugarit had fallen. Archeologists have found, in Cline’s words, “that the city was burned, with a destruction level reaching two meters high in some places.” Buried in the ruins are “a number of hoards … [that] contained precious gold and bronze items, including figurines, weapons and tools, some of them inscribed.” They “appear to have been hidden just before the destruction took place,” but “their owners never returned to retrieve them.” Nor was Ugarit ever rebuilt, which raises the distinct possibility that there were no survivors.
Other Hittite populations survived the ordeal but declined in power, wealth, and security. One of the maps in The Year Civilization Collapsed marks the cities around the Mediterranean that were destroyed during the early decades of the 12th century B.C. — about 40 of them in all.
The overview of what happened in 1177 B.C. that we’ve just taken is streamlined and dramatic — far too much so not to merit skepticism. It’s monocausal. The Sea Peoples storm the beaches, one city after another collapses, but Ramses III survives to tell the tale…. One value of making a serious study of history, as somebody once said, is that you learn how things don’t happen.
Exactly what did happen becomes a serious challenge to determine, after a millennium or three. Cline’s book is a detailed but accessible synthesis of the findings and hypotheses of researchers concerned with the societies that developed around the Mediterranean throughout the second millennium B.C., with a special focus on the late Bronze Age, which came to an end in the decades just before and after the high drama of 1177. The last 20 years or so have been an especially productive and exciting time in scholarship concerning that region and era, with important work being done in fields such as archeoseismology and Ugaritic studies. A number of landmark conferences have fostered exchanges across micro-specialist boundaries, and 1177 B.C.: The Year Civilization Collapsed offers students and the interested lay antiquarian a sense of the rich picture that is emerging from debates among the ruins.
Cline devotes more than half of the book to surveying the world that was lost in or around the year in his title — with particular emphasis on the exchanges of goods that brought the Egyptian and Hittite empires, and the Mycenaean civilization over in what we now call Greece, into closer contact. Whole libraries of official documents show the kings exchanging goods and pleasantries, calling each other “brother,” and marrying off their children to one another in the interest of diplomatic comity. When a ship conveying luxury items and correspondence from one sovereign to another pulled in to dock, it would also carry products for sale to people lower on the social scale. It then returned with whatever tokens of good will the second king was sending back to the first — and also, chances are, commercial goods from that king’s empire, for sale back home.
The author refers to this process as “globalization,” which seems a bit misleading given that the circuits of communication and exchange were regional, not worldwide. In any case, it had effects that can be traced in the layers of scattered archeological digs: commodities and artwork characteristic of one society catch on in another, and by the start of the 12th century a real cosmopolitanism is in effect. At the same time, the economic networks encouraged a market in foodstuffs as well as tin — the major precious resource of the day, something like petroleum became in the 20th century.
But evidence from the digs also shows two other developments during this period: a number of devastating earthquakes and droughts. Some of the cities that collapsed circa 1177 may have been destroyed by natural disaster, or so weakened that they succumbed far more quickly to the marauding Sea Peoples than they would have otherwise. For that matter, it is entirely possible that the Sea Peoples themselves were fleeing from such catastrophes. “In my opinion,” writes Cline, “… none of these individual factors would have been cataclysmic enough on their own to bring down even one of these civilizations, let alone all of them. However, they could have combined to produce a scenario in which the repercussions of each factor were magnified, in what some scholars have called a ‘multiplier effect.’ … The ensuing ‘systems collapse’ could have led to the disintegration of one society after another, in part because of the fragmentation of the global economy and the breakdown of the interconnections upon which each civilization was dependent.”
Referring to 1177 B.C. will, at present, only get you blank looks, most of the time. But given how the 21st century is shaping up, it may yet become a common reference point — and one of more than antiquarian relevance.
When I began my career as a faculty member many decades ago, I had the good fortune to find myself in an especially distinguished department at an especially eminent research university. It was the custom of this department to gather for a faculty luncheon once a week and then to proceed to a departmental seminar in which we heard either from a visiting colleague or one of our own members. In the discussion period following the talk, questions generally had more to do with the ongoing research of the interlocutor than with the research of the speaker. Since all members of the department tended to be engaged in consequential research, the overall quality of the discussion was high — although proceedings tended to take on a somewhat predictable, ritualized character.
To be sure, department members were sincerely interested not only in their own research, but also in the research of their colleagues, and would often engage in conversation on these matters. This was known as discussing one’s “work.” Teaching was not considered a part of such “work,” even though many members of the department were dedicated, effective teachers. Teaching was basically a private matter between a faculty member and his or her students. I had the distinct sense that it would not be to my professional advantage to engage in discussion about my teaching; indeed, I sensed that it might be the conversational equivalent of a burp.
Back in the 1950s, the sociologist Alvin Gouldner did some interesting work on the culture of faculty members and academic administrators at a liberal arts college. He was following up on Robert Merton’s general idea about the social significance of “latent,” as opposed to “manifest,” roles — that is, how roles not recognized explicitly, and not carrying official titles, might be of central importance in social life. In the academic context, manifest roles would include those of “dean,” “faculty member,” “student,” etc. The latent roles that Gouldner found especially important were those of “cosmopolitan” and “local”: roles that were not consciously recognized by overt labels, but which were consequential to the actual culture and social organization of the institution.
Cosmopolitans were those whose primary focus was their profession, as opposed to the institution where they were employed. Thus, a faculty member in this category would, for example, take a job at a more prestigious university that was stronger in his or her own field, even if it meant a lower salary. (Gouldner’s research was carried out at a time when it was apparently conceivable for a liberal arts college to offer a larger salary than a research university.) Locals, on the other hand, were loyal first and foremost to the institution; they were usually not productive as scholars. At the time of Gouldner’s study, administrators generally fell into the category of locals.
Much has changed since that time. There has been a general move toward cosmopolitanism on the part of administrators, who have developed professional associations of their own and are more likely to go from one institution to another. As for faculty, their world has seen a widening gap between elite cosmopolitans and indentured locals — adjuncts tied to low-paying jobs relatively close to home, not the kind of locals who have been given any reason to develop institutional loyalty.
A question, then, for faculty members today is how best to balance concern for their profession with concern for their institution. A likely way is to care seriously and deeply for one’s students — since they are, after all, a major part of one’s vocation, in addition to paying most of the bills. And this means taking a more intentional, sophisticated approach to teaching.
To be sure, different institutions have different missions. Research universities, in particular, are crucial to the advancement of knowledge and must thus concern themselves with leading-edge science and scholarship. Even here, however, not all graduate students are themselves headed for major research universities — far from it. Thus, graduate faculties in research universities are coming to feel responsible for preparing students for the future careers they will actually have. In part, this will mean exploring possibilities beyond the academy. It will also mean creating effective programs for preparing graduate students as teachers for a wide range of students.
The development of such programs has been a focus for the Teagle Foundation in recent years. This has involved supporting universities in their efforts to expose graduate students to what cognitive psychology has taught us about learning; to the pedagogical approaches and styles that have proven most effective; and to which forms of assessment are most relevant to the improvement of teaching. More generally, it means leading faculty to feel that they are not only a community of scholars, but also a community of teachers.
It has been suggested that the preparation of graduate students for teaching would be well-served if there were different faculty “tracks,” with some department members being primarily responsible for preparing researchers while others are primarily responsible for preparing teachers. While it is certainly true that not all members of a department have to make the same kind of contribution to the overall success of the program, formalizing such a separation between research and teaching would simply reinforce the caste system already in place — not to mention the fact that many distinguished researchers are also exceptional teachers and that student engagement in research is an important teaching strategy. So, while there might be some value in having a pedagogical specialist (or more) on the roster, it is not desirable to have a tracking system that segregates teaching from research.
Here, then, is the general goal: just as faculty members would never think of being unaware of what peers are doing in the same field of research, so they should feel a comparable impulse to be aware of what their colleagues are doing in their areas of teaching. And thus, the world of higher education can become even more a true community.
Judith Shapiro is president of the Teagle Foundation and former president of Barnard College.
A junior scholar had been waiting months for a response on an article she had submitted to a good journal. One day she happened to be visiting a colleague’s office, as the colleague was bemoaning being hassled by an editor, having missed the deadline to “review this damn paper.” The title was visible on the colleague’s computer screen. “But that’s my article!” the junior scholar cried. There followed a moment of rather awkward silence, followed by some nervous laughter. The colleague, shamefaced about his tardiness as a reviewer, hastily dispatched a friendly critique of the piece to the editor.
If the colleague hadn’t realized the article was written by someone he knew, he probably would have put it off even longer. In an ideal world, the review process would be impartial and prompt; in practice, it involves the actions of humans.
We’ve all received scathing reviews of our pieces by anonymous reviewers. (Or at least I have. Perhaps the gentle reader has only ever received fulsome praise for his or her scholarly efforts, and if that is you, possibly you should stop reading here.)
But for those academic mere mortals still reading, we all know the harsh review, which often contains unfair criticism. (Exhibit A: “The author of this article did not make reference to Smith’s groundbreaking research in the field” — never mind that Smith’s research has yet to be published, and there is no chance, none whatever, that the author of this significant piece is the person writing the review). Or the more usual reviewer disagreement: Referee 1 says the article has too much brown and not enough purple, Referee 2 says it has too much purple and not enough brown, and Referee 3 (to whom it has been sent to break this deadlock of opinion) says that this interesting article on feudal Japan doesn’t include enough about Richard Nixon. The ideal behind blind review places the reviewer as impartial Justice, but it is much easier to swing a sword than look at a scale when you’re blindfolded.
After a particularly blistering referee’s report (I find these best read with a bloody mary in hand; the reader’s experiences may vary), I’m sure I’m not the only one who has fantasized about kicking that referee’s shins at a conference. Of course I don’t know whom to kick. The distressing thing is there’s a good chance they know who I am.
Somewhere back in the mists of academic idealism, scholars' work remained unknown until they presented it for publication. But now that we all leave trails of our research all over the web, the idea behind "blind" review seems quite naive. Googling a title will often yield a conference program or a researcher's departmental website. How many academics are so pure in their approach that they would AVOID looking up the topic of the paper under review? After all, it may be relevant to catch up on other literature on the topic in order to situate your review of the article in question.
For those of us who work in broad areas, it's still the case that we will be asked to review (and be reviewed by) people completely unknown to us. Part of the theory behind blind review is to avoid the conflicts of refereeing the work of friends (or enemies). But those in small subfields can already guess pretty closely who wrote an article they are asked to review. How many of us wouldn't be kinder in a review of a piece we knew was written by a friend?
Which brings me to the issue of workshopping papers in public. I’ve heard people wonder whether doing so damages peer review. To which I would respond, no more than the Internet has damaged it already. With two articles of mine, I tried an experiment: posting my drafts on Google docs. I then posted links on Twitter and asked for anyone who was willing to comment.
(I realize that in STEM fields, posting paper drafts on arXiv and other repositories for comment is more common, but in the humanities we don't have this type of culture. We simply ask friends informally for comments.)
Getting colleagues from around the world to comment on my work made it stronger. And rather than my feeling guilty about buttonholing the same few overworked friends to look at an article draft, the generosity of my Twitter followers gave me a stream of volunteers. And they wrote constructive, useful things.
Some time ago, Daniel Lemire (a computer science professor at the Université du Québec) made the argument that blind review should be eliminated because work should be evaluated as part of a scholar’s broader career.
I’m not sure I agree with that, not least because I have my suspicions this already happens to the benefit of some Silverbacks, who manage to get pieces published that, had they landed on the editor’s desk as the work of an unknown Ph.D. student, would have been eighty-sixed in short order. However, I think it’s right to wonder how the current situation is actually operating (as opposed to how it “should”).
Lemire points to some interesting research suggesting that, rather than helping those outside the academy get published (which in theory it should, since supposedly the work itself is being judged rather than the author), blind review in fact works against them. Blind peer review is the standard by which we mark the quality and rigor of our scholarship. I do believe research needs impartial vetting, but I'm not sure the current system is the way to provide it.
[Wondering about what happened to my friend’s article, mentioned at the start? It was not accepted by the journal, as the other referee had written a much harsher assessment.]
Katrina Gulliver is a lecturer in history at the University of New South Wales. Her site is http://www.katrinagulliver.com and you can find her most of the time on Twitter @katrinagulliver.