“There are two modes of establishing our reputation: to be praised by honest men, and to be abused by rogues. It is best, however, to secure the former, because it will invariably be accompanied by the latter.”
-- Charles Caleb Colton, Anglican clergyman (1780-1832)
One deleted e-mail marked the beginning of my ordeal. It was finals week, just before Christmas break, when I received a strange message asking me to comment on some kind of online political essay that I had supposedly written. Since I’m not a blogger and make it a point to avoid the many rancorous political forums on the Internet, I immediately dismissed it as spam and hit delete.
But the notes kept coming, increasing in their fervor and frequency, until I could no longer deny it: I was receiving “fan mail.” Some writers called me courageous. Others hailed me as a visionary. A few suggested that I was predestined to play a pivotal role in the apocalyptic events foretold in the Book of Revelation. (Seriously.) Now, over the past 12 years I have published a scholarly book and eight journal articles on various historical topics, but I have to admit that through it all I never even attracted one groupie. So with my curiosity very much piqued, I began an online quest in search of the mysterious article.
I suppose it was inevitable that I was not going to like what I found. There, prominently displayed on a rather extreme Web site, was an essay (information about it can be found here) that likened President Obama to ... Adolf Hitler. Underneath the title was the inscription “by Tim Wood.”
To say I was not pleased would be a colossal understatement. However, even though my parents always told me I was special, a quick Internet search will reveal that I am not, in fact, the world’s only Tim Wood. So I ignored the article -- at least until one of the versions of the essay being forwarded via e-mail mutated into a form which included the rather unambiguous phrase “Professor of History, Southwest Baptist University.” The writer of this message also helpfully appended my office phone number and e-mail address.
Stunned, I struggled to regain my bearings and tried to grasp the full implications of this professional identity theft. Beyond the fact that the comparison is utterly ridiculous (anyone who believes that truly has no understanding of the depths of evil plumbed by the Nazi regime), it was now personal. Who had the right to speak for me like that? How dare they hide behind my name! What if my colleagues -- or my friends and family -- read this and believed it?
But the most pressing question seemed to be what kind of damage control would be necessary in order to prevent this from irreparably damaging my career. And that, in turn, led me to begin reflecting on how scholars will need to safeguard their professional reputations in the 21st century. Although I would never wish this kind of ordeal on anybody, the realist inside me fears that I will not be the last professor to fall victim to digital dishonesty. As academics, we must be aware that our professional reputations are transmitted through the technology of a bygone era, and even then are typically shrouded in secrecy or obscurity. Mentors, colleagues, and administrators exchange sealed and confidential references printed out on university letterhead. Editors, referees, and reviewers validate our scholarly work by allowing us access to or giving us coverage in their publications, but the results of that process all too often lie buried in library stacks and academic databases. In the meantime, the malicious or misinformed denizens of the Web have had time to hit the “forward” button about a million times.
So what lessons have I learned through this ordeal? First of all, be proactive. Once these rumors hit a certain critical mass, ignoring them will not make them go away. Indeed, a situation like this becomes the ultimate test of one’s personal credibility in the workplace. Immediately after I discovered that my specific identity had become attached to that particular article, I treated myself to a tour of the university’s administration building. Everybody from my department chair, to my dean, to the provost, to the directors of human resources, information technology, and university relations heard my side of the story within 48 hours. In my case, I was fortunate enough to have retained the confidence and support of my administration. There is no substitute for goodwill.
Secondly, I tried to remain positive and to find the teaching moment hidden within all of this. I posted an item on the university’s faculty Web page that served both as a public disclaimer and an opportunity to emphasize to students (and anybody else who might read it) why it is that faculty constantly warn against an uncritical acceptance of materials found on the Internet. I reminded my readers that in history, scholars are trained to constantly analyze their sources. Historians must always be aware that the documents they are working with may contain errors, lies, omissions, or distortions, or may even turn out to be wholesale forgeries. To navigate those potential pitfalls, scholars check facts and look for other documents that confirm (or contradict) the information found in their sources. We seek to identify the author and understand his or her motives for writing. We try to understand the larger historical and cultural context surrounding a document. By doing our homework, we are better able to judge when people deserve to be “taken at their word.”
This episode has also taught me a tough lesson in maintaining a professional demeanor, even in the face of outrageous provocations. Although the majority of people who wrote to inquire about the article were gracious, and many even apologized for the mistake, enough of my correspondents were belligerent and rude to make me dread opening my inbox every morning. Even after learning I was not the author, many readers clearly still expected me to lend my professional credibility to the essay, vouching for its accuracy and validating its interpretations. After reading my denial (where I explicitly refused to endorse the article’s contents), many supporters of the piece became abusive, writing back to attack the depth of my patriotism, the sincerity of my religious faith, and the integrity of the academic community in the United States in general.
Critics of the essay were not above lashing out either -- even in the absence of evidence. One disgruntled detractor wrote to inform me that my brand of “voodoo” and “fear-mongering” would soon be vanishing into irrelevancy, heralding the advent of a new Age of Reason. (Hopefully that individual’s definition of reason will eventually grow to include a commitment to basic research and fact-checking and an unwillingness to take forwarded e-mails at face value.) In the meantime, along with the angry rants, there came from the fever swamps of political paranoia long-discredited conspiracy theories, urging me to consider that the course of history was being determined by Jewish bankers, or the Jesuits, or the Illuminati, or even flesh-eating space aliens. Frequently at those junctures, I felt the temptation to fire back with a “spirited” and “colorful” rebuttal. However, I resisted for many reasons: because I am ultimately a firm believer in civility in public debate, because I did not want to embarrass the colleagues and administrators who had stood by me through this, and because arguing with people who have already made up their minds and have come to demonize those who disagree is almost always an exercise in futility.
Moreover, this incident has led me to reconsider my somewhat adversarial relationship with technology. (I’m the guy who still refuses to buy a cell phone.) But one of the greatest difficulties I encountered in all of this was finding a platform from which to launch a rebuttal. Although I did write personal replies to many of the people who wrote me inquiring about the article, it seemed clear that such a strategy alone was like battling a plague of locusts with a flyswatter. Instead, Internet rumors are best refuted by channeling people toward some definitive, universally available, online point-of-reference (a Web address, for instance) that exposes the lie. In my case, the university was kind enough to grant me access to a page on its Web site, and I quickly began disseminating the link to my posting. However, that solution may not be available to everyone who falls victim to this kind of a hoax, and I am beginning to believe this issue is far too important for faculty to leave to others anyway. A year ago, I would have considered the creation of an “official Tim Wood Web site” to be pretentious in the extreme. Today, I’m not so sure. Like it or not, faculty are public figures, and if we do not take the initiative to define ourselves in ways that are accessible and relevant to those outside the academy, we risk being defined by others in ways that suit their agenda, not ours.
Finally, confronting this situation has led me to take a fresh look at the qualities that make a good historian. In 1964 Richard Hofstadter, an influential scholar of American politics, wrote an article for Harper’s Magazine entitled “The Paranoid Style in American Politics.” In this passage, he describes a paranoia all too familiar in today’s political discourse:
As a member of the avant-garde who is capable of perceiving the conspiracy before it is fully obvious to an as yet unaroused public, the paranoid is a militant leader. He does not see social conflict as something to be mediated and compromised.... Since what is at stake is always a conflict between absolute good and absolute evil, what is necessary is not compromise but the willingness to fight things out to a finish. Since the enemy is thought of as being totally evil and totally unappeasable, he must be totally eliminated -- if not from the world, at least from the theatre of operations to which the paranoid directs his attention.
As author Dick Meyer pointed out in a 2005 CBS News article, this mentality has come to transcend political labels:
The great dynamic is that so many people ... are convinced that a malevolent opponent wants to destroy their very way of life and has the power to do so. Evangelical Christians may believe that gay marriage, abortion rights, promiscuous and violent popular culture, and gun control are all part of a plot to destroy their community of values. Urban, secular liberals may believe that presidential God-talk, anti-abortion legislators and judges, intrusive Homeland Security programs, and imperialist wars are part of a sinister cabal to quash their very way of life.
Historians often find themselves compared to storytellers, and are lauded for their ability to present compelling interpretations of the past and to craft powerful narratives. But perhaps equally important is our role as listeners. In an increasingly divided society, consensus will never be achieved by shouting (or e-mailing) until we drown out all competing voices. Instead, the first steps toward reconciliation are taken by those who seek to understand all aspects of the question and who try to remain mindful of the needs of others.
In any case, my battle continues. Monday I will go to work, try to sort through all the chaos, and do my best to help folks figure out the truth. (Which is probably pretty close to what I did before my identity was stolen, come to think of it....) And I will continue to contemplate the ways in which this experience will change the way I present myself as a professor and a historian. In the meantime, if any of you encounter any online rantings and ravings that claim to be by me, do not necessarily believe them. Things are not always what they seem.
Timothy L. Wood
Timothy L. Wood is an assistant professor of history at Southwest Baptist University in Bolivar, Missouri. He is the author of Agents of Wrath, Sowers of Discord: Authority and Dissent in Puritan Massachusetts, 1630-1655 (Routledge).
It turns out that academics, at least in lore insulated from the pitches and rolls of the private sector, are not immune to the effects of recession. Even a cosseted tenured professor like me, creature of the humanities, unable to tell a bull from a bear, a credit default swap from a collateralized debt obligation, realizes that times are tight and that the economic downturn has changed everyone’s world. But as scary as words like “furlough” and “restructuring” are, it is the silence that unnerves me the most.
It has been months since somebody told me that “a university must run like a business.”
I’m alarmed to think that the era of the Business Simile is over.
I think I speak for many liberal arts types when I say how scary it is to lose that surety, that hard mooring in the results-oriented world, that comforting discipline of being told from across the conference-room table that the market imperatives must be paid heed, that we in the academy merely deliver a product to our clients, and that the efficiencies of the private sector can and must be brought to bear on the out-of-touch ivory tower. See, I liked that. There was a bracing firmness in such announcements. On the one hand, it fed my craving for intellectual loftiness — to be on the receiving end of such pronouncements allowed me to position myself as a defender of the faith, as a true educator unsullied by a preoccupation with filthy lucre. On the other hand, I was secretly reassured when I heard that the important decisions — how to find the money, how to spend the money — were in the hands of realistic, highly-qualified, private-sector types who knew how the world worked. I wanted them on that wall. I needed them on that wall.
I admit that in the heyday of this tension between university-as-academy and university-as-commercial-enterprise there was some weird gendering going on that I try not to think about too much. “Business” was implacably male and strong, and the logic of the market (and all its attendant terminology whereby faculty became Full Time Equivalents and students became Credit Hours Produced) shone like Apollonian reason. Meanwhile the liberal arts, with their vague goals and misty-eyed idealizing, their lack of standardization and frustratingly inconsistent outcomes — well, they were female, areas to be protected and subordinated by a tough-minded business-oriented administration. Admit it: “pure research” has a virginal ring to it. So I confess that I liked being told that the university must be run like a business. After all, it left me time to think abstractly about big ideas (and picturesquely, I might add, leather-bound books at hand, maybe wearing a scarf). It allowed me to scoff at the bean counters even as I consumed the revenue they wrung from the institution. I came to depend on the kindness of those strangers who understood accounting and statistics, core competencies and market niches. Who better to protect me from the real world than the agents of the real world?
But now the “university like a business” simile has been undercut by, well, the real world. Some of the most prominent companies in the United States are starting to resemble universities. They receive massive government aid, suffer from significant new government oversight, cling to inefficient fiscal models, and are buffeted by a howling public who sees tax dollars being thrown down the hole without concomitant results. But besides the human cost of such devilments, we must account for the metaphorical costs of AIG, Chrysler, and Bear Stearns: What happens the next time somebody deploys the Business Simile to eliminate a small, unpopular major (physics, I’m looking at you) or hire three adjuncts where once a tenure-track colleague might have served (hello, foreign languages)? It just won’t have the same effect. Now when somebody says “a university must run like a business” I won’t feel that same secret warmth of the Invisible Hand’s caress — too many businesses are looking pretty un-business-like these days.
We were all a lot happier when the Business Simile was untarnished.
Back in 2004, when my house was worth a lot more than it is now and my TIAA-CREF account was a lottery scratch ticket that always paid off, professor emeritus and former interim president of American University Milton Greenberg gave the Business Simile a good workout in Educause Review, opining that since business “involves the hierarchical and orderly management of people, property, productivity, and finance for profit,” then some hard-eyed pragmatics were in order. He riffs on the Business Simile with gusto: “Numerous realities define the business nature of higher education,” he says, and “claims that education is not a business are seen as cloaks for behaviors and expenditures that violate reasonable expectations of responsibility and accountability.” He winds up with a market-driven aria: “If higher education is to lead its own renewal, it must think about its people, its property, and its productivity in business terms.”
See, that’s the stuff … the Business Simile writ so large it is presented not as a rhetorical flourish, but as the conditions on the ground. We like our realities defined for us, our course charted, and that was what the Business Simile could be made to do in the hands of a master.
Fast forward to 2006. My house and my retirement account were worth even more then, and the Gallup Management Journal ran an admiring profile of the late Drexel University president Constantine Papadakis, praising him for having “no patience with people who decry profiteering in the nonprofit world of education” and for his insistence that “academia should transition into business.” In the ensuing interview, President Papadakis, without irony, declares that in higher education, “If there’s no profit motive, then you are doomed.” Vicariously and years removed, I still thrill to that kind of leadership. It is clear-eyed and results-oriented; it gives the term “businesslike” its good name.
Even as late as 2008, the year my home equity vanished and my TIAA-CREF statement actually burst into flames when I opened the envelope, one could against all odds still find Business Simile boosters. Business Week, which is apparently a magazine devoted entirely to business, ran an article in its August 3, 2008 issue exploring the conjunction of higher education leadership and the captains of industry. We’re told that Philadelphia University President Steve Spinelli helped found Jiffy Lube before becoming an academic, and that the Jiffy Lube experience helped him learn how to run his university “like a business.” Robert J. Birgeneau, chancellor of the University of California at Berkeley, had to “professionalize” his staff, because, he says, a “university is a very complicated business enterprise.”
But the article, which should have assured us that the Business Simile is so inexorable that we can depend on it even as layoffs and cutbacks are endemic, ends on an unsettling note. Making the point that many universities are hiring corporate CEOs to help them through these troubled times, Business Week notes that the University of California at Berkeley turned to Citigroup for a new vice chancellor and Harvard just filled an executive vice president position with a former Goldman Sachs executive.
Wait a minute, says I, finally getting to use the word schadenfreude correctly. Goldman Sachs? Citigroup? That’s $55 billion in TARP funding right there. We should note here that the former Goldman Sachs manager Edward Forst lasted less than a year at Harvard, but the existence of TARP funding, government bailouts, and the general collapse of the economy has led not just to a general crisis in business confidence, but more importantly to a crisis in my confidence in business and, by extension, the Business Simile. We are witnessing the death of a once-potent figure of speech.
As long as “business” represented competence and “university” represented inefficiency, then the Business Simile was able to win many an argument. But similes die, and they die when their referents stop making sense. Hardly anybody says “in like Flynn” anymore because very few people remember who Errol Flynn was, much less that he was associated with skillful swordplay and copulation. Who says “like clockwork” anymore? Only those who remember what clockwork was, or those who use the simile as a nostalgic gesture.
And maybe nostalgia will be my refuge. The Business Simile will not end with a bang, here in the midst of the 2009 recession. It may linger on, neutered, in faculty senate minutes and university strategic plans, in the inauguration speeches of university presidents yet to come, invoked not because it is timely or sensible, but because it reminds us of earlier good times, like folks in Hoovervilles humming Al Jolson. One can “sleep like a log” even though few of us saw our own fallen trees and snoring can be treated medically. One may still “work like a Trojan” even though it is only those with spectacularly unmarketable Classics degrees who can explain why Trojans were once known as hard workers. But those similes refer to past time, not present realities. They’re as antiquated as Bob Seger singing that my Chevy (its GM warranty now looking suspect) is “like a rock.”
So I’ll pine for the old, sure Business Simile, but settle for a new formulation: “A university must run like a business, but without the crazy risk-taking, the lack of accountability, the bankruptcy and the indictments.” It doesn’t have the same ring to it, but in these times we must take our comfort where we can. I’ll miss you, Business Simile. Losing you hurts like the dickens, whatever that means.
Daniel J. Ennis
Daniel J. Ennis is professor of English at Coastal Carolina University.
Submitted by Karine Moe on September 29, 2009 - 3:00am
Educated women’s relationship with work today is located at the crosscurrents of some significant demographic and societal shifts. Perhaps the most important of these changes, the stunning educational achievements of women during the past 50 years, opened doors to a wide variety of interesting and well-paid careers, including academe. Women, and married women in particular, increasingly entered fields that had long been considered male bastions. Given the opportunity to prove themselves academically and professionally, educated women marched headlong into the workforce. After a century of increasing female labor force participation, then, many were surprised when at the turn of the 21st century increases in the labor force participation of women stalled -- and in some cases, such as college-educated mothers of infants, declined dramatically.
While women have always moved in and out of the labor force, these most recent movements seemed different. The press began to identify women who, after investing considerable time and money in their educations, decided to leave prestigious and highly-paid careers. While the actual number of college-educated women who quit their jobs to tend to their children constituted a small fraction of working women, the phenomenon nevertheless fueled a heated public debate.
Arguments about the size of the phenomenon aside, the important part of this story is the valuable lessons about work and family to be learned from those who walked away from careers, high powered and otherwise. Our research on these women revealed issues faced by all mothers who seek to combine paid work and childrearing. While our sample was broad and included women from many different fields, academics were well-represented in our study, and so our findings have direct relevance for academic employers.
As women’s commitment to the workforce rose dramatically in the late 1900s, marital patterns began to shift. Paraphrasing Gloria Steinem, these highly educated women were becoming the men they wanted to marry. Instead of the professor marrying the department secretary, who then quit work to raise the family, now the professor is likely to marry another professor, or lawyer, or financial analyst. This dynamic gave rise to something we call “the 100-hour couple,” or a couple who works extremely long hours for a combined total of more than 100 hours per week. At the same time as these highly educated women began to compete for academic, professional and managerial positions (along with their husbands), we began to see a surge in the work hours expected by employers. The expectations of employers for complete commitment to work -- with many expecting employees to be available on a 24/7 basis -- have risen substantially over the past few decades, as technology has made it increasingly possible for workers to be reached at all hours.
These changes coincided with cultural shifts in expectations for parenthood. While fathers certainly spend more time with their children than ever before, they still do not spend nearly as much time as do mothers. Today's mothers describe an intensification of motherhood that can be felt in the pressure to provide “mama time” for their kids by arranging play dates, driving them to activities, monitoring piano practice and homework, etc.
Compounded by ongoing expectations for women to manage household responsibilities, these cultural and demographic shifts came together to create a perfect storm of social forces that has led women to reevaluate their relationship with work. Aside from the trends described above, certain structural characteristics of the workplace inhibit women’s ability to excel in their careers while creating the home life they desire. By addressing some of these structural barriers, employers can help to create a workplace that will attract and retain highly qualified women. The implications of our research for academic employers are myriad.
Most jobs and workplace norms, including those in academe, were structured originally for men who had wives devoted full-time to managing the home life. Typically designed by and for men, few careers offer alternatives for combining work and motherhood without incurring a significant penalty in terms of advancement and pay. In the case of academe, for example, those stressful years leading up to tenure coincide exactly with a woman’s prime childbearing years. Instances of a part-time tenure track are extremely rare, and if a woman gives up a tenure-track job to take time to attend to family matters, it is highly unlikely that she will be able to secure another tenure-track job. These constraints lead many qualified women to give up a chance at tenure in return for a lifetime of adjunct positions. Creating an alternative model in which parents can reduce their time commitment at work while still remaining on the tenure track (albeit delayed) would be a major step that colleges and universities could take to retain highly qualified women.
Pregnancy, childbirth, and adoption can present major stumbling blocks, even to women who want to continue to work full-time. Institutions can take great strides to improve the lives of their female faculty and staff by creating and legitimizing pregnancy, childbearing, and adoption leave policies. The 1993 Family and Medical Leave Act (FMLA) ensured that colleges offer employees at least 12 weeks of unpaid leave within a 12-month period for birth, adoption, or to care for themselves or their family members. Many institutions have expanded their parenting leave policies beyond what the FMLA requires, by providing some weeks of paid leave or a course reduction. Others provide a semester off with reduced pay. Unfortunately, the nature of academic work does not lend itself easily to taking a six-week leave if that leave cuts into the academic calendar year. However, it can be done, especially when administrators provide support in terms of hiring replacement faculty or creatively configuring course loads.
Perhaps more important, however, is that even when these types of policies, generous as they may seem, are “on the books,” women faculty may feel that they can’t avail themselves without making it seem that they are uncommitted to their careers. Indeed many female faculty engage directly in actions to minimize even the appearance of allowing family obligations to interfere with work commitments. Typical strategies to avoid bias include returning to work too soon after childbirth, not requesting reduced teaching loads when necessary, or even missing important events in their children’s lives. Ensuring that women are not punished for taking advantage of flexible work options, then, becomes a significant step that administrators can take to retain these women.
A dearth of high quality childcare presents another significant structural barrier to mothers’ employment. High quality, affordable childcare is often unattainable for many families. In some cases the care is available, but expensive. In Minneapolis-St. Paul, for example, a family can expect to pay $24,000 per year to enroll an infant and a toddler in a full-time, center-based child care facility. And even when quality care is available and affordable, the inflexibility of opening and closing times -- with most child care centers charging by the minute for late pick-ups -- does not mesh with employer demands outside of the traditional 9-to-5 workday.
Of course, care responsibilities are not limited to children. Elder care is becoming an increasingly important drain on workers, and will only worsen as the nation’s baby boomers age. And since women are having children at increasingly older ages, we can expect to see a rise in the numbers of workers who have child and elder care responsibilities at the same time.
Studies have shown that employers who assist their workers with child care see improvements in productivity and morale. By providing and/or subsidizing child care for their employees, universities and colleges can expect to see improved worker performance and reduced turnover and absenteeism. As an added bonus, employers do not pay employment taxes on benefits, and so institutions can reduce their tax burden by casting some of an employee’s compensation as a child care subsidy.
Some believe that the women who left their jobs did so because they were not successful, didn’t like the work, or lacked ambition, all ideas contradicted by our research. Many of the women we interviewed had been phenomenally successful and loved their careers, but they also felt that workplace structures limited their capacity both to raise their families and to continue in those careers. And this was particularly true for academics. One national study of highly educated women who had left their careers found that only doctors seemed happier in their work than professors, with lawyers and M.B.A.'s being far more likely to report job dissatisfaction as a major reason for leaving their careers. It was not a lack of job satisfaction that drove the professors to leave their careers, but rather the structure of the job. Therefore, academic employers who are interested in recruiting and retaining talented women should direct their attention to making structural changes in their institutions, such as increasing flexibility in terms of the tenure clock, allowing women (and men) to reduce teaching loads and take parenting leaves as needed, as well as improving other benefits such as child care assistance.
Submitted by Alex Golub on November 3, 2009 - 3:00am
E-book readers are all the rage these days -- from scenes of Oprah's audience ecstatically receiving complimentary Kindles to models of Sony's new e-book readers, this long-promised technology looks like it has finally arrived. Much has been written about the effect that e-books will have on the publishing industry (including scholarly publication), on education, and on their niche in the ecosystem of Extremely Complicated Handheld Devices Our Students Understand. But how useful are these devices for academics, and how do they fit into our own personal scholarly ecosystems?
I recently got to spend two months up close and personal with a newly purchased Kindle from Amazon when I spent my summer conducting fieldwork in Port Moresby, Papua New Guinea. Over that time, and for about a month beforehand, I had a chance to read both academic and nonacademic work on the Kindle. Based on that experience, my overall impression is that while the Kindle and other e-book readers might not quite be ready for prime time, they are going to be an important part of academic work in the future.
Let’s face it: at heart, the Kindle is designed to let you read mystery novels, not academic books. It is small, light, and has terrific battery life. In this respect, Jeff Bezos has succeeded in his goal of creating a device that "just lets you read." But for an academic like me, whose casual reading list consists of books that normal humans find pointlessly opaque, does it matter that I can now read anywhere? The answer, I think, is Yes. The Kindle is remarkably freeing -- suddenly your porch or the beach is a workspace (this is particularly important to me, since I live in Hawaii and spend much of my time on my lanai). I never realized how much reading I did at my computer until I had the ability to read somewhere else. Admittedly, some might consider the workspaceization of their entire lives a minus rather than a plus, but as academics, when has our life ever been separate from our work?
Academics often have a different experience of reading from that of regular readers -- our books are expensive, they are odd sizes, we intend to use them our entire lives and are careful about their condition, and we travel everywhere with an elaborate array of mechanical pencils, sticky notes, and highlighters to read them. The physical experience of reading on a Kindle solves many of these problems for us. Over the summer I read Pevear and Volokhonsky’s translation of War and Peace, something that I had tried and failed to accomplish before simply because the book was too damn large to handle. Kindles can be held over one's head while in bed or on the couch without tiring the arms, a key consideration for academics who 1) read everywhere, all the time and 2) have no upper body strength.
There are drawbacks to reading on the Kindle, of course. First, it is not a book. If one of the main reasons you read books is to feel and smell the pages in order to gratify your self-image as a "reader" or "intellectual," then the Kindle is probably not for you. But if, as an academic, you are interested in the content of the book you are reading, then the Kindle's lack of pages offers a different set of challenges. Most obviously, you must give up being able to remember that the passage you are looking for is on the left- or right-hand side of the page. More substantively, though, the Kindle makes moving back and forth between endnotes, body text, and bibliographic material a tremendous pain -- a key concern for scholars who read by moving through the main text of a book and its scholarly apparatus simultaneously. And I must admit, while it’s nice to be able to search the contents of your book, I somehow feel that flipping through it is a method of browsing that has some obscure but important utility that the Kindle hasn't yet duplicated.
Most importantly, many academics add value to their library by writing in their books. While the Kindle’s built-in underlining feature does a much less-sucky job of marking up texts than I originally expected, the markup features of the device are simply not as good as paper. While underlining may be fine for some, I am sure that many academics are like me in that they have their own complex and idiosyncratic method of annotating books, which features complex circling, numbering, bracketing, and so forth -- none of which is available on the Kindle. And of course, if the things you read feature charts, graphs, or even pictures, the Kindle's small screen will render them illegible.
Of course, you can do more than just read books on your Kindle. You can email PDFs to it, put .doc files on it, and so forth. This makes reading journal articles a snap -- although it will be even more of a snap when we can just go to JSTOR and click the "send this article to my eBook reader" link. It saves us from dragging around lengthy MA theses and dissertations to read, although of course we can't mark up and then hand back the drafts of our students' work that we read on the Kindle.
In fact, I must admit that I think the book as an artifact is already dead. The Internet has created a used book market in which different versions, printings, pressings, covers of books matter not at all. Each book is, in a way, a replica of all the other books of the same title. Getting "reading copies" of books is now so easy that the e-book feels like the nail in the coffin, not a game-changer.
As academics, we often read extremely specialized books printed in very short runs in places that are, in general, very far from where we live. The Kindle really helps "long tail" readers like us because it lets you download a sample chapter and then purchase, download, and read a new title. That is tremendously exciting for academics, whose books often don't have a "look inside" feature on the Amazon Web site (or Google Books, or wherever), and who otherwise might waste time and money getting a book shipped to them simply to verify whether it is worth reading. In an age when our libraries are more and more cash-strapped, e-book distribution offers a lot of hope for niche publishing -- and academic publishing is nothing if not niche.
Except textbooks. I have to admit I am scared silly by the idea of a generation of students so alienated from material they are supposed to be immersed in that they rent digital textbooks they do not intend to keep, cannot dog-ear and underline, and otherwise feel no ownership of. Even the current trend of students not underlining in books so as to preserve their resale value strikes me as appalling. Taking ownership of your education -- and indeed, just learning how to read closely -- means making your books part of your physical environment. In an era when you thought criminally overpriced textbooks full of uselessly pretty pictures and pre-chewed content were the absolute nadir of education, the Campus Full Of Kindles demonstrates we still have lower to sink. If, that is, the Kindles alienate students from their libraries rather than empowering them to immerse themselves in them.
And this brings us to the crux of the issue: Max Weber once remarked that scholars are the only remaining technical specialists who own their own means of production: their library. The Kindle changes this. The Kindle is the inkjet printer of the 21st century: the business model is to give you the device for free and then charge you for refills. Sure, the Kindle promises liberation to traveling bookworms, who can now travel without an emergency stock of extremely heavy extra reading in their bags. This space-saving feature offers even more respite for academics who find the book-to-oxygen ratio in their over-packed offices dangerously low. But then again, books are visible in a way e-books are not. I don't know about you, but one big consequence of developing an electronic library of PDFs and books is that I forget what is in it, something that is harder to do when your books are there in the room with you, on an easily-eyeball-able shelf. And I, at least, am reluctant to discard a book I have marked up no matter how ubiquitous replacement copies are: My markings add value to my library.
More importantly, what happens to our scholarly means of production when they are locked in Amazon’s copy-protected format? A glitch -- or policy change -- at Amazon may result in an erased library, and it is not entirely clear to me that Amazon’s interests are aligned with scholars'. We want our digital content to be open and accessible to us. We want our underlinings and notes to transfer seamlessly from our e-book readers to our PDF collection programs to our printers, and we want to be able to mark up our content a lot (this is the academic version of the "remixing" that Lawrence Lessig talks about). We want to be able to buy content from anyone -- not just Amazon -- and read it on our Kindles. We want to be able to read journal articles on our Kindle every time a new issue of our favorite journal hits our RSS readers. Will Amazon facilitate this or will it lock the Kindle down? In sum, while e-book readers could be an important part of our future academic reading habits, questions remain.
A key to making them attractive is developing an ecosystem of scholarly information sources around them: larger libraries of scholarly books, reasonably priced, and with a firm title to ownership; better connections between content repositories such as journal websites and our handheld readers; more ways to make annotations and display information; compatibility of files across readers (something that could be facilitated by adopting open standards); and ways to share marked-up documents with our colleagues. Perhaps one thing that I don't want on my e-book reader is more bells and whistles -- the harder it is to check my email or surf the web on my reader, the more likely I am to get some reading done. Until law and policy provide stronger consumer rights for people to own, rather than lease, the information they purchase, books offer a surety of title that e-books cannot replace.
It is too early for academics to shift much of their workload over to e-book format -- although that day may come sooner than we expect. So if you, like me, are going to spend a lot of time traveling or just away from bookstores, it might be time to try one of these devices. While they are not ready for prime time yet, they are still great places to outsource our pleasure reading and reference libraries. And soon they might be good for even more.
Alex Golub is assistant professor of anthropology at the University of Hawaii at Manoa.
I used to strive to be an ideal teacher, but I gave up, because not only could I not satisfy a single classroom, I couldn’t even maintain my ideality for smitten students who took me for a second course. If there were a teacher-god in Greek mythology, I would worship at that temple for guidance. Mentor, of course, is ideal, and he was ideal for Odysseus and Telemachos alike. But remember that Athena herself has to impersonate Mentor in order to instruct Telemachos. To be the ideal teacher of my students, who have come from all over the globe to our Brooklyn community college with various beliefs about teaching and learning, they would all have to have a hand in creating me. What a magnificent creature I would be!
Ideal Teacher is fluent in the first language of all of his students. He can explain arcane English grammar backwards and forwards and compare it to Mandarin or French; he sees the parallels of all human utterance. His thing isn’t language so much as divining the incomprehension each student has in her way. Ideal Teacher is continually murmuring “Ah!” as he surveys the classroom. “I see!” And he utters the perfect words (or projects the perfect expression from his twinkling face), and the enlightened student swoons, almost melts, with happy comprehension.
That’s Ideal Teacher.
In a more ideal world, Ideal Teacher has no gender, but we’re working from our limited human imagination here, and I wouldn’t want him to be a sideshow curiosity, as a double-sexed or neither-sexed being would be distracting. So, Ideal Teacher, the male version, is physically attractive -- but neither paternal nor sexy, avuncular rather and authoritative but also good-humored and funny -- very funny and very serious about his subject.
Ideal Teacher should not be too young or too old. He should not be too enthusiastic. (This subject, these books, cannot be his life-blood, can they?) He should be a handsome monk, but in civilian clothing. His clothing should not be too fashionable, and of course not odd, but it should be pleasing to look at. A perfect formal informality. His get-ups should not show too much flair; the student should not think of the giddy and clever and outrageously dressed teacher from the Magic School Bus. Id Tea’s hair should be neat and can have just a little bit of gray -- not a lot! He should not be bald -- that’s either a weakness or rather too suggestive of nakedness. His face, however, should be clean-shaven, the delicate creases of his delight and pleasure at being in his students’ company revealing themselves more obviously thereby.
Ideal Teacher inspires and never nags. Students do the classwork and homework not only because they love him but because, through his teaching, they come to love the subject. Ideal Teacher is somehow never an object of romantic love. I return to this, as do the student-creators of Ideal Teacher, because there is necessarily something sexless about him. He is, however, not so sexless as to be contemptible. But the students never ever have to think of him having a sex life, and none of his jokes ever suggest his own pleasure or experience in sexual activity. He never ever gets distracted when he sees a pair of bare twitchy female legs or the depthless slopes of cleavage! Of course he doesn’t look! He is above and beyond such primitive responses.
If he has children -- this is tricky -- it is better that they were adopted by him and his wife, who -- I didn’t make this up, the students did -- is dead. He is a widower, thereby sympathetic, but his children are out of the house, because he has no other life, really, than the students in his class.
And of course he has other classes! But the students in one class are never aware of and thereby not jealous of the ones in the others. If he has to miss a class (and each semester he should miss a couple, whenever several of the students are having a bad day or medical appointments or just really would appreciate that gift of an hour’s holiday), everyone hears about it ahead of time and doesn’t have to cross campus to find out. If a student is low or down and needs him, Id Tea is always in his office. He is also glimpsed once in a while in the hallways and also passing through the cafeteria. He can drink juice or water, maybe coffee, but it’s better if he doesn’t. He really shouldn’t eat. Ideal Teacher has to eat, but not when a student can see, because what if his diet includes the pork or beef or meat or vegetables or protein-matter that the student disapproves of? In any case, an Ideal who eats human food is disgusting and he really shouldn’t.
After school (he shouldn’t live there, not on campus or on a cot in his office), Ideal Teacher can be seen leaving but he absolutely does not take public transportation! He does not share the grim bus ride to the subway or the impatient rush-hour subway ride towards the city. He does not sit shoulder to shoulder with Brooklynites and mark papers while sipping and sloshing coffee and eating a crumbling cookie. Banish the thought! No bus, no train. He has a car, and it’s an unusual car -- not too expensive, but cute and funny. He does not live too close to the college.
What he does at home or when he’s not teaching is never mentioned or suggested. He does not talk about “back in the day” or ever sound confused about iPods or texting or anything electronic. He is accessible by e-mail but not on Twitter. Ideal Teacher, when telling jokes, is always funny and never misunderstood. Jonathan, for example, does not have to think a moment before he realizes Ideal Teacher’s only kidding! All of his jokes are hilarious, but somehow they’re also warm, and everyone is reassured by these jokes -- that we’re all human and fallible. So Ideal Teacher is as funny as Bill Cosby, and he can wear loud sweaters if he wants to. His wit always saves his students from embarrassment and awkwardness. It unites students, and if he targets a particular student, the gentle barb tickles. The corrected student laughs without shame and is only momentarily embarrassed. He sees before him an open path back into the good graces of his classmates and of course the professor himself. There is a hazy bliss that descends every day or two in class, wherein all the students realize they love him and they love their classmates and they love everyone in the world equally -- everyone realizes their boundless humility and tolerance -- and the whole class and Ideal Teacher sit for long moments in the glow of mutual respect and appreciation.
Ideal Teacher, this combination of Bill Cosby and the Dalai Lama with a dash or two of the latest superhero, is an angel of light. He will live forever and he was never born.
Bob Blaisdell is a professor of English at City University of New York’s Kingsborough Community College.
It’s difficult to believe now, but not so long ago, I looked forward to making up syllabuses.
Once the grand meal of the course had been structured and I’d chosen an exciting title, the syllabus design was my dessert. I took the word “design” quite literally, having fun with frames and borders, trying out different fonts, fiddling with margins.
Then, after printing out the final document, I’d sit at my kitchen table and add images saved for the purpose from old magazines, vintage catalogs, pulp advertising, obscure books, and other ephemera. Fat cherubs blowing their trumpets would announce Thanksgiving break; a skull and crossbones marked the spot of the final exam. My masterpiece was a course on the work of Edgar Allan Poe, whose syllabus was a gothic folly with a graveyard on the front page and cadaver worms crawling up the margins.
Over time, my syllabuses grew less creative. I still gave my courses what I hoped were enticing titles, and I’d usually add an image to the front page, but nothing more. In part, I was afraid my quirky designs might make the course seem less serious; I also had far less free time than I used to. But mostly, it was the number of disclaimers, caveats and addenda at the end of the syllabus that made my designs seem out of place. All these extra paragraphs made the syllabus seem less personal, and more institutional -- but then, I realized, perhaps it was time I grew up and began to toe the party line.
Those were the good old days. Now, at a different institution, I teach in a low-residency program whose courses are taught, in part, online. The institutional syllabus template is pre-provided: Times New Roman, 12-point font, 1-inch margins -- and don’t forget the “inspirational quote” at the top of the page.
The Course Description is followed by the list of Course Objectives, Learning Outcomes, Curriculum and Reading Assignments, Required Reading, Assessment Criteria and so on, all the way down to the Institute’s Plagiarism Policy and Equal Opportunity Provisions. Colleagues tell me it’s the same almost everywhere now; the syllabus is now composed mainly of long, dry passages of legalese.
I no longer design my own course titles -- or, if I do, they need to be the kind of thing that looks appropriate on a transcript, which means “Comparative Approaches to the Gothic Novel,” not “Monks, Murder and Mayhem!” There’s an extra plague in online teaching, however, in that -- at least, at the institution where I’m currently employed -- all course materials, including weekly presentations, must be submitted months in advance.
This, I’m told, is not only to ensure that books are ordered and copyrights cleared, but also for the various documents to pass along the line of administrative staff whose job includes vetting them in order to be sure no rules have been violated, then uploading them in the appropriate format. Moreover, a syllabus, we are constantly reminded, is a binding legal document; once submitted, it must be followed to the letter. Omissions or inclusions would be legitimate grounds for student complaint.
Gone, then, are the days when I could bring my class an article from that morning’s New York Times. Now, when I stumble on a story, book or film that would fit perfectly with the course I’m currently teaching, I feel depressed, not excited. I can mention it, sure, but I can’t “use” it in the class. Nor can I reorient the course in mid-stream once I get to know the students; I can’t change a core text, for example, if I find they’ve all read it before; I can’t change the materials to meet student interests or help with difficulties, as I once did without a second thought.
This is especially perplexing in online teaching, where it’s so easy to link to a video, film clip, or audio lecture. We have an institution-wide rule that such materials may not be used unless accompanied by a written transcript for the hearing impaired. When I object that there are no hearing impaired students in my small class of six, I am told that no, there are currently no students who have disclosed such an impairment. The transcripts are needed in case any of them should do so -- in which case, they would be immediately entitled to transcripts for all audio-visual material previously used in the course. Sadly, those who pay the price for this assiduous care of phantom students are the six real students in the course.
In brief, what used to be a treat is now an irksome chore.
Instead of designing a syllabus, I’m filling out a template, whose primary reader is not the student, not even the phantom potential-hearing-impaired student, but the administrators and examiners who’ll be scanning it for potential deviations from standard policy.
Sitting at my kitchen table with scissors and glue, I always felt as though the syllabus -- and, by implication, the course -- was something that came from within me, something I had literally produced, at home, with pleasure and joy.
Now, by the time the course is finally “taught” months after the template has been submitted, it feels like a stillbirth from a mechanical mother.
Mikita Brottman is chair of the humanities program at Pacifica Graduate Institute.
Scattered through the Modern Language Association’s 2009 convention were telling sessions devoted to the state of higher education. Compelling testimony was offered in small and sometimes crowded rooms about the loss of long-term central features of the discipline, from foreign language study to graduate student support to tenure track jobs for new Ph.D.'s. In many respects, the MLA’s annual meeting is more responsive to higher education’s grave crisis than the other humanities and social science disciplines that should also be part of the conversation, from anthropology to classics to history and sociology. There are simply more MLA sessions dealing with such issues than there are at other disciplinary meetings. Yet there was also throughout the MLA convention a strong sense of irrelevant business as usual, in the form of innumerable sessions devoted to traditional scholarship. There is a certain poignancy to the orchestra playing Mozart while the Titanic slips beneath the waves: We who are about to die salute our traditional high cultural commitments.
Of course we should sustain the values and the ongoing research that make humanities disciplines what they are. But the point is that the ship does not have to go down. There is action to be taken, work to be done, organizing and educating to do when faculty members and graduate students come together from around the country. Disciplinary organizations thus need to revise their priorities to confront what is proving to be a multi-year recession in higher education. As I argue in No University Is an Island, the recession is prompting destructive changes in governance, faculty status, and educational mission that will long outlast the current crisis. Because MLA’s members are already talking about these matters in scattered ways, it is time for the organization to take the lead in revising the format of its annual meeting to address the state of higher education -- and prepare its members to be effective agents -- in a much more focused, visible, and productive way. Then perhaps other disciplines will follow.
A generation ago, when the MLA’s Graduate Student Caucus sought to reform the organization, it circulated several posters at annual meetings. Most telling, I thought, was a photograph of the Titanic, captioned “Are you enjoying your assistant-ship?” It was no easy task back then convincing the average tenured MLA member that the large waves towering over our lifeboats would not be good for surfing. Now the average college teacher is no longer eligible for tenure, and the good ship humanities is already partly under water.
The MLA’s response to a changing profession was to increase the number and variety of sessions, to give convention space to both fantasy and reality. The MLA would cease to be exclusively a platform for privilege. The organization would become a big tent. Unfortunately, the big tent is looking more like a shroud. The humanities are drowning. It is time to rethink the annual meeting to make it serve a threatened profession’s needs.
Until we can secure the future of higher education, we need to be substantially focused on money and power. That, I would argue, should be the theme of the 2010 annual meeting, and the structure of the meeting should be revised to reflect that focus. Instead of simply offering incoherent variety, the MLA should emphasize large meetings on the current crisis and its implications. And I do not mean simply paper presentations, telling as local testimony can be.
Disciplinary organizations need to offer substantial training sessions -- typically running several hours each and perhaps returning for additional sessions over two or three days -- that teach their members the fundamentals of financial analysis and strategies for organizing resistance. The AAUP, for example, teaches summer workshops each year that show faculty members the difference between budgets, which are fundamentally planning documents riddled with assumptions, and financial statements, which report actual expenditures for the previous year. We work not with hypothetical budgets but with examples from a dozen universities. Attendees learn that there are virtually always pots of money not listed on a university budget at all. A budget, MLA members will benefit from learning, is essentially a narrative. It can and should be deconstructed. I expect the AAUP would be willing and able to conduct such training sessions at disciplinary meetings. Indeed we already have the PowerPoint presentations and detailed handouts we would need. We have faculty members who specialize in analyzing university finances ready to serve the MLA and other disciplinary organizations.
The AAUP could also join with the AFT and the NEA to offer workshops in the fundamentals of collective bargaining, explaining how faculty and graduate employees at a given school can create a union that meets their distinctive institutional needs and embodies their core values. We can stage scenarios that give faculty members and graduate student activists experience in negotiating contracts. And the MLA should schedule large sessions that help faculty in places where collective bargaining is impossible recognize that organizing to have influence over budget decisions and institutional priorities is also possible without a union. The organization should also invite the California Faculty Association to conduct a large workshop on ways to reach out to students, parents, alumni, and other citizens and rebuild public support for higher education. CFA has been running a terrific campaign toward that end. The point is to empower faculty members to be the equals, not the victims, of campus administrators.
I am urging an annual MLA meeting that promotes not only literary studies but also material empowerment, that equips the members of the profession with the skills they need to preserve an appropriate environment for teaching and research. If the MLA takes the lead in reshaping its annual meeting this way, other disciplines will follow.
Even old news can be dismal, and that is the case at hand. For about 40 years, by my calculation, American universities have been admitting too many candidates for doctorates in the liberal arts and the social sciences and, startling attrition along the way notwithstanding, have produced too great a supply of Ph.D.'s for a dwindling demand. There are proposed remedies for this injustice of preparing people exclusively for work that will not be available to them, but I want to address a different problem. What can we do with, and for, the Ph.D.'s and those who dropped out short of the final degree that will be useful for them and, not accidentally, provide a benefit to the nation?
Those who have earned or at least pursued doctorates in the humanities or social sciences, or professional degrees in law and business, whom I want to include in my argument, have learned how to learn, how to conduct research, and in many cases have acquired a second language. Field work or study abroad may have further informed them about other cultures. Thus, although their training has been geared to turn them into replicas, if not clones, of their former professors and reportedly has not prepared them for competing in the world outside the academy, they have useful skills, which could also be marketable. The question is how to bring them to market.
My proposal is for a national program that combines some of the elements of Works Progress Administration programs from the Great Depression, the Peace Corps, and the Fulbright Awards. I mention the WPA not because we have entered another depression — so far so battered, but also so far so good — but because its various programs took the unemployed and found them work which, with some notorious exceptions, the nation needed done. And this effort included support for writers and artists. The Peace Corps and the Fulbrights, with their histories of sending Americans abroad (and bringing foreigners here as Fulbright scholars) have proven their intellectual worth, their pragmatic value, and their foreign policy bona fides. I am, however, suggesting them as models of successes, not as templates.
Volunteers for this new program, after training most plausibly sponsored by the State Department, would be sent abroad, chiefly to developing countries where they could teach at high levels, in some cases study (especially languages), and work in civil programs according to their abilities and training, for example, in court administration and in the organization of self-help associations and business start-ups. The actual work will need to be directed by the skills of the volunteers, not from an arbitrary menu of projects or by ukase, though selection of the volunteers for the program will have to contribute to the shaping of its execution.
The work, as I imagine it, would not replicate or overlap with the work of Peace Corps volunteers. First, the program would recruit from the limited pool that I have described. Second, the work needs to be white-collar — educational at a high level, administrative, or organizational; volunteers will not be making bricks or laying water pipes or teaching in primary and secondary schools. Third, depending on the interest of the host country and the volunteers, periods of service could be longer than the 27-month tour in the Peace Corps. Fourth, mastering a new "strategic" language will be a primary requirement of volunteers, no matter their specific daily work — a point I will return to shortly. Fifth, at the completion of a tour, volunteers will be encouraged to maintain the linguistic skills and the cultural information they acquired while abroad. This may be done through the kind of employment they find, ideally in government service, but industry and academe could serve as well. (I say encourage rather than require because we no longer have conscription, and the unwilling are never very happy or useful.) It seems obvious to me that banking people competent in language against a future when their skills will be needed will be a good investment.
The short-term benefits are clear enough. Like the Peace Corps and the Fulbrights, the program has the potential to increase the familiarity of a generation of young Americans with other countries, their languages and cultures. Like them, it is a way of conducting soft diplomacy in which the character of the participants could complement and, I expect, enhance our national policies and interests. Those who expand knowledge or help to improve civil institutions tend to command respect, even affection, while revealing — perhaps to the astonishment of many abroad — that Americans are not the horned minions of the Great Satan. These are the expectations that the program I suggest must have, maintained rigorously with supervision and review. I have no interest in a program that enables young or even middle-aged people to find themselves or that simply keeps them out of the job market for a few years.
The longer-term benefits are, I think, more interesting and more valuable. If, as I have said, mastery of language is a primary requirement, returning volunteers will be available who know languages that are neither widely taught nor spoken in the United States; one principle guiding the placement of the volunteers should be the importance of the languages spoken where they are posted. Thirty years after we learned, in the aftermath of the assault on the American embassy in Tehran, that the Central Intelligence Agency did not have a single Farsi speaker there, we still have intelligence and military services that sorely lack people who can speak and read the languages of Africa, Asia, and the Middle East. Joshua Keating of Foreign Policy magazine has recently pointed out that only 13 percent of CIA employees speak a second language. He tells a bleaker story still. In March of 2009, the administration wanted a "civilian surge" of 300 experts in language and administration to serve in Iraq and Afghanistan. A month later, State and USAID could not find the people, and the over-committed military commands had to supply the staff.
The program I am proposing, had it been established five years ago, might have been able to provide that missing expertise, at least a good part of it. Assuming only a couple of thousand volunteers a year in the program — a number that could certainly grow as it ripens and perhaps broadens according to needs — it would be manageable and not very expensive. As benchmarks, and these are only points of departure, the practices and the budgets of the U.S. Scholars Program of the Fulbright Awards and of the Peace Corps are instructive. Both offer transportation to the host country and "maintenance" or "an allowance" based on local living costs; Fulbright expenses are higher because their scholars generally live in high-cost countries. The Peace Corps offers deferral of student loan payments and in some cases cancellation of Perkins Loans, both of which would be attractive to the volunteers I have in mind.
If, then, we want a rough-and-ready baseline of costs to fund a pilot program of, say, 1,000 volunteers to begin with, we can simply take the approximate cost per volunteer for the Peace Corps, since I envision them living in conditions more like those of Peace Corps volunteers than of Fulbright students. This offers a cost per volunteer of about $45,000. Given the number of unemployed academics, recruiting this many should not be difficult and would permit selectivity.
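Purely for illustration, the back-of-envelope arithmetic above can be made explicit. The $45,000 per-volunteer figure is the Peace Corps-based approximation from the text; the cohort sizes are the essay's own hypotheticals, and this sketch is not part of the original proposal:

```python
# Rough pilot-program cost estimate, using the Peace Corps-derived
# per-volunteer figure cited in the essay (an approximation, not a budget).
COST_PER_VOLUNTEER = 45_000  # approximate annual cost per volunteer, in dollars

def pilot_cost(volunteers: int, cost_per_volunteer: int = COST_PER_VOLUNTEER) -> int:
    """Total annual cost for a cohort of the given size."""
    return volunteers * cost_per_volunteer

print(f"1,000-volunteer pilot: ${pilot_cost(1_000):,}")  # $45,000,000
print(f"2,000-volunteer year:  ${pilot_cost(2_000):,}")  # $90,000,000
```

Even at a few thousand volunteers a year, the annual outlay stays in the low hundreds of millions, which is small by the standards of federal programs.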
When the economy improves a bit, imagine some of the alumni of this program entering academe not bitter from four years of adjuncting without health insurance, but energized by new experiences and bringing unusual combinations of knowledge to their universities. Imagine if every English or history department had someone who had recently lived in the Middle East or Africa.
There is another benefit that could actually respond to a serious, if only simmering or festering, problem heading right at us now. In the next several years, approximately 250,000 federal employees, many of them at the top as GS-15 or SES workers, will be retiring. How to replace them or, more to the point, where simply to look for their replacements, is already proving vexing and nerve-racking. My belief — it is more than a hunch — is that many of these returning volunteers would be interested in federal service or perhaps in service with state governments, which face the same baby-boomer retirements as the federal government. They will already have been exposed to the terms and the values of working as public servants. They will have acquired, at the government’s expense, new skills and may have a sense of obligation or loyalty, which would be welcome. Perhaps offering student loan forgiveness or reduction in return for government service after the tour abroad would be a strong inducement.
Many, if not all, will have the academic credentials that public agencies routinely look for. If the program I propose could be established soon and quickly grow to several thousand new recruits every year — and recruits are available now and will continue to be until we change our policies of graduate education — we would have made a respectable down payment on this human capital obligation. Instead of mortgaging our future, as many programs often appear to do, we could actually be paying down the mortgage by drawing on the skills we have banked.
Beyond the value of sending “missionaries” or soft diplomats abroad, the two additional goals I have presented — the acquisition of strategic languages and the restocking of the public sector — are distinct, but not at odds with one another, nor do I worry that having more than one goal clouds the mission or makes the program unwieldy: both are worthy and important, and neither excludes the other. Moreover, by turning to the supply of unemployed or underemployed men and women, we will be putting to work minds that have been trained and skills that have been honed at great expense. This may seem an exercise in good works and foreign policy, but it is no less, in my opinion, a matter of thrift and profit for the nation.
Stephen Joel Trachtenberg is president emeritus and university professor of public service at George Washington University.
I wrote my first novel, a cross between The Last of the Mohicans and Shane, when I was eight or nine years old. I wrote it in small spiral-bound notebooks and illustrated it by hand. Later I tore all the pages out of the notebooks and stapled them together in thick stacks. I wasn’t a literary prodigy, just a kid who loved Star Wars, comics and novels. I was a geek who was not afraid to dream the literary dream. In the years that followed, I continued dreaming that dream. After spending several years writing short stories and hundreds of poems, which I dutifully relegated to hardbound composition notebooks, I wrote my second novel. I was in the ninth grade and I knew how to type. I’ll never forget the thrill of typing up that manuscript. Graduating from handwriting to typed text made me feel like a very serious writer. Before I graduated high school, I wrote a third novel and a collection of short stories, both of which I carefully typed up, copied and bound at a photocopy center.
I’ve often wondered what would have happened if I had kept that literary faith after going to college. What did happen was that I majored in literature, went to graduate school and began my career as a literary critic. The old dreams of being a writer of novels and poems were replaced by dreams of being a published literary critic, an author of scholarly articles and monographs that would draw the interest of my peers. My whole life had been about reading and self-expression, and now, as a professor of literature, I wanted – no, I needed – to express myself and be read. I began writing articles and did quite well. It was exhilarating. Then I faced the herculean task of shaping a book from the inchoate mass of my dissertation. It took me six or seven years, and two separate tenure clocks, to complete it. My book, I’m proud to say, was personal, original, and timely. I dreamed that it would be read, that it might matter. I never thought I would wonder if writing my book was really worth it.
There were many things my mentors never told me about being an academic. I was never taught how to write a book (as opposed to a dissertation), or warned about the protocols, timelines and politics of trying to get a book published. No one ever spoke to me about what it might mean to publish with a second-tier university press, get one bad review and not really be read as much as you might hope. I knew that academic writers could be stars, and that some never got published, or published bad books that no one cited. But I did not know about the vast corpus of middling, pretty-good (or better) books and authors, which for a variety of reasons, justified or not, simply don’t make much of an impact or a difference. That’s a special kind of purgatory that graduate students and assistant professors don’t hear too much about. Well, I’m something of an expert in this subject. The story of my first book is not unlike that of a long-suffering, sympathetic character in a Dickens novel who quietly endures a series of slights, injustices and betrayals, but without the cathartic redemption or resolution that sublimates her mournful journey.
The good news was that I got my book published at a university press, not a top one, but a good one with a good backlist. The bad news was that my book would not be published in affordable soft cover, but in a more expensive library edition, meaning that no graduate student would ever buy my book the way I bought so many books as a student. My book would not sit on the crowded bookshelves of a studio apartment in a college town while someone pondered a dissertation or argued the finer points of theory with some friends. But that was OK. As long as it got into libraries, that would be fine. There might not be many notes in the margins but it would still be read. Then my press required me to change the title of my book to something flatter, more descriptive, to help sell copies to libraries, I suppose. My tenure clock was running out; what could I do? I let them do it. And I even made my peace with it, believing that "If you build it, they will come." I waited for them to come.
Unless you publish with a top-tier press, and your book makes a big splash, don’t expect much fanfare in the years that follow the publication of your book. There will not be release parties at conference exhibit halls, posters, “buzz” or anything like that. Very few authors get that experience. For two or three years, it was as if my book did not exist. Then three reviews appeared. One slammed me, another was somewhat positive, and the third was embarrassingly short and uninformative. Within the echoing silence of the publication of my first book came the first whispers of feedback, and it was pretty clear: My book was interesting, it had glimmers, but it was mediocre. (I don’t agree with that assessment, but I’m just trying to report events as honestly as possible.) Anyway, that hurt a lot.
Then something pretty surprising happened. I realized that some of those who really should have read my book were not interested in it. The first such person was a graduate student I was advising whose dissertation intersected with my book’s subject matter. I guess she never cracked the book to notice its table of contents. Then an acquaintance of mine tried to publish a book on the exact same subject without mentioning me at all. Let’s say, for the sake of illustrating my point, that my book was the first ever and only book on hats in literature. This fellow, who knew about my book, had his own book manuscript on hats in literature and he wanted me to help him leverage its publication, despite the fact that he could not be bothered to cite me once. And still, he asked me to write the preface to his book. I said no. But several other scholars (my peers) stepped in to blurb the book. My favorite one praised his book on hats for "filling the void" on the subject matter of hats in literature.
Still, I believed in my book. It was original and different, the first book on hats in literature! I was confident people would discover it eventually, and, in the end, redeem it by mentioning it. I put a few excerpts online, and proceeded to take my scholarly interests in a new direction. It was out of my hands.
A few more years went by. Finally, the tide turned. My book started to make its way into other books and articles, sometimes in surprising, unexpected ways. Most mentions were painfully cursory, an afterthought, a professional formality. Several citations of my work made it clear that the authors had never read my book but only the excerpts I had put online. One clever peer wove together materials from my Web site with a review that was posted online and created a credible paragraph that distorted my original argument. In fact, one or two others came painfully close to attributing my "contribution" to an online reviewer who summarized my book. This is how my bid to use the Web to promote my scholarship backfired. In this age of Web research, even scholars would rather not order a book through interlibrary loan as long as they can pretend they have read it. Who was I to think that in this postmodern age, citations would be anything else than simulacra? But I digress.
There have been two substantial engagements with the contents of my book, mainly in footnotes. I was grateful and felt somewhat redeemed, but was this the best I could hope for? What did I want or need to feel like my work mattered? It’s embarrassing to answer this question, but here goes: I needed someone to recognize my work in the body of their scholarship, explicitly, not via Web "CliffsNotes" or a cursory footnote. I did not need a page-long discussion of my work, that’s too much, but just something that would say my Little Dorrit of a book had existed and deserved to be mentioned out in the open. Four sentences, out in the open, would do it. Failing that, I could settle for a footnote longer than one line, a few lines on what I had labored over for so many years. I think that would do it for me. Really.
I’m luckier than most. My book is appearing on people’s radars. It may just be a blip, but it’s there. There are a few people interested in hats in literature, apparently. I’m not a total failure -- far from it. On the contrary, I got tenure on the shoulders of this book and recently some presses have asked me to blurb other books (like that book on baseball caps and the other one on the representation of heads in literature). It feels like a sham to be treated like someone important when my book is so marginal or superfluous. But that’s fine; I’m not going to turn down such publicity.
So, I’m an arrogant ass, or a narcissist. Let me steal the thunder of the readers of this piece. But someone needs to speak up for all the books that have been undeservedly shunted aside, maligned or marginalized. Someone needs to say what many published authors already know: Being an author is not all it’s cracked up to be. It can be a lot lonelier and more painful than you might expect. You pour your life into this thing, you parch yourself dry, and then all you can squeeze back into your sandy mouth is a few drops of moisture.
I’ve moved on. I’ve adjusted my expectations. I’ve done a reality check. My book pops up here and there and my name is out there. My articles are read and cited, sometimes repeatedly. That’s a lot more than what many of my peers have achieved. A little bit of gratitude is in order. I know.
What’s hard for me now is not the reception of my first book, but the motivation to write my second one. For years I’ve been publishing articles and editing books, but the time has come to buckle down and build the centerpiece of my case for full professor. I need to motivate myself to write another book that maybe will not make much of a difference, all over again. It’s hard to work up the gumption to do that. Another part of me, however, sees it differently. Writing a book, even an academic book destined to have very few readers, is no small feat of creation. My second monograph may or may not be important to others, but it will be written with passion and integrity. If I succeed in recovering the smallest part of that nine-year-old boy who could write, happily, for himself alone, I know that my second book will be something that I can be proud of, like my first book. There’s something zen and honest about that. Almost liberating. Now I just need to make it happen, one last time.
Peter Dorchester is the pen name of an associate professor in the humanities at a large university in the South.
Submitted by Tom Deans on February 4, 2010 - 3:00am
Just recently I got a set of teaching evaluations for a course that I taught in the fall of 2008 -- and another set for a course I taught in 2006.
This lag wasn't the fault of campus mail (it can be slow, but not that slow). Instead, the evaluations were part of a small experiment with long-delayed course assessments, surveys that ask students to reflect on the classes that they have taken a year or two or three earlier.
I've been considering such evaluations ever since I went through tenure a second time: the first was at a liberal arts college, the second two years later when I moved to a research university. Both institutions valued teaching but took markedly different approaches to student course evaluations. The research university relied almost exclusively on the summary scores of bubble-sheet course evaluations, while the liberal arts college didn't even allow candidates to include end-of-semester forms in tenure files. Instead they contacted former students, including alumni, and asked them to write letters.
In my post-tenure debriefing at the liberal arts college, the provost shared excerpts from the letters. Some sounded similar to comments I would typically see in my end-of-semester course evaluations; others, especially those by alumni, resonated more deeply. They let me know what in my assignments and teaching had staying power.
But how to get that kind of longitudinal feedback at a big, public university?
My first try has been a brief online survey sent to a selection of my former students. Using SurveyMonkey, I cooked up a six-item questionnaire. I'm only mildly tech-savvy and this was my first time creating an online survey, but the software escorted me through the process quickly and easily. I finished in half an hour.
Using my university's online student administration system, I downloaded two course rosters -- one from a year ago, one from three years ago. I copied the e-mail address columns and pasted them into the survey. Eight clicks of the mouse later I was ready to send.
I sent the invitation to two sections of a small freshman honors English seminar I teach every other year. This course meets the first-year composition requirement and I teach it with a focus on the ways that writing can work as social action, both inside and outside the academy. During the first half of the semester students engage with a range of reading -- studies of literacy, theories of social change, articles from scholarly journals in composition studies, short stories and poems keyed to questions of social justice, essays from Harper’s and The New York Times Magazine, papers written by my former students -- and they write four essays, all revised across drafts. During the latter part of the semester students work in teams on service-learning projects, first researching their local community partner organizations and then doing writing projects that I have worked out in advance of the semester with those organizations.
I taught the course pretty much the same in fall 2008 as I did in fall 2006, except that in 2008 I introduced a portfolio approach to assessment that deferred much of the final paper grading until the end of the course.
Through my online survey I wanted to know what stuck -- which readings (if any) continued to rattle around in their heads, whether all the drafting and revising we did proved relevant (or not) to their writing in other courses, and how the service experience shaped (or didn't) any future community engagement.
My small sample size -- only 28 (originally 30, but 2 students from the original rosters had left or graduated) -- certainly would not pass muster with the psychometricians. But the yield of 18 completed surveys, a response rate of over 60 percent, was encouraging.
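For the curious, the arithmetic behind those figures is simple enough to verify in a few lines. This is just an illustrative check of the numbers reported above, not anything from the survey itself:

```python
# Verify the reported survey numbers: 30 on the original rosters,
# 2 unreachable (left or graduated), 18 completed surveys.
original_roster = 30
unreachable = 2
completed = 18

invited = original_roster - unreachable   # 28 students actually surveyed
response_rate = completed / invited

print(f"Invited: {invited}")
print(f"Response rate: {response_rate:.1%}")  # 64.3%, i.e. "over 60 percent"
```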
I kept the survey short -- just six questions -- and promised students that it would take five to ten minutes of their winter break and that their identities would be kept anonymous.
The first item asked them to signal when they had taken the course, in 2006 or 2008. The next two were open-ended: "Have any particular readings, concepts, experiences, etc. from Honors English 1 stayed with you? If so, which ones? Are there any ways that the course shaped how you think and/or write? If so, how?" and "Given your classwork and experiences since taking Honors English 1, what do you wish would have been covered in that course but wasn't?" These were followed by two multiple-choice questions: one about their involvement in community outreach (I wanted to get a rough sense of whether the service-learning component of the course had or hadn't influenced future community engagement); and another that queried whether they would recommend the course to an incoming student. I concluded with an open invitation to comment.
As might be expected from a small, interactive honors seminar, most who responded had favorable memories of the course. But more interesting to me were the specifics: they singled out particular books, stories, and assignments. Several of those I was planning to keep in the course anyway; a few I had been considering replacing (each semester I fiddle with my reading list), and the student comments rescued them.
I also attended to what was not said. The readings and assignments that none of the 18 mentioned will be my prime candidates for cutting from the syllabus.
Without prompting, a few students from the 2008 section singled out the portfolio system as encouraging them to take risks in their writing, which affirms that approach. Students from both sections mentioned the value of the collaborative writing assignments (I'm always struggling with the proportion of individual versus collaborative assignments). Several surprised me by wishing that we had spent more time on prose style.
I also learned that while more than half of the respondents continued to be involved in some kind of community outreach (not a big surprise because they had self-selected a service-learning course), only one continued to work with the same community partner from the course. That suggested that I need to be more deliberate about encouraging such continuity.
In all, the responses didn't trigger a seismic shift in how I'll next teach the course, but they did help me revise with greater confidence and tinker with greater precision.
I am not suggesting that delayed online surveys should replace the traditional captive-audience, end-of-semester evaluations. Delayed surveys likely undercount students who are unmotivated or who had a bad experience in the course and miss entirely those who dropped or transferred out of the institution (and we need feedback from such students). Yet my small experiment suggests that time-tempered evaluations are worth the hour it takes to create and administer the survey.
Next January, another round, and this time with larger, non-honors courses.
Tom Deans is associate professor of English at the University of Connecticut.