Last year Temple University Press published Toby Miller's Blow Up the Humanities, a book that starts straining for provocation with its title and never lets up. The author is a professor of media and cultural studies at the University of California at Riverside. His preferred rhetorical stance is that of the saucy lad -- pulling the nose of Matthew Arnold and not fooled for a minute by all that “culture as the best which has been thought and said” jazz, man.
What we must recognize, his argument goes, is that there are two forms of the humanities now. What the author calls "Humanities One" (with literature, history, and philosophy at their core) is just the privileged and exclusionary knowledge of old and dying elites, with little value, if any, to today’s heterogeneous, globalized, wired, and thrill-a-minute world. By contrast, we have studies of mass media and communications making up “Humanities Two,” which emerged and thrived in the 20th century outside “fancy schools with a privileged research status.”
In the future we must somehow establish a third mode: “a blend of political economy, textual analysis, ethnography, and environmental studies such that students learn the materiality of how meaning is made, conveyed, and discarded.” Enough with the monuments of unaging intellect! Let the dead bury the dead; henceforth, culture must be biodegradable.
What I chiefly remember about Blow Up the Humanities, a few months after reading it, is exclaiming “What a cheeky monkey you are!” every few pages -- or at least feeling like this was expected of me. Otherwise it mostly seemed like vintage cultural-studies boilerplate. But one passage in the book did strike me as genuinely provocative. It takes the form of a footnote responding to Google’s claim of a "commitment to the digital humanities." Here it is, in full:
“In the United States, ‘the digital humanities’ can mean anything from cliometric analysis to ludic observation. It refers to a method of obtaining funds for conventional forms of Humanities One, dressed up in rather straightforward electronic empiricism. So counting convicts in law reports or references to Australia in Dickens becomes worthy of grant support because it is archival and computable.”
A scrawl in the margin records my immediate response upon reading this: “Cute but misleading.” But now, on second thought… Well, actually “cute but misleading” pretty well covers it. The caricature of the digital humanities might have been recognizable a dozen years ago, though just barely even then. What makes Miller’s polemical blast interesting is the angle of the assault. For once, a complaint about the digital humanities isn’t coming from traditionalist, semi-Luddite quarters -- “traditionalist” with regard to the objects of study (i.e., books, manuscripts, paintings) if not necessarily the theories and methods for analyzing them.
On the contrary, Miller regards video games as a rich cultural medium, both profitable and profound. To shore up his claims for Humanities Two (or, fingers crossed, Three) he finds it useful to pretend that the digital humanities will, in effect, take us back to the era of professors tabulating Chaucer’s use of the letter “e.” The scholarship will be more efficient, if no less dull.
Now, I have no interest in impeding the forward march of Angry Birds studies, but there is no way that Miller doesn’t know better. The days when humanities computing was used to count dead convicts are long gone. Much more likely now would be a project in which all of the surviving files of Victorian prisons are not simply rendered searchable but integrated with census data, regional maps, and available documentation of riots, strikes, and economic trends during any given year.
MLA is a major component of the Humanities One infrastructure, of course, but has enough Humanities Two people in it to suggest that the distinction is anything but airtight. And while Miller pillories the digital humanities as nothing but “a method of obtaining funds for conventional forms of Humanities One,” even old-school philological practice takes on new valences in a digital environment.
“In the humanities,” write Charles Cooney, Glenn Roe, and Mark Olsen in their contribution, “scholars are primarily concerned with the specifics of language and meaning in context, or what is in the works. [Textbases] tend to represent specific linguistic or national traditions, genres, or other characteristics reflecting disciplinary concerns and scholarly expertise.… [T]extbases in the digital humanities are generally retrospective collections built with an emphasis on canonical works in particular print traditions.”
So far, so Humanities One-ish -- with only the neologism “textbase” to show that much has changed since Isaac Casaubon’s heroic proof that the Corpus Hermeticum wasn’t as ancient as everybody thought. Textbase just means “collection,” of course. For that matter, the options available in textbase design (the ways of annotating a text, of making it searchable, of cross-referencing it with other items in the textbase or even in other textbases) are basically high-tech versions of what scholars did four hundred years ago.
Alas, what Casaubon could do alone in his study now requires an interdisciplinary team, plus technicians. But he did not have the distractions we do.
If digital humanists were limited to converting cultural artifacts of the print era into textbases, that would still be useful enough, in its way. The classics aren’t going to annotate themselves. But the warehouse is much larger than that. Besides the inherited mass of documents from the past 5,000 years, more and more texts are now “born digital.” Beyond warehousing and glossing such material, the digital humanities incorporate the changes in how people receive and engage with cultural material, as Alan Liu discusses in “From Reading to Social Computing,” his essay for the MLA anthology.
What Liu calls “the core circuit of literary activity” – the set of institutions, routines, and people involved in transmitting a poem (or whatever) from the author’s notebook to the reader's eyeballs – has been reconfigured dramatically over the past two decades. Besides making it possible to publish or annotate a text in new ways, the developing communication system transforms the culture itself. The digital humanist has to map, and remap, the very ground beneath our feet.
Nor is that a new development. Other papers in the anthology will give you a sense of how the digital humanities have developed over the long term -- beginning when Roberto Busa started using a computer to prepare an exhaustive concordance of Thomas Aquinas in the 1940s. At some point, the digital humanities will need to make one important change: dropping the word "digital."
The faculty in postsecondary education has changed so much in the last 20 years that it has been labeled a "revolution" by researchers who study the professoriate. More than two-thirds of the faculty providing instruction in nonprofit higher education are currently employed off the tenure track, and their numbers continue to rise. This shift alone may be cause for concern, but the real dilemma is that institutions have not developed a new faculty model or employment practices that are based on a realistic conception of the faculty and its composition. The faculty model currently in use has not been achieved through intentional and thoughtful planning. It is the haphazardly derived product of casual, short-term planning and reactive decision making amid constrained budgets; it reflects little thought or concern for its implications for student learning or enlightened employment practice.
Today, many faculty members have no job security or expectation of employment beyond the current term. Many do not receive benefits and their compensation is extremely low, averaging $2,700 per course, making it difficult to earn a living wage even when they can get consistent work. Sometimes, however, they cannot obtain a full course load. Institutional policies and practices often make them ineligible for unemployment when this occurs. Recent reporting has exposed that some faculty members are living on food stamps. Only 25 percent of non-tenure-track faculty have any form of health insurance, and even those covered often have less than adequate coverage.
Even basic forms of institutional support that could improve faculty performance -- and, by extension, enhance their capabilities to promote student learning -- are lacking. As a result of our failure to acknowledge and address the changing faculty, we have made it unnecessarily difficult for a majority of the faculty to do their jobs. Non-tenure-track faculty members – particularly part-time faculty members – often do not receive an orientation, professional development or mentoring, and they may even be excluded from faculty meetings. So they may not understand institutional goals, learn about pedagogies for effectively educating the students they teach, or have opportunities to strengthen their skills.
Only a very few are involved in curriculum design and governance, even though they may outnumber tenure-track faculty or teach a majority of the credit hours at their institutions. They typically lack office space and may not receive compensation for conducting office hours to support their students. Additionally, hiring decisions are routinely made at the last minute, often within days of a class beginning. Making matters worse, institutions do not always provide these faculty members with adequate materials or resources, including a sample syllabus, to help them to prepare on such short notice.
This model constrains faculty members’ ability to provide a quality learning environment and make their maximum contribution to educating students. There is now evidence that the poor working conditions we impose upon them have an adverse effect on student retention, transfer, and graduation rates, as well as other indicators of learning and student success. Much of the employment literature addresses the need for employees to be motivated and well-trained, but also to have access to basic resources, materials, supplies, and conditions that allow them to perform their duties. Adjuncts have been robbed of the opportunity to give their best effort for their students. With this evidence close at hand and the moral objections inherent in a model that would leave employees without a living wage or safety net becoming clearer, it seems there would be more significant outrage or at least concern within our academic community.
Adjuncts have been writing about their poor working conditions for years. They have done so with trepidation, as many commentators have demonized them as being the root of the problem, rather than recognizing the effects of this poor employment model or the conditions they endure. Yet they continue to lend their voices to the just cause of change.
Why have so few outside these ranks taken up this cause? While non-tenure-track faculty have been vocal in advocating for change, virtually no institutional, foundation, or policy leaders have acknowledged the hard realities of these conditions or expressed concern. In fact, in private, a few postsecondary leaders will note that they feel bad and think the model is morally bankrupt. In public, though, they often show no leadership, nor do they voice their objections to a model that surely cannot be sustained -- nor should it be.
As a result, institutions, foundations, and government pour billions of dollars into initiatives for completion and success, many of which cannot succeed because they fail to account for the faculty responsible for carrying out changes designed to improve the learning environment. Goals for improving access and outcomes are severely undermined. We can blame decreasing funding and external pressures. However, many institutions have had a choice and still shifted money away from instruction to fund other priorities. Others, particularly community colleges, are sometimes so lacking in resources that they have been given no options.
This cannot continue. Ours should be an ethical employment model with integrity – one that allows us to draw upon the strengths of all our faculty to create and sustain a high-quality learning environment to best serve students. Today, we raise these concerns; in a short time, so too will a public dissatisfied with the inaction and inattention of our leaders to these problems. So we invite leaders from across the country to join the Delphi Project on the Changing Faculty and Student Success not only in calling for changes, but in helping to create new solutions to this problem now – to challenge the status quo and advance a new employment model for higher education that has integrity.
We applaud the leaders who have joined us so far, including the Western Interstate Commission on Higher Education, Association of American Colleges & Universities, the New Faculty Majority, American Association of Community Colleges, American Federation of Teachers, League of Innovation, Council for Higher Education Accreditation, Association for Governing Boards, National Association for College and University Business Officers, State Higher Education Executive Officers, various disciplinary societies, and others (listed on our website). We hope you will visit our website and utilize the resources we have prepared to begin to address and move away from this unethical employment model.
Adrianna Kezar, David Longanecker and Daniel Maxey
Adrianna Kezar is a professor at the University of Southern California and director of the Delphi Project on the Changing Faculty and Student Success.
David Longanecker is president of the Western Interstate Commission on Higher Education.
Daniel Maxey is a doctoral student at the University of Southern California.
A recent news item cut me to the nib. Many public schools no longer teach cursive writing; 46 states no longer mandate that districts teach cursive in their language arts core curriculum. This comes from the mistaken logic that our keyboard-happy society has made cursive a relic of the past that students no longer need. Numerous public schools now teach only printing, and some don’t even bother with lower and upper case – just block letters. Roman Catholic schools still demand cursive, and good for them. For the foreseeable future, kids who don’t have cursive will be at a competitive disadvantage. I’m surprised parents aren’t on the pitchfork-and-torch brigade over this, but I’d like to suggest that college professors should be (especially if they have kids).
I’m no pen-wielding Luddite waxing rhapsodic about creativity flowing down the barrel of a pen, making allusions to a shared Western heritage, or discoursing on calligraphy as art. Like millions of Americans I hit the keyboard most of the time. Nor do I harbor fond memories of learning cursive. My grade school taught the Peterson Method, a system loaded with unnecessary curlicues, severe angles and precise slants. It required mind-numbing oval drills that began with a roomful of kids rotating their arms from the elbow down as the teacher chanted, "Round, round, ready, touch." We repeatedly penciled the same oval – points deducted for lines that strayed. I hated the Peterson Method and couldn’t wait to swap its silly W – a looped double-V – for a more efficient double-U construction. I was so bad at penmanship that even my sainted Pennsylvania grandmother called my handwriting "chicken scratchin'." These days I have a hand disorder that makes my scrawl closer to hieroglyphics. But I can read it with 90 percent accuracy and I can pen it very fast.
My defense of cursive is pragmatic, not aesthetic (though I covet elegant script). My first reason is discipline-specific. The humanities are more text-oriented than most math, computer science, and hard and experimental sciences. We humanities professors tend to demand more prose writing, our content is frequently more subjective, and an ability to take notes is essential. One unexpected consequence of cursive’s decline shows up among recent graduate students working in archives. Those unable to write cursively often experience difficulty reading the script of others. That was difficult enough in past times, but what we are seeing now is quasi-illiteracy in all things cursive. If a document hasn’t been transcribed, students won’t use it. Need I remind humanities professors how few documents have been transcribed?
A second problem lies with blue-book exams. Count me among those who find blue-book exams an imperfect way of assessing student achievement, but I doubt that they will become obsolete as long as class sizes soar rather than shrink. Large classes present logistical problems. Administrators want professors to be up-to-date, yet they saddle them with classroom structures akin to industrial-age assembly lines. Those with bulging classes of first-year students could assign take-home exams or papers, if they wished also to flunk half of the class for plagiarism. There are other options, but they are limited, which means that today’s college students are likely to take numerous blue-book exams. The results won’t be pretty.
Students swear they can type far faster than they can "write," by which they mean block-letter printing, and that’s correct. Then comes a blue-book exam and with it the instruction, "No, you can’t type this on your laptop." (If you allow that, you’d better have an army of test monitors to stare over shoulders.) Many students cannot fill an eight-page blue book in an hour, which means that their essays are superficial and are graded accordingly.
"Unfair!" they cry. "Incomplete," we reply. "We cannot assign a grade based on what you might have said." Is it unfair? No more so than a math class in which a professor insists that students do their own arithmetic rather than using a calculator. Or a computer scientist who tells students that the code they write must work at the end of the hour. There are numerous other situations that disallow computers, including the GREs, LSATs, and most licensing exams.
Problem three occurs when technology fails. Students use electronic devices so frequently that they’ve come to assume access. They’re often the same ones who don’t keep batteries charged, think professors come to class armed with extra power cords, and can’t imagine a classroom without empty electrical outlets with their names on them. Heaven help them if their laptops run out of juice in the middle of a class. You know what most of them do? Nothing! The best students try to focus, hoping they will retain enough information to transfer it to their computer once it’s recharged. Try that and tell me how well it works. Almost none open their backpacks and pull out pen and notebook. The weakest students ask me to put my notes on the class website. I know that some of you do that, but I refuse.
Problem four is among the reasons I won’t. "Good listening skills" generally rank high in lists of what employers desire of new hires. There are still jobs where one cannot use technology all the time. Journalism – even for e-zines – is one of them. I have done freelance music journalism for decades. When I can, I use a recorder and a laptop. But I have conducted interviews in backstage green rooms as noisy as a chorus of jackhammers, in the back of buses, on the street, at the side of stages, and in various other situations where the only thing that makes sense is pad and pen.
Journalism isn’t alone. One business leader tests perspective candidates by devising mock scenarios. Candidates must jot down information – no machines allowed – as the interviewer rattles off details, and the candidate must come up with a plan to address the problem. The point isn’t revelatory problem-solving; it’s a test of listening and short-term memory. Why? Because sometimes you simply need to take notes on the fly – a supervisor barks out an assignment, one is trapped in a no-gadgets environment, verbal directions are given to someone who is lost, or you need to focus on a client, not a screen. (Realtors, doctors, caseworkers, therapists….)
Problem five is one of keeping up. An accomplished typist cranks out 60-80 words per minute (WPM). I can write faster than that even with my bad hand. To hit 60 WPM, you need to know how to touch-type, another skill that most students never acquire. I often observe students struggling to keep up. Sometimes I can slow down, but there’s not much to be done during discussions or AV presentations. Just as violin players can play faster than cellists because they don’t have as much instrument to cover, so too can a cursive writer scribble on paper faster than typists can traverse a 12-18 inch keyboard (especially if one is a hunt-and-peck typist).
There are limitations even in nontraditional classrooms. The latest rage is the "flipped" classroom, in which students refine what would normally be called “homework” in class. One historian assigns questions to answer in writing outside of class and devotes class time to asking students to divulge, discuss, and expound upon their answers. They hand in their prewriting and keep a second copy for themselves. On that copy they take notes based on class responses, as this is all they can use to complete papers and exams. The pace is rapid – answer, redirection, and discussion until depth is achieved. When I asked if all students keep up, I was told, "No. And that’s not my problem. It’s not a remedial course."
Finally, computers can be deadly to discussion and deep comprehension. A mind focused on a screen is less actively engaged with live speakers, be they professors or student peers. (And that’s before other temptations from the World of Wireless intrude. Try reading a single e-mail and see how long it takes to refocus on an active discussion.) Many students are great at retrieving information, but extremely slow in analyzing it, partly because they fail to grasp the connective tissue that relates one bit of information to another. The more distracted they get, the less likely they are to find that tissue. Some educational psych studies claim the physical act of writing produces better comprehension than typing. That’s not my field, but it rings true.
Again, I’m not a technophobe. But I do think those declaring the death of cursive are wrong -- at least for the immediate future. Today’s world depends increasingly upon flexibility, suppleness, and adaptability. I simply see no benefit in retiring cursive, and the potential for harm looms large. It’s no fun to teach or practice. Meh! I didn’t like learning multiplication tables, conjugating verbs, or discovering how to decode the periodic table of elements, but they were good medicine.
College professors should deliver the message that the decline of cursive reduces student chances for success. Our new Secretary of the Treasury, Jack Lew, might be able to get away with scrawling gibberish across a page, but ask yourself: Would you hire some kid who can’t sign his or her own name?
Rob Weir teaches history at Smith College. He is the author of Inside Higher Ed's "Instant Mentor" career advice column.
In 1892, the president of Leland Stanford University, David Starr Jordan, managed to convince Ewald Flügel, a scholar at the University of Leipzig, to join the young institution’s rudimentary English department. Flügel had received his doctoral degree in 1885 with a study of Thomas Carlyle under the aegis of Richard Wülcker, one of the founders of English studies in Europe. Three years later, he finished his postdoctoral degree, with a study on Sir Philip Sidney, and was appointed to the position of a Privatdozent at Leipzig.
The position of the Privatdozent is one of the most fascinating features of the modern German university in the late 19th century. Although endowed with the right to direct dissertations and teach graduate seminars, the position most often offered only the smallest of base salaries, leaving the scholar to earn the rest of his keep from students who paid him directly for enrolling in his seminars and lectures. In a 1903 Stanford commencement speech Flügel warmly recommended that his new colleagues in American higher education embrace the Privatdozent concept:
What would the faculty of Stanford University say to a young scholar of decided ability, who, one or two years after his doctorate (taken with distinction), having given proof of high scholarly work and spirit, should ask the privilege of using a certain lecture room at a certain hour for a certain course of lectures? What would Stanford University say, if – after another year or two this young man, unprotected but regarded with a certain degree of kindly benevolence […], this lecturer should attract more and more students (not credit hunters), if he should become an influence at the university? What if the university should become in the course of years a perfect hive of such bees? […] It would modify our departmental boss-system, our worship of "credits," and other traits of the secondary schools; it would stimulate scholarly life at the university; it would foster a healthy competition in scholarly work, promote survival of the fittest, and keep older men from rusting.
Unabashedly Darwinian, Flügel was convinced that his own contingent appointment back in Germany had pushed him, and pushed all Privatdozenten, to become competitive, cutting-edge researchers and captivating classroom teachers until one of the coveted state-funded chair positions might become available. He held that the introduction of this specific academic concept was instrumental in furthering the innovative character and international reputation of higher education in Germany. Flügel himself had thrived under the competitive conditions, of course, and his entrepreneurial spirit led him to make a number of auspicious foundational moves: He took on co-editorship of Anglia, today the oldest continually published journal worldwide focusing exclusively on the study of “English.” And he founded Anglia Beiblatt, a review journal that quickly established an international reputation.
Despite his formidable achievements, however, he could not secure a chair position as quickly as he hoped. Since he was among the very few late 19th-century German professors of English who possessed near-native proficiency, he began to consider opportunities overseas. Even the dire warnings from a number of east coast colleagues ("the place seems farther away from Ithaca, than Ithaca does from Leipzig"; "they have at Stanford a library almost without books") could not scare him away. Once he had begun his academic adventure in the Californian wilderness, he took on a gargantuan research project, the editorship of the Chaucer Dictionary, offered to him by Frederick James Furnivall, the most entrepreneurial among British Chaucerians and founder of the Chaucer Society. As soon as he took over from colleagues who had given up on the project, he found, in this pre-computer age of lexicography, "slips of all sizes, shapes, colors, weights, and textures, from paper that was almost tissue paper to paper that was almost tin. Every slip contained matter that had to be reconsidered, revised, and often added to or deleted.”
Undeterred by this disastrous state of affairs, he decided to resolve the problem with typically enterprising determination: Although grant writing was uncharted territory for him, he applied for and secured three annual grants for $7,500 and one for $11,000 (altogether the equivalent of at least $300,000 in today’s money!) from the Carnegie Foundation for the Advancement of Teaching between 1904 and 1907 "for the preparation of a lexicon for the works of Geoffrey Chaucer," bought himself some time away from Stanford, and signed up a dozen colleagues and students in Europe and North America to assist him in his grand plan.
His and their work would become the foundation of the compendious Middle English Dictionary which now graces every decent college library in the English-speaking world and beyond. Beyond the work on the Chaucer Dictionary, the completion of which he never saw because of his sudden death in 1914, he maintained an impressive publication record and served in leadership positions such as the presidency of the Pacific Branch of the American Philological Association. When Flügel passed away, his American colleagues celebrated his "enthusiastic idealism" and remembered him as "more essentially American" than the other foreign-born colleagues they knew, an appreciation due to his entrepreneurial spirit.
I am relating this story to counteract the often defeatist chorus sung by colleagues in English and other humanities departments when confronted with a request, usually from impatient administrators in more grant-active areas, for at least giving grant writing and other entrepreneurial activities a try. There is no doubt that, compared to the situation in most other Western democracies, government support through the National Endowments for the Humanities and Arts is small in the U.S. On the other hand, the number of private foundations, from the American Council of Learned Societies through the Spencer Foundation, makes up for some of the difference.
In my experience, what keeps the majority of English professors from even considering an involvement with entrepreneurial activities is that they deem them an unwelcome distraction from the cultural work they feel they have been educated, hired, and tenured to do. Most grant applications require that scholars explain not only the disciplinary, but also the broader social and cultural relevance of their work. In addition, they entail that scholars put a monetary value on their planned academic pursuits and create a bothersome budget sheet, learn how to use a spreadsheet, develop a timeline, and compose an all-too-short project summary, all grant-enabling formal obstacles many colleagues consider beneath the dignity of their profession.
In fact, many of us believe that the entire discipline of English and the humanities in general may have been created so as to counterbalance the entrepreneurial principles and profit motives which, from within the English habitat, seem to have a stranglehold over work in colleges of business, computing, engineering, and science. However, by making English a bastion of (self-)righteous resistance against the evil trinity of utilitarianism, pragmatism, and capitalism, English professors have relinquished the ability to be public intellectuals and to shape public discourse. After all, too many of our books and articles speak only to ourselves or those in the process of signing up to our fields at colleges and universities.
Ewald Flügel labored hard to remain socially and politically relevant even as he was involved in professionalizing and institutionalizing the very discipline we now inhabit. Recognizing that the skills and kinds of knowledge provided by his emerging field were insufficient for solving complex real-world issues, he became a proponent of a more co-disciplinary approach to academic study, a kind of cultural studies scholar long before that term was invented. Most of us would agree that he applied his formidable linguistic and literary expertise to a number of problematic goals, speaking to academic and public audiences about how the steadily increasing German immigration and the powers of German(ic) philology should and would inevitably turn the United States into an intellectual colony of his beloved home country. However, even if his missionary zeal reeks of the prevailing nationalist zeitgeist, I can appreciate his desire to experiment, innovate, and compete to make the study of historical literature and language as essential to the academy and to humanity as did his approximate contemporaries Roentgen, Eastman, Edison, Diesel, Marconi, and Pasteur with their scientific endeavors.
Perhaps his example might entice some of us to revisit and even befriend the idea of entrepreneurship, especially when it involves NGOs or the kind of for-profit funding sources the Just Enough Profit Foundation might define as (only) "mildly predatory" or (preferably) "somewhat," "very" and "completely humanistic." At the very least, Flügel’s biography provides evidence that today’s prevailing anti-entrepreneurial mindset has not always been among the constitutive elements defining the "English" professoriate.
There are encouraging signs that some colleagues in English studies have begun to abandon that mindset: George Mason University’s Center for Social Entrepreneurship (directed by Paul Rogers, a professor of English) and the University of Texas consortium on Intellectual Entrepreneurship (directed by Richard Cherwitz, a professor of rhetoric and communication) generate promising cross-disciplinary collaboration between the academy and society; English professors at Duke, Georgia Tech, and Ohio State, funded by the Bill & Melinda Gates Foundation, are among the national leaders testing the pedagogical viability of the controversial massive open online courses (MOOCs); and Elliott Visconsi of the University of Notre Dame and Katherine Rowe of Bryn Mawr College created Luminary Digital Media LLC, a startup that distributes their "Tempest for iPad," an application designed for social reading, authoring, and collaboration for Shakespeare fans at various levels of education. I believe Ewald Flügel would find these projects exciting.
Richard Utz is professor and chair in the School of Literature, Media, and Communication at the Georgia Institute of Technology.