Around this time last year, I noted with interest -- via that late-night bastion of (t)reason, "The Colbert Report" -- that the color-coded terror index, which has been an ongoing but perhaps absurd part of our recent lives, was being retired.
That index, which charts the level of terror threat, painted a fairly redundant rainbow of fear from green (mild -- having a cup of tea with radical groups who have given up all thoughts of violent protest) to red (extreme -- hunkered down hoping to be somewhere just outside the blast radius, gripping a bootleg Kalashnikov in case Al Qaeda knocks on the door).
The index had a largely hostile audience among many commentators and the American public, because it seemed to do little other than engender, if I might borrow some words from WB Yeats, a “terrified, vague” panic. It wasn’t particularly informative or functional. To select Green would have been an act of folly, a kind of professional or political hubris no career could possibly survive if anything went wrong. Red, on the other hand, was little more than an admission of blind, hand-wringing panic. That left an elevated, cautious alert, the familiar yellows and oranges of our recent years, as the wise and inevitable response to a pervasive and undisclosed terror threat. And who needs a color chart to encourage that?
I grew up in Britain during the Irish Republican Army terror campaigns of the '70s and '80s, and I was only really made aware of my innate wariness when I moved to the United States in the '90s, and discovered a society largely unaware of the implications of terror in the domestic space.
But that has now largely changed by virtue of experience. The American public is now schooled in the anxieties of terror, by spectacular instances of it, whether home-grown in Oklahoma City or international in New York, and via the ever-hungry cable news cycle.
And, unfortunately, because of those experiences, it has been forced into becoming an old hand at watchfulness. So, the terror alert, practically redundant, seemed largely an object lesson in the mass-marketing of anxiety: ultimately inducing elevated panic, some have posited, as a form of control.
Stephen Colbert’s thought-provoking suggestion noting the paralyzing effect of fear-mongering was that the retiring color code be updated and changed simply to "Quiet," or "Too Quiet."
But while those terror charts are now a thing of the past, a relic of older ways of thinking, destined to gather dust covertly in some office space, now otherwise empty because of austerity measures, or perhaps to swim peacefully in the quiet backwaters of a frustrated think tank, I think there is a way for us to recycle, reuse and restore them — in higher education, and particularly in the liberal arts.
I think there is a place for them in the halls of academe, where we can mount the charts prominently close to office and teaching spaces. Of course, they should be edited for context. While academe has often played out as a context for personal or political terror, it seems more appropriate if the new fear index should be financial (and, paradoxically, thereby intellectual).
Green could read "low or no risk of financial exigency," though of course those days, if they ever existed, are long gone. I read about them, as a doctoral student in England, in David Lodge novels about the academic Promised Land in America, as one might read of the unicorn. Blue, a relaxed but guarded watchfulness, would mean that the conference expenses are still covered, but we might, collegially, forego dessert on the expense report. Elevated yellow tells us that it's time for faculty to tighten their belts, and forget about cost-of-living raises, or bold capital investment. Orange, that dreaded state of high financial exigency, might indicate furloughs, lost tenure lines, or even disappearing programs. Of course anything is possible, plausible or permissible when it hits “severe” red.
I can imagine that there are institutions and administrations all over which would gladly jump at the opportunity to place these prominently on display because of the chilling effect of fear, and the “terrified, vague” paralysis in which it often results. It would be a strong, visceral reminder that faculty should be quiet and grateful that we have our jobs. It would keep us meek while critical inroads are being made into faculty governance, academic standards, tenure, and full-time instruction.
It would stymie protest while traditional and essential parts of the academy like philosophy, history, literature and languages revert to a service function for a notional "liberal arts" core while students otherwise pursue their vocational certification in nursing, or air conditioning repair, or increasingly popular faddish pseudo-subjects like leadership.
As an industry we would put up with the growth of adjunct instruction, the increase in class sizes, the loss of civility in discourse, even the loss of discourse itself, along, of course, with the loss of instruction and instructors, to inanities like third-party computer software — as though something that belongs in the university were little more than checking off the boxes in a defensive driving class. Why, with one of those charts on the wall, we could replace entire language departments with Rosetta Stone, and entire English departments with carefully selected PBS programming. Why, even biology departments could be lost to interactive screenings of "Shark Week," and no one would be willing to say a word. Such is the effect of terror.
I don't mean to downplay the realities of the need for austerity as a response to real financial urgency. Just like the terrorist threat, it's out there, but it’s often not everywhere it seems to be. No doubt all of it would be easier to bear, however, if we had those advisory charts on our walls — even the steady gains of both administrative salaries, and administrative support staff, even in the midst of a profound state of financial exigency. After all, we want the very best to lead us out of this kind of financial crisis. And we all know, except in cases of faculty, of instruction, of academics, and, of course, of education in general, if you want the very best then it’s going to cost you.
David Mulry is chair of English and foreign languages at Schreiner University.
Established orthodoxy indicates that the ideal pedagogical method centers on small, discussion-based classes. Such a model enables "active learning" that, coupled with on-the-spot guidance from a skilled faculty member, is much more likely to change deep thought patterns than traditional lecture-based approaches. The emphasis shifts from the assimilation of content (and its regurgitation) to learning how to learn — how to be a better reader, how to think more critically and creatively, how to collaborate with others in the task of learning.
Few would doubt that this model sounds very appealing. Yet the experience of many educators inclines them to believe that it is unrealistic. Students, it seems, are generally too unmotivated to make it work. As a result, discussion falls into the all-too-familiar patterns: a mostly silent classroom tunes out as the same students dominate the conversation, more intent on getting “participation points” than advancing anyone’s understanding. Even worse, students apparently don’t regard the discussion model as an ideal and often actively prefer lectures. After all, why should they have to listen to the free associations of their peers when they’re paying a lot of money to have access to an expert? Thus, teaching evaluations often push us away from current thinking on “best practices.”
My early experience in teaching also led me to believe that the discussion model was overhyped. Yes, it could work in grad school or even in upper-level courses — but first-year students just didn’t know enough. Everything changed, though, when I started teaching at Shimer College, a small liberal arts institution in Chicago with a distinctive discussion-centered pedagogy based on a Great Books curriculum. Within the first few weeks of teaching there, I realized that the central problem with the pedagogical ideal of small, discussion-based classes is that hardly anyone is really doing it. Many pay lip service to it, but administrative pressure to increase class sizes and a lack of buy-in from faculty ensures that the ideal model always remains a supplement to more traditional methods.
What Shimer’s approach showed me is that if you’re going to do a discussion-centric model, it has to be the main event. I think of the skills required to succeed in such a pedagogical model as a foreign language—you can’t learn them in a handful of supplementary discussion sections per week. The very best way to learn them, of course, is through immersion. The way this works at Shimer is that every single class, from day one, is a small, discussion-centric class, where class participation accounts for roughly half of a student’s final grade. There is no way to “opt out” of the hard work of discussion: students have to figure out how to learn in this style if they are going to succeed at all.
First-year courses can certainly be difficult, though the amount of progress from the first to the second semester is often remarkable. All the familiar pitfalls of class discussion make an appearance: the vague free-association, the off-topic remarks, the sense of competing monologues that don’t quite come together into a real conversation. The faculty’s primary job in these class sessions isn’t so much to supply content as to help students get over these problems and become productive participants.
The key to cultivating a productive discussion, in my view, is Shimer’s Great Books curriculum. Many associate such curricula with cultural conservatism and a narrow focus on “dead white males,” but that is misleading. For me, the importance of the model stems from three crucial pedagogical advantages. First, it provides a center of reference and authority for the classroom other than the professor — or the students’ own personal opinions. The standard for whether students are on-topic is whether they can support their views from the text, and the standard for whether a remark is helpful is whether it advances our understanding of the text. Second, the emphasis on reading primary source texts means that the texts reward and require discussion. In contrast to a textbook or an introductory secondary source, primary sources don’t come “pre-digested” and must be worked on.
Third, it allows us to get past the dreaded “why are we reading this” syndrome: the model guarantees that the texts under discussion are always widely agreed to be worthy of attention. The exact configuration of core texts of course varies from school to school. St. John’s College, for example, one of the leading Great Books institutions in the country and an indispensable point of reference for all such programs, takes a basically chronological approach in its core reading list, with a strong emphasis on classical antiquity and without much concern for disciplinary boundaries — but still with considerable diversity, particularly in the modern period. Shimer’s core curriculum follows what’s known as the Hutchins model, which is divided into the three primary disciplinary areas of humanities, social sciences, and natural sciences, and includes a greater emphasis on more contemporary works. Other colleges use other models, but the shared feature is a concern to choose texts that students will agree that one “should” read — with no requirement that any text be written by someone who is dead, white, male, or any of the above.
The combination of the discussion model with the use of primary texts creates a situation where students are forced to take responsibility for their own education. Instead of getting the material pre-digested in the form of lectures and introductory textbooks, they have to grapple with it themselves. Class sessions then become a chance to actively work on the text with an experienced faculty member and a group of similarly-motivated peers. To me, the real turning point in a Shimer education comes when students come to fully understand this and hold themselves and each other accountable for their contributions. Things don’t automatically go smoothly after that point — people are people, after all, and personalities are bound to clash in unpredictable ways — but the older students generally take an active role in working to solve the problems that do arise, rather than tuning out when things don’t go to their liking.
All this leads me to believe that when the ideal model is used in a thorough-going, uncompromising way, it really is ideal. Yet I can already anticipate an objection: this all sounds great, but it would cost too much. In an era when colleges and universities are constantly trying to cut costs through large classes and online education, Shimer’s approach admittedly may seem unrealistic. I believe, however, that it’s not a matter of “cost” in an abstract sense, but rather a matter of priorities. At Shimer, the priority is classroom instruction, and everything else takes a back seat to that. We have no athletic programs, a relatively low number of administrators (with academic administrative responsibilities rotating among current faculty members), and no buildings to maintain (we lease space from the Illinois Institute of Technology). Faculty salaries are lower than average, but aside from a handful of courses (taught by semi-retired faculty members or administrators with academic expertise), all teaching is done by full-time faculty. Overall, Shimer manages to remain faithful to its model while keeping tuition levels comparable to other small liberal arts schools — without having the luxury of a large endowment.
Another possible objection is that the outcome is ideal because Shimer students are ideal — this would never work at a less selective institution. It is true that Shimer students, like the students at basically all small liberal arts colleges, tend to be more privileged by most measures. Even more crucial, in my view, is the fact that Shimer’s student body tends to be very self-selecting: students are very clear about what the college is offering, and they aren’t going to attend if they aren’t interested in our pedagogical model.
I’ve spoken of the lack of faculty buy-in at other institutions, but I think this points to an even more important factor: student buy-in. If students don’t care, if they’re enrolled for utilitarian reasons and have no intrinsic love of learning, they will most likely wind up failing — and dragging the class down with them. Hence it seems to me that less-selective institutions could offer an optional program for interested students, much like those at two of the City Colleges of Chicago (Harold Washington and Wilbur Wright Colleges). Shimer has worked with Harold Washington in particular for many years, and several of their Great Books students have ultimately finished their four-year degrees at Shimer as a result. Many other community colleges around the country have found success with Great Books programs as well.
The more difficult problem, though, is what to do with students who have the motivation, but are less academically prepared. Shimer deals with this in part through an innovative scholarship program where students come to campus for a day to simulate the kinds of discussion and writing we require — and they can earn a full-tuition scholarship on the strength of their performance alone, regardless of their official credentials. However, one could argue that that merely allows us to reach students who really do already have the skills, but haven’t signaled those skills in the accepted ways. One might suspect that something similar is going on in community college programs, which often tend to attract the more precocious students.
One potential solution might be to organize the core curriculum, at least in the early stages, explicitly around difficulty or accessibility. This might mean starting with more contemporary works (Toni Morrison rather than Shakespeare, for example) or works with more immediate contemporary relevance (Foucault’s Discipline and Punish rather than Kant’s Critique of Pure Reason). It might also mean focusing on works that appeal with particular urgency to one’s target population. For instance, a Great Books program serving students on the South Side of Chicago might do well to lead off with great works in the African-American tradition, then branch into other intellectual traditions with which those works are in dialogue.
More broadly, a new Great Books program that aims to serve underprepared students should be bold and experimental, ruthlessly cutting works that fail to reach students and reaching in unexpected directions for those that do. If one needs to start with films and graphic novels in order to get the discussion started, even that shouldn’t be out of bounds if one embraces the view that the point of the Great Books curriculum isn’t solely to represent a particular vision of our cultural heritage, but to cultivate a collaborative learning environment that allows and requires students to take an active role in their own education.
Developing ways to make this type of curriculum more widely available is hugely important as a matter of justice — why shouldn’t everyone have the opportunity to try their hand at the “ideal” pedagogical model? On my more cynical days, I do agree with the view that there are some students who are simply never going to be motivated enough to do this kind of work, who are in college just because their parents are making them, or because they feel like they vaguely “should” be, or because they want to get a good job. Yet I don’t think it’s idealistic or unrealistic to assume that there are students who really do love learning and who are coming to college to pursue that love, at least in part.
In fact, I think we should ask ourselves whether our supposed “realism” about students’ abilities and motivations is foreclosing the possibility for students to really blossom. We should consider the possibility that it is precisely the more passive instructional methods that we “realistically” embrace that in part produce the “reality” (boredom, instrumentalization of learning) that those methods are supposedly responding to. Under different circumstances, perhaps even some of my best Shimer students could have wound up resigning themselves to tuning out and resentfully waiting for the professor to just tell them what’s on the test — and by the same token, I suspect that some of those bored students could be successful in a model like Shimer’s if given the chance.
As James Baldwin draws to a close his 1949 essay, “Everybody’s Protest Novel” — eventually pitting Richard Wright’s Native Son against Harriet Beecher Stowe’s Uncle Tom’s Cabin — one reads: “But our humanity is our burden, our life; we need not battle for it; we need only do what is infinitely more difficult — that is, accept it. The failure of the protest novel lies in its rejection of life, the human being…” What I once heard in these words was a call to the humanities: Our humanities are our burden.
Rather than leave the humanities by the side of the road, what is more difficult is to accept the stacks and stacks of them at our disposal and find or make a use for them in the curriculum of liberal arts education. More to the point, Baldwin’s essay seemed a dare to me, personally. He dared me, while hewing syllabi on the South Side of Chicago, to make use of the protest novel (be it by Stowe or Wright), regardless of its alleged shortcomings. So, I assigned both Stowe and Wright in my next ethics class, which resulted in months of lively, original, and unplanned discussions comparing and contrasting both of them with various motifs in Friedrich Nietzsche’s Genealogy of Morals.
Most of the students were excited to read what they considered to be classics. What was more impressive and joyful to watch unfold was the way Wright’s and Stowe’s characters and narratives — regardless of what Baldwin sees as their shortcomings — gave the students a working and almost personalized vocabulary with which to interpret, analyze, and comprehend many of Nietzsche’s themes and insights. This allowed Nietzsche’s lofty rhetoric to seem a little less distant and, at least by way of Bigger Thomas, the transvaluation of values became applicable on 95th Street. The very idea that Stowe, Wright, and Nietzsche were not made to be read alongside one another, or that our doing so in this class was in some way strange, was never voiced.
I do not teach in a Great Books curriculum, but I am a believer. I am not ashamed of the Great Books. If teaching at St. John’s or Shimer College represents a certain institutionalized kind of Great Books orthodoxy, then I am at least a layman of the Great Books, if not an iconoclast, and maybe even a heretical reformer. Whether from my own personal hubris, naïveté or arrogance, I believe that it is part of my job to choose the Great Books or, at the very least, to choose what great texts could be used this semester to illuminate or drive home the key points or high notes of this particular class. If this means that I am destined to vulgarity by allowing myself to teach and read texts that may never grace the annals of official Greatness, it is only because the owl of Minerva begins its flight only at the fall of dusk.
Chicago State University is the oldest public university in the Chicago metropolitan area. Its student body is composed overwhelmingly of minority students, most of whom are black or African-American and, judging from the enrollments in my classes, the majority of whom are women rather than men. Many are products of the Chicago Public School system. In my philosophy classes at Chicago State I advocate a discussion-based classroom that some might call Socratic. The classes rely heavily on weekly intertextual adventures with primary sources. I do not assign textbooks and rarely indulge in the use of anthologies. Class discussion and participation account for almost half of the overall grade. Although the students do take a sizable and challenging exam and must write a term paper of moderate length, they learn quickly that their true homework is to read multiple primary texts and find connections between them in order to illuminate, criticize, or supplement those very texts.
This does — credo! — contribute to an overall improvement in the caliber of their writing. I don’t teach my students how to write, but rather try to teach them how to read and, as such, to succor a love and appreciation of lifelong reading. This approach appears to inculcate in the student a vocation of scholastic responsibility. They consider it their job as college students to read, learn and master — as much as such a thing is possible — as many "classics" as they can be exposed to during the precious reading time allowed to them during their college years (which I constantly remind them is a luxury that they will all-too-soon come to miss) and, further (perhaps I should add Baldwin’s phrase, “infinitely more difficult”) not just use that information to spit out book reports and answer trivia questions, but rather craft an intelligence from that information by finding or creating a way by which this canonical intelligence sheds insight and comprehension on other fields of study and other non-canonical approaches.
If reaching an understanding is what they want to get out of a class (a teleological or practical ambition, which I leave up to them to decide), they are obliquely invited to consider that if they cannot use this understanding to understand something different or something more, then perhaps they (or we) have not understood it that well, at all. Students feel proud and confident when one of their own customized and idiomatic intertextual connections or tropes help another student in the discussion reach an understanding on a particular point that the latter may have missed without the help of the former. It is almost an epiphenomenal bonus that this cooperative understanding which emerges through class discussion comes about only by way of an applied cultural awareness and knowledge of classic or canonical texts.
This often requires a thematic approach to reading texts. It is important to get across to the students that this is not the only, nor the best, nor the most desirable way to read a text. But even this is a crucial opportunity to impress upon them that a text — specifically, the great texts — can and must be read in multiple ways and one is never, truly, done reading any of them regardless of their approach.
Rather than merely memorize the names and definitions from a list of informal logical mistakes, my classes will prepare for a discussion about such informal fallacies by reading from the speeches of Malcolm X, Thomas Pynchon’s Gravity’s Rainbow or Laurence Sterne’s The Life and Opinions of Tristram Shandy, Gentleman. I find that students are apt — more so than when rehearsing such errors from textbooks — to grasp, remember, and even enjoy the intricacies of enallage and homonymy or the fallacies of amphiboly and accent after struggling with Slothrop’s various uses of “You never did the Kenosha Kid,” or the tragicomic dangers of equivocation after reading Uncle Toby tell the Widow Wadman that she shall see the very place, or put her finger on the very spot, where he received his war wound.
Even the best and brightest students can fall into momentary disinterest if discussion seems to collapse into an exercise in bookishness or erudition for erudition’s sake. Glazed eyes, drooping heads, and the checking of text messages can accompany any litany of Latin names. But rather than let a species of anti-intellectualism take root and win the day, those very eyes and heads tend to brighten and perk up when such thoughts are addressed (or, better: applied) through the words of Malcolm X.
Argumentum ad populum feels a little closer to home when reading: “One of them will never come after one of you. They all come together.” Argumentum ad misericordiam has a little more gravity after reading: “With the skillful manipulation of the press, they’re able to make … the criminal look like the victim.” If the difference between ad hominem (circumstantial) and ad hominem (abusive) just isn’t clicking, it only helps to consider how: “in Asia or the Arab world or in Africa, where the Muslims are, if you find one who says he’s white, all he’s doing is using an adjective to describe something that’s incidental about him, one of his incidental characteristics; so there’s nothing else to it, he’s just white.” After that connection is made, it is easier to identify argumentum ad verecundiam when Malcolm compares that Muslim world to: “over here in America … when he says he’s white, he means the boss.” The fallacy of suppressed evidence or the Straw Man version of ignoratio elenchi seems less abstract while reading: “They take one little word out of what you say, ignore all the rest, and then begin to magnify it all over the world to make you look like what you actually aren’t.” The fallacy of composition (and, by contrast, division) is just waiting to be explained with Malcolm’s dinner table analogy: “Because all of us are sitting at the same table, are all of us diners? I’m not a diner until you let me dine. Then I become a diner. Just being at the table with others who are dining doesn’t make me a diner.”
These are just a few examples from only one of Malcolm’s speeches, the one delivered at Ford Auditorium in Detroit on February 14, 1965. (By the way, the students are more likely to remember the Latin names of the fallacies after experiencing them in action by Malcolm, even though he does not call them by their Latin names.) With the proper patience and an eye for detail, almost any of his speeches suffices as a dangerous supplemental text to both formal and informal logic. I’ve had similar success with Noam Chomsky’s Failed States. The same could be said for reading Audre Lorde’s “Poetry is Not a Luxury” from Sister Outsider alongside René Descartes’ Meditations on First Philosophy, or Zora Neale Hurston’s Moses, Man of the Mountain in tandem with Sigmund Freud’s Moses and Monotheism (or, simply, the Bible).
It does not always work, of course. There are collateral failures. It does not go by unnoticed when a business major drops my business ethics course because reading a scant 20 lines from the Iliad about the problem of greed is not what the student wished to sign up for. And it is the case that some students simply aren’t prepared to jump into a whirlwind of primary readings. A student who misses one week of class can feel overwhelmed that he or she is already 300+ pages behind.
But being the vulgarian Great Bookist that I am, I can indulge in merely decent books (which is to say, more readily available and readable books) in order to get to the Great Books; to achieve, like Milo and his calf, a progressive resistance that can build up the reading muscles over a matter of weeks. Once a student who has not yet given herself or himself over to a consistent practice of reading, or perhaps was simply never encouraged to do so, knocks out Kurt Vonnegut’s Galápagos in a week — and is a bit surprised to have done so, quite easily — he or she is likely to make it through Aristotle’s Parts of Animals in the following weeks, and within a month is working through Charles Darwin’s The Descent of Man with a working set of intertextual concepts that feel quite close to home.
There is, of course, a very real danger of such a curriculum coming off as a one-sided hermeneutic by which certain texts resistant to the status quo or stereotypical power codes of the established canon are only appreciated through the Eurocentric lens of that very canon. I can anticipate such criticism from thinkers and critics, whom I take very seriously and whose concerns I share, such as Amiri Baraka, Gayatri Spivak, or Edward Said. I do not think that an intertextual approach necessarily condemns one to making sense of resistant texts only by the yardstick or measure of accepted ones. James Baldwin did not need Dostoevsky to understand Richard Wright any more than Cornel West needs Chekhov to understand John Coltrane.
As cautious and concerned as I am of being complicit in various forms of Orientalism, Eurocentrism, or logocentrism I am even more concerned at allowing that caution to squelch the priceless and productive intertextual adventures that may result from refusing the separatism that forever quarantines the likes of Baldwin, Wright, Malcolm X, and Lorde to what Bertrand Russell calls “the evils of specialization” in The History of Western Philosophy as if they have nothing to teach — as dangerous supplements — to students also interested in Aristotle, Descartes, Kant, Nietzsche, or Freud. But, once again, even this danger and problematic is a teachable moment in such classes by which to discuss and address the appropriateness or inappropriateness of such a curriculum. And you are likely able to end such a discussion in such a class — if such discussions or classes have ends — as I have, with Audre Lorde: “while we wait in silence for that final luxury of fearlessness, the weight of that silence will choke us.”
Virgil W. Brower is a full-time lecturer in philosophy at Chicago State University, where he has taught for nine years while completing a dual-Ph.D. program with the Chicago Theological Seminary and Northwestern University in theology and comparative literature, respectively, with a home department in philosophy. He is the author, most recently, of “Ethics is a Gustics” and “Speech and Oral Phenomena.”
After a decade of reading papers and attending panels on the Crisis in Scholarly Publishing (it feels established and official enough now to deserve capital letters) I’m dubious about the prospect of ever writing another column on the topic. It starts to feel like Chevy Chase interrupting with a bulletin that Generalissimo Francisco Franco is, in fact, still dead.
Scholarly publishing isn’t dead, of course -- although at this stage, as with the Generalissimo, a major reversal of fortunes would appear unlikely. Ian Maclean’s Scholarship, Commerce, Religion: The Learned Book in the Age of Confessions, 1560-1630 (Harvard University Press) evokes a publishing world so different from the 21st century’s that visiting it seems like a vacation from today’s too-familiar circumstances.
Maclean, a professor of Renaissance studies at the University of Oxford, identifies the period covered by his study (which started out as a series of lectures at Oxford) as the late Renaissance. Maybe so. Clearly publishers were catering to a much-expanded audience that had acquired a taste for humane letters. A stable of freelance philologists cranked out new editions of ancient works, as well as translations. While the public able to parse a page of Attic text was much smaller than the one reading Latin, there was still a demand for books in Greek -- if only as a kind of erudite furniture, or for use as an implied credential. You imagine someone going to a doctor or lawyer for the first time and spying the volume of Aristotle open on his desk, then thinking, “Wow, this guy must be good.”
But the big money, it sounds like, was in controversy -- in pamphlets and collections of documents from the combat between Roman Catholics and Protestants, and between Protestants and one another. Theological argument in the era of Luther, Calvin, and Erasmus sought to bring the reader to the one true faith, but now the line between polemic and character assassination had been blurred beyond recognition. If, say, a Calvinist scholar went over to the Vatican’s side, it was fair game for ex-colleagues to embarrass the apostate by publishing a volume of letters he’d written mocking Catholicism.
And -- more to the point -- such a book would sell. The fracturing of the public along religious lines divided the publishing world into distinct “confessionalized” sectors -- each demanding its own editions of scripture, of course, but also of patristic writings, and of historically significant documents backing up its claims to be the one true faith. Technology rendered the mass-production of books possible, while theology made it urgent.
Learned books in this period “fell into two broad classes,” Maclean explains: “textbooks for schools and universities, on the one hand, and more specialized humanist editions, historical, legal, theological, and medical works on the other.” The publishers themselves didn’t fall into corresponding categories; most did some of each.
Nor was the connection between scholarly publishing and academe all that close – least of all geographically. “While it was recognized that printing shops were a sign of the health of a country’s scholarship as much as the institutions of higher learning,” says Maclean, “this does not seem to have weighed much in their location.” Most presses “were based in cities without universities, and a surprising number of university cities were without printers who could compose in ancient languages.” Proximity to a university counted far less than the availability of raw material and skilled labor, not to mention access to trade routes and a strong patron.
Place of publication was also metadata: it signaled which religious confession a book reflected, depending on which faith the authorities there favored. But the city indicated on a title page might or might not tell you where it was actually printed. Someone in Geneva publishing an anti-Calvinist pamphlet would have good reason to claim it came from Venice.
Besides textbooks, there was another sort of publishing aimed at the student market: editions of notes taken during the lectures for certain courses. Maclean says that a reputable publisher would clear this with the professor. That suggests, by implication, that shadier operations didn’t. (As far as I know, this practice was still going strong through at least the 19th century. We have information about some of Kant’s lectures thanks to publishers serving the needs of undergraduates who couldn’t make it to class.)
The scholarly publisher of the early 16th century was likely to be something of a Renaissance humanist himself, playing the role of servant to “the new learning.” Drawing on publishers’ catalogues, reports of the Frankfurt book fair (where the number of titles more than doubled between 1593 and 1613), and the records of titles found in scholars’ libraries following their deaths, Maclean recreates something of the prevailing routines and difficulties of scholarly publishing in this era.
The correspondence between publishers and authors (and the grumbling of each to third parties about delayed manuscripts or shoddy workmanship) is a reminder of the micropolitics of intellectual reputation in the days when getting work into print was considerably more difficult than it would soon become. But scholarly publishing was not at all a matter of academic credentialing. “No one in the late Renaissance obtained professional validation in a university through publication with distinguished publishers or in reputed publications as is done now,” writes Maclean. “The pressure scholars felt to achieve publication, if it did not arise from their desire to promote themselves and their subject, was rhetorically attributed to their patron, whose prestige they enhanced….”
The nobility of scholarship, then, depended on the scholarship of the nobility. But over time the publishing field was overtaken by “a new breed of entrepreneurs who were not so much involved in the production of knowledge as its marketing.” Every book is, after all, something of a gamble: the investment in publishing it involves risk, and the skills required to identify a valuable work of scholarship are distinct from those of keeping the enterprise solvent. More and more publishers entered the field, publishing more and more material; and for a long time things continued more or less profitably, in spite of the wars and plagues and whatnot.
By the 1590s, a satirist was complaining about the flood of shoddy material: Publishers were more interested in best-sellers than in serious scholarship. A volume went to market as the revised, expanded, corrected edition of some work, even though the only thing new about it was the title page. Hacks were turning out commentaries on commentaries, and worse, people were buying them, just to add them to their collections.
Things were not, in short, like the good old days. On the other hand, neither were they as stable as they appeared for quite a while. The capacity for mass-producing books developed more rapidly than the market of readers could absorb them (or at least buy them). The bubble started to deflate in various fields in the early 17th century. In 1610 you’d be turning out treatises as fast as they could be typeset, which only meant that by 1620 you had a warehouse full of stuff in neo-Latin that nobody wanted to read.
This is not to say that the Crisis in Scholarly Publishing has been going on for 400 years. Things bounced back at some point. Maclean does not say when, or how. But whatever happened after 1630 had to be a mutation, rather than just a market correction: a huge restructuring of institutions and of fields of knowledge, to say nothing of the changes in what and how people read, and why. The expansion of a readership preferring work in the vernacular was undoubtedly a factor, but was it sufficient?
Perhaps Maclean will pursue the matter in another book. On the strength of Scholarship, Commerce, Religion, I certainly hope so.
The reality is exactly the opposite: the for-profit sector is challenging a centuries-old practice of separating philanthropy from business.
Since the Elizabethan statute of charitable uses in 1601, Anglo-American law has sought to encourage charitable giving to promote the common good. The idea behind modern philanthropy is that nonprofits undertake services that are either inappropriate for market activity or would not be supported by the market. To ensure that these goods are provided, the state both provides them itself through public institutions and offers private nonprofits legal privileges (such as incorporation) and economic incentives (such as tax benefits).
In 1874, Massachusetts passed one of the earliest general laws exempting from taxation any “educational, charitable, benevolent, or religious” institution. Believing that citizens, not just the state, should promote the common good, Massachusetts sought to encourage citizens to devote their money to institutions that would serve the public. Implicit was the assumption that certain kinds of activities — educational, charitable, benevolent, and religious activities in particular — should be done as a service and not for a profit. Massachusetts’ law became a model for other states.
In the modern era, tax incentives are one of the primary ways in which the state encourages nonprofit institutions, whether churches, local grassroots associations, large endowed philanthropies, or universities. The state also subsidizes nonprofits that serve the community, especially in social services and education. As Olivier Zunz has demonstrated in his recent book Philanthropy in America, Americans have not only given generously but benefited greatly from philanthropy.
This is not to suggest that the history of American philanthropy is without conflict. After the American Revolution, many Americans worried about what Anglo-Americans called the “dead hand of the past.” Thomas Jefferson was among them. He believed that permanent endowments enabled one generation to influence the affairs of the next in ways that threatened democracy. “The earth belongs in usufruct to the living; . . . [and] the dead have neither powers nor rights over it,” proclaimed Jefferson in 1789.
These questions re-emerged in the 20th century. Many Americans reacted with great concern when Andrew Carnegie and others used their wealth to engage in philanthropic endeavors that some opposed. During the Cold War, foundation-sponsored research led some policymakers to question foundations’ power and political agenda. Similar concerns can be raised about the Gates Foundation today. Private philanthropies’ wealth may give them undue influence in public deliberation. Philanthropy, no less than business, requires regulation.
Moreover, public and nonprofit institutions become corrupted when profit becomes their goal rather than a means to fulfilling their mission. This has happened to some extent in American universities that invest in tangentially related programs like big-time sports. Since the passage of the Bayh-Dole Act (1980), which permitted universities to profit from publicly funded research, universities have encouraged marketable rather than socially beneficial science. Moreover, in an era of state defunding, many policy makers are urging universities to act more like businesses, even when doing so perverts their mission and institutional culture.
The state must ensure that both public and nonprofit institutions remain true to their civic mission in return for the legal and financial benefits they receive. This point was made recently by Robert Zemsky, a member of President George W. Bush’s Spellings Commission. In Making Reform Work, Zemsky urges colleges to talk constantly “about purposes, about ends rather than means,” to hold fast against the temptations of profit.
Whether colleges are for-profit or not matters a lot. It affects their mission, their culture, their labor practices and, most important, the lessons they offer students. For-profit education implies that education is a commodity bought for the advantage it provides. It makes no pretense that service is a necessary part of being a college graduate. In fact, even if it did, students are too smart to believe it. They know what they are buying -- a degree from a vendor. We expect businesses to make money, but we want our churches and schools to treat us not as consumers but as congregants and students.
For-profits must be regulated as businesses. They are not charities, despite being subsidized heavily by public student loan dollars. In return for these public subsidies, for-profits should live by the same rules as nonprofits. They should make the common good their primary goal and reinvest all revenue to fulfill their mission. They will not, however, because, as Kevin Kinser argues in From Main Street to Wall Street, they exist to generate wealth for investors and shareholders. As recent scandals have made clear, for-profit institutions in higher education, like other Wall Street businesses, too often put their bottom line ahead of the common good.
For-profit higher education’s advocates are declaring war on American philanthropy. They seek to profit off of charity, transforming what should be a service into another way to gain wealth. They threaten a distinction that has deep roots in American history and law. They suggest that all goods -- including education, charity, and religion -- should be commodities. History and common sense tell us otherwise. While the line between the for-profit and nonprofit sectors can be blurry at times, the differences between them are very real, of moral significance, and worthy of protection.