The golden age of unsolicited credit-card applications ended about five years ago. It must have been a relief at the post office. At least ten envelopes came each week -- often with non-functioning replica cards enclosed, to elicit the anticipatory thrill of fresh plastic in the recipient’s hot little hand.
For a while, I would open each envelope and carefully shred anything with my name on it, lest an identity thief go on a shopping spree in my name. But at some point I gave up, because there were just too many of them. Besides, any identity thief worth worrying about enjoyed better options than trash-diving for unopened mail.
Something started happening circa 2006 or ’07. More and more often, the very envelopes carried wording to the effect that approval for a new card was a formality, so act now! With the benefit of hindsight, this reads as a last surge of economic acceleration before the crash just ahead. But at the time, I figured that credit-card companies were growing desperate to grab our attention, since many of us were throwing the offers away without a second glance.
The two alternatives -- turbocharged consumerism on the one hand, the depleted willingness (or capacity) of consumers to take on more debt, on the other -- are not mutually exclusive. It was subprime mortgages rather than overextended credit cards that brought the go-go ’00s to an early end, but each was a manifestation of the system Andrew Ross writes about in Creditocracy: And the Case for Debt Refusal (OR Books).
Ross, a professor of social and cultural analysis at New York University, was active in Occupy Wall Street, and Creditocracy bears a few traces of the movement, both in its plainspoken and inclusive expressions of anger (this I like) and its redeployment of old anarcho-syndicalist ideas (that, not so much).
One commonplace account of the near-collapse of the world financial system in 2008 is that it was the product of consumer hedonism at its most irresponsible. It was just deserts for people playing Xbox on jumbo flat-screen TVs in subprime-mortgaged houses they shouldn't be in. Whatever the limits of its explanatory power, this interpretation allows for a pleasing discharge of moralistic aggression. Hence its popularity. The most familiar argument opposing it places the blame, rather, on bankers, brokers, and other criminals “too big to jail.” It was they who were greedy and short-sighted, not average people.
Besides the more obvious similarities, what these explanations share is an implication that the disaster could have been avoided with some self-discipline and the understanding that hyperbolic discounting is a very bad habit.
Ross leans in the anti-plutocratic direction, but he proves ultimately less interested in the morality of anyone’s decisions than he is in the framework that permits, or demands, those decisions in the first place. The system he calls “creditocracy” turns out debt as fast and efficiently as Detroit once did automobiles, and just as profitably:
“Financiers seek to wrap debt around every possible asset and income stream,” he writes, “ensuring a flow of interest from each…. [T]he tipping point for a creditocracy occurs when ‘economic rents’ – from debt-leveraging, capital gains, manipulation of paper claims through derivatives and other forms of financial engineering – are no longer merely supplementary sources of income, but have become the most reliable and effective instrument for the amassing of wealth and influence.”
At that level of description, Ross has simply given a new name to what Rudolf Hilferding, writing a hundred years ago, called “finance capital.” But what Hilferding had in mind was the merger of banking and industrial capitalism – the marriage of big money and big factories, with monopoly presiding. Creditocracy, by contrast, “goes small,” insinuating itself into every nook and cranny of life. The relationship between creditor and debtor takes many different shapes, some more overt than others.
When you take out a student loan or a mortgage, your submission to the financial system is more or less deliberate, and in any event explicit. It runs deeper, and proves less purely voluntary, if you have to use credit cards in lieu of unemployment insurance. The credit relationship is much more efficiently disguised if it takes the form of an unpaid internship – the “exchange” of your time and skills for intangible and impossible-to-quantify “credit” toward a future job, if you’re lucky.
And if that doesn’t pan out, you might end up working in one of the less desirable positions at Walmart or Taco Bell, among other corporations that banks have persuaded, Ross writes, “to pay their employees with prepaid debit cards that are only lightly regulated.” The banks then “charge the users fees to make ATM withdrawals and retail purchases, along with inactivity fees for using their cards. Almost all of these are minimum or subminimum wage employees, compelled to fork over a fee to enjoy their paycheck.” (The practice was described in a New York Times article a few months ago.)
In next week’s column, I’ll consider Ross’s analysis of how the impact of creditocracy on education amounts to a ruthless exploitation, not just of present-day society, but of the future. We’ll also take a look at the comparable argument in a new book called The Falling Rate of Learning and the Neoliberal Endgame (Zero Books) by David J. Blacker, a professor of philosophy of education and legal studies at the University of Delaware.
Until then, I’ll sign off by mentioning that someone has just sent me an application for a $40,000 line of credit. This must be evidence of that “recovery” one reads about. If so, we’re in real trouble.
As Scott Jaschik points out in his January 13, 2014 article, “The Third Rail,” the terrible stress our newly minted Ph.D.s in English, comp lit, and foreign languages confront when they begin the job search seems only to be escalating rather than abating. Understandably, then, many Modern Language Association convention sessions, as well as a growing body of publications, have been taking up a variety of proposals for addressing the job crisis. Jaschik mentions the session I chaired, “Who Benefits? Competing Agendas and Graduate Education,” and he carefully articulates the basic positions of the panelists, who were in general agreement that shrinking the size of graduate programs in English would not be the best way to remedy the situation. But the reasons we hold those beliefs in favor of expansion rather than contraction seem to have slipped out of view. I would like to highlight them here.
Let me begin by stating the obvious nature of the suffering: When you defund public higher education, someone is going to have to pay, and it has been our colleagues forced to accept unethically precarious working conditions both during and after grad school, and students at all levels burdened with massively increasing educational debt. These are circumstances we must protest with all the solidarity we can muster. But all this misery, the sense of lives ruined, institutionalized failure, personal anguish — these horrors come not just from oversized grad programs, but from a much larger capitalist economy that is wreaking havoc on many workers and unemployed poor in and out of academia. As Marc Bousquet has explained, it is not a market; or, at least, it is not a “free market” in any real sense despite our common rhetorical reference to the horrors of the “job market.” It is a system we are caught in, and one orchestrated, it’s true, by our own institutional structures that have now been fine-tuned to serve the champions of privatization, defunding, and austerity. In this type of economic system, higher education has become a kind of laboratory for the production of a precarious, contingent, low-wage faculty. The economic inequality within the profession mirrors the economic inequality in the society. From any ethical perspective, it is a system that has gone terribly wrong.
What has been most missing from the discussion about graduate school size has been a concise understanding of why the market logic doesn’t work for English grad programs: mainly, it is not an accurate description of how the system really works. If it were a case of supply and demand, it might make good ethical sense to reduce the overproduction of Ph.D.s to meet the lower demand for tenured professors. In short, if you could reduce the supply without altering demand, this equalizing would clearly make it easier for graduates to get tenured jobs for the simple reason that there would then be fewer Ph.D.s competing for the same number of jobs. But the system does not work that way. Rather, when you reduce supply by shrinking graduate programs, you also end up reducing demand (as I will explain in what follows): our system is so structured that we cannot reduce the one without reducing the other, and that’s a real ethical and political conundrum.
When you shrink graduate student enrollments (the supply side), you inevitably also shrink the size of graduate programs, which means, willy-nilly, that you decrease tenured faculty lines (the demand side) because they are the folks teaching in grad programs. Administrators would be happy to shrink our programs and eliminate some tenured lines through attrition and retirement because new, cheaper temp hires can easily fill in to teach the few undergraduate lower-division classes that some tenured faculty teach.
The gurus of supply and demand would like nothing better than for us graduate faculty to do our own regulating by cutting down of our own accord on producing so many new highly educated people schooled in the legacies of critique and dissent. We then serve the wishes of those seeking more power to hire and fire at will the most vulnerable among us who have no protections under a gutted system of tenure and diminished academic freedom. The system can play itself out under the contraction model, then, as a vicious cycle of reducing supply, which reduces demand for tenured faculty (while increasing the non-tenure-track share of the faculty), which calls for further reducing of supply. To believe that contracting the size of graduate programs can, in and of itself, improve the situation is a misattribution of cause and effect: The real cause of the job misery is the agenda for privatization and defunding public expenditures orchestrated by the global economic system that has been producing misery and suffering for millions of lives around the world as socioeconomic inequalities continue to magnify.
Now, having said all that, I also want to be very clear that there are strategic, local situations where reducing graduate student populations in order to expand funding and support for them, or in order to revise a program (hopefully without shrinking tenured faculty lines), can certainly be the most ethical thing to do. So I am speaking at a general level of overall tactics for the profession, and at that level, shrinking (without other forms of compensation) inevitably leads to weakening graduate education, not strengthening it through some mythical model of “right-sizing” to be achieved by a proposed matching of supply and demand.
But, of course, the pain is real, and it reaches fever pitch in the transitional moments of crisis when graduate students face the “market” for jobs. The wretched system we endure makes it impossible not to sympathize with graduate students who understandably often argue that we must reduce the supply of Ph.D.s to give them a better chance to get a job. Under these enormous tensions, the short-term, crisis-management model of supply and demand can especially seem like the only fair-minded option.
In those moments of anguish, which I myself witness every time one of my own students reaches this transition stage, our only ethical task is to support them and listen to them as best we can to help them navigate the transition. So I want to make sure that my remarks here are not intended to provide any specific advice other than the obvious need for support. Specific situations and contextual demands will have to be navigated with all the pragmatic skills and rhetorical resourcefulness possible. In contrast, then, to a focus on the crisis moment of the job search, I have framed my comments here in terms of a big picture narrative.
From the longer and larger perspective, what becomes most clear is that our system of having elite graduate faculty surrounded by masses of non-tenure-track teachers mostly fulfilling service functions of teaching lower-level humanities distribution courses and writing courses fuels that cycle of devolution. We need, then, to change the academic system over which we do have some control. Systemic changes can be difficult to even imagine, but they are by no means impossible as long as we understand that they will not happen in an overnight revolution. And the first step inevitably leads us to examine more critically the ethical and political work of both curricular revision and resource allocation. In short, it leads us to a careful analysis of the systemic class structure within the profession, bolstered as it is by procedures and policies, many of which we actually have some degree of professional autonomy to alter.
Of course, the resistance to institutional transformation remains overwhelming at times, and the struggle to mitigate our academic hierarchies and internal class stratifications is a long-term project, well beyond the scope of these comments. To even imagine such changes in our local institutional circumstances, we will have to make many arguments convincing our colleagues that a more collective and collaborative approach to teaching assignments will be beneficial for us all in the long run. And I have at least some evidence that something like what I have been suggesting can actually happen. Where I teach in the Pennsylvania State System of Higher Education (PASSHE), our collective bargaining agreement affecting all 14 universities with a total enrollment of over 100,000 students has created an anomaly in U.S. higher education: more than 75 percent of all faculty on all campuses hold tenure-track lines (the inverse of the national average), and all faculty teach all levels of courses.
Much work remains to be done, and we too continuously struggle against state underfunding and the pressure to hire more temporary faculty. But the potential benefits of these efforts, I believe, would make our profession less stratified and more responsive to public needs for high quality education at all levels, so that, ultimately, the humanities will become a more vital part of the social fabric of everyday life for more citizens. That is a goal we should never abandon.
David B. Downing is director of graduate studies in literature and criticism at Indiana University of Pennsylvania. He is the editor of Works and Days, and his most recent book (co-edited with Edward J. Carvalho) is Academic Freedom in the Post-9/11 Era.
Whether or not the humanities are truly in crisis, the current debates around them have a certain gun-to-the-head quality. “This is why you -- student, parent, Republican senator -- shouldn’t pull the trigger,” their promoters plead. “We deserve to live; we’re good productive citizens; we, too, contribute to the economy, national security, democracy, etc.” Most of these reasons are perfectly accurate. But it is nonetheless surprising that, in the face of what is depicted as an existential crisis, most believers shy away from existential claims (with some exceptions). And by not defending the humanities on their own turf, we risk alienating the very people on whose support the long-term survival of our disciplines depends: students.
One reason why our defenses can have a desperate ring to them is that we’re not used to justifying ourselves. Most humanists hold the value of the objects they study to be self-evident. The student who falls in love with Kant, Flaubert, or ancient Egypt does not need to provide an explanation for why she would like to devote years of her life to such studies. To paraphrase Max Weber, scholarship in the humanities is a vocation, a “calling” in the clerical sense. It chooses you, you don’t choose it. The problem with this kind of spiritual passion is that it is difficult to describe. To paraphrase another 20th-century giant, Jimi Hendrix, it’s more about the experience.
It’s not surprising, then, that when we humanists feel (or imagine) the budget axe tickling the hairs on the backs of our necks, we don’t have a ready-made apologia with which to woo or wow our would-be executioners. And because a calling is hard to explain, we turn instead to more straightforward, utilitarian defenses -- “but employers say they like English majors!” -- which, while true, don’t capture the authentic spirit that moves the humanities student.
There is of course sound logic to this approach. Government and state funding is a zero-sum game, and politicians are more likely to be receptive to practical arguments than to existential propositions. But in the long run, it takes more than state and university budgets to maintain the health of the humanities. It also takes students. And by constantly putting our most productive foot forward, we may unintentionally end up selling ourselves short (disclosure: I, too, have sinned). The fundamental reason why students should devote hours of their weeks to novels, philosophy, art, music, or history is not so that they can hone their communication skills or refine their critical thinking. It is because the humanities offer students a profound sense of existential purpose.
The real challenge that we face today, then, lies in explaining to a perplexed, but not necessarily hostile audience -- and perhaps even to ourselves -- why it is that the study of literature, anthropology, art history, or classics can be so meaningful, and why this existential rationale is just as important as other, more utilitarian ones. This line of argument stands in opposition to proclamations of the humanities’ uselessness: to declare that the humanities are of existential value is to affirm that they are very useful indeed.
So how might we go about defining this existential value? A good place to start would be with existentialism itself. A premise of existentialist philosophy is that we live in a world without inherent meaning. For atheists, this is often understood as the human condition following the death of God. But as Jean-Paul Sartre pointed out in “Existentialism is a Humanism,” even believers must recognize that they ultimately are the ones responsible for the production of meaning (in fact, many early existentialists were Christians). Abraham had to decide for himself whether the angel who commanded him to halt his sacrifice was genuinely a divine messenger. In Sartre's memorable formulation, man is “condemned to be free”; we have no choice but to choose. While it may feel as though a humanities vocation is a calling, you still have to decide to answer the call.
The realization that meaning isn’t something we receive from the outside, from others, but that it always must come from within us, from our conscious, deliberative choices, does not make us crave it any less. We are, existentialists insist, creatures of purpose, a thesis that psychological research has also confirmed.
Now what does this have to do with the humanities? It’s not that obvious, after all, how reading Madame Bovary, the Critique of Pure Reason, or The Book of the Dead can fill your life with purpose. At the same time, we also know that some people do find it deeply meaningful to peruse these works, and even to dedicate their careers to studying them.
What is it, then, that lovers of literature -- to consider but them for the moment -- find so existentially rewarding about reading? In a recent book, my colleague Joshua Landy argues that one of the more satisfying features of literature is that it creates the illusion of a meaningful world. “The poem forms a magic circle from within which all contingency is banished,” he writes apropos of Mallarmé’s celebrated sonnet en -yx. The order we discover in literary works may be magical, but it isn’t metaphysical; it comes from the sense that “everything is exactly what and where it has to be.” Art offers a reprieve from a universe governed by chance; what were merely sordid newspaper clippings can become, when transported into artful narratives, The Red and the Black or Madame Bovary. Landy suggests that fictions produce these illusions through a process of “overdetermination”: the ending of Anna Karenina, for instance, is foreshadowed by its beginning, when Anna witnesses a woman throwing herself under a train.
If art offered only illusions of necessity, it would hardly satisfy existential longing. Pretending that everything happens for a reason is precisely what the existentialists castigated as “bad faith.” Yet there’s an obvious difference between enjoying a novel and, say, believing in Providence. We don’t inhabit fictional worlds, we only pay them visits. No lover of literature actually believes her life is as determined as that of a literary heroine (even Emma Bovary wasn’t psychotic). So why does the semblance of an orderly universe enchant us so?
Well-ordered, fictional worlds attract us, it seems, because we, too, aspire to live lives from which contingency is kept at bay. Beauty, wrote Stendhal, is “only a promise of happiness.” As Alexander Nehamas suggested, in his book of this title, the beautiful work of art provides us with a tantalizing pleasure; beauty engages us in its pursuit. But what do we pursue? “To find something beautiful is inseparable from the need to understand what makes it so,” he writes. Behind the beautiful object -- sonnet, style, or sculpture -- we reach for the idea of order itself. The promise of happiness made by art is a promise of purpose.
But a promise of purpose is still a bird in the bush: it can disappear when you put down the book, or leave the concert hall. For the philosopher Immanuel Kant, art only provides us with an empty sense of purpose; or as he put it, in his distinctively Kantian way, "purposiveness without purpose" (it’s even better in German).
It’s true that few existential crises have been resolved by a trip to the museum or the download of a new album. But Kant may have underestimated how the sense of artistic purpose can also seep into our own lives. For instance, as Plato and every teenager know well, instrumental music can give voice to inexpressible feelings without the help of language. These emotional frameworks can convey a potent sense of purpose. When my youngest daughter spent six weeks in the neonatal ICU with a life-threatening condition, my mind kept replaying the second movement of Beethoven’s seventh symphony to tame my fears. Its somber, resolute progress, punctuated by brief moments of respite, helped to keep my vacillating emotions under control. As in films, sometimes it is the soundtrack that gives meaning to our actions.
The promise of order found in beautiful works of art, then, can inspire us to find purpose in our own lives. The illusion of a world where everything is in its place helps us view reality in a different light. This process is particularly clear -- indeed, almost trivial -- in those humanistic disciplines that do not deal primarily with aesthetic objects, such as philosophy. We aren't attracted to the worldviews of Plato, Kant, or Sartre purely for the elegance of their formal structure. If we’re swayed by their philosophies, it’s because they allow us to discover hitherto unnoticed patterns in our lives. Sometimes, when you read philosophy, it seems as though the whole world has snapped into place. This is not an experience reserved for professional philosophers, either: at the conclusion of a philosophy course that my colleagues Debra Satz and Rob Reich offer to recovering female addicts, one student declared, “I feel like a butterfly drawn from a cocoon.”
So where art initially appeals to us through intimations of otherworldly beauty, a more prolonged engagement with the humanities can produce a sense of order in the here and now. One could even say that Plato got things the wrong way around: first we’re attracted by an ideal universe, and then we’re led to discover that our own reality is not as absurd as it once seemed. And while particularly evident with philosophy, this sensation of finally making sense of the world, and of your own place in it, can come from many quarters of the humanities. In a delightful interview (originally conducted in French), Justice Stephen Breyer recently exclaimed, “It’s all there in Proust — all mankind!” Other readers have had similar responses to Dante, Shakespeare, Tolstoy, and many more.
But exploring the humanities is not like a trip to the mall: you don't set off to find an off-the-rack outfit to wear. Proust can change your life, but if you only saw the world through his novel, it would be a rather impoverished life. Worse, it would be inauthentic: no author, no matter how great, can tell you what the meaning of your life is. That is something we must cobble together for ourselves, from the bits and pieces of literature, philosophy, religion, history, and art that particularly resonate in us. “These fragments I have shored against my ruins,” T.S. Eliot wrote at the end of The Waste Land. No poem offers a better illustration of this cultural bricolage: Shakespeare answers Dante, and the Upanishads disclose what the Book of Revelation had suppressed.
So here we find an existential rationale for a liberal education. To be sure, the humanities do not figure alone in this endeavor: psychology, biology, and physics can contribute to our perception of ourselves in relation to the world, as can economics, sociology, and political science. But the more a discipline tends toward scientific precision, the more it privileges a small number of accepted, canonical explanations of those aspects of reality it aims to describe. If 20 biology professors lectured on Darwin’s theory of evolution, chances are they’d have a lot in common. But if 20 French professors lectured on Proust’s Recherche, chances are they’d be quite different. The same could be said, perhaps to a lesser extent, for 20 lectures on Plato’s Republic. The kinds of objects that the humanities focus on are generally irreducible to a single explanation. This is why they provide such good fodder for hungry minds: there are so many ways a poem, a painting, or a philosophy book can stick with you.
In his diatribe against the way the humanities have been taught since the '60s, Allan Bloom harrumphed, “On the portal of the humanities is written in many ways and many tongues, ‘There is no truth -- at least here.’ ” But the point of a liberal education is not to read great works in order to discover The Truth. Its point is to give students the chance to fashion purposeful lives for themselves. This is why authors such as Freud, whose truth-value is doubted by many, can still be a source of meaning for others. Conversely, this is also why humanities professors, many of whom are rightfully concerned about the truth-value of certain questions or interpretations, do not always teach the kinds of classes where students can serendipitously discover existential purpose.
There are more than existential reasons to study the humanities. Some are intellectual: history, for instance, responds to our profound curiosity about the past. Some are practical. To celebrate one is not to deny others. The biggest difficulty with defending the humanities is the embarrassment of riches: because humanists are like foxes and learn many different things, it is hard to explain them to the hedgehogs of the world, who want to know what One Big Thing we do well. The danger is that, in compressing our message so it gets heard, we leave out precisely the part that naturally appeals to our future students. Yes, students and parents are worried about employment prospects. But what parent doesn’t also want their child to lead a meaningful life? We are betraying our students if, as a society, we do not tell them that purpose is what ultimately makes a life well-lived.
Dan Edelstein is a professor of French and (by courtesy) history at Stanford University. He directs the Stanford Summer Humanities Institute.
“It ain’t what I say, it’s the way that I say it.
That’s all, brother, that’s all.”
Let us say there is a well-meaning administrator – in particular, a college president – who wants to be sure that students at her institution are benefiting as much as possible from their undergraduate educations. Clearly, this is a central concern for any college president, since that is presumably a major reason for taking the job.
The president assumes that this is a goal shared by the faculty, since it is at the heart of their own vocation. Moreover, she has heard them speak of how their various disciplines teach not only specific subject matter, but important intellectual skills and habits of mind as well. The president also assumes that, since her faculty colleagues are themselves scholars and scientists, they will have an interest in discovering whether or not their teaching is having the desired effects. And is it not the case that true professionals wish to become better and better at their chosen work?
So, the president makes a carefully prepared presentation at a faculty meeting about “competencies” and “assessment.”
What result can be expected?
The outcome may be a positive one if the faculty community in question is already comfortable with these terms and their meanings; they may be willing, perhaps even eager, to consider how best to go about such a project or to improve an initiative already undertaken.
If, on the other hand, the faculty members have not been enculturated into the world of professional higher ed jargon, which is not the same as disciplinary jargon – or, if, for that matter, they have taken issue with some of it for well-considered reasons that require serious discussion – they will be sufficiently put off by the lingo not to bother attending to the message.
To be sure, there may be other reasons for opposition. Reports have surfaced from the higher education community about faculty members who are resistant to change and wish to go on doing things in the manner to which they have become accustomed. Nonetheless, it is worth attending to the Mae West principle: it is not just the content of the message that is important, but also how that content is being communicated.
Which brings us to “competency” and “assessment.”
“Assessment” has actually been faring better among faculty members in recent years insofar as it avoids what we might call nudnik positivism (i.e., forgetting Einstein’s famous observation that “not everything that counts can be counted and not everything that can be counted counts”) and as long as it is clear that the main goal is to make teachers better at their work, as opposed to fulfilling some misconceived external rankings system obligation. The term has thus been developing a more familiar, relatively congenial specific content, perhaps making it less necessary to use more elegant and traditional alternatives like “evaluation.” Though one should never assume.
What about the neologism “competency,” which some of us (this writer included) have avoided up to this very day? There is, to be sure, a persuasive grammatical justification for preferring “competency” to good old “competence”; this has to do with the distinction between mass nouns and count nouns. “Competence” -- like “water”, “butter”, or “common sense” -- is a mass noun, something you can have more or less of (or, in fact, none at all). “Competency,” on the other hand, operates as a count noun and is thus applicable to items you can have a specific number of – say, two, five, or 16, depending on how many you wish to list. Moreover, “competency” may be preferred over such traditional count nouns as “skill” (which may sound too narrowly technical) or “capacity” (often used of qualities that are innate).
And yet, there are reasons to distrust this term. Some have to do with the meanings it has been acquiring among change enthusiasts who seem to believe that the benefits of higher education can be achieved without significant interaction with actual, human, salaried teachers.
But, even leaving these issues aside and returning to the well-meaning among us who seek to incorporate the benefits of online resources into the essential student/teacher relationship: the very use of the term “competency” may shut down the channel because of what it seems to say about the speaker. Many inhabitants of the world of higher education – especially faculty members – are put off by professional higher ed jargon. If they find such jargon rebarbative (now, that’s a word to conjure with), they may view those who utter it as Aliens from Planet Administration.
Which brings us to a distinction made by sociolinguists and philosophers of language (who prefer greater precision in their analyses than Ms. West found sufficient) -- namely, the distinction between “illocutionary force” and “perlocutionary effect.” “Illocutionary force” refers to what a speaker intends in a communication. So, for example, when someone asks “Do you know what time it is?”, the speaker intends this as a request to be told the time. Should the addressee answer “Yes” and leave it there, that would be a failure of communication. Or, to put it another way, the perlocutionary effect (that is, the effect upon the addressee) will not have been the one hoped for.
So, returning to the faculty meeting at issue here: the president, as speaker, may strongly believe in the illocutionary force of terms like “competency” and “assessment,” while the perlocutionary effect on the faculty addressees may be roughly equivalent to “yadda yadda yadda.” In brief, if we want the illocutionary force to be with us, we must be ever mindful of the perlocutionary effect.
Given the increasing acceptance of the term “assessment,” can we expect the same for “competency”? The very distinguished Derek Bok uses it – more often in Higher Education in America than in an earlier work, Our Underachieving Colleges. Faculty members in a number of institutions are using it – especially in reports submitted to foundations. The Association of American Colleges and Universities has recently been testing the usefulness of the term in identifying the desiderata of a high-quality liberal education.
As it happens, though, AAC&U’s president, Carol Geary Schneider, told me recently that she and others are finding the term “competency” too modest for the true goals of a mind- and horizon-expanding education. The AAC&U is planning to move to the term “proficiency,” an improvement in both substance and style that also serves better to engage the high standards of faculty members. Needless to say, Mae West would have her own reasons for preferring it.
Judith Shapiro is president of the Teagle Foundation. She is also president emerita and professor of anthropology emerita of Barnard College.