Matt Reed’s recent column on experimental sites and competency-based education (CBE) offers just the kind of thoughtful analysis we’ve come to expect from him. He raises important questions about the role of faculty, the efficacy of approaches that include less instructional interaction, the viability of pay-for-performance aid models, and more. The answers to those questions today? We don’t know. And that’s why we need to support the Department of Education’s experimental sites proposal, to create safe places in which to explore the kind of thoughtful and constructive questions that Matt poses.
Last year saw the dizzying ascendancy of the massive open online course, driven by some combination of their blue chip provenance, their creators’ triumphant claims, and the smitten embrace of popular media outlets (especially The New York Times).
To the satisfaction and relief of some, MOOCs have come back to earth. Still in search of a purpose (the job they are “hired to do,” to use a Clay Christensen phrase), a business model, and an ideal user scenario, MOOCs are entering a more useful and realistic phase of their development. A lot of smart, mission-driven people are working on MOOC 3.0 (everyone forgets about MOOC 1.0 that came before Coursera and edX put MOOCs on the map) and we’ll see if MOOCs are 21st-century content, a platform innovation, or a powerful new disruptive presence in the educational landscape.
Competency-based education is the hot new innovation, at least in its latest incarnation, largely untethered to the structure of courses and credits, the basic building blocks of curriculums and thus learning. In truth, CBE has been around for decades, pioneered by accredited nonprofits like Excelsior, Charter Oak, and Western Governors University. They have been joined by a growing number of new providers including the University of Wisconsin System, Northern Arizona University, Brandman University, Capella University, Lipscomb University, the Kentucky Community and Technical College System, and my own Southern New Hampshire University. Another 30 or more institutions are working on their own CBE offerings.
The Department of Education is exercising its authority to create experimental sites and has invited proposals for administering federal financial aid funds in new ways that support CBE models, and the White House is calling for more innovation and putting its weight behind CBE. The leading higher education associations – including EDUCAUSE, CAEL, AAC&U, and ACE – are joining in and announcing new initiatives, webinars, and meetings.
Accreditors are releasing new guidelines for CBE programs and the administration continues to pressure them by raising the possibility of new validation systems better suited to support innovative new delivery models. Think tanks and foundations have added their intellectual and financial backing to the effort. The hope, one I share, is that CBE can deliver on the holy triad of quality, cost (access), and completion.
This is a very different set of circumstances than those that have characterized the MOOC movement. CBE has an actual track record of success in its earlier iterations, is being embraced by powerful stakeholders, is being developed by institutions with deep understanding of the students they seek to serve, and is being tied into the established financial system of funding.
More importantly, CBE offers a fundamental change at the core of our higher education “system”: making learning non-negotiable and the claims for learning clear while making time variable. This is a profound change and stands to reverse the long slow erosion of quality in higher education. It is so fundamental a change that we hardly yet know all its implications for our world. For example:
If the claims we make for student learning really are non-negotiable, we will likely see a drop in completion rates, at least for some length of time;
We will have a lot of work to do around assessment, still difficult terrain in higher education;
The Department of Education, entrusted to protect billions of taxpayer dollars, will need reassurance that we have in place measures that guard against fraud;
If competencies are a new “currency” replacing credit hours, we will need to work out the “exchange rates” if we are to have a system that does not replicate the waste and inefficiencies of the current credit hour and transfer system;
Faculty roles are likely to be redefined, at least in some models, and a profession long in transition, and some would say under siege, will be further affected;
Student information and learning management systems are not designed for these new models, yet form the administrative backbone that supports everything from registration to transcripts to billing to financial aid to... well, almost everything we do;
Accreditation standards, even new ones, will be tested and will have to evolve to reflect the lessons we learn over time.
In other words, if CBE is finally a movement, it is like many new movements still in search of the basics. It lacks a taxonomy, an agreed-upon nomenclature, the aforementioned exchange rate, a widely accepted form of documentation (what is the right form of CBE transcript?), the supporting systems, and experience with a wide variety of students.
This is why the Department of Education’s proposed experimental sites are so important. The key word here is experiment. Institutions need safe spaces in which to try new things, new rules by which to operate, the ability to rethink fundamental assumptions about how we deliver learning and support students, room to try new models for costing and paying, and tolerance for mistakes. If we are not making mistakes, it isn’t really innovation that’s going on.
We need a range of approaches to see what works best for what students in what settings. In return, institutions engaged in the work have to do their part. That includes collecting and providing data with a level of transparency that our industry has historically resisted (higher education is a culture that innately resists accountability outside of student grades), putting aside underlying competitive impulses to share what we learn, and finding ways to support students and quickly address the mistakes we must inevitably make (remembering that we never “play” with student welfare).
Experimental sites are important for what they allow, but also for what they (should) fend off. We should beware a premature setting of standards or guidelines. We should beware a premature overturn of the credit hour, flawed as it is, before we have worked out its substitute (or more likely, complementary system). We should beware an opening of the gates like the one that attended online learning, when unscrupulous players entered the market and abused the system for enormous gains at enormous costs for students and the federal government.
In other words, we need just the kind of good questions that Matt Reed poses in his recent column. We need leading thinkers like CAEL and AAC&U to help us think through the big questions before us. We need EDUCAUSE to help us spec out new systems and technologies. And we need to try various models, collect data, and work through the significant questions still in front of us so we can better inform policy-making and the reauthorization discussion now getting under way.
Traditional higher education is not going away any time soon, but CBE has the potential both to provide new, affordable, high-quality pathways for students and to challenge our incumbent delivery models to better identify the claims they make for learning and how they know. Those demands, whatever CBE turns out to be, are not going away either, and CBE can function like the industry’s R&D lab. The proposed experimental sites align with that very useful role and deserve our collective support.
Paul LeBlanc is president of Southern New Hampshire University.
Landing a job right out of law school is a challenge many recent graduates experience. Despite a gradual decline in the overall employment rate for new graduates (92 percent in 2007, 85 percent in 2012), law students in their final year still say they are satisfied with the overall law school experience, according to a new survey. Even though overall satisfaction has remained consistent, 55 percent of law school students are unsatisfied with their institutions’ career counseling and job search help. The study, the Law School Survey of Student Engagement, received responses from more than 26,000 students at 86 law schools.
The decline in satisfaction seems to occur shortly after the first year, and it’s not just with career advising but with all advising services -- academic, personal, and financial aid.
While linguistics remains a relatively small major, it has been seeing significant growth nationally, from a little more than 700 bachelor's degrees awarded in 2000 to 2,200 in 2012 -- a period in which graduate enrollments did not change dramatically. Further, 70 percent of undergraduate enrollments are women. These are among the figures in the first report of the Linguistic Society of America on the state of the discipline. While the report features some longitudinal data drawn from other sources, it will seek to track changes in the discipline annually.
Whether or not the humanities are truly in crisis, the current debates around them have a certain gun-to-the-head quality. “This is why you -- student, parent, Republican senator -- shouldn’t pull the trigger,” their promoters plead. “We deserve to live; we’re good productive citizens; we, too, contribute to the economy, national security, democracy, etc.” Most of these reasons are perfectly accurate. But it is nonetheless surprising that, in the face of what is depicted as an existential crisis, most believers shy away from existential claims (with some exceptions). And by not defending the humanities on their own turf, we risk alienating the very people on whose support the long-term survival of our disciplines depends: students.
One reason why our defenses can have a desperate ring to them is that we’re not used to justifying ourselves. Most humanists hold the value of the objects they study to be self-evident. The student who falls in love with Kant, Flaubert, or ancient Egypt does not need to provide an explanation for why she would like to devote years of her life to such studies. To paraphrase Max Weber, scholarship in the humanities is a vocation, a “calling” in the clerical sense. It chooses you, you don’t choose it. The problem with this kind of spiritual passion is that it is difficult to describe. To paraphrase another 20th-century giant, Jimi Hendrix, it’s more about the experience.
It’s not surprising, then, that when we humanists feel (or imagine) the budget axe tickling the hairs on the backs of our necks, we don’t have a ready-made apologia with which to woo or wow our would-be executioners. And because a calling is hard to explain, we turn instead to more straightforward, utilitarian defenses -- “but employers say they like English majors!” -- which, while true, don’t capture the authentic spirit that moves the humanities student.
There is of course sound logic to this approach. Government and state funding is a zero-sum game, and politicians are more likely to be receptive to practical arguments than to existential propositions. But in the long run, it takes more than state and university budgets to maintain the health of the humanities. It also takes students. And by constantly putting our most productive foot forward, we may unintentionally end up selling ourselves short (disclosure: I, too, have sinned). The fundamental reason why students should devote hours of their weeks to novels, philosophy, art, music, or history is not so that they can hone their communication skills or refine their critical thinking. It is because the humanities offer students a profound sense of existential purpose.
The real challenge that we face today, then, lies in explaining to a perplexed, but not necessarily hostile audience -- and perhaps even to ourselves -- why it is that the study of literature, anthropology, art history, or classics can be so meaningful, and why this existential rationale is just as important as other, more utilitarian ones. This line of argument stands in opposition to proclamations of the humanities’ uselessness: to declare that the humanities are of existential value is to affirm that they are very useful indeed.
So how might we go about defining this existential value? A good place to start would be with existentialism itself. A premise of existentialist philosophy is that we live in a world without inherent meaning. For atheists, this is often understood as the human condition following the death of God. But as Jean-Paul Sartre pointed out in “Existentialism is a Humanism,” even believers must recognize that they ultimately are the ones responsible for the production of meaning (in fact, many early existentialists were Christians). Abraham had to decide for himself whether the angel who commanded him to halt his sacrifice was genuinely a divine messenger. In Sartre's memorable formulation, man is “condemned to be free”; we have no choice but to choose. While it may feel as though a humanities vocation is a calling, you still have to decide to answer the call.
The realization that meaning isn’t something we receive from the outside, from others, but that it always must come from within us, from our conscious, deliberative choices, does not make us crave it any less. We are, existentialists insist, creatures of purpose, a thesis that psychological research has also confirmed.
Now what does this have to do with the humanities? It’s not that obvious, after all, how reading Madame Bovary, the Critique of Pure Reason, or The Book of the Dead can fill your life with purpose. At the same time, we also know that some people do find it deeply meaningful to peruse these works, and even to dedicate their careers to studying them.
What is it, then, that lovers of literature -- to consider but them for the moment -- find so existentially rewarding about reading? In a recent book, my colleague Joshua Landy argues that one of the more satisfying features of literature is that it creates the illusion of a meaningful world. “The poem forms a magic circle from within which all contingency is banished,” he writes apropos of Mallarmé’s celebrated sonnet en -yx. The order we discover in literary works may be magical, but it isn’t metaphysical; it comes from the sense that “everything is exactly what and where it has to be.” Art offers a reprieve from a universe governed by chance; what were merely sordid newspaper clippings can become, when transported into artful narratives, The Red and the Black or Madame Bovary. Landy suggests that fictions produce these illusions through a process of “overdetermination”: the ending of Anna Karenina, for instance, is foreshadowed by its beginning, when Anna witnesses a woman throwing herself under a train.
If art offered only illusions of necessity, it would hardly satisfy existential longing. Pretending that everything happens for a reason is precisely what the existentialists castigated as “bad faith.” Yet there’s an obvious difference between enjoying a novel and, say, believing in Providence. We don’t inhabit fictional worlds, we only pay them visits. No lover of literature actually believes her life is as determined as that of a literary heroine (even Emma Bovary wasn’t psychotic). So why does the semblance of an orderly universe enchant us so?
Well-ordered, fictional worlds attract us, it seems, because we, too, aspire to live lives from which contingency is kept at bay. Beauty, wrote Stendhal, is “only a promise of happiness.” As Alexander Nehamas suggested, in his book of this title, the beautiful work of art provides us with a tantalizing pleasure; beauty engages us in its pursuit. But what do we pursue? “To find something beautiful is inseparable from the need to understand what makes it so,” he writes. Behind the beautiful object -- sonnet, style, or sculpture -- we reach for the idea of order itself. The promise of happiness made by art is a promise of purpose.
But a promise of purpose is still a bird in the bush: it can disappear when you put down the book, or leave the concert hall. For the philosopher Immanuel Kant, art only provides us with an empty sense of purpose; or as he put it, in his distinctively Kantian way, "purposiveness without purpose" (it’s even better in German).
It’s true that few existential crises have been resolved by a trip to the museum or the download of a new album. But Kant may have underestimated how the sense of artistic purpose can also seep into our own lives. For instance, as Plato and every teenager know well, instrumental music can give voice to inexpressible feelings without the help of language. These emotional frameworks can convey a potent sense of purpose. When my youngest daughter spent six weeks in the neonatal ICU with a life-threatening condition, my mind kept replaying the second movement of Beethoven’s seventh symphony to tame my fears. Its somber, resolute progress, punctuated by brief moments of respite, helped to keep my vacillating emotions under control. As in films, sometimes it is the soundtrack that gives meaning to our actions.
The promise of order found in beautiful works of art, then, can inspire us to find purpose in our own lives. The illusion of a world where everything is in its place helps us view reality in a different light. This process is particularly clear -- indeed, almost trivial -- in those humanistic disciplines that do not deal primarily with aesthetic objects, such as philosophy. We aren't attracted to the worldviews of Plato, Kant, or Sartre purely for the elegance of their formal structure. If we’re swayed by their philosophies, it’s because they allow us to discover hitherto unnoticed patterns in our lives. Sometimes, when you read philosophy, it seems as though the whole world has snapped into place. This is not an experience reserved for professional philosophers, either: at the conclusion of a philosophy course that my colleagues Debra Satz and Rob Reich offer to recovering female addicts, one student declared, “I feel like a butterfly drawn from a cocoon.”
So where art initially appeals to us through intimations of otherworldly beauty, a more prolonged engagement with the humanities can produce a sense of order in the here and now. One could even say that Plato got things the wrong way around: first we’re attracted by an ideal universe, and then we’re led to discover that our own reality is not as absurd as it once seemed. And while particularly evident with philosophy, this sensation of finally making sense of the world, and of your own place in it, can come from many quarters of the humanities. In a delightful interview (originally conducted in French), Justice Stephen Breyer recently exclaimed, “It’s all there in Proust — all mankind!” Other readers have had similar responses to Dante, Shakespeare, Tolstoy, and many more.
But exploring the humanities is not like a trip to the mall: you don't set off to find an off-the-rack outfit to wear. Proust can change your life, but if you only saw the world through his novel, it would be a rather impoverished life. Worse, it would be inauthentic: no author, no matter how great, can tell you what the meaning of your life is. That is something we must cobble together for ourselves, from the bits and pieces of literature, philosophy, religion, history, and art that particularly resonate in us. “These fragments I have shored against my ruins,” T.S. Eliot wrote at the end of The Waste Land. No poem offers a better illustration of this cultural bricolage: Shakespeare answers Dante, and the Upanishads disclose what the Book of Revelation had suppressed.
So here we find an existential rationale for a liberal education. To be sure, the humanities do not figure alone in this endeavor: psychology, biology, and physics can contribute to our perception of ourselves in relation to the world, as can economics, sociology, and political science. But the more a discipline tends toward scientific precision, the more it privileges a small number of accepted, canonical explanations of those aspects of reality it aims to describe. If 20 biology professors lectured on Darwin’s theory of evolution, chances are they’d have a lot in common. But if 20 French professors lectured on Proust’s Recherche, chances are they’d be quite different. The same could be said, perhaps to a lesser extent, for 20 lectures on Plato’s Republic. The kinds of objects that the humanities focus on are generally irreducible to a single explanation. This is why they provide such good fodder for hungry minds: there are so many ways a poem, a painting, or a philosophy book can stick with you.
In his diatribe against the way the humanities have been taught since the '60s, Allan Bloom harrumphed, “On the portal of the humanities is written in many ways and many tongues, ‘There is no truth -- at least here.’ ” But the point of a liberal education is not to read great works in order to discover The Truth. Its point is to give students the chance to fashion purposeful lives for themselves. This is why authors such as Freud, whose truth-value is doubted by many, can still be a source of meaning for others. Conversely, this is also why humanities professors, many of whom are rightfully concerned about the truth-value of certain questions or interpretations, do not always teach the kinds of classes where students can serendipitously discover existential purpose.
There are more than existential reasons to study the humanities. Some are intellectual: history, for instance, responds to our profound curiosity about the past. Some are practical. To celebrate one is not to deny others. The biggest difficulty with defending the humanities is the embarrassment of riches: because humanists are like foxes and learn many different things, it is hard to explain them to the hedgehogs of the world, who want to know what One Big Thing we do well. The danger is that, in compressing our message so it gets heard, we leave out precisely the part that naturally appeals to our future students. Yes, students and parents are worried about employment prospects. But what parent doesn’t also want their child to lead a meaningful life? We are betraying our students if, as a society, we do not tell them that purpose is what ultimately makes a life well-lived.
Dan Edelstein is a professor of French and (by courtesy) history at Stanford University. He directs the Stanford Summer Humanities Institute.
We hear it again and again: The jobs of the future are going to take hustle. Job-seekers will have to be creative, generate buzz, be extraordinary. Make their own luck.
So why, in my Dorothea Lange vision of present conditions, do I visualize a young person with a cardboard sign that reads, “Too Tired to Hustle”?
As a community college professor, I’m proud that our institutions are open-admission. With very rare exceptions, there’s no qualifying exam. We don’t, for reasons of experience or ability, turn people away. But what are we turning them toward? What jobs lie ahead for my students? That question is increasingly troubling.
At my first community college gig 15 years ago, my students -- for better or for worse -- often met the then-stereotype of community college: the place you end up only because it is your first chance, or your last. Some of my students had parole officers, some had just become citizens, some had meandered through high school, and few had parents who had themselves gone to college.
My students today -- at a much nicer campus in a less disadvantaged part of the country -- meet those negative stereotypes less and less. That recent community college students are increasingly of traditional college age and qualifications is evident to me in my classroom and in their written work. More often than not, now, mine are “university” students simply priced out of the market for four-year education, or prudently looking for the first two years at a bargain.
But for all their improved preparation, they are anxious -- terribly anxious -- and I am anxious for them.
I am anxious not only for the same reasons they are -- the onset of a debilitating student loan burden, the desperate competition for unpaid internships, the concern that there might simply be not enough jobs to go around.
I am anxious, also, for a reason that many of them have not caught onto yet: the mismatch between the supposedly “good” jobs that popular wisdom seems to suggest will definitely continue to exist -- entrepreneurial, experimental, start-up jobs, jobs of risk, hustle, and verve -- and the jobs my students claim to want. Flipping through a semester’s worth of self-introductions is like an obituary pamphlet for Old Economy employment. Again and again, they express a desire for mostly public or public-ish, long-term, safe and stable, even unionized, positions: firefighting, criminal justice, firefighting, nursing, nursing, teaching, teaching, teaching, radiology, firefighting, criminal justice.
Although a few students write, vaguely, business, and a few more, computer science, few are writing, “I want to start my own company,” “I want to freelance myself as a consultant,” “I’m going to sell myself, I have a vision, and I’m going to hustle until I get there, on my own.” There’s little excitement, to tell you the truth. There’s just the longing for a job where you do one thing, easily described, for a long term, and get predictably and sufficiently paid for what you do.
My students don’t want to be astronauts. They want jobs with reasonable, set hours, job security and pensions.
And I don’t know how to break it to them. I don’t know how to sell the alternative -- the more realistic future of work, that sort of chance, the chanciest chance I’ve ever sold.
To be clear, there’s nothing wrong with the competitive capacity of my students; if anything, they seem more experienced in cutthroat competition than ever before. What is exhausted -- just worn and jaded, from constant use, and such challenging odds of reward -- is their inner reserves. Their belief that hustle can actually, well, work. And their trust that a hustle-world -- a world of contingent, not permanent, labor; of setting your own path, not following the path of an established bureaucracy; and of preparing, always preparing, not for the present, but for the as-yet-unimagined-job-that’s-next -- will be a good one, an equitable one, a world they’ll want to join. Or that will include a place for them, even if they do.
The problem with making your own luck is that it requires so much previous luck. To be nimble, to be ready, to have the excess emotional capacity to take future self-driven employment by the balls -- you need to not already be tired, scared, in shelter-mode. To risk more, you have to have not lost too much already. Or at least: not having lost too much already really, really helps.
Many of my students are not the unluckiest, but neither have they been that lucky. They are willing to work, but too tired to hustle. And that used to be enough.
Nicole Matos is associate professor of English at the College of DuPage, in Illinois. Her writing credits include Salon, The Rumpus, berfrois and numerous other literary and academic journals.