At 30 years old, I definitely consider myself part of the Facebook generation. Zuckerberg’s brainchild hit the ‘net when I was a senior in college, and by then I was already well acquainted with e-mail, chat rooms, text-messaging, and all the multifarious precursors to today’s social media. I text, I post, I chat, I even snapchat: in these respects, I’m an utterly unremarkable member of my society.
But I also happen to be a college professor and a molder of young minds. And, far from indulging the technology-driven spirit of the times, I make my students work as students have always worked. They read Seneca, Pascal, Tolstoy, and Schopenhauer. They are obliged to turn in papers by hand; they must come to office hours to speak with me about their grades; they are even, and this is most anachronistic of all, required to attend class. Physical presence is key to every aspect of their learning experience, be it my hovering, breathing presence in the classroom or the office, the cohort of 30 or so warm bodies that shows up for lecture twice a week, or the more abstract form of embodiment conveyed by the weight of a book.
To believe certain commentators, however, this embodied notion of learning is on its way out in American higher education. Writing for The American Interest’s January/February 2013 edition, the recent Yale graduate Nathan Harden offers the following ominous prognostications about the future of university instruction in our digital age:
In fifty years, if not much sooner, half of the roughly 4,500 colleges and universities now operating in the United States will have ceased to exist. The technology driving this change is already at work, and nothing can stop it. The future looks like this: Access to college-level education will be free for everyone; the residential college campus will become largely obsolete; tens of thousands of professors will lose their jobs; the bachelor’s degree will become increasingly irrelevant; and ten years from now Harvard will enroll ten million students.
On Harden’s account, one of the principal reasons for this portended transformation, which is already being partially implemented by such institutions as Harvard and MIT, is that the cost of college is increasingly out of proportion with its perceived economic benefit. As the American job market has become more competitive, the cost of a degree has increased, and only the most naïve of students still believe that a college education is a universally redeemable ticket to middle-class prosperity. The weighing up of costs and benefits involved in earning a college degree will lead inevitably to a re-evaluation of the current higher education model. Luxury residence halls, face-to-face interaction between professors and students, ivied brick walls -- these will all be things of the past once the much-heralded education bubble finally bursts. What will replace them are massively populated, inexpensive online courses and lectures, prerecorded by the very best lecturers and administered by those hordes of professors and other academics not quite sexy or charismatic enough to warrant virtual celebrity.
To anyone who thinks Harden’s predictions are a little too ambitious (not to mention deeply disturbing, at least for college professors who don’t fancy the idea of working in a grading factory), don’t worry -- they most likely are. What Harden forgets -- and indeed, what just about everyone prophesying the eclipse of face-to-face interaction in a virtual world forgets -- is that human beings are, above all else, bodies, and that to lead full, happy, and meaningful lives, we need other bodies. Let’s consider the following examples of how technologies of virtualization have failed to triumph over our species’ thirst for physical presence.
1. The Giant Head. Some older readers may recall a famous article in Reader’s Digest from the late 1950s featuring an illustration of a massive human head connected to minuscule arms and legs. What was the thesis of that article? The tech junkies of the time believed that in the future technology would become so advanced that human beings would no longer need to use their bodies, leading to a swelling of the brain and a shriveling of our appendages. Many also foretold a time when food supplements would replace food. Wouldn’t it be great, they asked, if instead of spending hours preparing and eating meals, we could nourish ourselves in just a few seconds? No one at the time seemed to consider that human beings might not want to do any of this — that we might enjoy using our bodies, eating, and the like. In the half-century since these predictions were made, restaurants have proliferated, and heads haven’t grown one bit.
2. Live Theater. When I was a kid, there were hardly any live theaters in my hometown of Bakersfield, Calif. Now there are about ten. Many people used to believe that movies had sounded the death knell for live theater, but today the latter enjoys just as much, if not more, prestige than it did 100 years ago. I recently had the good fortune to see Kevin Spacey’s production of Richard III. I’ll remember his performance for the rest of my life — it had never occurred to me that acting could be so visceral, so violent, so physical. How many of us can say the same thing about movies? Again, those who foretold the demise of live theater never reckoned that people might just plain like seeing living bodies move around and speak on the stage, and that no amount of special effects could compensate for the lack of real flesh and blood.
3. The Myth of Social Media. This myth holds that virtual, online or technologically mediated interactions are in the process of replacing face-to-face interactions. Most people never take the time to think about what the world would be like if this were really the case. I live in a small college town, and I can assure anyone interested in such things that student interactions on Friday and Saturday nights are plenty physical — sometimes I can hear them from across the lake! Social media does little more than provide a way of sharing information that enhances the intimacy of eventual physical contact. Anyone who doesn’t know this doesn’t understand the technology.
Of course, people like Harden will point to other sectors of the economy where technological innovation has erased thousands of jobs. People don’t need information from stockbrokers or travel agents to make decent decisions about travel or investment anymore, so why should a living, breathing professor be necessary to convey the sort of information one gets out of a college education? If that information can be distributed more cheaply thanks to virtualization, why should students be expected to bear the extra expense of classroom education?
The answer to this question is so elementary that the objection prompting it is almost hard to take seriously. The truth is that education is not simply the conveying of information. In fact, it is probably only marginally that. How many people remember most of what they learned in college? Only very few, I would guess. The benefit of a classroom education is that it keeps students under a certain amount of mental pressure, forces them to think on the spot, and obliges them to explain themselves to other people who are physically present. Information is afoot in these interactions, but so are wisdom, passion, empathy, and a whole host of other viscera that only an embodied teacher or student can properly convey.
How effective, for instance, do we imagine an online church experience would be compared to the real thing? Is it reasonable to think that a virtual tour of the cathedral at Chartres would be as spiritually moving as being there? We should also consider that many students might simply enjoy the physical classroom and their interaction with peers and professors -- or at least they might recognize that they learn better under these conditions. The costs of classroom education may be soaring out of proportion at present, but this is not a verdict on the education itself.
So let’s ask -- what developments are behind these grim auguries of the collapse of America’s higher education model? Some of it undoubtedly has to do with politics. Many commentators on the right (and perhaps Harden is one of them) would likely cheer the dismantlement of a system whose values are often perceived as far left of center. If taking education online can put “tenured radicals” out of work, then why not welcome it? At the same time, however, just as many moderate and left-leaning thinkers have joined the chorus of those predicting the failure of higher education (for instance, see Thomas Friedman’s recent writings in The New York Times), and it would be simplistic to chalk this latest round of doom-peddling up to politics.
The real culprit, I suggest, is what, for lack of a better term, we might call Appleism. Innocent in principle but nefarious in practice, the doctrine of Appleism holds that increases in technological capability are synonymous with increases in human happiness. Anything that can be put on a screen is better than what can be seen with the naked eye. The passage of electrons through a cathode tube is equivalent to passage from a lower to a higher state of being. Proponents of Appleism hold out technology as an intrinsic good; they are the sorts of folks who compulsively buy the latest Apple product, simply on principle.
We can point to fiscal insolvency all we want, but one has difficulty believing that Harden’s and others’ vision of a fully or almost-fully online education is not also the product of society’s limitless fascination with virtualization. Proponents of the current craze ought to think carefully about the human costs of technology before enthusiastically proclaiming the end of a system -- an end that could leave hundreds of thousands of people without work and students cheated out of a quality education, and that would further contribute to the creation of a world where virtualization is always and everywhere, without qualification or questioning, heralded as an unequivocal good.
Louis Betty is an assistant professor of French at the University of Wisconsin-Whitewater.
If you are an enrolled student and have any questions, feedback, bug reports, suggestions, or any other issues specific to a particular class, please post on the discussion forums of that class. This is the most effective way to get heard, as both the teaching staff and Coursera staff routinely check the discussion forums, and your fellow students will also be able to answer your questions and help you resolve technical issues.
—Coursera Support Center
anyone out there know whats going on in this Beckett play?
something about death
doesn’t get to the point fast enough…like prof winkler
i literally skipped winkler’s last 3 lectures, but with 1500 students whose gonna know?
You get out of it what you put into it.
that’s what she said!
CAN SOMEONE PLEASE ANSWER MY QUESTION?
does anyone think godot’s ever gonna come?
that’s what she said!! lol
he hasnt by the end of act 1
bet he doesn’t come at all. boy, will those two bums be disappointed!
lotta waiting in this sucker.
Know how long I had to wait on line at Best Buy on Black Friday?
look this isn’t the time
Ten hours. Ten fricken hours. But I got the TV!
then you should just watch the movie version
Waiting for Godot. can’t remember who stars in it, but it’s really slow. nothing ever happens.
Because nothing ever does. it’s like real life
Okay, I’m late to this discussion, but what’s the question?
What are those two guys waiting for, and why don’t they ever get moving?
Good question! Think it’ll be on the quiz?
Doesn;t matter. We can always cheat.
I’ll pretend I didn’t hear that.
Anyway its peer review. I’m grading your quiz.
You mean Winkler doesn’t look at them?
You kidding? He’s a prof at Harvard!
but he’s up there every week, talking to us
That’s just a video. Probably made that months ago.
Maybe he’s dead.
You mean like Godot?
its not about the plot, its like existential.
That’s some help.
It’s based on Freud’s trinitarian ego, id, and superego structure, asshole.
No, it’s all about the Cold War.
It’s Plato’s allegory of the cave.
I want an answer to my question. I’m going to e-mail Prof. Winkler.
You can’t. It comes back “addressee unknown.”
Really? Thats like so existential.
That’s it. I’m outa here.
They do not move.
Who typed that? Hey! Is that Prof. Winkler? Are you monitoring this? I wanna know, I wanna know! And is this going to be on the quiz?
David Galef directs the creative writing program at Montclair State University. His latest book is the short story collection My Date With Neanderthal Woman (Dzanc Books).
Submitted by Andrew Ng on January 24, 2013 - 3:05am
Educators create online courses for the same reasons that they became teachers to begin with: to educate students, broaden their awareness of the world and thereby improve the students’ lives. And with massive open online courses (MOOCs), educators can now reach many more students at a time. But MOOCs offer many other benefits to the education community, including providing valuable lessons to the instructors who teach them.
Online courses inherently allow students to create their own pathways through the material, which forces educators to think about the content in new ways. And MOOCs offer professors fresh opportunities to observe how their peers teach, learn from one another’s successes and failures, and swap tactics to keep students engaged. This, in turn, makes them better teachers.
MOOCs are still the wild west of higher education, and there is no “one size fits all” approach to building one. At Coursera, we’ve been working with educators as they experiment with designing courses for this new format, and for a student body of unprecedented proportions. (For example, Duke University’s Think Again: How to Reason and Argue by Walter Sinnott-Armstrong and Ram Neta has more than 180,000 enrolled students.) We’re reimagining many aspects of what it means to teach a course, ranging from lecture delivery, to assignments, to strategies for engaging the online community of students.
While there are many resources for teachers to learn from when approaching online education, we’ve become aware that there is still a need for a central space for professors to share successful practices, ask each other questions, and showcase examples of what’s worked and what hasn’t in their online classes. Recently, we launched a course called Teaching a MOOC, open to all of the professors on the Coursera platform (we’ll be launching a free, public version soon). It functions like any of the courses we offer, including video lectures that offer guidelines for developing an online course for the first time, discussion forums and a gallery where professors can see examples from other classes. And that’s just the tip of the iceberg.
An educator who’s been teaching in a traditional classroom format faces many challenges and unknowns when creating an online course. The lecture creation process is different. The peer-graded homework is different. The process for managing your “classroom” is different. Even the copyright law requirements are different. Jeremy Adelman, a Princeton University professor who teaches A History of the World Since 1300, explains, “When you lecture into a recording box, it’s different from lecturing to students in person. I have a teaching style that relies on energy from students, and I had to figure out strategies that would transcend [that style] for my class on Coursera.”
Adelman discovered that in putting his course online, he became more focused on what students are experiencing, even though he wasn’t in direct contact with them. “When I lectured, I had to ask myself at all times ‘What is it that I want my students to learn?’ In the old-fashioned lecture hall I was an entertainer, more self-focused rather than teaching-focused, but I was not conscious of this dynamic until I put a course online for the first time,” he says. “For me, the lectures alone were a source of continuous learning and adaptation.”
Throughout the entire MOOC creation process, educators must constantly be student-focused, figuring out what is the most useful content for their students to experience next. With no admissions office, online students are vastly more diverse than the students in a typical college classroom. They vary in educational background, learning ability, and culture. Students are also at different points in their lives, ranging from teenagers to working professionals to retirees, and they may have different learning goals. Educators have to make classes accessible without underestimating student ability.
Stanford professor Scott Klemmer was pleasantly surprised by his experience teaching a Human-Computer Interaction course. His class was the first to use peer grading (in fact, he worked with Daphne Koller and me to design Coursera’s current peer assessment system). After using self-assessment for six years in his class at Stanford, he thought there was “no way” that he could expect students to handle self- and peer-assessment online.
“But it worked amazingly well,” Klemmer explains. “When we surveyed students at the end of class, one of the things they rated highest, in terms of what taught them the most, was the act of assessing peers -- they found it extremely valuable. I put a huge amount of time into designing course materials based on rubrics and assessment techniques that I taught in my Stanford class on campus; I had no idea what it would mean to translate that into the online world.”
There has always been a tendency in distance education to focus on the physical barriers -- the distance between the professors and the students, and between the students themselves. Many people, including those in academia, believe there to be a broadcast quality to online lectures, with one person delivering lectures to students behind screens, where they can’t engage directly with the professor. They wonder, “If the professors don’t see their students, how can it be teaching?”
But through today's technological advancements, online courses are very much alive. They are part of an ecosystem that, if nurtured through community discussion forums, meetups, e-mails, and social media (like Google+ hangouts), can flourish and grow. This allows each class’s community to take on a life of its own, with a distinct culture that’s defined at least as much by the students as the instructor, and which even skillful instructors can only guide, but not control. Nearly every instructor that I’ve spoken to has been surprised by the deep desire of students to connect with each other as well as with the teaching staff and professor.
University of Michigan professor Eric S. Rabkin found his experience teaching Fantasy and Science Fiction: The Human Mind, Our Modern World incredibly enriching. “I had not anticipated the kindness and excitement I see in this large body of participants. Despite the potential for impersonality, I have received emails of thanks, of enthusiasm, of discovery. I have replied to some of those and some of my replies have been re-posted to the forums by the recipients. The community knows I care and, at first astonishingly to me, cares back. They care enough not only to spend time with each other but to share their experiences, some even through blogs of their own, with the wider world,” he says. “Amazingly, this feels somehow like a family. Not like a nuclear family, but like a suddenly discovered distant city brimming with eager cousins one had never known before.”
“I have been [teaching] the same way for years -- for decades and decades -- without being mindful of the changes in technology, the changes in our students. Online courses blow up the old conventions. But I think it will take us a while to figure out what works and what doesn’t work,” says Princeton professor Jeremy Adelman.
University of Pennsylvania professor Al Filreis, who teaches Modern & Contemporary American Poetry, says that teaching online has given him his “most extraordinary pedagogical experience” in 30 years of teaching. “The course is rigorous and fast-paced, and the material is difficult, but the spirit of curiosity and investigation among the students produced very good results,” he says. “Several eminent poetry critics joined the course to rate the quality of the students' critical writing and came away very impressed -- and surprised. We discovered that a qualitative, interactive humanities course can indeed work in the MOOC format."
With MOOCs, there is so much more potential for educators to go into each other’s classrooms and share resources with their peers. We’re seeing this happen more and more, especially when it comes to professors adapting online course structures from other professors.
“Online education means that I have shared more stories with fellow professors about teaching than I had in the eight years I’ve spent teaching on campus,” says Stanford professor Scott Klemmer.
We might not have an answer to the question “What defines a high-impact MOOC?” just yet, but universities and professors who have taken the plunge are constantly learning and growing from their experiences. And what we’re seeing emerge from the trenches is an exciting new breed of education.
Andrew Ng is a co-founder of Coursera and a computer science faculty member at Stanford University. He is also director of the Stanford Artificial Intelligence Lab, the main AI research organization at Stanford. In 2011, he led the development of Stanford University's main MOOC platform, and also taught an online machine learning class that was offered to over 100,000 students, leading to the founding of Coursera. Ng's goal is to give everyone in the world access to a high quality education, for free. His Twitter handle is @AndrewYNg.
The rush toward the creation of massive open online courses (MOOCs) is catching on in higher education like wildfire. All it takes, it seems, is to wave a bit of money around, talk up the brave new world of technological innovation, bash the “failed” world of higher education as we know it, and the privatization troops have administrators in a fit of unexamined, swooning technophilia. These “courses,” however, in addition to offering false promises, also undermine shared governance, run roughshod over established curriculum development procedures and move colleges toward the era of “teacherless classrooms,” which destroy the academic integrity of our institutions and demean the value of the education our students receive.
MOOCs are designed to impose, not improved learning, but a new business model on higher education, which opens the door for wide-scale profiteering. Public institutions of higher education then become shells for private interests who will offer small grants on the front end and reap larger profits on the back end.
At present, MOOCs are being proposed as solutions to enrollment shortages, among other things, in open-access institutions such as community colleges. The MOOC crowd promises cost savings, efficiency, improved access and the answer to our “completion” woes. The concern, as voiced by Arne Duncan himself, is that in our quest to increase completion, maintain quality and save money: “The last thing we want to do is hand out paper that doesn’t mean anything.” Wethinks he doth protest too much.
And that’s the big lie behind this allegedly noble quest to provide much broader access to higher education and improve student learning. There is not a bit of proof that MOOCs will do so in any meaningful way. The notion is to turn community colleges into Petri dishes for MOOC experiments, principled objections be damned. There are costs to cut in the public sector and dollars to be made in the private sector.
The much-hyped arrival of MOOCs has been made possible by the Bill and Melinda Gates Foundation and a host of the usual corporate education reform suspects, who have long been involved in a full-court press propaganda campaign for their venture/vulture philanthropy.
Some of these interests are trying to figure out schemes for monetizing MOOCs in such a way that the small percentages of students passing MOOCs in cyberspace would pay institutions for certificates of competency awarded for completion of prescribed course regimens. Indeed, colleges and universities conceivably might even cash in further by recommending the most successful students to corporate interests … for fees.
Critics, meanwhile, are easily dismissed as part of the corrupt old world of failed higher education, troglodytes as afraid of this bold new magic as cavemen were of fire. And to the consternation of the self-proclaimed “change agents,” those reactionary faculty shielded by union contracts and powerful academic senates stubbornly resist the next new wave. Never are the implications of the MOOC offerings — typically announced with fanfare — outlined with respect to faculty and classified staff workloads (e.g., registering students and setting up and maintaining the technology infrastructure for individual course sections which, in some cases, have enrollments in excess of hundreds or even tens of thousands of students). Students will grade each other, or course work will be evaluated through word recognition computer software programs. Faculty, the promoters tirelessly stress, just must stop lecturing, instead becoming “facilitators” for student engagement in experiential education. And what of the student support services? Or, perhaps, in this idyllic (or should that be dystopic?) educational space, those needing support are just left out in the cold, after corporate partners first have made their millions through software sales.
In the San Diego Community College District we have dared to step in front of the vaunted train of progress that many of us see as nothing more than a repackaged Taylorism for academia. The San Diego City College Academic Senate recently passed a resolution decrying the move toward MOOCs. The resolution followed on the heels of a faculty presentation at the San Diego Community College (SDCCD) Board of Trustees meeting in response to administrative attempts to circumvent the departmental and collegewide shared governance process so as to rush through grant applications for MOOCs at both City and its sister college, San Diego Mesa College, before any campuswide discussion had occurred. This resulted in Chancellor Constance Carroll declaring a one-year moratorium on MOOCs in the SDCCD while a task force investigates the appropriateness of this new form of instruction for our district.
In our view, the central philosophical flaw in the MOOC paradigm is that proponents believe that there is nothing to be lost in turning professors into glorified tutors, parts of a larger information delivery system. What this misses is the key fact that the heart of what we do as college educators has to do with the immeasurable human interaction that we have with our students and the vital social experience of the face-to-face classrooms. This is something that simply can never be reproduced by a new technology, no matter how advanced.
Demoting professors to the level of information delivery systems may be gratifying to our detractors and financially attractive to bean-counters but it won’t improve education in the process of “transforming it”; it will degrade it. But to the academic Taylorists, who don’t believe in anything that can’t be quantitatively measured, this kind of thinking is destined for the dustbin of history.
No doubt the brave new world of MOOCs will give lots of people who can’t go to Harvard access to “Harvard,” but it won’t be the same. Indeed, the future of higher education will be less egalitarian and far more two-tiered, with the sons and daughters of the elite still receiving real top-quality educations while other folks will get something different, quick, cheap and easy.
But this tale of two futures is perfectly in line with the thinking of the plutocrats who brought us the “productivity revolution” in the business world. There they got a smaller number of American workers to labor longer hours for the same money and fewer benefits while increasing productivity and bringing record profits for those at the top. In the realm of higher education, they can blame the colleges for the fact that fewer graduates are prepared for employment in the austere marketplace that they fostered while milking our schools for profit and transforming them to their purposes at the same time. It’s nice work if you can get it.
In the meantime, our job as professors, according to the dictates of the emboldened technocrats, is to become rope-makers for our own professional hangings. The debate here is not really one about technology and higher education, as most of us know that online education is now a permanent part of the educational landscape with legitimate uses. No, what this MOOC debate is about is whether we blithely open the door to the gutting of what is most precious about what we do.
If the unthinking technophilia and new Taylorism which MOOCs represent ends up killing face-to-face education as we know it, it won’t be because the technology offers a superior form of education. It will be because our visionless political and educational leaders have almost entirely abandoned educational values for market values. As many scholars have noted, in the era of neoliberalism we have just about given up on the notion of education as a public good rather than a mere commodity. Let’s hope we don’t allow this near-total triumph of market values to destroy one of the last public spaces in our society not completely determined by greed and instrumentalism. As opposed to the creed of the forces of privatization, we believe that there are still things whose value cannot be determined by the market and that education in a democratic society should be much more than an instrument of our economic system.
Six community college faculty members
Jennifer Cost is chair of the English department at San Diego Mesa College.
Jim Miller is professor of English at San Diego City College.
Jonathan McLeod is professor of history at San Diego Mesa College.
Marie St. George is professor of psychology at San Diego City College.
Peter Haro is president of San Diego City College Academic Senate.
Jim Mahler is president of the American Federation of Teachers for the San Diego and Grossmont–Cuyamaca Community College Districts.
“The best vision is peripheral vision.” – Nicholas Negroponte
Back when I was a carefree grad student, some 20 years ago, I decided to write a dissertation about apocalyptic discourse. The millennium was then looming on the horizon like a Mayan baktun or a disruptive innovation, threatening to bring about the end of the world as we knew it. In one corner stood those who confidently predicted that a comprehensive desolation would be visited upon everything that we’d once held sacred, and in the other corner stood those for whom the clanging bell of the millennium would most certainly signal the restoration of a profound peace and a lasting illumination.
Who could resist stepping into a fight like that?
Not only did it sound like fun to tell them they were both wrong, but it also seemed fundamentally true. I felt pretty confident, for a start, that the world wouldn’t really end.
I feel much the same today when I confront yet another breathless news story about whatever latest innovation (hint: it’s always a MOOC) is going to change higher education utterly, for better or worse, full stop. Except it doesn’t, and they never do – film didn’t change education utterly, television didn’t, the computer didn’t, and the Internet hasn’t. After all, I write this to you from the cozy corner of a major urban research university; the old ways are still very much with us, even as we make way for the new.
While I don’t want to lay the blame for all of this apocalyptic rhetoric at the doorstep of Clayton Christensen, I would still like to have a word.
Don’t get me wrong, I’m a longtime fan of online learning (as well as classroom learning; I’ve studied and taught in both environments, and each has its strengths). I also like change as much as the next person – whether it’s the sun coming out in December or a shiny new iPhone. All of these things have something to recommend them. But do any of them portend the end of time? I don’t think so.
I read Christensen’s Innovator’s Dilemma with real interest back in 1997 when I was an IT market analyst. I even reviewed it for a trade newsletter. I think his theories of “sustaining innovation” and “disruptive innovation” offer interesting tools for understanding how a variety of industries have evolved at particular moments in the past.
Whether these theories can be used to predict the future, however – well, I’m not so convinced.
For example, I was surprised to read the following prediction from Christensen’s Disrupting Class in 2008: “by 2019, about 50 percent of high school courses will be delivered online.” I already knew back then that the penetration of online learning in higher education was nowhere near approaching that level, and I also knew that higher education was far advanced in its experimentation with online learning relative to high schools. While Christensen and his co-authors had some interesting mathematical models to draw upon to support their prediction, common sense and recent history seemed to suggest that the revolution probably wouldn’t come as early as they’d anticipated. In fact, if the prediction turns out to be true in six years’ time, I’ll eat this article.
My real beef, though, isn’t with Christensen. He’s a smart man, even if he’s still something short of a prophet. My real problem is with the acolytes – and I urge any young readers out there considering a dissertation of their own on apocalyptic discourse to keep an eye out for these types. Acolytes tend toward reductivism, simplification, and speaking very loudly. And often they mangle the prophet’s core message in the process, occasionally even inverting it.
Take disruptive innovation. Please.
In higher education, at least judging by the recent conferences I’ve attended, far too many people have come away from Christensen’s work (or whatever second- and third-order echoes of it they’ve picked up from the media) with the idea that, if we all try, we can simply disrupt ourselves. And that way, nobody has to lose a job or a research grant or move back to the home office.
Among this strand of believers, “disruptive” innovation appears to be synonymous with “cool” innovation, or even simply “change” – or even, more simply, “the status quo.”
What these believers forget, or perhaps never knew, is that Christensen uses the concept of disruptive innovation as a means of describing how the giant company is so often killed by the little guy with the slingshot – a slingshot that just happens to be cruder, easier to use, less expensive, and more attractive to a heretofore unengaged set of new consumers than the giant’s weapon of choice. In other words, if genuinely disruptive innovation does occur within higher education, traditional universities are much more likely to play the role of the giant than the innovator.
And yet some of the more attentive readers of Christensen’s work have taken to heart his hopeful message that the only way for incumbent leaders to survive these market disruptions is to create new and separate business units of their own, free from profit pressures and growth strategies of the core business, and allow them to break all the rules en route to coming up with “the new, new thing” that will prove to be the true category-killer. If any institutions within higher education succeed at disrupting themselves, it may be the few that have adopted this model. But for those institutions working desperately to preserve the rules and still somehow survive in a dynamic market – the ending may not be the one they expect.
In the months ahead, I’d like to use this column to reframe and refocus the conversation about innovation in higher education. All of this talk about disruption has become a distraction – an apocalyptic tic. It reminds me of that great line from Lermontov: “He in his madness prays for storms, and dreams that storms will bring him peace.” Let’s leave that aside for the time being and look at productive change in higher education from a different vantage point.
“The best vision,” Nicholas Negroponte likes to say, “is peripheral vision.” New ideas are out there – in the margins, away from the main frames of reference. Their immediate effects may be small or local in scale, but they can gradually introduce meaningful improvements to mainstream practices. In this column I want to examine some of the interesting experiments taking place in the margins of our field of vision – experiments that may well inform how we refine our approach to delivering higher education going forward.
Along the way, I’d like to propose that we focus on a humbler but still worthwhile form of innovation – the kind that isn’t dependent upon hype, gadgetry, a singular eureka moment, or the game-changing end of all that came before it. I’d prefer to focus on the kind of innovation characterized by a planned and responsible approach to strategy and management, the kind that continuously seeks to transform products and services in ways that are more responsive to the evolving needs of the contemporary marketplace, and which, as a consequence, delivers enhanced benefits to all participants, whether they be students, faculty, administrators, parents, governments, or the public.
And let’s assume the world is still there with us, too.
Peter Stokes is executive director of postsecondary innovation in the College of Professional Studies at Northeastern University.
Submitted by Aaron Bady on December 6, 2012 - 3:00am
Clay Shirky is a big thinker, and I read him because he’s consistently worth reading. But he’s not always right – and his thinking (and the flaws in it) is typical of the unquestioning enthusiasm of many thinkers today about technology and higher education. In his recent piece on "Napster, Udacity, and the Academy," for example, Shirky is not only guardedly optimistic about the ways that MOOCs and online education will transform higher education, but he takes for granted that they will, that there is no alternative. Just as inevitably as digital sharing turned the music industry on its head, he pronounces, so it is and will be with digital teaching. And as predictably as rain, he anticipates that "we" in academe will stick our heads in the sand, will deny the inevitable -- as the music industry did with Napster -- and will "screw this up as badly as the music people did." His views are shared by many in the "disruption" school of thought about higher education.
I suspect that if you agree with Clay Shirky that teaching is analogous to music, then you are likely to be persuaded by his assertion that Udacity -- a lavishly capitalized educational startup company -- is analogous to Napster. If you are not impressed with this analogy, however, you will not be impressed by his argument. And just to put my cards on the table, I am not very impressed with his argument. I think teaching is very different from music; that it is so different as to make the comparison obscure a lot more than it reveals.
But the bigger problem is that this kind of argument is weighted against academics, virtually constructed so as to make it impossible for an academic to reply. If you observe that "institutions will try to preserve the problem to which they are the solution," after all -- what has been called "The Shirky Rule" -- it can be easy to add the words "all" and "always" to a sentence in which they do not belong. This is not a principle or a rule; it’s just a thing that often happens, and often is not always. But if you make the mistake of thinking that it is, you can become uniformly prejudiced against "institutions," since you literally know in advance what they will do and why. Because you understand them better than they understand themselves -- because they don’t or can’t realize that they are simply "institutions" -- you can explain things about them that they can neither see, nor argue against. "Why are you so defensive?" you ask, innocently, and everything they say testifies against them.
If someone like me -- a graduate student for many years, currently trying to find an academic job -- looks at MOOCs and online education, and sees the downsides very clearly, it’s also true that no one has a more strongly vested interest in arguing the benefits of radically transforming academe than Clay Shirky and a number of others who talk about the inevitability of radical change. As Chuck Klosterman once unkindly put it, "Clay Shirky must argue that the Internet is having a positive effect – it’s the only reason he’s publicly essential." Which is not to say that Shirky is wrong, simply that he must prove, not presume, that he is right.
I have to go through this excessively long wind-up because of the ways that Shirky has stacked the rhetorical deck in his favor. He uses the word "we" throughout his piece, and in this powerful final paragraph, he hammers us over the head with it, so precisely that we might mistake it for a caress:
"In the academy, we lecture other people every day about learning from history. Now it's our turn, and the risk is that we’ll be the last to know that the world has changed, because we can’t imagine — really cannot imagine — that story we tell ourselves about ourselves could start to fail. Even when it’s true. Especially when it’s true."
But what do you mean "we," Mr. Distinguished Writer in Residence? I asked Shirky on Twitter if he considered himself primarily an academic, and though he didn’t respond, it’s important that he frames his entire post as if he’s an insider. But while it’s certainly true that I am biased in favor of academic labor continuing to exist in something like its present form, he is no less biased by having nothing to lose and everything to gain if academe is flipped on its head. And yet the cumulative rhetorical effect of his framing is to remind us that no one within the institution can speak knowledgeably about their institution, precisely because of their location within it; when Shirky speaks of "we" academics, he does so only to emphasize that "we" can’t imagine that the story we tell ourselves is wrong.
It's because he is willing to burn the village to save it that Shirky can speak for and of academe. And Shirky never has to show evidence that online education will ever be any good; he notes an academic’s assessment of a Udacity course as "amazingly, shockingly awful" and is then, apparently, satisfied when Udacity admits that its courses "can be improved in more than one way." A defensive blog post written by Udacity’s founder is enough to demonstrate that change for the better is happening. And when the academic who criticized the Udacity course mentions a colleague whose course showed some of the same problems -- but does not name the colleague -- Shirky is triumphant. The academic in question "could observe every aspect of Udacity’s Statistics 101 (as can you) and discuss them in public," Shirky observes, "but when criticizing his own institution, he pulled his punches."
This is Clay Shirky’s domain, and also the domain of so many others who point to one or another failing of traditional higher ed to suggest that radical change is needed. The anecdote that illustrates something larger. In this case, the fact that academe is a "closed" institution means it cannot grow, change, or improve. By contrast, "[o]pen systems are open" seems to be the end of the discussion; when he contemplates the openness of a MOOC, the same definitional necessity applies. "It becomes clear," he writes, "that open courses, even in their nascent state, will be able to raise quality and improve certification faster than traditional institutions can lower cost or increase enrollment.” It becomes clear because it is clear, because "open" is better, because it is open.
But how "open" is Udacity, really? Udacity’s primary obligation is to its investors. That reality will always push it to squeeze as much profit out of its activities as it can. This may make Udacity better at educating, but it also may not; the job of a for-profit entity is not to educate, but to profit, and it will. There’s nothing necessarily wrong with for-profit education -- and most abuses can be traced back to government deregulation, not tax status -- but the idea that "openness," as such, will magically transform how a business does business is a massively begged question. A bit of bad press can get Sebastian Thrun to write a blog post promising change, but actually investing the resources necessary to follow through on that is a very different question. The fact that someone like Shirky takes him at face value -- not only gives him the benefit of the doubt, but seems to have no doubt at all -- speaks volumes to me.
Meanwhile, did the academic that Shirky criticizes really "pull his punches"? Did he refrain from naming his colleague because of the way academics instinctively shield each other from criticism? It’s far from clear; if you read the original blog post, in fact, it’s not even apparent that the academic knew who this "colleague" actually was. All we really know is that a student referred to something her "last teacher" did. But suppose he did know who this student’s last teacher was; suppose the student mentioned the teacher by name. Would it have been appropriate to post someone’s name on the Internet just because a secondhand source told you something bad about them? Does that count as openness?
Open vs. closed is a useful conceptual distinction, but when it comes down to specific cases, these kinds of grand narratives can mislead us. For one thing, far from the kind of siege mentality that characterized an industry watching its business model go up in smoke -- an industry that was not interested in giving away its product for free -- academics are delighted to give away their products for free, if they can figure out a way to do it. Just about every single public and nonprofit university in the country is working to develop digital platforms for education, or thinking hard about how they can. This doesn’t mean they are doing it successfully, or well; time will tell, and the proof will be in the pudding. But to imagine that Silicon Valley venture capitalists are the only people who see the potential of these technologies requires you to ignore the tremendous work that academics are currently doing to develop new ways of doing what they do. The most important predecessors to MOOCs, after all, were things like Massachusetts Institute of Technology's OpenCourseWare, designed entirely in the spirit of openness and not in search of profit.
The key difference between academics and venture capitalists, in fact, is not closed versus open but evidence versus speculation. The thing about academics is that they require evidence of success before declaring victory, while venture capitalists can afford to gamble on the odds. While Shirky can see the future revolutionizing in front of us, he is thinking like a venture capitalist when he does, betting on optimism because he can afford to lose. He doesn’t know that he’s right; he just knows that he might not be wrong. And so, like all such educational futurologists, Shirky’s case for MOOCs is all essentially defensive: he argues against the arguments against MOOCs, taking shelter in the possibility of what isn’t, yet, but which may someday be.
For example, instead of arguing that MOOCs really can provide "education of the very best sort," Shirky explicitly argues that we should not hold them to this standard. Instead of thinking in terms of quality, we should talk about access: from his perspective, the argument against MOOCs is too narrowly focused on the "18-year-old who can set aside $250k and four years" and so it neglects to address students who are not well-endowed with money and time. "Outside the elite institutions," Shirky notes, "the other 75 percent of students — over 13 million of them — are enrolled in the four thousand institutions you haven’t heard of." And while elite students will continue to attend elite institutions, "a good chunk of the four thousand institutions you haven’t heard of provide an expensive but mediocre education."
This is a very common argument from MOOC boosters, because access is a real problem. But while a "good chunk" of 13 million students are poorly served by the present arrangement, it is quite telling that his example of "expensive but mediocre education" is Kaplan and the University of Phoenix, for-profit institutions that are beloved by the same kinds of venture capitalists who are funding Udacity. He is right: For-profit education has amassed a terrible track record of failure. If you are getting a degree at a for-profit institution, you probably are paying too much for too little. But would it be any less mediocre if it were free?
Udacity’s courses are free to consumers (though not, significantly, to universities), at least for now. And Shirky is not wrong that "demand for knowledge is so enormous that good, free online materials can attract extraordinary numbers of people from all over the world." But Shirky doesn’t mean "demand" in the economic sense: demand for a free commodity is just desire until it starts to pay for the thing it wants. Since there is a lot of unmet desire for education out there, and since that desire is glad to have the thing it wants when it finds it for free, it seems all to the good that students can find courses for free. But while we should ask questions about why venture capitalists are investing so heavily in educational philanthropy, we also need to think more carefully about why there is so much unmet desire in the first place, and why so many people want education without, apparently, being able to pay for it. Why hasn’t that desire already found a way to become demand, such that it must wait until Silicon Valley venture capitalists show up, benevolently bearing the future in their arms?
The giveaway is when Shirky uses the phrase "non-elite institutions": for Shirky, there are elite institutions for elite students and there are non-elites for everyone else. The elite institutions will remain the same. No one will ever choose Udacity over Harvard or U.Va., and while elite institutions like MIT, Stanford, Princeton, and my own University of California are leaping into the online education world head first, anyone who thinks these online brands will ever compete with "the real thing" will be exactly the kind of sucker who would fork over full price for a watered-down product.
MOOCs are only better than nothing; speculation that this will someday change may be worth pursuing, but for now it remains just that: speculation. It should be no surprise that venture capital is interested in speculation. And it should be no surprise that when academics look at the actual track record, when we try to evaluate the evidence rather than the hope, we discover a great deal to be pessimistic about.
Why have we stopped aspiring to provide the real thing for everyone? That’s the interesting question, I think, but if we begin from the distinction between "elite" and "non-elite" institutions, it becomes easy to take for granted that "non-elite students" receiving cheap education is something other than giving up. It is important to note that when online education boosters talk about "access," they explicitly do not mean access to "education of the best sort"; they mean that because an institution like Udacity provides teaching for free, you can’t complain about its mediocrity. It’s not an elite institution, and it’s not for elite students. It just needs to be cheap.
Talking in terms of "access" (instead of access to what?) allows people like Shirky to overlook the elephant in the room, which is the way this country used to provide inexpensive and high-quality education to all sorts of people who couldn’t afford to go to Yale -- people like me and my parents. While state after state is defunding its public colleges and universities (and so tuition is rising while quality is declining), the vast majority of American college students are still educated in public colleges and universities, institutions that have traditionally provided very high-quality mass higher education, and which did it nearly for free barely a generation ago.
"Access" wouldn’t even be a problem if we didn’t expect mass higher education to still be available: Americans only have the kind of reverence for education that we have because the 20th century made it possible for the rising middle class to have what had previously been a mark of elite status, a college education. But the result of letting these public institutions rot on the vine is that a host of essentially parasitic institutions -- like Udacity -- are sprouting like mushrooms on the desire for education that was created by the existence of the world’s biggest and best public mass higher education system.
Shirky talks dismissively about his own education, at Yale, and recalls paying a lot of money to go to crowded lectures and then to discussion sections with underpaid graduate students. Let me counter his anecdote with my own. When I was a high school student, in Appalachian Ohio, I told my guidance counselor that I wanted to go to Harvard, and he made me understand that people from Fairland High School do not really go to Harvard. I was a dumb high school student, so I listened to him. But although both of my parents worked in West Virginia, they had moved to Ohio when I was young so that I could go to Ohio schools, and this meant that although my grades were only moderately good -- and I had never had access to Advanced Placement classes -- I was able to apply to Ohio State University, get in, afford it, and get an education that was probably better than the one that Shirky got at Yale, and certainly a heck of a lot cheaper. My parents paid my rent, but I paid my tuition myself -- with part-time jobs and $20,000 in loans -- and I didn’t have a single class in my major with more than 30 students. I had one-on-one access to all of my professors, and I took advantage of it.
It's a lot harder to do this now, of course; tuition at Ohio State is more than double what it was when I started in 1997. More important, you not only pay a lot more if you go to a school like Ohio State, you’re also a lot less likely to get in; the country’s college-age population has continued to grow, but the number of acceptance letters that public universities like OSU send out has not increased. As Mike Konczal and I have argued, this shortfall in quality higher education creates what economists call "fake supply." If you don’t get in to a college specializing in education "of the best sort" (or if your guidance counselor tells you not to apply), where do you go, if you go? You go to an online university, to Kaplan, or maybe now you try a MOOC or a public college relying on MOOCs to provide general education, as Texas now envisions. Such things are better than nothing. But "nothing" only seems like the relevant point of comparison if we pretend that public higher education doesn’t exist. And if we ignore the fact that we are actively choosing to let it cease to exist.
Beware anyone who tries to give you a link to WebMD as a replacement for seeing a real doctor.
Aaron Bady is a doctoral candidate in English literature at the University of California at Berkeley, and he writes and tweets for The New Inquiry as @zunguzungu.