In many respects, higher education in the United States – with credits awarded based on the time a student sits in a chair – remains trapped in the 19th century and has been slow to embrace technology.
Online education from traditionally accredited colleges has been available since at least 1999, but almost always at the same high tuition cost as the traditional “physical” courses. New ideas, such as tuition-free massive open online courses (MOOCs), are now emerging, but are generally not accredited.
For a true revolution to occur, regulation will need to change along with the technology. The key advance would be to establish a new private sector accrediting body, the “Modern States Accrediting Agency,” that would ensure the quality and reputation of the innovative courses, make the credits transferable into the traditional system, and be recognized by the U.S. Department of Education as an approved accreditor so that students could qualify for federal student aid.
The Department of Education began its first online education pilot program in 1999. In 2006, it allowed institutions to offer all of their courses online. However, these courses were offered by institutions accredited in the traditional way, with student enrollment in the courses kept limited, and with tuition set as high as (or even higher than) tuition for the physical alternative.
The paradigm began to shift in 2011 when Stanford University offered three of its courses online, free of charge, to any person anywhere who chose to take them.
Since then other innovators have continued to introduce MOOCs, notably led by the nation’s most respected traditional universities, such as MIT and Harvard. MIT, for example, is creating MITx, which is intended to offer a great number of MIT’s courses free of charge or nearly so, taught by MIT’s renowned faculty, and with graded assignments, tests, online discussion groups, online professor “office hours” and other quality advances.
The problem with MOOCs, though, is that there is usually no mechanism for obtaining accreditation and, in U.S. higher education, accreditation is the “coin of the realm,” which gives a degree its value. As a result, most MOOCs can offer students only a letter of completion, a pat on the head and no degree. Few other institutions or graduate schools will recognize completion of a MOOC course for credit, and employers do not know how to judge the student’s level of accomplishment.
Schools like MIT should not be forced to dilute the power of their brand by giving their regular degree to students who simply take some of their tuition-free online courses. However, it is equally inappropriate to give no value to the online learning that occurs in a MOOC, particularly if a student can complete a high-quality, rigorous course and then prove mastery of the material on a separate, proctored, certifying exam.
In the traditional system, a degree is accredited because the degree-granting institution is itself accredited by an agency recognized by the U.S. Secretary of Education. The accrediting agencies (such as the Middle States Commission on Higher Education or the New England Association of Schools and Colleges) are private sector, self-regulatory groups which, in most cases, were created nearly a century ago by the member institutions themselves. The best of these agencies were later recognized by the Department of Education and included on its list of approved accreditors.
Today students can qualify for federal financial aid only if their institution has been accredited by one of these recognized agencies, and these accreditors’ decisions control access to the more than $150 billion in federal aid paid out to students each year. The traditional accrediting agencies, which were founded long ago to serve the needs of the traditional institutions, are not well-suited to lead technological and social innovations that are alternatives to the traditional system. A few experiments with traditionally accredited MOOCs are under way.
However, for the most rapid and effective progress, America needs a new, innovation-focused accreditor, Modern States, which would also be recognized by the Department of Education and which could accredit providers of emerging technologies and ideas in order to drive down costs, drive up quality and shape federal aid programs in new and effective ways.
Unlike traditional accreditors, Modern States would be able to accredit specific courses, not just the degree-granting institution as a whole. For example, it could recognize that the freshman physics MOOC from MITx is of high quality, and then develop a widely available, proctored test for students who complete that course, similar to an SAT exam or CPA exam.
Students who complete the preapproved, tuition-free MOOC and also pass the confirmatory Modern States assessment would earn accredited course hours from Modern States itself. Enough such courses in the right scope and sequence (say physics from MITx, poetry from Harvard, theology from Notre Dame and so on) could lead to a fully accredited Modern States degree. Modern States would also approve courses and develop tests in vocational areas, in career training fields and at the two-year and community-college level, in order to serve all types of students.
The creation of Modern States could then enable a whole field of academic innovations to bloom, including blends of “bricks and clicks” and new types of federal financial aid models. For example, students might take their core lectures tuition-free and online from a nationally renowned professor in a MOOC, and then attend supplementary weekly study groups with a live professor and other students in their home towns, all at a lower overall cost than a traditional course today.
Similarly, students might take their first year of introductory courses all online for free, but then transfer to a traditional college for the last three years, lowering their total educational costs by 25 percent. Students who complete their educations at a low cost or no cost to the federal government might even be paid a federal bonus upon completing their degree and successfully entering the work force. In this way, the self-motivated Abe Lincolns of tomorrow could finish their higher educations debt-free and with cash in the bank. Meanwhile, U.S. taxpayers would save money.
Modern States could also lead the way in areas unrelated to MOOCs, such as competency-based exams and on-the-job skills training. For example, if there is a national shortage of skilled welders, and if an employer trains a worker in welding who then passes the Modern States assessment, the worker could earn a course credit in welding while the employer itself might be paid some stipend as an “educational institution of one.”
The traditional accreditors were founded by their member institutions, but – with a few exceptions – traditional institutions are not likely to be the best champions for low-cost alternatives to themselves. Modern States should be chiefly formed by a voluntary association of philanthropies and nongovernmental organizations concerned with increasing access to high-quality education while lowering its cost – groups such as the Bill and Melinda Gates Foundation, the Ford Foundation, the World Bank and so on. Employer and labor groups could join as well, as could providers of the innovative courses, student consumer groups and others.
Membership contributions could fund a staff that would develop a starting catalog of approved courses from the already-existing universe of MOOC and related offerings. The staff would then work with testing organizations to develop the independent assessments needed to prove student mastery of the material. The catalog would cover a range of academic and vocational fields, and grow and evolve over time.
The ultimate path to success for Modern States would be to keep its testing standards high and rigorous so that employers, traditional institutions and the world at large will see that the students are truly deserving of the degree credits. In this way, Modern States could function like a universal version of the CPA exam developers, who have become an accepted standard of testing and quality in the accounting field.
Once Modern States is formed, it would write criteria for granting accreditation that would be aligned with the Secretary of Education’s criteria for recognition of accrediting agencies. After applying these criteria, Modern States could petition the Department of Education for recognition. The petition would be reviewed by staff and by a federal advisory committee that advises the Secretary on whether to recognize accreditors. If the Department grants recognition, then students at the institutions and programs accredited by Modern States would be eligible to participate in federal student aid programs.
Transferability of credits to more traditionally accredited programs would be negotiated by Modern States through reciprocity agreements with other accreditors, as supported by the Department of Education. Modern States would be a private-sector organization, not a government organization. However, political leaders in both parties could help achieve educational goals by expressing clear support for the Modern States approach as outlined here.
In its best form, traditional higher education is one of America’s great treasures, and no online program is ever likely to equal the experience of four years on campus at a great school. However, the high cost and limited availability of such traditional best-in-class programs have placed them increasingly out of reach to many striving students in America and around the world. By unleashing the power of technology and social innovation, Modern States could be the key regulatory mechanism to make education more accessible and affordable, and to bring higher education more fully into the 21st century.
David Bergeron and Steven Klinsky
David Bergeron, the former acting assistant secretary for postsecondary education at the U.S. Department of Education, is vice president of postsecondary education policy at the Center for American Progress.
Steven Klinsky, a New York-based businessman and philanthropist, has been active in education reform since 1993.
My first political philosophy teacher was the great Joseph Cropsey who, when we came to a difficult problem in Plato, would sometimes exhort us.
“Courage,” he would say, knowing that we were tempted to quit, not only because Plato was a hard read but also because there was much in us, from vanity to laziness to fear, that resisted education.
Like Cropsey, Mark Edmundson thinks that education makes demands on a student’s character. In his 1997 Harper’s essay, “On the Uses of a Liberal Education: As Lite Entertainment for Bored College Students,” he retells the story of a professor who supposedly issued “a harsh two-part question. One: What book did you most dislike in the course? Two: What intellectual or characterological flaws in you does that dislike point to?” Edmundson admits that the question is heavy-handed but approves of the idea that teachers summon students to an encounter they may want to dodge. Students so challenged may skip the reading, or close themselves to what they read, or engage in other kinds of cheating.
I use “cheating” in the extended sense we use when we say our students are “cheating themselves.” James Lang, for the most part, means it more narrowly in Cheating Lessons: Learning from Academic Dishonesty. But I thought of Cropsey and Edmundson as I read Cheating Lessons because Lang shies away from the question of character. Instead, his book is about helping “faculty members to respond more effectively to academic dishonesty by modifying the learning environments they [have] constructed.”
Lang, an associate professor of English at Assumption College, advances a “theory about how specific features of a learning environment can play an important role in determining whether or not students cheat.” Students who think learning is a means to an end take shortcuts. So a learning environment discourages cheating when it fosters “intrinsic motivation in our students,” rather than “relying on extrinsic motivators such as grades.”
Students encouraged to outperform each other on high-stakes assessments feel pressure to cheat. So a learning environment discourages cheating when it invites students to attain “learning objectives” and permits them to show that attainment in a variety of ways, with low-stakes assessments preparing the way for high-stakes assessments. Students who think assignments are impossible will find it easy to justify cheating. So a learning environment discourages cheating when it instills a “strong but realistic sense of self-efficacy.”
But Lang does not want teachers to think of themselves as academic honesty cops. The most “exciting discovery [he] made while writing” Cheating Lessons is this: “environments which reduce the incentive and opportunity to cheat are the very ones that, according to the most current information we have about how human beings learn, will lead to greater and deeper learning.”
Lang made this discovery, he writes, by looking at the “problem of cheating through the lens of cognitive theory.” For example, a teacher may think that giving frequent low-stakes assessments is a distraction from learning. Lang himself thought so until he found out “how little [he] knew about the basic workings of the brain.” The well-documented “testing effect” suggests that such assessments are not merely measures of learning but an effective means of helping students retain what they have learned.
Yet I balk at the very term “learning environment,” with its faint odor of antiseptic. Educators may use the term out of humility, placing themselves in the background and seeking not so much to teach as to place students in a situation in which they can learn. But the idea of a teacher as a constructor and modifier of learning environments merely shifts the teacher’s role from the front of the room to inside the control room, flipping switches and twisting dials, modifying conditions in the same way one might modify “the conditions of a laboratory,” in accordance with the latest learning theory. It is not obvious that this approach is humbler than that of Cropsey, who, while he stood in front of the room, nonetheless was visibly engaged in the same set of difficult and fascinating problems in which he sought to engage us. If we think of our students as subjects in our laboratory, to be manipulated and nudged toward desirable behaviors, how can we develop in them the qualities of character they will need to govern themselves in environments we do not control?
To be fair, Lang, who offers several exemplars of great teaching, is well aware that teachers are models, or even coaches, not just environmental technicians. But even when he profiles a teacher, Jim Hoyle, who plainly exemplifies for students both the joys and demands of work in his field, Lang is interested in how “the ways in which we communicate with students can also help them develop an appropriately gauged sense of self-efficacy.”
Hoyle, who has written his own book on teaching, indicates that there is something more going on when he describes his own role model, Vince Lombardi. Lombardi exemplified not only a way of communicating with athletes but a message, about “courage,” “determination,” “dedication,” and “sacrifice,” that Hoyle thinks “excellent ... for both teachers and students.”
Lang’s target readers “might feel uncertain about their ability to cultivate virtues in their students.” Lang himself reminds the reader that “you are not an ethics professor” and warns against haranguing. I assume Hoyle, like most sensible people, takes for granted neither his own virtues nor his capacity to foster them in others, and he does not, on Lang’s account, do much haranguing.
But Hoyle also seems to think that he need not be an American Philosophical Association certified moral expert to try to impart to students, as well as the readers of his book on teaching, the virtues that attend the best learning and teaching. The cultivation of such virtues may be a more effective spur to learning and antidote to cheating in its narrow and broad senses than the strategies, all of them useful, on which Lang focuses. As Peter Lawler has recently argued, teachers may do well to recall the “Aristotelian point” that “intellectual virtue depends on moral virtue.”
Admittedly, I cannot appeal to the social science literature on cheating that Lang has acquainted himself with to support that last set of claims. And I agree with him that teachers and administrators must not ignore what experiments can tell us about learning. It would be foolish to spend a dime on an academic integrity orientation before you have processed Dan Ariely’s finding that Princeton’s academic integrity orientation showed absolutely no effect on the likelihood that Princeton students would cheat on a math test two weeks after it ended. It would be foolish to ignore the results of the MIT experiment with a “studio model” for teaching physics, which dramatically reduced both cheating and the rate of failure in the course.
But Lang oversells what social science can tell us at present. For example, to support his argument that “performance oriented classrooms,” which emphasize “grades and competition among students,” encourage cheating, Lang cites a paper by Eric Anderman and Tamara Murdock. But Anderman and Murdock are more cautious than Lang: while “students report cheating more if they perceive the presence of a performance goal structure,” two studies find that “goal structure appears to be unrelated to cheating when a more objective method of assessing context is utilized.” The “extent to which teachers can reduce cheating by implementing” practices of the sort Lang recommends “is still unclear.”
Consider also Lang’s doubt that “hard punishments deter potential cheaters.” While Lang supports this claim in part by citing the work of Donald McCabe, Kenneth Butterfield, and Linda Trevino, they themselves have concluded, drawing on their own and others’ research, that “academic dishonesty is negatively associated with the perceived certainty of being reported and the perceived severity of penalties.” Similarly, Anderman and Murdock, in the same paper we have been considering, assume that “[f]ears of being caught and the perceived severity of the consequences for being caught are two of the most important deterrents to potential cheaters.”
Lang is still right to emphasize that “we have no incontrovertible evidence that harsh penalties deter cheating.” Moreover, I agree with him that an anti-cheating regime that focuses primarily on threats is unlikely to succeed. On the other hand, there is hardly a groundswell of support for harsh punishments. McCabe and his co-authors argue that the opposite is true: many faculty members have concluded that confronting cheating isn’t worth the trouble. How, they ask, “can we expect students to believe that cheating is a serious problem when faculty and others are reluctant to deal with cheaters ... when cheating receives minor consequences and, worst of all, when faculty look the other way?”
However that may be, Lang, as his discussion of the performance classroom shows, does not typically insist that evidence be incontrovertible before one acts on it. It is fine to set a high bar for accepting and acting on the results of social science research. But you can’t set a higher bar for approaches you are already inclined to disagree with than you set for approaches you are otherwise inclined to favor.
Jonathan Marks, author of Perfection and Disharmony in the Thought of Jean-Jacques Rousseau (Cambridge University Press, 2005), is associate professor of politics at Ursinus College. He tweets at twitter.com/marksjo1.
A teaching assistant at the University of Iowa accidentally e-mailed naked photographs of herself and a man to students. She had intended to send an attachment with answers to some questions on a problem set. As news of the e-mail embarrassment spread on social media, the university asked those who received the e-mail to delete the message and not to share the files with anyone else. The incident was “inappropriate” and the university will look into it and take appropriate actions under its policies and procedures, a spokesman said. He said that the teaching assistant regrets what happened.
Several decades ago – long before the level of technological sophistication we experience today – I was part of a movement begun by the late Julian Stanley, a psychology professor, and the Johns Hopkins University Center for Talented Youth (CTY) to save academically talented youth from boredom in the schools. The most controversial instrument for rescuing them was a pedagogical practice called, rather prosaically, "Diagnostic Testing Followed by Prescriptive Instruction" or, in shorthand, “DT>PI.” It was principally applied to the pre-collegiate mathematics curriculum and relied on just a few key assumptions and practices:
1. Students already know something about a subject before they formally study it.
2. Test students before a course begins and then instruct them only on what they don’t know.
3. Test students again when you as the instructor and they as learners believe they have competency in a subject.
4. Move immediately to the next level of instruction.
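The four steps above amount to a simple assess-teach-reassess loop. As a rough illustration only, the cycle can be sketched in code; the topic names, mastery threshold and scoring scheme below are hypothetical inventions, not part of the original CTY program:

```python
# A minimal sketch of the DT>PI cycle. All names (topics, threshold,
# the effect of instruction) are illustrative assumptions.

MASTERY_THRESHOLD = 0.85  # assumed cutoff for "competency" on an assessment


def diagnostic_test(scores, topics):
    """Step 1: return only the topics the student has NOT yet mastered."""
    return [t for t in topics if scores.get(t, 0.0) < MASTERY_THRESHOLD]


def dt_pi_cycle(scores, course_topics):
    """Steps 2-4: instruct only on gaps, re-test, and report readiness
    to move immediately to the next level of instruction."""
    for topic in diagnostic_test(scores, course_topics):
        # Placeholder for prescriptive instruction on this one topic;
        # here we simply assume instruction produces mastery.
        scores[topic] = 1.0
    # Step 3: test again; step 4: advance once nothing remains.
    return diagnostic_test(scores, course_topics) == []


# Usage: a student who already knows fractions skips straight to the gaps.
scores = {"fractions": 0.95, "ratios": 0.40, "percents": 0.10}
ready = dt_pi_cycle(scores, ["fractions", "ratios", "percents"])
```

The point of the sketch is the ordering: assessment comes first, instruction covers only the diagnosed gaps, and advancement is triggered by demonstrated competency rather than elapsed seat time.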
The DT>PI model was placed in a more generous context with the adaptation of Professor Hal Robertson’s (University of Washington) notion of the Optimal Match. Simply stated, the pace and level of instruction should optimally match an individual student’s assessed abilities — with the caveat that those assessing that talent would always try to stretch a student beyond his comfort zone. The Optimal Match theoretically could apply to all students at any level of education.
When I used to speak publicly in a wide variety of settings – at colleges and universities, community colleges, schools, education association meetings, parent gatherings – about what I thought to be the commonsensical notions of DT>PI and the Optimal Match, the reactions were pronounced and fiercely negative. My colleagues and I were accused of presenting educators with the dissolution of the structured classroom as we knew it then; forcing students unjustifiably to proceed educationally without sufficient instructional guidance; destroying the communal, cooperative imperative of an American education; and producing social misfits, because students would finish academic coursework ahead of the schedule established (rather arbitrarily, I might add) by educational professionals for all students of one age at one time. Parents often joined with educators to decry such imagined alienation and damage to a child’s personality.
And then there was a change in 2013.
There are now two closely related pedagogies -- adaptive learning and competency-based learning -- that are embraced by a growing number in higher education as a viable component of educational reform. The Bill and Melinda Gates Foundation is awarding grants to 18 institutions to experiment with 10 different adaptive learning platforms, and President Obama has expressed support for these innovations and urged easing of regulations to make that possible.
In general, adaptive learning uses data-driven information to design coursework that permits students to proceed educationally at their appropriate pace and level. And competency-based learning allows students to be free of "seat time" and to progress flexibly as they demonstrate mastery of academic content.
These definitions, when combined, delineate precisely the key components of DT>PI and that of numerous other experiments in self-paced learning over the last few decades. But now, while the naysayers are still out there, an increasing number of for-profits and nonprofits are turning to adaptive and competency-based learning as a component of the next stage of reform in American education.
Why now? Something must have changed in society to accept self-paced, individualized learning when only decades ago it was roundly rejected on pedagogical, ethical and psychological grounds. Those concerns are clearly not as inviolate as they were only years ago. Answering this question might well provide education reformers with insight into what is now possible — even expected -- from students for the learning platforms of the future.
There are at least three reasons why self-paced learning might be more popular now than it was only a few years ago: technological advances, financial exigency and a new self-profile of the learner.
Advances in technology that rely on data mining and data analytics – predicting an individual’s future learning behavior based on analysis of thousands of earlier learners – now permit institutions to track, direct, customize, evaluate and advise student learning at instantaneous speeds. What in previous decades seemed an impossible task for a teacher or professor to manage in a single course – diverse learning points among students – is now at least technically feasible.
Many institutions are intent on finding new strategies that will reduce their cost of providing an education. Adaptive and competency-based learning are thought to be such "disruptive" opportunities, although how the accompanying data-driven, all-knowing, anticipating, high-touch technologies will dramatically reduce both cost and price (tuition) remains elusive.
And, lastly, students have perhaps finally realized the expectation of the self-esteem movement that has dominated instruction in our nation’s schools for several decades. Students might well now believe that they are the center of all activity — to include education — and that they are both the sole focus and the drivers of learning. All instructional effort exists for the purpose of fulfilling their desires.
This "power shift" makes learners, individually — not teachers or professors -- aggregators of knowledge by and for themselves. Any approach to education that places them at the center of learning activity accommodates their perspective on education. Adaptive and competency-based learning accomplish this masterfully. Self-paced, individually adjusted instruction, enhanced by “big data” technologies that guide student progress “lockstep” in a course and beyond, eliminates distracting elements to the individual control of knowledge. Primary among those distractions for students are faculty with their pesky, seemingly inefficient and irrelevant questions.
And thus, in 2013, what was not acceptable several decades ago is now thought a solution to crisis in American education. A combination of new technologies, financial emergency and a shift in who is at the center and in control of learning has caused this to occur.
But all is not settled. The changing circumstances introduce concerns that did not exist decades ago, when students were not the arbiters of their own learning, when self-paced instruction was thought a solution not for all students in American education but only for the academically talented, and when big data did not exist to mine and anticipate every move in student learning.
A defining element of DT>PI was that students must not just study the next logical step in a course; through the exhortations of a teacher or professor, they must attempt to go beyond what was thought statistically possible — they must stretch themselves intellectually at every point. Professor Stanley constantly quoted the poet Robert Browning’s line that one’s reach must always exceed one’s grasp.
Questions remain as to whether, in the absence of a live instructor exhorting a student who is not necessarily academically acute and motivated, students will extend their reach or settle for statistically generated achievement delivered by an electronic adviser (I am referring here to traditional-aged undergraduates, not working adults, who are propelled by substantial motivational factors). Such an absence of exhortation could be extremely damaging to the majority of American students, who often do not naturally attempt to achieve to the levels of which they are capable without personal mentorship.
And one traces in those who are enthralled with "big data" and "data analytics" for solving the maladies of American education a disturbing belief: that students will achieve, through data-enhanced technologies, the perfectibility of education – perhaps of life itself – by eliminating all resistance, frustration, indecision, trial and error, chance and expenditure of time. For example, Harvard University social scientist and university professor Gary King is quoted in a May 20, 2013, New Yorker article entitled "Laptop U" as saying, “With enough data over a long period, you could crunch inputs and probabilities and tell students, with a high degree of accuracy, exactly which choices and turns to make to get where they wanted to go in life."
And yet, there is growing commentary that it is precisely the absence of frustration, resistance and associated imperfections in a so-called “Me Generation” and its aftermath that is compromising contemporary students' learning and preparation for life. By blithely accepting students’ assertions of self-determination without legitimate maturing experiences (which will include failure and self-doubt), and by arranging learning electronically so that students will make no wrong decisions, educators grant them little ability to deal with inevitable disappointment and frustration in life.
Students are educated without gaining resilience, and that is hardly an education of which a nation can be proud or secure, regardless of the utopian promises of the big data enthusiasts. All this reminds me of a call I received decades ago from an entrepreneur who wanted me to comment on his idea of developing a school basketball court that would have the hoop move electronically with the ball so that no student would ever miss a shot and thus, in his words, "suffer humiliation."
So while I am delighted that self-paced education, in the form of adaptive and competency-based learning, is finally a more generally discussed component of reform in American education, I urge those advancing it to think long and hard about some of the humanly damaging consequences of learning platforms so perfected by technology that students are offered a Faustian bargain – the comfort of resistance-free and frustration-free learning in exchange for the ultimate loss of the resilience needed for a satisfying life after schooling.
William G. Durden is president emeritus and professor of liberal arts at Dickinson College, and operating partner at Sterling Partners, a private equity company.
In all those years I was pursuing a Ph.D. in religious studies, the question of what my profession really stood for rarely came up in conversation with fellow academics, save for occasional moments when the position of the humanities in higher education came under criticism in public discourse. When such moments passed, it was again simply assumed that anyone entering a doctoral program in the humanities knowingly signed on to a traditional career of specialized research and teaching.
But the closer I got to receiving that doctorate, the less certain I became that this was a meaningful goal. I was surrounded by undergraduates who were rich, well-meaning, and largely apathetic to what I learned and taught. I saw my teachers and peers struggle against the tide of general indifference aimed at our discipline and succumb to unhappiness or cynicism. It was heartbreaking.
Fearing that I no longer knew why I studied religion or the humanities at large, I left sunny California for a teaching job at the Asian University for Women, in Chittagong, Bangladesh. My new students came from 12 different countries, and many of them had been brought up in deeply religious households, representing nearly all traditions practiced throughout Asia. They, however, knew about religion only what they had heard from priests, monks, or imams, and did not understand what it meant to study religion from an academic point of view. And that so many of them came from disadvantaged backgrounds convinced me that this position would give me a sense of purpose.
I arrived in Bangladesh prepared to teach an introductory course on the history of Asian religions. But what was meant to be a straightforward comparison of religious traditions around the region quickly slipped from my control and morphed into a terrible mess. I remember an early lesson: When I suggested during a class on religious pilgrimage that a visit to a Muslim saint’s shrine had the potential to constitute worship, it incited a near-riot.
Several Muslim students immediately protested that I was suggesting heresy, citing a Quranic injunction that only Allah should be revered. What I had intended was to point out how similar tension existed in Buddhism over circumambulation of a stupa — an earthen mound containing the relics of an eminent religious figure — since that act could be seen as both remembrance of the deceased’s worthy deeds and veneration of the person. But instead of provoking a thoughtful discussion, my idea of comparative religious studies seemed only to strike students as blasphemous.
Even more memorable, and comical in hindsight, was being urged by the same Muslim students in my class to choose one version of Islam from among all its sectarian and national variations and declare it the best. Whereas Palestinians pointed to the "bad Arabic" used in the signage of one local site as evidence of Islam's degeneration in South Asia, a Pakistani presented Afghans as misguided believers because, she claimed, they probably never read the entire Quran. While Bangladeshis counseled me to ignore Pakistanis from the minority Ismaili sect, who claim that God is accessible through all religions, Bangladeshis themselves were ridiculed by other students for not knowing whether they were Sunni or Shi'a, the two main branches of Islam. In the midst of all this, my call to accept these various manifestations of Islam as intriguing theological propositions went unheeded.
With my early enthusiasm and amusement depleted, I was ready to declare neutral instruction of religion in Bangladesh impossible. But over the course of the semester I could discern one positive effect of our classroom exercise: students' increasing skepticism toward received wisdom. In becoming comfortable with challenging my explanations and debating competing religious ideas, students came to perceive any view toward religion as more an argument than an indisputable fact. They no longer accepted a truth claim at face value, but instead analyzed its underlying logic in order to evaluate the merit of the argument. They expressed confidence in the notion that a religion could be understood in multiple ways. And all the more remarkable was their implicit decision over time to position themselves as rational thinkers and to define their religions for themselves.
An illustrative encounter took place at the shrine of the city’s most prominent Muslim saint. I, being a man, was the only one among our group to be allowed into the space. My students, the keeper of the door said, could be "impure" — menstruating — and were forbidden to enter. Instead of backing down as the local custom expected, the students ganged up on the sole guard and began a lengthy exposition on the meaning of female impurity in Islam. First they argued that a woman was impure only when she was menstruating and not at other times; they then invoked Allah as the sole witness to their cyclical impurity, a fact the guard could not be privy to and thus should not be able to use against them; and finally they made the case that if other Muslim countries left it up to individual women to decide whether to visit a mosque, it was not up to a Bangladeshi guard to create a different rule concerning entry. Besieged by a half-dozen self-styled female theologians of Islam, the man cowered, and withdrew his ban.
I was incredibly, indescribably proud of them.
Equally poignant was coming face to face with a student who asked me to interpret the will of Allah. Emanating the kind of glow only the truly faithful seem to possess, she sat herself down in my office, fixed the hijab around her round alabaster face, and quietly but measuredly confessed her crime: She had taken to praying at a Hindu temple because most local mosques did not have space for women, and she was both puzzled and elated that even in a non-Islamic space she could still sense the same divine presence she had been familiar with all her life as Allah. She asked for my guidance in resolving her crisis of faith. If other Muslims knew about her routine excursions to a Hindu temple, she would be branded an apostate, but did I think that her instinct was right, and that perhaps it was possible for Allah to communicate his existence through a temple belonging to another religion?
In the privacy of my office, I felt honored by her question. I had lectured on that very topic just before this meeting, arguing that sacred space was not the monopoly of any one religion, but could be seen as a construct contingent upon the presence of several key characteristics. This simple idea, which scholars often take for granted, had struck her as a novel but convincing explanation for her visceral experience of the Islamic divine inside a Hindu holy space. Though she had come asking for my approval of her newly found conviction, it was clear that she did not need anyone’s blessing to claim redemption. Humanistic learning had already provided her with a framework under which her religious experience could be made meaningful and righteous, regardless of what others might say.
And thanks to her and other students, I could at last define my own discipline with a confidence I had until then lacked: The humanities is not just about disseminating facts or teaching interpretive skills or making a living; it is about taking a very public stance that above the specifics of widely divergent human ideas exist more important, universally applicable ideals of truth and freedom. In acknowledging this I was supremely grateful for the rare privilege I enjoyed as a teacher, having heard friends and colleagues elsewhere bemoan the difficulty of finding a meaningful career as humanists in a world constantly questioning the value of our discipline. I was humbled to see, by moving to Bangladesh, that humanistic learning was not as dispensable as many charge.
But before I could fully savor the discovery that what I did actually mattered, my faith in the humanities was again put to a test when a major scandal befell my institution. I knew that as a member of this community I had to critique what was happening after all my posturing before students about the importance of seeking truth. If I remained silent, it would amount to a betrayal of my students and a discredit to my recent conclusion that humanistic endeavor is meant to make us not only better thinkers, but also more empowered and virtuous human beings.
So it was all the more crushing to be told to say nothing by the people in my very profession, whose purpose I thought I had finally ascertained. In private chats my friends and mentors in academe saw only the urgent need for me to extricate myself for the sake of my career, but had little to say about how to address the situation. Several of my colleagues on the faculty, though wonderful as individuals, demurred from taking a stance for fear of being targeted by the administration for retribution or losing the professional and financial benefits they enjoyed. And the worst blow, more so than the scandal itself, was consulting the one man I respected more than anybody else, a brilliant tenured scholar who chairs his own department at a research university in North America, and receiving this one-liner:
"My advice would be to leave it alone."
It was simultaneously flummoxing and devastating to hear a humanist say that when called to think about the real-life implications of our discipline, we should resort to inaction. And soon it enraged me that the same people who decry the dismantling of traditional academe under market pressure and changing attitudes toward higher education could be so indifferent, thereby silently but surely contributing to the collapse of humanists’ already tenuous legitimacy as public intellectuals.
While my kind did nothing of consequence, it was the students — the same students whom I had once dismissed as incapable of intellectual growth — who tried to speak up at the risk of jeopardizing the only educational opportunity they had. They approached the governing boards, the administration, and the faculty to hold an official dialogue. They considered staging a street protest. And finally, they gave up and succumbed to cynicism about higher education and the world, seeing many of their professors do nothing to live by the principles taught in class, and recognizing the humanities as exquisitely crafted words utterly devoid of substance.
As my feelings about my discipline shifted from profound grief to ecstatic revelation to acute disappointment, I was able to recall a sentiment expressed by one of my professors, who himself might not remember it after all these years. Once upon a time we sat sipping espresso on a verdant lawn not far from the main library, and he mused that he never understood why young people no longer seemed to feel outrage at the sight of injustice. He is a product of a generation that once rampaged through campuses and braved oppression by the Man. On first hearing his indictment, I was embarrassed to have failed the moral standard established by the older generation of scholars like him. But now I see that it is not just young people but much of our discipline, both young and old, that at present suffers from moral inertia. With only a few exceptions, the humanists I know do not consider enactment of virtue to be their primary professional objective, whether because of the more important business of knowledge production or the material exigencies of life. And I can only conclude, with no small amount of sadness, that most humanists are not, nor do they care to be, exemplary human beings.
Maybe I should move on, as did a friend and former academic who believes that the only people we can trust to stand on principle are "holy men, artists, poets, and hobos," because yes, it is true that humanists should not be confused with saints. But the humanities will always appear irrelevant as long as its practitioners refrain from demonstrating a tangible link between what they preach and how they behave. In light of the current academic penchant for blaming others for undoing the humanities, it must be said that humanists as a collective should look at themselves first, and feel shame that there is so much they can say — need to say — about the world, but that they say so little at their own expense.
After a year and a half in Bangladesh, I do not doubt any longer that the humanities matters, but now I know that the discipline’s raison d’être dies at the hands of those humanists who do not deserve their name.
Se-Woong Koo earned his Ph.D. in religious studies from Stanford University in 2011. He currently serves as Rice Family Foundation Visiting Fellow and Lecturer at Yale University.