While recently performing in Australia, Thom Yorke—the front man of Radiohead—was interrupted by a protester who called on him to condemn Israel’s campaign of violence in Gaza. After a hostile exchange, Yorke resumed the show with one of the band’s iconic hits, “Karma Police.” The rock star was apparently feeling intimidated.

On the dystopian album OK Computer (1997), Yorke had sung that the Karma Police were out to get us. A reimagining of Orwell’s Thought Police, they personified the totalitarian forces of the digital age that was bearing down upon us. A quarter century later, amid a more fully realized technocracy, the specter of the Karma Police feels more prescient than ever. But it’s not always clear who’s policing whom: the heckler or the rock star? One had a captive audience and a microphone. The other was escorted from the show by security.

More than leading us to wonder whether Yorke is following in the footsteps of Morrissey, the episode reflects the currents of thought and the exercises of power that have roiled American campuses for the last two years. Universities’ recent warming to AI, along with their refusal to accommodate student encampments, has suggested how the neoliberal university will manage expressions of free thought in an increasingly technocratic landscape. Suspensions, sanctions, expulsions and deportations: “This is what you get when you mess with us,” the university’s Karma Police intone, in an echo of OK Computer. Our students’ thinking will be done by a computer, OK?

The language of accommodation applies both metaphorically and literally to how we might think about the student encampments. The raids of 2024 manifest, on the one hand, that a genuine politics of liberation will remain, for now, homeless. Solidarity with Palestine will find shelter neither in the university nor elsewhere in the public sphere. As with the encampments of those who are literally homeless, the student encampments are denounced, among other things, as disruptive. In both cases, the denunciations probably say more about us than about the encamped: Above all, that we are unwilling to confront what they represent. So awful are the implications of what the encampments voice—our complicity in a genocide of children—that we do with the students as we do with the homeless. We contract police to beat them up and remove them from view.

It’s a different story when considering the disruptive arrival of AI, whose adoption has overlapped with the encampments. With the rollout of ChatGPT in November 2022, there were reservations. Two years later, they persist. But much of American higher ed has moved on. Our new consensus? “To work with AI, not around it.” With Pandora’s box open, why not learn to live with AI—accommodate it, leverage it, incorporate it in our courses and possibly even profit from it? The university where I teach awarded grants to faculty who would study how to adopt AI in our pedagogy. Newly formed committees provided primers on how to use AI. The center for teaching and learning regularly communicates resources to help navigate the use of AI in our classrooms. With the encouragement of our president, the university fast-tracked a new graduate program in artificial intelligence. We’re just barely a midsize university. One can imagine the inducements that exist elsewhere.

If universities have provided the carrots, faculty have supplied the sticks: denialism, gaslighting and shaming. AI will not substitute for thinking, we are reassured, because LLMs do not think—they parrot. That’s small consolation when they produce language slick enough to be passed off as real thought. If we voice concern that this poses a threat, we are told that our pedagogy is outdated or insufficiently creative, or that we ought to stop policing our students. My most pressing dilemma—how to assess in good conscience a piece of work that is patently fabricated—is met by bad-faith injunctions to self-delusion: “My professor hot take is that trusting students makes life so much better than assuming they’re all lying or trying [to] cheat on everything.” Ignorance is a bliss for which I often yearn, but it is generally not an advisable principle of pedagogy. We settle for self-delusion because the Thom Yorkes of the world are waiting to brand us the Karma Police. Yet in a dizzying irony, many of us say and do almost nothing about the deployment of actual police on our campuses. They have been deployed against students whose conviction, as Mario Savio once said, has led them to put their bodies upon the gears of the machine. As the gears churn on, we’ve been learning how to accommodate ideology machines that threaten to erode democracy and degrade the environment.

These issues converged for me around December 2023, as I was wrapping up a term in which I had prepared undergraduates to spend the spring in Berlin. For a reflection assignment, I asked students (a) to draft some questions they would like to pose to the Germans whom they would meet and (b) to sketch the responses they might anticipate. The use of ChatGPT was prohibited. One student, Sam, asked a fictional German interviewee to give a statement on Germans’ views of the current conflict between Israelis and Palestinians. The language of the submission—effortlessly flowing clauses with a hypercorrect but clinically dry lexicon—was a dead giveaway that the imagined response had been copied and pasted from ChatGPT. More glaring than the language, though, was the content: no mention of Oct. 7, nor of Israel’s brutal campaign in the months that followed. Bizarrely, the response ended on a note of cheerful optimism, referencing recent positive developments that inspired hope for broad peace in the region. The student was not some Panglossian optimist. This was a case of plug and play.

In a subsequent meeting, I encountered a hubris that suggested the rules of the game had changed. Sam presumed from the start that I suspected his use of ChatGPT and had prepared accordingly. The diction and syntax were not his own, he said pre-emptively, because he was trying to imagine how a German might sound when writing in English. The German’s response did not mention Oct. 7 or the subsequent violence, he explained, again pre-emptively, because he wanted to avoid discussing dark topics. By the way, he asked, did I even know how large language models worked? He could explain them to me. The condescension felt warranted—not because Sam’s arguments held water, but because we both knew the university was not equipped to cope with students who opted to subcontract their thinking. By the end of 2023, students like Sam knew we were at an inflection point in how universities maintained academic integrity.

Faculty who have begun to adopt AI in their teaching will disagree with the premise of my assessment. They will insist that their use of AI remains thoughtful, critical, cautious—and, above all, necessary. The arrow has left the bow, after all, and no matter what we say, students will use AI. We might at least (try to) dictate the terms on which they do so. And anyway, employers are already requiring that employees use AI in their work. It is their future—and ours, too, judging by much of the discourse. At root, the fatalism that grounds these arguments derives from the same force that motivates universities to silence their students: the Market.

Students protesting U.S. and Israeli policy in Gaza, after all, are not primarily protesting the violence, abhorrent though they find it. They are protesting their universities’ complicity in a market that funds it. Student activists at my institution have concentrated their efforts on uncovering the university’s entanglements with the arms industry. Their actions have been tame, yet the university has been swift to respond. Last year, students reported to me that our university’s anonymous bias reporting system had been used to target them for engaging in political speech on social media (the anonymous reporting option has since been abandoned). In the spring, campus police were called to stop students from chalking campus sidewalks with messages of Palestinian solidarity. Their act was forbidden, they learned, because they were not part of a recognized student organization.

But how could an organization that calls for an end to institutional ties with the military-industrial complex gain approval, when the student code of conduct dictates that students must “at all times behave in ways that reflect positively on the institution”? Their views were de facto prohibited. Regulations around civil and respectful speech are convenient for pettifogging students into submission (discussion of genocide could be triggering). Unsurprisingly, then, within four days of holding a lunch-hour demonstration in October 2024, students were asked to meet with two deans for having disrupted university operations. The code of conduct at my university also calls for academic integrity and truthfulness, but it isn’t clear there is the same appetite to encourage integrity of thought as there is to restrict it. In October, I participated in a campus hearing board to adjudicate a case in which a student had fabricated an essay on Holocaust remembrance. It took 11 months to get a hearing.

The institutionalization of these damages is well underway. There may be pockets of the curriculum where the adoption of AI makes sense, but it has begun to erode liberal education in ways that are far more thoroughgoing than many faculty yet know. Rather than think seriously about how and where to resist its incursion, however, universities have zeroed in on civic engagement as a more pressing threat, marshaling deans, lawyers, police and marketing teams toward eliminating it or bogging it down in a morass of policy. Even on those campuses that haven’t witnessed spectacles of violence, administrations have almost certainly updated their codes of conduct, events policies, advertising policies, external speaker policies and marketing materials. The arrival of the Trump administration will lead meek administrators to intensify these authoritarian measures.

We know that this is how the neoliberal university operates. We do not need to accept it. Our frequent avowals to teach critical thinking must be accompanied by a willingness to incur the discomfort that criticism entails. In times of emergency, Jonathan Crary writes in Scorched Earth, it becomes necessary “to insist that forms of radical refusal, rather than adaptation and resignation, are not only possible but necessary.” The time for radical critique and refusal is now. We would do well to learn from the courage of our students.

Daniel DiMassa is an associate professor of German in the Department of Humanities and Arts at Worcester Polytechnic Institute. He serves as vice president of the university’s American Association of University Professors chapter.
