In the winter of 2020, while COVID-19 was just beginning to crawl its way across the globe, there was a palpable sense in the United States that the utter catastrophe COVID has turned out to be, with more than one million dead Americans left in its wake (and counting), simply would not happen here. Sure, the front pages of newspapers showed Chinese officials in white hazmat suits carrying body bags, but we’d seen such images before with SARS. Swine flu felt like much ado about nothing, and we’re always hearing about some sort of bird flu that never quite takes flight. For many, the endless calamities we had forever read about in the news had never actually reached out of our screens and grabbed us by the throat. But then, a couple of months later, as if a master switch at the center of the Earth had suddenly been flipped, we found ourselves in an entirely new world. Years later, we have never been quite the same.
Today we are facing a new sort of plague, one that threatens our minds more than our bodies. ChatGPT, the artificial intelligence chatbot that can write college-level essays, is going viral. The New York Times reports that the hashtag #chatgpt has racked up well over half a billion views on TikTok. A lecturer at an Australian university found that a fifth of her students had already used ChatGPT on their exams. Scores of Stanford University students reportedly used it on their fall 2022 final exams, mere weeks after its release. A critical mass, a superspreader event, is clearly forming. Yet as in the early days of COVID, most educators have yet to grasp the stark reality of the tsunami about to hit the educational system. Headlines warning about ChatGPT have populated the news cycle daily for more than a month now, but few educators have yet felt the brunt of this viral sensation directly.
What winter 2020 was for COVID, winter 2023 is for ChatGPT. These are like the early days of the pandemic, when we thought we could stave it off with an abundance of hand sanitizer and toilet paper. We realize a calamity may be about to wash upon our shores, but we still have our heads in the sand. We think this won’t really affect us, that we can avoid making any major changes to the way we’ve always done things. But soon the first crop of assessments will come back, and educators will begin to notice a change, more so with each passing week. Worried that the first cases of GPT may have popped up in their classrooms, they will run their students’ submissions through one of the new GPT detectors that have emerged online. To their shock and dismay, they will find that their classroom has tested positive for GPT.
In these early days of the GPT spread, we are largely defenseless against this novel threat to human intelligence and academic integrity. A return to handwritten and oral in-class assignments—a lockdown response—may be the only immediate effective solution as we wait for more robust protections to arise. But we are assured they are on the way. As with COVID, breakthrough remedies to protect against GPT are being developed at breakneck speed. Several GPT detectors are already available in beta form: we had COVID zero and now we have GPTZero. Turnitin, the main company employed by colleges to detect plagiarism in online student submissions, plans to have its own GPT detector incorporated into its software soon.
But as with those brief glory days when we received our second doses of the Pfizer and Moderna shots and breathed a false sigh of relief that the bad dream was finally over, the reprieve may not last. The efficacy of these GPT detectors may soon drop as a stronger mutation of the GPT virus emerges just around the corner: GPT-4, the Delta of GPT. And so the detectors will be updated to protect against GPT-4 and the alphabet of GPT variants to come, but their efficacy may keep weakening, and the mutations will keep coming. Unlike with COVID, the new variants of GPT will only keep getting stronger, even exponentially so.
Sooner or later, after GPT washes over the world in wave after wave of new and more powerful forms, we will learn to live with this digital plague just as we are learning to live with that physical virus that has so disrupted our lives. But we are learning to live with COVID because we have gained a certain mastery over it. Learning to live with GPT, on the other hand, might mean something quite different. Rather than gaining mastery over the virus, it may be that it gains mastery over us.
Today our students are largely back from the COVID Zoom nightmare and once again interacting with one another in person. But will we ever come back from GPT? Will we develop a resistance to this new plague in our classrooms, or will we simply throw our hands up in defeat, embrace our AI overlords and, in doing so, throw out our ability to write, to create and to think critically for ourselves? Will learning to live with GPT mean submission? Only one thing is for sure: like COVID, GPT is not going away.
So what should we do now that ChatGPT has been let loose upon the student body? Rather than succumb to hopelessness, I offer a set of brief, concrete suggestions below:
- If you haven’t done so already, try out ChatGPT for yourself. It’s free (though currently fairly bogged down). Plug in your take-home exam questions. Plug in your writing prompts. Work with it as an editor would, rather than simply accepting the first response it produces; if you prefer to experiment in code instead of the browser, see the sketch after this list. If you haven’t yet done this, you probably do not fully comprehend the power of this AI. You have to see it for yourself to grasp what we are facing. Then you have to realize that this is only the very beginning and that AI language models will grow in power exponentially.
- Universities should immediately designate an AI task force, ideally at the level of each college or possibly even each department. While many universities are starting to discuss how to handle the AI revolution, this cannot be a one-off conversation, but rather needs to be an ongoing dialogue, as the AI landscape will be constantly evolving.
- Place a stronger focus on the classroom and on human interaction. Relationship skills and constructive dialogue will likely matter more for the future of humanity than many of the skills students are learning today, which will soon be automated away. I would also suggest banning technology in the classroom (with justified exceptions) in order to foster a relationship-rich educational environment.
- Incorporate or reincorporate in-class assignments, both handwritten and, possibly, oral.
- Build internet-disabled computer labs on campus so instructors can assign in-class written essays and exams without relying on handwritten work, which is often hard for instructors to read and cumbersome for students to produce. Since these machines need little more than word processing, they could be quite cheap, so this need not be a huge expenditure.
- Develop new courses specifically designed around working with GPTs and other AI text generators (e.g., Medical Diagnosis With GPT). Students should gain familiarity with, and even expertise in, these programs, since such skills will likely be in demand in the job market. Another option is to make specific AI writing or editing assignments a portion of one’s class. But simply allowing AI text generators to be a standard student resource in every class or for every assignment will ultimately outsource to machines the ability to think critically, or even to possess basic knowledge of the subject matter at hand. That would be a failure of a core function of the university: to produce a knowledgeable society capable of critical thought and reflection.
- Develop standard university protocols for dealing with algorithmically plagiarized work flagged by GPT detectors. While these detectors will likely be far from a silver bullet, and are trivially easy to evade at the moment, they will still provide some level of “protection” against GPT. But unlike with current plagiarism detectors from companies like Turnitin, there will be no hard evidence of cheating, since AI generators produce original text rather than copying directly from prior sources. The “evidence” of misconduct will instead be the detector’s verdict that the student’s work is “likely” algorithmically generated. It is rather Kafkaesque to penalize a student for cheating because “the machine says you’re guilty with a 98.5 percent probability.” Much more will need to be done before a student can fairly be accused of AI-related academic misconduct. On that note, university academic misconduct policies should be updated to cover the misuse of AI and to specify what counts as misuse.
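For instructors who would rather run the experiment suggested in the first item above from code instead of the browser, here is a minimal sketch, not a prescribed workflow: it assumes you have an OpenAI API key (set as the OPENAI_API_KEY environment variable) and the `openai` Python package installed, and the model name and sample prompt are illustrative placeholders. The free web interface accomplishes the same thing.

```python
# Minimal sketch: send one of your own exam prompts to a ChatGPT-class model
# via the OpenAI API. Assumes `pip install openai` and an OPENAI_API_KEY
# environment variable. The model name and prompt below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Replace with one of your own take-home questions or writing prompts.
exam_prompt = (
    "In roughly 500 words, compare and contrast two competing explanations "
    "of the 2008 financial crisis, citing at least two course readings."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": exam_prompt}],
)

print(response.choices[0].message.content)
```

Grading a few of these machine-written answers against your own rubric is often the quickest way to appreciate just how capable these systems already are.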
Now, where there is crisis, there is opportunity. The crisis of COVID, for example, has led to major societal restructuring and has brought about fairly revolutionary changes to the economy and workforce. Some of these changes are for the better and some for the worse. Similarly, while ChatGPT may give us an opportunity to reflect and make substantial changes to how we approach teaching and assessment, we should not pretend that ChatGPT’s arrival is not a major crisis or that any of us as educators can avoid it. Rather, we are at a pivotal moment in the history of education. How we respond could have a resounding impact for decades to come. I recommend we do all we can as educators to cultivate the powers of the human mind in the face of this novel threat to our intelligence.