
ChatGPT usage declined 10 percent in June, sparking a flurry of speculation that the bloom is off the rose for the chat bot. But that analysis overlooked what’s probably obvious to most academics: the spring semester ended in May.

So, what should we expect this fall when our students log back on? According to Ethan Mollick, we are facing a “homework apocalypse.” Despite that eye-catching title, Mollick’s essay offers a sober, thoughtful discussion of how AI may affect academic assessment.

Other writers are less measured. Inara Scott declares, “Yes, We Are in a (ChatGPT) Crisis!” Sounding the siren of panic, Scott calls ChatGPT an existential crisis for the university. Expressing her shock at faculty members’ apparent lack of concern about ChatGPT’s impact on education, Scott declares, “This type of apathy is unacceptable. Our students deserve better, and frankly, the entire world needs us to care.”

This kind of catastrophic thinking has become all too common in the academic press and department meetings. Joseph M. Keegin, for example, calls ChatGPT “a plagiarism machine.” Yet articles like Scott’s and Keegin’s rarely offer substantive evidence. In Keegin’s case, we’re told that some instructors reported “encountering student work … that smelled strongly of AI generation.” A strong smell is not evidence of widespread practice. Yet on the basis of this, Keegin demands that administrators create a strong anti–AI plagiarism policy, apparently overlooking the fact that plagiarism policies should be the purview of faculty.

I’m not naïve about students’ willingness to cheat, especially given the pressures they are under. And it may well be that students are using the chat bot extensively. But faculty have an ethical obligation to know whether students are using AI, and how and why they are using it, before making dramatic changes to their curricula. Beyond the ethical obligation, it’s just plain foolish to make major changes to our curricula—let alone redesign the structure of the university, as Scott calls for—without any concrete data about how students are engaging with AI. In one of the more perceptive articles on AI and education, Andrea L. Guzman points out that students themselves will have a wide range of responses to AI, and we shouldn’t make the mistake of thinking they will all rush right out and embrace the technology.

Guzman’s portrait of the varied nature of student engagement with ChatGPT matches my own experience last semester, when I saw no sudden influx of writing free of errors or garbled syntax. In my online Introduction to American Literature course, the average grade on the first formal essay was about a B-minus, which is typical, going back many years, for first essay assignments. My students’ formal writing and discussion posts were still filled with as many comma errors, dangling modifiers, cohesion problems, missing apostrophes and capitalization errors as they’ve always been, which is exactly what I would expect in a class of students wrestling with new intellectual concepts and unfamiliar forms of writing. And this was a class in which I introduced AI and discussed with students the ethics of using it in their writing.

This summer, I taught an online course on grammar, which I also taught in the spring. In March, ChatGPT struggled to accurately complete the exercises I fed it, but when I resubmitted those exercises in July, I noticed a marked improvement in the bot’s ability to identify the different components of sentences. My students could easily have been using ChatGPT to complete the assignments. But their work looked the same as it did in the spring, or in the many iterations of this course that I taught before ChatGPT arrived.

Yet there are students who claim that ChatGPT usage is widespread. Owen Kichizo Terry, an undergraduate student at Columbia University, paints a portrait of students wholeheartedly using ChatGPT to sidestep the need for critical thinking. Perhaps, but we shouldn’t use Terry’s description of how students are using AI as a representation of what’s happening across campuses in America. It might be possible for the average student at Columbia to simply rewrite what ChatGPT produces to make it “flow better,” but for many college students, revision is difficult, and blending documents with distinct voices and styles into a cohesive whole is a daunting task. Writing instructors have long known that students are reluctant to revise for a number of complex reasons, ranging from a weak understanding of the subject to a lack of a clear argument to an inability to imagine a particular sentence in another syntactical form. When ChatGPT spits out a response, all those problems are still there.

Further, while I don’t know the actual assignment Terry was given, the question he asked ChatGPT—“Write a six-page close reading of The Iliad”—is one most veteran instructors would recognize as so broadly constructed as to invite plagiarism. And the process Terry describes differs only in degree from that of a student surfing SparkNotes or Shmoop for ideas. This isn’t something new, and faculty have long had a proven array of strategies to make their assignments less susceptible to this kind of plagiarism; a quick search for “antiplagiarism strategies” will take you to a host of handouts from teaching and learning centers across the country. Ideas like these apply to the ChatGPT world just as much as they did to the pre-AI world we lived in last fall.

Approaching the advent of chat bots as an emergency or crisis or apocalypse is a mistake that will lead to bad policies and practices that aren’t based on evidence. Instead, we in the university need to approach this as what it is: an intellectual problem that needs a thoughtful, judicious response. We need to use the methodologies of academic disciplines to figure out what AI can do, how students are engaging with it and how it interacts with existing forms of knowledge production.

Doing that requires, well, scholarship. We need to study people’s (including students’) attitudes toward AI. We need to study AI’s impacts on social organizations. We need to explore how chat bots work as writing tools and, just as important, how people are actually using them as writing tools in a wide range of situations, both in and outside the academy. This requires employing the methodologies and knowledge bases of a wide range of disciplines, including communication, psychology, sociology, economics, English, composition, education, business and more, in addition to the STEM fields that were engaged with AI long before ChatGPT came along. As this scholarship emerges, we will have a clearer understanding of what we are dealing with and how it impacts our pedagogical practices. At that point, we will be able to formulate sensible, effective responses that help our students achieve and thrive in this new environment.

But scholarship takes time. So what should we do in the meantime? First, don’t panic. Don’t assume all your students are cheating. Don’t revamp the way you assess students based on heated speculation.

Do familiarize yourself with ChatGPT and other bots. And keep familiarizing yourself with them, as they gain new capabilities regularly. And I’m not just talking about increasing linguistic capabilities or improvements in factual accuracy. We’re already seeing the first wave of ChatGPT apps appearing, including one that reads PDFs. Most of these early apps are clunky, but that will no doubt change soon. Also, OpenAI recently announced the ability to create URLs for particular chats. This is a significant development, because it makes it much easier for students to share their interactions with ChatGPT. Assigning and assessing exercises using ChatGPT just became much easier and more transparent.

Finally, do craft assignments that make plagiarism less likely; this is a good idea whether or not students are using AI.

I do not know what AI means for the future of writing instruction or college-level writing writ large. But neither does the author of the last article you read on AI, nor the author of the next one you will read. There are just too many uncertainties. And while moments of change like this make us eager to find soothsayers and prophets who will tell us what the future holds, it is precisely in such moments that we, as scholars and teachers, should be most skeptical of the clairvoyants and Cassandras. And even if this is a crisis, our best response as academics will be to do what we do best: careful, thoughtful scholarship.

Andrew C. Higgins is an associate professor of English at the State University of New York at New Paltz.
