Is an ice cream sandwich a sandwich? How about a sushi roll, chicken wrap or sloppy joe? These were some of the prompts included in a classification and model-building assignment in the fall 2022 Knowledge-Based AI course that David Joyner taught at the Georgia Institute of Technology.
But when Joyner, executive director of online education and of the online master of science in computer science program, as well as a senior research associate, was scheduled to teach the course again in the spring 2023 semester, he reconsidered the assignment in light of ChatGPT, the OpenAI chat bot that burst onto the global stage in late 2022 and sent shock waves across academe. The bot interacts with users in a conversational way, including by answering questions, admitting its mistakes, challenging falsehoods and rejecting inappropriate requests.
“I’d used the questions for five years because they were fun questions,” Joyner said. “But ChatGPT’s answer was so precise that I’m pretty sure it was learning from my own best students,” who he suspected had posted their work online. Joyner replaced several of the sandwich options with avocado toast, shawarma, pigs in a blanket, a Klondike bar and a Monte Cristo. He also updated the academic misconduct statement on his syllabus to “basically say that copying from ChatGPT isn’t different from copying from other people.” Such efforts, Joyner acknowledges, may be a temporary fix.
As faculty members ponder academe’s new ChatGPT-infused reality, many are scrambling to redesign assignments. Some seek to craft assignments that guide students in surpassing what AI can do. Others see that as a fool’s errand—one that lends too much agency to the software.
Either way, in creating assignments now, many seek to exploit ChatGPT’s weaknesses. But answers to questions about how to design and scale such assessments, and about how to help students learn to mitigate the tool’s inherent risks, remain, at best, works in progress.
“I was all ready to not stress about the open AI shit in terms of student papers, because my assignments are always hyper specific to our readings and require the integration of news articles to defend claims etc. … BUT THEN I TRIED IT …” Dannagal Goldthwaite Young, professor of communication at the University of Delaware, wrote this week in introducing a thread on Twitter.
Students Should Surpass AI—or Not
When Boris Steipe, associate professor of molecular genetics at the University of Toronto, first asked ChatGPT questions from his bioinformatics course, it produced detailed, high-level answers that he deemed as good as his own. He still encourages his students to use the chat bot. But he also created The Sentient Syllabus Project, an initiative driven by three principles: AI should not be able to pass a course, AI contributions must be attributed and true, and the use of AI should be open and documented.
“When I say AI cannot pass the course, it means we have to surpass the AI,” Steipe said. “But we also must realize that we cannot do that without the AI. We surpass the AI by standing on its shoulders.”
Steipe, for example, encourages students to engage in a Socratic debate with ChatGPT as a way of thinking through a question and articulating an argument.
“You will get the plain vanilla answer—what everybody thinks—from ChatGPT,” Steipe said, adding that the tool is a knowledgeable, infinitely patient and nonjudgmental debate partner. “That’s where you need to start to think. That’s where you need to ask, ‘How is it possibly incomplete?’”
But not every faculty member is convinced that students should begin with ChatGPT’s outputs.
“Even when the outputs are decent, they’re shortcutting the students’ process of thinking through the issue,” said Anna Mills, English instructor at the College of Marin. “They might be taking the student in a different direction than they would have gone if they were following the germ of their own thought.”
Some faculty members also challenge the suggestion that students should compete with AI, as such framing appears to assign the software agency or intelligence.
“I do not see value in framing AI as anything other than a tool,” Marc Watkins, lecturer in composition and rhetoric at the University of Mississippi, wrote in an email. Watkins, his department colleagues and his students are experimenting with ChatGPT to better understand its limitations and benefits. “Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology.”
Still, others question the suggestion that AI-proofing a course is difficult.
“Creating a course that AI cannot pass? Shouldn’t take very long at all,” Robert Cummings, associate professor of writing and rhetoric at the University of Mississippi, wrote in an email. “Most AI writing generators are, at this stage, laughably inaccurate … Testing AI interactions with components of a course might make more sense.”
But Steipe is pondering a possible future in which descendants of today’s AI-writing tools raise existential questions.
“This is not just about upholding academic quality,” Steipe said. “This is channeling our survival instincts. If we can’t do that, we are losing our justification for a contribution to society. That’s the level we have to achieve.”
How Faculty Can Exploit ChatGPT’s (Current) Weaknesses
In the future, faculty members may get formal advice about how to craft assignments in a ChatGPT world, according to James Hendler, director of the Future of Computing Institute and professor of computer, web and cognitive sciences at Rensselaer Polytechnic Institute.
In the meantime, faculty are innovating on their own.
In computer science, for example, many professors have observed that AI writing tools can write code that works, though not necessarily code that humans find easy to read and edit, Hendler said. That observation can be exploited to create assignments that distinguish between code that merely functions and code that is written for human readers.
“We try to teach our students how to write code that other people will understand, with comments, mnemonic variable names and breaking code up into meaningful pieces,” Hendler said. “That’s not what’s happening with these systems yet.”
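To make Hendler’s distinction concrete, consider a minimal Python sketch (a hypothetical illustration written for this point, not code drawn from any course mentioned here): both functions compute the average of the even numbers in a list, but only the second is written the way he describes, with a mnemonic name, an explanatory comment and the logic broken into meaningful steps.

    def f(x):
        # Terse, machine-style code: correct, but opaque to a human editor.
        return sum(v for v in x if v % 2 == 0) / max(len([v for v in x if v % 2 == 0]), 1)

    def average_of_even_numbers(values):
        """Return the mean of the even numbers in values, or 0 if there are none."""
        even_numbers = [value for value in values if value % 2 == 0]
        if not even_numbers:
            return 0
        return sum(even_numbers) / len(even_numbers)

An assignment graded on readability as well as correctness, the thinking goes, rewards the second version in ways that current AI output does not reliably match.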
Also, since ChatGPT’s ability to craft logical arguments can underwhelm, assignments that require critical thinking can still work well in its presence.
“It’s not very good at introspecting,” Steipe said. “It just generates. You often find non sequiturs or arguments that don’t hold water. When you point it out to the AI, it says, ‘Oh, I got something wrong. I apologize for the confusion.’”
Several faculty members contacted for this article said that lessons learned from the earlier emergence of Wikipedia hint at a path forward. Both the online encyclopedia and OpenAI’s chat bot offer coherent prose that is prone to errors, and faculty responded to the former by adapting assignments to pair use of the tool with fact-checking.
Moving forward, professors can expect students to use ChatGPT to produce first drafts that warrant review for accuracy, voice, audience and fit with the purpose of the writing project, Cummings wrote. As the tools improve, students will need to develop more nuanced skills in these areas, he added.
An Unsolved Problem
Big tech plans to mainstream AI writing tools in its products. Microsoft, for example, which recently invested in OpenAI, will integrate ChatGPT into its popular office software and sell access to the tool to other businesses. That has put pressure on Google and Meta to speed up their own AI-approval processes.
“My classes now require AI, and if I didn’t require AI use, it wouldn’t matter, everyone is using AI anyway,” Ethan Mollick, associate professor of management and academic director of Wharton Interactive at the University of Pennsylvania, wrote on his blog, which translates academic research into useful insights.
But big tech’s speed in delivering AI products to market has not always been matched by care. Social media platforms, for example, were once naïvely celebrated for bringing together those with shared interests; few foresaw at the time that the platforms would also bring together supporters of terror, extremism and hate.
Meta’s release of a ChatGPT-like chat bot several months before OpenAI’s product received a tepid response, which Meta’s chief artificial intelligence scientist, Yann LeCun, blamed on Meta being “overly careful about content moderation,” according to The Washington Post. (LeCun spoke with Inside Higher Ed about challenges in computer science in September.) Faculty members may need to help students learn to mitigate and address the real-world harms that new tech tools may pose.
“The gloves are off,” Steipe said of the enormous financial incentives driving the emergence of sophisticated chat bots. In higher education, that may mean the ways in which professors assess students will change. “We’ve heavily been basing assessment on proxy measures, and that may no longer work.”
Professors may assess their students directly, but that level of personal interaction generally does not scale. Still, some are encouraged to find themselves on the same side, so to speak, as their students.
“Our students want to learn and are not in a rush to cede their voices to an algorithm,” Watkins wrote.
Such alignment, when present, may offer comfort amid the heady disruption academics have experienced since ChatGPT’s release, especially as bigger questions loom beyond how to assign grades.
“The difference between the AI and the human mind is sentience,” Steipe said. “If we want to teach as an academy in the future that is going to be dominated by digital ‘thought,’ we have to understand the added value of sentience—not just what sentience is and what it does, but how we justify that it is important and important in the way that we’re going to get paid for it.”