I had a follow-up post to last week’s “Freaking Out About ChatGPT” ready to go, and then I saw some things that Marc Watkins of the University of Mississippi was saying on Twitter about the technology and its use, and I immediately shelved my post so I could share this from Marc. —JW


In Plato’s Phaedrus, Socrates knocks the value of writing over spoken discourse, arguing that “Even the best of writings are but a reminiscence of what we know.” Two millennia later, this sentiment feels fresh and alive among shouts of moral panic following the launch of AI writing assistants like ChatGPT. Fearful that students will use language models to cheat, some have called for a return to oral exams, blue books or, worst of all, a shift to AI-powered proctoring services that track students’ eye movements and are trained on data that unfairly bias results against students of color. We should avoid such regressive measures. This fall, I taught several sections of first-year composition using GPT-3-powered tools. Let me calm your fears. Students, and likely you, will come to use this technology to augment the writing process, not replace it.

Language models are imperfect tools—amazing one moment and frighteningly inaccurate the next. They make up facts, hallucinate and contain all the biases we’ve come to cringe at on the internet—like Google, with few of the safeguards. OpenAI is now trying to address these issues in ChatGPT with pretrained responses, but those safeguards are easily defeated with simple commands. Still, the rate at which these tools are being deployed and readily adopted means that we’re all likely to use some form of AI assistance in our writing, if we aren’t already. This is why it is crucial for us to teach our students about these models, how to use them ethically and what it will mean for their writing process.

This summer, we started an AI working group at the University of Mississippi’s Department of Writing and Rhetoric with Bob Cummings, Stephen Monroe, Angela Green, Chad Russell, Guy Krueger and Andrew Davis to explore whether these tools could help students write. Our group worked with Fermat to develop tools using GPT-3. One example is a counterargument generator, which allows students to explore different perspectives on topics they are interested in. We also used Elicit’s AI research tool to help students brainstorm research topics. Elicit provides a quick summary of articles using a research question as a prompt, making it easier for students to find relevant information from a database of open-access journal articles. It’s like JSTOR on 1980s box-office Schwarzenegger steroids.

Our writing group gathered reflective feedback from students about their use of these tools this fall. Students shared that using these apps in scaffolded assignments can enhance their creative process, a promising outcome when both students and faculty approach the technology with caution. One student reflected on how Elicit helped them expand their knowledge, saying, “Some of the ideas that Elicit gave me I had already thought of, but the ones that I didn’t have were outside my scope of thinking, which is really helping me start to expand on new ideas.” Another student shared that Elicit provided them with a “broader perspective about what details [they] should write about.” Students also found value in using Fermat to explore counterarguments, with one student saying, “The AI was helpful when I figured out how to use it, although there are still parts of it that confuse me. I think it is helpful because it looks at your writing and develops all sorts of statements or questions that can go with or against the point you are trying to prove.”

I know you’re freaking out. OpenAI knows it, too. Their engineers are working on a watermarking process to label generated text in order to curb intentional misuse, like academic cheating. I doubt it will work as envisioned, and even if it does, I’m not so sure I want to write a check to a company for inventing a detection service for the problem it has helped create. We should be proactive in our response and not approach our teaching out of panic and mistrust of our students. What message would we send our students by using AI-powered detectors to curb their suspected use of an AI writing assistant, when future employers will likely want them to have a range of AI-related skills and competencies?

What we should instead focus on is teaching our students data literacy so that they can use this technology to engage human creativity and thought. Keep in mind, language models are just math and massive processing power, without any real cognition or meaning behind their text generation. Human creativity is far more powerful, and who knows what can be unlocked if such creativity is augmented with AI?

If we don’t educate our students on the potential dangers of AI, we may see harmful consequences in our classrooms and beyond. Meta’s Galactica language model was recently pulled from the internet after a disastrous demo where users generated false, racist and sexist research articles. We don’t want professionals like doctors and pharmacists using language models without understanding their limitations, and we definitely don’t want law enforcement or judges relying on biased AI in their decision-making. The potential for unintentional harm is too great to ignore. Imagine the consequences if a well-meaning teacher were to use a language model in a K-12 setting without fully understanding its capabilities and limitations.

The pace at which this tech is being developed and deployed is staggering. By the time we finished the fall semester, Fermat had created another counterargument tool, one that changed its suggestions as a user typed in real time—a huge departure from the original tool we piloted just a few weeks before. And Elicit made its own staggering leaps, now automatically providing users with a short synthesis of the top four research articles in their database search and updating it in real time as they click through and rank them. All these changes are made possible by the continued tweaking of OpenAI’s GPT-3 and hundreds of very enthusiastic app developers using it to power AI writing and AI research assistants.

There are a number of these developers out there, like Fermat and Elicit, who are interested in hearing what educators would like to see in these apps. One example is Explainpaper. It was built by Aman Jha and Jade Asmus and launched for under $400 to help students explore and read complex jargon found in many research studies. They have an active Discord channel and welcome suggestions for how their tool might continue to be developed for education. There’s also a growing community of educators active on social media who regularly discuss AI text generators and their implications.

As language models continue to be deployed, it is essential for universities to invest in resources and faculty to ensure that educators are equipped to effectively teach writing in this new technological landscape. This is going to be fast and messy, and training won’t be cheap. It may mean rethinking traditional teaching methods or assessments or retiring antiquated forms of writing and searching for more meaningful assignments to help shape student thinking. At the University of Mississippi, Bob Cummings and I have already begun discussing how AI text generators might impact the future of teaching, and I hope to continue this conversation across departments and colleges, but this must extend beyond a single university and involve collaboration across disciplines and institutions.

We must also consider the broader implications of AI in education, particularly in regions and communities where access to quality education is limited. In these communities, AI writing assistants have the potential to narrow the gap between underresourced schools and more affluent ones and could have an immense impact on equity.
