My first Inside Higher Ed article on text spinners was published in November 2022, the same month that OpenAI launched ChatGPT. Before the year was out, academics were “Freaking Out About ChatGPT” and exploring how this might affect “The Future of Continuing Education.” I wondered if the advent of generative AI meant that students would abandon text-spinner tools such as Quillbot, which had been acquired by Course Hero the year before.
Students have been using text spinners—also called rewriter or paraphrase tools—for years as a means of avoiding automated plagiarism detection. Because software such as SafeAssign, Ouriginal or Turnitin identifies verbatim text that has appeared elsewhere, a text altered with synonyms and new sentence structures is less likely to be flagged as probable plagiarism. While those tools once primarily replaced key words with synonyms—a process Chris Sadler dubbed “Rogeting”—they now also restructure sentences, making the source of the original text more difficult to pinpoint.
I thought that generative AI, which can produce unique sentences unlikely to match existing text, might render text spinners obsolete. However, since Turnitin launched its AI-detection feature in April, I have instead noticed an uptick in submitted student work that bears the hallmarks of text-spinner alterations. A quick search for “how to make AI content undetectable” yields millions of results, including webpages and videos with advice on methods for outwitting AI detection. Almost all involve automated means of rephrasing AI-generated text, and sites have emerged touting tools that will “humanize” AI writing.
The use of these tools is not secret: an article in Mashable notes that “you can … have tools like Quillbot paraphrase the essays ChatGPT gives you so it doesn’t look too obvious.” A March article in The Korea Times offers a more comprehensive list of text spinner tools: “Plagiarism can also be produced not only by ChatGPT, but by rewriting published texts with the help of Quillbot, DeepL Writer, Paraphrase tools, Wordfixer Bot, AI Article Spinner, SpinBot and other online tools.”
I recently tested Turnitin’s AI detection to see if GPT-4-generated text that had been rephrased via Quillbot would change its probable AI percentage score. I gave ChatGPT the prompt “Write a 750-word essay about the symbolism of the billboard in The Great Gatsby” and submitted the result to Turnitin. Turnitin estimated the text as containing 100 percent AI content and assigned it a 19 percent unoriginality score, mostly consisting of matches to other submitted student papers. I then asked Quillbot to paraphrase this text and submitted that result to Turnitin. Turnitin concluded that this altered text included only 21 percent probable AI-generated content.
It is worth mentioning that despite the low percentage of AI content Turnitin detected, this text would be unlikely to fool a human reader into believing it was a primarily human-produced text. Quillbot’s version of the ChatGPT paper contained some “tortured phrases” that are characteristic of text spinners: “the bubble of wealth and privilege” turned into “the privileged and wealthy cocoon,” and “Gatsby amasses great wealth” became “Gatsby acquires enormous money.”
While ChatGPT has received the most attention as an AI text generator, many other AI tools are available. In April, Grammarly released a beta version of Grammarly Go, which includes generative AI. Grammarly advertises that by using its AI writing tool, one can “automatically generate a draft using simple command prompts.”
Using the free version of the Grammarly app, I opened a new document and clicked on the “Grammarly Go: AI text generation” button. I gave it my Gatsby prompt and it produced an essay that I then submitted to Turnitin. It was flagged as 100 percent likely AI writing and 13 percent unoriginal, with matches to a few student papers.
I next used Quillbot to change the wording of the Grammarly paper. This rephrased paper was flagged as only 31 percent likely AI and just 8 percent unoriginal, matching to only two previously submitted student papers. This version also included noticeably unusual wording: a reference to “the illusion of the American dream” from the initial Grammarly paper was now “the fabrication of the American ideal,” and the assertion that “The characters are haunted by their past and are unable to move on from their mistakes” was revised as “The characters are unable to learn from their faults because they are plagued by their history.”
Not Playing Plagiarism Police
In July, the Modern Language Association and the Conference on College Composition and Communication released the MLA-CCCC Joint Task Force on Writing and AI working paper. This paper expresses concern about the use of AI detection programs, advising instructors to
“focus on approaches to academic integrity that support students rather than punish them and that promote a collaborative rather than adversarial relationship between teachers and students. We urge caution and reflection about the use of AI text detection tools. Any use of them should consider their flaws and the possible effect of false accusations on students, including negative effects that may disproportionately affect marginalized groups.”
It is the “adversarial relationship” warned against here that strikes me most. I have heard other instructors mention that they now become suspicious of writing that is “too good,” wondering if it indicates AI-generated text. How can we celebrate our students’ successes if we speculate that marked improvement stems not from their own dedicated work but from an AI program? I do not want to distrust my students or play plagiarism police, so I try to ensure from the beginning of the class term that they have a foundational knowledge of academically honest practices and understand the importance of originality and of appropriately citing words and ideas from others.
To accomplish that, instructors need to be aware of the messages students are receiving about how to approach generative AI—“beat AI detection!”—and of what types of AI tools exist. As an example, an instructor might advise a student to “use Grammarly” for help writing their paper, believing it to be solely a grammar-checking tool, and then inadvertently frustrate and distress that student by accusing them of academic dishonesty after they use Grammarly’s generative AI tool.
The point in the MLA’s paper about the unreliability of AI-detection tools is also well taken. My own experiment with text spinners and AI detectors demonstrates that students have numerous automated means of effectively disguising AI writing. Of even greater concern, an accusation based on a false positive can irrevocably damage trust between an instructor and a student.
The MLA paper also astutely notes that marginalized groups may experience disproportionate negative effects from the use of AI content detectors. For example, the free version of Grammarly does not offer the plagiarism-detection or full-sentence-rewriting features that the paid version, Grammarly Premium, does. Quillbot and the Undetectable AI rewriter tool will paraphrase only a handful of sentences before prompting users to upgrade to the paid version. Students who can afford such subscriptions are thus better positioned to disguise AI-generated text than those who cannot. And, as I noted in my previous article, language that calls attention to itself can be indicative of writing by an English-language learner and not necessarily of an automated text spinner.
Inside Higher Ed has already featured articles on “Integrating Gen AI Into Your Fall Classes” and how to “Integrate Gen AI Into Your University Work,” and I am learning from colleagues who have successfully used generative AI as an effective learning tool. Perhaps as we work to integrate it meaningfully and ethically, and teach students how to cite its use, students will feel less compelled to disguise AI-generated writing.