Is something in the water—or, more appropriately, in the algorithm? Cheating—while nothing new, even in the age of generative artificial intelligence—seems to be having a moment, from the New York magazine article about “everyone” ChatGPTing their way through college to Columbia University suspending a student who created an AI tool to cheat on “everything” and viral faculty social media posts like this one: “I just failed a student for submitting an AI-written research paper, and she sent me an obviously AI-written email apologizing, asking if there is anything she can do to improve her grade. We are through the looking-glass, folks.”

It’s impossible to get a true read on the situation by virality alone, as the zeitgeist is self-amplifying. Case in point: The suspended Columbia student, Chungin “Roy” Lee, is a main character in the New York magazine piece. Student self-reports of AI use may also be unreliable: According to Educause’s recent Students and Technology Report, some 43 percent of students surveyed said they do not use AI in their coursework, 5 percent said they use AI to generate material that they edit before submitting, and just 1 percent said they submit generated material without editing it.

There are certainly students who do not use generative AI and students who question faculty use of AI—and myriad ways that students can use generative AI to support their learning and not cheat. But the student data paints a different picture than the one presidents, provosts, deans and other senior leaders painted in a recent survey by the American Association of Colleges and Universities and Elon University: Some 59 percent said cheating has increased since generative AI tools became widely available (21 percent noted a significant increase), and 54 percent do not think their institution’s faculty are effective at recognizing generative AI–created content.

In Inside Higher Ed’s 2025 Survey of Campus Chief Technology/Information Officers, released earlier this month, no CTO said that generative AI has proven to be an extreme risk to academic integrity at their institution. But most—three in four—said that it has proven to be a moderate (59 percent) or significant (15 percent) risk. This is the first time the annual survey, conducted with Hanover Research, asked how concerns about academic integrity have actually played out: Last year, six in 10 CTOs expressed some degree of concern about the risk generative AI posed to academic integrity.

Stephen Cicirelli, a lecturer in English at Saint Peter’s University whose “looking-glass” post was liked 156,000 times in 24 hours last week, told Inside Higher Ed that cheating has “definitely” gotten more pervasive within the last semester. But whether it’s suddenly gotten worse or has been steadily growing since large language models were introduced to the masses in late 2022, one thing is clear: AI-assisted cheating is a problem, and it won’t get better on its own.

So what can institutions do about it? Drawing on some additional insights from the CTO survey and advice from other experts, we’ve compiled a list of suggestions below. The expert insights, in particular, are varied. But a unifying theme is that cheating in the age of generative AI is as much a problem requiring intervention as it is a mirror—one reflecting larger challenges and opportunities within higher education.

(Note: AI detection tools did not make this particular list. Even though they have fans among the faculty, who tend to point out that some tools are more accurate than others, such tools remain polarizing and are not foolproof. Similarly, banning generative AI in the classroom did not make the list, though this may still be a widespread practice: 52 percent of students in the Educause survey said that most or all of their instructors prohibit the use of AI.)

Academic Integrity for Students

The American Association of Colleges and Universities and Elon University this month released the 2025 Student Guide to Artificial Intelligence under a Creative Commons license. The guide covers AI ethics, academic integrity and AI, career plans for the AI age, and an AI toolbox. It encourages students to use AI responsibly, critically assess its influence and join conversations about its future. The guide’s seven core principles are:

  1. Know and follow your college’s rules
  2. Learn about AI
  3. Do the right thing
  4. Think beyond your major
  5. Commit to lifelong learning
  6. Prioritize privacy and security
  7. Cultivate your human abilities

Connie Ledoux Book, president of Elon, told Inside Higher Ed that the university sought to make ethics a central part of the student guide, with campus AI integration discussions revealing student support for “open and transparent dialogue about the use of AI.” Students “also bear a great deal of responsibility,” she said. They “told us they don’t like it when their peers use AI to gain unfair advantages on assignments. They want faculty to be crystal clear in their syllabi about when and how AI tools can be used.”

Now is a “defining moment for higher education leadership—not only to respond to AI, but to shape a future where academic integrity and technological innovation go hand in hand,” Book added. “Institutions must lead with clarity, consistency and care to prepare students for a world where ethical AI use is a professional expectation, not just a classroom rule.”

Mirror Logic

Lead from the top on AI. In Inside Higher Ed’s recent survey, just 11 percent of CTOs said their institution has a comprehensive AI strategy, and roughly one in three CTOs (35 percent) at least somewhat agreed that their institution is handling the rise of AI adeptly. The sample size for the survey is 108 CTOs—relatively small—but those who said their institution is handling the rise of AI adeptly were more likely than the group overall to say that senior leaders at their institution are engaged in AI discussions and that effective channels exist between IT and academic affairs for communication on AI policy and other issues (both 92 percent).

Additionally, CTOs who said that generative AI had proven to be a low to nonexistent risk to academic integrity were more likely to report having some kind of institutionwide policy or policies governing the use of AI than were CTOs who reported a moderate or significant risk (81 percent versus 64 percent, respectively). Leading on AI can mean granting students institutional access to AI tools, the rollout of which often includes larger AI literacy efforts.

(Re)define cheating. Lee Rainie, director of the Imagining the Digital Future Center at Elon, said, “The first thing to tackle is the very definition of cheating itself. What constitutes legitimate use of AI and what is out of bounds?” In the AAC&U and Elon survey that Rainie co-led, for example, “there was strong evidence that the definitional issues are not entirely resolved,” even among top academic administrators. Leaders didn’t always agree on whether hypothetical scenarios described appropriate uses of AI: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—“the verdict was completely split,” Rainie said. Clearly, it’s “a perfect recipe for confusion and miscommunication.”

Rainie’s additional action items, with implications for all areas of the institution:

  1. Create clear guidelines for appropriate and inappropriate use of AI throughout the university.
  2. Include in the academic code of conduct a “broad statement about the institution’s general position on AI and its place in teaching and learning,” allowing for a “spectrum” of faculty positions on AI.
  3. Promote faculty and student clarity as to the “rules of the road in assignments.”
  4. Establish “protocols of proof” that students can use to demonstrate they did the work.

Rainie suggested that CTOs, in particular, might be useful regarding this last point, as such proof could include watermarking content, creating NFTs and more.

Put it in the syllabus! (And in the institutional DNA.) Melik Khoury, president and CEO of Unity Environmental University in Maine, who’s publicly shared his thoughts on “leadership in an intelligent era of AI,” including how he uses generative AI, told Inside Higher Ed that “AI is not cheating. What is cheating is our unwillingness to rethink outdated assessment models while expecting students to operate in a completely transformed world. We are just beginning to tackle that ourselves, and it will take time. But at least we are starting from a position of ‘We need to adapt as an institution,’ and we are hiring learning designers to help our subject matter experts adapt to the future of learning.”

As for students, Khoury said the university has been explicit “about what AI is capable of and what it doesn’t do as well or as reliably” and encourages them to recognize their “agency and responsibility.” Here’s an excerpt of language that Khoury said appears in every course syllabus:

  • “You are accountable for ensuring the accuracy of factual statements and citations produced by generative AI. Therefore, you should review and verify all such information prior to submitting any assignment.
  • “Remember that many assignments require you to use in-text citations to acknowledge the origin of ideas. It is your responsibility to include these citations and to verify their source and appropriateness.
  • “You are accountable for ensuring that all work submitted is free from plagiarism, including content generated with AI assistance.
  • “Do not list generative AI as a co-author of your work. You alone are responsible.”

Additional policy language recommends that students:

  • Acknowledge use of generative AI for course submissions.
  • Disclose the full extent of how and where they used generative AI in the assignment.
  • Retain a complete transcript of generative AI usage (including source and date stamp).

“We assume that students will use AI. We suggest constructive ways they might use it for certain tasks,” Khoury said. “But, significantly, we design tasks that cannot be satisfactorily completed without student engagement beyond producing a response or [just] finding the right answer—something that AI can do for them very easily.”

Design courses with and for AI. Keith Quesenberry, professor of marketing at Messiah University in Pennsylvania, said he thinks less about cheating, which can create an “adversarial me-versus-them dynamic,” and more about pedagogy. This has meant wrestling with a common criticism of higher education—that it’s not preparing students for the world of work in the age of AI—and the reality that no one’s quite sure what that future will look like. Quesenberry said he ended up spending all of last summer trying to figure out how “a marketer should and shouldn’t use AI,” creating and testing frameworks and ultimately vetting his own courses’ assignments: “I added detailed instructions for how and how not to use AI specifically for that assignment’s tasks or requirements. I also explain why, such as considering whether marketing materials can be copyrighted for your company or client. I give them guidance on how to cite their AI use.” He also created a specialized chatbot to which students can upload approved resources so it can act as an AI tutor.

Quesenberry also talks to students about learning with AI “from the perspective of obtaining a job.” That is, students need a foundation of disciplinary knowledge on which to create AI prompts and judge output. And they can’t rely on generative AI to speak or think for them during interviews, while networking or when working with clients.

There are “a lot of professors quietly working very hard to integrate AI into their courses and programs that benefit their disciplines and students,” he added. One thing that would help them, in Quesenberry’s view? Faculty institutional access to the most advanced AI tools.

Give faculty time and training. Tricia Bertram Gallant, director of the academic integrity office and Triton Testing Center at the University of California, San Diego, and co-author of the new book The Opposite of Cheating: Teaching for Integrity in the Age of AI (University of Oklahoma Press), said that cheating is part of human nature—and that faculty need time, training and support to “design educational environments that make cheating the exception and integrity the norm” in this new era of generative AI.

Faculty “cannot be expected to rebuild the plane while flying it,” she said. “They need course release time to redesign that same course, or they need a summer stipend. They also need the help of those trained in pedagogy, assessment design and instructional design, as most faculty did not receive that training while completing their Ph.D.s.” Gallant also floated the idea of AI fellows, or disciplinary faculty peers who are trained on how to use generative AI in the classroom and then to “share, coach and mentor their peers.”

Students, meanwhile, need training in AI literacy, “which includes how to determine if they’re using it ethically or unethically. Students are confused, and they’re also facing immense temptations and opportunities to cognitively offload to these tools,” Gallant added.

Teach first-year students about AI literacy. Chris Ostro, an assistant teaching professor and instructional designer focused on AI at the University of Colorado at Boulder, offers professional development on his “mosaic approach” to writing in the classroom—which includes having students sign a standardized disclosure form about how and where they’ve used AI in their assignments. He told Inside Higher Ed that he’s redesigned his own first-year writing course to address AI literacy, but he is concerned about students across higher education who may never get such explicit instruction. For that reason, he thinks there should be mandatory first-year classes for all students about AI and ethics. “This could also serve as a level-setting opportunity,” he said, referring to “tech gaps,” or the effects of the larger digital divide on incoming students.

Regarding student readiness, Ostro also said that most of the “unethical” AI use by students is “a form of self-treatment for the huge and pervasive learning deficits many students have from the pandemic.” One student he recently flagged for possible cheating, for example, had largely written an essay on her own but then ran it through a large language model, prompting it to make the paper more polished. This kind of use arguably reflects some students’ lack of confidence in their writing skills, not an outright desire to offload the difficult and necessary work of writing as a way of thinking critically.

Think about grading (and why students cheat in the first place). Emily Pitts Donahoe, associate director of instructional support in the Center for Excellence in Teaching and Learning and lecturer of writing and rhetoric at the University of Mississippi, co-wrote an essay two years ago with two students about why students cheat. They said much of it came down to an overemphasis on grades: “Students are more likely to engage in academic dishonesty when their focus, or the perceived focus of the class, is on grading.” The piece proposed the following solutions, inspired by the larger trend of ungrading:

  1. Allow students to reattempt or revise their work.
  2. Refocus on formative feedback to improve rather than summative feedback to evaluate.
  3. Incorporate self-assessment.

Donahoe said last week, “I stand by every claim that we make in the 2023 piece—and it all feels heightened two years later.” The problems with AI misuse “have become more acute, and between this and the larger sociopolitical climate, instructors are reaching unsustainable levels of burnout. The actions we recommend at the end of the piece remain good starting points, but they are by no means solutions to the big, complex problem we’re facing.”

Framing cheating as a structural issue, Donahoe said students have been “conditioned to see education as a transaction, a series of tokens to be exchanged for a credential, which can then be exchanged for a high-paying job—in an economy where such jobs are harder and harder to come by.” And it’s hard to fault students for that view, she continued, as they receive little messaging to the contrary.

Like the problem, the solution set is structural, Donahoe explained: “In tandem with a larger cultural shift around our ideas about education, we need major changes to the way we do college. Smaller class sizes in which students and teachers can form real relationships; more time, training and support for instructors; fundamental changes to how we grade and how we think about grades; more public funding for education so that we can make these things happen.”

With none of this apparently forthcoming, faculty can at least help reorient students’ ideas about school and try to “harness their motivation to learn.”
