

A wise homeowner knows to deal with roof leaks right away. Otherwise, the leak will get worse, and one day a big rain will flood the house.

For colleges and universities, ChatGPT is that big rain.

And the leak is this: for some 20 years now, we really haven’t known who is doing homework, and most of us haven’t sufficiently revamped our classes to deal with that.

The rise of the internet in the 1990s and 2000s led to dozens of online homework solution sites that appear at the top of web search results and have become legitimized in students’ eyes. Google’s phone app has “Solve homework” as the third search option just after “Shop” and “Translate,” and university bookstores upsell students on those sites with offers like, “Get homework done fast!”

The internet also gave students easy, anonymous access to low-cost contractors who will write essays, code computer programs or take entire online courses. It made solutions from friends or family just an email away. And it gave students anonymous classmate access on apps like Discord or GroupMe, where solution trading occurs at a pace akin to the New York Stock Exchange trading floor.

Many professors may not be aware of how much cheating is actually happening in their classes. In analyses I’ve conducted at professors’ requests, we found that one large California university’s introductory computer science class had a 60 percent rate of blatant copying on at-home programming assignments. At another college, we found that every student in a 30-student computer science class was clearly copying their programs—every single one. It’s not just one subject and not just homework: a student in a biology class told me that nobody was learning because the professor reuses quizzes and the solutions are all online. High cheating rates today are common across all disciplines.

What’s more, in interviews I’ve conducted with students, a common theme is that copying is not really “cheating” because everyone is doing it and professors don’t do much to prevent it. Many students say it’s not fair for a student to get a bad grade because the professor didn’t prevent copying. One described being caught between a rock and a hard place: either do your own work, spending 10 times longer than classmates and still earning a lower grade, or copy and risk getting caught and penalized. Another gave a common rationalization: “If professors get to be lazy, so do we.”

Many students say they view copying as simply an efficient way of getting points. It’s like how most people speed a bit when driving but don’t consider themselves to be committing a “crime.” They’re just flowing with the others, and it’s basically become the normal and efficient way of getting where they’re going. There’s little shame in getting caught speeding or copying today when everyone’s doing it.

Adding to this situation is that college students are often driven more by emotion than by reason. Those years from 18 to 25 are when the decision-making, reasoning part of the brain is really forming. Professors want to think of their students as adults, but an 18-year-old’s brain is still transitioning to adulthood—something rental car companies have known for decades.

In short, our leaky roof has been getting worse every year, and now along comes ChatGPT—the big rain—and our house is about to get flooded.

ChatGPT uses powerful artificial intelligence that can write essays, solve physics problems and code computer programs. It’s shockingly good at those things. It can take a page-long English description of a desired computer program, and in one minute it will generate a working program in nearly any common programming language. As one of my colleagues put it, “I’ve never been much impressed by AI, but oh my gosh!”

I had a student use ChatGPT to try to complete two hours’ worth of programming assignments from a past term. He finished in eight minutes and scored the class average. In just another 10 minutes of telling ChatGPT what to fix, he scored 100 percent. In another experiment, a student completed an entire semester’s 50 hours’ worth of programming in under two hours, earning 96 percent.

AI magnifies the copying problem because it’s so good. It’s far easier to use than web searches and cheaper and faster than contractors—without that pesky modern risk of the contractor blackmailing you at the end of the term. And AI will only get better. Beyond OpenAI’s ChatGPT, other companies are releasing their own versions, like Google’s Bard.

So how do we in academe deal with the big rain and our leaky roofs?

The first step is to realize that we have a problem, and it’s not a new one. ChatGPT just magnifies it.

The next step is to find a solution. Institutions and professors are trying many different approaches with some success, such as:

  • Putting more emphasis on high-quality proctored assessments—and ensuring solutions aren’t online. But as every professor knows, proctoring has challenges, plus exams are limited in what they can assess.
  • Requiring students to present their work to teachers. That’s great but hard to scale to large classes, and it also introduces more subjectivity in grading.
  • More aggressively seeking and punishing cheating, ideally to prevent it—nobody speeds right past a cop.

But those approaches take high professor effort, involving time and often money. Through research that we’ve conducted at the University of California, Riverside, we’ve found a few low-effort prevention methods can yield some success. We regularly show our cheat-detection tools to the class—in a fun way, like “Let’s see if there was struggle on last week’s programming labs,” or “Let’s see if anyone came up with similar solution approaches on the coding quiz” (with the tool in anonymous mode, or asking for volunteers).
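To make the idea of such a demonstration concrete, here is a minimal sketch of how a basic code-similarity check might work. This is a hypothetical illustration only, not the tool described above: real systems use far more robust fingerprinting, while this sketch simply normalizes identifier names (so trivial renaming doesn’t hide copying) and compares token shingles.

```python
# Hypothetical sketch of a code-similarity check for programming
# submissions. Not the author's actual tool; for illustration only.
import re
from itertools import combinations

def shingles(source: str) -> set[str]:
    """Tokenize a program and replace every identifier with 'ID', then
    return the set of 3-token shingles. Renaming variables therefore
    does not change the result."""
    raw = re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)
    norm = ["ID" if t[0].isalpha() or t[0] == "_" else t for t in raw]
    return {" ".join(norm[i:i + 3]) for i in range(len(norm) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def suspicious_pairs(submissions: dict[str, str], threshold: float = 0.8):
    """Return (student, student, score) triples at or above threshold."""
    return [(x, y, round(similarity(submissions[x], submissions[y]), 2))
            for x, y in combinations(submissions, 2)
            if similarity(submissions[x], submissions[y]) >= threshold]
```

A copied program with renamed variables scores near 1.0 against its source, while independently written programs score much lower, which is why even a simple demonstration like this can be persuasive to a class.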

We also have a clear discussion of what constitutes cheating and the harsh penalties if caught (in week three, since nobody on day one is planning to cheat), plus we administer a quiz on cheating with clear dos and don’ts. Those relatively simple steps have reduced cheating rates from 30 percent to 10 percent in our large computer science classes. Those low-effort techniques—showing the tools at a professor’s disposal, clearly discussing cheating and administering a cheating quiz—can help reduce cheating in a variety of courses.

Those techniques help, but more is needed to revamp our classes to help ensure students are learning despite the easy availability of solutions. I mean things like ensuring that classes are carefully scaffolded, that help is readily available (via office hours, student learning assistants, discussion forums, allowed collaboration and so forth), and that students can reattempt work (within reason). Like adding more low- and medium-stakes proctored assessments, which may require more campus facilities that support such assessments. Like ensuring professors, or at least teaching assistants, actually get to know their students individually, and those students’ work, which goes counter to today’s increased class sizes. And like spending some reasonable percentage of teaching time seeking out and addressing cheating cases, which many professors today simply don’t do.

Looking ahead, we need to develop new approaches that automatically detect whether a particular student actually did the work themselves, such as comparing their homework with their proctored exams or identifying each student’s distinct patterns. How can we do that? By basically using AI to fight AI, or at least to better manage it.
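One simple version of that homework-versus-exam comparison can be sketched in a few lines. The function below is a hypothetical illustration, with an assumed threshold of 30 percentage points: it flags students whose at-home scores far outpace their proctored scores, a discrepancy that would prompt a closer (human) look rather than an automatic accusation.

```python
# Hypothetical sketch: flag students whose at-home scores far exceed
# their proctored scores. The 30-point gap is an assumed threshold,
# and a flag is a prompt for human review, not proof of cheating.
def flag_discrepancies(scores: dict[str, tuple[float, float]],
                       gap: float = 30.0) -> list[str]:
    """scores maps student -> (homework_pct, proctored_exam_pct).
    Return, sorted, the students whose homework percentage exceeds
    their proctored percentage by at least `gap` points."""
    return sorted(student
                  for student, (homework, exam) in scores.items()
                  if homework - exam >= gap)
```

Richer versions of the same idea might compare writing or coding style between supervised and unsupervised work, which is where AI-based pattern detection comes in.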

The light that AI has shone on a 20-year-old weakness in academe could very well lead to great improvements in how we run classes to ensure submitted work is actually from the student. And beyond that, AI shows great promise to improve education as well. AI in the form of ChatGPT does an amazing job explaining things, finding errors in things (it can sometimes spot bugs in a student’s computer programs far faster than I or my teaching assistants can) and even teaching things.

And AI will only get better. AI basically can become a private tutor who is available 24-7, provides help in seconds and never judges you. That is potentially a huge step forward in education, and I look forward to it.

Frank Vahid is professor of computer science and engineering at the University of California, Riverside; chief learning officer at zyBooks (a Wiley brand); and founder of
