Custom exams that give each student a unique data set could significantly curtail cheating in online assessments, researchers at a British university believe.
With some studies suggesting that cheating has massively increased following the switch to remote assessments, scholars have begun examining how they can design exams that make it impossible for students to collude or plagiarize their classmates’ work.
In a novel approach, chemists at the University of Exeter are using computer coding to generate 60 different data sets—one for each student in a single class—for a data analytics test that is worth 20 percent of the entire module grade.
The script—which models lab equipment to produce realistic data but introduces some randomness so that each data set is different—could, with very little work, be used to generate unique data sets for thousands of exams in different disciplines, says a study published in The Journal of Chemical Education.
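The study does not reproduce its script here, but the idea—simulate realistic instrument output, then perturb it per student—can be sketched in a few lines. The following is a hypothetical Python illustration, not the Exeter code: it invents a first-order kinetics experiment and seeds the random generator from the student ID, so each data set is unique yet reproducible for marking.

```python
import hashlib
import numpy as np

def make_dataset(student_id: str, n_points: int = 20):
    """Generate one student's data set for a (hypothetical) first-order
    kinetics exercise. Seeding the generator from the student ID makes
    every data set different but lets staff regenerate it exactly."""
    # Stable seed derived from the ID (hashlib avoids Python's
    # per-process hash randomization).
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)

    k = rng.uniform(0.05, 0.25)            # per-student rate constant (1/s)
    c0 = rng.uniform(0.8, 1.2)             # per-student initial conc. (mol/L)
    t = np.linspace(0, 30, n_points)       # sampling times (s)
    ideal = c0 * np.exp(-k * t)            # exact first-order decay
    noise = rng.normal(0, 0.01, n_points)  # instrument-like measurement noise
    return t, ideal + noise, k             # k retained for the answer key
```

Because the data come from a model plus noise, every set supports the same analysis question while the numbers—and hence the answers—differ from student to student.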
“If you have an exam which counts a lot towards a degree and it moves online, it presents an opportunity to cheat and even incentivizes it,” explained Alison Hill, a senior lecturer in biosciences who co-authored the paper with her Exeter colleague Nicholas Harmer.
“This [online cheating] isn’t just happening in academia—my husband is a Devon chess champion, and when chess moved online in the pandemic, there were reports of people using computers to help them win.”
One method to prevent collusion on data analysis questions is to limit exams to one hour. However, this brief exam window unfairly penalizes students with poor internet connections or those based in different time zones, who often had to begin their tests at 3 a.m., explained Hill.
But the students’ preferred option of a 24-hour exam window was an invitation for students to share their answers, she argued.
“We’ve seen in other territories how once the paper goes live, a WhatsApp group is set up immediately—people simply see this kind of sharing as a good investment of their time,” said Hill, who argued that relying on university honor codes to halt cheating would be “completely naïve.”
“We can’t entirely stop cheating, but if every student has their own data set—with the same question—the cost-benefit balance of cheating is no longer in the student’s favor, as they will need to do the work again [for a classmate] with a different information set,” said Hill.
Designing out cheating by creating different data sets could be applied to most data-heavy exams—with automatically generated answer sheets for each paper—says the journal article.
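A consequence of seeding each data set from the student ID is that nothing needs to be stored: the marking key can be regenerated on demand. As a hypothetical Python sketch (mirroring the seeding approach, not the journal's actual script), an answer-sheet generator simply redraws the same per-student parameters and reports the model answers:

```python
import hashlib
import math
import numpy as np

def answer_key(student_id: str) -> dict:
    """Regenerate the model answers for one student (hypothetical
    first-order kinetics setup; draws parameters in the same order
    and from the same seed as the data-generation script)."""
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)
    k = rng.uniform(0.05, 0.25)   # rate constant, same draw order as the data
    rng.uniform(0.8, 1.2)         # initial concentration (drawn, not needed here)
    return {
        "rate_constant": round(k, 4),            # 1/s
        "half_life": round(math.log(2) / k, 2),  # t_1/2 = ln(2) / k, in s
    }
```

Running this over the class roster yields one answer sheet per paper, which is what makes the approach practical at the scale of thousands of exams.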
This kind of test design would be far more effective than some exam proctoring techniques piloted in the pandemic, such as the webcam surveillance of those taking exams, which some students easily circumvented, said Hill.
“Lockdown students will find a way to get around these types of rules,” she said.