Professors who adopt a slate of simple, proactive classroom measures to address cheating can significantly increase academic integrity among students, according to a new study from the University of California, Riverside, and zyBooks, a digital college courseware platform run by Wiley.
The study looked at six “low-effort” interventions—each of which took less than an hour for the professor to prepare and could be easily adapted for other courses—aimed at reducing cheating in an online section of an introductory computer science course in which about 100 students were enrolled.
Those interventions were: discussing academic integrity early in the course; requiring students to score 100 percent on an academic integrity quiz; allowing students to retract assignments they have second thoughts about after submitting them; reminding students of the cheating policy partway through the term; demonstrating anticheating tools, such as software that flags similarities in submitted student work; and normalizing academic help and support.
The researchers measured the time it took the students to complete the assignments and the number of students who turned in notably similar code—two metrics that can indicate cheating—compared to students in another section of the course.
Smita Bakshi, Wiley’s senior vice president for academic learning and a co-founder of zyBooks, said that similarity checking is the most common method of sussing out cheating, but she noted in an email to Inside Higher Ed that both similarity and time “only suggest potential violations of academic integrity. An instructor always should thoroughly investigate to determine whether actual violations occurred.”
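The study does not detail how its similarity checker works, but the general idea behind such tools is straightforward. The sketch below is illustrative only: it assumes Python's standard difflib and an arbitrary 0.9 threshold, not the study's actual software or settings, and it simply flags pairs of submissions whose text overlaps heavily. As Bakshi cautions, a high score is only a signal to investigate, not proof of cheating.

```python
# Minimal sketch of pairwise similarity checking (illustrative only;
# not the tool used in the study). Uses Python's standard library.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(code_a: str, code_b: str) -> float:
    """Return a 0-1 ratio of how similar two submissions are."""
    return SequenceMatcher(None, code_a, code_b).ratio()

def flag_similar_pairs(submissions: dict, threshold: float = 0.9):
    """Yield pairs of student IDs whose submissions exceed the threshold.

    A high ratio only suggests a potential violation; the instructor
    still has to investigate before drawing any conclusion.
    """
    for (id_a, code_a), (id_b, code_b) in combinations(submissions.items(), 2):
        if similarity(code_a, code_b) >= threshold:
            yield id_a, id_b

# Toy example: two near-identical submissions and one distinct one.
subs = {
    "student1": "for i in range(10):\n    print(i * 2)\n",
    "student2": "for i in range(10):\n    print(i * 2)\n",
    "student3": "total = sum(range(10))\nprint(total)\n",
}
for pair in flag_similar_pairs(subs):
    print("Suspiciously similar:", pair)
```

In practice, real plagiarism detectors compare program structure rather than raw text so that renaming variables or reordering lines does not hide copying, but the workflow is the same: score every pair, surface the outliers, and leave the judgment to a human.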
Collectively, the six interventions appeared to increase the amount of time students took to complete assignments, indicating they weren’t simply copying and pasting code from another source, and decreased instances of multiple students turning in work that was suspiciously similar.
“The results show substantial student behavior improvements when applying those low-effort methods,” the study found. The median time students spent on a programming assignment increased by 60 percent, from six minutes and 56 seconds to 11 minutes and six seconds. And the proportion of students who turned in programs deemed overly similar fell from 33 percent to 18 percent, a relative decrease of about 45 percent.
David Rettinger, director of academic integrity programs at the University of Mary Washington and president emeritus of the International Center for Academic Integrity, said several interventions in the study have previously proven effective, including talking with students about the definition and importance of academic integrity.
Still, he said, the research is promising because it evaluates effectiveness using the metrics of time and similarity at a relatively large scale; because measuring actual instances of cheating is so difficult, most academic integrity studies rely on self-reported student surveys.
“Detecting cheating is hard. What they did was a good first-order approximation, in my opinion,” he said.
Individual Interventions
Rettinger said his biggest criticism of the study was that the researchers bundled all six interventions into one experiment, rather than teasing them out to gauge the value of each one.
The researchers acknowledged this shortcoming in the study, writing that measuring each intervention independently would have been impractical, requiring them to run dozens of different experiments. They also surmised that bundling actually enhanced the effectiveness of the interventions.
“We suspect it is more likely that the collection has a more powerful impact on the student’s mental model of the class, with the sum being greater than the parts. Furthermore, since doing all six methods still only required just a few hours of effort total, there is no compelling reason for us to prune away any of the methods,” the study stated. “However, learning of the impact of each method is an area others may wish to investigate.”
Rettinger’s concern was that one or more of the interventions might be virtually useless—or, even worse, detrimental—but that the positive effects of the others obscured that fact.
“Some of these things can backfire on you,” he said. “And there’s no way to know if you dump them all into one course.”
The study also acknowledged a concern that some instructors have expressed about implementing anticheating interventions: that placing a heavy emphasis on academic integrity will impact their course evaluations. Indeed, the course evaluation score of the computer science professor who participated declined slightly after the experiment was conducted; as one student noted, “The professor put more effort in trying to find ‘cheaters’ than in actually teaching the class.”
“In any case, this evaluation data suggests teachers concerned about evaluation scores may need to think carefully about how to keep their evaluation scores higher when applying the interventions. We hope to do future work in this area,” the researchers wrote.
Ultimately, the study found that professors need not devote extensive time and effort to curbing cheating; low-effort tactics like those outlined in the study, which researchers said took the professor a total of five hours to prepare and implement, can also be effective—and not only in computer science.
“In any field where students must do something outside of class that is hard to do (designing circuits, writing essays, etc.), these techniques can yield improvements,” Bakshi wrote. “A key is that students should know the instructor is holding up their end of ensuring the class is being run in a way that is fair to students who are doing the hard work required.”