

Professors can pair in-class work with take-home work, with honesty in completing take-home work reinforced by the classwork.

Photo by RDNE Stock project from Pexels

AI plagiarism is one of the most pressing issues currently facing higher education. Tools like ChatGPT have spurred a bit of panic, surprising us all with how effective AI can be in helping students complete written assignments. Many of us may think that AI isn’t quite ready to take the world by storm, but a recent AI plagiarism challenge has made it evident that AI is further along than is often thought and can aid students in achieving passing grades in even the most advanced courses.

The challenge, conducted through the newsletter AutomatED, asked professors to submit assignments to see if they could be cracked by AI tools. The results were eye-opening. Six professors submitted assignments, and to everyone’s surprise, two of those assignments could be completed with passing grades using AI in less than an hour each.

The implications of this experiment have been thought-provoking for both optimistic AI-friendly professors and those who are more skeptical. It has shed light on what AI excels at, where it falls short and how it impacts assignment design. What seems clear is that AI must be grappled with (or even embraced) because it is here to stay, and instructors should consider incorporating alternative AI-resilient assessment approaches, such as dialogue, in their courses.

The Capabilities of AI

One might assume that a written take-home exam for a clinical research course intended for advanced students, like physicians who have completed their fellowships, would be nearly impossible for AI tools to pass. Surely AI wouldn’t be able to tackle complex questions that require interdependent answers. Surely it wouldn’t be capable of discussing hypothetical examples of biomedical research projects involving human subjects that violate ethical standards. And it would be impossible for it to express the specialized knowledge and jargon of the field to make a reasonable argument, right?

Wrong.

Professors should be cautious about assuming that the content of their courses and the difficulty of their assignments are too complex for AI to digest and turn into original, competent outputs that simulate human comprehension.

AI tools have surpassed expectations in these dimensions and are good enough to achieve passing grades. Still, they are not flawless. Generic AI tools like ChatGPT struggle with completing assignments where grading rubrics demand higher field-specific standards from students’ answers. For example, in the case of one of the economics projects submitted to AutomatED, distinguishing between economic growth and economic opportunity posed a challenge.

Moreover, if a rubric mandates students cite specific passages from documents, relying solely on generic AI tools like Bard becomes inadequate, at least in their current form. While GPT-4 can provide broad sections from widely available texts, it doesn’t consistently offer precise and accurate citations. Additionally, if a rubric demands engagement with class-specific content, such as in-class lectures or discussions, more advanced AI tools would be required, introducing a higher chance of error.

The results of this ongoing AutomatED experiment do not imply the demise of written take-home assignments. Instead, they narrow the scope of viable written take-home assignments, like a track that used to have eight lanes but now has four.

For professors concerned about AI plagiarism, there are two broad strategies to consider moving forward:

  1. Assign only in-class work.
  2. Pair take-home work with in-class work so that students must complete the take-home work honestly to successfully complete the in-class work.

The first strategy may not be feasible in some contexts. When it comes to the second approach, there are many factors to consider for successful implementation.

Contemplating and Preparing for Dialogue-Based Assessment

While in-class written assignments can be practical, oral ones can be an effective alternative to engage and assess students, especially when paired with preparatory take-home work. Through discussion-based assignments, students can display their own knowledge and circulate new ideas and ways of thinking among their peers.

An advantage of this strategy is that, unlike with take-home assignments, students can learn from each other and build on ideas while their knowledge is tested in real time. Additionally, dialogue-based assignments foster career-ready skills (an area where colleges and universities have a reputation for falling short), such as collaboration, leading meetings and verbally working through problems and brainstorming with teammates.

In-class activities and tasks that leverage dialogue can range from small group discussions to whole-class debates. These activities can vary in the number of participants, their significance to a student’s grade and how they are structured to test subject competency.

Before engaging students in this assessment type, it’s essential to consider classroom culture and dynamics. Student success will hinge on engagement, so a positive classroom climate, students’ comfort and the psychological safety of the group are critical to achieving the best outcomes. There are practical ways for faculty to establish a foundation for dialogue before implementing this form of assessment in their courses. Free tools, guides and trainings exist through organizations like the Constructive Dialogue Institute that can aid instructors with student skills development, classroom norm-setting and building trust.

Examples of Dialogue-Based Activities and Assessments

Dialogue-based assignments can take on many forms, depending on the subject matter and goals of the course; they can even have their place in STEM courses. Below are a couple of examples of how an assignment could work. Professors can adjust these activities if they want to bring the advantages of AI into their classrooms.

  1. Small-group scenarios: The professor divides students into small groups, assigns each group a scenario or problem, and tasks them to address it through oral explanation or storytelling. Afterward, peers ask questions to explore aspects of the solution they may not fully grasp. Students are evaluated based on the viability of their solution, the effort put into its development, their ability to answer questions and the quality and constructive nature of their questions.
    • AI-enhanced pairing variation: Students utilize tools like ChatGPT and Bard to research their solutions at home before engaging in the classroom activity.
  2. Knowledge gap identification: Students are divided into groups, and each group explains a solution to a given problem (similar to the previous example). Students then work on identifying gaps in their own logic and knowledge. Instead of asking questions, peers point out gaps and offer insights to help fill these gaps.
    • AI-enhanced pairing variation: Students use their devices to access AI tools like ChatGPT and Bard to identify gaps in their knowledge. Through querying the AI tool and using it as a conversational partner, students discover new information and discuss the knowledge gaps within their group.

These examples should be tailored to fit specific fields and learning objectives, but the ultimate goal is to get students talking in structured ways that earnestly display their knowledge. The AI revolution has arrived, and it presents professors with a unique opportunity to emphasize an age-old form of communication as a “new” method of assessment in the classroom (and even evolve it with the help of AI). Ironically, advancing sometimes means moving back to what’s tried and true.

Graham Clay is an Irish Research Council GOI Postdoctoral Fellow in the School of Philosophy at University College Dublin. He is a co-founder of AutomatED, a newsletter and blog on the impact of tech and AI on higher education. Cambriae W. Lee manages communications for the Constructive Dialogue Institute, focused on elevating constructive dialogue in college and university classrooms.
