This is going to sound strange coming from someone who spent his career teaching writing and now spends his time trying to help others teach writing better, but it is entirely possible that, with the advent of chatbot artificial intelligence, we need to rethink how and when we use writing as a tool for assessing student knowledge and learning. That rethinking may involve assigning less writing.
Let me also suggest that this challenge is nothing new. The only difference is that the AI has made plain a weakness that’s always been present in how we assess student writing.
To help show where I’m coming from, let’s take a trip in the wayback machine to the late 1980s and early 1990s at the University of Illinois, when I weaponized my writing abilities to avoid learning.
Because I was a mediocre college student, I played a tactical game when it came to my course selection.
I would sign up for either 18 or 21 credits with the intention of dropping at least one or two courses following an initial shopping period, which primarily consisted of going to the first day to collect the syllabus and see what kinds of assessments were on offer.
Anything with multiple-choice exams that involved studying was out. Yuck. Short three- to five-page out-of-class response essays were preferred, with in-class essay exams a close second. Long, clearly research-driven papers were preferable to exams but also something to be avoided if possible.
This was my strategy because I knew that with my base level of writing fluency (a.k.a., accomplished bullshitter) I could score at least a B on just about any writing-related assignment, even if I did not really know what I was talking about. Literature classes were great for this since they often required short response papers and I liked to read. Even if I was off base for what a professor was looking for, I could reach the bar of “this kid seems like he did some work, so here’s your B-plus.”
At times, I even took a kind of perverse pleasure in ignoring the actual prompts for the response papers and writing something out of my own impulses. Sometimes, not often, but sometimes, I could even eke out an A-minus if I hit on something genuinely interesting. Having been on the other side of these assignments, benumbed by the similarity of student responses, I can imagine an instructor coming across one of my swings at originality and, provided they weren’t a stickler about coloring in the lines, appreciating the change of pace.
The reason this technique worked so well is because we—and I definitely include myself in this—tend to mistake surface-level fluency with having learned the material. I know that prior to reorienting my assessment practices to privilege learning, I was often guilty of handing out decent grades for uninspiring work simply because the writing reached a level of competency that suggested this was a student who should be doing well, even if the specific piece of writing in front of me didn’t quite hold together, substance-wise.
Now that this surface-level fluency is available with the entry of a prompt into a generative-AI tool like ChatGPT, it is no longer a reliable indicator that a student is “safe” to pass, even if they aren’t learning all that much in the course.
Attacking this problem requires some reconsideration on several fronts.
No. 1 for me is designing writing assignments that are genuinely engaging for students, that are tied to authentic occasions for learning and that, from the student’s perspective, seem worth doing. This was my approach before ChatGPT came into existence, so it’s a comfortable space for me, and therefore the focus of my online course on helping instructors design better writing assignments.
No. 2 is to assess writing according to the qualities that distinguish human-produced thought and synthesis from large language model syntax assembly. As I discuss in the course linked above, one way of achieving this is to hold students accountable for demonstrating their own understanding through metacognitive reflection.
Set the bar high and help students keep going back to the task until they achieve it. Do not allow the serial B.S.ers of the world like I was to keep getting away with their B.S.
This can get complicated pretty quickly if you’re going to send students back to the well multiple times until they come back with something meaningful. There needs to be time to respond to student efforts. You need to make sure the students who have already cleared the bar have something new to work on. I do not claim this is easy, and it really does require a paradigm shift that will initially feel very foreign.
This is one of the reasons I moved toward alternative grading and set the semester bar in a way where the goal was to demonstrate critical thinking through writing by the end of the term, giving lots of practice along the way.
Third is to integrate the technology into how you teach writing and encourage creative and productive use of it as a tool. So far, this is of limited interest to me, but I’m curious about what a number of other folks are doing on this front and rule out nothing when it comes to the future.
Fourth is to consider not assigning out-of-class writing as part of an assessment whose purpose is merely to demonstrate the capture and understanding of pre-existing knowledge or information. Given that ChatGPT can achieve this quite easily, and students may not see the value in learning the material for themselves, the conditions for academic dishonesty are ripe. A well-designed in-class multiple-choice assessment may be a better fit for that assessment objective.
But let’s also remember that this is nothing new, as my anecdote above about my college career shows. In truth, I engaged in similar behavior long before that, when my middle school math teacher would assign the odd-numbered questions for homework, knowing that the answers were in the back of the book. Chegg and Course Hero continue to provide a plethora of resources for students who want to make an end run around doing the work.
The challenge is deep, and it’s not about policing student behavior or cracking down on violators. Students want to learn important things in school. It’s simply that they have been acculturated to a view of schooling that is divorced from learning.
It’s up to us to create an atmosphere that is genuinely conducive to learning and then rewards that learning through our approaches to assessment.
There’s no set formula for this. There is only a process, an ongoing challenge that is likely to continue to evolve as AI makes its way further into our lives.