
This is a somewhat uncomfortable thing to admit, but for a good chunk of my teaching career, most of the feedback I gave students on their writing was not meaningful.

How do I know this? Well, in many cases it was not read by the students. This was never more apparent than with their end-of-semester portfolios, which I would spend outsize amounts of time evaluating and responding to before leaving them outside my office door for students to pick up at their convenience.

At least half of those portfolios would still be sitting there a couple of weeks into the next semester. Whatever meaning that feedback may have contained was not received, so by definition no meaning was transmitted.

I know I’m not the only instructor who has expressed frustration over students just looking at a grade and not engaging with the written comments, so I have to believe this phenomenon is pretty widespread.

It is tempting to fall back on a “pearls before swine” standpoint and blame students for being shortsighted or uncaring about their own learning and progress. But in my experience, students are, for the most part, genuinely interested in learning, and if they thought there was something meaningful in those comments, they would’ve made sure to read them.

Ipso facto, those comments were not meaningful.

Along my multiyear journey of evolving my pedagogy, it became rather clear that in the context of what I was asking students to do during that period—follow a prescription that results in a good grade—many of my comments were literally meaningless when it came to students improving as writers. The comments primarily existed to justify the grade, to explain my thinking that resulted in a particular score.

In theory, a comment like “Claim X needed additional supporting evidence in order to be integrated into the larger argument” conveys an important message regarding something the student should work on with their writing, but comments like these were not received this way by the student writer. Given that the grade was positioned as the chief indicator of success on a piece of writing, students would primarily view those sorts of summative comments as a bunch of blah blah about something they did wrong (or right), rather than as advice or instruction about what they should do differently (or the same) in the future.

This disconnect between what I thought was important when it came to learning to write—thinking and problem-solving inside a rhetorical situation—and what students were doing—following prescriptive instructions to get a grade—became so apparent to me I had no choice but to change my approach.

This evolution immediately changed the kind of feedback I was giving as I shifted to a mode where I was responding not as a teacher evaluating an assignment to assign a grade, but as a reader who was responding to the text as readers do, with thoughts, feelings and ideas of their own. My feedback was now filled with comments like “I’m a little confused here” or the shortened version: “Huh?” I might write, “This is interesting, I hadn’t thought of it this way,” or an emotional exclamation such as “Wow” or “Yes!” I was essentially recording the running interior monologue that any of us has when we’re reading a text. I was interpreting, as only humans are able to do.

From my responses, I would develop a comment that had two purposes. One was to respond to the ideas in the writing in the context of the rhetorical situation at hand. I was responding as one does when encountering ideas in the world, not evaluating against a standard as I had been when I was teaching prescriptively.

The other purpose was to put on my editor hat and try to give the writer some guidance that would help put them back inside the piece of writing and either spur a process of revision or provide insight that would be helpful the next time they had a writing challenge in front of them (or both).

Once I switched to alternative grading approaches that explicitly valued student engagement and progress in developing their writing practices, the grades on individual assignments stopped carrying any meaningful signal, so I dropped them. And as the grade became less meaningful, the feedback became more meaningful.

Essentially, I had moved from processing student writing to reading student writing. I believe this is a superior approach to engaging with student writing (or any writing, for that matter) because what is writing for if not to be read?

This is an extremely belabored windup to the main thing I want to convey in this post, which is to reiterate something I’ve said previously (several times now): allowing machine learning algorithms (like ChatGPT) to evaluate student writing should be a nonstarter, because these algorithms cannot think, feel or communicate with intention.

Why would we trust something that cannot read to respond helpfully and productively to a piece of writing?

The rationale I’ve seen among some who are open to algorithmic responses to student writing is that the AI seems capable of producing feedback similar to what they might say to students on issues such as organization and structure. If the AI can do this, they say, the human can spend time on the more “meaningful” feedback the AI can’t achieve.

Here’s my question: Why bother with any feedback that isn’t meaningful, be it AI- or human-generated?

I think the answer is rooted in the approach that I took for many years, the necessity of providing a justification for the grade. From the first appearance of ChatGPT, I have been arguing that we should be using this technology as an opportunity to examine our practices and discard anything that is obviated by the technology’s existence and capabilities. If ChatGPT can write that standard class essay, don’t assign those anymore. If ChatGPT can generate feedback that’s similar to what an instructor would say when working from a rubric, ditch those rubrics, because they aren’t revealing anything meaningful about students developing their writing and thinking abilities.

Yes, ChatGPT can generate syntax, but the process by which that happens is not the same as what happens when humans write.

When I write, I write for audiences inside a rhetorical situation, not a rubric on a grid. I’m thinking, feeling and communicating. Writing through an idea and the challenge of expressing an idea (or ideas) to an audience is the method by which I learn. Why should it be any different for students?

How have we gotten to a place where some people who teach writing are open to outsourcing responding to student writing to something that cannot read? I do not mean to be harsh or dismissive of alternate points of view, but this strikes me as something we should reject outright. Turning student work over to generative AI is an explicit admission that reading that writing simply doesn’t matter.

In the situations where the AI may seem “useful,” rather than embracing its use, we should first ask whether the AI product is genuinely meaningful or whether it merely provides a familiar and perhaps comforting simulation of work we have been doing but that very well might not be meaningful.

When I was processing student writing against a rubric, this is exactly what I was doing—finding comfort in a simulation. When I started truly reading student work, things got considerably less comfortable, but in that discomfort I also discovered some freedom, which resulted in improved learning.

It’s clear that navigating generative AI’s arrival in educational spaces will continue to be difficult and fraught. If we’re to be successful, we have to be rooted in what makes us human.
