The early promise of generative AI like ChatGPT is that it will let people outsource the work they don’t want to do, or don’t have time to do, to the AI. It is a tool of increased speed and efficiency that, we’re told, will allow you to get down to the substantive stuff.
If this is true, we'd better spend some time deeply considering what the substantive stuff really is, because there seems to be some confusion on this front.
Here comes a longish example to illustrate.
There were a lot of little moments that led me to start the journey that would result in the radical shifting of how I approached the teaching of first-year writing, but one of the bigger little moments was when I realized that I was not reading my students’ essays.
Oh, I was running my eyes over them, pretty much as quickly as I could in order to grade them in the Thursday evening–to–Tuesday morning window I had for the task. Once you've spent enough time reading student writing, your radar for the kinds of mistakes students make becomes highly sensitive, and I could spot those errors almost intuitively. To respond to these errors as quickly and efficiently as possible, I developed a few dozen autotext macros in Microsoft Word that I could pop into the marginal comments with a couple of keystrokes.
For example, a very common student problem when summarizing the argument of another piece was describing what the piece was about, its subject matter, rather than zeroing in on the piece’s claims.
The problem would often announce itself in the student's writing by the lack of a "claim verb" (believes, argues, says, claims, etc.). I had one macro comment that just said "no claim verb here," which I would use after highlighting the relevant portion of the sentence.
I couldn’t tell you how students were responding to these comments, because, to be honest, I didn’t care. My focus was on providing the textual justification for the ultimate grade on every single student piece in the window available to me, a window that was often too narrow given the volume of grading and the necessity to complete the other work I was doing to supplement my non-tenure-track instructor salary.
Anyway, at some point I realized that I was no longer reading my students’ writing at all; I was processing it according to those ingrained patterns of error seeking. I was giving no thought to what might be the cause of these errors. There was no time for that. I would do my best to pre-coach students away from these errors by giving copious examples, but they’d appear in the submitted writing semester after semester anyway.
The whole thing wasn’t working, not for the students (though they generally expressed satisfaction with the course), but mostly not for me. I was literally alienated from my work. I’d been drawn to teaching because of my love for reading and writing, and here I was in a class that should’ve had me immersed in those things, feeling as though they were very far away.
So, I changed. I’ve written about it in this space many times, and it’s mostly all collected in Why They Can’t Write, so I won’t repeat myself, but here’s a frame I started using that I think has particular relevance in this moment.
Spend your time on the real work.
When it comes to teaching writing, the real work is reading and responding to student writing. This is why I’m incredulous when someone suggests outsourcing grading to machine learning. It’s the equivalent of asking someone to judge an orchestra by the volume of the applause without ever hearing the performance.
Having students write to an algorithm rather than an audience sounds dystopian to me. If that’s what resources allow, let’s just stop the madness and do anything else with our time.
Reading a text and processing a text are not the same thing. Sometimes processing a text may be the right move, like if we’re doing a bunch of research and need to extract key points from a lot of different texts in a short amount of time. I’m doing that now with all this ChatGPT/generative AI stuff where there’s half a dozen articles a day I’m trying to familiarize myself with.
But if my job is to teach students to write, I have to actually be able to read their texts, which means doing the work.
The biggest change I had to make was in the values I brought to assessing student writing. If I was no longer hell-bent on my error hunt and de-emphasized justifying the grade, I could get out of processing. Instead, I made myself slow down and actually experience my students’ writing as an interested audience, the same way I want my writing to be judged by my readers.
This shifted my responses away from identifying error to seeking root diagnoses for lack of impact in what students were writing. This in turn changed the kinds of assignments I asked students to complete so they could focus on audience and impact.
This became a virtuous circle that made my work much more fulfilling and helped students develop into more confident writers, armed with a practice they could port to unfamiliar writing challenges.
It worked great until I bumped up against the hard reality of a system in which I was never going to make more than $35,000 a year teaching full-time, something I could not continue to do as I entered my 50s.
So, when someone says that ChatGPT can help with these problems of not enough time and not enough resources by, say, using it to “draft a syllabus in 15 minutes,” as one gentleman relentlessly touts on Twitter, the temptation of such a promise is obvious.
Bing, bang, boom, a few prompts, some checking to make sure that the readings GPT has fed you aren’t “hallucinated” (meaning made-up) and there you have it, a syllabus and schedule. Now you’re freed up for the important stuff.
I know that writing the syllabus feels like a colossal pain in the ass, and as the task that often initiates the period of concerted prep for a semester, it’s one we (OK, I) have approached with extra dread. But I’m here to testify that the writing of a syllabus is the work.
As I say so often I’m sick of saying it, writing is thinking, and the writing of the syllabus is the time where the instructor gets to think through the full scope of a course, to develop the guiding ethos that will carry through the entire semester and significantly shape the experiences of both instructor and student.
The core purpose of a syllabus is not to have something to present to students on the first day, or to satisfy a bureaucratic requirement, but to work through the potential and possibilities of the course. This requires a process of thinking, not an outsourcing to a tool that cannot think.
Over the years I came to see my syllabus as a mix of plan, promise and manifesto. It would map out the journey I anticipated for the students and myself, it would articulate my role in guiding that journey and fostering the conditions for learning, and it would make a case for the importance of what we were about to do together.
By the end of the syllabus-writing process, a process that took more like days than minutes, as I allowed my thoughts to range and considered different paths, the low-level dread I experienced at the start of the task would be replaced by excitement for the coming semester as I envisioned the possibilities before us.
Rarely (meaning never) did the semester live up to those hopes, but that’s not the point. The point is that doing the work is what makes even having those hopes possible.
The creation of the syllabus is one of the most consequential things you can do for a course. The care and consideration in preparation will have ripple effects through the entire semester, and even beyond.
I certainly understand the temptation of using a tool to save time or conserve resources when you do not have enough of either, but every time someone is telling you how to use AI to get around the work we should be doing, maybe we should pause and consider how to help people do that work instead of outsourcing it.
That 15-minute, AI-generated syllabus is a recipe for alienation. We’re already suffering under an epidemic of disengagement. Why make it worse?
I know as a student, there were readings that literally changed my life. What if those had been missed because a professor let the great predictive aggregation machine do that thinking for them?