
Late last week it was announced that Arizona State University was partnering with OpenAI in a deal that would make ChatGPT-4 available free (it currently costs $20 a month for individuals) for “approved” users.

The partnership fits with ASU’s self-branding as an innovative institution, and the specifics of how ChatGPT would be employed were somewhat vague, perhaps because no one really knows how they’re going to start playing around with this thing yet. In a report published at Axios, ASU’s chief information officer is cited as saying, “ASU plans to use ChatGPT to build personalized AI tutors and offer writing help to students in one of its largest classes, Freshman Composition.”

When this tidbit started to circulate on social media, that sound you heard was thousands of souls of first-year writing instructors being snuffed, as though the Empire had fired a bolt from the Death Star.

One of ASU’s most persistent goals in terms of innovation seems to be to reduce or eliminate the cost of human labor as it relates to the teaching of general education courses such as first-year writing.

Back in late 2014, ASU announced a plan to unilaterally increase the course load of every untenured writing instructor from a 4-4 to a 5-5 without any corresponding increase in pay. I threw a bit of a snit at this blog, eliciting a comment from a senior director of media relations and strategic communications that “While recommended class loads fit the ideals of academic associations, they are a luxury ill-afforded by a university trying to educate a growing population and workforce of tomorrow.”

The instructors organized and were able to secure some concessions in terms of course load and pay, but their student loads remained well above disciplinary maximums. In 2015 ASU announced a plan to outsource general education to edX and create an “all-MOOC freshman year.” The Global Freshman Academy was quietly scaled back in 2019 because, as reported by Lindsay McKenzie here at Inside Higher Ed, while the online courses initially enrolled hundreds of thousands of students, “four years later, only a fraction have completed a course, and just a minuscule number paid to receive college credit for their efforts.”

Now we have generative AI and ChatGPT, and ASU apparently sees a fresh opportunity to replace human labor, in this case with algorithmic automation.

I’m going to cut ASU a modicum of slack in that saying you’re going to use ChatGPT to help tutor in Freshman Composition is the kind of idea that comes up in brainstorming among people who don’t know or think much about what it means to teach students to write, and there’s some evidence that at this time this is a notion without a plan.

I’d like to explain why, in my view, it’s a bad notion and therefore should not be put into effect.

As amazing as ChatGPT-4 may seem, it is important to remember that it is an automated syntax-generating algorithm. It cannot think. It cannot feel. It does not read. It cannot even comprehend. It has no appreciation of style or understanding of writing inside a rhetorical situation. Some people would like to argue that large language models are a form of intelligence, but at their core, they are nothing like human intelligence.

Whatever humanlike traits we detect in the outputs of large language models are entirely the by-product of humans projecting these traits onto the automation. Yes, you can create scenarios where you can use a GPT to simulate the kind of feedback that might show up in instructor comments on a student assignment, but the process by which those comments are derived is not the same process as that used by a human instructor.

This distinction is meaningful if we want education to mean anything.

GPT-generated comments are a simulation. They are fake and not genuinely communicative. I’m sure everyone has heard of the problem of AI “hallucination,” where it invents material that is untrue or inaccurate, but it is important to recognize that from the point of view of the AI, everything is a hallucination. It has no capacity for separating the real from the fake, the true from the false. That it may occasionally or even frequently hit on good advice should not matter, because there is no genuine intent at meaning or communication behind the generation of that syntax.

That the sorts of comments a large language model can generate may pass for substantive is primarily a commentary on how the idea of what it means to write in school contexts—particularly in grades eight through 12, where standardized assessments dominate—has been reduced to a truly impoverished place, primarily concerned with demonstrating a few writerish moves that fit to a prescribed template.

This is academic cosplay, not writing. It is a phenomenon I explored at length, and I believe rather persuasively, in Why They Can’t Write: Killing the Five-Paragraph Essay and Other Necessities. If we had not lost sight of the kinds of experiences that are truly meaningful for developing writers, there would be zero temptation to turn to ChatGPT as a teacher or tutor.

This desire to automate what can and must be human response is also simply a further indication of the dismissal of the importance of the labor of the primarily non-tenure-track faculty who teach courses like first-year writing. As I’ve witnessed firsthand, these folks often do excellent work undoing the problems that came before under generally hostile circumstances in terms of pay, workload and security. Even entertaining the notion that the work can be done by generative AI shows the depth of scorn some administrators have for work that should be viewed as central to the institution.

Do I sound frustrated? I hope so.

Let me also issue a warning to the tenured faculty who are less likely to be immediately affected by AI tutors deployed in Freshman Composition that the slope is indeed slippery. Just as the adjunctification of faculty has led to a steady erosion of the quality and autonomy of tenured positions, the same will be true when automation is allowed to replace human labor. Once you allow the work you do to be devalued for some of the people who do it, it will be devalued for all who do it.

Concluding that generative AI tutors can and should be used to aid students in their writing is an announcement that you have given up on the work of education. You are now in the “automated batch processing of student units” business.

Maybe this is the inevitable future of higher education in this country. ASU finds itself on more solid financial footing than many of its state university peers because it has embraced something like an automated batch processing of student units model in order to increase scale and revenue. This is not to say that no learning happens at schools like ASU. I’m certain lots of faculty members are doing their best inside the system, but it is impossible to deny that the values of that kind of system are anti-education.

We will hear about how this technology is meant to supplement, not replace, human instruction, but this is nonsense when it comes to writing instruction. Any use is replacement. The claim that human labor is a luxury is a choice to funnel resources toward algorithmic automation rather than human interaction.

A bad choice. A choice that ultimately leads to full abandonment of any kind of genuine educational mission.

ChatGPT can absolutely be a useful tool for writers, but if writing matters beyond producing text for the purposes of a grade that is part of a larger credential, then we have to treat writing as something genuinely human, not the mere act of generating syntax.
