• Just Visiting

    John Warner is the author of Why They Can't Write: Killing the Five-Paragraph Essay and Other Necessities and The Writer's Practice: Building Confidence in Your Nonfiction Writing.


Language AI Don’t Know No Grammar

I thought the ultrasophisticated language algorithm would understand the rules of grammar. I was wrong.

May 25, 2022

You may have heard about the marvel that is the Generative Pre-Trained Transformer, more commonly referred to as GPT-3.

GPT-3 is a “large language model,” an artificial intelligence algorithm that has achieved an extremely high level of fluency with the English language, to the point where some are speculating that the AI could wind up producing much of the text currently handled by us humans.[1]

The goal is essentially to be able to ask the large language model a question and in turn receive an answer that is cogent, accurate, thorough and actionable.

I would like to set aside the larger debate about AI and education, AI and writing, what it means for what and how we should teach to help students prepare for a world where these things exist, and instead note something interesting about how GPT-3 does its learning.

I had assumed that in order to produce fluent prose, GPT-3 was programmed with the rules of English grammar and syntax, the kind of stuff that Mrs. Thompson tried to drill into my classmates and me in eighth grade.

When using the subjective case, the verb … blah blah blah.

The difference between a gerund and a participle is … and so on.

You know the stuff. It’s everything I was once taught, then for a time taught to others and now spend exactly zero time thinking about.

I thought that GPT-3’s big advantage over us carbon-based life forms was that it had comprehensive and instant access to these rules, but this is 100 percent incorrect.

Writing at The New York Times, Steven Johnson elicited this description of how GPT-3 works from Ilya Sutskever, a co-founder of OpenAI, the lab behind the system.

Sutskever told Johnson, “The underlying idea of GPT-3 is a way of linking an intuitive notion of understanding to something that can be measured and understood mechanistically, and that is the task of predicting the next word in text.”

As GPT-3 is “composing,” it is not referencing a vast knowledge of rules for grammatical expression. It’s simply asking, based on the word that it just used, what’s a good word to use next.
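To make the idea concrete, here is a toy sketch of that “what’s a good word to use next?” question. This is nothing like GPT-3’s actual architecture or scale; it is just a bigram model built on a made-up corpus, which “writes” by repeatedly picking the word most often seen after the current one.

```python
from collections import Counter, defaultdict

# A made-up training corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the word most commonly seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "Compose" one word at a time, starting from "the".
word, output = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # a plausible-looking sentence, no grammar rules consulted
```

Nowhere in this sketch is there a rule about subjects, verbs or cases; fluent-looking output falls out of the word-to-word statistics alone, which is the point Sutskever is making about prediction at vastly greater scale.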

Interestingly, this is pretty close to how human writers compose. One word in front of another, over and over as we try to put something sensible on the page. This is why I say that I teach students sentences rather than grammar. Writing is a sense-making activity, and the way the audience makes sense of what we’re saying is via the arrangement of words in a sentence, sentences in a paragraph, paragraphs in a page and so on and so on.

Audiences do not evaluate the correctness of the grammar independent of the sense they are making of the words.

Considering the complexities of sense making, we understand that human writers are operating at a much more sophisticated level than GPT-3. As humans make our choices, we are not just thinking about what word makes sense but what word makes sense in the context of our purpose, our medium and our audience—the full rhetorical situation.

As I understand it, GPT-3 does not have this level of awareness. It is really moving from one word to the next, fueled by the massive trove of information and example sentences it has at its disposal. As it makes sentences that are pleasing, it “learns” to make sentences more like those. To increase the sophistication of GPT-3’s expression, programmers have trained it to write in particular styles, essentially working the problem of what word is next inside the parameters of the sorts of words a particular style employs.

The current (and perhaps permanent) shortcomings of GPT-3 further show both the similarities and the gaps between how it writes and how people write. GPT-3 can apparently just start making stuff up in responding to a prompt. As long as there’s a next word at hand, it does not care whether the information is accurate or true. Indeed, it has no way of knowing.

GPT-3 also has no compunction about propagating racist rhetoric or misinformation. Garbage in, garbage out, as the saying goes.

Of course human writers can do that as well, which is why, when working with students, we have to help them not just learn how to put words into sentences and sentences into paragraphs, etc. … but also embrace and internalize what I call the writer’s practice: the skills, knowledge, attitudes and habits of mind that writers employ.

I’m thinking it might be fun to ask GPT-3 to write on a prompt to compare and contrast how GPT-3 and Joan Didion employ grammar in their writing, based on Didion’s famous quote “Grammar is a piano I play by ear, since I seem to have been out of school the year the rules were mentioned. All I know about grammar is its infinite power. To shift the structure of a sentence alters the meaning of that sentence as definitely and inflexibly as the position of a camera alters the meaning of the object photographed.”

I wonder what it would say.

[1] Last year I wrote about an experiment where GPT-3 attempted to answer writing prompts from college courses and how it managed to successfully reproduce the kind of uninspiring responses that many students will churn out in order to prove they’ve done something class-related, even if they haven’t learned much of interest.

