
I have a friend who works in an education-related capacity (not as a teacher) who had been putting off investigating generative AI (artificial intelligence) and large language models until the end of the semester, when they would have the bandwidth to do some exploring.

This friend said something interesting to me over email: “That ChatGPT knows a lot more than I thought it did.”

My friend did what a lot of us have done when first engaging with a genAI chatbot: they started by asking it questions about subjects they knew well. ChatGPT didn’t get everything right, but it seemed to get a lot right, which is impressive. From there, my friend moved on to subjects about which they knew much less, if anything. My friend has a child who had been studying a Shakespeare play in school and was frustrated by their inability to parse some of the language in the way the short-answer questions expected.

My friend went to ChatGPT, quoted the passages and asked, “What does this mean, in plain English?” ChatGPT answered, of course, and while I’m far from a Shakespeare expert—I put in my requisite time as someone with an M.A. in literature, but no more—I couldn’t find anything obviously wrong with what I was shown.

My friend’s enthusiasm was growing, and I hesitated to throw cold water on it, but given that I’d just finished the manuscript for my next book (More Than Words: How to Think About Writing in the Age of AI), and had spent months thinking about these issues, I couldn’t resist.

I told my friend that ChatGPT doesn’t “know” anything. I told them they were looking at the results of an amazing application of probabilities, and that they could ask the same question over and over and get different outcomes. I said that its responses on Shakespeare are more likely to be on target because the corpus of writing on Shakespeare is so extensive, but that there was no way to be certain.

I also reminded them that there is no singular interpretation of Shakespeare (or any other text, for that matter), and that treating ChatGPT’s output as authoritative is a mistake on several levels.

I sent a link to a piece by Baldur Bjarnason on “the intelligence illusion” when working with large language models, in which Bjarnason walks us through the exact sequence my friend had followed: first querying in areas of expertise, then “correcting” the model when it gets something wrong, the model acknowledging the error, and the user walking away thinking they’ve taught the machine something. Clearly this thing had intelligence.

It learned!

Moving on to unfamiliar material makes us even more impressed. It seems to know something about everything. And because the material is unfamiliar, how would we know if it’s wrong?

It’s smart!

We had a few more back-and-forths over email in which I raised additional issues around the differences between “doing school” and “learning”: if you just ask the LLM to interpret Shakespeare for you, you haven’t had any experience wrestling with interpreting Shakespeare, and learning happens through experiences. My friend countered with, “Why should kids have to know that anyway?” and I admitted it was a good question, one we should now be asking constantly given the presence of these tools.

(We should be asking this constantly when it comes to education, but never mind that for the moment.)

Not only should we be asking, “Why should kids have to know that?,” we should be asking, “Why should kids have to do that?” There are some academic “activities” (particularly around writing) that I’ve argued have long had a dubious relationship to student learning but have remained present in school contexts, and generative AI has only made that dubiousness more apparent.

The problem is that LLMs make it possible to circumvent the very activities we know students must do: reading, thinking, writing. My friend, who works in education, did not reflexively recoil at how the integration of generative AI into schooling makes it easy to circumvent those things—as they’d demonstrated to both of us with the Shakespeare example. “Maybe this is the future,” my friend said.

What kind of future is that? If we keep asking students the questions that AI can answer, and having them do the things AI can do, what’s left for us?

Writing recently at The Chronicle, Beth McMurtrie asks, “Is this the end of reading?” after talking to numerous instructors about the struggles students seem to be having engaging with longer texts and layered arguments. These are students who, by the metrics that matter in selecting for college readiness, are extremely well prepared, and yet they are reported to struggle with things some would say are basic.

These students come out of experiences in which standardized tests—including AP exams—privilege surface-level understanding and writing is a performance dictated by templates (the five-paragraph essay), so it is not surprising that their abilities and attitudes reflect those experiences.

What happens when the next generation of students spends its years going through the exact same experiences that we already know are not associated with learning, only now using AI assistance to check the boxes on the way to a grade? What else is being lost?

What does that future look like?

I’m in the camp that believes we cannot turn our backs on the existence of generative AI, because it is here and will be used. But the notion that we should give ourselves over to this technology as some kind of “co-pilot” that is constantly present, monitoring or assisting the work, particularly in experiences designed for the purposes of learning, is anathema to me.

And really, the way these things are being used is not as co-pilot assistants but as outsourcing agents, subcontractors hired to avoid doing the work oneself.

I fear we’re sleepwalking into a dystopia.

Maybe we’re already living in it. 
