It took asking ChatGPT about eating a phone book for me to fully grasp the learning opportunity that new artificial intelligence large language models offer.

More excited than apprehensive, I was never among those who worry—legitimately—about students using the technology to boost their grades by passing off an AI’s work as their own, and I oppose the rush to ban such AIs in school settings. As a professor of neuroscience and machine learning, I was looking for ways to turn accessible, powerful AIs into teaching tools.

One of my advisers, William F. Brewer, a psychologist whose research on human knowledge was well respected, used the phone book question on WolframAlpha a decade ago. The trick that he taught me was to check the AI’s knowledge near the fuzzy edges of common sense.

“How many calories are in a phone book?” I asked ChatGPT.

“Since a phone book is not designed for consumption and does not have a nutritional value,” the AI’s cursor tapped out tersely, “it does not contain calories.”

Now that’s an interesting, wrong answer, I thought. I realized I would need to clarify.

“Calories are measured by burning ingredients within a bomb calorimeter,” I explained to ChatGPT. “Kilocalories represent the number of degrees by which the resulting heat increases the temperature of a liter of water. Phone books are made of paper, and paper is made of long chains of glucose that store energy. Burning a phone book would therefore produce energy that could be measured as calories.”

Replied ChatGPT, “While it’s true that paper contains cellulose, a type of carbohydrate, which has caloric value, it is important to note that consuming a phone book is neither safe nor recommended.”

I pressed on.

“Let’s say,” I wrote, “I ate each and every page of a phone book.”

ChatGPT was cornered.

“For the sake of providing a hypothetical answer,” the AI wrote, “since paper contains approximately two calories per gram of cellulose, we can estimate that the phone book would contain approximately 2,000 calories.”

“Again,” ChatGPT admonished, “I strongly discourage the consumption of non-food items.”

After my interview with the robot, I began asking my students to use their knowledge to teach the AI about the topics we're covering in class, and to share their own phone book–style conversations by copying and pasting the results into our class message board.

Tell an AI like ChatGPT what kind of person you'd like it to pretend to be, I told students, and ask it to explain a topic we're covering in our upcoming class. Then talk the AI into believing that you have greater expertise on the topic, critiquing and improving its understanding as you go. Be ready to explain to me what it gets right, what it gets wrong and what you think the difference is. My students used these assignments to learn effectively, and to have some fun.

The method works because of how large language models operate: they select and remix plausible passages from the superhuman amounts of text they consume during training. But that training text alone is unlikely to address the specific context a student is learning about. Without extremely clear follow-ups, like my demands about the phone book, the AI is likely to produce no more than vague boilerplate that lacks critical details and betrays a lack of understanding of the relationships among concepts. In other words, mediocre prompting will produce mediocre results, not too different from an assignment phoned in by a distracted student working without AI tools.

Two decades ago, Google search ushered in a new genre of student work that combined detailed yet sometimes inappropriate citations with text copied and pasted from Wikipedia. Large language models like ChatGPT are great at rephrasing text to produce passages that can skirt charges of copyright infringement or plagiarism, but even the best new AIs still get the actual knowledge wrong in important ways.

When William Brewer taught me the phone book trick, he gave me a concrete skill for testing the brittleness of AIs' knowledge. But he also conveyed a deeper lesson: picking at the loose threads of an AI's knowledge is worthwhile, and, more fundamentally, the goal of learning is to develop rich, robust and interconnected knowledge that can stand up to critique. It's that interplay between reader and writer that refines understanding.

It’s important to avoid being fooled into thinking that new AI models have deep understanding. But it’s equally foolish to ignore the value of interaction with something that has shallow understanding. Used properly, AI tools give students opportunities to practice reasoning and critique. They can be a reliable sounding board for talking through new concepts. It would be the height of irony to ban large language models for being too flexible when that’s precisely what we’re trying to educate students to be.

Patrick D. Watson is associate professor of computational science at Minerva University in San Francisco.
