In a world where generative AI exists, the questions of what and how we should teach are never going to go away.
Though, come to think of it, those questions were—or at least should have been—ever present before the arrival of ChatGPT. The biggest effect of large language models has been to force a reckoning over what happens when a machine incapable of thought, feeling or intention can produce work that seems to pass academic muster.
But it has not changed the nature of teaching. As Nupur Samuel and Anna CohenMiller explored in a recent post at the University of Venus blog, questioning is at the foundation of any pedagogical choice. In my own practice, after following the folklore of teaching writing for years, I finally started questioning why I was doing certain things and, not finding satisfactory answers, went on a quest to rebuild a practice for which I could answer the questions of why and what.
The specifics are complicated, but I increasingly believe that the roots of our response to generative AI should be to focus on helping students develop knowledge and skills around things that generative AI can’t do. Rather than trying to police generative AI use in order to maintain some semblance of integrity for work that generative AI can do, why not stop doing that stuff and move on to new things?
Or, as the case may be, old things, enduring things.
Perhaps you saw the recent public unveiling of Sora, OpenAI’s model that can create realistic video from text prompting. The online reaction to Sora followed a now-predictable pattern for a new application of generative AI: first, jaws drop in wonder at outputs that didn’t seem possible; then a second wave of commentary notices the weird, uncanny glitches that always seem to show up in generative AI outputs. Brian Merchant, writing at his newsletter, has an interesting take on this phenomenon.
As imperfect as the outputs may be, some possible implications seem clear: the technical skills of rendering video will become less and less valuable as generative AI makes it possible to produce outputs through straightforward text prompts. In a world where those skills are no longer necessary, what still has value?
Here’s one thing: taste.
The Sora videos are an astounding spectacle knowing what kind of process is at work, but at the same time, are they, you know … good? Are they interesting as anything other than spectacle?
Once I got over my fetish for correctness, I started to put taste not far from the center of all of my writing courses. Our response to texts is largely irrational in the moment. We simply react. I believe those reactions are telling in terms of the underlying quality and conditions of those texts. This is true even of utilitarian texts like a set of instructions. If I am confused by the instructions, they are quite possibly bad instructions.
Taste is necessary both in terms of our response to a text and when it comes to creating a text meant to appeal to our audiences. Without taste, we cannot determine if our own work is hitting a target. There is a famous video of Ira Glass, host of This American Life, where he talks about how early in his career there was a significant gap between his taste—what he knew to be good and why—and his ability to execute stories that met his own standards in terms of taste.
Closing this gap becomes one’s work. The tools we can employ in the service of closing that gap may change over time, but the thinking (and feeling) we have to do is the same.
If “taste” seems like too narrow a term, or something that primarily applies to creative pursuits, let’s add in a related concept: discernment. It’s not really related—it’s the same—but in the realm of ideas, argument and critical thinking, we consider the ability to discern differences an important skill.
For sure, lots of education requires us to acquire information about things we don’t know, but as we’re doing that, it’s not so difficult to also help students become conscious of, and therefore shape, their own tastes. I recall a gen ed philosophy class in college that mostly involved some very broad introductions to different philosophical schools of thought.
We were introduced to Objectivism through some excerpts from Atlas Shrugged by Ayn Rand. I hadn’t really thought much about philosophy, but I had read a lot of novels, and from a purely aesthetic standpoint, I found Atlas Shrugged wanting. My taste sent me a message that allowed me to discern that my system of values was different from Rand’s, while also helping me have a productive academic conversation with this other point of view.
In terms of video production, while the technical barriers for creating video are falling, the taste that is required in order to be a cinematographer will remain. I was fascinated to listen to a recent episode of Marc Maron’s WTF podcast in which he talks to Rodrigo Prieto, a leading cinematographer who last year worked on both Barbie and Killers of the Flower Moon. Prieto talked about how his initial attempts at filmmaking as a kid involved stop-motion animation using eight-millimeter film, doing things like scratching the negative to simulate a burst from a laser gun.
He now works almost entirely with digital technology, but what separates him is not his knowledge of technology but his taste, a taste rooted in years of experience considering the impact of certain choices around light, picture quality and the visual frame.
Generative AI, having no capacity for thinking or feeling, has no ability to express taste. Anything that looks like taste is a simulation, an illusion. Taste and discernment are still ours. For that reason, they’re skills that education should lean into.