Generative artificial intelligence is all the rage—in both senses of the term. Many people in higher education are experimenting with new tools such as ChatGPT and DALL-E, and opinions tend toward the extremes: they are seen as either an unmitigated blessing or a curse. Yet there are some important implications of this new technology that have not been fully appreciated.

Let’s begin by noting that these tools don’t do anything new. They complete familiar tasks with greater efficiency. ChatGPT can produce an essay or slideshow in mere seconds. The innovation isn’t the final product but how it is produced. The disruptive force of these new AI tools is the speed and (for the time being, at least) low cost of producing something that used to require human labor. Whether this disruption is for the good depends on several factors.

The value of a more efficient process is, in part, a function of the value of the end it serves. If plagiarizing essays is bad, a more efficient means of doing so is also bad. But assuming your slideshow is a good thing, a more efficient means of producing it may also be good. In short, the fact that ChatGPT increases efficiency doesn’t tell us whether we should use it, because it can be used for both good and bad ends.

To fully evaluate generative AI tools, we need to shed light on how they increase efficiency. And when we do, I don’t think we’ll like what we see.

Let’s consider some examples. You might use ChatGPT to brainstorm ideas for the paper you need to complete before the conference deadline. Or you might use it to make a slideshow for the conference presentation. To head off some potential worries, we can assume you’re using only your own research and simply asking the AI to pull it together into a particular format. In both scenarios, ChatGPT speeds up a process you’ve done before, one that seems worthwhile.

How does it make this process more efficient? By cutting out humans. Brainstorming with ChatGPT obviates the need to knock on your colleague’s door, and having it make the slides means you don’t have to. Looked at one way, this all seems welcome. You can spend the time you would’ve spent making the slides on something else. Your colleague may not appreciate the disruption (they have their own work to do!) or may not be available at 3 a.m., when your creative juices are flowing. ChatGPT is efficient not only because it performs tasks quickly but also because it doesn’t sleep and is the consummate multitasker (times when it’s overloaded and can’t take more prompts notwithstanding).

These gains in efficiency, however, come at the cost of alienation. Using these tools distances you from others, eliminates your worthwhile engagement in the productive process and undermines your ability to control and benefit from your own labor.

When you use ChatGPT instead of your colleague as a sounding board, you isolate yourself. You interact with an artificial intelligence as opposed to a human being. Once may not seem like much to worry about, but the truth—if we dare admit it—is that the first time is likely the first of many. ChatGPT is like Pringles: betcha can’t use it just once.

One way to recognize the problem here is to think back a couple of years to the dark days of the pandemic. We bemoaned the lack of human contact during lockdowns. Now we relish being back on campus and at conferences together. I would hope we haven’t so quickly forgotten how important human contact is, even in our professional lives. Yet here many of us are trading it for quick results, choosing to alienate ourselves in the name of efficiency.

We know isolation is bad for our health, but in using generative AI, we’re willingly cutting out human contact anyway. That’s a first sense in which these tools are alienating, and reason enough not to use them. Further reasons stem from the ways these tools alienate us in senses of the term familiar from political economy.

When you use ChatGPT to create the slideshow, you cut yourself out of the process by which it’s produced. The slideshow is the product of your work in the sense that your research provided the material from which it was made, but the slide deck isn’t something you put together. When you present it to the audience, you may or may not feel alienated from your own research. But you are alienated from it. An AI-generated slideshow is a product that doesn’t truly reflect your creative powers. Because you didn’t actively engage in its production beyond merely inputting a prompt, you also didn’t grow or learn from making it, and it doesn’t reflect your ability to communicate ideas. There’s a sense in which it’s meaningless to you. The presentation conveys information that stems from your research, but it has no more connection to you than would a slideshow of your work created by anyone else.

The loss incurred in cutting yourself out of the creative process isn’t confined to uses of AI for research purposes. There are calls to incorporate generative AI tools in classrooms, too. To foster a greater sense of connection, you might use generative AI to send personalized emails to each of your students. Given all your other commitments, this may be the only manageable way to do so. And the evidence suggests that personalized emails help students succeed by making them feel more connected to you. It’s a well-intentioned effort.

But these emails only nurture a sense of connectedness if everyone ignores the fact that you didn’t write them. Though sent in your name, they’re penned by an AI. Their supposed value derives from the appearance that you took the time to communicate with each student person to person. No matter how closely an AI can mimic this, the only way it can have the intended effect of making your students feel truly seen by you is through deception. Whether they willingly ignore the truth or don’t realize the email was written by an AI, those students who feel connected to you after reading it are basing that feeling on a fiction. You didn’t actually take the time to write the email; instead, you prompted an AI to make it look like you did. This threatens to distance you from your students, not bring you closer together. It threatens to undermine your efforts to help them succeed.

A third sense in which using generative AI can be alienating has to do with how it promises to disrupt the workplace. The more you use a tool like ChatGPT, the more work you can get done. And you’re not the only one. There’s every reason to think this will ratchet up expectations. It will become reasonable to expect increased research output when no one has to spend time making their own slides, penning their own abstracts and so on.

The same is true for jobs outside academia. This is one reason it doesn’t strike me as being in our students’ interests to train them to use these tools on the grounds that they’ll be expected to use them when they join the workforce. The more normalized the use of generative AI, the more bosses will demand of workers. And the more efficiently we produce things, the more potential there is for others to profit from our labor. In addition to worrying about unemployment, we should also worry about compounding the sense of injustice that comes with increasing economic inequality and the sense of meaninglessness that comes from a life full of routinized tasks. We’re failing our students if we train them to uncritically accept what the job market offers them.

My case against using generative AI tools, such as ChatGPT, is this: the arguments in favor of using them boil down to considerations of efficiency. They can reduce the time and resources necessary to create the things we want. This isn’t necessarily a good thing, since efficient production of a bad thing is itself bad. And, less obviously, the means by which these tools speed up the creative process feeds alienation. A future in which we’re all using AI to complete our work is one in which we are more isolated from each other and the fruits of our labor. No, thank you.

Benjamin Mitchell-Yellin is associate professor of philosophy and director of academic initiatives and strategy at Sam Houston State University in Texas.
