
Is it possible yet to see through the fog of hype about generative artificial intelligence? Can we now confidently predict its long-term impacts on higher education and white-collar professions and adapt accordingly?

I think so. Let’s consider two skills crucial to many professions: writing and coding.

Despite their distinct goals and histories, these two skills face a convergent future. As practiced by professionals with expertise in their fields, writing is both critical thinking and an evolving technology. So is coding.

Yet generative AI seems to obviate the need to think—beyond the ability to write prompts or simply copy and paste prompts crafted by so-called prompt engineers.

In practice, generative AI can impact the teaching and practice of writing and coding in two opposing ways:

  1. Democratizing access to expertise, or specialized rhetorical knowledge, about everything from leading-edge medical research to idiomatic, corporate-friendly American English. (Such expertise, of course, was developed by humans with considerable time and effort, looted from the internet via CommonCrawl and GitHub, and filtered for bias and hate speech by underpaid ghost workers in Kenya.)
  2. Eroding professional expertise and literacy, with disastrous consequences, due to overreliance on AI.

In the words of the well-known AI-skeptical piece “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” “most language technology is built to serve the needs of those who already have the most privilege in society.” And those with the most privilege in society also, typically, have the most expertise—or at least the most access to it. Generative AI, at this point, is designed to serve experts, not novices. Not students.

How can educators support No. 1 and limit No. 2? In short, through the cultivation of critical editing skills: the application of discipline- and context-specific expertise and sociopolitical awareness of the rhetorical situation—the ability to deeply analyze and understand the audience, purpose, genre of writing or type of code, and context for a given document or program—to edit draft text and code generated by AI.

As Robert Ochshorn, software developer and Reduct.Video CEO, says, “Critiquing and editing code is a big part of a software developer’s job, and when I was in school, it wasn’t something they knew how to talk about.”

In writing, we are accustomed to editing as a late stage in the process, when we refine sentence structure, word choice and style. Though we fact-check and keep audience in mind throughout, we usually consider our audience’s needs and collect facts early in the writing process—when we face the blank page.

However, a range of experts, from communication specialists drafting press releases to scientists drafting grant proposals, no longer face the blank page. They can begin with a draft created by a chat bot. It’s also true that professional programmers in the era of generative AI rarely face the blank page—but then, they haven’t done that since at least 2008 with the advent of GitHub. (GitHub is the world’s largest repository of open-source code, with more than 100 million users as of January 2023; Microsoft acquired it in 2018.)

Critical editing, then, must bring more critical thinking into the editing process. Professionals must be able to assess how well AI-generated draft text or code accomplishes their purpose and meets the needs of their audience. This cannot be achieved simply by crafting better prompts for AI. It requires developing and applying expertise. To cultivate critical editing skills, educators must therefore remain focused on developing students’ expertise, in key phases of many writing and coding assignments, with little to no help from AI.

Though college instructors need to adapt, we need not feel lost. Where professional writing is headed, coding has been before and has some cautionary tales to offer. Conversely, where coding is headed, writing studies has some guidance to offer. And the stakes are high. If we don’t get this right, the erosion of expertise due to overreliance on AI will severely impact higher education and the larger economy and could make costly, dangerous errors routine in the professional workplace.

The Convergence of Writing and Coding

Generative AI has brought writing and coding closer than ever before, not least because it threatens to render both obsolete. Many think pieces published this year predict “the end of writing.” Quite a few also predict “the end of programming as we know it,” because generative AI tools allow users to create programs by writing prompts in English (and other so-called natural languages), not coding languages.

Historically, automation has not been a goal of writing. Conversely, automation is the primary goal and basis of coding, which streamlines repetitive tasks to save time and effort. Yet learning to code by learning to think like a computer—even to write a simple program that can play tic-tac-toe—requires tremendous patience and self-discipline. Coding is a form of writing—writing the strings of rigorously logical commands that run the tools we use in every aspect of contemporary life.
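To make the difficulty concrete: even deciding whether a tic-tac-toe game has been won forces a novice to spell out, step by step, logic a person reads off the board at a glance. Here is a minimal sketch in Python (the board representation and function name are illustrative, not drawn from any particular course):

```python
# A 3x3 board: each cell holds "X", "O" or None.
def winner(board):
    """Return "X" or "O" if a player has three in a row, else None."""
    lines = list(board)                                           # the three rows
    lines += [[board[r][c] for r in range(3)] for c in range(3)]  # the three columns
    lines.append([board[i][i] for i in range(3)])                 # main diagonal
    lines.append([board[i][2 - i] for i in range(3)])             # anti-diagonal
    for line in lines:
        if line[0] is not None and line.count(line[0]) == 3:
            return line[0]
    return None

# Example: X has completed the top row.
print(winner([["X", "X", "X"], ["O", "O", None], [None, None, None]]))  # prints X
```

Nothing here is conceptually hard, but every case must be enumerated explicitly; the computer supplies none of the judgment a human player takes for granted.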

Writers and editors collaborate, of course, but compared to writing, coding is an extremely collaborative activity. Open-source culture liberally shares code under permissive licenses. In the culture of coding, if someone else has written and freely shared a script that accomplishes a particular task, why on earth would you write it from scratch? In academic writing culture, on the other hand, reusing text written by others is simply plagiarism.

Generative AI, for some, is nothing but a plagiarism machine that makes it impossible to trace or credit the experts whose intellectual property or polished style has been plundered—even when, like Bing Chat, it can list sources in its response to a prompt. When a large language model learns to write smooth, idiomatic prose from billions of documents, no individual writer gets any credit. Generative AI has imposed the culture of coding, with its benefits and risks, on the culture of writing.

One moment perfectly captures the convergent fates of writing and coding. In December 2022, moderators of Stack Overflow, “a popular social network for computer programmers … banned A.I.-generated text” because users were “posting substandard coding advice written by ChatGPT.” As a moderator explained, part of the problem was that people could post this questionable content far faster than they could write posts on their own: “Content generated by ChatGPT looks trustworthy and professional, but often isn’t.”

What was the rest of the problem? Why were programmers fooled by ChatGPT’s coding advice?

One reason: software developers, as a profession, have depended for decades on others’ expertise, sharing and reusing code and using AI-driven code-completion tools. This dependence speeds up software development—but overreliance on others’ code also opens the door to more errors and vulnerabilities, including the 2016 removal of a tiny open-source package, left-pad, that “nearly broke the Internet.”

Another reason: the popular, latent misconception that equates “good writing” with flawless grammar and a sophisticated vocabulary. ChatGPT has surfaced this misconception. In writing by humans, good writing typically depends upon good thinking. We are not used to getting one without the other, and so we are confused by error presented with slick delivery. But it takes expertise to recognize expertise—and to notice factual inaccuracies and other lapses.

Because experts can recognize and correct errors and oversights when they critically edit a chat bot’s draft, they are delighted to incorporate generative AI into their workflow to increase efficiency (e.g., doctors who use chat bots as scribes), while educators and conscientious students feel nervous about doing so. Not because they are unjustly maligned Luddites (or Hollywood screenwriters in fear for their jobs). But because the goal of education is to learn and develop expertise, not to save time and effort for already-trained experts.

“[With Copilot] I have to think less, and when I have to think it’s the fun stuff. It sets off a little spark that makes coding more fun and more efficient,” says an unnamed senior software engineer in a promotional blog post for GitHub Copilot, an AI coding-assistant tool that generates code in response to prompts in English.
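For readers who have not used such a tool, the workflow looks roughly like this: the programmer writes a description in English, often as a comment or a function name, and the assistant proposes the code beneath it. The exchange below is a hypothetical illustration in Python, not an actual Copilot transcript; real suggestions vary and still require critical editing.

```python
# Prompt typed by the programmer, in plain English:
#   "Return the five most common words in a text file, ignoring case."

# The kind of draft an assistant might offer in response:
from collections import Counter

def top_words(path, n=5):
    with open(path, encoding="utf-8") as f:
        words = f.read().lower().split()
    return Counter(words).most_common(n)

# The critical-editing questions remain for the human: Should punctuation be
# stripped? Do common words like "the" belong in the results, given the
# audience and purpose of the analysis?
```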

Good for the senior software engineer. But what about those who teach coding? What about students?

GitHub Copilot, launched in July 2021, has already disrupted the teaching of coding as much as ChatGPT is now disrupting the teaching of writing. Researchers at Gonzaga University in 2022 found that Copilot “generates mostly unique code that can solve introductory [computer and data science] assignments with human-graded scores ranging from 68% to 95%.” ChatGPT has intensified the challenge for the teaching of coding, since it can write code, too.

Obviously, in this new environment, students will still need to learn to code from scratch. But they also need to develop strong critical editing skills so that they can judge and edit AI-generated drafts. The pressing need to develop programmers’ critical code editing skills is nothing new—but generative AI has made it more obvious.

Slowing Down in the Computer Science Classroom

Matthew Butner and Joël Porquet-Lupine, who teach computer science at the University of California, Davis, are worried. GitHub Copilot and ChatGPT, in Butner’s view, are “useful tools to accelerate learning, but my worst fear is that [students] will cheat themselves out of learning.” To the best of their ability, they want to prevent students from using any generative AI tools in introductory courses, and they plan to proctor all exams.

“We’re going to have to change the whole assignment structure [in the computer science major] sooner rather than later,” says Porquet-Lupine. He adds that students must “first learn the fundamental programming skills in introductory classes,” without AI, and that instructors must “redesign the advanced classes to integrate ChatGPT [and Copilot] properly, and legally”—addressing the problematic fact that both tools rely largely on open-source code used without license compliance or acknowledgment.

Recognizing that students need to slow down to learn to code well, in the fall of 2022, Butner began to lead them through a problem-solving guide. Students must define the problem, research how others have solved it, write the steps in English for solving the problem, submit all this to be graded and only then begin to write the code.

He requires these steps precisely because students were rushing to write code before they really understood the task, drafting code that did not work and failing to learn crucial coding skills such as identifying logical errors and gaps, balancing performance and flexibility in software design, and so on.
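The spirit of that guide, translated into a toy example, might look like the following: the English plan comes first, and only then the code that implements it. (The task, plan and solution here are hypothetical, written for illustration rather than taken from Butner’s assignments.)

```python
# Problem: given a list of exam scores, report the average, ignoring
# any entries that are not valid numbers (e.g., the string "absent").
#
# Plan, written in English before any code:
#   1. Keep only the entries that are real numbers.
#   2. If nothing is left, there is no average to report.
#   3. Otherwise, divide the sum of the remaining scores by their count.

def average_score(scores):
    valid = [s for s in scores if isinstance(s, (int, float)) and not isinstance(s, bool)]
    if not valid:
        return None                     # step 2: no valid scores
    return sum(valid) / len(valid)      # step 3: compute the average

print(average_score([88, 92.5, "absent", 74]))  # prints 84.8333...
```

Grading the plan before the code makes the logical gaps visible while they are still cheap to fix.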

Butner does allow students to use AI coding assistants like Copilot in advanced courses and on collaborative projects—and of course, professional software developers and software engineers use these tools, too. They report that these tools increase efficiency but are “useless for anything novel,” in the words of software engineer Kushant Patel (formerly of the Center for Mind and Brain at UC Davis, now at the Lawrence Berkeley National Laboratory).

Patel worries that, if programmers overrely on it, generative AI will quickly run out of the expert code, written by humans, that it needs to train itself. And he feels strongly that students should not be allowed to use AI in coding before the age of 13 or 14, and only then with “mandatory training.”

Critical Editing (for Audience, Purpose, Genre and Context)

We cannot put the latest Pandoras—those smiling corporate vampires that are ChatGPT, GitHub Copilot, Google Bard, etc., which feed on human text and code rather than blood—back in the box. We must prepare our students to enter a workforce that is incorporating generative AI into every conceivable workflow.

To do so, we must define the expertise we mean to teach in our fields, at every level, and ensure that we help students acquire it without overreliance on AI. And whatever time is saved in the classroom by allowing students to rely on generative AI to produce rough drafts or draft code must be devoted to developing critical editing skills, with attention to audience, purpose, genre and context. Practicing these skills will need to happen in the classroom, where students and younger workers especially benefit from the power of proximity to mentors—that is, human experts.

Experts can use generative AI to make writing and coding more efficient. Novices can use it, as Butner says, to cheat themselves out of learning. And if educators let them do that, generative AI will defeat humans not with some spectacular sci-fi mischief, but simply by making humans dangerously incompetent.

Marit J. MacArthur teaches in the University Writing program at the University of California, Davis, where she is also associate director of Writing Across the Curriculum (graduate level), and a faculty affiliate in performance studies. Since 2015, she has overseen the development of open-source software for digital voice studies research, with funding from the American Council of Learned Societies, the National Endowment for the Humanities, and the Social Sciences and Humanities Research Council of Canada.
