
The Gartner hype cycle for AI-fueled text and image generation is currently at its peak of inflated expectations. Can the trough of disillusionment be far off?

Forgive this post’s misleading title. I consider ChatGPT, the text generator, an ally, not an adversary. Think of all the things that the program can already do:

  1. Generate lists of bibliographical references.
  2. Tutor students by defining terms and explaining difficult concepts.
  3. Solve math problems and debug programs step by step.
  4. Provide first drafts of course syllabi.
  5. Identify scholarly debates on a particular topic and explore subjects through differing theoretical lenses.
  6. Model clearly organized descriptive and argumentative writing on particular topics.

Text generators might help students learn about different writing genres and forms and force serious writers to become stylists. In other words, AI text generators will establish a new baseline for student essays.

In my own small (40-person) classes, the students’ five shorter essays (at least 500 words in length) must include four parts:

  1. A detailed prompt input into ChatGPT.
  2. The text that ChatGPT “wrote” in response to the prompt.
  3. An essay that builds on the ChatGPT foundation, supplemented with additional research that must be cited in a bibliography.
  4. A list of the corrections, revisions and additions that the student made in producing the reworked essay.

We will devote time in class to discussing the text that ChatGPT produced, including its strengths and weaknesses.

Much as AI text generation can, I am convinced, strengthen student writing, AI image generators can inform visual literacy and arts education and set a new bar for artistic creativity. These applications can help students learn about various artistic styles and concepts, analyze an artwork’s distinct elements and techniques, and push artists in new directions that AI can’t duplicate.

I recently attended a collegewide discussion of ChatGPT’s teaching implications, and I was struck by the exasperated faculty participants’ intense and insistent negativism. One presenter after another declared that the application will encourage intellectual dishonesty. Others complained that the platform has expropriated and exploited scholars’ intellectual property and that coping with ChatGPT will place uncompensated burdens on faculty. Still others asserted that the tool will devalue writing, undermine faculty-student trust, raise doubts about the authenticity and originality of student work, and compromise the college essay.

But many of these concerns, I am convinced, are projections or displacements of a more profound concern: that my richly resourced campus has already downgraded the craft of writing. I learned from participants that a majority of freshmen are currently exempt from the first-year writing requirement because of early-college/dual-degree or Advanced Placement courses in high school or because they took a less demanding course elsewhere. That exemption relieves the College of Liberal Arts of an expensive burden, but it leaves all too many freshmen unable to write well at the sentence level. At the same time, advanced classes with an intensive writing flag are excessively large, with a minimum of 25 students and some with as many as 30 undergrads.

To make matters worse, many colleagues feel unequipped to offer the kinds of writing instruction and feedback that students require, and professional development training is largely unavailable.

I’m well aware of ChatGPT’s limitations. That it’s unhelpful on topics with fewer than 10,000 citations. That factual references are sometimes false. That its ability to cite sources accurately is very limited. That the strength of its responses diminishes rapidly after only a couple of paragraphs. That ChatGPT lacks ethics and can’t currently rank sites for reliability, quality or trustworthiness.

Yet to my mind, the platform, even in its current form, is an asset that faculty would be remiss not to leverage.

However, if this tool is to live up to its potential, it must mine the proprietary databases in which serious scholarship resides. Then, it could conceivably provide the kinds of high-quality responses that scholars would take more seriously.

Also, imagine if the platform extracted campus-specific information about gen ed and major requirements. It could then provide quality academic advice to students that current chatbots can’t. Or what if the tool had access to data covering specific academic programs’ employment and earning outcomes or the payoff of various skills credentials? It could then provide the transparency that the College Scorecard promised but has yet to deliver.

Better yet, what if the platform had access to real-time local or regional job market data and trends and data about the efficacy of various skills certificates? It could then serve as initial-tier career counseling.

I’m no Dr. Pangloss, and I certainly don’t want to come across as a hopeless optimist, let alone as an uncritical promoter of the new technology. But I do think we shouldn’t let our fears outweigh our hopes or our anxieties overshadow future possibilities. The sad fact is that advising and career counseling resources on most campuses are grossly inadequate and that instruction in writing is insufficient. We must do better, and text generation software might help.

But the impact of the new tool might be even more profound. In a fascinating recent essay, Tomas Chamorro-Premuzic, an organizational psychologist and professor of business psychology at University College London and Columbia, argues that generative AI will redefine what we mean by expertise. Much as Google devalued the steel-trap memory, electronic calculators sped up complex calculations, Wikipedia displaced the printed encyclopedia and online databases diminished the importance of a vast physical library, so, too, platforms like ChatGPT will profoundly alter the most prized skills.

According to Chamorro-Premuzic, the skills that will be most in demand will be the ability to:

  1. Know what questions to ask. The quality and value of AI-powered tools’ responses hinge on the prompts they are given. Better prompts elicit richer and more robust responses.
  2. Go beyond crowdsourced knowledge. Advanced and specialized domain and subject matter expertise will become more valuable, since AI-produced responses will inevitably contain errors or oversimplifications. The capacity to spot inaccuracies, miscalculations, mistakes in coding and other boo-boos, and then to correct those errors or complicate the picture, will be highly valued.
  3. Leverage AI-generated insights into decisions and actions. Information becomes most valuable when it is actually applied in real-world contexts: when we solve problems or translate ideas into tangible products and services. The ability to implement solutions is, of course, well beyond AI’s current capabilities.

As Chamorro-Premuzic puts it succinctly, “If we want to retain an edge over machines, it is advisable that we avoid acting like one.”

In other words, if a program can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations. That means pedagogies that emphasize active and experiential learning, that show students how to take advantage of these new technologies and that produce graduates who can do those things that the tools can’t.

It used to be the case that every educated adult knew the story of the Luddites, those early-19th-century English textile workers who sought to destroy the mechanized looms and knitting frames that ultimately eliminated their jobs and shattered their way of life. As the great historian of the English working class E. P. Thompson insisted, we mustn’t condescend to the Luddites, who (like late-19th-century Southern and Western farmers and 1970s- and 1980s-era steel and auto workers) were the tragic victims of “progress.”

But we must also recognize that their resistance to technology’s advance, however valiant, was futile. It was doomed to defeat in the face of industrialization.

As Claudia Goldin and Lawrence F. Katz have pointed out, technology and education are engaged in an ongoing race. Robots and automation did, in fact, displace millions of members of the industrial working class. Computerization eliminated large swaths of middle management jobs.

The threat now is to the very knowledge workers who many assumed were invulnerable to technological change. If we fail to instill within our students the advanced skills and expertise that they need in today’s rapidly shifting competitive landscape, they too will be losers in the unending contest between technological innovation and education.

Steven Mintz is professor of history at the University of Texas at Austin.
