The inevitable has happened: clearly unedited generative AI content is finding its way into (apparently) peer-reviewed publications.

It’s possible that this practice is already disturbingly widespread.

This was inevitable because in a system that privileges efficiency and productivity—as seems to be the case with much of academic scholarship—using a tool that can generate a simulation of the kind of content that passes muster in these spaces will be sorely tempting. These simulations are entering a world where those tasked with gatekeeping are overwhelmed, and the temptation to simply check the box and move the article down the production line is powerful, particularly when that production (for both scholars and editors) is the job. If the only things that “count” are things we can easily count (like number of publications), we incentivize the thoughtless production of things that count.

Just as ChatGPT forced us to look at the kinds of essays allowed to pass muster in school contexts and to question whether they are worth doing the way we’ve been doing them, generative AI is exposing the pre-existing disconnects between what we claim scholars should be doing—producing original, meaningful research—and what they’re actually incentivized to do: crank out as many articles as they can push through the publication pipeline as quickly as possible.

Using ChatGPT in this way should be self-evidently absurd. Outsourcing original academic research to a technology that has no capacity to think or reason, and then accepting that output as passing muster, makes a mockery of whatever values we can still claim to cling to. I don’t actually have a dog in that particular fight, but similar to my feelings about embracing large language models (LLMs) as tutors or teachers of writing, accepting them as generators of research or as peer reviewers is tantamount to giving up on the academic mission.

Generative AI has not created these disconnects, but has instead exposed them. It’s up to us as to what, if anything, we do about it.

I’ve written quite a bit in this space about what I think we should focus on in writing instruction and assessment, and I’ll have much more to say in my next book (More Than Words: How to Think About Writing in the Age of AI), but today, let me make a suggestion for the already established academics of the world.

It’s time to figure out how to do less stuff that matters more.

If you pause to think about it for a moment, it’s odd that volume of output has been allowed to substitute for quality or importance of output. More is not necessarily better; it is simply more. A ratcheting arms race around volume as the way to distinguish those worthy of elite opportunities or the security of tenure is not good for any stakeholder in the system. It is not good for the scholars who must race against time to hit arbitrary volume-based metrics. It is not good for those whose job it is to judge the meaning of these outputs, as they, too, become overwhelmed with work.

Of course, this ethos is not confined to the professoriate. Students striving for admission to highly selective institutions will run themselves ragged amassing experiences and credentials that will be looked upon favorably by admissions offices. Many of these activities mean nothing to the students themselves, and yet they persist for the sake of some indefinite future payoff.

Why would we embrace this ethos, even supercharge it, by employing AI tools that help us be more mindlessly productive?

The awesome potential of an institution of higher education is as a place where people can develop their minds, their spirits, their capacities—and do so in a way that shares this development with others. Above all, this work should be mindful, not mindless. If someone is turning to ChatGPT to crank out their research, that is a de facto admission that the work is mindless, given that ChatGPT has no mind.

Privileging outputs over experiences steers us away from quality. This was true before generative AI. It is no truer now; what has changed is that the existence of a technology that can create these simulations has given folks qualms about the “integrity” of the work.

That work never had integrity because it was already tainted by privileging a box-checking product over a meaningful process.

It is not all that difficult to shift to a mindset that privileges process and quality over product and volume, and then to remake the system around those values. It’s the shift I made as an individual teacher in how I taught writing. Finding this harmony between my pedagogical values and my pedagogical practices was liberating for both me and my students.

I see a lot of different varieties of discontent in academic spaces, a lot of disconnects between values and practices. I think a similar liberation could be at hand for academics themselves. In the case of academic research, where we already claim to value quality, it shouldn’t be that hard to begin to live those values.

Do less that matters more.
