
In recent months, a curious fixation has emerged in corners of academia: the em dash. More specifically, the apparent moral panic around how it is spaced. A dash with no spaces on either side? That must be AI-generated writing. Case closed.
What might seem like a minor point of style has, in some cases, become a litmus test for authenticity. But authenticity in what sense—and to whom? Because here is the thing: There is no definitive rule about how em dashes should be spaced. Merriam-Webster, for instance, notes that many newspapers and magazines insert a space before and after the em dash, while most books and academic journals don’t. Yet, a certain kind of scholar will see a tightly spaced dash and declare: “AI.”
This tells us less about punctuation and more about the moment we are in. It reflects a deeper discomfort within academic knowledge production—about writing, authority and who gets to speak in the language of the academy.
Academic writing has long been a space of exclusion. Mastering its conventions—its structures, tones and unwritten rules—is often as important as the content itself. Those conventions are not neutral. They privilege those fluent in a particular kind of English, in a particular kind of intellectual performance. And while these conventions have sometimes served a purpose—precision, nuance, care—they have also functioned to gatekeep, obscure and signal belonging to a small circle of insiders.
In that context, generative AI represents a real shift. Not because it replaces thinking—clearly, it does not—but because it lowers the barriers to expressing ideas in the right register. It makes writing less labor-intensive for those who are brilliant thinkers but not naturally fluent in academic prose. It opens possibilities for scholars writing in their second or third languages, for early-career researchers who have not yet mastered the unwritten codes and for anyone who simply wants to get to the point more efficiently. This is not a minor intervention—it is a step toward democratizing academic expression.
And in that lies both the opportunity and the anxiety.
I have read academic work recently that likely used AI writing tools—whether to help organize thoughts, smooth expression or clarify argument. Some of it has been genuinely excellent: clear, incisive and original. The ideas are coherent and well articulated. The writing does not perform difficulty; it performs clarity. And in doing so, it invites more people in.
By contrast, a fair portion of traditionally polished academic writing still feels burdened by its own formality—long sentences, theoretical throat-clearing, prose that loops and doubles back on itself. It is not that complexity should be avoided, but rather that complexity should not be confused with value. The best writing does not show off; it shows through. It makes ideas visible.
Needless to say, I am not about to cite examples—whether of the work I suspect was AI-assisted or the work that could have done with a bit of help.
So why, then, do so many in academic circles focus their attention on supposed telltale signs of AI use—like em dashes—rather than on the substance of the ideas themselves?
Part of the answer lies in the ethics discourse that continues to swirl around AI. There are real concerns here: about transparency, authorship, citation and the role of human oversight. Guidance from organizations such as the Committee on Publication Ethics, and emerging policies from journals and universities, reflect the need for thoughtful governance. These debates matter. But they should not collapse into suspicion for suspicion’s sake. That’s because the academic world has never been a perfectly level field. Those with access to time, mentorship, editorial support and elite institutions have long benefited from invisible scaffolding.
AI tools, in some ways, make that scaffolding more widely available.
Of course, there are risks. Overreliance on AI can lead to formulaic writing or the flattening of style. But these are not new issues—they predate AI and are often baked into the structures of journal publishing itself. The greater risk now is a kind of reactionary gatekeeping: dismissing writing not because of its content, but because of how it looks, mistaking typography for intellectual integrity.
What is needed, instead, is a mature, open conversation about how AI fits into the evolving ecosystem of scholarly work. We need clear, consistent guidelines that recognize both the benefits and limitations of these tools. Recent statements from major institutions have begun to address this, but more are needed. We need transparency around how AI is used—without attaching shame to its use. And we need to refocus on what matters most: the quality of the thinking, the strength of the contribution and the clarity with which ideas are communicated.
The em dash is not the problem. Nor is AI. The problem is a scholarly culture still too often wedded to performance over substance—one where form is used to mask or elevate, rather than to express.
If we are serious about making knowledge more inclusive, more global and more just, then we should embrace tools that help more people take part in its production. Not uncritically, but openly. Not secretly, but responsibly.
What we should be asking is not “Was this written with AI?” but rather, “Is this work rigorous? Is it generous? Does it help us think differently?”
That is the kind of scholarship worth paying attention to—em dash or not.