
I want artificial intelligence to plagiarize my work.
Let me clarify: I don’t mean I want a student to copy and paste my latest law review article and pass it off as their own. Academic integrity matters. But when it comes to the burgeoning capabilities of generative AI, our fixation on traditional notions of authorship and plagiarism is becoming a significant barrier to realizing perhaps the greatest proliferation of knowledge generation and sharing this world has ever seen. In my research on how legal frameworks can accelerate the development and adoption of AI, I see this firsthand. We stand on the precipice of a revolution in thought and expression, yet we hesitate, clutching tightly to outdated norms centered on individual credit rather than collective progress.
AI’s trajectory is already transforming how we interact with information. It can distill labyrinthine scientific papers into accessible summaries, translate complex policy debates across languages and cultural contexts, and empower individuals previously sidelined in public discourse to craft compelling arguments. Imagine community groups, such as parent-teacher associations, using AI to analyze proposed legislation and draft persuasive appeals to their representatives, tailored precisely to resonate with local concerns. Picture scientists in under-resourced labs partnering with AI to rapidly survey existing literature, identify novel research avenues and author grant proposals. Think of citizens everywhere using AI tutors to grasp intricate subjects like the economic literature on tariffs, leveling the playing field for informed participation in democratic life.
This potential is particularly profound for scholars like me working on emerging issues of broad popular concern, such as the regulation of AI. Our core professional mandate is the creation and dissemination of knowledge. We spend years honing expertise, conducting research and crafting arguments, ideally to contribute to the broader understanding of our fields and the world. AI offers unprecedented tools to amplify this mission. It can synthesize vast data sets, identify subtle patterns across disciplines, help structure complex arguments, refine prose for clarity and even suggest novel lines of inquiry based on the existing corpus of human knowledge. It can be the most powerful research assistant, writing partner and knowledge amplifier ever conceived.
So, what holds us back? In academia, a significant part of the answer is ego, intertwined with our deeply ingrained obsession with attribution and the mortal fear of plagiarism. I encountered this resistance directly a few weeks ago at a well-known law school. I was presenting at a faculty workshop, soliciting feedback on a working paper, tentatively titled “Large Language Scholarship,” that I co-authored with Alan Rozenshtein. The paper explores how generative AI can genuinely improve the often-tortuous process of legal scholarship—streamlining literature reviews, assisting with complex doctrinal synthesis, even helping overcome writer’s block to get crucial ideas onto the page faster and more clearly.
I laid out the potential benefits to my fellow academics: accelerating research, potentially democratizing scholarly contribution, allowing us to focus on higher-level analysis rather than rote summarization. The room was initially engaged, curious. But then the questions began. And almost immediately, the conversation snagged, then ground to a halt, fixated on one point: attribution.
The immense potential for enhanced knowledge creation, for tackling complex legal problems more efficiently and clearly, for making our often-esoteric work perhaps slightly more relevant—all of it evaporated against the specter of improper citation. We stumbled, repeatedly, against this brick wall. Despite my attempts to steer the conversation toward the opportunities and the need for new norms, several colleagues simply could not get past the ingrained fear that AI usage inherently equaled plagiarism, or at least created an unacceptable risk of it. The discussion became less about transforming scholarship and more about policing the boundaries that AI inherently challenges.
That experience crystallized the problem for me. We build careers on citation counts, on the meticulous tracking of who originated which precise turn of phrase or specific idea. We debate the ethics of AI “hallucinations” and worry about machines generating text based on our work without proper footnotes. But this fixation, as revealed in that workshop, often overshadows the core purpose of our work.
Let’s be honest with ourselves. How many people actually read the average scholarly article from start to finish? A handful? And how many delve into the citations? Even fewer. While citation serves a crucial role, our anxiety often extends beyond practical functions into the realm of intellectual ownership and personal recognition. We worry that if an AI learns from our work, our individual contribution will be diluted.
This is where I say, please plagiarize me. Train your models on my articles, my analyses, my proposals for legal reform around AI—even on the ideas in “Large Language Scholarship.” If an AI, having processed my research, helps a start-up navigate regulatory hurdles, assists a policymaker or simply explains a complex legal doctrine to a student, isn’t that the ultimate fulfillment of my work’s purpose?
My goal as a scholar isn’t personal renown. It’s about the ideas themselves and their potential impact. It’s about contributing to a body of knowledge that can lead to tangible improvements. If AI can act as a vector, carrying those ideas further, faster and to more people than I ever could alone—even if it remixes and integrates them without a footnote explicitly tracing back to “[Your Name], [Year]”—then it is serving the fundamental purpose of scholarship.
This doesn’t mean abandoning intellectual honesty. It means recalibrating our perspective. We need to shift the focus from “Who gets the credit?” to “How can this knowledge best be used?” When AI acts as a tool for synthesis and creation, the traditional model of single authorship frays. Insisting on applying old norms to new technology risks stifling the very innovation we claim to champion, just as I saw happen in that workshop.
The potential benefits of AI-powered knowledge generation are too immense to be held hostage by academic vanity or anxieties about attribution. Let the AI learn from us. Let it synthesize, remix and build upon our work. Let it help translate our niche insights into broadly accessible wisdom. Let the ideas flow. My work isn’t about me; it’s about the knowledge. If AI using my work helps accelerate innovation or empowers someone, then I invite it: plagiarize away. The real reward isn’t citation: It’s impact.