

Law schools are weighing their options when it comes to using ChatGPT in admissions essays.

Photo illustration by Justin Morrison/Inside Higher Ed | iStock/Getty Images | Pexels

As ChatGPT becomes commonplace among legal professionals, law schools are divided on whether to allow students to use the artificial intelligence tool in the admissions process.

A week after the University of Michigan Law School announced the AI tool would be banned in law school applications, Arizona State University Law School took the opposite approach.

ASU announced on July 27 that future applicants will be allowed to use ChatGPT in their applications, specifically for their personal statements, which are akin to the essays required in undergraduate applications.

“It’s sort of accepting reality,” said Gary Marchant, director of ASU’s Center for Law, Science and Innovation. He said students would use the tool regardless, and more lawyers have been making headlines for their use of ChatGPT.

“If the assumption is going to be that they’re using it, it makes sense to allow its use under supervision and guidance,” he said. “That’s how they’re going to work in the real world—lawyers use it every day now.”

The growing adoption of ChatGPT among lawyers, who use it to research and draft legal briefs and filings, has created a sense of urgency for law schools. Multiple law professors said it would be “malpractice” not to teach students how to use AI chatbots like ChatGPT.

However, many are pushing back against using the tool in admissions. Law school experts, college consultants and college admissions organizations believe an overarching policy encouraging the use of the tool could go a step too far.

“I do think given the number of colleges in the gray zone—and there are an awful lot of them—it does make sense to be careful with what you’re doing,” said David Hawkins, chief education and policy officer at the National Association for College Admission Counseling. “We still don’t know everything about the multiple dimensions of AI.”

ChatGPT’s Rise in Universities

ChatGPT first launched for public use in November 2022, kicking off a slew of discussions about the ethics, assets and liabilities of using the artificial intelligence tool. While some use it as a helpful writing aid, others warn against relying too heavily on the technology, which, given its infancy, has several pitfalls, including fabricated facts and the reproduction of stereotypes and biases.

Arizona State first dipped its toes in the AI waters this summer by offering a course on ChatGPT spanning a variety of majors, becoming one of the first institutions in the nation to do so.

“Innovation is a main thing at ASU,” Marchant said. “The leadership is very much pushing us that we need to be innovative.”

He said students began asking in the spring about the university’s policies on ChatGPT, with many saying they were already using the tool. Marchant said he believed a blanket policy was needed to ensure equity.

“There’s already people gaming the system; this will make it more equal,” he said, referencing the use of costly consultants or hiring others to write a personal essay. “Many of these tools are free and help not just the rich people that use it but anyone.”

The University of Michigan Law School, which took the opposite admissions stance from the AI-welcoming ASU program, also cited equity as its motivation.

“I think some people will use it, some won’t, and I want a level playing field,” said Sarah Zearfoss, senior assistant dean at University of Michigan Law School. “If I used it and you didn’t, it’s hard to know if my essay was better. Is it because I used ChatGPT or because I’m a better writer?”

Zearfoss acknowledged there’s no way to tell whether a student has used AI tools. But as with nearly every other aspect of the application process, she is taking students at their word.

“The reality is, I have to take things on faith,” she said.

These law school policies come as a majority of colleges and universities remain silent on ChatGPT guidelines.

Hawkins speculated that law schools’ earlier adoption of rules could be due to a few factors. First, because of their size, law schools can be a bit more nimble in changing policy, he said. They are also a bit more independently minded; Hawkins pointed out that law schools were among the first to stop participating in the contentious U.S. News & World Report rankings.

But Mike Spivey, founder of law school–focused Spivey Consulting, has a different theory.

“I think it’s more because there’s so much talk about lawyers using AI to practice law,” he said. “I don’t follow medical community news, but I haven’t seen articles about them using AI to perform surgery.”

The Future of AI in Law School

Most law schools fall somewhere in the middle of the spectrum.

Daniel Linna, a professor of technology at Northwestern Law School, said the Chicago-based university does not have a hard-and-fast rule when it comes to ChatGPT, nor does it have policies about using consultants or other tools to help with applications.

“It seems to me our policy is a good one,” Linna said. “We don’t talk about how you use consultants, or friends or family.” Instead, the university has applicants confirm their statements are accurate and truthful.

Many law schools will likely stay neutral on policy, at least for the time being, according to Spivey. His group counsels 35 law schools and thousands of prospective law students.

Spivey said that, after the U.S. News & World Report rankings, the topic he is asked about most often is finding the best policy for addressing the use of generative AI.

Spivey keeps it simple.

“What I’ve been saying is, ‘Why don’t you hold off until next year?’” he said.

Hawkins of NACAC advises institutions to consider their mission statement, how it relates to the student body they want to attract and whether the use of AI is fundamentally important to those connections.

“With that knowledge in hand and the understanding of how AI is being used, then you’re well positioned to craft a policy that makes sense for your institution,” he said.

But schools that play it safe could be rewarded. Both Michigan’s Zearfoss and Spivey believe the use of ChatGPT in applications could actually hurt an applicant.

“ChatGPT is spectacularly unhelpful; it’s not specific to you,” Zearfoss said, referencing an applicant’s personal essay. “Yes, you can work in your own details, but I think the first draft is very important. And if you start with a substandard draft, [you] can end up with a substandard final product.”

This doesn’t mark the end of the ChatGPT discussion, which university officials and experts alike believe will rage on.

“This is the right thing, based on what we’re looking for, based on the applicants right now,” Zearfoss said of the Michigan ban. “It may not be right forever or for all schools. We definitely have to keep our eye on it and not just lapse into rejection.”
