

AI-powered “counseling assistants” are here, and could change the way high school students receive college advising.

Photo illustration by Justin Morrison/Inside Higher Ed | Getty Images | Rawpixel

Since the launch of ChatGPT last November, college admissions officers have been wringing their hands over the impact of generative artificial intelligence on college applications. But the counselors reading those applications are increasingly using AI, too.

Fifty percent of higher education admissions offices are currently using AI in their review process, according to a new survey from Intelligent, an online education magazine aimed at prospective college applicants. Another 7 percent said they would begin using it by the end of the calendar year, and 80 percent said they plan to incorporate it sometime in 2024.

Admissions professionals were much less enthusiastic about embracing AI in their work when Inside Higher Ed last probed the issue in May, just a few months after OpenAI released the newest iteration of ChatGPT. But in the past year many have come to terms not only with the inevitability of AI-generated college essays but also with the potential value of using AI tools in their own jobs.

And they’re not only fighting fire with fire by running essays through AI-powered detectors to spot machine-generated writing. According to the Intelligent survey, admissions counselors most often use AI for reviewing transcripts and recommendation letters; more than 70 percent of respondents listed both. Typically, that means running transcripts through a keyword search in order to sort for minimum thresholds on GPA and test scores; in the case of recommendations, application readers often scan the letters to ensure that they are at least generally positive or don’t contain any potential red flags.

Sixty percent of admissions professionals said they currently use AI to review personal essays as well. Fifty percent also employ some form of AI chatbot to conduct preliminary interviews with applicants, or to transcribe and analyze candidate interviews with alumni and counselors.

Diane Gayerski, a professor of strategic communications at Ithaca College and a higher ed adviser for Intelligent, said she’s not surprised that AI is taking off in admissions offices, even as many institutions remain wary of the technology in general.

“Lots more people have played with ChatGPT and other AI tools now and know how good they are at interpreting prompts and questions, so I think there’s a lot more confidence in their usefulness than there was earlier this year,” she said.

‘A Brave New World’

Not everyone has welcomed the innovation with open arms, however. According to the Intelligent survey, two-thirds of respondents were concerned about ethical issues surrounding the use of AI in admissions.

Rob Lamb, director of college counseling at the Sage Ridge School, a small private prep school just outside Reno, Nev., wrote about his concerns in a blog post back in March. Since then, he said, he’s seen some of the initial wariness among admissions offices melt into a broad, if sometimes resigned, acceptance of AI in the review process—without much transparency around how the tools are being implemented.

“I’m a little concerned about it, to be honest … I’d like to know more about how exactly it’s being used,” he said. “Knowing that AI is here to stay, I think there needs to be more pressure, whether it’s from [the National Association for College Admission Counseling] or other groups, to increase transparency in the process and bring enrollment managers and admissions officers together to ask, ‘Where are we going with this?’”

David Hawkins, NACAC’s chief education and policy officer, said while there is no established guidance yet on the use of AI in the admissions process, it has quickly become a frequent subject of discussion and experimentation among counselors.

“As of now, NACAC has not put forward best practices related to the use of AI tools in application reviews, though there is an abundance of conversation about the topic in our membership,” he wrote in an email. “At this point, it’s fair to say that we are in listening mode, given that this issue is evolving rapidly.”

As AI technology advances, so do its uses, and for some the fast clip of progress is more nerve-racking than exciting. Lamb fears there are too many unknowns concerning the consequences of an AI-assisted admissions process, and he would prefer colleges take a more cautious approach.

“It’s like Aldous Huxley’s Brave New World: a lot of people are excited about what this offers, but I think we’re just in the infancy of it, so it’s hard to know what to be concerned about,” he said. “It’s what [educator and cultural critic] Neil Postman would call a Faustian bargain. With every new technology, you gain something, sure, but you lose something, too.”

Gayerski said that in most current admissions uses, AI is merely automating tasks that were already fairly robotic even when completed by humans, such as checking for minimum GPAs or sorting by program interest. For now, she doesn’t believe introducing AI is likely to have much impact beyond increasing efficiency.

“Application readers have been mechanically doing at least the first screen of applications for decades now, based on some uniform criteria given to them by the institution. Some of that can easily be done by a machine,” she said. “These are all algorithms. Whether a person uses them or a machine does, it doesn’t make much difference.”

‘Gamifying’ Admissions (Even More)

Lamb called the use of keyword searches “especially concerning.” He worries that the introduction of AI scanning tools could make the competitive college admissions process even more subject than it already is to what he called “gamification.” Wealthy students at private high schools with dedicated counselors already have a leg up, he said; knowing which keywords certain selective colleges have trained AI to highlight in recommendation letters or essays—or which niche extracurriculars are likely to prompt an AI to flag an application for a second look—will only exacerbate the problem.

“It just creates more levers to pull and makes it all feel even more game-y,” he said. “I can definitely see more backroom discussion between high school counselors or coaches and admissions offices, where they’re asking, ‘OK, what are this year’s keywords?’”

Gayerski countered that the gamification of college admissions has been well under way for years, and she doesn’t believe AI will significantly worsen it. If anything, she said, it could make the process more straightforward by removing human error—and bias.

AI has plenty of more benign uses in the review process as well, such as reducing the often-overwhelming workload of admissions officers. Larger public institutions, for instance, are more likely than smaller private ones to use AI at some point in the review process, especially for transcript reviews. Gayerski said that wasn't surprising, considering one of the top reasons colleges cited for using AI was "efficiency": combing through tens of thousands of transcripts for a first pass with a relatively small admissions team, as many larger public institutions must, is a time-consuming and often menial task, she said.

But Gayerski acknowledged that anxiety about AI in admissions is not unfounded, and said she shared many of the same concerns voiced by her more tech-skeptical peers.

She said she was shocked, for instance, that more than 80 percent of admissions officers responding to the Intelligent survey said AI often makes the final decision on a student’s admission, though she clarified that was likely just a way of culling the stacks of applications down to a more manageable number, which “are often rote decisions anyway.”

The most concerning possibility, she said, was that the AI tools would learn from colleges’ past patterns of acceptance and thus merely replicate historical preferences that highly selective institutions, in particular, have shown toward wealthy white applicants. It was this critique that led the University of Texas at Austin’s computer science Ph.D. program in 2020 to stop using an algorithm the program had designed itself in 2013—one of the earliest instances of admissions AI.

“The AI can select candidates who are similar to students who have succeeded in the same process before. And that may be good, but it also excludes other people and other factors that may have changed over the years,” Gayerski said. “I know there are cases in which these applications have been developed and dropped, because the reviewers understood that it was introducing a bias that they didn’t want.”

Gayerski said that means colleges, though warming to AI, are wise not to embrace it completely—especially after the Supreme Court’s decision striking down affirmative action made ensuring diversity a key priority for admissions offices.

“There is so much controversy around bias in the admission process and what criteria are being used, especially given the Supreme Court’s recent ruling, that I think institutions are being extremely careful and slow to respond because it’s navigating thorny territory,” she said. “And maybe that’s a good thing, because the technology is so fast-changing and so is admissions. We don’t even know what the rules are right now; that’s a bad time to start training AI.”
