Image: A robot hand placed over a brown Bible with a cross on the cover. Religious institutions are not shying away from the use of generative artificial intelligence, believing it can be used as a tool similar to Excel or Word. (Getty Images)

The era of generative AI ushered in a fear of students plagiarizing that pervaded discussions about the technology at higher education institutions across the country.

While that fear extended to religious colleges and universities, these institutions are not only leaning into artificial intelligence in the classroom, unlike some of their secular counterparts, but also embracing AI as a tool for critical thinking at schools focused on morality and the whole person in addition to academics.

“Secular education has an implicit or explicit utilitarian approach of, ‘You go to college to gain a skill and to get a job,’” said Jonathan Askonas, assistant professor of politics at the Catholic University of America. “We don’t want you to use it to the extent you’ll be shortcutting your formation as a person. And not just Catholic, but religiously influenced institutions, will be able to articulate this kind of logic in a way secular institutions will struggle to do.”

Jordan Quaglia, research director at the Center for the Advancement of Contemplative Education at Naropa University, a private liberal arts institution in Colorado, also emphasized the focus on a whole person and their lived experiences.

While Naropa does not classify itself as a religious institution, it was founded by a Buddhist scholar and has a guiding force of “contemplative education.” Quaglia believes that focus can help students and faculty when approaching generative AI.

“In Buddhism, it’s very much encouraged to use your own experience to not just take another person’s word for what it is but explore it for what it is,” Quaglia said. “If fear and caution is guiding someone to avoid these technologies, that approach is less likely to succeed, versus an approach of trial and error, and showing students ways to skillfully and not skillfully utilize it.”

Religions, and religious institutions by extension, are no strangers to technology. Some theologians have suggested that AI, by way of priest robots, could help alleviate the priest shortage, and even tamp down concerns about sexual abuse by priests. And earlier this year, a New York rabbi used ChatGPT to write his sermon.

Over the last year, large language models, or LLMs, have also been developed with a specific focus on various faiths: instead of ChatGPT, there’s BibleGPT, Jewish-GPT and BuddhaBot.

“The Catholic chat one does a bridging between the questioner and what the church teaches; on the surface, it’s good but I would not hold it as the authoritative source,” said John Tran, professor of computer science at Catholic Polytechnic University. University officials previously told Inside Higher Ed the institution plans to be the Caltech or MIT of religious institutions. “The LLMs stir innovation and level knowledge, but not nuance.”

Joshua Waxman, an assistant professor of computer science at Yeshiva University, a modern Orthodox Jewish institution in New York, said it is important to take a critical look into what is being used to train these models.

“It’s an issue of who is determining what’s ethical and how it should be aligned,” Waxman said. “A cause for concern would be people attributing a greater level of authority or trustworthiness to something that really isn’t [authoritative or trustworthy].”

He recalled a query he made in the early ChatGPT days about whether chicken parmesan was kosher, to which it responded yes, despite the mixing of meat and dairy, which Jewish religious laws prohibit.

“In many cases, it’s promising a lot more than what’s really there,” he said.

He noted that while GPT and others have attempted to tamp down on anti-Semitic prompts, it’s “like whack-a-mole” and there are always ways to get around the safeguards.

Naropa’s Quaglia, like most of the institution officials interviewed, plans to use generative AI like any other learning tool, “like Microsoft Excel,” to build familiarity and discernment about when to use the tool and when not to. But, he added, Naropa’s learning methods don’t exactly lend themselves to generative AI.

“We have a lot of personal reflection activities and experiences,” he said. “It would be hard to use ChatGPT to mimic things at this stage.”

The openness of the religious institutions to AI contrasts with the sentiments of the general Christian population. According to a 2023 study conducted by evangelical polling firm Barna Group, less than one-third (28 percent) of Christians are hopeful AI can do positive things in the world, compared to nearly 40 percent of non-Christians.

“There’s a danger in forbidding knowledge,” Tran said. “The church has always encouraged us to explore, learn and make your own mind up.”
