Yale University’s campus. A Yale University student and professor worked together to create an artificial intelligence chatbot based on the professor’s research into ethical AI. (Photo: Getty Images)

When Yale University freshman Nicolas Gertler wanted to make an artificial intelligence (AI) chatbot based on his professor’s research into AI ethics, his professor advised him to temper expectations.

“I said, ‘Be prepared to be disappointed,’” said Luciano Floridi, a professor in the practice in Yale’s cognitive science program. “You never know if people are really interested in this topic, of a professor at Yale who works on digital tech.”

He added: “Apparently they are.”

In the two weeks since its launch, LuFlot Bot has fielded more than 11,000 queries from users in more than 85 countries. The bot is not intended to replace general-purpose bots like ChatGPT, which can seemingly answer any question under the sun. LuFlot Bot focuses specifically on the ethics, philosophy and uses of AI, answering questions such as “Is AI environmentally harmful?” and “What are the regulations on AI?”

“I did not think the technology would reach people in so many corners of the world,” Gertler said. “It’s what happens when you break down the barriers to this technology.”

Gertler and Yale join the ranks of institutions creating their own large language models (LLMs). Building in-house AI has taken off in recent months as concerns about intellectual property, ethics and equity swirl around mainstay generative AI tools like ChatGPT.

Yale Chatbot Overcomes IP Woes

Gertler first started tinkering with artificial intelligence five years ago, as a 14-year-old with a penchant for technology. He deployed his own AI chatbot last fall, during his first year at Yale, as a study aid for his cognitive science midterm: he built it from lecture slides and study guides, then had it generate practice questions similar to those he expected to see on the exam.
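The article does not detail the implementation, but a common pattern for this kind of course-material bot is retrieval-augmented generation: embed the slides and study guides, retrieve the passages most relevant to a prompt, and have a general-purpose model answer (or write practice questions) from those passages alone. Below is a minimal sketch in Python, assuming the OpenAI client library; the file name, model choices and prompts are illustrative, not Gertler’s actual setup.

# Minimal retrieval-augmented study-guide bot (an illustrative sketch,
# not Gertler's actual code). Requires: pip install openai numpy
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    # Embed a batch of strings with a hosted embedding model.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Chunk the course material; one chunk per paragraph keeps the sketch simple.
chunks = open("lecture_slides_and_study_guides.txt").read().split("\n\n")
chunk_vecs = embed(chunks)

def ask(prompt, k=3):
    # Retrieve the k chunks most similar to the prompt (cosine similarity),
    # then answer using only that retrieved context.
    q = embed([prompt])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from this course material:\n" + context},
            {"role": "user", "content": prompt},
        ],
    )
    return chat.choices[0].message.content

# Used as a study aid: have the bot quiz you in the style of the midterm.
print(ask("Write one exam-style question on this material, then answer it."))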

“I just saw it as a really cool experiment,” Gertler said.

Gertler began the spring semester chatting about his chatbot with Floridi, who was instantly interested. Floridi, a well-known philosopher, is the founding director of Yale’s Digital Ethics Center and has written dozens of research papers and books on the ethics of AI.

Gertler, who co-founded an edtech startup called Mylon Education with Rithvik Sabnekar, wanted to create LuFlot Bot to inform users about the ethics of AI.

“He thought because of the topics I research, it would be natural to have all this work on philosophy and AI ethics made available to the general public,” Floridi said.

One of Gertler’s main goals for the chatbot was to narrow a digital divide that has been widening with successive iterations of ChatGPT, many of which charge a subscription fee. LuFlot Bot is free and available for anyone to use.

“Giving people a source directly from academia is really important because gaining access to the literature is a privilege,” he said. “There’s tons of paywalls and usually the ideas are conveyed in ways that use advanced language that the general public wouldn’t be compelled or able to understand.

“The fact they’re now able to gain an understanding just through this website is vitally important to me,” he said.

For Floridi, there was an added bonus: protecting intellectual property rights. Many higher education officials have pushed back on how LLMs are trained, a process that is often murky when it comes to IP protections and copyright. With a homegrown LLM, it is clear what the professor’s research will, and will not, be used for.

“It’s the difference between buying something from the shop and cooking it yourself; you know the ingredients,” Floridi said. “It might not be better than what you buy, because you cook it, but you know exactly what we put in it.”

Several other higher education institutions—including Harvard University, Washington University, the University of California, Irvine, and the University of California, San Diego—have turned to creating their own internal LLMs for use across campus, keeping professors’ IP safe within the institution.
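The appeal of an internal model is architectural: if the weights run on university hardware, neither the professor’s documents nor students’ queries ever leave campus servers. Below is a minimal sketch of that setup, assuming the Hugging Face transformers library; the specific open-weight model named here is an illustrative choice, not one these universities are known to use.

# Sketch of a self-hosted chatbot: an open-weight model running locally,
# so documents and queries stay on institutional hardware (illustrative).
# Requires: pip install transformers torch accelerate
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any open-weight chat model
    device_map="auto",                           # use a local GPU if present
)

messages = [{"role": "user", "content": "Is AI environmentally harmful? Answer briefly."}]
result = chat(messages, max_new_tokens=200)
# The pipeline returns the conversation with the model's reply appended.
print(result[0]["generated_text"][-1]["content"])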

And, as familiarity with the technology continues to grow, so could the trend of universities building their own internal models.

Both Gertler and Floridi acknowledged that not every professor can create a chatbot based on their teachings, since doing so requires a large body of documents to build on, but they said such bots could be helpful for both faculty and students in the future.

“This project is symbolic of what can be done in a relatively quick timeline to create a safe, secure and accessible chatbot, so think of the possibility of what if professors can create similar bots,” Gertler said. “Put together class slides and study guides and a bank of questions; they have so much of this rich data, it’s just plugging it in to make it more accessible to students.”
