

Inside Higher Ed Editor and co-founder Doug Lederman (left) speaks with Lev Gonick (right), chief information officer of Arizona State University, at the Digital Universities U.S. conference in St. Louis. The event brought together college administrators and education technology company officials to explore digital transformation in higher ed.

Times Higher Education

A recurring theme at the Digital Universities U.S. conference, which concluded Wednesday in St. Louis, was higher ed's move beyond initial fears of artificial intelligence (AI) toward practical, specific opportunities for the technology.

The conference, co-hosted this week by Inside Higher Ed and Times Higher Education in collaboration with Washington University in St. Louis, brought together hundreds of college administrators and education technology company officials to explore the possibilities and challenges of digital transformation in higher ed.

“I’ve been on a digital transformation for over 20 years; the first lesson is, it doesn’t happen overnight,” said Lev Gonick, chief information officer at Arizona State University, who kicked off the event’s second day, describing a science-focused virtual reality lab and a partnership with OpenAI.

Gonick said that while ASU’s digital transformation has taken decades, there isn’t time to spare when it comes to AI. ASU has to go from “online to AI” in roughly three to four years, he said.

Artificial Intelligence

Unsurprisingly, generative AI was a talking point at many event sessions. At a packed workshop titled “Why Are Universities Slow to Adopt Technology?,” attendees tapped AI as a key higher education technology trend that will continue to emerge over the next five years.

When it comes to AI accelerating academic processes, “a lot of the new work for us is figuring out what we want to assess in the process, instead of just at the end,” said Douglas Harrison, New York University associate dean and clinical professor. “The end has been so reliable, for so long, as a measure of learning. But now we have to assess in the middle—which we’ve been saying for decades, but now our hands are forced.”

At another session, Robbie Melton, interim provost and vice president for academic affairs at Tennessee State University, warned of the dangers of bias in AI results. She described how AI-generated images of underrepresented groups may deliver negative portrayals, even in subtle ways, with pictures tending to be sad or serious. Creating positive, happy AI-generated images may require multiple prompts, she said.

“There is a digital divide and there will be an even greater digital divide if underrepresented groups don’t have a seat at the table,” said Melton, who is also vice president of technology innovations for the SMART Global Innovative Technologies Division.

Badri Adhikari, associate professor of computer science at the University of Missouri at St. Louis, emphasized the importance of human checks on AI, including providing AI models “context” to reduce bias when they’re trained on inevitably biased data. Adhikari also stressed that AI is not yet reliable enough to go without a human check in any consequential application.

Neil Richards, Koch distinguished professor of law at Washington University in St. Louis, where he co-directs the Cordell Institute for Policy in Medicine and Law, encouraged thinking even beyond mitigating bias. “There’s somewhere between solving the bias problem and avoiding The Terminator, and I continue to work within that role.”

The University of Florida is working hard to train faculty members to help students use AI ethically and practically, but it is holding off on using AI to build assessments because of similar concerns, said David Reed, associate provost for strategic initiatives. He said a promising early-term predictive analytics program has been halted as his team explores its possible implications.

Richards joked that he was invited as the resident contrarian on a panel discussing the ethical and legal implications of AI. He objected to the idea that regulation and innovation are at odds, arguing that technology and law have long been intertwined and that any strong technology finds ways to adapt to sound ethical and legal guardrails.

Gonick of ASU said a key way to rapidly embrace AI is to have some employees solely focus on implementing the technology, whether that is a team of two or 20 people.

“They wake up in the morning and go to bed at night only thinking about AI acceleration at ASU,” Gonick said of his AI Acceleration team. “If you’re adding it to someone’s existing agenda, you’ll be in the game but it’s hard to imagine you’re dedicating the resources you’ll need.”

Equity and Inclusion

AI wasn’t the only topic at the event, where the official theme was “digital-first: access, equity, innovation.”

“One of the things that’s really great about online spaces is it gives us the opportunity to really think about creating learning experiences with diversity in mind,” said Tiffany Townsend, vice president of organizational culture and chief diversity officer for Purdue Global.

“What we’re doing with technology is really thinking, from the beginning, ‘How do our learners show up? How do they show up and learn in different ways? And how are we incorporating that?’ And the way we’re structuring our courses from the ground up,” she said.

When it comes to defining access and equity in online spaces, boiling down the ideas to single definitions could be limiting, said Racheal Brooks, director of quality assurance implementation solutions at Quality Matters, a nonprofit focused on online and blended learning.

“Instead of making sure no learner experiences challenges, we need to rely on the expertise of students to help illuminate how we can continue to expand that definition,” she said. “Keep in mind we’re going to grow and learn—by keeping that in mind, it can help us expand whatever definition we choose.”

Accounting for COVID Loss

University leaders also touched on the importance of addressing the learning and emotional loss that came during and after the COVID-19 pandemic.

At a session featuring leaders from minority-serving institutions, Maurice Tyler, vice president for information technology and chief information officer at Bowie State University, a historically Black institution in Maryland, said current students are developmentally behind in building social relationships compared with students before the pandemic.

“We can clearly see the cliff, but how to address it, we’re not sure,” Tyler said. “How do you fast forward someone’s brain six years into the future without barraging them with a lot of social interaction which we’re not really equipped to do?”

Part of Bowie State’s response has been to increase the frequency of outreach from its student support team, moving up checkpoints for potential interventions from the fifth week of the semester to the second.

“Pulling in those timelines has helped because it helps us get ahead of those problems as opposed to catching up,” Tyler said.

Wendy DuCassé, director of field education and assistant clinical professor at Saint Louis University, noted in another session the pandemic’s impact on students’ mental health: some students thrived in online learning environments, while for others it exacerbated existing mental health challenges. The tendency of young people to access a near-constant stream of news about global events on social media can create experiences of “vicarious trauma,” she said.

Tameka Herrion, senior director of programs at the Scholarship Foundation of St. Louis, where 85 percent of funded students are eligible for Pell Grants, told educators: “The one thing we can do to help our students the most is paying for mobile mental health apps—like Headspace or Calm—so they can access it on the go.”

Sara Custer, Colleen Flaherty and David Ho contributed to this article.
