
Michigan State’s Melissa Woo: predictive technologies “in many ways risk reducing us to factories.”

CHICAGO—When hundreds of college administrators and education technology company officials gather at a conference on the theme of the “digital campus,” many a faculty member might suspect—or fear—that the conversations wouldn’t be to their liking. Overly optimistic about all the great ways technology can improve “efficiency,” say. Ignoring potential problems such as invasion of privacy or prioritizing corporate profits over learning.  

Digital Universities U.S., a conference co-hosted by Times Higher Education and Inside Higher Ed here this week, had its share of technology enthusiasm in hallway discussions and on the agenda, with sessions heralding the possibilities of learning in the metaverse, harnessing data for student success and promoting well-being in online environments.

But the event was far from a pep rally, with many speakers expressing worries about the rapid emergence of generative artificial intelligence, bemoaning the tendency to embrace the latest “bright shiny object” and cautioning against use of technology that isn’t directly in service of institutions’ core missions.

Some of the ambivalence, if not outright skepticism, came from expected sources, like the philosophy professor who was specifically recruited to raise potential ethical questions on a panel about the use of virtual reality in the learning process.

As the CEO and co-founder of VictoryXR described how his company’s software brings elements of hands-on experience into virtual learning environments, Nir Eisikovits, a professor of philosophy and founding director of the Applied Ethics Center at the University of Massachusetts at Boston, said he appreciated how the technology might make learning more engaging for many students.

Nir Eisikovits

But he also noted that the biometric data gathered by the VR headsets as students reacted to what they saw and felt around them would be “gold-standard data” that could be enormously valuable for companies that wanted to market products to students. “That’s inherently dangerous data that creates special kinds of risks,” Eisikovits said.

It wasn’t only the philosophers who expressed qualms, though. At a session about how analysis of student learning data might help institutions serve their students better, Michael Gosz, vice president of data analytics at Illinois Institute of Technology, where the event was held, heralded a course recommendation system that has streamlined the advising process.

But he acknowledged that the system works well because the data that drive the recommendations were generated through deep conversations between advisers and students in the past—conversations that the mechanized system might reduce the need for. “What happens in the future? Does the system degrade over time?” he asked.

“Or what if the system results in a bunch of advisers being fired?” said Kyle Jones, another panel member and an associate professor of library and information sciences at Indiana University–Purdue University at Indianapolis. “Maybe the adviser/advisee ratio goes from 350, which is already too high, to 700.”

Even the chief information officer on the panel, Melissa Woo of Michigan State University, fretted that predictive technologies that accelerate the pace at which colleges handle key functions “in many ways risk reducing us to factories.” Yes, institutions should help learners reach their educational goals in the most affordable and direct way, Woo said, “but what’s happening to college as a time to explore?”

“And I’m speaking as an administrator” and a CIO, said Woo, executive vice president for administration at Michigan State.

In another session—this one exploring whether administrators and faculty are aligned on the changing digital landscape—some expressed concern that administrators often underestimate the burden technology changes place on faculty members. When colleges launch online programs, many are attuned to the needs of the target students—working adults seeking upskilling while juggling full-time work and family responsibilities.

“Well, that’s our faculty when it comes to artificial intelligence”—they’re just as overwhelmed, said Asim Ali, executive director of the Biggio Center for the Enhancement of Teaching and Learning at Auburn University. In response, Auburn has designed a self-directed, fully online course to help faculty members in the wake of ChatGPT’s release.

Also, administrators may not always appreciate that individual instructor decisions about whether or when to embrace new technology may play out over years. Greg Heiberger, associate dean for academics and student success in the College of Natural Sciences at South Dakota State University, confided that he has his “foot on the gas” to usher in technological change. But he has worked on developing empathy for instructors who avoid flashy group demonstration sessions showcasing technology’s latest bells and whistles.

“They don’t want to put the headset on in front of their peers,” Heiberger said. “They have some of the same fears that our students have. Just as we meet our students where they are, we have to meet our faculty where they are.” For that reason, he has spent part of the past two years meeting individual faculty members for coffee and offering one-on-one demos to introduce virtual reality technology.

Attendees were also concerned about the challenges students face, especially as decisions to adopt technology accelerate and often happen in silos. In a single day, for example, a college student may bounce among learning management systems offered by a corporation, the college and a publisher. For a user, that experience could feel broken.

“That’s a lot of learning, time, energy and cognitive load that is redirected away from meeting outcomes and learning goals and instead focusing on how to get by, how to not fail fast because you miss something,” said Jason Beaudin, executive director of educational technology at Michigan State. “You have to be educated on how to be educated at our institution.”

But others expressed regret that they had no rubric for assessing their institution’s digital learning environment.

“How do you measure accessibility for each of the various tools that are out there?” Victoria Getis, senior director of teaching and learning technologies at Northwestern University, asked. “Is there some way we could say that our university’s digital learning environment is more accessible than it was last year? … It’s very hard to figure out what the baseline is.”

Others cautioned against taking tech vendors’ sales pitches at face value.

“Sometimes we hear from vendors, ‘Oh, we can be anything you want us to be,’” said Kelvin Bentley, program manager at the Texas Credentials for the Future Initiative at the University of Texas system. “We want to work with partners who are willing to walk away, who are willing to say, ‘We did our best, but maybe we aren’t the best fit.’”

Instead of passively listening to sales pitches, Bentley advises being “pushy” with vendors. Administrators should articulate a need for data that address specific questions and whose answers will allow them to make informed decisions. If a vendor cannot deliver exactly what an institution wants, administrators should walk away.

Academics are grappling with how teaching, learning and even social media are evolving in their digital communities. But an informal poll conducted by Inside Higher Ed’s technology reporter suggested that attendees’ greatest angst, which appears to exist alongside a healthy dose of excitement, concerns artificial intelligence.

On the first day of the conference, news broke that Geoffrey Hinton, Turing Award recipient and one of the three “godfathers of AI,” had resigned from Google so that he “could speak more freely about the dangers of the technology he helped to create.”

“The idea that this stuff could actually get smarter than people—a few people believed that,” Hinton told The New York Times, adding that a part of him now regrets his life work. “I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.” Hinton has maintained that his former employer has acted responsibly regarding AI, explaining to MIT Technology Review that he would be “much more credible if I’m not at Google anymore.”

During the Digital Universities U.S. keynote, Vinton Cerf, Google’s chief internet evangelist and Turing Award recipient for his role in developing the internet’s architecture, spoke about potential threats that could emerge from artificial intelligence.

“I wouldn’t be so concerned about this if we were just treating this as a source of entertainment,” Cerf said. “But we’re not. Some of these tools can be abused—either intentionally or unintentionally.”

Cerf called on the higher education community to articulate the potential for abuse and ways to mitigate harm. Academics can identify priorities for policy makers, Cerf added.

Laws may help avert disaster, but they are not the only mechanism for doing so, Cerf reminded attendees. Academics might lead with vocal calls not to use AI for harm.

“I know that sounds a little wimpy,” Cerf said. “But I have to remind you that gravity is the weakest force in the universe, but when we get enough mass, it’s powerful [enough] to keep the planets in order and to keep us from flying off the planet. If we get enough social agreement on what behaviors are acceptable and which ones are not, you may actually influence behavior with something as simple as, ‘Just don’t do that. It’s wrong.’”
