
I fear that institutions are going to fall prey to generative AI FOMO—fear of missing out—and this is going to lead to some very bad decisions.

I would like to suggest that for all the amazement and wonder we have experienced as Google, Meta, OpenAI and Anthropic sell their wares to the public with a series of eye-popping demonstrations, with the promise of more wonders to come, we actually have no idea what this technology means at this time.

Leaping headlong into this unknown by “pivoting” toward an AI-mediated student experience is a foreseeably costly mistake that some institutions nonetheless seem intent on making.

I’m wondering how many recall the ouster (and then return) of University of Virginia president Teresa Sullivan back in 2012. According to contemporaneous reporting from The New York Times, Sullivan was pushed out by an activist governing board because some members of the board felt “Virginia was falling behind competitors, like Harvard and Stanford, especially in the development of online courses, a potentially transformative innovation.”

If that’s too vague, what they were talking about was MOOCs, the revolution that never barked. Despite the many popular books published at the time insisting their triumph was inevitable, MOOCs did not end up being a transformative innovation. As to the robustness of that market today, perhaps you saw the recent news that online course provider 2U has warned of “substantial doubt” that it can continue to operate.

I think it is a reasonable bet that generative AI will persist longer than the relatively brief MOOC craze, but I shall repeat myself: we have no idea what it is going to be. Perhaps we should also be reflecting on the failure of the previous generation’s “learning analytics” movement, which did not deliver significant benefits to how faculty teach and students learn.

(In fact, the chief innovations of big data in higher ed have been in the areas of marketing and enrollment management, important functions of a higher ed institution, but hardly ones at the core of the mission.)

I get that seeing what generative AI seems to be capable of, and hearing projections of a godlike superintelligence arriving on a (depending on who is talking) five- to 20-year timeline, is pretty freaky and makes it feel like we should be doing something about it. But no. Right now, what most of us should be doing is learning as much as we can, educating ourselves on what this technology can actually do and exploring its potential.

Sure, if you want to make your institution a guinea pig for an experiment employing an unknowable and untested technology, go for it, but we should be clear that this is what you are doing. Maybe it will pay off, but I think the greater likelihood at this time is that it will not.

The other thing to understand about this technology is that should it come to be as useful and powerful as its developers hope, it will be ubiquitous, and perhaps even relatively cheap given the array of competitors working in the space and the concurrent existence of open-source models. If some institution hits on some secret sauce of using generative AI–enhanced technology to do some good, the sauce is not going to be particularly secret given that it’s the underlying AI that’s the true powerhouse of the innovation.

Google, Anthropic and OpenAI are most assuredly not sharing their trade secrets with each other, and yet their most capable models perform remarkably similarly. We have reason to believe that access to generative AI could ultimately become something more like a utility than a capability an institution could put behind its walls and monetize for competitive advantage.

Institutions that rush headlong into a revolution that may not be happening—or, if it is happening, where we have little idea of its outcomes—will be engaging in a kind of unnecessary self-disruption. Think of the potential waste of resources, the risk of alienating various categories of stakeholder, of getting caught holding the bag for a half-cocked grab for the brass ring. Why not avoid this if at all possible?

At the University of Virginia back in 2012, it was the disruption to the institution that ultimately led to the board backpedaling and Teresa Sullivan being rather quickly reinstated. Sullivan had a persistent vision for the university that others shared, and they moved forward together under that vision. The broader community was not willing to dump her because of the allure of the new shiny object. Sullivan stayed in office until retiring in 2018.

For sure, the problems associated with higher ed that Sullivan and UVA faced in 2012—budgetary shortfalls, declining public faith, etc.—have only grown more intense over time, and it’s possible that institutions that are already facing an existential risk need to throw a Hail Mary at generative AI, but this is not most institutions.

Watch, learn, explore, but also resist doing something there’s a high chance of regretting later.
