
I had hoped that, with generative AI having come to (and for) higher education, there would be some applications that would collectively be judged beyond the pale, nonstarters, no-go zones.

Verboten, taboo, over the line.

One of my personal lines is the use of generative AI as a writing tutor, teacher or assessor/grader. It’s my belief that we should not outsource the teaching and evaluation of writing to something that cannot read and does not write. (Processing and generating syntax are not the same thing as reading and writing.)

I know lots of people see some grey areas with the above proposition, and so be it. They understand that AI feedback is a simulation, but that the simulation is convincing enough that it may have some utility to student learning. I can’t get over the fact of the simulation. This may make me a hardliner, but it’s the line I’m comfortable with.

But there is a handful of recent examples that I think do not have such grey areas, where I’m hoping there’s near-universal agreement that these are unacceptable uses of generative AI technology in educational contexts.

My view that these are out-of-bounds is rooted in a belief that the values we claim to hold dear in higher education actually mean something, and are not just a bunch of B.S. meant to make people think higher education is meaningful.

Each section headline links to an article where the practice I am commenting on is apparently happening. Please click through for more context.

AI Peer Review

I can dispense with this one pretty quickly, and the reason is rooted in the meaning of the word “peer.” AI is not a peer. It cannot think, read, feel, or communicate with intention. Declaring that an article has been peer reviewed after feeding it through a large language model would be a lie.

If you want to invent a new category of review, god bless, but give it another name.

Using AI to Replicate Human Data in Experimental Studies

Another one that really shouldn’t be a tough call. If we want to ask people questions about the world, we should ask people questions about the world, not offload the task to an LLM simulation.

The fact that research shows outside reviewers cannot necessarily tell the difference between human and AI-generated responses does not matter (to me, anyway), because if we’re designing a study that is meant to reflect the genuine sentiments of human beings, we must talk to actual human beings.

Right?

Replacing Striking Graduate Students With AI

This was, reportedly, a suggestion from Stan Sclaroff, the Dean of the College of Arts and Sciences at Boston University, who shared “creative ways” that faculty were adapting to the graduate student strike. Among other ideas, Dean Sclaroff included “Engage generative AI tools to give feedback or facilitate discussion on readings or assignments.”

Reached for comment, the university said that the list did not reflect a desire to replace graduate student teaching assistants.

I do not doubt that the university did not intend for this suggestion to be taken that way, because putting such a statement into the world at the outset of a strike would be colossally foolish. But it is interesting, and perhaps instructive, that the suggestion to use generative AI escaped into the world without anyone considering the potential blowback.

--

Each of these examples illustrates what I view as a privileging of institutional operations over the institutional mission. The institutional mission is teaching/learning/research, you know … that old stuff that is supposed to make the U.S. system of higher education the envy of the world and one of the sources of our continuing status as a superpower.

Institutional operations revolve around the entity as a business under which those activities happen. The most consequential aspect of operations is the realization of revenue through tuition and other means, which begets things like the ill-fated attempt to reopen campuses before a Covid vaccine was widely available, or universities taking commissions on students engaging in online gambling.

These are clear examples of a violation of the institution’s purported underlying values, jeopardizing student well-being in the name of continuing institutional operations. I call this tendency acting out of “institutional awe,” that is, privileging the institution over the people the institution is meant to serve.

Outsourcing TA labor to AI is a pretty clear example of institutional awe, where the first thought is how to keep things rolling operationally, rather than asking, “What is the mission-aligned action we could take?”

AI peer review and generating fake human data via large language model don’t quite fit the frame of institutional awe, which usually flows from the top; these choices are happening further down the chain of activity. They are more like individual adaptations to a system that is operating under the wrong incentives. If what matters is the volume of research produced, then seeking efficiencies in that production is both logical and sensible.

But how did we get to a place where the volume of research produced is what matters? This is, to my mind, not consistent with the mission of higher education. Here’s where my belief that we have to figure out how to do less that matters more comes into play. Using generative AI to simulate academic research is not the same thing as doing academic research.

This is true even if the outcomes look similar to what would have happened had everything been done by humans. If we believe our own rhetoric about the importance of teaching and research as the core of the university, those things are meaningless if they are not done by humans.

It doesn’t matter if the simulation is convincing. It’s still a simulation. I can’t fathom why people who will swear to the importance of the mission of higher education will run willingly toward an illusion.
