Far be it from me to call bullshit on a legendary venture capitalist known as the “queen of the internet,” but having read through Mary Meeker’s “AI & Universities” report, recently covered by Lauren Coffey here at Inside Higher Ed, I’m calling bullshit.
The report itself is a cobbling together of various data points—primarily about how much data can now be harnessed by technology (lots!)—along with confident suppositions about the future (not present) capabilities of generative AI applications, turned into a call to arms to embrace inevitable innovation.
The report draws very few, if any, direct connections between the claims and the available evidence. It assumes a lot of facts not in evidence and utterly ignores some important dimensions of the work of teaching and learning in educational spaces. According to the IHE reporting, Anna Ivey, a former dean of admissions for the University of Chicago Law School, “laughed out loud in a San Francisco Starbucks upon reading parts of the report.”
I interpret Ivey’s laugh as one of incredulity that something so transparently B.S. is being taken seriously. I share that incredulity, but mine is quickly followed by a different emotion: deep worry that important people are taking these ill-considered claims quite seriously.
Here’s the crux of the issue: Which group will institutions listen to about how to go about their work, venture capitalists or people who know something about teaching and learning?
I fear the answer to that question, but I’m going to set it aside for now to instead draw a distinction between the different mindsets that people like Mary Meeker and, for lack of a better alternative … me, bring to the question of using AI in education.
In essence, I think process matters, that the journey one takes to the destination of a credential is meaningful independent of the credential itself. Learning is found in the process, and too much of what we already ask students to do does not take the journey into account.
Meeker and the AI-enthusiast venture capitalists believe that all that matters is the product. Do stuff, get a credential, rinse and repeat.
When it comes to actually learning how to do important and meaningful things, Meeker is wrong, but she may be right that we are heading toward a future where it’s no longer important to know how to do stuff like think and write, because we’re going to engage in one big, collective shrug and decide that whatever generative AI tools can conjure is an acceptable substitute.
This difference in mindset jumped off the page in one statement, highlighted by Coffey in the IHE article:
“In the basics of teaching—from drafting lesson plans to reviewing assignments and managing classroom communications—teachers already have a full plate. As technology evolves and becomes more widely available, teachers should be able to save time and increase productivity, focusing more on their core craft by leveraging AI for more time-intensive tasks.”
What Meeker describes as “the basics,” meaning things that are of low priority and therefore amenable to outsourcing to generative AI, are not actually the basics. We should not see things like planning what we teach, reviewing the work students do in response to those plans and interacting with our students as “the basics.”
Instead, we should think of them as the fundamentals.
As I’ve written previously, as a teacher of writing, I cannot outsource giving writing feedback to something that cannot read. Reading student writing is a fundamental part of teaching because I absolutely must know what students are doing to be able to help them develop. In fact, reading their work is not sufficient by itself. I must also have a conversation with them about my impressions of what they’ve written. This is the work.
Lesson plans, or something like the syllabus, cannot be outsourced because these are the blueprints that inform my work. The opportunity to think through the problem of teaching a particular course is a necessity.
The cheerleaders for the use of this technology in educational spaces appear to know little to nothing about education. The only way this gets traction is if we ignore this distinction.
Lots of people appear willing to ignore that distinction because doing so will increase “productivity.” What is the endgame to all those productivity increases other than to eliminate the mess humans make from the educational space?
AI assigns the work, students use AI to do the work, AI grades the work. Humans are in the loop to do what, exactly?
I ask again, what is the point?
I have no power and little influence when it comes to shaping the decisions that will be made by those who do have power and influence—the blog is not mightier than the dollar—but I cannot urge the people who do have these responsibilities strongly enough not to give in to generative AI FOMO. The wealthiest tech companies in the world are throwing every last dollar (along with every last drop of water and kilowatt of energy) at AI development.
The result is likely to be a future in which access to the core AI models will be something like a commodity, closer to a cable subscription than something that requires exclusive or proprietary applications.
Do you know who agrees with me? Goldman Sachs, which believes the spending on generative AI is far out of proportion to its likely benefits.
All of the supposed productivity benefits of generative AI tools in education require abandoning work that should be done by humans. The fact that we can’t seem to find the resources to pay humans to do the work is not a recommendation for using generative AI.
Embracing Mary Meeker’s vision means giving up on education, not advancing it.