In general, these columns have focused on the incremental actions that higher education institutions and their constituencies should take as they adapt to the rapid technological advances led by generative AI. I believe this is the best approach: responding to developments as they occur, or when it becomes clear that changes are about to be released.
Yet we must not ignore the long-term consequences projected over the next five to 10 years. I was prompted to consider those once again upon reading a profound article by Avital Balwit, chief of staff to the CEO at Anthropic. Titled “My Last Five Years of Work” and published last month in Palladium, the article raises concerns about the effects of AI taking over jobs, both on those displaced from their employment and on society at large. It prompted me to reflect on the longer-term context of the advent of the Fourth Industrial Revolution.
The era of autonomous agents is upon us. Over the next three to 12 months, we will see the release of agents (rather than chat bots) that are driven by outcomes rather than ever more complex prompts. These will roll out rapidly with a wide range of capabilities, and they will be a continuing part of the revolution.
Almost certainly by the fall 2025 semester, or shortly thereafter, we will see the expanding use of generative AI as instructors. We already rely on apps to help us construct course syllabi, learning outcomes, grading rubrics and much more. AI conducts discussion boards, serves as a tutor (Khanmigo, for example) and orchestrates adaptive learning. The advent of synthetic instructors, perhaps supervised at first by human “master instructors,” will mark a notable milestone in AI’s progress through higher education. This is unlikely to come without pushback from some faculty, students, unions and others. Yet I believe the capabilities, economies and efficiencies of advanced AI are likely to prevail.
Writing in PC Magazine, Emily Dreibelbis quotes Ray Kurzweil, whose book The Singularity Is Nearer is available now:
“The Singularity is the ‘next step in human evolution,’ when humans merge with AI to ‘free ourselves [from] biological limitations,’ Kurzweil says. This will happen primarily through brain-computer interfaces like the one Elon Musk is building with Neuralink, he says. Continued increases in computing power and price drops on chips and processors make this future all but inevitable. ‘Some people find this frightening,’ Kurzweil tells PCMag. ‘But I think it’s going to be beautiful and will expand our consciousness in ways we can barely imagine, like a person who is deaf hearing the most exquisite symphony for the first time.’ Skeptics should look to the theory of exponential growth, Kurzweil argues. Advancements are not linear; rather, society makes great leaps in progress that far exceed the ones that came before. Case in point: ChatGPT’s explosive debut. ‘While it is amazing to see the incredible progress with large language models over the past year and a half, I am not surprised,’ Kurzweil tells PCMag.”
As Balwit notes in her recent Palladium article, we are moving toward a society in which far fewer humans are hired to perform many common white-collar tasks. The cost savings to corporations (and universities) from eliminating high-paying jobs alone will lead to less expensive products, courses, certificates and degrees.
One might ask how many jobs will be eliminated or replaced by AI technologies. At this point in the rollout of 4IR, the answer is not entirely clear. Certainly, millions of jobs have already been augmented, replaced or otherwise affected by AI in the past year and a half. Conjectures as to the extent of job losses range well into the hundreds of millions of positions. Science, the peer-reviewed journal of the American Association for the Advancement of Science, recently carried the article “GPTs Are GPTs: Labor market impact potential of LLMs: Research is needed to estimate how jobs may be affected,” which proposed a framework to do just that:
“We propose a framework for evaluating the potential impacts of large-language models (LLMs) and associated technologies on work by considering their relevance to the tasks workers perform in their jobs. By applying this framework (with both humans and using an LLM), we estimate that roughly 1.8% of jobs could have over half their tasks affected by LLMs with simple interfaces and general training. When accounting for current and likely future software developments that complement LLM capabilities, this share jumps to just over 46% of jobs. The collective attributes of LLMs such as generative pretrained transformers (GPTs) strongly suggest that they possess key characteristics of other ‘GPTs,’ general-purpose technologies (1, 2). Our research highlights the need for robust societal evaluations and policy measures to address potential effects of LLMs and complementary technologies on labor markets.”
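The framework the authors describe scores each job by the fraction of its component tasks an LLM could meaningfully affect, then asks what share of jobs cross a given exposure threshold. A minimal sketch of that idea follows; the jobs, tasks and 0/1 exposure scores here are entirely hypothetical, chosen only to illustrate the arithmetic, not drawn from the paper's data.

```python
# Toy illustration of a task-exposure framework: each job is a list of
# tasks, each scored 1 if an LLM could meaningfully speed it up, else 0.
# All jobs and scores below are hypothetical.

jobs = {
    "copywriter":       [1, 1, 1, 0],  # e.g., drafting, editing, headlines, client calls
    "paralegal":        [1, 1, 0, 0],
    "lab_technician":   [0, 0, 1, 0],
    "academic_advisor": [1, 0, 0, 0],
}

def exposure(task_scores):
    """Fraction of a job's tasks an LLM could affect."""
    return sum(task_scores) / len(task_scores)

# Jobs with over half their tasks exposed, and their share of all jobs
majority_exposed = [job for job, tasks in jobs.items() if exposure(tasks) > 0.5]
share = len(majority_exposed) / len(jobs)
print(majority_exposed, share)  # ['copywriter'] 0.25
```

The paper's headline figures (1.8 percent rising to just over 46 percent) come from applying exactly this kind of threshold count, first to LLMs alone and then to LLMs plus complementary software.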
If the authors’ predictions are on target, the loss (or broad restructuring) of nearly half of all jobs in the labor market may require some kind of universal basic income, as suggested in Balwit’s Palladium article. Earlier this year, CNN took up the topic, noting,
“Global policymakers and business leaders are now increasingly warning that the rise of artificial intelligence will likely have profound impacts on the labor market and could put millions of people out of work in the years ahead (while also creating new and different jobs in the process). The International Monetary Fund warned earlier this year that some 40% of jobs around the world could be affected by the rise of AI, and that this trend will likely deepen the already cavernous gulf between the haves and have-nots. As more Americans’ jobs are increasingly at risk due to the threat of AI, Tubbs and other proponents of guaranteed income say this could be one solution to help provide a safety net and cushion the expected blow AI will have on the labor market.”
Stanford University’s Basic Income Lab examines this topic in great detail on a continuing basis. It defines the practice this way: “basic income is a regular cash payment to all members of a community, without a work requirement or other conditions.” How would such a program be funded? Some suggest the money would come from a tax on entities that replace human workers with AI, since those entities will realize profits from the economies of AI.
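The funding question reduces to simple arithmetic: the tax rate required on AI-driven labor savings is the total basic-income payout divided by those savings. The figures below are purely hypothetical placeholders for illustration; no source in this column supplies actual numbers.

```python
# Back-of-envelope sketch of funding a basic income from a tax on
# AI-driven labor savings. All figures are hypothetical.

population = 1_000_000              # adults receiving the payment
annual_payment = 12_000             # basic income per person per year
ai_labor_savings = 30_000_000_000   # employers' annual savings from AI

required_revenue = population * annual_payment      # total payout needed
tax_rate = required_revenue / ai_labor_savings      # share of savings taxed

print(f"Required tax rate on AI savings: {tax_rate:.0%}")  # 40%
```

The point of the sketch is only that the viability of such a tax depends on the ratio of the payout to the savings actually realized, which is exactly what remains uncertain.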
While all of this remains in the realm of conjecture and uncertainty, it is a topic we all may confront at some point in the near future. As Balwit suggests, there are self-image, mental health and related issues, as well as economic ones, to consider as we navigate the road to the singularity.
In addition, we must consider the mission and scope of higher education if nearly half of all jobs are filled not by educated humans but by trained AI. The premise of our system of colleges and universities is founded on teaching and learning for humans. If demand for educated and skilled workers drops by nearly half, there would seem to be profound implications for our institutions. Even as we respond to incremental changes in the knowledge and skills employees need, we should carefully track and consider the changing societal needs for what we deliver through higher education, and how we deliver it.