Last August, I sat in a conference room in Delhi, India, listening to a partner at a global consulting firm tell us that by 2030, one-third of the world’s workforce will be Indian or of Indian descent, and that India would not replicate China’s strategy of taking on the industries of the West—manufacturing, chemicals and steel—but would instead become the world’s source for offshoring knowledge work. Just three months later, ChatGPT made artificial intelligence a popular reality, and the very definition of knowledge work, and how much of it humans will continue to do, is now in question.
At a holiday party, just weeks after ChatGPT was released, a team leader for a large networking company’s cybersecurity division told me his team had just used ChatGPT to write thousands of lines of code for a project (in just minutes), saving them hundreds of hours of outside contracting. Great news for his team in terms of saving time and money, but bad news for those programmers who will increasingly find themselves displaced by ChatGPT and other large language model artificial intelligence systems.
India’s strategy for harnessing the brainpower of its ballooning population to become the knowledge work back office for the world may have just gone off the rails. The implications of AI for the worldwide workforce are enormous and frightening. These recent developments in AI have the potential to displace humans from whole categories of knowledge work. While the advent of electricity took some time to make its impact felt, it fundamentally changed the world. The advent of AI in everyday life will do the same, and much more quickly.
Higher education is itself a knowledge work industry, one that largely prepares students for careers in knowledge work industries. Indeed, as we track our students’ success, we know that knowledge-based jobs carry the greatest status and rewards (think technology, finance, medicine, law), while more human-focused jobs, even though they too make greater use of technology and data today, pay less and carry less status (think education, social work and criminal justice). That won’t change tomorrow—consider the old axiom that we overestimate the amount of change in two years and underestimate the amount in 10 years—but it will surely change, and we will increasingly see AI take over work that once seemed secure and high paying.
The reaction to ChatGPT in much of the higher education community has been focused on immediate questions, such as “How can we trust a college application essay any longer?” and “How do we rewrite our academic honesty policies to prohibit the use of ChatGPT—and how do we then police those policies?” Those are understandable concerns, but they are myopic. As educators, we face much, much bigger questions, such as:
- What is the role of knowledge when all knowledge is immediately available? So much of our work is to make sure students know what they should know as graduates in a given discipline. Will ChatGPT drive us to competency-based models of education in which what they can do with what they know becomes the key question?
- If so, if much of the basic cognitive work we now do is being done by an algorithmic coworker—a “cobot”—how do we rethink the levels of cognitive ability our students must now possess? How do we raise the cognitive bar in field after field, teaching students how to use their new algorithmic power and how to assess the output?
Curricula across a wide range of fields are being rendered out of date at this very moment; we just don’t know in what ways yet.
Of course, we do not just prepare millions of graduates for the workforce; we also employ about 4.7 million people in our industry in the U.S. alone. At my own institution, we are already playing with AI. A curricular design team used ChatGPT to write a content module for a phlebotomy course, and the assessment of the subject matter experts on the team was that it was quite good. Our creative team used it to write a 30-second commercial and then to construct the shot selection for its production. Our chief marketing officer said the script needed just a bit of tweaking and that eight out of the 10 suggested shots were exactly what her team would have proposed. In both cases, ChatGPT produced the work in just a couple of minutes or less.
Universities have no lack of administrative processes that can be automated now, but this new generation of AI, with amazing natural language processing, also has implications for instruction, tutoring, advising, publishing and more. The efficiencies and cost savings will be irresistible to institutions, but real people who depend on their jobs to pay the bills and care for their families stand to be displaced unless we can reskill them and employ them in more meaningful work not easily done by machines. How many of those higher-order skills jobs will we have to offer in our industry? Not enough to compensate for the coming displacement, I fear.
Therein lies a more hopeful opportunity. Our society has endless need for people to do distinctly human jobs that algorithms cannot do, jobs that only work when people are in relationship with each other. We should flood our underperforming public schools with talented teachers and social workers. We should rebuild our gutted mental health system with well-trained professionals. Our broken systems of policing need to have mental health and community workers in every squad car and on every call. Our health-care system needs more health-care coaches and counselors. Algorithms can diagnose a disease, assess learning and provide predictive insights on crime hot spots, but no one who has just received a devastating cancer diagnosis wants a machine holding her hand. No crying child in a preschool class wants to be consoled by a machine. No machine can build trust between a suffering community and a police force. Those are distinctly human jobs.
They are also the jobs that we have not much valued in our society. They pay less, we staff them at minimum levels and our most talented and ambitious students mostly avoid them. We pay a terrible price for that reality. Just look at the levels of mental health problems, suicide, addiction, crime and general despair that afflict our country. We need to flood our systems of care, as I call them, with people who might find those jobs immensely satisfying because of their human dimension, and we need once again to pay them well, afford them respect and not burn them out. That will require a fundamental restructuring of our economy and workforce, as I propose in my recent book Broken: How Our Social Systems Are Failing Us and How We Can Fix Them (Penguin Random House). If that seems a tall order, it is. But the AI tsunami about to wash over us will fundamentally change our economy, our workforce and our existential sense of ourselves as humans.
If we are to get ahead of that paradigm shift, universities have a critical role to play. We need our philosophers and ethicists engaged in the work. We need our historians of science and technology helping us understand the likely road map ahead. We need sociologists and psychologists thinking through the complexities of a society where machines “live” and work among us. We need cognitive scientists building tools to make sure we understand an artificial brain, even as we work to understand our natural brains. We need law faculty writing the codes that define acceptable and unacceptable uses of AI.
The humanities have never been more relevant, and those faculty members have to work hip to hip with their colleagues who are inventing the technology that is now fundamentally reshaping our world. The knowledge silos that afflict academia will not serve us well when we need a holistic approach to a technology that changes everything. If the experience of social media and the ways it outraced our understanding and sensible use of it has taught us anything, it is that we cannot leave this work in the hands of tech bros like Mark Zuckerberg, Jack Dorsey and Elon Musk. Universities are about to go through seismic change, and universities have never been more necessary.