OpenAI was much in the news this past Thanksgiving week. After the firing and rehiring of the company’s CEO, Sam Altman, the world watched and considered the future of artificial intelligence in our lives. It was a valuable wake-up call to those who have minimized AI’s coming impact.
I understand the anxiety some feel when they are told that something is coming that will change their lives forever. Artificial general intelligence (AGI) will mean that humans are no longer the apex intelligence in our domain. An entity with superior knowledge, decision-making ability and influence in our society is coming. That may be a bit unnerving. And yet it also means we will likely have rapidly emerging cures for diseases, solutions to the plague of poverty, rational strategies to mitigate the ill effects of climate change and possible resolutions to conflict among countries and peoples. The potential for good is enormous.
It is not at all clear that the large language models (LLMs) of generative AI (genAI) that we are experimenting with today will directly lead to AGI—likely not. Yet, even with the relatively modest genAI, we are finding solutions to problems that have proven so difficult to solve in the past. Within higher education, genAI has begun to more effectively and efficiently tutor students through self-paced, personalized learning. By identifying student needs as they progress through a course, these technologies can prescribe or develop custom tutorials that inform and teach students what they need to know in order to progress. This is exemplified by a collaboration between OpenAI and Khan Academy, highlighted in a recent TED Talk:
“Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools—including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher—and demos some exciting new features for their educational chatbot, Khanmigo.”
Certainly, genAI models today have ample room for improvement. This is not unusual in the development of any technology. I would suggest that we compare this 1-year-old technology with other revolutionary technologies at the point of their first year of use. The Model T Ford was far short of the potential of automobiles in 1909, one year after it was released. We have seen dramatic improvements in speed, performance, safety and capability among automobiles over the past 115 years.
The first cellphones, released in 1973, were the size and weight of a brick. Fifteen years later, we were still using the heavy, bulky “bag phone,” so named because it came in a bag with an over-the-shoulder strap to allow for “easy” carrying. Today, however, we can choose from a wide array of cellphones with many options in size, memory, video and a host of other capabilities. There is even a cellphone in the form of a miniature wearable pin with nearly all of the capabilities of a standard cellphone and more.
Training modes and methods for genAI-powered bots will continue to bring about improvements. In just the last month, many providers announced new and improved versions of their LLMs. Notably, Anthropic announced Claude 2.1, OpenAI announced GPT-4 Turbo and Inflection announced Inflection-2. Each of these represents major improvements and enhancements to the capabilities of the associated genAI app.
Fortunately, an increasing number of genAI apps are built on different LLMs, so one can quickly and easily sample responses to the same prompt from a variety of apps and identify any responses that fall outside the norm, which may be hallucinations by the app. It is this process of competition that helps to drive improvement and minimize aberrations. Independent research labs monitor and report results across the industry: “A new hallucination index developed by the research arm of San Francisco–based Galileo, which helps enterprises build, fine-tune and monitor production-grade large language model (LLM) apps, shows that OpenAI’s GPT-4 model works best and hallucinates the least when challenged with multiple tasks.”
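The cross-sampling idea above can be sketched in a few lines of code. The snippet below is a minimal illustration, not a published method: it assumes you have already collected answers to the same prompt from several apps, and it uses a simple word-overlap (Jaccard) similarity with an illustrative threshold to flag any response that diverges from the consensus of the others.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def flag_outliers(responses: dict[str, str], threshold: float = 0.3) -> list[str]:
    """Return the names of apps whose response diverges from the rest,
    i.e., whose mean similarity to the other responses falls below threshold.
    The threshold value is an illustrative choice."""
    outliers = []
    for name, text in responses.items():
        others = [t for n, t in responses.items() if n != name]
        mean_sim = sum(jaccard(text, o) for o in others) / len(others)
        if mean_sim < threshold:
            outliers.append(name)
    return outliers

# Hypothetical responses from three different genAI apps to one prompt.
answers = {
    "app_a": "The Model T was introduced by Ford in 1908.",
    "app_b": "Ford introduced the Model T in 1908.",
    "app_c": "The first automobile was invented in 1925 by Edison.",
}
print(flag_outliers(answers))  # ['app_c']
```

A production system would use a stronger similarity measure (such as embedding distance) and more samples, but the principle is the same: the response that disagrees with the consensus is the one to double-check.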
While we must continue to be cautious, it is time for us to move on to anticipating the new capabilities that AI will enable in education. We can anticipate much swifter, deeper research across the spectrum of higher education. The power of AI lies in its ability to handle vast volumes of data in microseconds, even approaching nanoseconds with appropriate hardware and architecture in certain applications. These tools offer analysis, synthesis and predictive powers at speeds and volumes that were unthinkable in the past. We should be anticipating which applications can be enhanced by the power and speed of AI. How will these characteristics enhance our products and processes?
The anticipation should lead us to preparation. It is time, even now, to begin to prepare for the advent of artificial general intelligence. In what areas will it enhance our ability to teach, learn, research and serve the needs of the greater community? How will higher education become more efficient and effective? How will this impact our centuries-old administrative structures, our human staffing and our responsiveness to the needs of our students and wider community? Can we begin now to make adjustments that will prepare us for the changes that we know will inevitably come?
There is much work to be done to prepare for the changes that we know will come, let alone those that we have yet to understand. Who is leading this process at your university? Has this been given the priority and support that is necessary to ensure the future of your institution?