The rapid development and deployment of ChatGPT is one station along the timeline of reaching artificial general intelligence. On Feb. 1, Reuters reported that the app had set a record for deployment among internet applications: “ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study … The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December. ‘In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,’ UBS analysts wrote in the note.”
Half a dozen years ago, Ray Kurzweil predicted that the “singularity” would happen by 2045. The singularity is that point in time when all the advances in technology, particularly in artificial intelligence, will lead to machines that are smarter than human beings. In the Oct. 5, 2017, issue of Futurism, Christianna Reedy interviewed Kurzweil: “To those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today—Parkinson’s patients. That’s how cybernetics is just getting its foot in the door, Kurzweil said. And, because it’s the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory.”
It seems that we are closer than even an enthusiastic Kurzweil foresaw. Just a week ago, Reuters reported, “Elon Musk’s Neuralink received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments … Musk envisions brain implants could cure a range of conditions including obesity, autism, depression and schizophrenia as well as enabling Web browsing and telepathy.”
The exponential development across succeeding versions of GPT is most impressive, leading one to project that version five may have the wherewithal to support at least some aspects of AGI:
GPT-1 released June 2018 with 117 million parameters
GPT-2 released February 2019 with 1.5 billion parameters
GPT-3 released June 2020 with 175 billion parameters
GPT-4 released March 2023 with a parameter count estimated to be in the trillions
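The growth factors behind that exponential claim can be checked with a few lines of arithmetic. This is a minimal sketch; GPT-4's count has not been disclosed, so the 1 trillion figure below is purely an illustrative placeholder, not a reported number.

```python
# Parameter counts for successive GPT releases, as listed above.
# GPT-4's value is an assumed placeholder (OpenAI has not disclosed it).
params = {
    "GPT-1 (2018)": 117e6,
    "GPT-2 (2019)": 1.5e9,
    "GPT-3 (2020)": 175e9,
    "GPT-4 (2023)": 1e12,  # illustrative estimate only
}

versions = list(params)
for prev, curr in zip(versions, versions[1:]):
    factor = params[curr] / params[prev]
    print(f"{prev} -> {curr}: roughly {factor:.0f}x more parameters")
```

Each documented release jumped by one to two orders of magnitude over its predecessor (about 13x from GPT-1 to GPT-2, about 117x from GPT-2 to GPT-3), which is what drives projections about version five.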
Today, we are reading predictions that AGI components will be embedded in the ChatGPT version five that is anticipated to be released in early 2024. Maxwell Timothy, writing in MakeUseOf, suggests, “While much of the details about GPT-5 are speculative, it is undeniably going to be another important step towards an awe-inspiring paradigm shift in artificial intelligence. We might not achieve the much talked about ‘artificial general intelligence,’ but if it’s ever possible to achieve, then GPT-5 will take us one step closer.”
Computer experts are beginning to detect the nascent development of AGI in the large language models (LLMs) of generative AI (gen AI) such as GPT-4:
Researchers at Microsoft were shocked to learn that GPT-4—ChatGPT’s most advanced language model to date—can come up with clever solutions to puzzles, like how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way … Another study suggested that AI avatars can run their own virtual town with little human intervention. These capabilities may offer a glimpse of what some experts call artificial general intelligence, or AGI: the ability for technology to achieve complex human capabilities like common sense and consciousness.
We see glimmers of AGI capabilities in AutoGPT and AgentGPT. These forms of GPT can write and execute their own internally generated prompts in pursuit of a goal stated in an externally supplied prompt. Like an autonomous car, they automatically route and reroute themselves until they reach the desired destination or goal.
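The self-prompting pattern described above can be sketched as a simple loop: the model proposes its own next step toward the stated goal, records it, and feeds the result back in as the next prompt. This is a hedged illustration, not AutoGPT's actual code; `call_llm` is a hypothetical stand-in for whatever language-model API a real system would invoke.

```python
# Minimal sketch of an AutoGPT-style self-prompting loop.
# `call_llm` is a hypothetical placeholder, not a real API.

def call_llm(prompt: str) -> str:
    # A real agent would call a hosted language model here; this
    # stub returns a canned "next step" so the sketch is runnable.
    return f"proposed step toward: {prompt[:40]}"

def agent_loop(goal: str, max_steps: int = 3) -> list[str]:
    history: list[str] = []
    prompt = goal                    # the externally supplied goal prompt
    for _ in range(max_steps):
        action = call_llm(prompt)    # model generates its own next prompt
        history.append(action)
        # Re-prompt with the goal plus the last action, rerouting as needed.
        prompt = f"Goal: {goal}\nLast action: {action}\nNext step?"
    return history

steps = agent_loop("summarize recent AGI research")
```

The loop terminates after a fixed step budget here; real agent frameworks instead stop when the model judges the goal complete, which is precisely the autonomy that raises the concerns discussed next.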
Concerns arise from reports that some experimental forms of AI have refused to follow human-generated instructions and at other times produce “hallucinations” that are not grounded in our reality. Ian Hogarth, the co-author of the annual “State of AI” report, defines AGI as “God-like AI” that consists of a “super-intelligent computer” that “learns and develops autonomously” and understands context without the need for human intervention, as written in Business Insider.
One AI study found that language models were more likely to ignore human directives—and even expressed the desire not to shut down—when researchers increased the amount of data they fed into the models:
This finding suggests that AI, at some point, may become so powerful that humans will not be able to control it. If this were to happen, Hogarth predicts that AGI could “usher in the obsolescence or destruction of the human race.” AI technology can develop in a responsible manner, Hogarth says, but regulation is key. “Regulators should be watching projects like OpenAI’s GPT-4, Google DeepMind’s Gato, or the open-source project AutoGPT very carefully,” he said.
Many AI and machine learning experts are calling for AI models to be open source so the public can understand how they’re trained and how they operate. The executive branch of the federal government has recently taken a series of actions in an attempt to promote responsible AI innovation that protects Americans’ rights and safety. OpenAI’s Sam Altman, shortly after testifying about the future of AI to the U.S. Senate, announced a $1 million grant program to solicit ideas for appropriate rulemaking.
Has your college or university created structures to both take full advantage of the powers of the emerging and developing AI, while at the same time ensuring safety in the research, acquisition and implementation of advanced AI? Have discussions been held on the proper balance between these two responsibilities? Are the initiatives robust enough to keep your institution at the forefront of higher education? Are the safeguards adequate? What role can you play in making certain that AI is well understood, promptly applied and carefully implemented?