“I think ChatGPT can make anyone 30 percent smarter—that’s impressive,” said Michael Levitt, the South Africa–born biophysicist who won the Nobel Prize in chemistry in 2013.

“It’s a conversational partner that makes you think outside the box, or a research team who have read a million books and many million journal papers.”

A pioneer of the computer modeling of molecules, Levitt is not easily dazzled by technological wizardry but admits he has been impressed by the large language models that have emerged over the past year. “I didn’t expect to see this kind of stuff in my lifetime—they’re a very powerful tool. I still write code every day, but ChatGPT also writes programs very well,” he said.

Based at Stanford University, the biophysicist has seen firsthand how technology can rapidly alter how knowledge is accessed—but nothing compares to the potential of LLMs, he insisted. “I started using Google in 1998—two years before it was released publicly—because its founder Sergey Brin was in my class. A very smart guy who rejected my suggestion to make it a subscription service. Google has similar mind-bending powers, but ChatGPT is even more potent,” he said.

ChatGPT’s so-called hallucinations—in which it invents fictitious scientific papers and authors—concern some scientists, but Levitt was not perturbed, reckoning that researchers should be able to spot troublesome results. “It’s like having an incredibly clever friend who doesn’t always tell the truth—we’re capable of spotting these errors. Half of the work that scientists do is flawed, but we’re good at sorting the data,” he said.

That enthusiasm for AI as an aid to research is shared by Martin Chalfie, the Columbia University biochemist who won the chemistry Nobel in 2008. “I was visiting my doctor recently, and he did all the usual checks and made his diagnosis but mentioned he’d also used AI to analyze the results—I almost stood up and cheered,” recalled Chalfie, who is known for his work on green fluorescent protein, which is used for microscopic imaging.

“He was doing everything that a doctor does but also getting a second opinion, which might cause him to think differently,” he added, drawing a parallel with how researchers might use AI to think differently about their results. “Obviously if my doctor suggested that he plugged me into a machine and let it decide my treatment, I wouldn’t have been happy. But that’s not what’s happening in research—I don’t see why you wouldn’t want this kind of assistance.”

Other Nobelists are, however, not entirely convinced that the outputs of ChatGPT and other chat bots scanning the entire corpus of scientific literature should be treated as an unalloyed good. In a discussion this summer at the annual Lindau Nobel Laureates Meeting, which saw dozens of Nobel winners gather in the island town of Lindau in southern Germany, Israeli chemistry laureate Avram Hershko worried that researchers were too trusting of the insights provided by LLMs.

“We have to know what data sets it is using—it should be transparent,” said Hershko, who is based at the Technion–Israel Institute of Technology. Regulation should require LLMs to “say what the margin of certainty is” or, at least, to acknowledge scientific papers with contradictory conclusions that could prompt researchers to seek out different views, he argued.

That said, AI will be an important force for good in coming years, Hershko predicted. Others go further, saying the Nobel committee should give serious thought to changing its rules to allow AI—or AI researchers, at the very least—to become eligible for winning science’s top prize. DeepMind’s AlphaFold technology, which solved the “protein folding problem” that had vexed science for nearly 50 years and allowed scientists to determine a protein’s 3-D shape based on its amino acid sequence, is a good example of a discipline-changing advance that should be eligible, some say.

“The Nobel Prize lives on its reputation, and its history is deeply important to them, so I understand why its committee would not want to give it to a computer—this is the same prize that Albert Einstein won a century ago,” said Levitt. “But it’s a fair question to ask, because AI has changed everything.”

Indeed, the issue of whether an artificial intelligence will win a Nobel is moot, because there are already several nailed-on future Nobel prizes that have relied heavily on the technology, said Shwetak Patel, winner of the 2018 ACM Prize in Computing, a $250,000 award given to outstanding early and midcareer researchers, the second-biggest prize in computing after ACM’s $1 million Turing Award, dubbed the “Nobel of computing.”

“Whoever wins the Nobel for the COVID vaccine will certainly have used AI, which was crucial in sequencing the SARS-CoV-2 genome so quickly,” said Patel, director of Google’s health technologies section and endowed professor of computing and electrical engineering at the University of Washington.

His research field—collecting health data using mobile phones and wearable tech such as smartwatches—has been transformed by the emergence of LLMs in the past few months, he admitted. Methods created by his lab to monitor a patient’s heart rate or check insulin levels in the blood using standard mobile phone cameras, or to check for tuberculosis using a phone’s microphone, are undoubtedly exciting innovations, but the American computer scientist explained that a major barrier to this kind of research was processing the mountains of real-time data arriving from digital devices. Thanks to LLMs, researchers no longer needed to write custom algorithms to handle the arriving data sets and were able to process and even interpret these data with a minimal amount of training, said Patel.

“It’s almost as accurate as the system that we’d been working to develop for five years,” he added.

With LLMs able to parse and interpret data from wearable devices, health researchers used to running checks on a handful of patients could soon be receiving data from millions of people, Patel explained.

“That’s incredibly useful if you want to tackle ‘long-tail’ problems, like diagnosing rare diseases before symptoms begin to appear—we’ve already been able to train a model to find a certain health problem based on just three things we were looking for in the data,” he said.

According to Patel, combining AI with the ubiquitous digital devices of modern life will “push the boundaries of what research can achieve in an unprecedented way.” He added that LLM-enabled devices could also be used to create bespoke fitness and nutrition plans to improve public health.

“Instead of telling people that they should exercise or eat less, health ministries should give out smartwatches, and AI would create very specific plans for fitness and nutrition based around individuals’ personalities and routines—if these could access your phone, then each health plan would be tailored to that individual, making them more likely to succeed,” he said.

Some pundits have wondered publicly whether diminished scientific productivity is now the norm in modern science, with larger teams, costlier equipment and more time required to find truly novel ideas, which yield far less impact than the breakthroughs of the past; a 2020 study in the American Economic Review titled “Are Ideas Getting Harder to Find?” estimated that scientific productivity is about 3 percent of what it was in the 1930s.

Like Levitt, Patel could not disagree more. “This kind of research has exploded in the past few years, but it’s really gone to a new level in the past few months,” he said. “Now is the most exciting time to be a researcher.”
