Earlier this week, the Center for AI Safety put out a statement that’s sobering both in its content and in its brevity.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The statement’s 1,100 signatories include well-known AI researchers at universities including Stanford, MIT, and Harvard. There are CEOs of dominant AI companies such as OpenAI and Anthropic; the officers in charge of AI technology at Asana, Microsoft, and Google; the head of the Alan Turing Institute; the president of The New York Academy of Sciences; and the CEO of the Clinton Health Access Initiative.
This is just one of several such documents issued over the last few months warning of the danger posed by the rapid development of artificial intelligence. But anytime the most knowledgeable people in a field warn that their own industry should be treated as a threat on par with nuclear war, that demands both public conversation and government regulation.
It’s certainly worth a quick look at what makes all these very smart people so very, very worried.
Some of the dangers the Center for AI Safety warns about seem relatively minor. For example, they are highly concerned about “enfeeblement,” the idea that leaning on AI tools might make humans so weak and dependent they couldn’t handle ordinary tasks on their own.
Every parent frowning at how their children stay glued to the screens of their phones certainly knows the risk of technological addiction. However, the specific concern about AI being too helpful for our own good carries with it a heavy whiff of “back in my day, dagnabbit.” Long before anyone fretted over how Google had robbed us of our vital skills in paging through a 20-volume Funk & Wagnalls, Plato was out there railing against the whole concept of writing, out of concern that once people learned how to write, they would “cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”
That one’s a lost cause. So is Google. And so is any idea that AI tools like ChatGPT or generative art programs can be stuffed back into their box. That battle is over. Now both art and every form of text-based expression are in for a transformative reckoning that is already underway and won’t be stopped.
As big a change as that may be, it’s the least of the concerns about where AI is going. To understand one of the more terrifying possibilities, it’s worth a look at how AI solved an extraordinarily difficult issue that had defeated all other computing efforts since the 1960s.
Human beings have somewhere between 20,000 and 25,000 genes. Every one of those genes has just one job: to express a specific protein whose makeup is described by the contents of the gene. When the protein is created, in a process called translation, it consists of a simple linear chain of compounds called amino acids. As that chain grows to be several dozen, or even several hundred, amino acids long, it begins to twist and bend, eventually forming a stable, well-defined, folded protein.
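For readers who want to see the mechanics of that first step, here is a minimal, purely illustrative Python sketch of translation: reading a gene’s message three bases at a time and emitting the corresponding amino acid. The tiny codon table is a real but deliberately incomplete slice of the 64-entry genetic code, and the sequence fed to it is made up for the example.

```python
# Translation, in miniature: step through an mRNA string one codon (three
# bases) at a time and build the linear chain of amino acids it encodes.
# The table below is a small, illustrative subset of the real genetic code.

CODON_TABLE = {
    "AUG": "Met",                  # also the start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGC": "Gly", "GAA": "Glu", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Return the amino-acid chain encoded by an mRNA string."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":   # a stop codon ends the chain
            break
        chain.append(amino_acid)
    return chain

print(translate("AUGUUUGGCGAAUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Glu', 'Trp']
```

Everything hard happens after this step, when that simple linear chain folds up into its final three-dimensional shape.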
That final shape is as necessary to the function of a protein in the human body as its chemical makeup. Proteins that fail to fold properly, either because of an issue with that original chain or some outside interference, can be anywhere from useless to toxic. Many major genetic diseases come down to a single gene being unable to produce an accurate version of a necessary protein. Prion diseases, like mad cow disease, arise when abnormally folded proteins trigger other proteins to misfold in turn.
But predicting just how a protein will fold, even if you can look at all the amino acids that make it up, is a tremendously complex problem. For decades, uncounted supercomputer hours were hurled at trying to solve the issue of predicting how a change in a protein would alter its final shape. There is even a project called Folding@home, which for the last 23 years has been borrowing computer time from individual PCs and using them as a distributed engine in an effort to solve this problem.
There have been many important successes through the years, some of which have led directly to treatments. However, again and again, models created using even the most complex mathematical expressions have failed to accurately predict the form of a previously untested protein. The problem … is hard.
Or at least, it was. In 2020, Science reported on a U.K. company called DeepMind (now Google DeepMind) and its program, AlphaFold.
Artificial intelligence (AI) has solved one of biology's grand challenges: predicting how proteins fold from a chain of amino acids into 3D shapes that carry out life's tasks. This week, organizers of a protein-folding competition announced the achievement by researchers at DeepMind, a U.K.-based AI company. They say the DeepMind method will have far-reaching effects, among them dramatically speeding the creation of new medications.
Just two years later, Nature reported that AlphaFold had predicted the structure of “the entire protein universe.” This included not just every protein coded for in human genes, but … every protein. Or very nearly: essentially every protein known to science. Forget 20,000; AlphaFold had figured out the shapes of some 200 million proteins. Best of all, where experimentally determined structures exist for comparison, AlphaFold’s predictions are generally a close match.
The achievement is tremendous. For all the long years that scientists have been trying to solve this problem, they’ve known that understanding protein folding is critical to the development of whole categories of treatments. Now, thanks to AI, that huge mountain has finally been scaled.
AlphaFold was derived from the skeleton of another AI application called AlphaGo, which was the first program capable of consistently beating even the most advanced players at the 3,000-year-old Chinese game of Go. Both of these programs are a combination of extensive neural networks that inform, and are informed by, a set of search trees. After being trained with many previous games of Go, or the structures of known proteins, those two AI models far outperform any traditional program designed by humans. One of AlphaGo’s expert-level opponents was so surprised by its style of play that he said, “Surely, AlphaGo is creative.” Within the language of the game, it is.
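For AlphaGo in particular, that marriage of network and search tree can be sketched in a few lines. What follows is a hypothetical toy illustration in Python, not anything resembling DeepMind’s actual code: the policy_network here is a random stand-in for a real trained model, and the search loop is a bare-bones version of the upper-confidence-bound selection used in Monte Carlo tree search.

```python
# Toy sketch: a "network" scores candidate moves, and a search loop decides
# which candidates deserve further attention, feeding results back as stats.

import math
import random

def legal_moves(state):
    """The moves available in a position (a stand-in for real game rules)."""
    return state.get("moves", [])

def policy_network(state):
    """Stand-in for a trained network: assigns a score to each legal move."""
    return {move: random.random() for move in legal_moves(state)}

def search(state, simulations=200):
    """Pick a move by repeatedly exploring the candidates the network favors."""
    stats = {m: {"visits": 0, "value": 0.0} for m in legal_moves(state)}
    for _ in range(simulations):
        total = sum(s["visits"] for s in stats.values()) + 1

        def ucb(move):
            s = stats[move]
            average = s["value"] / s["visits"] if s["visits"] else 0.0
            explore = math.sqrt(2 * math.log(total) / (s["visits"] + 1))
            return average + explore   # balance known value vs. curiosity

        move = max(stats, key=ucb)
        # A real system would evaluate the move with a value network or a
        # deep rollout; here the stand-in policy score plays that role.
        outcome = policy_network(state)[move]
        stats[move]["visits"] += 1
        stats[move]["value"] += outcome
    return max(stats, key=lambda m: stats[m]["visits"])

print(search({"moves": ["corner", "center", "edge"]}))
```

Replace the random stand-in with a network trained on millions of positions, and that same basic loop starts surfacing moves no human explicitly taught it.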
AlphaFold is also creative, discovering a way of predicting protein structures that no human being taught it, despite starting with the same information scientists had at their disposal.
That kind of design is also behind the large language model AIs that are becoming ubiquitous in everything from search engines to word processors to camera phones. They have been given the training they need to create a business report or produce a new portrait of your family in the style of Vermeer. They can do it because they were trained on the source material. But the specifics of how these AIs perform a task are unknown, maybe even unknowable, except to other AIs.
And that’s where the first inkling of the threat begins to appear. Right now, other AI models that have been trained on AlphaFold’s results are using those millions of complex shapes to produce potential new drugs. There is no doubt this opens the door to a wondrous new era of medicine.
On the other hand, an AI could just as easily be trained to take that information and produce harmful toxins, maybe even create a catastrophic epidemic. Prions are very simple, even compared to a virus. Imagine a dozen new diseases with the debilitating effects of mad cow disease being introduced into the environment as deliberate contaminants.
It’s not even clear that the danger requires a human being seeking to cause deliberate harm. Multiple articles have been written on the propensity of ChatGPT to simply lie. When I question the tool about my own background, it confidently informs me that I’ve written for Salon (I haven’t) and HuffPost (not there, either) in addition to Daily Kos.
AI researchers call the tendency of LLMs to move beyond the facts into embellishment or outright lies “hallucinations.” Some AIs released to the public have been so beset with these hallucinations that they’ve had to be withdrawn for more training and retooling. Others can seem perfectly factual much of the time before suddenly spewing nonsense when queried about something of which they have limited knowledge.
Some AIs have even been known to back up their confidently asserted lies with fake legal citations, links to nonexistent papers, and quotes from books that don’t exist. These are very complex hallucinations, and researchers have difficulty determining both what causes them and how to stop them.
Now imagine a hallucination that isn’t an odd résumé falsehood but an improperly folded protein. Or a drug that contains some … embellishment.
It doesn’t take a superintelligent general-purpose AI equipped with Skynet and an army of Terminators to pose a tremendous threat. The threat is there in a toolset whose value is so great that we can’t help but use it, and whose errors are so unpredictable that we can’t understand their source.