Opinion: The Real Potential for Lethal AI Toxicity

Last week, world leaders gathered to discuss the possibility of Vladimir Putin using chemical weapons in Ukraine. It is all the more disturbing, then, to read a report published this month about how AI software was used to design toxins – including the infamous VX nerve agent, classified by the UN as a weapon of mass destruction, and even more poisonous compounds.

In less than six hours, commercially available artificial intelligence software of the kind drug researchers routinely use to discover new medicines was instead able to find 40,000 toxic compounds. Many of these substances were previously unknown to science and are possibly far deadlier than anything humans have created so far.

While the report’s authors emphasize that they did not synthesize any of the toxins – and that was never their goal – the very fact that widely used machine learning software could so easily design such deadly compounds should horrify us all.

The software the researchers relied on is commercially used by hundreds of companies in the pharmaceutical industry around the world. It can easily be obtained by rogue states or terrorist groups. Although the authors of the report say that some knowledge is still needed to produce powerful toxins, the addition of AI to the field of drug discovery has dramatically lowered the technical threshold required to develop chemical weapons.

How will we control who gets access to this technology? Can we guard it at all?

I’ve never been particularly bothered by the “AI will kill us” argument propagated by scaremongers and featured in movies like The Terminator. Much as I love the franchise, as someone trained in computer science I saw the storyline as a delusional fantasy concocted by tech guys to inflate their own significance. Skynet is good science fiction, but computers are far from true intelligence and have a long way to go before they can “take over”.

And yet. The scenario presented in the journal Nature Machine Intelligence outlines a threat that almost no one in the field of drug discovery had even suspected. Certainly not the authors of the report, who could not find any mention of it “in the literature”, and who admit to being shocked by their findings. “We were naive about the potential abuse of our trade,” they write. “Even our research on Ebola and neurotoxins… didn’t alarm us.”

Their study “emphasizes that a non-human autonomous creator of deadly chemical weapons is entirely feasible.” What they fear is not some distant dystopian future but what could happen right now. “This is not science fiction,” they say, with an urgency rarely found in a technical article.

Let’s step back for a moment and look at how this study came about. The work was originally conceived as a thought experiment: what could AI do if it were given a nefarious goal? The company behind the study, Collaborations Pharmaceuticals Inc., is a respected, albeit small, player in the growing field of AI-driven drug development.

“We have spent decades using computers and AI to improve human health, not to harm it,” is how the four co-authors describe their work, which is supported by grants from the National Institutes of Health.

The scientists were invited to present a paper at a biennial conference hosted by the Swiss Federal Institute for Nuclear, Biological and Chemical Defense on “How AI technologies for drug discovery could potentially be misused.” It was a purely theoretical exercise.

The four scientists approached the problem with simple logic: Instead of challenging their AI software to find useful chemicals, they flipped the strategy and asked it to find destructive ones. They fed the program the same data they normally use from databases that catalog the therapeutic and toxic effects of various substances.
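To make that inversion concrete, here is a minimal, purely illustrative Python sketch of the kind of scoring flip the researchers describe. Everything in it – the function names, the placeholder predictors, the weighting – is a hypothetical stand-in; the actual commercial software and models used in the study are not public.

```python
# Purely illustrative sketch: all names and numbers here are hypothetical,
# not the commercial drug-discovery software described in the study.

def predicted_activity(molecule: str) -> float:
    """Stand-in for a trained model scoring therapeutic activity."""
    return 0.0  # placeholder for a real predictor

def predicted_toxicity(molecule: str) -> float:
    """Stand-in for a trained model scoring predicted toxicity."""
    return 0.0  # placeholder for a real predictor

def therapeutic_score(molecule: str, w: float = 1.0) -> float:
    # Normal drug discovery: reward activity, penalize predicted toxicity.
    return predicted_activity(molecule) - w * predicted_toxicity(molecule)

def inverted_score(molecule: str, w: float = 1.0) -> float:
    # The flip the researchers describe: same data, same models,
    # but toxicity is now rewarded instead of penalized.
    return predicted_activity(molecule) + w * predicted_toxicity(molecule)
```

The unsettling point is how little has to change: the generative model, the training databases and the predictive models all stay the same; only the sign on one term of the objective is flipped.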

Within hours, the machine learning algorithms produced thousands of terrifying compounds. The program generated not only VX (used to assassinate Kim Jong-un’s half-brother in Kuala Lumpur in 2017), but also many other known chemical warfare agents. The researchers confirmed this by “visual identification with molecular structures” recorded in public chemistry databases. Worse, the software suggested a plethora of molecules the researchers had never seen before that “looked equally plausible” as toxins – and perhaps more dangerous ones.

All it took was a change of target, and the “harmless generative model” went from “a useful medical tool to a generator of likely lethal molecules.”

The molecules are, for now, just computational constructs, but as the authors write in their report: “For us, the genie has already been let out of the medicine bottle.” They can “erase” their records of these substances, but they “cannot erase the knowledge” of how others might recreate them.

The authors’ greatest concern is that, as far as they could discover, the potential for misuse of technology designed for good is not considered at all by the community that uses it. Drug developers, they point out, are simply not trained to think subversively.

There are countless examples in the history of science of good work being turned into harmful goals. Newton’s laws of motion are used to design rockets; the splitting of the atom gave rise to atomic bombs; pure mathematics helps governments develop surveillance software. Knowledge is often a double-edged sword.

Forget Skynet. Software and know-how designed to save our lives may prove to be one of the greatest threats we face.

Margaret Wertheim is a writer and artist who has written books on the cultural history of physics.