How Artificial Intelligence Could Trigger the Next Pandemic

Here’s an important and probably underappreciated ingredient in the glue that holds society together: Google makes it moderately difficult to learn how to commit an act of terrorism. The first few pages of results for a Google search about how to build a bomb, or how to commit murder, or how to unleash a biological or chemical weapon, aren’t really going to tell you much about how to do it.

It’s not impossible to learn these things from the Internet. People have successfully built working bombs from publicly available information. Scientists have warned others against publishing the designs for deadly viruses due to similar fears. But while the information is certainly out there on the Internet, learning how to kill a lot of people isn’t straightforward, thanks to a concerted effort by Google and other search engines.

How many lives does this save? That’s a difficult question to answer. It’s not as if we could responsibly run a controlled experiment in which instructions for committing great atrocities are sometimes easy to find and sometimes aren’t.

But it turns out we could be irresponsibly conducting a runaway experiment in just that, thanks to rapid advances in large language models (LLMs).

Security through obscurity

When they were first released, AI systems like ChatGPT were often willing to provide detailed and accurate instructions on how to carry out a bioweapon attack or build a bomb. OpenAI has largely corrected this over time. But a class exercise at MIT, written up in a preprint paper earlier this month and covered last week in Science, found that it was easy for groups of college students without relevant biology training to get detailed suggestions for biological weapons from AI systems.

“In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization,” says the paper, whose authors include MIT biosecurity expert Kevin Esvelt.

To be clear, building bioweapons requires a lot of detailed work and academic skill, and ChatGPT’s instructions are probably still too incomplete to actually enable non-virologists to do it. But it’s worth asking: Is security through obscurity a viable approach to preventing mass atrocities in a future where access to information keeps getting easier?

In almost every respect, increased access to information, detailed supportive coaching, personalized advice, and the other benefits we expect from language models are great news. But when a cheerful, helpful personal coach walks users through committing acts of terror, that’s not good news.

But it seems to me that you can attack this problem from both angles.

Information control in a world of artificial intelligence

“We need better controls at all the choke points,” Jaime Yassif of the Nuclear Threat Initiative told Science. It should be harder to get AI systems to give detailed instructions on building biological weapons. But many of the security flaws that the AI systems inadvertently revealed, such as the fact that users could contact DNA synthesis companies that don’t screen orders, and that would therefore be more likely to approve a request to synthesize a dangerous virus, are also fixable.

We could require all DNA synthesis companies to screen their orders in all cases. We could remove papers describing dangerous viruses from the training data for powerful AI systems, a solution favored by Esvelt. And we could be more careful in the future about publishing papers that provide detailed recipes for creating deadly viruses.

The good news is that key players in the biotech world are starting to take this threat seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can detect engineered DNA at scale, giving investigators the means to identify an artificially generated germ. That alliance demonstrates how cutting-edge technology can protect the world from the ill effects of… cutting-edge technology.

Artificial intelligence and biotechnology both have the potential to be extraordinary forces for good in the world. And managing the risks of one can also help manage the risks of the other: making it harder to synthesize deadly plagues protects against some forms of AI catastrophe just as it protects against human-caused catastrophe. The important thing is that, instead of letting detailed instructions for bioterrorism spread online as a natural experiment, we stay proactive and make sure that synthesizing bioweapons remains hard enough that no one can do it trivially, with ChatGPT’s help or otherwise.

A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!
