
Government and Tech Firms Confront New AI-Driven Bio Risks

By Sharon Fisher
AI models now rival virologists, fueling fears they could guide biological weapon development if left unchecked.

While a lot of attention has been paid recently to AI-powered cyberattacks and how to protect against them, some researchers say there’s an even more dangerous threat on the horizon: AI-powered biological attacks.  

In fact, Dan Hendrycks, director of the Center for AI Safety, and one of the primary opponents of an “AI Manhattan Project,” believes the threat of AI-powered bioweapons is even more serious than that of AI-powered cybersecurity. “For cyber, that's more in the future, but I think expert level virology capabilities are much more plausible in the short term,” he said on the Big Technology Podcast.  


AI’s Rapid Mastery of Microbiology Raises Red Flags

At the same time that AI has been learning to generate images of people without six fingers on each hand, it has also been acquiring more detailed knowledge of biology, researchers warned.

In an analysis of the performance of LLMs on 101 microbiology questions, researchers at the UK's AI Safety Institute (AISI) found that LLM accuracy had increased from roughly 5% to 60% in the span of two years.  

And that was in early 2024. Now, it may be worse. 

Hendrycks explained why, saying “Harvard and MIT-expert level virologists” are taking photos of themselves in the wet lab — in other words, a laboratory space intended for experiments with actual liquids and biological matter — and asking AI systems what the next step should be.

“The most recent reasoning models are getting around the 90th percentile, compared to these expert level virologists in their area of expertise,” he said. “This suggests that they have some of these wet lab type of skills and so if they can guide somebody through it step-by-step that could be very dangerous."

Even Hendrycks said he was surprised by how quickly AI systems have advanced. “Now they have these image understanding skills, so that's a problem. They didn't used to have that. That makes it a lot easier for them to do guidance or be a guide on one shoulder saying, ’Now do this, now do that.’ Since they've read basically every academic paper written, maybe that's the cause of it, but it's a surprise. I was thinking that this practical tacit knowledge wouldn't be something they would pick up on necessarily.”


Lax Biosecurity Opens the Door to AI-Driven Bioweapons

The other part of the problem is that biology researchers have demonstrated that getting access to DNA samples — even from potentially deadly diseases like the 1918 influenza virus — isn’t very hard. 

The researchers, evaluating the scrutiny new customers face when attempting to obtain these DNA samples, set up accounts using an email address associated with a recently founded nonprofit. The nonprofit did not perform lab research, the email was not linked to anyone listed on the organization's website, and the order requested shipping to an office address that, when Googled, showed no sign of laboratory space. 

All 38 vendors contacted fulfilled the order; only one raised any questions. Once obtained, such samples could hypothetically be modeled, constructed and improved using AI.

Can AI Save Us From AI-Powered Bioweapons?

Fortunately, just as AI can be used both to create and to fight cyberattacks, it may be used to help prevent bioweapons as well as create them.

“There are reasons to be optimistic about the future,” wrote the AISI in February 2024. “More powerful AI systems will also give us enhanced defensive capabilities. These models will also get better at spotting harmful content on the internet or defending from cyber-attacks by identifying and helping to fix vulnerabilities.”

In addition, many biosecurity experts are calling on the AI industry to consider the issue of biosecurity more seriously. For example, the US government’s AI Action Plan, released in July 2025, makes a number of recommendations in biosecurity, which experts welcomed.

Global Biosecurity Rules Need an AI Upgrade

One suggestion: that the National Science and Technology Council, along with its Subcommittee on Machine Learning and AI, recommend minimum data quality standards for the use of biological data, as well as data from other scientific disciplines, in AI model training. 

“This is important and needs to be advanced without delay,” wrote Crystal Grant, a Senior Fellow at the Council on Strategic Risks’ Janne E. Nolan Center on Strategic Weapons. There, she focuses on the effect of AI and other emerging technologies on biosecurity. “High-quality data in AI model training will both accelerate the biosecurity progress driven by AI and address the risks that poor standards could have in hampering the federal response to bioincidents.”

Other ways that AI could help protect against biological attacks include using AI to: 

  • Detect and identify disease outbreaks
  • Design drugs
  • Strengthen defenses through acquisition and distribution of protective gear


Closing the Loopholes in AI and Biosecurity

While the effectiveness of “security through obscurity,” or keeping vulnerabilities secret in the hopes that nobody finds out about them, is a matter of debate, some experts are calling for just that.

For example, one recommendation in the AI Action Plan is that grant-makers carefully consider the impact of scientific and engineering datasets from a researcher’s previously funded efforts when considering proposals for new projects. “For some specific cases," wrote Grant, "the potential information hazard risks of this incentivization must be seriously considered alongside the benefits of the data’s release." 

In addition, researchers at the MIT Media Lab and the RAND Corp. recommend making it more difficult to obtain DNA fragments of dangerous pathogens. “Just as ink printers recognize and reject attempts to counterfeit money, DNA synthesizers and assemblers should deny unauthorized requests to make viral DNA that could be used to ignite a pandemic,” wrote SecureDNA, a Swiss nonprofit that runs a system intended to detect such requests. 


Inside Anthropic’s Plan to Contain AI’s Bioweapon Risk 

On the AI side, Anthropic, maker of Claude, regularly studies its models’ cybersecurity and bioweapons capabilities using what it calls “frontier threats red teaming.” In May 2025, the company activated its AI Safety Level 3 Security Standard in response to Claude Opus 4’s capabilities in the context of chemical, biological, radiological and nuclear (CBRN) weapons. 

“To be clear, we have not yet determined whether Claude Opus 4 has definitively passed the Capabilities Threshold that requires ASL-3 protections,” the company wrote in its announcement. “Rather, due to continued improvements in CBRN-related knowledge and capabilities, we have determined that clearly ruling out ASL-3 risks is not possible for Claude Opus 4 in the way it was for every previous model, and more detailed study is required to conclusively assess the model’s level of risk.”

Increased protections involve increased internal security measures that make it harder to steal model weights, as well as some deployment measures intended to limit the risk of Claude being misused specifically for CBRN weapons, Anthropic said. “These measures should not lead Claude to refuse queries except on a very narrow set of topics.”

About the Author
Sharon Fisher

Sharon Fisher has written for magazines, newspapers and websites throughout the computer and business industry for more than 40 years and is also the author of "Riding the Internet Highway" as well as chapters in several other books. She holds a bachelor’s degree in computer science from Rensselaer Polytechnic Institute and a master’s degree in public administration from Boise State University. She has been a digital nomad since 2020 and has lived in 18 countries so far.

Main image: junky_jess | Adobe Stock