
AI in ASIA
Life

AI Now Outperforms Virus Experts - But At What Cost?

AI systems now match PhD virologists in complex lab tasks, but experts warn of unprecedented biosecurity risks as barriers to bioweapons fall.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

OpenAI's o3 model achieved 43.8% accuracy on virology tasks vs 22.1% for human experts

AI systems now demonstrate mastery of tacit laboratory knowledge requiring years of experience

Los Alamos and OpenAI investigating AI's potential role in biological weapons development

AI Models Now Rival PhD Virologists in Complex Laboratory Tasks

Advanced artificial intelligence systems have crossed a critical threshold in biological research. OpenAI's o3 model achieved 43.8% accuracy on complex virology tasks, nearly doubling the average human expert score of 22.1%. Anthropic's Claude 3.5 Sonnet performed at the 75th percentile, surpassing most trained virologists in laboratory problem-solving.

This breakthrough represents more than incremental progress. AI systems now demonstrate mastery of "tacit" laboratory knowledge, the intuitive understanding that typically requires years of hands-on experience to develop.

Game-Changing Opportunities for Global Health Research

The implications for medical research are profound. AI-powered tools can rapidly identify new pathogens, potentially preventing outbreaks before they spread globally. PandemicLLM, an AI system developed for COVID-19 tracking, consistently outperformed top models on the CDC's CovidHub across 19 months of data.


Laboratory efficiency could see dramatic improvements through smarter experimental design and reduced human error. For regions lacking access to top-tier virologists, AI democratises expert-level research capabilities. The AI revolution transforming healthcare extends beyond diagnosis into fundamental research.

"When new COVID-19 variants emerged or policies changed, we were terrible at predicting the outcomes because we didn't have the modelling capabilities. The new tool fills this gap," said Lauren Gardner, Whiting School of Engineering, Johns Hopkins University.

By The Numbers

  • AI diagnostic accuracy reaches 87% sensitivity and 90% specificity in medical imaging tasks
  • PandemicLLM predicted COVID-19 hospitalisation trends 1-3 weeks ahead with superior accuracy
  • Meta-analysis of 83 studies shows AI diagnostic accuracy at 52.1% overall in medical tasks
  • AI health platforms now track over 200 diseases across 50 U.S. states in real-time
  • Delphi AI system predicts disease risk years ahead using data from 402,799 participants
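For readers unfamiliar with the jargon, sensitivity and specificity are simple ratios taken from a confusion matrix: sensitivity is the share of true cases the system catches, specificity the share of healthy cases it correctly clears. A minimal sketch in Python, using made-up counts for illustration (not figures from any study cited above):

```python
# Sensitivity and specificity from confusion-matrix counts.
# All counts below are hypothetical, chosen only to illustrate the formulas.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of actual positives correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of actual negatives correctly cleared."""
    return tn / (tn + fp)

# Hypothetical screening run: 100 diseased and 100 healthy cases.
tp, fn = 87, 13   # diseased cases: 87 caught, 13 missed
tn, fp = 90, 10   # healthy cases: 90 cleared, 10 false alarms

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # prints "sensitivity = 87%"
print(f"specificity = {specificity(tn, fp):.0%}")  # prints "specificity = 90%"
```

Note that the two figures trade off against each other in practice: tuning a model to miss fewer cases (higher sensitivity) typically produces more false alarms (lower specificity).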

The Dark Side: Biosecurity Fears Mount

With great capability comes unprecedented risk. Los Alamos National Laboratory and OpenAI are investigating AI's potential role in biological threats. The UK AI Safety Institute confirmed that AI now rivals PhD-level expertise in biology tasks, raising alarm bells across the security community.

ECRI named AI-related risks the number one technology hazard for healthcare in 2025. The concern isn't theoretical. Advanced AI could dramatically lower barriers for bioweapons development, potentially enabling bad actors without extensive scientific training to create dangerous biological agents.

These risks compound existing challenges in AI safety and governance, where regulatory frameworks struggle to keep pace with technological advancement.

Risk Category    | Traditional Barriers          | AI-Enabled Scenarios
-----------------|-------------------------------|-----------------------------------
Pathogen Design  | PhD-level expertise required  | AI guidance lowers skill threshold
Lab Access       | Institutional oversight       | Remote AI consultation possible
Knowledge Gaps   | Years of training needed      | AI provides instant expert advice
Error Prevention | Human oversight essential     | AI reduces accidental exposure

Industry Response: Racing to Build Safeguards

Tech companies aren't ignoring these risks. xAI released a comprehensive risk management framework specifically addressing dual-use biological risks in its Grok models. OpenAI has deployed system-level mitigations designed to block biological misuse pathways.

Researchers increasingly advocate for know-your-customer checks for accessing powerful bio-design tools. The calls for formal regulations and licensing systems grow louder as AI capabilities expand beyond human expertise.

"There will be another pandemic, and these types of frameworks will be crucial for supporting public health response. We know from COVID-19 that we need better tools so that we can inform more effective policies," said Lauren Gardner, Whiting School of Engineering, Johns Hopkins University.

The challenge lies in balancing innovation with security. Overly restrictive measures could stifle legitimate research that saves lives, whilst insufficient controls could enable catastrophic misuse.

Key Safeguards Being Implemented

  1. System-level AI mitigations that block known biological misuse pathways and flag suspicious queries
  2. Know-your-customer verification processes for accessing advanced bio-design AI tools
  3. Dual-use research oversight frameworks specifically designed for AI-enabled biological research
  4. International coordination mechanisms for sharing threat intelligence and best practices
  5. Mandatory safety training for researchers using AI in biological applications
  6. Regular auditing and monitoring of AI model outputs in biological domains

How accurate are AI systems compared to human virologists?

AI models like o3 achieve 43.8% accuracy on complex tasks, nearly double the 22.1% average for human experts. However, performance varies significantly across different types of biological problems and research contexts.

What specific biosecurity risks does AI create?

AI could lower barriers for bioweapons development by providing expert-level guidance to non-specialists. It might enable remote consultation for dangerous research and reduce the institutional oversight traditionally required for high-risk biological work.

Are there regulations governing AI use in biological research?

Current regulations lag behind technological capabilities. Industry self-regulation and voluntary frameworks exist, but comprehensive legal structures are still developing. International coordination remains limited despite growing risks.

How do AI systems learn virology expertise?

AI models train on vast datasets of scientific literature, experimental data, and research findings. They develop pattern recognition capabilities that can mimic expert-level understanding without requiring traditional laboratory experience.

What benefits does AI offer for pandemic preparedness?

AI enables faster pathogen identification, improved outbreak prediction, accelerated vaccine development, and democratised access to expert-level research capabilities, particularly valuable for resource-constrained regions and rapid response scenarios.

The AIinASIA View: The race between AI capability and safety measures has reached a critical juncture in biological research. Whilst the potential for breakthrough medical advances is enormous, we cannot ignore the unprecedented biosecurity risks. Asian nations, with their strong manufacturing bases and growing AI expertise, must take the lead in developing responsible governance frameworks. The region's experience with pandemic response gives it unique insights into balancing innovation with safety. We need nuanced regulation that protects against misuse without stifling the life-saving potential of AI-powered biological research.

The intersection of AI and virology presents both humanity's greatest opportunity and its most dangerous threat. As these systems become more capable, the window for establishing proper safeguards narrows. The broader implications for AI governance extend far beyond any single application, but few areas demand such urgent attention as biological research.

What safeguards do you think are most crucial as AI capabilities in biological research continue to advance? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (4)

Ryota Ito (@ryota) · 28 January 2026

This is huge for places like Japan where a lot of virus research still relies on older methods. Imagine what Japanese LLMs could do for our own labs, helping with pathogen identification!

Arjun Mehta (@arjunm) · 4 July 2025

The accuracy numbers, like o3 achieving 43.8% versus human 22.1%, are crazy. I'm actually more curious about the dataset and evaluation metrics they used for those lab problems. Like how is "tacit" lab knowledge even measured for an LLM? Seems like a hard problem to benchmark properly for biosecurity.

Maria Reyes (@mariar) · 6 June 2025

this is really exciting for places like the Philippines. imagine what this could do for public health here, especially in remote areas where we don't have enough virologists. if AI can help with faster identification of pathogens and smarter experimental design, that could mean quicker responses to outbreaks. i wonder though, with all the talk about bioweapons, if these powerful AI models could also be used to predict or even prevent natural disease outbreaks more effectively in developing nations. like, moving beyond just diagnostics to proactive, regional health strategy. seems like a natural next step for the technology.

Nicolas Thomas (@nicolast) · 9 May 2025

I get the biosecurity concerns, but this bit about OpenAI having "system-level mitigations to block biological misuse pathways" is a bit vague, no? What exactly does that mean in practice? It's easy for big US companies to say they have safeguards, but without transparency, it's hard to trust, especially when we're talking about something as critical as bioweapons. We need more open, verifiable standards, maybe something coming out of a European consortium, rather than relying on closed-source promises.
