
AI in ASIA

AI Now Outperforms Virus Experts - But At What Cost?

Advanced AI models now outperform PhD-level virologists in lab problem-solving, offering huge health benefits - but raising fears of bioweapons and biohazard risks.

Intelligence Desk · 2 min read

Advanced AI models such as ChatGPT and Claude now surpass PhD-level virologists, solving complex lab problems with higher accuracy. The benefits include faster disease research, vaccine development, and public health innovation, especially in low-resource settings. At the same time, fears of biohazard misuse are rising, with experts warning that AI could dramatically increase the risk of bioweapons development by bad actors.

AI models are beating PhD virologists in the lab.

It’s a breakthrough for disease research — and a potential nightmare for biosecurity.

The Breakthrough: AI Surpasses Human Virologists

OpenAI's o3 achieved 43.8% accuracy — almost double the average human expert score of 22.1%. Anthropic's Claude 3.5 Sonnet scored at the 75th percentile, putting it ahead of most trained virologists. Notably, the AI systems proved adept at "tacit" lab knowledge — the hands-on troubleshooting know-how experts pick up at the bench — not just textbook facts.

Huge Opportunities For Global Health

- Rapid identification of new pathogens, helping the world prepare for outbreaks.
- Smarter experimental design, saving both time and resources.
- Reduced lab errors, as AI can spot issues humans might miss.
- Wider access to expert-level research, especially in regions without top-tier virologists.
- Faster vaccine and drug development, potentially saving millions of lives.

Rising Fears Over Bioterrorism Risks

- Los Alamos National Laboratory and OpenAI are investigating AI's role in biological threats.
- The UK AI Safety Institute confirmed AI now rivals PhD expertise in biology tasks.
- ECRI named AI-related risks the #1 technology hazard for healthcare in 2025.
- The World Health Organization (WHO) has also highlighted the importance of responsible AI development in health research.

What Is Being Done To Safeguard AI?

- xAI released a risk management framework specifically addressing dual-use biological risks in its Grok models.
- OpenAI has deployed system-level mitigations to block biological misuse pathways.
- Researchers urge know-your-customer (KYC) checks for access to powerful bio-design tools.
- Calls are growing for formal regulations and licensing systems to control who can use advanced biological AI capabilities.

Final Thoughts as AI Surpasses Human Virologists

What do YOU think?

If AI can now outperform the world’s top scientists — who decides who gets to use it? Let us know in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (4)

Ryota Ito (@ryota) · 28 January 2026

This is huge for places like Japan where a lot of virus research still relies on older methods. Imagine what Japanese LLMs could do for our own labs, helping with pathogen identification!

Arjun Mehta (@arjunm) · 4 July 2025

The accuracy numbers, like o3 achieving 43.8% versus human 22.1%, are crazy. I'm actually more curious about the dataset and evaluation metrics they used for those lab problems. Like how is "tacit" lab knowledge even measured for an LLM? Seems like a hard problem to benchmark properly for biosecurity.

Maria Reyes (@mariar) · 6 June 2025

this is really exciting for places like the Philippines. imagine what this could do for public health here, especially in remote areas where we don't have enough virologists. if AI can help with faster identification of pathogens and smarter experimental design, that could mean quicker responses to outbreaks. i wonder though, with all the talk about bioweapons, if these powerful AI models could also be used to predict or even prevent natural disease outbreaks more effectively in developing nations. like, moving beyond just diagnostics to proactive, regional health strategy. seems like a natural next step for the technology.

Nicolas Thomas (@nicolast) · 9 May 2025

I get the biosecurity concerns, but this bit about OpenAI having "system-level mitigations to block biological misuse pathways" is a bit vague, no? What exactly does that mean in practice? It's easy for big US companies to say they have safeguards, but without transparency, it's hard to trust, especially when we're talking about something as critical as bioweapons. We need more open, verifiable standards, maybe something coming out of a European consortium, rather than relying on closed-source promises.
