
AI in ASIA

AI Malware: Code That Writes Itself

Google discovers PROMPTFLUX malware that rewrites itself using AI models, marking a new era where threats evolve in real-time to evade detection.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Google discovers PROMPTFLUX malware that uses AI models to rewrite itself in real-time

AI-enabled attacks surged 89% year-over-year with threats adapting to evade detection systems

Underground marketplace emerges for illicit AI tools, democratizing sophisticated attack methods

The Dawn of Self-Modifying Malware

Google's Threat Intelligence Group has uncovered something that should make every cybersecurity professional sit up and take notice: malware that rewrites itself using large language models. Called PROMPTFLUX, this experimental threat represents a fundamental shift from static malicious code to dynamic, AI-powered attacks that can adapt in real time.

This isn't science fiction anymore. We're witnessing the emergence of digital threats that can chat with AI systems like Google's Gemini, generate new attack vectors on demand, and continuously evolve to evade detection. It's like trying to catch smoke, and it marks the beginning of a new era in cybersecurity warfare.

How AI Malware Operates

PROMPTFLUX demonstrates a "just-in-time" approach to malicious activity. Instead of embedding attack code directly into the malware, it dynamically generates malicious scripts by communicating with AI models. The system can obfuscate its own code to dodge antivirus software and create new attack functions on demand rather than relying on pre-programmed routines.


This adaptive capability fundamentally changes the cybersecurity landscape. Traditional security approaches rely on identifying known patterns and signatures, but AI-powered threats can constantly reinvent themselves. Each iteration presents a fresh challenge to detection systems.

The malware operates by sending carefully crafted prompts to AI systems, essentially coaxing them into generating malicious code. This technique, closely related to prompt injection, exploits the very capabilities that make AI useful for legitimate purposes.
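From the defender's side, one response to this abuse pattern is to screen outbound prompts for phrases associated with code-generation misuse before they ever reach a model. The sketch below is a deliberately simplified illustration, not a production filter: the pattern list and the `screen_prompt` helper are hypothetical, and real injection attempts are far more varied than a handful of regexes can capture.

```python
import re

# Hypothetical phrase list for illustration only. A real filter would
# combine many signals (context, caller identity, output inspection).
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"rewrite (your|this) (own )?code",
    r"obfuscate (the|this) (script|payload|code)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known misuse pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)
```

A gateway sitting between internal applications and an AI API could call `screen_prompt` on every request and quarantine matches for review, rather than blocking outright, to limit false positives.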

By The Numbers

  • AI-enabled attacks surged 89% year-over-year, with adversaries weaponising AI for reconnaissance, credential theft, and evasion
  • 73% of organisations report that AI-powered threats are already having a significant impact across the entire attack chain
  • Illicit AI discussions rose 1,500% from November to December 2025, jumping from 360,000 to over six million conversations
  • AI predator swarms can potentially unleash 10,000 personalised phishing emails per second by 2026
  • 3.3 billion credentials were stolen via infostealers infecting 11.1 million devices in recent months

The Underground AI Marketplace

Google's research reveals connections to financially motivated threat actors, suggesting cybercriminals are rapidly adopting AI capabilities. An underground marketplace for illicit AI tools is emerging, potentially democratising sophisticated attack methods for less skilled actors.

This development mirrors broader trends in AI accessibility and misuse. What once required extensive programming knowledge can now be achieved through natural language interactions with AI systems.

"Prompts are the New Malware: Adversaries exploited legitimate GenAI tools at more than 90 organisations by injecting malicious prompts to generate commands for stealing credentials and cryptocurrency." - CrowdStrike 2026 Global Threat Report

The implications extend beyond individual hackers. State-sponsored actors from North Korea, Iran, and China are already experimenting with AI to enhance their operations. This represents a significant escalation in the global cyber arms race.

Detection and Defence Challenges

Traditional cybersecurity relies on recognising patterns and signatures, but self-modifying malware breaks this model entirely. Security teams face unprecedented challenges when threats can continuously evolve and adapt their behaviour.

The following factors make AI malware particularly difficult to combat:

  • Dynamic code generation that creates unique variants for each deployment
  • Real-time adaptation to security measures and detection attempts
  • Exploitation of legitimate AI services to avoid traditional malware distribution methods
  • Ability to learn from failed attacks and adjust tactics accordingly
  • Potential for autonomous operation without direct human oversight
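One pragmatic countermeasure implied by the list above is watching network egress: however much the code mutates, self-modifying malware still has to phone an AI API. A minimal sketch follows, assuming illustrative host and process names; `AI_API_HOSTS` and `APPROVED_PROCESSES` are placeholders an organisation would populate from its own inventory, not authoritative lists.

```python
# Minimal egress-monitoring sketch: flag any process that contacts a
# generative-AI API host without being on an approved allowlist.
AI_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api.openai.com",                     # OpenAI API endpoint
}

APPROVED_PROCESSES = {
    "chrome.exe",            # ordinary browser traffic
    "approved-ai-client.exe" # sanctioned internal tooling (illustrative)
}

def flag_connection(process_name: str, destination_host: str) -> bool:
    """Return True when an unapproved process talks to an AI API host."""
    return (destination_host in AI_API_HOSTS
            and process_name not in APPROVED_PROCESSES)
```

In practice this check would hook into firewall or EDR telemetry; the point is that allowlisting who may talk to AI services sidesteps the signature problem entirely, because it inspects behaviour rather than code.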

Security professionals must fundamentally rethink their defensive strategies. Static analysis tools become less effective when code changes continuously, and behavioural detection systems must adapt to increasingly sophisticated evasion techniques.

Current Threat Landscape

Fortunately, PROMPTFLUX appears to be in early development stages. Google's analysis found commented-out features and API rate limiting mechanisms, suggesting ongoing testing rather than active deployment. However, this discovery serves as a warning of what's coming.

Traditional Malware vs AI-Powered Malware:

  • Static code signatures → Dynamic code generation
  • Predictable behaviour patterns → Adaptive response capabilities
  • Fixed attack vectors → Context-aware targeting
  • Manual updates required → Self-modifying functionality
  • Detectable through analysis → Continuously evolving signatures

The cybersecurity community is racing to develop countermeasures. Google has already deployed Big Sleep, an AI agent specifically designed to identify security vulnerabilities. This represents the emerging reality of AI versus AI in cybersecurity defence.

"In 2026, we will see the rise of AI-enabled malware that can autonomously adapt in real time to evade detection. We've already seen hints of this in polymorphic malware that rewrites itself on the fly." - Cyber Insights 2026: Malware and Cyberattacks in the Age of AI

Regional Implications and Response

Asia-Pacific nations are particularly vulnerable given their rapid AI adoption and digital transformation initiatives. Countries like Singapore are developing governance frameworks for AI systems, but keeping pace with malicious applications remains challenging.

The region's interconnected digital infrastructure and growing reliance on AI systems create attractive targets for sophisticated attacks. Financial services, healthcare, and government systems all present high-value opportunities for cybercriminals wielding AI-powered tools.

Organisations across Asia must prepare for threats that can adapt faster than traditional security measures. This includes investing in AI-powered defence systems and developing a better understanding of how AI can be both exploited and leveraged for protection.

What makes AI malware different from traditional threats?

AI malware can rewrite itself continuously, generate new attack variants on demand, and adapt to security measures in real time. Unlike static malware with fixed signatures, these threats can present a fresh variant at every encounter with security systems.

Is PROMPTFLUX currently active in cyberattacks?

No, Google's analysis suggests PROMPTFLUX remains in development or testing phases. The malware contains commented-out features and API limitations, indicating it hasn't been deployed for active attacks yet.

How can organisations prepare for AI-powered threats?

Security teams should invest in AI-powered defence systems, implement behavioural monitoring rather than signature-based detection, and develop incident response plans for rapidly evolving threats that traditional tools might miss.

Which regions are most at risk from AI malware?

Whilst specific regional data is limited, North America currently accounts for 29% of observed attacks. However, any region with high AI adoption and digital infrastructure faces significant risk from these evolving threats.

Can AI systems be protected from malicious prompt injection?

Protecting AI systems requires robust input validation, content filtering, and monitoring for unusual query patterns. However, as AI applications become more sophisticated, new vulnerabilities continue to emerge, requiring constant vigilance.
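The "monitoring for unusual query patterns" mentioned above can start as something as simple as a per-client sliding-window rate check: a compromised host querying a model hundreds of times a minute looks very different from a human user. The sketch below is a minimal illustration with made-up thresholds; `QueryRateMonitor` is a hypothetical helper, not a real library API.

```python
from collections import deque

class QueryRateMonitor:
    """Flag clients whose AI-query rate exceeds a limit within a window.

    Thresholds are illustrative; real deployments tune them per workload.
    """

    def __init__(self, max_queries: int, window_seconds: float):
        self.max_queries = max_queries
        self.window = window_seconds
        self.timestamps = {}  # client_id -> deque of query times

    def record(self, client_id: str, now: float) -> bool:
        """Record one query; return True if the client exceeds the limit."""
        queue = self.timestamps.setdefault(client_id, deque())
        queue.append(now)
        # Drop queries that have aged out of the sliding window.
        while queue and now - queue[0] > self.window:
            queue.popleft()
        return len(queue) > self.max_queries
```

Flagged clients could be throttled or routed through the stricter content filters described above, keeping latency low for ordinary traffic while squeezing automated abuse.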

The AIinASIA View: PROMPTFLUX represents a watershed moment in cybersecurity. We're transitioning from an era of static threats to dynamic, AI-powered attacks that can evolve faster than traditional defences. This isn't just another malware variant; it's a glimpse into a future where the distinction between attacker and defender capabilities becomes increasingly blurred. Organisations that don't adapt their security strategies now will find themselves woefully unprepared for the next wave of threats. The arms race has begun, and victory will belong to those who best harness AI for protection rather than merely reacting to its malicious applications.

The emergence of self-modifying malware marks a critical inflection point in our digital security landscape. As AI capabilities continue advancing, both attackers and defenders will need to fundamentally reimagine their approaches. The question isn't whether more sophisticated AI threats will emerge, but how quickly we can adapt our defences to meet them. What concerns you most about AI-powered cyber threats, and how is your organisation preparing? Drop your take in the comments below.

◇

YOUR TAKE

We cover the story. You tell us what it means on the ground.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Advertisement

Advertisement

This article is part of the AI Safety for Everyone learning path.


Latest Comments (2)

Hye-jin Choi (@hyejinc) · 5 December 2025

The GTIG findings on PROMPTFLUX's ability to dynamically generate scripts raise important questions for AI policy. How does this adaptive malware impact the "AI Assurance" framework being developed in Korea, particularly regarding continuous threat modeling? Are APAC nations adequately collaborating on shared LLM security protocols given such rapid evolution?

Krit Tantipong (@krit_99) · 26 November 2025

This PROMPTFLUX thing, it makes me think about our supply chain models. We use LLMs for forecasting and optimizing routes here in Thailand, and the idea of malware that can basically rewrite itself based on live interactions, that's a new level of risk. If it can dynamically generate scripts to evade detection, what happens when it targets systems that rely on constant, real-time data feeds for logistics? Our models are designed for efficiency, but how do you build resilience against something that adapts on the fly to bypass security checks? It's not just about blocking a known signature anymore.
