Skip to main content
AI in ASIA
AI browser security
Life

AI Browsers Under Threat as Researchers Expose Deep Flaws

Security researchers expose critical flaws in AI browsers like Perplexity's Comet, revealing how malicious actors can hijack user sessions through hidden prompts.

Intelligence Desk4 min read

AI Snapshot

The TL;DR: what matters, fast.

Brave Software researchers expose critical security flaws in Perplexity's Comet and similar AI browsers

Indirect prompt injection attacks allow malicious actors to hijack user sessions through hidden webpage instructions

Asia's digital-first economy faces amplified risks due to rapid AI browser adoption and browser-centric services

Advertisement

Advertisement

Security Researchers Sound the Alarm on AI Browser Vulnerabilities

Perplexity AI's Comet browser and similar AI-powered browsing tools face mounting scrutiny after researchers at Brave Software exposed critical security flaws. The vulnerabilities, centred on indirect prompt injection attacks, could allow malicious actors to hijack user sessions and access sensitive accounts without explicit permission.

The findings cast doubt over the entire "AI browser" category, which promises to revolutionise web browsing by letting users command their browser through natural language. With Asia leading global digital adoption rates, the implications extend far beyond a single product.

The Promise That Became a Problem

AI browsers like Comet and OpenAI's upcoming Atlas represent the next evolution of web browsing. Users can ask these tools to summarise articles, manage tasks, or even conduct research across multiple tabs. The appeal is obvious: why click through dozens of pages when you can simply ask your browser to "find the best flight deals to Tokyo"?

However, the same capabilities that make AI browsers powerful also make them vulnerable. Unlike traditional browsers that operate within well-established security boundaries, AI agents blur the lines between user instructions and web content.

"When users ask [Comet] to 'Summarise this webpage,' Comet feeds a part of the webpage directly to its LLM without distinguishing between the user's instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands."
, Brave Software Research Team

By The Numbers

  • One hidden payload in a Reddit post successfully extracted one-time password tokens from a user's email
  • Traditional web security measures like same-origin policy and CORS fail to protect against these AI-specific attacks
  • Zero convincing solutions exist to distinguish between user instructions and malicious web content
  • Multiple AI browser products currently in development could face identical vulnerabilities
  • Asia accounts for over 60% of global internet users, amplifying the potential impact

The core vulnerability lies in how AI browsers process information. When a user asks the AI to analyse a webpage, the browser feeds both the user's request and the webpage content to the same language model. Malicious actors can exploit this by embedding hidden instructions within seemingly innocent web content.

Asia's Unique Exposure to AI Browser Risks

Asia's digital-first economy creates specific vulnerabilities. The region's rapid adoption of new technologies, combined with high mobile usage and browser-centric services, means AI browser flaws could have outsized consequences.

Consider Singapore's financial sector, where employees might use AI browsers for research while logged into corporate systems. A single compromised session could expose banking credentials, customer data, or proprietary information. Similar risks exist across Indonesia's fintech landscape or Vietnam's e-commerce platforms.

"The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long-standing Web security techniques."
, Brave Software Security Analysis

Regional regulators from the Monetary Authority of Singapore to Australia's ACCC will likely scrutinise these developments closely. The blurred boundaries between browser provider, AI service, and user agent create regulatory grey areas that could hamper broader AI adoption across Asia.

Risk Category Traditional Browsers AI Browsers
Cross-site scripting Blocked by same-origin policy Bypassed via prompt injection
Session hijacking Limited to current tab Can span multiple authenticated sessions
Unauthorised actions Requires user interaction Can execute without explicit consent
Data extraction Blocked by CORS policies AI agent can access cross-origin content

Practical Steps for Organisations and Users

Until AI browser security matures, organisations across Asia should implement strict boundaries around these tools:

  • Segregate AI browsers from sensitive work: Use traditional browsers for banking, corporate systems, and authenticated services
  • Require explicit confirmation for all automated actions: Never allow AI agents to act without clear user consent
  • Monitor unusual session activity: Watch for unexpected logins or service access patterns
  • Educate teams about prompt injection risks: Traditional phishing awareness doesn't cover AI-specific vulnerabilities
  • Treat AI browser sessions as potentially compromised: Assume any automated action could be maliciously triggered

The AI browser revolution promised by companies like Perplexity faces a fundamental architectural challenge. Current language models cannot reliably distinguish between legitimate user instructions and malicious content embedded in webpages.

The Technical Challenge Behind the Headlines

The security flaws aren't simple bugs that can be patched. They represent systemic problems with how AI browsers process mixed content sources. When an AI agent reads both user instructions and webpage content as input, it treats both as equally valid commands.

Researchers have attempted various solutions, from input sanitisation to prompt engineering, but none provide robust protection. The fundamental issue remains: language models lack the contextual understanding to separate trusted instructions from untrusted web content.

Can these vulnerabilities be fixed with current AI technology?

Not effectively. Despite numerous attempts, no one has demonstrated a reliable method for AI models to distinguish between user instructions and potentially malicious web content when both are processed together.

Are all AI browsers vulnerable to these attacks?

Any AI browser that feeds web content directly to language models alongside user instructions faces similar risks. The vulnerability is architectural rather than product-specific.

Should businesses ban AI browsers entirely?

Complete bans may be excessive, but organisations should restrict AI browser use to low-risk activities and maintain strict separation from sensitive systems and authenticated services.

How do these risks compare to traditional browser security threats?

AI browser vulnerabilities bypass established web security measures like same-origin policy and CORS, creating entirely new attack vectors that existing protections cannot address.

What's the timeline for secure AI browser development?

Given the fundamental nature of the challenge, secure AI browsers may require significant advances in AI model design and training, potentially taking years rather than months.

The broader implications extend beyond individual products. As OpenAI prepares to challenge Chrome with its own AI-powered browser, the security concerns raised by Brave's research become increasingly urgent.

The AIinASIA View: The rush to deploy AI browsers reflects a broader pattern we've seen across the AI industry: impressive capabilities undermined by inadequate security considerations. While we support innovation in browsing technology, the current generation of AI browsers appears fundamentally flawed from a security perspective. Asian organisations, with their high digital adoption rates and integrated online services, face particular risks. We recommend extreme caution until these architectural issues are resolved, not through patches but through fundamental redesign of how AI agents interact with web content.

The expansion of AI browsers to mobile platforms adds another layer of complexity. Mobile devices often store more sensitive data and have weaker security boundaries than desktop systems, potentially amplifying the impact of successful attacks.

For now, the promise of AI-powered browsing remains tantalisingly out of reach. The technology works impressively in demonstrations, but the security foundation isn't solid enough for widespread deployment, especially in enterprise environments or sensitive applications.

The next few months will prove crucial as more AI browser products enter the market. Will developers find solutions to these fundamental security challenges, or will the category need to retreat and rebuild from a more secure foundation?

What security measures is your organisation taking as AI tools proliferate in the workplace? Drop your take in the comments below.

YOUR TAKE

We cover the story. You tell us what it means on the ground.

What did you think?

Written by

Share your thoughts

Join 4 readers in the discussion below

This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

Advertisement

Advertisement

This article is part of the This Week in Asian AI learning path.

Continue the path →

Latest Comments (4)

Tony Leung@tonyleung
AI
18 November 2025

The indirect prompt injection vulnerability described, particularly with Comet feeding raw webpage content to its LLM, is exactly the kind of risk we stress to regulators here regarding consumer data. It's not just about the data itself, but how untrusted inputs can manipulate agent actions at scale. This needs hardening before any real adoption in regulated markets.

Pierre Dubois
Pierre Dubois@pierred
AI
12 November 2025

This indirect prompt injection is a known vector, we've seen similar patterns with other LLM integrations en effet. It's not unique to Perplexity. Voilà.

Rohan Kumar
Rohan Kumar@rohank
AI
4 November 2025

This is exactly what we're explaining to clients! The "tell it what you want, it does it" part is so powerful for automating workflows. We've seen incredible efficiency gains in summarising documents and even drafting emails from meeting notes. The security aspect with Comet and indirect prompt injection is a real consideration, but the utility for businesses is just too high to ignore. Gotta find the balance!

Ji-hoon Kim@jihoonk
AI
1 November 2025

The indirect prompt injection described for Comet is exactly why we need stronger on-device LLMs. Sending parts of a webpage to a cloud LLM for summarization introduces too many attack vectors. If the processing happens locally, without shipping data off to a third party, the risk profile changes completely. It's not just about privacy, it's a fundamental security layer that's missing when you rely on remote inference for something so critical. This whole "AI browser" concept needs to bake in edge computation from the start to be truly viable.

Leave a Comment

Your email will not be published