Cookie Consent

    We use cookies to enhance your browsing experience, serve personalised ads or content, and analyse our traffic. Learn more

    Life

    AI Browsers Under Threat as Researchers Expose Deep Flaws

    New audit shows that granting an AI browser control over the web may open the door to far broader risks than previously realised

    Anonymous
    6 min read27 October 2025
    AI browser security

    AI Snapshot

    The TL;DR: what matters, fast.

    New research reveals security flaws in AI browsers like Perplexity AI’s Comet, which could allow malicious actors to exploit user permissions.

    The vulnerabilities stem from indirect prompt injection, where AI models process malicious instructions embedded in web content as trusted commands.

    These security issues challenge traditional web security measures and pose risks, especially in regions with high digital adoption.

    Who should pay attention: Operating system and browser developers | AI developers | Regulators | Consumers

    What changes next: Privacy and security debates surrounding AI browsers are likely to intensify.

    AI Browsers Under Threat as Researchers Expose Deep Flaws

    What happens when you give an AI browser access to your tabs, and hand it permissions to act on your behalf?

    That’s the question raised by a fresh wave of security research. The focus falls squarely on what we might call the “AI browser” trend, and particularly the latest alerts surrounding the Comet browser from Perplexity AI. Researchers at Brave Software have found what they describe as glaring vulnerabilities in Comet’s architecture, and the implications reach well beyond one product. As the region from Singapore to Seoul and Sydney explores next‑gen browsing tools, the message is clear: proceed, but carefully.

    Why “AI browser” looked like the next big thing

    Having watched the rise of LLM chatbots and autonomous agents over recent years, it was inevitable that someone would try to embed them directly into a web browser. That is exactly what Perplexity’s Comet, and others such as ChatGPT Atlas from OpenAI, are attempting: the promise of “tell the browser what you want, it does it”. Features such as summarising website content, analysing screenshots, managing tasks, or even shopping for you all seem seductive.

    But here’s the rub: while the underlying technology may be impressive, the security implications of handing broad permission‑to‑act to an agentic browser were not fully vetted — and recent work shows the danger.

    What the researchers found, and why it matters

    At the heart of the vulnerability is a technique known as indirect prompt injection. In simple terms, an attacker embeds malicious instructions within otherwise benign‑looking web content, comments, hidden text, PDFs, social posts, that are processed by the AI agent under the assumption that they are “safe” user inputs. The research from Brave explains:

    “When users ask [Comet] to ‘Summarise this webpage,’ Comet feeds a part of the webpage directly to its LLM without distinguishing between the user’s instructions and untrusted content from the webpage. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands.”

    In one demonstrative case, a hidden payload in a reddit post caused the agent to access a user’s email account and extract one‑time password tokens: all without the user intentionally triggering these actions.

    What makes this more than a simple “bug” is the systemic problem: traditional web‑security assumptions such as same‑origin policy or CORS no longer apply neatly when an AI agent spans across tabs and accesses authenticated sessions. Brave put it thus:

    “The attack demonstrates how easy it is to manipulate AI assistants into performing actions that were prevented by long‑standing Web security techniques.”

    Enjoying this? Get more in your inbox.

    Weekly AI news & insights from Asia.

    Implications for Asia’s tech ecosystem

    In Asia, where digital adoption is among the highest globally and many services are mobile‑ or browser‑centric, the arrival of agentic browsers holds both promise and peril. Consider some region‑specific angles:

    Enterprise risk: A bank in Singapore or a fintech in India might be exploring “AI‑augmented browsing” for research or customer‑service work. If the browser agent can act on their behalf, even a single exploitation could expose corporate credentials, dashboards or data. As one commentary notes, the vulnerabilities “could allow access to banking, corporate systems, private emails, cloud storage and other services.

    Consumer convenience vs security: In markets such as Indonesia or Vietnam, consumers are primed to adopt new tools quickly. The promise of “just ask your browser” may drive adoption before rigorous controls are in place: which is precisely what the reports warn against.

    Regulatory and trust implications: Asia’s regulators (from Monetary Authority of Singapore to Australia’s ACCC) will no doubt follow this with interest, especially since AI‑driven browsing blurs lines between browser provider, AI service and user agent. A failure of trust in one tool could hamper broader AI adoption in the region.

    What users and organisations should watch out for

    Given these red flags, here are prudent steps for professionals and organisations in Asia:

    1. Treat AI browsers as distinct from your “safe” browsing tool: Until the security model is mature, use agentic browsers only for low‑risk tasks. Keep banking, sensitive accounts and corporate access within standard (well‑protected) browsers.
    2. Require explicit user confirmation for any automation or agent action: The key risk arises when the browser agent acts without asking. User consent must be an explicit step, not assumed.
    3. Distinguish between trusted user instructions and untrusted web content: Developers of AI browser tools must engineer this separation, and until they do, vulnerability remains.
    4. Monitor device, session and behavioural anomalies: If a browser agent logs into unexpected services or initiates unusual workflows, treat it as a potential indicator of compromise.
    5. Educate users and staff about the new class of risks: The vulnerabilities here aren’t classical phishing only, they exploit the agentic layer. Awareness is key.

    Looking ahead: Are these risks solvable?

    Yes, but not easily. The core problem is architectural: language models generally don’t distinguish between “instructions from the user” and “content from a webpage” when they’re fed both as part of the same input stream. As the Brave post cautions:

    “To date (despite many attempts) nobody has demonstrated a convincing and effective way of distinguishing between the two.”

    In other words, researchers believe that every browser which gives its AI agent broad privileges will face similar risks unless its core design is revised. Why does this matter for Asia? Because many tech projects in the region are built on open‑source models, fast roll‑outs and high user expectations — which means there is both speed of adoption and risk of oversight.

    The upcoming interest in ChatGPT Atlas from OpenAI may raise the stakes further. If widely adopted outside of tech‑savvy early adopters, the exposure surface could be huge. For more on the challenges of AI, consider reading about AI Cognitive Colonialism. You can also delve into the technical specifics of prompt injection attacks through research papers like "Universal and Transferable Adversarial Attacks on Aligned Language Models" by Zou et al. (2023)^[https://arxiv.org/abs/2307.15043].

    The Final word

    AI‑powered browsing is an exciting frontier, but the research shows we’re far from safe. The notion of handing an agent browser the keys to your tabs and accounts was always bold; now we know how easily those keys can be misused. For corporate users and consumers alike in Asia, the advice is simple: engage with AI browsers, but do so with caution, clear boundaries and active control. Until we see robust guardrails, the promise of the “browser that thinks for you” remains a risk as well as an opportunity. You might also find our discussion on How AI Agents Will Break Passkeys And 3 Ways To Fix Them insightful.

    What are your organisation’s controls for “agentic” tools? It might be time to revisit them

    Anonymous
    6 min read27 October 2025

    Share your thoughts

    Join 3 readers in the discussion below

    Latest Comments (3)

    Kunal Saxena@kunal_s_ai
    AI
    26 November 2025

    This is a proper eye opener, innit? What really gets me thinking is beyond just the security aspect. If these AI browsers are so deeply integrated, how do we ensure they're not subtly influencing our information consumption? Like, are they optimising for "correct" answers or just "popular" ones? The article focuses on the immediate threat, which is fair enough, but the long term implications for how we perceive and interact with online content are massive. It’s a bit of a pickle, really.

    Rosa Dela Cruz
    Rosa Dela Cruz@rosa_dc
    AI
    21 November 2025

    Ay, grabe. This is exactly what I mean about tech moving too fast without thinking of the security implications. It's like we never learn, noh?

    Sofia Garcia
    Sofia Garcia@sofia_g_ai
    AI
    10 November 2025

    Wow, this is quite concerning, no? It makes you wonder how much vetting these AI browsers actually undergo before hitting the market. Like, what's the actual *due diligence* process for something that can potentially expose so much user data, especially here in the Philippines where we're already quite vulnerable online?

    Leave a Comment

    Your email will not be published