We often hear that artificial intelligence is reshaping everything from how we write emails to how we shop online, but one of the most transformative frontiers is the merging of AI with our everyday tools.
Browsers, once plain portals to the web, are now evolving into AI-powered agents that interpret, summarize, and even act on content. This shift from passive browsing to agentic browsing is undeniably seductive: your browser doesn’t just fetch information, it thinks alongside you. However, that very intimacy of trust and expectation, where the browser “knows best,” can be weaponized. In security circles, pushing boundaries is standard, and it’s only a matter of time until assumptions of trust become the vector for attack.
A Closer Look at Comet and the Stakes Involved
Perplexity, known for its AI-enhanced search engine and strict adherence to references, has recently launched Comet, an AI-driven browser that blends conversational intelligence directly into the web experience. For users, the appeal is immediate: fewer tabs, more context, and a seamless conversational layer that “just works.” But with great trust comes risky expectations. ActiveFence researchers probed Comet’s defenses with one goal: to determine whether hidden or indirect instructions could influence the agent’s behavior without the user’s awareness.
In tests, they discovered an indirect prompt injection vulnerability. The researchers found that certain embedded cues could, under specific conditions, alter how the AI responded to content after hitting internal limits. They demonstrated that attackers could potentially insert hidden prompts in areas of web content not visible to users, taking advantage of properties that AI agents use as signals to guide their behavior. These concealed instructions could then manipulate the AI into showing misleading “upgrade” prompts or links that appeared legitimate, all without requiring traditional code-based exploits.
Trust, Prompt Injection, and Phishing: A Dangerous Convergence
What struck the researchers most was how invisible contextual cues became the attack vector. Because Comet is designed to obey instructions, the AI agent can be coaxed into following hidden or misleading prompts instead of the user’s command. The team demonstrated how attackers could exploit that behavior to push phishing pages and harmful links, all presented under the guise of normal browser content. Rather than a forced exploit of traditional web holes, this is subversion that operates entirely through language and metadata.
Crucially, the vulnerability seemed limited to certain configurations, while Pro users, with more customizable model selection and stricter guardrails, were less susceptible. However, that distinction only heightens the risk: since the product is now free for all users, with an option to upgrade, exposure could be widespread. The deeper issue, as ActiveFence argues, is that security and content filtering must scale across all tiers, not just premium ones.
Lessons for the AI-Enabled Future
ActiveFence’s revelations about Comet are more than a high-profile bug report. They hint at a broader truth: in systems where instruction-following is core, malicious language can serve as malware. Future attackers won’t always exploit buffer overflows or SQL injections. Instead, they’ll use ingenious prompt engineering and contextual subversion.
For developers and platform owners, the takeaway is urgent. Guardrails and visibility controls are not optional. Sandbox boundaries must anticipate not just malicious code but malicious instructions woven into ordinary web elements. All users deserve protection, not just the ones who pay more. As AI agents proliferate into email clients, document editors, and communication tools, any trusted system is a potential point of infiltration.
What Happens Next and Why It Matters
Perplexity may patch this flaw. However, that won’t resolve the issue: adversarial prompts will evolve more quickly than browser updates. ActiveFence’s expose should serve as a warning to all AI-enabled platforms to treat every potential instruction, visible or not.
This story exposes the vulnerability at the core of AI empowerment. As we move deeper into an era where tools listen, interpret, and act on our behalf, we must ask: who else is whispering into the machine?




















