AI Chatbots Leak User Conversations to Meta, Google and TikTok Trackers
Study finds AI chat services expose conversation URLs and content to third‑party trackers, raising privacy and GDPR concerns.

The privacy myth: why you shouldn't trust AI with your secrets
TL;DR
Major AI chatbots transmit conversation links and message text to trackers from Meta, Google and TikTok, leaving chats accessible to anyone with the URL.
Context
Researchers at IMDEA Networks Institute examined four popular generative AI assistants: ChatGPT, Claude, Grok and Perplexity. Their analysis revealed that the services embed advertising and analytics trackers in the same way traditional websites do. Users often share health, work or personal details with these bots, assuming the dialogue is private.
Key Facts
- Grok and Perplexity send permanent conversation URLs to Meta Pixel, Meta's analytics tracking pixel. The URLs lack proper access controls, so anyone holding a link can view the full chat.
- Grok also places raw message text in Open Graph metadata, a format TikTok scrapes for link previews, exposing the content to TikTok's tracking infrastructure.
- All four chatbots embed scripts or pixels from Meta, Google and TikTok, allowing those firms to collect chat titles, URLs and associated cookies.
- Weak or missing access control means a simple link grants public read-only access, effectively turning private chats into public pages.
- The combination of cookies, hashed emails and server-side identifiers creates a persistent profile that can re-identify users across services.
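To make the Open Graph leak concrete, the sketch below parses a hypothetical shared-conversation page whose markup embeds message text in an og:description tag and loads a Meta Pixel script. SAMPLE_PAGE and TRACKER_DOMAINS are illustrative assumptions, not the actual markup served by any of the chatbots; the point is only that standard-library parsing is enough to surface both leak channels:

```python
from html.parser import HTMLParser

# Hypothetical shared-conversation page (illustrative, not real vendor
# markup): message text in an Open Graph tag plus a Meta Pixel script.
SAMPLE_PAGE = """
<html><head>
<meta property="og:description" content="User: my private question about a diagnosis">
<script src="https://connect.facebook.net/en_US/fbevents.js"></script>
</head><body></body></html>
"""

# Illustrative shortlist of well-known tracker hosts.
TRACKER_DOMAINS = (
    "connect.facebook.net",
    "facebook.com/tr",
    "google-analytics.com",
    "analytics.tiktok.com",
)


class LeakScanner(HTMLParser):
    """Collects Open Graph content and third-party tracker script URLs."""

    def __init__(self):
        super().__init__()
        self.og_content = {}     # og:* property -> content string
        self.tracker_urls = []   # script src attributes matching trackers

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("property") or "").startswith("og:"):
            self.og_content[attrs["property"]] = attrs.get("content", "")
        if tag == "script":
            src = attrs.get("src") or ""
            if any(domain in src for domain in TRACKER_DOMAINS):
                self.tracker_urls.append(src)


scanner = LeakScanner()
scanner.feed(SAMPLE_PAGE)
print(scanner.og_content)
print(scanner.tracker_urls)
```

Anything that fetches the page, including TikTok's link-preview scraper, sees the same og:description text, which is why placing raw messages there exposes conversation content.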
What It Means
The findings highlight a gap between user expectations and the technical reality of AI chat platforms. While the interfaces appear isolated, they rely on the same data-collection pipelines that power web advertising. Under the EU General Data Protection Regulation (GDPR), the lack of a clear legal basis for sharing conversation data and the insufficient user notice could constitute non-compliance. Organizations that integrate these bots into internal workflows may inadvertently expose confidential information to third parties.
Mitigations
- Review vendor privacy policies and request documentation on tracker usage.
- Block known tracking domains (e.g., *.facebook.com/tr, *.google-analytics.com, *.tiktok.com) at the network perimeter or via browser extensions.
- Enforce strict access controls on any generated conversation URLs; treat them as sensitive links.
- Deploy Content-Security-Policy headers to limit third-party script execution in embedded chatbot widgets.
- Conduct regular audits for data leakage using tools that detect outbound requests to advertising networks.
- For GDPR-covered entities, perform a data-protection impact assessment (DPIA) before deploying AI chat services.
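A Content-Security-Policy restriction of the kind listed above could look like this nginx sketch. The vendor domain is a hypothetical placeholder, and a real policy would need to be tuned to whatever the embedded widget legitimately loads:

```nginx
# Illustrative nginx config (chat.example-vendor.com is a placeholder):
# allow scripts and outbound connections only from our own origin and
# the chatbot vendor, so tracker scripts from Meta, Google or TikTok
# embedded in the widget are refused by the browser.
add_header Content-Security-Policy
    "default-src 'self'; script-src 'self' https://chat.example-vendor.com; connect-src 'self' https://chat.example-vendor.com; img-src 'self'"
    always;
```

Because the browser enforces the policy, this blocks tracker requests even when the widget's own markup tries to load them, though it can also break widget features that depend on third-party resources.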
What to Watch Next
Monitor upcoming regulator guidance on AI-driven data collection and watch for vendor patches that tighten URL permissions or remove third-party trackers.