
Study Finds 13+ Trackers Leak AI Chat Data, Including ChatGPT Links Sent to Google Analytics

Researchers uncovered more than 13 third‑party trackers leaking conversation data from AI chat platforms, with ChatGPT sending full conversation links to Google Analytics even when users decline cookies.

Alex Mercer

Senior Tech Correspondent


Over 13 external trackers were found leaking conversation data from major AI chat platforms; ChatGPT sends full conversation links to Google Analytics for free users regardless of cookie consent.

Context

A recent analysis of four leading generative AI assistants—ChatGPT, Claude, Grok and Perplexity—revealed systematic data leakage to advertising and analytics services. The study, published on the LeakyLM site, examined how each platform handles user‑generated content and consent signals.

Key Facts

- Researchers identified more than 13 distinct third‑party tracking tools embedded in the four AI services, confirming that every platform leaks some user data.
- For free‑tier ChatGPT users, the full URL of each conversation and the page title are transmitted to Google Analytics, a web‑traffic service, even when the user declines cookies (see the sketch after this list).
- Claude forwards email addresses and conversation titles to Intercom, a customer‑support platform, and shares activity signals with additional tools when non‑essential cookies are accepted.
- Grok also leaks conversation links and titles to Google Analytics and DoubleClick, with occasional exposure to TikTok and Meta services.
- Perplexity stopped using Meta Pixel in April 2026 but continues to send raw email addresses, conversation titles and metadata to services such as Datadog and Singular.
- The leaked URLs and titles can contain sensitive cues about a user’s interests or personal issues, potentially linking the conversation to advertising profiles when combined with cookies or hashed email identifiers.
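To make the Google Analytics finding concrete, the sketch below shows roughly what a GA4 "page_view" hit carrying a conversation URL and page title looks like when expressed through Google's public Measurement Protocol. The measurement ID, API secret, client ID, URL and title are invented for illustration, and this is not the platforms' actual code (which fires equivalent hits from the browser); it only shows which fields leave the page.

```python
# Illustrative sketch only: the shape of a GA4 Measurement Protocol "page_view"
# hit carrying a conversation URL and title, similar to what the study describes
# for free-tier ChatGPT sessions. All identifiers and URLs below are made up.
import json
import urllib.request

GA_ENDPOINT = "https://www.google-analytics.com/mp/collect"
MEASUREMENT_ID = "G-XXXXXXXXXX"   # hypothetical analytics property ID
API_SECRET = "example-secret"     # hypothetical API secret

def build_page_view_hit(client_id: str, conversation_url: str, page_title: str) -> urllib.request.Request:
    """Build (but do not send) a GA4 page_view hit describing a chat page."""
    payload = {
        "client_id": client_id,  # pseudonymous visitor ID, typically read from the _ga cookie
        "events": [
            {
                "name": "page_view",
                "params": {
                    # These two fields are what the researchers observed leaving the page:
                    # the full conversation URL and its human-readable title.
                    "page_location": conversation_url,
                    "page_title": page_title,
                },
            }
        ],
    }
    url = f"{GA_ENDPOINT}?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_page_view_hit(
        client_id="123456789.987654321",
        conversation_url="https://chat.example.com/c/abc123",  # placeholder, not a real chat URL
        page_title="Managing anxiety before a court hearing",   # shows how a title alone can be sensitive
    )
    print(req.full_url)
    print(req.data.decode())
```

Even without the message contents, the title and URL alone are often enough to reveal what the conversation was about.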

What It Means

The findings suggest that privacy controls advertised by AI providers may not match actual data flows. Even when users opt out of non‑essential cookies, server‑side tracking can still capture conversation metadata. This creates a structural risk: third‑party services receive enough information to reconstruct the subject of a chat or associate it with a user’s identity.
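The "hashed email identifiers" mentioned above are why this linkage is possible even when no raw address changes hands. Ad and analytics platforms commonly match users on the SHA‑256 of a normalised email address, so any two services that receive the same hash can join their records. The snippet below is a general illustration of that matching scheme with invented addresses, not a claim about any specific provider's pipeline.

```python
# Illustrative sketch: why a "hashed" email still acts as a stable cross-service
# identifier. The addresses below are invented examples.
import hashlib

def hashed_email_id(email: str) -> str:
    """Normalise (trim, lowercase) and SHA-256 hash an email, a common ad-tech matching key."""
    normalised = email.strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# The same person produces the same identifier everywhere the hash is shared,
# so a chat service and an ad network can link their records on it.
print(hashed_email_id("Jane.Doe@example.com"))
print(hashed_email_id(" jane.doe@example.com "))  # identical output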

For individuals, the exposure raises concerns about discussing health, legal or financial matters with AI assistants that are presumed private. For businesses, inadvertent leakage of proprietary information could have competitive or regulatory repercussions.

Regulators may scrutinize whether current privacy disclosures satisfy data‑protection laws, especially in jurisdictions with strict consent requirements. Meanwhile, developers of AI chat tools face pressure to implement transparent, opt‑in tracking architectures that truly respect user choices.

What to watch next

Monitor responses from OpenAI, Anthropic and other AI firms as they adjust privacy settings, and watch for any regulatory actions targeting cross‑platform data leakage in generative AI services.
