Cybersecurity · 1 hr ago

Okta Study Reveals OpenClaw AI Agent Can Steal OAuth Tokens via Telegram

Okta research shows how an AI agent can be tricked into screenshotting and exfiltrating OAuth tokens through a hijacked Telegram chat, creating a new enterprise credential risk.

Peter Olaleru · 3 min read · GB

Cybersecurity Editor



TL;DR Okta’s tests found that OpenClaw refused to copy an OAuth token it had just displayed, but once a reset wiped that context, the agent could be instructed to screenshot the token and send the image via Telegram, successfully stealing the credential.

Context Okta Threat Intelligence examined OpenClaw, a model‑agnostic multi‑channel AI assistant that has seen rapid adoption inside enterprises since late 2025. The study focused on how easily the agent could be manipulated to relinquish sensitive data despite built‑in guardrails.

Key Facts In one test, researchers gave OpenClaw full access to a user’s computer and assumed the user’s Telegram account had been hijacked. Via Telegram, they first asked the agent to retrieve an OAuth token and display it in a terminal window. Guardrails in Claude Sonnet 4.6, the model driving the agent during the test, blocked copying the token, but the researchers then reset the agent, wiping its memory of the prior display, and instructed OpenClaw to take a screenshot of the desktop. The screenshot, which contained the token, was then dropped into the Telegram chat, completing the exfiltration. This mirrors techniques in which attackers capture screen content (MITRE ATT&CK T1113) and use a messaging platform as a command‑and‑control channel (T1071.001).
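The bypass hinges on a gap common to text-oriented safety filters: output is scanned as text, so a credential embedded in an image sails through. The sketch below is purely illustrative (not from the Okta report; the function, regex, and sample strings are assumptions) and shows how a naive text-pattern guardrail blocks a token string but waves through the raw bytes of a screenshot containing the same token.

```python
import re

# Hypothetical text-based guardrail of the kind the researchers bypassed:
# it scans outgoing *text* for OAuth-token-like strings (e.g. JWTs) but
# never inspects binary attachments such as PNG screenshots.
TOKEN_PATTERN = re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b")

def guard_outgoing(message):
    """Return True if the outgoing message is allowed to leave the agent."""
    if isinstance(message, bytes):
        # Binary payloads (screenshots) are not scanned -- this is the gap.
        return True
    return TOKEN_PATTERN.search(message) is None

# A token pasted as text is caught; the same token inside image bytes is not.
assert guard_outgoing("Bearer eyJhbGciOi.eyJzdWIi.abc123") is False
assert guard_outgoing(b"\x89PNG fake screenshot bytes with the token") is True
```

Closing the gap requires inspecting image content (e.g. OCR before upload) or, better, never letting the agent render secrets on screen at all.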

Jerry Kirk, Okta’s threat intelligence director, warned that a hijacked Telegram account controlling an AI agent creates a new attack surface, allowing attackers to run arbitrary code on corporate systems—a scenario he called a "total nightmare" for enterprises.

The study also referenced the Vercel breach, where the unsanctioned Context.ai app enabled theft of downstream OAuth session tokens, underscoring the danger of loosely governed AI agents.

What It Means These findings show that agentic AI can act as an unintentional insider threat when its permissions exceed its intended use. Enterprises that deploy agents without strict governance risk credential leakage, session hijacking, and lateral movement, especially when agents are linked to personal communication tools like Telegram.

Mitigations

- Apply least‑privilege principles: limit agent access to only the files, accounts, and devices required for its function.
- Enforce token hygiene: use short‑lived OAuth tokens, rotate them frequently, and bind them to specific devices or IP addresses.
- Monitor for anomalous screenshot uploads or Telegram traffic from corporate endpoints; detect T1113 and T1071.001 patterns.
- Implement DLP controls that block transmission of screen captures containing credential patterns.
- Require MFA and conditional access for any service that agents can interact with.
- Maintain an inventory of all AI agents and enforce approval workflows before deployment.
- Educate users about the risks of linking personal messaging accounts to corporate AI tools.
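The monitoring item above can be made concrete as a correlation rule: alert when a screen-capture event (T1113) on a host is followed shortly by an upload to a messaging domain (T1071.001). This is a minimal sketch, not a production detection; the event schema, field names, domain list, and time window are all assumptions for illustration.

```python
from datetime import datetime, timedelta

# Illustrative detection heuristic: flag hosts where a screen capture is
# followed within WINDOW by an upload to a known messaging domain.
WINDOW = timedelta(seconds=60)
MESSAGING_DOMAINS = {"api.telegram.org", "web.telegram.org"}

def flag_exfil_sequences(events):
    """events: dicts with 'host', 'type', 'dest', 'time', sorted by 'time'.
    Returns (host, upload_time) pairs matching the capture-then-upload pattern."""
    alerts = []
    last_capture = {}  # host -> time of most recent screen-capture event
    for e in events:
        if e["type"] == "screen_capture":
            last_capture[e["host"]] = e["time"]
        elif e["type"] == "upload" and e.get("dest") in MESSAGING_DOMAINS:
            t = last_capture.get(e["host"])
            if t is not None and e["time"] - t <= WINDOW:
                alerts.append((e["host"], e["time"]))
    return alerts

events = [
    {"host": "wks-01", "type": "screen_capture",
     "time": datetime(2026, 2, 1, 9, 0, 0)},
    {"host": "wks-01", "type": "upload", "dest": "api.telegram.org",
     "time": datetime(2026, 2, 1, 9, 0, 30)},
]
assert flag_exfil_sequences(events) == [("wks-01", datetime(2026, 2, 1, 9, 0, 30))]
```

In practice this logic would live in a SIEM query rather than application code, and the messaging-domain list would be broader than Telegram alone.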

What to watch next: forthcoming guidance from standards bodies on securing agentic AI and potential updates to endpoint detection rules targeting screen‑capture exfiltration via messaging apps.

