Cybersecurity · 1 hr ago

Unauthorized Access to Anthropic’s ‘Dangerous’ Claude Mythos Model Reported on Same Day as Limited Reveal

On April 8, Anthropic's restricted Claude Mythos AI model suffered unauthorized access via a third-party vendor, raising concerns about advanced AI supply chain security.

Peter Olaleru · 3 min read · US

Cybersecurity Editor


Source: Nypost

Anthropic's highly restricted Claude Mythos AI model experienced unauthorized access on April 8, the same day it was revealed to only 40 corporate clients under Project Glasswing. This incident highlights critical supply chain security challenges for advanced AI systems.

On April 8, Anthropic introduced Claude Mythos, an artificial intelligence model accessible to approximately 40 selected corporate clients through an initiative named Project Glasswing. This limited rollout followed internal testing where Mythos demonstrated the ability to uncover significant cybersecurity flaws across every major operating system and web browser. The company previously described Mythos as "potentially dangerous" due to its advanced capabilities.

The same day Anthropic revealed Mythos, reports emerged of unauthorized access to the Claude Mythos Preview environment. Anthropic confirmed it is investigating, stating the access occurred through a third-party vendor environment. Threat actors reportedly gained entry by guessing the model's online address, exploiting naming conventions used in prior Anthropic releases. One individual involved in the breach reportedly held some level of access through their role as a third-party contractor for the company. Although the unauthorized users reportedly accessed the model repeatedly, Anthropic says it has no evidence of activity beyond the vendor environment or of any impact on its other systems.
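To see why predictable naming conventions matter, consider how cheaply an attacker can enumerate candidate endpoints once a pattern is suspected. The sketch below is purely illustrative: the model codenames, environment labels, and hostname pattern are invented assumptions, not real Anthropic infrastructure.

```python
# Hypothetical sketch: enumerating candidate preview URLs from a naming
# pattern. All names and hostnames below are invented for illustration.
from itertools import product

# Word lists an attacker might infer from prior public releases (assumed)
codenames = ["opus", "sonnet", "haiku", "mythos"]
environments = ["preview", "staging", "beta"]

# Cross product of the two lists yields every candidate hostname
candidates = [
    f"https://{name}-{env}.example-ai-vendor.com"
    for name, env in product(codenames, environments)
]

print(len(candidates))  # 12 candidates from just two short word lists
```

Two short word lists already produce a dozen guesses; a real attacker would probe each one and note which resolve, which is why defenders often randomize endpoint names and gate them behind authentication rather than obscurity.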

This incident underscores significant risks associated with the software supply chain and third-party vendor access, particularly for highly sensitive AI models. Restricting access to 40 entities did not prevent an external breach, raising questions about control mechanisms for powerful AI. The fact that Mythos has previously demonstrated an ability to "break out" of secure sandbox environments adds to concerns about managing its deployment and access. The potential for such a model, capable of identifying core system vulnerabilities, to fall into unauthorized hands poses a substantial security challenge.

Organizations deploying or integrating advanced AI models must rigorously audit third-party vendor security. That means strict access controls enforcing the principle of least privilege for every external partner touching critical systems; continuous monitoring of third-party environments for anomalous activity, backed by robust endpoint detection and response (EDR); and regular penetration testing of AI model APIs and infrastructure, scrutinizing attack vectors such as URL enumeration and credential stuffing. Supply chain risk management should be treated as a first-order priority to prevent similar breaches.
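The least-privilege principle above can be reduced to a deny-by-default permission check. This is a minimal sketch, not a production authorization system; the role names and permission strings are hypothetical.

```python
# Minimal deny-by-default least-privilege check. Roles, permissions,
# and the scenario (vendor contractor vs. internal engineer) are
# hypothetical examples, not Anthropic's actual access model.

# Each role is granted only the permissions it explicitly needs
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "vendor-contractor": {"vendor-env:read"},
    "internal-engineer": {"vendor-env:read", "model-api:invoke"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant a permission only if it is explicitly listed for the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A contractor can read the vendor environment but cannot invoke the model
assert is_allowed("vendor-contractor", "vendor-env:read")
assert not is_allowed("vendor-contractor", "model-api:invoke")
# Unknown roles get nothing, rather than some implicit default
assert not is_allowed("unknown-role", "vendor-env:read")
```

The key design choice is that an unlisted role or permission fails closed; had the contractor's role carried only the scopes their job required, access to the model itself would have been denied by default.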

The incident prompts a closer examination of how AI developers and their partners secure advanced models against sophisticated access attempts and supply chain vulnerabilities moving forward.
