Cybersecurity · 3 hrs ago

White House Mulls Pre‑Release AI Vetting After Trump Repeals Biden Safety Order

The Trump administration debates mandatory vetting of advanced AI models as Anthropic's Mythos reveals thousands of high‑severity software flaws.

Peter Olaleru, Cybersecurity Editor · 3 min read

Source: Skadden

The White House is exploring pre‑release reviews of powerful AI systems after President Trump revoked Biden’s AI safety order, a debate spurred by Anthropic’s Mythos model surfacing thousands of high‑severity software vulnerabilities.

Context

On Jan. 20, 2025, President Trump revoked Executive Order 14110, which had required developers of high‑risk AI to submit safety test results to the government. Three days later he issued an order aimed at removing regulatory barriers to AI innovation. The reversal eliminated mandatory red‑team testing and monitoring for AI used in critical infrastructure.

Key Facts

- Internal discussions, reported by major U.S. newspapers, indicate the administration is weighing a formal pre‑release vetting process for advanced models that could facilitate cyberattacks. Options include a government‑led review or a mandatory notification to the newly rebranded Center for AI Standards and Innovation.
- Anthropic’s Mythos Preview model, released to a limited set of partners under Project Glasswing, identified thousands of high‑severity vulnerabilities across all major operating systems and browsers. The company claims the model outperforms most human hackers at finding and exploiting software flaws.
- Anthropic has briefed the Cybersecurity and Infrastructure Security Agency and the Commerce Department on Mythos’s capabilities, while OpenAI runs a comparable trusted‑access program for a select group of firms.
- The United Kingdom’s AI Security Institute already conducts pre‑deployment risk assessments, and the EU AI Act now mandates conformity assessments for high‑risk AI applications. The United States lacks a statutory framework for such reviews.

What It Means

For security teams, the prospect of mandatory pre‑release checks could introduce new compliance requirements, especially for firms developing or integrating large language models with code‑generation features. Organizations may need to prepare detailed safety test reports, including vulnerability scans and red‑team exercise results, before a model can be deployed commercially.

Mitigations – What Defenders Should Do

1. Implement internal red‑teaming – Simulate adversary attacks on any in‑house AI tools using MITRE ATT&CK techniques such as T1587.001 (Develop Capabilities: Malware) to uncover misuse pathways.
2. Adopt secure development standards – Patch known CVEs promptly and enforce code‑review policies that flag AI‑generated code for manual audit.
3. Monitor model outputs – Deploy detection signatures that flag attempts to generate exploit code or vulnerability disclosures; a rough sketch of such a filter follows this list.
4. Engage with government programs – Participate in Project Glasswing or similar trusted‑access initiatives to test defensive use cases and stay ahead of emerging threats.
5. Prepare documentation – Keep detailed logs of model training data, testing methodology, and risk assessments ready for potential regulatory review; a second sketch below shows one way to structure such logs.
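
To make item 3 concrete, here is a minimal sketch of an output filter that screens model responses for exploit‑generation indicators before they reach users. The heuristic names and regular expressions are illustrative assumptions for this article, not a vetted detection ruleset, and a real deployment would pair them with the model provider’s own safety classifiers.

```python
import re

# Illustrative heuristics (assumptions, not vetted detection content) for
# spotting exploit-like material in model output before it reaches users.
SUSPICIOUS_PATTERNS = {
    "shellcode_hex_run": re.compile(r"(\\x[0-9a-fA-F]{2}){10,}"),
    "cve_reference": re.compile(r"\bCVE-\d{4}-\d{4,7}\b"),
    "exploit_jargon": re.compile(
        r"\b(rop chain|nop sled|heap spray|egg hunter)\b", re.IGNORECASE
    ),
}

def flag_model_output(text: str) -> list[str]:
    """Return the names of any heuristics this output trips; empty means clean."""
    return [
        name for name, pattern in SUSPICIOUS_PATTERNS.items()
        if pattern.search(text)
    ]

# Example: a response describing a NOP sled is routed to manual review.
if __name__ == "__main__":
    hits = flag_model_output("Build a NOP sled, then jump into the payload...")
    if hits:
        print(f"Routing to manual review; matched heuristics: {hits}")
```

Flagged responses can then be stored alongside the compliance records described in item 5, so evidence of misuse attempts is available if regulators ask for it.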

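Item 5 also lends itself to light automation. The sketch below, again an assumption rather than any mandated reporting schema, appends tamper‑evident risk‑assessment records to a JSON Lines file so they can be produced on demand during a regulatory review.

```python
import datetime
import hashlib
import json

def record_assessment(log_path: str, model_name: str,
                      test_suite: str, findings: dict) -> None:
    """Append one risk-assessment record as a JSON Lines entry.

    Hashing the sorted entry makes later tampering with a stored
    record detectable when the log is re-verified.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_name,
        "test_suite": test_suite,
        "findings": findings,
    }
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: record the outcome of an internal red-team exercise.
record_assessment("assessments.jsonl", "internal-codegen-model",
                  "red-team-2025-q1", {"high_severity_findings": 3})
```
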
Looking Ahead Watch for an official announcement on the scope and timeline of any pre‑release AI review regime, and for guidance from the Center for AI Standards and Innovation on reporting requirements.

