Experts Urge AI Cybersecurity Regulation After IBM Finds 44% Rise in AI‑Powered Attacks
IBM’s 2026 study reports a 44% increase in AI‑driven cyberattacks, with a November breach of Anthropic showing how threat actors use AI to scan source code for flaws.
**TL;DR:** IBM’s 2026 study found a 44% increase in cyberattacks targeting AI‑enabled public‑facing applications year over year. In November, attackers used their own AI models to probe Anthropic’s source code, exposing internal details and underscoring the need for clearer AI security regulation.
**Context:** The rise reflects broader trends as agentic AI models become more capable of autonomous data analysis and code generation. While these abilities help defenders automate threat hunting, they also lower the barrier for attackers to discover weaknesses in software supply chains. Experts warn that without coordinated rules, the imbalance between offensive and defensive success rates will grow.
**Key Facts:**
- IBM’s 2026 study measured a 44% year‑over‑year increase in attacks on public‑facing software and systems that frequently incorporate AI components.
- In November, threat actors deployed custom LLMs to scan Anthropic’s repositories, identifying vulnerable functions and leaking internal source code snippets.
- The breach was discovered when anomalous API calls triggered internal alerts, leading to a forced reset of developer credentials and public disclosure of the exposed data.
- Harvard computer scientist James Mickens noted that attackers need only one successful intrusion, whereas defenders must maintain perfect coverage across all assets.
**What It Means:** Security teams should treat AI‑generated code like any other third‑party component and apply the same rigor. Recommended actions include:
- updating software composition analysis tools to detect AI‑generated libraries (CISA AA23-001A);
- enforcing strict API rate limiting and anomaly detection (MITRE ATT&CK T1059.007);
- conducting regular SAST/DAST scans on AI‑model outputs;
- subscribing to the UK National Cyber Security Centre’s advisory on AI‑related threats (NCSC‑AA‑2024‑009).

Organizations should also review access controls for development environments and rotate secrets after any suspected AI‑driven reconnaissance. Looking ahead, policymakers in the UK and EU are expected to draft AI‑specific cybersecurity provisions within the next 12 months, which teams should monitor for compliance impacts.