Most Companies Lack Ability to Halt AI Systems During Incidents, ISACA Survey Shows
A new ISACA survey shows 59% of organizations don't know how fast they can stop an AI system during an incident, revealing widespread governance failures and critical risks.

TL;DR
A new ISACA survey reveals most organizations cannot quickly stop AI systems during incidents, highlighting a critical lack of governance in AI deployment. This gap leaves businesses vulnerable to unchecked AI failures and operational risks.
Companies are rapidly integrating artificial intelligence into core operations, yet many remain critically unprepared for system failures or security incidents. A recent survey by ISACA, a global professional association focused on digital trust, highlights a significant gap in organizational readiness to manage AI systems effectively during a crisis. This exposure represents a key vulnerability in how businesses deploy AI technologies.
Fifty-nine percent of digital trust professionals—those tasked with an organization's technology security, risk, and governance—report they do not know how quickly their organization could halt an AI system during a security incident, signaling a profound lack of clear response protocols. Furthermore, only 21% of respondents said they could meaningfully intervene to stop an AI system within 30 minutes. This limited capacity for rapid action means compromised or malfunctioning AI systems could operate unchecked for extended periods, potentially causing greater harm.
These findings point to a major structural issue within current AI adoption strategies, according to Ali Sarrafi, CEO of Kovant. Sarrafi states that AI is frequently deployed without adequate governance in place. Without it, businesses cannot effectively halt AI systems, explain their behavior, or assign accountability when issues arise. Lacking such fundamental governance, organizations effectively lose control over increasingly vital digital assets that operate autonomously.
The inability to quickly intervene or fully understand AI system behavior creates substantial operational and reputational risks. Unchecked AI errors can lead to immediate operational disruptions, costly data breaches, and significant financial or reputational damage. Implementing robust AI governance from the outset is therefore crucial, moving beyond mere technical deployment to integrate clear oversight, audit trails, and control mechanisms. Organizations must establish comprehensive frameworks for managing the entire AI lifecycle responsibly.
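To make the idea of a "control mechanism" concrete, one common engineering pattern is a kill-switch wrapper around model calls: every invocation checks a shared halt flag and writes to an audit log, so an operator can stop the system in seconds rather than hours. The sketch below is purely illustrative; the class, names, and logging scheme are assumptions, not part of the ISACA survey or any specific vendor's product.

```python
import logging
import threading
from datetime import datetime, timezone

class AIKillSwitch:
    """Hypothetical circuit-breaker wrapper for an AI model function.

    Every call first checks a shared halt flag; halts and invocations
    are logged to provide a minimal audit trail.
    """

    def __init__(self, model_fn):
        self._model_fn = model_fn
        self._halted = threading.Event()   # thread-safe halt flag
        self._log = logging.getLogger("ai_audit")

    def halt(self, reason):
        # Flip the flag; all subsequent calls are refused immediately.
        self._halted.set()
        self._log.warning("AI system halted at %s: %s",
                          datetime.now(timezone.utc).isoformat(), reason)

    def __call__(self, *args, **kwargs):
        if self._halted.is_set():
            raise RuntimeError("AI system is halted pending incident review")
        result = self._model_fn(*args, **kwargs)
        self._log.info("model invoked; args=%r", args)
        return result

# Usage: wrap a (stand-in) model, then halt it during an incident.
guarded = AIKillSwitch(lambda prompt: f"response to {prompt!r}")
guarded("hello")                                  # normal operation
guarded.halt("suspected prompt-injection incident")
try:
    guarded("hello again")
except RuntimeError as exc:
    print(exc)
```

The point of the pattern is that the halt decision lives outside the model itself, in infrastructure an operator controls, which is exactly the kind of oversight the survey suggests most deployments lack.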
The coming months will show whether companies prioritize implementing the robust AI governance structures necessary to regain control over these increasingly vital systems and mitigate future risks.