Cybersecurity · 2 hrs ago

Gigamon Survey Finds AI Involved in 83% of Recent Security Breaches

A survey of 1,023 security and IT leaders finds AI involved in 83% of breaches, with 65% of firms breached in the past year and AI used throughout the attack chain. Includes mitigation steps.

Peter Olaleru · 3 min read · NG

Cybersecurity Editor


AI was present in 83% of reported security breaches over the past year, according to a Gigamon survey of 1,023 security and IT leaders. Two‑thirds of organizations experienced a breach, and attackers are using AI at every stage of the attack chain.

Context

The survey covered respondents from Australia, France, Germany, Singapore, the UK, and the US. It found that breach rates have risen 18% year-over-year and 40% over three years. Executives noted that AI is now embedded in nearly every phase of an intrusion, from reconnaissance to exfiltration.

Key Facts

- 83% of breaches involved AI in some form.
- 65% of organizations suffered at least one breach in the last 12 months.
- Reported AI-related incident types include external AI-enabled attacks (41%), direct attacks on large language model systems (33%), and internal leaks or unsanctioned AI use (30%).
- Defensive automation is widespread: 94% say AI autonomously initiates some security functions, and 53% cite alert triage and prioritisation as the top use case.

What It Means

Attackers use AI to automate reconnaissance, craft convincing phishing lures, and evade detection, aligning with MITRE ATT&CK techniques such as T1566 (Phishing) and T1059 (Command and Scripting Interpreter). Defenders face a surge in alerts from AI-driven tools, which can overwhelm teams if it is not paired with effective signal processing. The findings underscore the need for better visibility into data in motion and network metadata to spot anomalous behavior early.
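The "data in motion" point above can be illustrated with a minimal metadata check. This is a sketch only: the flow-record shape, host names, and the z-score threshold are illustrative assumptions, not anything described in the survey.

```python
import statistics

# Hypothetical per-interval outbound byte counts per host (illustrative data).
flows = {
    "db-01": [1200, 1100, 1300, 1250, 90_000],   # sudden outbound spike
    "web-01": [5000, 5200, 4800, 5100, 5050],    # steady traffic
}

def exfil_suspects(flows: dict, z_threshold: float = 3.0) -> list:
    """Flag hosts whose latest outbound byte count deviates sharply
    from their own historical baseline (a crude exfiltration signal)."""
    suspects = []
    for host, counts in flows.items():
        history, latest = counts[:-1], counts[-1]
        mean = statistics.mean(history)
        std = statistics.stdev(history) or 1.0  # avoid divide-by-zero
        if (latest - mean) / std > z_threshold:
            suspects.append(host)
    return suspects

print(exfil_suspects(flows))  # db-01's spike stands out; web-01 does not
```

Real deployments would compare against rolling windows and many more signals, but the principle is the same: each host is judged against its own baseline, not a global one.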

Mitigations

- Deploy packet-level monitoring and metadata pipelines to gain visibility into cloud and hybrid workloads.
- Restrict unsanctioned AI applications through application-control policies and monitor data-lake access logs.
- Tune AI-based security tools to reduce false positives; prioritize alerts using behavioral baselines and threat-intelligence feeds.
- Patch known vulnerabilities (e.g., CVEs affecting LLM inference servers) and enforce MFA on privileged accounts.
- Implement zero-trust network segmentation to limit lateral movement after an initial compromise.
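The alert-triage mitigation above can be sketched as a simple scoring pass. The alert fields, baseline format, and score weights here are illustrative assumptions, not any vendor's actual product logic.

```python
from dataclasses import dataclass

# Hypothetical alert record; field names are illustrative assumptions.
@dataclass
class Alert:
    source_ip: str
    events_per_min: float
    on_threat_intel: bool  # matched a threat-intelligence feed

def triage_score(alert: Alert, baseline: dict) -> float:
    """Score an alert against a per-host behavioral baseline.

    Alerts far above the host's normal event rate, or corroborated by a
    threat-intel feed, rank highest; the rest can be deferred.
    """
    mean, std = baseline.get(alert.source_ip, (1.0, 1.0))
    # z-score of observed event rate vs. the host's own baseline
    deviation = (alert.events_per_min - mean) / max(std, 1e-6)
    score = max(deviation, 0.0)
    if alert.on_threat_intel:
        score += 5.0  # boost alerts corroborated by threat intel
    return score

# Usage: rank a batch of alerts, highest score first.
baseline = {"10.0.0.5": (10.0, 2.0), "10.0.0.9": (100.0, 20.0)}
alerts = [
    Alert("10.0.0.5", 50.0, False),   # far above this host's baseline
    Alert("10.0.0.9", 105.0, False),  # within normal range
    Alert("10.0.0.9", 110.0, True),   # threat-intel match
]
ranked = sorted(alerts, key=lambda a: triage_score(a, baseline), reverse=True)
```

The design choice worth noting is the per-host baseline: a rate that is routine for a busy web server can be a strong anomaly for a quiet database host, which is exactly the signal-over-noise problem the survey's respondents describe.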

What to Watch Next

Track shifts toward private data lakes for AI workloads, changes in mean time to detect and respond as alert volumes grow, and the adoption of encryption-key management strategies to counter harvest-now-decrypt-later risks.
