Cybersecurity · 1 hr ago

AI‑Powered Fraud Pushes U.S. Cybercrime Losses to Near $21 Billion in 2025

U.S. cybercrime losses hit nearly $21 billion in 2025, with AI‑related scams accounting for $893 million, according to the FBI IC3 report.

By Peter Olaleru, Cybersecurity Editor · 3 min read · US
Source: Govtech

TL;DR: In 2025, U.S. cybercrime losses totaled nearly $21 billion, the highest ever recorded. AI‑enabled fraud accounted for about $893 million of that total, driven by voice‑cloning, synthetic media, and law‑enforcement impersonation scams.

Context

The FBI's Internet Crime Complaint Center (IC3) began tracking AI‑related fraud as a distinct category in 2025, reflecting the technology's shift from a peripheral tool to a core component of criminal operations. Overall complaints rose from 859,000 in 2024 to over one million in 2025, while reported losses climbed from $16.6 billion to almost $21 billion, a 26% increase. Texas ranked second nationally for both complaint volume and financial loss, with North Texas agencies noting a surge in impersonation calls that mimic relatives, banks, and government officials.

Key Facts

- Total U.S. cybercrime losses: about $20.9 billion.
- AI‑related complaints: more than 22,000, generating roughly $893 million in losses.
- Officer Tre Mathis of the Lewisville Police Department observed that scam calls posing as law enforcement continue to affect the community, with criminals using advancing technology to become "more intelligent and clever."

Technically, fraudsters are leveraging AI to clone voices, generate convincing phishing emails and texts, and produce deepfake videos or images that promote fraudulent investment schemes, especially those tied to cryptocurrency. These tactics align with MITRE ATT&CK techniques such as T1566.001 (Spearphishing Attachment) and T1566.002 (Spearphishing Link) for initial delivery, and T1059.007 (Command and Scripting Interpreter: JavaScript) for executing malicious payloads, while AI voice synthesis maps to T1566.004 (Spearphishing Voice).

What It Means

The growing share of AI‑driven scams signals a need for defenders to update detection controls and user‑training programs. Organizations should enforce multi‑factor authentication, restrict outbound gift‑card and wire‑transfer requests, and deploy email‑gateway rules that flag anomalous language patterns indicative of generative‑AI content. Monitoring for unusual audio‑file uploads or deepfake video sharing on internal collaboration platforms can help catch voice‑cloning attempts. Recommended mitigations include applying CISA's phishing‑resistant MFA guidance, implementing detection signatures for AI‑generated text patterns, and regularly reviewing patch advisories for CVEs exploited in credential‑theft phases (e.g., CVE‑2023‑23397, the Outlook elevation‑of‑privilege flaw abused to leak NTLM credentials).
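To make the email‑gateway idea concrete, here is a minimal sketch of a rule that flags messages combining several phishing indicators (urgency plus payment‑redirection language). The phrase list, scoring, and threshold are illustrative assumptions, not a production rule set or any specific vendor's API:

```python
import re

# Assumed illustrative patterns: phrases that often co-occur in
# generated phishing copy. A real gateway would use far richer signals.
SUSPICIOUS_PATTERNS = [
    r"\burgent(ly)?\b",
    r"\bgift\s*cards?\b",
    r"\bwire\s*transfer\b",
    r"\bverify\s+your\s+account\b",
    r"\bact\s+now\b",
]

def phishing_score(text: str) -> int:
    """Count how many suspicious patterns appear in the message body."""
    lowered = text.lower()
    return sum(1 for pat in SUSPICIOUS_PATTERNS if re.search(pat, lowered))

def should_flag(text: str, threshold: int = 2) -> bool:
    """Flag a message for review when two or more indicators co-occur."""
    return phishing_score(text) >= threshold

msg = "URGENT: verify your account and send a wire transfer today."
print(should_flag(msg))        # combines three indicators, so it is flagged
print(should_flag("Lunch at noon?"))  # benign text scores zero
```

A co‑occurrence threshold, rather than a single keyword match, keeps false positives down: "urgent" alone is common in legitimate mail, but "urgent" plus a wire‑transfer request is a much stronger signal.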

What to watch next: Regulators may expand reporting requirements for AI‑generated fraud, and threat‑intelligence groups are expected to publish shared indicators of synthetic‑media campaigns later in 2026.
