Cybersecurity · 3 hrs ago

Microsoft AI Blocks $4 Billion in Scams as AI‑Powered Fraud Rises in Southeast Asia

Microsoft’s AI systems blocked over $4 billion in scams between April 2024 and April 2025, as Interpol warns that Southeast Asian fraud rings are using low‑cost AI tools to scale attacks.

Peter Olaleru · 3 min read · US

Cybersecurity Editor



Source: Technology Review

Microsoft’s AI‑driven threat intelligence blocked $4 billion in scams and fraudulent transactions between April 2024 and April 2025, many involving AI‑generated content. Interpol reports Southeast Asian scam centers are adopting low‑cost AI tools to expand victim pools and shift locations rapidly.

Context

Since the public release of generative AI models in late 2022, criminals have used large language models to craft convincing phishing emails, deepfake videos, and malware that evades traditional signature‑based detection. These tools lower the barrier to entry and enable high‑volume, low‑sophistication attacks that succeed through scale rather than skill. Microsoft processes more than 100 trillion telemetry signals each day across its cloud, endpoint, and productivity services, feeding AI models that flag anomalous behavior in real time.

Key Facts

- From April 2024 to April 2025, Microsoft’s AI systems intercepted and stopped $4 billion worth of fraudulent transactions, a figure derived from blocked payment attempts, fraudulent account creations, and malicious link clicks.
- The detection pipeline correlates signals such as unusual sign‑in patterns, abnormal email traffic, and known malicious file hashes, applying machine‑learning classifiers trained on both benign and adversarial datasets.
- Interpol’s warning highlights that scam hubs in countries such as Thailand, Vietnam, and the Philippines are using inexpensive generative AI services to produce mass‑mail phishing lures and deepfake impersonation videos, then relocating infrastructure to evade law‑enforcement takedowns.
- Technical details observed in blocked campaigns include spear‑phishing emails with AI‑generated text (MITRE ATT&CK T1566.002), malicious macros delivered via Office documents (T1204.002), and credential‑harvesting sites hosted on newly registered domains (T1583.001).
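Microsoft's actual detection pipeline is proprietary, but the signal‑correlation idea described above can be illustrated with a toy weighted scorer. Everything here — the signal names, the weights, and the blocking threshold — is an assumption for illustration, not Microsoft's implementation:

```python
# Toy illustration only: Microsoft's real pipeline is proprietary.
# Combines a few hypothetical session signals into one fraud risk score.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    unusual_signin: bool          # e.g., impossible-travel login
    abnormal_email_volume: bool   # sudden outbound mail spike
    known_bad_hash: bool          # attachment matched a malicious file hash

# Hypothetical integer weights (out of 100); a production system would
# learn these from labeled benign/adversarial data, not hard-code them.
WEIGHTS = {"unusual_signin": 40, "abnormal_email_volume": 30, "known_bad_hash": 90}
THRESHOLD = 70  # assumed: block when the combined score reaches 70/100

def risk_score(s: SessionSignals) -> int:
    score = 0
    if s.unusual_signin:
        score += WEIGHTS["unusual_signin"]
    if s.abnormal_email_volume:
        score += WEIGHTS["abnormal_email_volume"]
    if s.known_bad_hash:
        score += WEIGHTS["known_bad_hash"]
    return min(score, 100)

def should_block(s: SessionSignals) -> bool:
    return risk_score(s) >= THRESHOLD

print(should_block(SessionSignals(True, True, False)))   # True: two weak signals correlate
print(should_block(SessionSignals(False, False, True)))  # True: one strong signal
print(should_block(SessionSignals(True, False, False)))  # False: a single weak signal
```

The point of the sketch is the correlation step: two individually weak signals (an odd sign‑in plus an email spike) cross the threshold together, which is how high‑volume, low‑sophistication campaigns get caught.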

What It Means

Organizations should treat AI‑generated content as a standard attack vector and update defenses accordingly. Recommended actions:

- Enable Safe Links and Safe Attachments in Microsoft Defender for Office 365.
- Enforce Conditional Access policies that block legacy authentication.
- Deploy AI‑based anomaly detection on network traffic (e.g., Microsoft Sentinel analytics rules for T1078).
- Conduct regular user training on identifying deepfake and synthetic‑media scams.
- Keep systems patched against known vulnerabilities (e.g., CVE‑2023‑23397, the Outlook elevation‑of‑privilege flaw) and monitor threat‑intelligence feeds for spikes in newly registered domains.
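One of those recommendations — watching threat‑intelligence feeds for bursts of newly registered domains — can be sketched as a small feed‑processing step. The feed format, example domains, and spike threshold below are all assumptions for illustration; real feeds (and sensible thresholds) vary by provider:

```python
# Hypothetical sketch: flag days with a burst of newly registered
# lookalike domains arriving from a threat-intelligence feed.
from collections import Counter
from datetime import date

# Assumed feed format: (domain, registration_date) tuples.
feed = [
    ("login-micr0soft-support.com", date(2025, 4, 1)),
    ("micros0ft-verify.net", date(2025, 4, 1)),
    ("contoso-invoices.com", date(2025, 3, 2)),
    ("micr0soft-billing.org", date(2025, 4, 1)),
]

SPIKE_THRESHOLD = 3  # assumed: 3+ registrations on one day is suspicious

def registration_spikes(entries, threshold=SPIKE_THRESHOLD):
    """Return the registration dates whose domain count meets the threshold."""
    counts = Counter(reg_date for _, reg_date in entries)
    return [d for d, n in counts.items() if n >= threshold]

print(registration_spikes(feed))  # [datetime.date(2025, 4, 1)]
```

In practice this kind of check would also score the domains themselves (lookalike distance to protected brands, registrar reputation), but counting same‑day registrations is the core of spotting the rapid infrastructure turnover Interpol describes.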

What to watch next

Watch for regulators to issue guidance on AI‑generated disinformation, and for threat actors to combine deepfake audio with real‑time voice cloning in business‑email‑compromise schemes.

