Cybersecurity · 1 hr ago

North Wales Police and Get Safe Online Launch AI Safety Campaign Against Deepfake Scams

North Wales Police and Get Safe Online have launched a public campaign teaching residents how to spot AI‑generated deepfake scams as criminals use AI to erase traditional fraud warning signs.

By Peter Olaleru · 3 min read · GB

Cybersecurity Editor



TL;DR: North Wales Police and Get Safe Online have started a public campaign to teach residents how to spot AI‑generated deepfake scams. The initiative provides practical advice as criminals use AI to erase traditional fraud warning signs.

Context

Artificial intelligence is now a routine tool for both legitimate work and criminal fraud. Members of the public in North Wales are seeing AI‑generated content used to impersonate trusted figures in scams. The campaign, run by Get Safe Online with North Wales Police and the Police and Crime Commissioner, aims to close the awareness gap. It distributes guides through online portals, community workshops, and printed leaflets across the region.

Key Facts

Tony Neate, CEO of Get Safe Online, said AI is now a permanent part of everyday life and stressed the need for safe, responsible use. Dewi Owen of the North Wales Police cybercrime team warned that AI removes typical scam indicators such as poor grammar, making fraud harder to detect. He added that deepfake audio and video can convincingly mimic celebrities and public figures, eroding the old rule that “seeing is believing.” The campaign, run by Get Safe Online in partnership with North Wales Police, gives residents practical advice for identifying and defending against AI‑generated scams, including deepfakes.

What It Means

Criminals are leveraging generative models to produce synthetic media that bypasses traditional spam filters and human skepticism. Attack vectors include phishing emails, instant messages, and voice or video calls that carry AI‑crafted lures. This maps to MITRE ATT&CK sub‑technique T1566.002 (Spearphishing Link) and the broader T1566 (Phishing) technique under the Initial Access tactic. Organizations should treat synthetic media as a social‑engineering threat and update user‑training programs accordingly. The effort also supports the UK Online Safety Act’s duty to protect users from harmful synthetic content.

Mitigations

- Verify any unexpected request for money or information through an independent channel, such as a known phone number or official website.
- Look for subtle visual or audio artifacts: blinking inconsistencies, mismatched lighting, or unnatural speech pauses.
- Use reverse‑image or reverse‑video search tools to check the provenance of media.
- Enable liveness detection on video‑conferencing platforms where available.
- Keep operating systems and applications patched, and apply vendor security advisories relevant to media‑processing software.
- Deploy email gateway rules that flag messages containing deepfake‑related keywords or anomalous metadata.
- Require multi‑factor authentication for financial transactions and sensitive data access.
- Report suspicious content to Action Fraud or the National Cyber Security Centre (NCSC).
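To make the email‑gateway mitigation concrete, here is a minimal, illustrative sketch of a keyword‑and‑sender rule in Python. The phrase list, trusted‑domain set, and scoring thresholds are all hypothetical placeholders for this example, not a vetted detection ruleset; a production gateway would combine many more signals (headers, attachment analysis, sender reputation).

```python
# Illustrative email-gateway rule: flag messages that combine
# scam-style phrasing with an untrusted sender domain.
# SUSPICIOUS_PHRASES and TRUSTED_DOMAINS are hypothetical examples.

SUSPICIOUS_PHRASES = [
    "urgent transfer",
    "verify your account",
    "voice note from your ceo",
    "act now to avoid",
]

TRUSTED_DOMAINS = frozenset({"example.com"})


def flag_message(subject: str, body: str, sender_domain: str) -> bool:
    """Return True if the message should be quarantined for human review."""
    text = f"{subject} {body}".lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    external = sender_domain.lower() not in TRUSTED_DOMAINS
    # External mail with any suspicious phrase, or internal mail with
    # two or more, gets routed to quarantine.
    return (external and hits >= 1) or hits >= 2
```

For example, an external message reading "URGENT transfer needed before noon" would be flagged, while routine internal mail would pass. The useful design point is that no single signal decides the outcome; keyword hits are weighed against sender trust, which keeps false positives down on internal traffic.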

The effort will likely expand as synthetic‑media tools become more accessible; watch for upcoming guidance from the National Cyber Security Centre on AI‑driven threat detection and potential regulatory updates on synthetic media.
