Cybersecurity · 1 hr ago

Low‑Skill Cybercriminals Shun AI, Favor Human Interaction on Dark‑Web Forums

Study finds low‑skill cybercriminals on Tor forums prefer human interaction over AI, limiting AI's impact on everyday scams.

Peter Olaleru · 3 min read · NG

Cybersecurity Editor

Credit: Unsplash

Low‑skill cybercriminals on dark‑web forums actively avoid AI tools, citing mistrust and a desire for genuine human interaction.

Context

A recent study of posts on Hack Forums, a long‑standing Tor‑accessible hub for hackers, reveals a growing aversion to generative AI among its low‑skill members. While large criminal enterprises employ AI for routine tasks, the grassroots crowd prefers traditional social dynamics and proven attack scripts.

Key Facts

- Security researcher Ben Collier told Wired that participants on these forums “really hate other people using AI on the forums,” noting that AI threatens their self‑image as skilled operators.
- An anonymous Hack Forums user wrote, “I come here for human interaction, not an AI chatbot,” underscoring the community’s social purpose.
- Posts repeatedly called for the removal of AI‑generated content, with one member demanding, “Stop posting AI s**t.”
- Users distrust AI output: they said they would copy‑paste code only after verifying it themselves, and that AI cannot handle the volume or nuance of their work.
- The study found little evidence that AI is reshaping cybercrime overall; its use is largely confined to passive schemes such as AI‑driven SEO spam or fraud targeting platforms like OnlyFans.

What It Means

The backlash suggests that, for a sizable segment of the cybercrime ecosystem, AI is a liability rather than an asset. Human trust networks remain the backbone of low‑level operations, and any AI integration that disrupts these bonds may be rejected outright. This dynamic limits the immediate impact of generative AI on everyday scams, even as higher‑tier groups adopt the technology for mundane tasks like code linting or automated reconnaissance.

What Defenders Should Do

- Monitor dark‑web forums for shifts in language that signal AI adoption or rejection; such changes can flag emerging tactics.
- Deploy detection signatures for known AI‑generated phishing templates, but prioritize signatures for the classic social‑engineering scripts still favored by low‑skill actors.
- Harden endpoints against credential‑stuffing tools that do not rely on AI; these remain the community's primary weapon.
- Encourage threat‑intel sharing on Tor‑based platforms to track sentiment trends around AI usage.
- Apply patches for common vulnerabilities (e.g., CVE‑2023‑23397 in Microsoft Outlook) that attackers continue to exploit without AI assistance.
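On the credential‑stuffing point, a common defensive pattern is sliding‑window rate detection on failed logins. The sketch below is illustrative only, not from the study: the class name, field names, and thresholds (60‑second window, 10 failures) are assumptions a defender would tune to their own environment.

```python
from collections import defaultdict, deque

class StuffingDetector:
    """Minimal sketch: flag likely credential stuffing by counting failed
    logins per source IP inside a sliding time window. Thresholds here
    are placeholder assumptions, not recommendations from the study."""

    def __init__(self, window_seconds=60, max_failures=10):
        self.window = window_seconds
        self.max_failures = max_failures
        self.failures = defaultdict(deque)  # ip -> timestamps of failed logins

    def record_failure(self, ip, ts):
        """Record one failed login at time ts (seconds).
        Returns True if this IP has now exceeded the threshold."""
        q = self.failures[ip]
        q.append(ts)
        # Evict events that fell outside the sliding window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures

# Usage: feed failed-login events parsed from auth logs.
det = StuffingDetector()
alerts = [det.record_failure("203.0.113.7", t) for t in range(15)]
# The first 10 failures stay under the threshold; later ones trip the alert.
```

In practice this logic usually lives in a WAF or SIEM rule rather than application code; the point is that it needs no AI, matching the low‑tech tooling these actors actually use.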

Looking Ahead

Watch for any policy changes on major dark‑web marketplaces that could force AI integration, and track whether higher‑tier groups begin to disseminate AI‑enhanced tools to their low‑skill affiliates.
