Ofcom Probes Telegram Over Alleged CSAM Amid Deepfake Channel Reports
Ofcom has opened an investigation into Telegram over alleged failures to prevent the sharing of child sexual abuse material (CSAM), following reports of deepfake channels and thousands of users distributing illicit content.
TL;DR
Ofcom has launched an investigation into Telegram under the UK Online Safety Act, following allegations that the platform fails to prevent the sharing of child sexual abuse material (CSAM). The probe follows reports of 150 Telegram channels dedicated to deepfake intimate imagery and of thousands of users distributing non-consensual intimate images.
The UK communications regulator, Ofcom, has opened a formal investigation into the messaging service Telegram. The investigation will assess whether Telegram complies with the UK Online Safety Act, specifically its duties to restrict illegal content, including child sexual abuse material (CSAM). The probe began after evidence provided by the Canadian Centre for Child Protection suggested the presence of CSAM on the platform.
The investigation comes amid reports detailing the scale of such content. Researchers identified 150 Telegram channels worldwide, including some in the UK, dedicated to creating and sharing AI-generated deepfake intimate images, often non-consensual. Separate analysis by AI Forensics found 24,671 Telegram users actively distributing non-consensual intimate images, including CSAM, in Italy and Spain.
The UK Online Safety Act requires user-to-user services to restrict illegal content uploaded by users. Ofcom can impose significant penalties for non-compliance, including fines of up to £18 million or 10% of a company's qualifying worldwide annual revenue, whichever is greater; for a platform with, say, £1 billion in annual revenue, the cap would therefore be £100 million rather than £18 million. In severe cases, a court could order the withdrawal of advertising or payment services, or even block access to the platform within the UK.
Telegram, however, denies these accusations. The company states it has largely eliminated the public spread of CSAM through world-class detection algorithms and cooperation with NGOs. Telegram also expressed concern that the investigation might be part of a broader challenge to online platforms that defend freedom of speech and the right to privacy.
For platforms operating under such legislation, robust content moderation and cybersecurity strategies are paramount. That means deploying AI and machine-learning tools for proactive detection and removal of illegal content (one common technique, hash matching against known material, is sketched below), strengthening user reporting mechanisms and ensuring rapid response to reports, deepening collaboration with law enforcement and child protection agencies, sharing threat intelligence, and enforcing clear terms of service that explicitly prohibit illegal content.
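As an illustration of what proactive detection can look like, the sketch below matches incoming uploads against perceptual hashes of previously catalogued abusive images. It is a minimal sketch only: the hash-list file, the Hamming threshold, and the moderation helper are hypothetical, and production platforms rely on vetted industry tooling such as Microsoft's PhotoDNA and hash lists supplied by organizations like the IWF or NCMEC rather than the open-source imagehash library used here for illustration.

```python
# Minimal sketch of hash-based proactive detection. The hash-list file and
# threshold are hypothetical; real deployments use dedicated tools such as
# PhotoDNA and curated hash lists from child-protection organizations.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # illustrative tolerance for near-duplicate matches


def load_hash_list(path: str) -> list[imagehash.ImageHash]:
    """Load previously catalogued perceptual hashes, one hex string per line."""
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]


def is_known_abuse(image_path: str,
                   known_hashes: list[imagehash.ImageHash]) -> bool:
    """Flag an upload whose perceptual hash is close to any catalogued hash."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash values yields their Hamming distance.
    return any(candidate - known < HAMMING_THRESHOLD for known in known_hashes)


def moderate_upload(image_path: str,
                    known_hashes: list[imagehash.ImageHash]) -> str:
    """Block and escalate matches; pass everything else to normal review."""
    if is_known_abuse(image_path, known_hashes):
        return "blocked_and_reported"  # escalate to trust & safety teams
    return "accepted"
```

The design point this illustrates is that matching happens automatically at upload time, before content is distributed, rather than depending solely on user reports after the fact.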
All eyes are now on the outcome of Ofcom’s investigation, which will set a precedent for platform accountability under the Online Safety Act and shape future expectations for content moderation and digital safety across the UK's online landscape.