OpenAI CEO Sam Altman Apologizes for Failing to Report Suspended ChatGPT Account of Canadian Mass Shooter
OpenAI CEO Sam Altman apologizes for not reporting a suspended ChatGPT account of a Canadian mass shooter, sparking debate on AI content moderation.

Community members honor the victims of one of Canada's deadliest mass shootings in Tumbler Ridge on February 13, 2026.
OpenAI CEO Sam Altman has publicly apologized for his company's failure to alert law enforcement about a suspended ChatGPT account connected to a recent mass shooting. The account belonged to Jesse Van Rootselaar, an 18-year-old who killed eight people during a shooting spree in Tumbler Ridge, British Columbia, on February 10, months after OpenAI had internally flagged and suspended the account for violent misuse. The tragedy devastated the remote community, claiming the lives of family members and five students from the local secondary school.
OpenAI confirmed that its systems flagged Van Rootselaar's ChatGPT account for violent misuse in June, several months before the February shooting, and promptly suspended it. However, the company did not notify law enforcement at the time, saying the account's activity did not meet its internal threshold for a credible or imminent threat of harm.
Altman's apology directly addresses this oversight, stating, "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." He further acknowledged the "harm and irreversible loss" the Tumbler Ridge community suffered. This apology follows earlier discussions with British Columbia Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka, who conveyed the community's anger and concern.
The incident intensifies the ongoing public debate over AI companies' responsibilities for content moderation and public safety. It forces a re-evaluation of internal policies for identifying and reporting potential threats, weighing user privacy against the imperative to prevent real-world violence. Future discussions will likely focus on establishing clearer industry-wide reporting standards, improving the detection of threatening behavior in large language models, and strengthening communication protocols with authorities in hopes of preventing future tragedies.