
Sam Altman Apologises Again as Scrutiny Grows Over Tumbler Ridge Killings Case
The head of OpenAI, Sam Altman, has issued a renewed apology over the company’s failure to alert law enforcement about concerning online activity linked to a suspect in the Tumbler Ridge school shooting in British Columbia, Canada, a case now driving wider scrutiny of AI safety systems and reporting rules.
In a letter shared publicly, Altman expressed his deepest condolences to victims’ families and the affected community, acknowledging that OpenAI did not escalate a flagged account to the Royal Canadian Mounted Police, despite its internal abuse detection systems having identified it for possible violent behaviour.
The company has said it detected the account in June and banned it for policy violations after identifying potential “furtherance of violent activities.” However, it did not refer the account to police at the time, concluding that it did not meet the internal threshold for law enforcement escalation, a decision now at the centre of the criticism.
“I am deeply sorry that we did not alert law enforcement,” Altman wrote, adding that the apology was necessary to recognise the “irreversible loss” suffered by the community. He also reaffirmed OpenAI’s commitment to strengthening safety measures and working with governments to prevent similar tragedies.
The letter was shared by British Columbia Premier David Eby and published by local media, intensifying public and political attention on the case.
Police say that on February 10, 18-year-old Jesse Van Rootselaar killed her mother and stepbrother before carrying out a shooting at Tumbler Ridge Secondary School, killing five children and an educator and injuring around 25 others. She then died by suicide. The incident has been described as a mass shooting.
Premier Eby has said the apology is “necessary, and yet grossly insufficient,” having previously stated that OpenAI appeared to have had an opportunity to prevent the tragedy.
The case has since escalated into a broader debate over AI accountability, with growing calls in Canada for stricter rules requiring tech companies to report high-risk behaviour directly to authorities, and for clearer standards on when escalation is mandatory.
