Canadian Ministers Press OpenAI After Tumbler Ridge Shooting

OTTAWA — Canada’s artificial intelligence minister says he left a high-level meeting with OpenAI executives “disappointed” after pressing the company for answers about its handling of the ChatGPT account linked to the deadly Tumbler Ridge, B.C., shooting.
Evan Solomon said the federal government expected clearer explanations and stronger commitments from the California-based technology firm during Tuesday’s talks. The meeting was convened after revelations that OpenAI had banned the suspect’s account months before the February killings but did not alert law enforcement at the time it did so.
“We were disappointed that they did not have substantial answers for us, and we asked them to have substantial answers,” Solomon told CBC News following the session.
OpenAI confirmed it had suspended the ChatGPT account belonging to Jesse Van Rootselaar in June 2025 after internal systems flagged troubling content, including fictional scenarios involving gun violence. However, the company said the activity did not meet its threshold for contacting police because it did not indicate credible or imminent real-world planning.
The case has ignited a national debate over the responsibilities of artificial intelligence companies when users post disturbing or violent content on their platforms.
Timeline of the Tragedy
On Feb. 10, Van Rootselaar fatally shot her mother and half-brother at their home in Tumbler Ridge before going to a local secondary school, where she killed five students and an educational assistant. She later died by suicide. The attack stunned the small northeastern British Columbia community and prompted questions about warning signs that may have been missed.
OpenAI has said it reached out to the Royal Canadian Mounted Police after the shooting, once the gravity of the situation became clear. But officials acknowledged that no notification was made when the account was first banned months earlier.
Reporting by The Wall Street Journal revealed that the account had been flagged for posts describing violent scenarios, which ultimately led to its suspension.
Government Frustration
Solomon said federal ministers were seeking concrete evidence that OpenAI had strengthened its safety protocols in the aftermath of the tragedy.
“We expected them to have some clear proposals that we could understand — that they had changed their protocols in the wake of the horrific events in Tumbler Ridge,” he said. “But we did not hear substantial new safety measures beyond adjustments to their model.”
The minister added that further meetings are anticipated and emphasized that regulatory action remains a possibility.
“All options for us are on the table, because at the end of the day, Canadians want to feel safe,” Solomon said.
Other senior officials echoed that sentiment.
Marc Miller, Canada’s minister of identity and culture, described himself as troubled by what he heard from company representatives.
“I think there’s a lot that remains in the hands of the OpenAI folks, and the government will act,” Miller said as he left the meeting.
Public Safety Minister Gary Anandasangaree was similarly blunt.
“Nothing substantial came out of it other than an expectation from us that they need to do a lot better,” he said, adding that he anticipates additional discussions.
In British Columbia, Premier David Eby expressed anger over the company’s response, telling CBC’s Power & Politics that earlier notification to authorities might have created an opportunity for intervention.
Company Response
In a statement issued after the meeting, an OpenAI spokesperson said the discussion with ministers was “frank” and constructive.
“The ministers underscored that Canadians expect continued concrete action and we heard that message loud and clear,” the statement read. “We’ve committed to follow up in the coming days with an update on additional steps we’re taking.”
The company has said it is reviewing and updating its policies governing when to escalate user activity to law enforcement. It maintains that its moderation systems are designed to distinguish between fictional or hypothetical prompts and credible threats, a distinction that can be difficult in practice.
Regulatory Debate Intensifies
The controversy arrives as Ottawa considers new legislation aimed at addressing online harms and digital platform accountability. Youth advocates and community leaders have called on the federal government to consult widely before introducing new rules, warning that poorly designed regulations could have unintended consequences.
At the same time, public pressure is mounting for stronger oversight of rapidly evolving AI systems. Experts note that generative AI tools like ChatGPT are capable of producing realistic narratives that may blur the line between creative writing and violent ideation.
For policymakers, the central question is whether existing voluntary safety standards are sufficient — or whether mandatory reporting requirements should be imposed when AI companies encounter content that could signal potential harm.
As investigations into the Tumbler Ridge shooting continue, the tragedy has become a focal point in Canada’s broader conversation about technology, accountability and public safety.
For now, federal ministers say they are awaiting further details from OpenAI. But their message is clear: assurances alone will not be enough.