OpenAI failed to stop a banned user from opening a second ChatGPT account before the February 10, 2026, school shooting in Tumbler Ridge, British Columbia, that left eight people dead. The company later sent a letter to the Canadian government detailing its safety lapses and promising reforms. That sequence of events, combined with regulatory fines in Europe and a copyright lawsuit from Canadian publishers, raises a pointed question: should Canada continue relying on a private American AI firm for critical digital infrastructure, or should it build a publicly accountable alternative?
A Banned Account, a Second Login, and Eight Deaths
OpenAI suspended the shooter’s initial ChatGPT account in June 2025 after detecting concerning activity. But the company did not refer that account to police at the time, even though its own systems had flagged the behavior as troubling enough to warrant a ban. The shooter then created a second account, which OpenAI discovered only after the February 2026 attack. That eight‑month gap between the first suspension and the shooting exposed a basic weakness in the company’s safety architecture: it could cut off access to a specific login, but it had no reliable, privacy‑respecting mechanism to prevent the same person from returning or to ensure that law enforcement would be alerted when the original behavior might justify it.
In response, OpenAI told the Canadian government it would adopt “more flexible criteria” for referrals to law enforcement and create direct contact points with Canadian authorities, while also strengthening its detection and support systems. These changes amount to an admission that the prior threshold for involving police was too rigid and that its internal escalation processes were not calibrated to real‑world risks. For a company whose charter pledges a “primary fiduciary duty to humanity,” the fact that it took a mass shooting to trigger a policy update is difficult to reconcile with that stated mission. It also underscores how much discretion a single private provider currently wields over matters with clear public‑safety implications.
A Pattern of Regulatory Failures Abroad
The Tumbler Ridge case did not emerge in isolation. Italy’s privacy watchdog fined OpenAI 15 million euros over violations in how ChatGPT collected and handled users’ personal data. The regulator concluded that OpenAI lacked a sufficient legal basis and adequate transparency, and that its safeguards for minors were deficient. Italy also mandated a public information campaign to notify affected users and imposed concrete compliance demands, including clearer notices about data use, adjustments to the legal basis for training, and tools that allow both users and non‑users to exercise their data‑subject rights.
Those requirements were spelled out in a formal decision by the Italian Supervisory Authority, which insisted on age‑gating and verification measures as conditions for lifting temporary limits on ChatGPT’s operations. Meanwhile, the European Union has moved to formalize broader AI oversight through Regulation 2024/1689, which sets binding rules for high‑risk systems and emphasizes transparency, risk management, and fundamental‑rights protections. OpenAI does publish its own transparency reports, including statistics on government requests and child‑safety interventions, but voluntary disclosure is not the same as enforceable accountability. Italy’s enforcement action showed that when an independent regulator examined OpenAI’s practices, it found systemic shortcomings that the company had not corrected on its own. That finding sets an important precedent for any country considering how tightly to regulate foreign AI platforms.
Canada’s Own Legal Confrontation with OpenAI
Canadian institutions have already begun pushing back through the courts. A coalition of outlets, including major dailies and regional publications, brought a multi‑publisher copyright lawsuit alleging that OpenAI used their journalism without permission to train its models. The suit claims that large‑scale scraping of articles and archives effectively converted Canadian newsrooms’ work into a competing product, with ChatGPT and related tools able to summarize, rephrase, or answer questions using information derived from those stories. For publishers already under financial pressure, the allegation is that an American AI firm has captured value from their reporting while undermining traffic and subscription revenue.
The copyright dispute also intersects with broader concerns about how training data is sourced and governed. If a company can ingest a nation’s journalism, government records, and cultural output to build commercial AI systems, the country that produced that content has a legitimate interest in shaping the rules that govern such use. A publicly funded Canadian AI initiative could, in principle, bake licensing frameworks and data‑governance rules into its design from the outset, rather than relying on after‑the‑fact litigation to police boundaries. It could also prioritize local media ecosystems (for example, by integrating with Canadian news apps and broadcasters, rather than steering users toward foreign platforms like the global news aggregators that already dominate digital attention).
OpenAI’s Governance Record Offers Little Reassurance
Even OpenAI’s internal leadership structure has shown signs of instability. The company’s board removed CEO Sam Altman in a surprise move, stating that he was “not consistently candid” with directors and that they no longer had confidence in his leadership. Although Altman was reinstated after a rapid backlash from employees and investors, the episode revealed that the people closest to OpenAI’s operations had serious doubts about whether its top executive was being transparent with them. For governments and regulators being asked to trust OpenAI’s safety commitments, that internal rupture is hard to overlook. It suggests that even at the highest levels, information asymmetries and governance frictions can undermine oversight.
OpenAI’s founding documents emphasize a mission to ensure that advanced AI benefits all of humanity, with “broadly distributed benefits” and a cooperative approach to safety. Yet the gap between those principles and the company’s track record keeps widening. A suspended account that was never reported to police before a mass shooting. A multimillion‑euro fine for data‑protection failures. A copyright lawsuit from the publishers whose content allegedly trained the models. A CEO removed, however briefly, for lacking candor with his own board. Each incident on its own might be explained away as a growing pain. Taken together, they form a pattern that should concern any government relying on OpenAI for services that touch education, health, justice, or public information, domains where mistakes can have outsized consequences for citizens.
Why Canada Should Consider a Public AI Option
These episodes point toward a larger strategic dilemma for Canada: whether to keep outsourcing critical AI capabilities to a single, foreign, investor‑controlled provider, or to invest in a publicly accountable alternative that reflects domestic laws, values, and risk tolerances. Research on digital governance in Europe has found that most experts favor models with strong public oversight and citizen control when dealing with transformative technologies. Applying that logic to AI suggests that systems embedded in public services (such as education platforms, immigration triage tools, or health‑information chatbots) should be governed by institutions that are directly accountable to voters, not only to shareholders or distant boards.
A Canadian public AI option would not need to replace all private offerings, nor would it guarantee perfection. But it could set a benchmark for transparency, data governance, and safety practices tailored to Canadian law and culture. Such a system could be required to publish detailed risk assessments, allow independent audits, and include clear escalation paths to law enforcement that are defined in statute rather than left to corporate discretion. It could also be designed to support Canadian languages, Indigenous communities, and local media, ensuring that the country’s digital infrastructure is not entirely dependent on the strategic choices of a single American firm. After Tumbler Ridge, the question is no longer whether AI can create real‑world harms. It is whether Canada is content to manage those harms through the internal policies of a private company, or whether it will build public institutions capable of governing the technology on its own terms.
This article was researched with the help of AI, with human editors creating the final content.