Heather Van Dort, MJLST Staffer
On February 12, 2026, Canada experienced one of the deadliest mass shootings in its history.[1] The shooting in Tumbler Ridge, British Columbia, claimed the lives of eight people and left another twenty-seven injured.[2] Months before the shooting, in June 2025, the suspect was banned from ChatGPT after describing concerning scenarios about gun violence to the chatbot.[3] OpenAI’s automated review system flagged the suspect’s posts, and about a dozen staffers subsequently reviewed them.[4] After internal deliberations, the company banned the account but decided that the suspect’s activity did not meet the criteria for reporting to law enforcement because there was no credible, imminent threat of harm.[5] It was not until after the shooting that OpenAI reached out to local authorities to share information about the suspect’s account.[6] Still, OpenAI did not violate any Canadian law, nor would it have violated any American law had these events taken place within the United States.[7] In response to the tragedy, Canadian officials met with OpenAI in February, but the company could not offer any substantial new safety measures to address situations in which it flags concerning content.[8] The incident highlights the lack of meaningful government oversight of the review policies that technology companies use to decide when to disclose information to law enforcement.
OpenAI’s current policy for reporting to law enforcement (effective Jan. 1, 2026) permits the company to disclose user data if it believes that disclosure is necessary “to prevent an emergency involving danger of death or serious physical injury to a person.”[9] This policy is consistent with the current disclosure requirements in the United States under the Stored Communications Act (“Act”).[10] Generally, the Act prohibits electronic communication service providers (“providers”) from disclosing customer data to governmental entities, but it contains an exception for emergencies.[11] Specifically, it allows a provider to disclose the contents of customer communications if the provider, “in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay.”[12] However, nothing in the Act, or in any other U.S. law, requires providers to disclose credible, serious threats to law enforcement.[13] As a result, providers are left to their own discretion to decide when user communications on their platforms are sufficiently concerning to justify reporting to law enforcement. This gap in the regulatory framework puts providers in the difficult position of deciding when to disclose closely held consumer data without clear guidelines, which in turn leaves citizens vulnerable to the whims of providers.
It is time for lawmakers to establish clear mandatory reporting requirements for providers that encounter concerning threats. Developing a legal framework that balances public safety against the privacy of consumer data is by no means easy, but the United States’ child protection laws may offer lawmakers a helpful model. By federal statute, providers have a duty to make a report to the CyberTipline operated by the National Center for Missing and Exploited Children (NCMEC) as soon as “reasonably possible” after they obtain actual knowledge of child exploitation material.[14] The report must include the complete flagged communication, along with any identifying information about the individual involved and the account’s geographic location.[15] NCMEC then forwards the report to relevant federal, state, local, and foreign law enforcement.[16] The law’s primary enforcement mechanism is steep fines for providers, which increase with each violation.[17] Importantly, the law does not require providers to affirmatively screen or search for child exploitation content, nor does it require them to monitor accounts.[18]
Lawmakers could adopt a similar legal model to address other credible threats of serious imminent harm. Providers could be required to report content flagged by their algorithms as posing a serious threat of harm to a tipline. After receiving the information, the tipline could consult a body of experts who would determine whether to file a report with law enforcement. This model would relieve providers of the stress and potential liability associated with deciding when to report to law enforcement. It could also improve public safety by ensuring that experts, rather than providers, screen harmful content. A broader mandatory reporting requirement that reaches threats beyond child endangerment is not unprecedented. In the European Union, the Digital Services Act requires large online platforms to promptly inform competent authorities when they encounter content suggesting a serious threat to life or safety.[19] Because many of the same large software providers operate in both the United States and Europe, a mandatory reporting requirement should be relatively easy for them to adjust to.[20]
There are serious privacy concerns that must be addressed before such a law is adopted. One concern, raised by OpenAI, is the risk of police showing up to investigate individuals who have not violated any law.[21] Although this happens in ordinary police work as well, there is always a risk that a police presence will startle people, resulting in escalation that could lead to serious harm. That risk cannot be eliminated entirely, but having experts screen concerning content before it reaches police will help limit law enforcement involvement to cases where it is truly necessary.
A mandatory reporting law may not entirely resolve hard cases like the Tumbler Ridge tragedy, where a credible threat of imminent harm is not clear, but it will at least require providers to report to law enforcement when a threat plainly exists. Establishing an independent body of experts to review content in difficult cases would relieve providers of some of the pressure of resolving borderline cases and improve public safety by ensuring that experts decide when a report to law enforcement is warranted.
Notes
[1] See Ottilie Mitchell, Tumbler Ridge Suspect’s ChatGPT Account Banned Before Shooting, Brit. Broad. Corp. (Feb. 21, 2026), https://www.bbc.com/news/articles/cn4gq352w89o.
[2] Id.
[3] See Georgia Wells, OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago, Wall St. J. (Feb. 21, 2026, 12:04 ET), https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62?mod=Searchresults&pos=1&page=1 [https://perma.cc/A66B-V4PE].
[4] See id.
[5] Id.
[6] Id.
[7] See Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5, s. 7(3)(e) (Can.) (allowing organizations to disclose personal information to government officials in emergency situations but not requiring it); see also 18 U.S.C. § 2702 (permitting disclosure of personal information to government officials in emergency situations, but not requiring it).
[8] See Alyshah Hasham, No ‘Substantial’ New Safety Measure Offered by OpenAI Following Tumbler Ridge Shooting, Says Minister, Toronto Star (Feb. 25, 2026), https://www.thestar.com/news/canada/no-substantial-new-safety-measures-offered-by-openai-following-tumbler-ridge-shooting-says-minister/article_1342f97e-2622-4cfa-bb7a-518e45151019.html.
[9] OpenAI Government User Data Request Policy, OpenAI (Jan. 1, 2026), https://cdn.openai.com/pdf/openai-law-enforcement-policy-v.2025-12.pdf.
[10] See generally 18 U.S.C. §§ 2701 et seq.
[11] 18 U.S.C. § 2702(a).
[12] 18 U.S.C. § 2702(b)(8).
[13] See 18 U.S.C. §§ 2701 et seq.
[14] See 18 U.S.C. § 2258A(a).
[15] See 18 U.S.C. § 2258A(b).
[16] See 18 U.S.C. § 2258A(c).
[17] See 18 U.S.C. § 2258A(e) (setting fines at not more than $850,000 for providers with not less than 100,000,000 monthly active users or $600,000 for providers with fewer than 100,000,000 monthly active users).
[18] See 18 U.S.C. § 2258A(f).
[19] See Regulation (EU) 2022/2065 of the European Parliament and of the Council, art. 18, 2022 O.J. (L 277) 1, 30.
[20] See Frances Burwell & Kenneth Propp, Digital Sovereignty: Europe’s Declaration of Independence?, Atl. Council (Jan. 14, 2026), https://www.atlanticcouncil.org/in-depth-research-reports/report/digital-sovereignty-europes-declaration-of-independence/.
[21] Vjosa Isai, Canada Presses OpenAI for Answers on Mass Shooter’s Chatbot Use, N.Y. Times (Feb. 23, 2026), https://www.nytimes.com/2026/02/23/world/canada/canada-shooting-openai.html [https://perma.cc/PMR7-W66Q].
