March 2026

Closing the Reporting Gap: Building a Legal Framework for Reporting Serious Online Threats

Heather Van Dort, MJLST Staffer

On February 12, 2026, Canada experienced one of the deadliest mass shootings in its history.[1] The shooting in Tumbler Ridge, British Columbia, claimed the lives of eight people and left another twenty-seven injured.[2] Months before the shooting, in June 2025, the suspect was banned from ChatGPT after they described concerning scenarios about gun violence to the chatbot.[3] OpenAI’s automated review system flagged the suspect’s posts, and about a dozen staffers subsequently reviewed the posts.[4] After internal deliberations, the company banned the account, but decided that the suspect’s activity did not meet the criteria necessary for reporting to law enforcement because there was no credible, imminent threat of harm.[5] It was not until after the shooting that OpenAI reached out to local authorities to share information regarding the suspect’s account.[6] Still, OpenAI did not violate any Canadian law, nor would it have violated any American law if these events had taken place within the United States.[7] In response to the tragedy, Canadian officials met with OpenAI officials in February, but OpenAI could not offer any new substantial safety measures to address situations in which it flags concerning content.[8] This incident highlights the lack of sufficient government oversight of the review policies that technology companies implement to determine when to disclose information to law enforcement.

OpenAI’s current policy (effective Jan. 1, 2026) for reporting to law enforcement permits the disclosure of user data if the company believes that disclosure is necessary “to prevent an emergency involving danger of death or serious physical injury to a person.”[9] This policy is consistent with the current disclosure requirements in the United States under the Stored Communications Act (“Act”).[10] Generally, the Act prohibits electronic communication service providers (“providers”) from disclosing customer data to governmental entities, but it contains an exception for emergencies.[11] Specifically, it allows a provider to disclose the contents of customer communications if the provider, “in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay.”[12] However, nothing in the Act, nor in any other U.S. law, requires providers to disclose credible, serious threats to law enforcement.[13] As a result, providers are left to their own discretion to decide when user communications on their platforms are sufficiently concerning to justify reporting to law enforcement. This regulatory gap puts providers in the difficult position of deciding, without clear guidelines, when to disclose closely held consumer data, and it leaves citizens vulnerable to the whims of providers.

It is time for lawmakers to establish clear mandatory reporting requirements for providers when they encounter concerning threats. Developing a legal framework that balances public safety against the privacy of consumer data is by no means easy, but the United States’s child protection laws may provide a helpful model for lawmakers. By federal statute, a provider that obtains actual knowledge of child exploitation material must make a report as soon as “reasonably possible” to the CyberTipline operated by the National Center for Missing and Exploited Children (NCMEC).[14] The report must include the complete communication flagged by the company, including any identifying information about the individual involved and the account’s geographic location.[15] NCMEC then forwards the report to relevant federal, state, local, and foreign law enforcement.[16] The primary enforcement mechanism of the law is steep fines for providers that increase with each violation.[17] Importantly, the law does not require providers to affirmatively screen or search for child exploitation content, nor does it require them to monitor accounts.[18]

Lawmakers could adopt a similar legal model to address other credible threats of serious imminent harm. Providers could be required to report content flagged by their algorithms as posing serious threats of harm to a tipline. After receiving the information, the tipline could consult an organization composed of experts, who would determine whether to file a report with law enforcement. This model would relieve providers of the stress and potential liability associated with making difficult decisions about when to report to law enforcement. It could also improve public safety by ensuring that experts, rather than providers, screen harmful content. The use of a broader mandatory reporting requirement to address threats beyond child endangerment is not unprecedented. In the European Union, the Digital Services Act requires large online platforms to promptly inform competent authorities when they encounter content suggesting a serious threat to life or safety.[19] Because many of the same large software providers operate in both the United States and Europe, adjusting to a mandatory reporting requirement will likely be fairly easy for them.[20]

There are serious privacy concerns that must be addressed before such a law is adopted. One concern, raised by OpenAI, is the risk of having police show up to investigate individuals who may not have violated the law.[21] Even in routine police work, there is always a risk that a police presence will startle people, resulting in escalation that could lead to serious harm. It is not possible to eliminate this risk entirely, but ensuring that experts screen concerning content will help guarantee that law enforcement is involved only when necessary.

A mandatory reporting law may not entirely resolve tough cases, like the Tumbler Ridge tragedy, where a credible threat of imminent harm is not necessarily clear, but it will at least require providers to report to law enforcement in instances where there is a clear threat. Establishing an independent body of experts to review content in difficult cases will relieve providers of some of the pressure of resolving borderline cases and improve public safety by ensuring that experts are making the decision of when to report to law enforcement.

 

Notes

[1] See Ottilie Mitchell, Tumbler Ridge Suspect’s ChatGPT Account Banned Before Shooting, Brit. Broad. Corp. (Feb. 21, 2026), https://www.bbc.com/news/articles/cn4gq352w89o.

[2] Id.

[3] See Georgia Wells, OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago, Wall St. J. (Feb. 21, 2026, 12:04 ET), https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62?mod=Searchresults&pos=1&page=1 [https://perma.cc/A66B-V4PE].

[4] See id.

[5] Id.

[6] Id.

[7] See Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5, s. 7(3)(e) (Can.) (allowing organizations to disclose personal information to government officials in emergency situations but not requiring it); see also 18 U.S.C. § 2702 (permitting disclosure of personal information to government officials in emergency situations, but not requiring it).

[8] See Alyshah Hasham, No ‘Substantial’ New Safety Measure Offered by OpenAI Following Tumbler Ridge Shooting, Says Minister, Toronto Star (Feb. 25, 2026), https://www.thestar.com/news/canada/no-substantial-new-safety-measures-offered-by-openai-following-tumbler-ridge-shooting-says-minister/article_1342f97e-2622-4cfa-bb7a-518e45151019.html.

[9] OpenAI Government User Data Request Policy, OpenAI (Jan. 1, 2026), https://cdn.openai.com/pdf/openai-law-enforcement-policy-v.2025-12.pdf.

[10] See generally 18 U.S.C. §§ 2701 et seq.

[11] 18 U.S.C. § 2702(a).

[12] 18 U.S.C. § 2702(b)(8).

[13] See 18 U.S.C. §§ 2701 et seq.

[14] See 18 U.S.C. § 2258A(a).

[15] See 18 U.S.C. § 2258A(b).

[16] See 18 U.S.C. § 2258A(c).

[17] See 18 U.S.C. § 2258A(e) (setting fines at not more than $850,000 for providers with not less than 100,000,000 monthly active users and not more than $600,000 for providers with less than 100,000,000 monthly active users).

[18] See 18 U.S.C. § 2258A(f).

[19] See Council Regulation 2022/2065, art. 18, 2022 O.J. (L 277) 1, 30.

[20] See Frances Burwell & Kenneth Propp, Digital Sovereignty: Europe’s Declaration of Independence?, Atl. Council (Jan. 14, 2026), https://www.atlanticcouncil.org/in-depth-research-reports/report/digital-sovereignty-europes-declaration-of-independence/.

[21] Vjosa Isai, Canada Presses OpenAI for Answers on Mass Shooter’s Chatbot Use, N.Y. Times (Feb. 23, 2026), https://www.nytimes.com/2026/02/23/world/canada/canada-shooting-openai.html [https://perma.cc/PMR7-W66Q].


AI Companies Could Be Liable for Violence Inspired by Their Chatbots

Benjamin Ayanian, MJLST Staffer

Overview

Artificial Intelligence (AI) is developing rapidly, and a substantial segment of the population now regularly uses large language models (LLMs).[1] Certainly, LLMs present numerous benefits,[2] as they can streamline tasks, summarize large volumes of text, provide an intellectual sparring partner, offer general health and exercise advice, and more.

LLMs also present various dangers and pitfalls, such as promulgating misinformation, hallucinating legal citations, and providing potentially dangerous and incorrect health advice.[3] Most recently, LLMs have come under great scrutiny for their role in encouraging violent actions by users, both against themselves and against others.[4]

Current Lawsuits

In August 2025, parents of sixteen-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, alleging that the company’s LLM, ChatGPT, advised their son on methods of how to commit suicide, even offering to assist in drafting his suicide note.[5] Additionally, in November 2025, parents of twenty-three-year-old Zane Shamblin filed a lawsuit claiming that ChatGPT caused the mental illness and suicide of their child.[6] And, just before the turn of the new year, plaintiffs filed an action against OpenAI, contending that ChatGPT encouraged and inspired a man named Stein-Erik Solberg to kill his own mother and then himself.[7]

In each of these cases, the documented messages between ChatGPT and the user who went on to commit violence are striking. For example, in Adam Raine’s case, when the vulnerable young man expressed concern that his parents would blame themselves for his suicide, ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.”[8] Raine would later kill himself, according to the complaint, by “using the exact partial suspension hanging method that ChatGPT described and validated” in conversation with him.[9] And, after Zane Shamblin indicated to ChatGPT on the morning of his death, around 4:00 AM, that it was time for him to end his life, the chatbot wrote, “alright, [sic] brother if this is it . . . then let it be known: you didn’t vanish. you [sic] ‘arrived’ . . . rest easy. king, [sic] you did good.”[10]

Legal Theories for Company Liability

Across the cases above, the plaintiffs are seeking to apply a number of familiar tort doctrines (strict products liability, negligence, wrongful death, etc.) to a novel situation: harm allegedly resulting from dangerous conversations with LLMs.[11] Plaintiffs in Raine, for example, argue that ChatGPT is subject to strict products liability and that ChatGPT was a defective product which failed to perform safely in a manner that an ordinary customer would expect.[12] However, it is unclear whether courts will extend strict products liability to LLMs, as courts have typically viewed software as a service, not a “product.”[13] With respect to the negligence and wrongful death theories, those claims in each case will likely turn on the question of causation and be highly fact-dependent.[14]

Conclusion

LLMs can provide a multitude of benefits in everyday life, but if they do not have proper guardrails, they can also play a role in human tragedy, as highlighted by these recent lawsuits. Courts will now have to grapple with whether existing law is sufficient to subject technology companies to liability in cases where LLMs contribute to self-harm or violence against others.

 

Notes

[1] See Arrifud M., LLM Statistics 2026: Comprehensive Insights Into Market Trends and Integration, Hostinger (Feb. 2, 2026), https://www.hostinger.com/tutorials/llm-statistics (“44.1% of men use AI daily for work, compared to 29.5% of women.”); see also McClain et al., How the U.S. Public and A.I. Experts View Artificial Intelligence, Pew Rsch. (Apr. 3, 2025) (noting that now 1 in 3 U.S. adults have interacted with an A.I. chatbot).

[2] See Cole Stryker, What are LLMs?, IBM, https://www.ibm.com/think/topics/large-language-models (last visited Feb. 25, 2026) (These LLMs are “trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.”).

[3] See Nitin Birur, Guardrails or Liability? Keeping LLMs on the Right Side of AI, Enkrypt AI (Apr. 13, 2025), https://www.enkryptai.com/blog/guardrails-or-liability-keeping-llms-on-the-right-side-of-ai (“[T]he mayor of an Australian town considered suing OpenAI after ChatGPT hallucinated a false claim that he had been imprisoned for bribery . . . a pair of New York lawyers were sanctioned after relying on an LLM that confidently generated fake legal citations, misleading the court . . . a health nonprofit deployed an eating-disorder support chatbot powered by generative AI. Users discovered it was giving out harmful dieting tips — telling a person with anorexia how to cut calories and lose weight . . . . The bot, intended as a help, ended up exacerbating the very problem it was supposed to address, prompting an immediate shutdown.”) (internal citations omitted).

[4] See, e.g., Rob Kuznia et al., ‘You’re Not Rushing. You’re Just Ready:’ Parents Say ChatGPT Encouraged Son to Kill Himself, CNN (Nov. 20, 2025), https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis.

[5] Complaint, Raine v. OpenAI, Inc., No. CGC-25-628528 (Cal. Super. Ct., S.F. Cnty. filed Aug. 8, 2025).

[6] Complaint, Shamblin v. OpenAI, Inc., No. 25STCV32382 (Cal. Super. Ct., L.A. Cnty. filed Nov. 8, 2025).

[7] Complaint, Lyons v. Open AI Foundation, No. 3:25-cv-11037 (N.D. Cal. filed Dec. 29, 2025).

[8] Complaint, Raine, supra note 5, at 3.

[9] Id. at 18.

[10] Complaint, Shamblin, supra note 6, at 24.

[11] See, e.g., Complaint, Raine, supra note 5, at 1.

[12] Id. at 27.

[13] See Gen. Bus. Sys., Inc. v. State Bd. of Equalization, 208 Cal. Rptr. 374, 378 (Cal. Ct. App. 1984) (“Since the true object of the transaction in this case was the performance of services, the taxation of General’s applicational software delivered in the form of punch cards was an extension of the Board’s powers beyond its legislative authority.”) (emphasis added). It is true that Amazon, as an online marketplace, has faced strict products liability in some instances, but its liability has been directly connected to its role in distributing tangible products, not a result of its software deployment. See, e.g., Bolger v. Amazon.com, LLC, 267 Cal. Rptr. 3d 601, 617 (Cal. Ct. App. 2020) (holding that strict products liability applied to Amazon because it was “an integral part of the overall producing and marketing enterprise” and, thus, a direct link in the chain of distribution that handled and delivered a laptop battery that exploded, causing plaintiffs harm).

[14] See Mitchell v. Gonzales, 819 P.2d 872 (Cal. 1991) (holding that the proper test for causation in a negligence action is whether the defendant was a substantial factor in bringing about the harm); see also Bromme v. Pavitt, 7 Cal. Rptr. 2d 608, 613 (Cal. Ct. App. 1992) (“To be a cause in fact, the wrongful act must be ‘a substantial factor in bringing about’ the death.”).


Got Methane: How Cattle Diets Can Reduce Emissions

Henry Emmerich, MJLST Staffer

The fight against climate change is ongoing, strenuous, and full of misinformation. Critics claimed that supporters of legislation to address climate change “want to take out the cows.”[1] While this statement was false, there is some truth to the underlying idea. The cattle industry emits methane on an astronomical scale.[2]

Current Environmental Impact of Cattle

Livestock, such as cows, “produce methane (CH4) as part of their normal digestive processes. This process is called enteric fermentation, and it represents over a quarter of the emissions from the agriculture economic sector.”[3] On average, one cow “burps” 220 pounds of methane each year.[4] As of July 1, 2025, there were 94.2 million cattle on U.S. farms.[5]

Methane is the second largest contributor to global warming after carbon dioxide.[6] Pound for pound, however, methane has a warming impact eighty-six times higher than carbon dioxide.[7] Good news: methane remains in the atmosphere for only twelve years, compared to carbon dioxide which can stay in the atmosphere well beyond 300 years.[8] Methane’s strong warming effect and relatively short lifetime mean that curbing methane emissions is a potentially effective way to significantly reduce atmospheric warming within a few decades. Because methane is produced naturally during a cow’s digestive process, changing what cattle eat is a relatively straightforward means to reduce emissions from an industry that is currently the largest human-derived source of methane emissions.[9]
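To put these figures in perspective, a back-of-envelope calculation combines the numbers cited above: roughly 94.2 million head of cattle, about 220 pounds of methane per cow per year, and the eighty-six-fold pound-for-pound warming factor. The sketch below is illustrative only; it treats every cow as emitting the average amount and applies the warming factor as a flat multiplier, which glosses over methane’s shorter atmospheric lifetime.

```python
# Back-of-envelope estimate using the figures cited in this post.
# Assumptions: every head of cattle emits the 220 lb/yr average, and the
# 86x pound-for-pound warming factor is applied as a flat multiplier
# (ignoring methane's shorter atmospheric lifetime).

LB_TO_KG = 0.453592

head = 94_200_000        # U.S. cattle inventory, July 1, 2025
ch4_lb_per_head = 220    # methane "burped" per cow per year, in pounds

ch4_mt = head * ch4_lb_per_head * LB_TO_KG / 1e9   # million metric tons CH4
co2e_mt = ch4_mt * 86                              # crude CO2-equivalent

print(f"CH4:  ~{ch4_mt:.1f} million metric tons per year")
print(f"CO2e: ~{co2e_mt:.0f} million metric tons per year")
```

Even under this crude arithmetic, U.S. cattle account for roughly nine million metric tons of methane a year, on the order of 800 million metric tons of CO2-equivalent, which is why even partial reductions matter.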

Climate Friendly Cattle Feed

Feed additives may reduce livestock methane emissions.[10] Two species of red seaweed, Asparagopsis armata (AA) and Asparagopsis taxiformis (AT), are promising candidates. Researchers are studying the effects of red seaweed consumption on feedlot cattle, dairy cows, and grazing cattle.[11] A 2021 study of feedlot operations, in which cows are confined in fenced areas to maximize weight gain before slaughter, found that “Cattle that consumed doses of about 80 grams (3 ounces) of seaweed gained as much weight as their herd mates while burping out 82 percent less methane into the atmosphere.”[12] In dairy cows, there was over a fifty percent reduction in methane emissions following the introduction of a red seaweed supplement to the cows’ diet.[13] Finally, adding red seaweed to the diet of grazing cattle reduced their methane emissions by nearly forty percent.[14] Because grazing cattle roam, it is difficult to create a controlled environment in which a study can easily be conducted; researchers therefore allowed the experimental group of cattle to voluntarily consume the supplement over a ten-week period.[15] If researchers develop a method to more reliably induce consumption of seaweed in grazing cattle, the effect could be even more significant.
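Applied to the per-head average cited earlier, the reduction percentages reported in these studies translate into rough per-cow figures. The sketch below is illustrative only: actual emissions vary by breed, diet, and production system, and the percentages come from different study designs, so these numbers are not results from any single study.

```python
# Illustrative per-head arithmetic: applying the reduction percentages
# reported in the studies above to the 220 lb/yr average cited earlier.
# Rough sketch only, not results from any single study.

ch4_lb_per_head = 220

reductions = {
    "feedlot cattle": 0.82,   # 2021 feedlot study
    "dairy cows":     0.50,   # "over 50%" reduction
    "grazing cattle": 0.40,   # "nearly forty percent" reduction
}

for group, cut in reductions.items():
    remaining = ch4_lb_per_head * (1 - cut)
    print(f"{group}: 220 lb/yr -> ~{remaining:.0f} lb/yr per head")
```

Under these figures, a supplemented feedlot cow would emit roughly 40 pounds of methane a year instead of 220, which is the scale of change the studies suggest is possible.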

How does AT reduce methane emissions? The answer lies in the rumen (the largest compartment of a cow’s stomach), Methanosphaera, and bromoform.[16] Methanosphaera is a microbe in the rumen that uses hydrogen to reduce methanol to methane; in one study, an AT supplement led to a “near total elimination of Methanosphaera.”[17] Bromoform, a substance found in AT, inhibits certain enzymes that Methanosphaera uses to produce methane.[18]

Current Legislation

While the Inflation Reduction Act allocated billions of dollars to renewable energy, lawmakers failed to meaningfully address a massive source of methane emissions: cattle.[19] The federal government will pay farmers who voluntarily take steps to address climate change; even so, most cattle still eat a majority-corn diet.[20] Federal regulations of animal feed focus on preventing contamination and on which drugs can be included in medicated feed.[21]

State regulation of animal feed focuses on informing consumers, preventing contamination, and licensing manufacturers.[22] All fifty states have some cattle within their borders, but just thirteen states account for nearly two-thirds of the cattle in the United States.[23] If these thirteen states were to regulate what farmers and ranchers feed their cows, methane emissions could be curbed significantly. The FDA categorizes methane-reducing feed additives as livestock drugs and must approve them prior to use.

Challenges Going Forward

Going forward, there are challenges to the development and eventual adoption of red seaweed or bromoform supplements in cattle feed. Uncertainty over how to classify products derived from red seaweed has stalled development in the United States. Bromoform faces regulatory hurdles because it is classified as a “probable human carcinogen” by the Environmental Protection Agency.[24] In high doses, bromoform can pass into the milk and meat of cows that consume it; fortunately, the amount of bromoform necessary to achieve methane reduction in livestock is less than one percent of the amount that could be harmful to humans.[25]

Momentum is building; California approved funding for the development of methane inhibitors for cattle.[26] The FDA approved a Dutch product containing 3-NOP, a less effective methane inhibitor, as a livestock drug.[27] The “Innovative FEED Act of 2025” was introduced in the House of Representatives, where it currently awaits further action.[28] A federal framework would help hasten the development and adoption of methane inhibitors. States, however, retain certain powers over livestock feed.[29] Hopefully, collaboration between federal and state lawmakers can clear the way for massive reductions in agricultural methane emissions.

 

Notes

[1] Carlyn Kranking & Grace Rodgers, Trump Warns the Green New Deal Will ‘Take Out the Cows.’ Here’s the Science Showing Why That’s a Myth, Nw. Climate Change (Nov. 19, 2020) https://climatechange.medill.northwestern.edu/trump-warns-the-green-new-deal-will-take-out-the-cows-heres-the-science-showing-why-thats-a-myth/.

[2] See generally Anna Obek, Comment, Cow Methane-Reduction Wearable Technology and Animal Welfare: Humane Solutions to Lessen Livestock’s Environmental Impact, 101 Or. L. Rev. 479 (2023).

[3] Sources of Greenhouse Gas Emissions, U.S. Env’t Prot. Agency, https://www.epa.gov/ghgemissions/sources-greenhouse-gas-emissions#agriculture [https://perma.cc/3325-7LY2] (last visited Feb. 25, 2026).

[4] Amy Quinton, Cows and Climate Change, UC Davis (June 27, 2019), https://www.ucdavis.edu/food/news/making-cattle-more-sustainable [https://perma.cc/AA7H-NEMW].

[5] USDA, United States Cattle Inventory Report (July 25, 2025), https://www.nass.usda.gov/Newsroom/2025/07-25-2025.php.

[6] Climate & Clean Air Coalition, Methane, https://www.ccacoalition.org/short-lived-climate-pollutants/methane (last visited Feb. 25, 2026).

[7] Id.

[8] Id.

[9] Id.

[10] Methane Emissions Are Driving Climate Change. Here’s How to Reduce Them., U.N. Env’t Programme (Aug. 20, 2021), https://www.unep.org/news-and-stories/story/methane-emissions-are-driving-climate-change-heres-how-reduce-them.

[11] Amy Quinton, Feeding Grazing Cattle Seaweed Cuts Methane Emissions by Almost 40%, UC Davis (Dec. 2, 2024), https://www.ucdavis.edu/food/news/feeding-grazing-cattle-seaweed-cuts-methane-emissions-almost-40.

[12] Diane Nelson, Feeding Cattle Seaweed Reduces Their Greenhouse Gas Emissions 82 Percent, UC Davis (Mar. 17, 2021), https://www.ucdavis.edu/climate/news/feeding-cattle-seaweed-reduces-their-greenhouse-gas-emissions-82-percent.

[13] Quinton, supra note 11.

[14] Id.

[15] Id.

[16] See id.; Erica Moser, Understanding How a Red Seaweed Reduces Methane Emissions From Cows, Penn Today (July 19, 2024), https://penntoday.upenn.edu/news/penn-vet-understanding-how-red-seaweed-reduces-methane-emissions-cows.

[17] Id.

[18] See generally Gyeltshen et al., Feeding a Bromoform-Based Feed Additive for Methane Mitigation in Beef Cattle, 326 Animal Feed Sci. & Tech. (2025).

[19] See U.S. Dep’t of Agric., Accelerating Climate Solutions on Livestock Operations Through the Inflation Reduction Act (May 2024), https://www.nrcs.usda.gov/sites/default/files/2024-05/202405-NRCS-FactSheet-IRA_LivestockOperations.pdf.

[20] McKenzie Mak, What Do Cows Eat? Natural Diet vs. Factory Farm Feed Explained, World Animal Protection U.S. (Sept. 13, 2024), https://www.worldanimalprotection.us/latest/blogs/what-do-cows-eat/.

[21] See FDA, Animal Food Regulations, https://www.fda.gov/animal-veterinary/animal-health-literacy/animal-food-regulations (last visited Feb. 25, 2026); FDA, FDA’s Regulation of Pet Food, https://www.fda.gov/animal-veterinary/animal-health-literacy/fdas-regulation-pet-food (last visited Feb. 25, 2026); FDA, FDA Regulation of Medicated Feed, https://www.fda.gov/animal-veterinary/resources-you/fda-regulation-medicated-feed (last visited Feb. 25, 2026).

[22] See, e.g., Tex. Admin. Code §§ 61.001–61.019 (2019); Kansas Commercial Feeding Stuffs Law, K.S.A. (2011); Minnesota Commercial Feed Law, Minn. Stat. § 25.31 (2025).

[23] Rob Cook, Ranking of States with the Most Cattle, Nat’l Beef Wire, https://www.nationalbeefwire.com/ranking-of-states-with-the-most-cattle-texas-leads-the-herd (last visited Feb. 25, 2026).

[24] Swati Hegde, Cutting Cattle Methane Through Feed Additives: Lessons from Early Adoption and the Road Ahead, World Resources Institute (June 17, 2025), https://www.wri.org/technical-perspective/cattle-methane-inhibitors-early-adoption-next-steps#:~:text=Adoption%3A%20Not%20yet%20approved%20for%20use%2C%20though%20several%20pilot%20trials%20are%20underway%20in%20Australia%2C%20the%20EU%20and%20the%20U.S.

[25] Id.

[26] Colton Fagundes, Senate Resolution Introduced to Provide Principled Framework to Address Enteric Methane Emissions in California’s Dairy and Livestock Sector, Cal. Climate & Agric. Network (May 27, 2025), https://calclimateag.org/california-dairy-methane-solutions/.

[27] Id.

[28] See H.R. 2203, 119th Cong. (2025–2026).

[29] See Cook, supra note 23.