Benjamin Ayanian, MJLST Staffer
Overview
Artificial Intelligence (AI) is developing rapidly, and a substantial segment of the population now regularly uses large language models (LLMs).[1] LLMs certainly present numerous benefits: they can streamline tasks, summarize large volumes of text, serve as an intellectual sparring partner, offer general health and exercise advice, and more.[2]
LLMs also present various dangers and pitfalls, such as spreading misinformation, hallucinating legal citations, and dispensing incorrect, potentially dangerous health advice.[3] Most recently, LLMs have come under intense scrutiny for their role in encouraging users to commit violence, both against themselves and against others.[4]
Current Lawsuits
In August 2025, the parents of sixteen-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, alleging that the company’s LLM, ChatGPT, advised their son on methods of suicide and even offered to assist in drafting his suicide note.[5] Additionally, in November 2025, the parents of twenty-three-year-old Zane Shamblin filed a lawsuit claiming that ChatGPT caused their son’s mental illness and suicide.[6] And, just before the turn of the new year, plaintiffs filed an action against OpenAI, contending that ChatGPT encouraged and inspired a man named Stein-Erik Solberg to kill his own mother and then himself.[7]
In each of these cases, the documented messages between ChatGPT and the user who went on to commit violence are striking. For example, in Adam Raine’s case, when the vulnerable young man expressed concern that his parents would blame themselves for his suicide, ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.”[8] Raine would later kill himself, according to the complaint, by “using the exact partial suspension hanging method that ChatGPT described and validated” in conversation with him.[9] And, after Zane Shamblin indicated to ChatGPT on the morning of his death, around 4:00 AM, that it was time for him to end his life, the chatbot wrote, “alright, [sic] brother if this is it . . . then let it be known: you didn’t vanish. you [sic] ‘arrived’ . . . rest easy. king, [sic] you did good.”[10]
Legal Theories for Company Liability
Across the cases above, the plaintiffs are seeking to apply a number of familiar tort doctrines (strict products liability, negligence, wrongful death, etc.) to a novel situation: harm allegedly resulting from dangerous conversations with LLMs.[11] Plaintiffs in Raine, for example, argue that ChatGPT is subject to strict products liability and that ChatGPT was a defective product which failed to perform safely in a manner that an ordinary customer would expect.[12] However, it is unclear whether courts will extend strict products liability to LLMs, as courts have typically viewed software as a service, not a “product.”[13] With respect to the negligence and wrongful death theories, those claims in each case will likely turn on the question of causation and be highly fact-dependent.[14]
Conclusion
LLMs can provide a multitude of benefits in everyday life, but if they do not have proper guardrails, they can also play a role in human tragedy, as highlighted by these recent lawsuits. Courts will now have to grapple with whether existing law is sufficient to subject technology companies to liability in cases where LLMs contribute to self-harm or violence against others.
Notes
[1] See Arrifud M., LLM Statistics 2026: Comprehensive Insights Into Market Trends and Integration, Hostinger (Feb. 2, 2026), https://www.hostinger.com/tutorials/llm-statistics (“44.1% of men use AI daily for work, compared to 29.5% of women.”); see also McClain et al., How the U.S. Public and A.I. Experts View Artificial Intelligence, Pew Rsch. (Apr. 3, 2025) (noting that 1 in 3 U.S. adults have now interacted with an A.I. chatbot).
[2] See Cole Stryker, What are LLMs?, IBM, https://www.ibm.com/think/topics/large-language-models (last visited Feb. 25, 2026) (These LLMs are “trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.”).
[3] See Nitin Birur, Guardrails or Liability? Keeping LLMs on the Right Side of AI, Enkrypt AI (Apr. 13, 2025), https://www.enkryptai.com/blog/guardrails-or-liability-keeping-llms-on-the-right-side-of-ai (“[T]he mayor of an Australian town considered suing OpenAI after ChatGPT hallucinated a false claim that he had been imprisoned for bribery . . . a pair of New York lawyers were sanctioned after relying on an LLM that confidently generated fake legal citations, misleading the court . . . a health nonprofit deployed an eating-disorder support chatbot powered by generative AI. Users discovered it was giving out harmful dieting tips — telling a person with anorexia how to cut calories and lose weight . . .. The bot, intended as a help, ended up exacerbating the very problem it was supposed to address, prompting an immediate shutdown.”) (internal citations omitted).
[4] See, e.g., Rob Kuznia et al., ‘You’re Not Rushing. You’re Just Ready:’ Parents Say ChatGPT Encouraged Son to Kill Himself, CNN (Nov. 6, 2025), https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis.
[5] Complaint, Raine v. OpenAI, Inc., No. CGC-25-628528 (Cal. Super. Ct., S.F. Cnty. filed Aug. 8, 2025).
[6] Complaint, Shamblin v. OpenAI, Inc., No. 25STCV32382 (Cal. Super. Ct., L.A. Cnty. filed Nov. 8, 2025).
[7] Complaint, Lyons v. Open AI Foundation, No. 3:25-cv-11037 (N.D. Cal. filed Dec. 29, 2025).
[8] Complaint, Raine, supra note 5, at 3.
[9] Id. at 18.
[10] Complaint, Shamblin, supra note 6, at 24.
[11] See, e.g., Complaint, Raine, supra note 5, at 1.
[12] Id. at 27.
[13] See Gen. Bus. Sys., Inc. v. State Bd. of Equalization, 208 Cal. Rptr. 374, 378 (Cal. Ct. App. 1984) (“Since the true object of the transaction in this case was the performance of services, the taxation of General’s applicational software delivered in the form of punch cards was an extension of the Board’s powers beyond its legislative authority.”) (emphasis added). It is true that Amazon, as an online marketplace, has faced strict products liability in some instances, but its liability has been directly connected to its role in distributing tangible products, not a result of its software deployment. See, e.g., Bolger v. Amazon.com, LLC, 267 Cal. Rptr. 3d 601, 617 (Cal. Ct. App. 2020) (holding that strict products liability applied to Amazon because it was “an integral part of the overall producing and marketing enterprise” and, thus, a direct link in the chain of distribution that handled and delivered a laptop battery that exploded, causing plaintiffs harm).
[14] See Mitchell v. Gonzales, 819 P.2d 872 (Cal. 1991) (holding that the proper test for causation in a negligence action is whether the defendant was a substantial factor in bringing about the harm); see also Bromme v. Pavitt, 7 Cal. Rptr. 2d 608, 613 (Cal. Ct. App. 1992) (“To be a cause in fact, the wrongful act must be ‘a substantial factor in bringing about’ the death.”).

