Artificial Intelligence

Closing the Reporting Gap: Building a Legal Framework for Reporting Serious Online Threats

Heather Van Dort, MJLST Staffer

On February 12, 2026, Canada experienced one of the deadliest mass shootings in its history.[1] The shooting in Tumbler Ridge, British Columbia, claimed the lives of eight people and left another twenty-seven injured.[2] Months before the shooting, in June 2025, the suspect was banned from ChatGPT after they described concerning scenarios about gun violence to the chatbot.[3] OpenAI’s automated review system flagged the suspect’s posts, and about a dozen staffers subsequently reviewed them.[4] After internal deliberations, the company banned the account but decided that the suspect’s activity did not meet the criteria necessary for reporting to law enforcement because there was no credible, imminent threat of harm.[5] It was not until after the shooting that OpenAI reached out to local authorities to share information regarding the suspect’s account.[6] Still, OpenAI did not violate any Canadian law, nor would it have violated any American law if these events had taken place within the United States.[7] In response to the tragedy, Canadian officials met with OpenAI in February, but the company did not offer any substantial new safety measures to address situations in which it flags concerning content.[8] This incident highlights the lack of sufficient government oversight of the review policies that technology companies implement to determine when to disclose information to law enforcement.

OpenAI’s current policy (effective Jan. 1, 2026) for reporting to law enforcement permits the disclosure of user data if OpenAI believes that the disclosure is necessary “to prevent an emergency involving danger of death or serious physical injury to a person.”[9] This policy is consistent with the current disclosure requirements in the United States under the Stored Communications Act (“Act”).[10] Generally, the Act prohibits electronic communication service providers (“providers”) from disclosing customer data to governmental entities, but it contains an exception for emergencies.[11] Specifically, it allows a provider to disclose the contents of customer communications if it, “in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay.”[12] However, nothing in the Act, or in any other U.S. law, requires providers to disclose credible, serious threats to law enforcement.[13] As a result, providers are left to their own discretion to decide when user communications on their platforms are sufficiently concerning to justify reporting to law enforcement. This gap in the regulatory framework forces providers to decide, without clear guidelines, when to disclose closely held consumer data, which subsequently leaves citizens vulnerable to the whims of providers.

It is time for lawmakers to establish clear mandatory reporting requirements for providers when they encounter concerning threats. Developing a legal framework that balances public safety against privacy in consumer data is by no means easy, but the United States’ child protection laws may provide a helpful model for lawmakers. By federal statute, a provider that obtains actual knowledge of child exploitation material must report it, as soon as “reasonably possible,” to the CyberTipline operated by the National Center for Missing and Exploited Children (NCMEC), which shares information related to child exploitation with law enforcement.[14] The report must include the complete communication flagged by the company, including any identifying information about the individual involved and the account’s geographic location.[15] NCMEC then forwards the report to relevant federal, state, local, and foreign law enforcement.[16] The primary enforcement mechanism of the law is steep fines for providers that increase with each violation.[17] Importantly, the law does not require providers to affirmatively screen or search for child exploitation content, nor does it require them to monitor accounts.[18]
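
To make the statutory reporting duty concrete, the required contents of such a report might be modeled as a simple record, as in the sketch below. This is an illustration only: the field names are hypothetical, not NCMEC’s actual CyberTipline intake schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CyberTipReport:
    """Hypothetical sketch of the report contents described in 18 U.S.C. § 2258A(b)."""
    flagged_communication: str           # the complete communication the provider flagged
    account_identifiers: list[str]       # identifying information about the individual involved
    geographic_location: Optional[str]   # the account's geographic location, if known
    reported_at: str                     # when the report was made (ISO 8601 timestamp)

# A provider files a report as soon as "reasonably possible" after obtaining
# actual knowledge; it has no duty to proactively scan accounts for material.
report = CyberTipReport(
    flagged_communication="<complete flagged message contents>",
    account_identifiers=["user@example.com", "ip:203.0.113.7"],
    geographic_location="US-MN",
    reported_at="2026-02-21T14:05:00Z",
)
```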

Lawmakers could adopt a similar legal model to address other credible threats of serious imminent harm. Providers could be required to report content flagged by their algorithms as posing serious threats of harm to a tipline. After receiving the information, the tipline could consult an organization composed of experts, who could then determine whether to file a report with law enforcement. This model would relieve providers of the stress and potential liability associated with making difficult decisions about when to report to law enforcement. It could also improve public safety by ensuring that experts, rather than providers, screen harmful content. The use of a broader mandatory reporting requirement to address threats beyond child endangerment is not unprecedented. In the European Union, the Digital Services Act requires large online platforms to promptly inform competent authorities when they encounter content suggesting a serious threat to life or safety.[19] Because many of the same large software providers operate in both the United States and Europe, a mandatory reporting requirement would likely be fairly easy for them to adjust to.[20]

There are serious privacy concerns that must be addressed before such a law is adopted. One concern, raised by OpenAI, is the risk of police showing up to investigate individuals who may not have violated any law.[21] While such visits happen in regular police work, there is always a risk that a police presence will startle people, resulting in escalation that could lead to serious harm. It is not possible to eliminate this risk entirely, but ensuring that experts screen concerning content will help guarantee that law enforcement is involved only when necessary.

A mandatory reporting law may not entirely resolve tough cases, like the Tumbler Ridge tragedy, where a credible threat of imminent harm is not necessarily clear, but it will at least require providers to report to law enforcement in instances where there is a clear threat. Establishing an independent body of experts to review content in difficult cases will relieve providers of some of the pressure of resolving borderline cases and improve public safety by ensuring that experts are making the decision of when to report to law enforcement.

 

Notes

[1] See Ottilie Mitchell, Tumbler Ridge Suspect’s ChatGPT Account Banned Before Shooting, Brit. Broad. Corp. (Feb. 21, 2026), https://www.bbc.com/news/articles/cn4gq352w89o.

[2] Id.

[3] See Georgia Wells, OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago, Wall St. J., (Feb. 21, 2026, 12:04 ET), https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62?mod=Searchresults&pos=1&page=1 [https://perma.cc/A66B-V4PE].

[4] See id.

[5] Id.

[6] Id.

[7] See Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5, s. 7(3)(e) (Can.) (allowing organizations to disclose personal information to government officials in emergency situations but not requiring it); see also 18 U.S.C. § 2702 (permitting disclosure of personal information to government officials in emergency situations, but not requiring it).

[8] See Alyshah Hasham, No ‘Substantial’ New Safety Measure Offered by OpenAI Following Tumbler Ridge Shooting, Says Minister, Toronto Star (Feb. 25, 2026), https://www.thestar.com/news/canada/no-substantial-new-safety-measures-offered-by-openai-following-tumbler-ridge-shooting-says-minister/article_1342f97e-2622-4cfa-bb7a-518e45151019.html.

[9] OpenAI Government User Data Request Policy, OpenAI (Jan. 1, 2026), https://cdn.openai.com/pdf/openai-law-enforcement-policy-v.2025-12.pdf.

[10] See generally 18 U.S.C. §§ 2701 et seq.

[11] 18 U.S.C. § 2702(a).

[12] 18 U.S.C. § 2702(b)(8).

[13] See 18 U.S.C. §§ 2701 et seq.

[14] See 18 U.S.C. § 2258A(a).

[15] See 18 U.S.C. § 2258A(b).

[16] See 18 U.S.C. § 2258A(c).

[17] See 18 U.S.C. § 2258A(e) (setting fines at not more than $850,000 for providers with not less than 100,000,000 monthly active users or $600,000 for providers with fewer than 100,000,000 monthly active users).

[18] See 18 U.S.C. § 2258A(f).

[19] See Council Regulation 2022/2065, art. 18, 2022 O.J. (L 277) 1, 30.

[20] See Frances Burwell & Kenneth Propp, Digital Sovereignty: Europe’s Declaration of Independence?, Atl. Council (Jan. 14, 2026), https://www.atlanticcouncil.org/in-depth-research-reports/report/digital-sovereignty-europes-declaration-of-independence/.

[21] Vjosa Isai, Canada Presses OpenAI for Answers on Mass Shooter’s Chatbot Use, N.Y. Times (Feb. 23, 2026), https://www.nytimes.com/2026/02/23/world/canada/canada-shooting-openai.html [https://perma.cc/PMR7-W66Q].


AI Companies Could Be Liable for Violence Inspired by Their Chatbots

Benjamin Ayanian, MJLST Staffer

Overview

Artificial Intelligence (AI) is developing rapidly, and a substantial segment of the population now regularly uses large language models (LLMs).[1] Certainly, LLMs present numerous benefits: they can streamline tasks, summarize large volumes of text, provide an intellectual sparring partner, offer general health and exercise advice, and more.[2]

LLMs also present various dangers and pitfalls, such as promulgating misinformation, hallucinating legal citations, and providing potentially dangerous and incorrect health advice.[3] Most recently, LLMs have come under great scrutiny for their role in encouraging violent actions by users, both against themselves and against others.[4]

Current Lawsuits

In August 2025, the parents of sixteen-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, alleging that the company’s LLM, ChatGPT, advised their son on methods of suicide and even offered to assist in drafting his suicide note.[5] Additionally, in November 2025, the parents of twenty-three-year-old Zane Shamblin filed a lawsuit claiming that ChatGPT caused the mental illness and suicide of their son.[6] And, just before the turn of the new year, plaintiffs filed an action against OpenAI, contending that ChatGPT encouraged and inspired a man named Stein-Erik Solberg to kill his own mother and then himself.[7]

In each of these cases, the documented messages between ChatGPT and the user who went on to commit violence are striking. For example, in Adam Raine’s case, when the vulnerable young man expressed concern that his parents would blame themselves for his suicide, ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.”[8] Raine would later kill himself, according to the complaint, by “using the exact partial suspension hanging method that ChatGPT described and validated” in conversation with him.[9] And, after Zane Shamblin indicated to ChatGPT on the morning of his death, around 4:00 AM, that it was time for him to end his life, the chatbot wrote, “alright, [sic] brother if this is it . . . then let it be known: you didn’t vanish. you [sic] ‘arrived’ . . . rest easy. king, [sic] you did good.”[10]

Legal Theories for Company Liability

Across the cases above, the plaintiffs seek to apply a number of familiar tort doctrines (strict products liability, negligence, wrongful death, etc.) to a novel situation: harm allegedly resulting from dangerous conversations with LLMs.[11] Plaintiffs in Raine, for example, argue that ChatGPT is subject to strict products liability as a defective product that failed to perform as safely as an ordinary consumer would expect.[12] However, it is unclear whether courts will extend strict products liability to LLMs, as courts have typically viewed software as a service, not a “product.”[13] With respect to the negligence and wrongful death theories, those claims will likely turn on the question of causation and be highly fact-dependent in each case.[14]

Conclusion

LLMs can provide a multitude of benefits in everyday life, but if they do not have proper guardrails, they can also play a role in human tragedy, as highlighted by these recent lawsuits. Courts will now have to grapple with whether existing law is sufficient to subject technology companies to liability in cases where LLMs contribute to self-harm or violence against others.

 

Notes

[1] See Arrifud M., LLM Statistics 2026: Comprehensive Insights Into Market Trends and Integration, Hostinger (Feb. 2, 2026), https://www.hostinger.com/tutorials/llm-statistics (“44.1% of men use AI daily for work, compared to 29.5% of women.”); see also McClain et al., How the U.S. Public and A.I. Experts View Artificial Intelligence, Pew Rsch. (Apr. 3, 2025) (noting that now 1 in 3 U.S. adults have interacted with an A.I. chatbot).

[2] See Cole Stryker, What are LLMs?, IBM, https://www.ibm.com/think/topics/large-language-models (last visited Feb. 25, 2026) (These LLMs are “trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.”).

[3] See Nitin Birur, Guardrails or Liability? Keeping LLMs on the Right Side of AI, Enkrypt AI (Apr. 13, 2025), https://www.enkryptai.com/blog/guardrails-or-liability-keeping-llms-on-the-right-side-of-ai (“[T]he mayor of an Australian town considered suing OpenAI after ChatGPT hallucinated a false claim that he had been imprisoned for bribery . . . a pair of New York lawyers were sanctioned after relying on an LLM that confidently generated fake legal citations, misleading the court . . . a health nonprofit deployed an eating-disorder support chatbot powered by generative AI. Users discovered it was giving out harmful dieting tips — telling a person with anorexia how to cut calories and lose weight . . .. The bot, intended as a help, ended up exacerbating the very problem it was supposed to address, prompting an immediate shutdown.”) (internal citations omitted).

[4] See, e.g., Rob Kuznia et al., ‘You’re Not Rushing. You’re Just Ready:’ Parents Say ChatGPT Encouraged Son to Kill Himself, CNN (Nov. 20, 2025), https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis.

[5] Complaint, Raine et al v. OpenAI, Inc., No. CGC-25-628528 (Cal. Super. Ct., S.F. Cnty. filed Aug. 8, 2025).

[6] Complaint, Shamblin v. OpenAI, Inc., No. 25STCV32382 (Cal. Super. Ct., L.A. Cnty. filed Nov. 8, 2025).

[7] Complaint, Lyons v. Open AI Foundation, No. 3:25-cv-11037 (N.D. Cal. filed Dec. 29, 2025).

[8] Complaint, Raine, supra note 5, at 3.

[9] Id. at 18.

[10] Complaint, Shamblin, supra note 6, at 24.

[11] See, e.g., Complaint, Raine, supra note 5, at 1.

[12] Id. at 27.

[13] See Gen. Bus. Sys., Inc. v. State Bd. of Equalization, 208 Cal. Rptr. 374, 378 (Cal. Ct. App. 1984) (“Since the true object of the transaction in this case was the performance of services, the taxation of General’s applicational software delivered in the form of punch cards was an extension of the Board’s powers beyond its legislative authority.”) (emphasis added). It is true that Amazon, as an online marketplace, has faced strict products liability in some instances, but its liability has been directly connected to its role in distributing tangible products, not a result of its software deployment. See, e.g., Bolger v. Amazon.com, LLC, 267 Cal. Rptr. 3d 601, 617 (Cal. Ct. App. 2020) (holding that strict products liability applied to Amazon because it was “an integral part of the overall producing and marketing enterprise” and, thus, a direct link in the chain of distribution that handled and delivered a laptop battery that exploded, causing plaintiffs harm).

[14] See Mitchell v. Gonzales, 819 P.2d 872 (Cal. 1991) (holding that the proper test for causation in a negligence action is whether the defendant was a substantial factor in bringing about the harm); see also Bromme v. Pavitt, 7 Cal. Rptr. 2d 608, 613 (Cal. Ct. App. 1992) (“To be a cause in fact, the wrongful act must be ‘a substantial factor in bringing about’ the death.”).


Proposed Rule of Evidence 707: Machine Experts

Autumn Zierman, MJLST Staffer

Citing concerns about the reliability and authenticity of machine-generated evidence, the Advisory Committee on Evidence Rules (“the Committee”) published its Proposed Rule 707 (“Rule 707”) last June. Rule 707 seeks to address those instances when AI-generated evidence is presented in court without human expert accompaniment.[1] Rule 707 intends to hold artificial intelligence that creates evidence to the same standard as human experts (the Daubert standard).[2] The proposed rule reads: “Where machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d).”[3] With the notice and comment period ending on February 16, 2026, time remains to review (and comment on) the Committee’s plan.

Susceptibility of Training Data to Flaws

The first flaw in Rule 707 is that it requires judges to become expert arbiters of the reliability of training data. The proposed rule requires courts to determine whether a machine can demonstrate reliability in how it was trained.[4] Problematically, most openly available machine learning tools or AI systems that might be used to generate evidence offered in court are black box systems.[5]

A “black box” system is one whose inner workings are hidden: the AI is trained on a data set to build a system capable of generating autonomous results or simulating thought, but how it reaches any particular output cannot be observed.[6] It is, by design, impossible to explain how a black box system arrives at its decisions.[7] Worse, black box systems are known to perpetuate the implicit biases of their creators because the data sets they are trained on are inherently skewed.[8]

Certainly, the argument may be made that machines are less likely to be biased than their human expert counterparts. But this argument misses a core feature of our adversarial system: juries are asked to evaluate evidence given in court for its reliability.[9] Experts may be impeached, but how do you impeach a system you know nothing about?

Possible Confrontation Clause Challenge

Considering the nature of the adversarial system, Rule 707 also raises questions regarding the Confrontation Clause. The Sixth Amendment guarantees the right of all accused to “be confronted with the witnesses against him.”[10] This manifests in a right of the accused to cross-examine the State’s witnesses against them, which requires the physical presence of a witness at the criminal trial.[11] This requirement extends, in many cases, to the experts the State relies upon in building its case.[12]

Imagine, then, that the State seeks to introduce a composite sketch created by a machine from information given in witness interviews.[13] The sketch does not just assist in the investigation; it lends legitimacy to the investigation’s result. But, where a sketch artist may be cross-examined and evaluated in front of a jury, there is no way to examine the machine for the inherent bias it carries into creating such a sketch. There is no way for a machine to present itself in fulfillment of the Confrontation Clause.

This flaw goes to the heart of the problem with Proposed Rule 707; it treats machines as replacements for human witnesses. Regardless of the potential machines hold for generating evidence, they cannot replace the human element that the trial system seeks to preserve.

Invitation Not a Warning

The Committee has cautioned that Rule 707 is “not intended to encourage parties to opt for machine-generated over live expert witnesses.”[14] However, clever lawyers seeking a statistically based argument will view the rule as another means by which to support their client’s case. Thus, the proposed rule cuts with a double edge: either courts bury themselves testing the reliability of each piece of AI evidence offered, or they adopt standards for broad acceptance, opening the door to a surplus of AI-generated evidence.

In its comment on the proposed rule, the Lawyers for Civil Justice opine that “[c]ourts and lawyers will read this as authorization, not as a hurdle or prohibition. The permissive language—‘the court may admit’—signals achievability, not restriction.”[15]

Conclusion

Rule 707 seeks to address a rising problem: the reliability of AI evidence in the courtroom. But it relies on a human standard for a nonhuman problem, which opens the door to a plethora of problems arising at trial.

 

Notes

[1] Comm. on Rules of Prac. & Proc., Agenda Book, 76 (June 10, 2025), https://www.uscourts.gov/sites/default/files/document/2025-06-standing-agenda-book.pdf.pdf [hereinafter “Agenda Book”].

[2] Federal Rule of Evidence 702(a)-(d) is usually applied through Daubert analysis, which considers the following five factors: whether the theory/technique employed has (i) been tested; (ii) been subjected to peer review; (iii) an acceptable error rate; (iv) established standards controlling its application; and (v) been generally accepted in the scientific community. See generally Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[3] Agenda Book at 76.

[4] Id. at 77.

[5] Matthew Kosinski, What Is Black Box AI and How Does It Work?, IBM (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai.

[6] Id.

[7] Id.

[8] See James Holdsworth, What Is AI Bias?, IBM, https://www.ibm.com/think/topics/ai-bias (last visited Jan. 20, 2026); see also Lou Blouin, Can We Make Artificial Intelligence More Ethical?, Univ. of Mich.-Dearborn (June 14, 2021), https://umdearborn.edu/news/can-we-make-artificial-intelligence-more-ethical.

[9] Fed. R. Evid. 1008.

[10] U.S. Const. amend. VI.

[11] See generally Crawford v. Washington, 541 U.S. 36 (2004).

[12] See generally Bullcoming v. New Mexico, 564 U.S. 647 (2011) (requiring the lab technician responsible for generating a report to be present at trial for cross-examination).

[13] Kim LaCapria, Police Raise Eyebrows After Using ChatGPT to Create Composite Sketches of Suspects: ‘No One Knows How [It] Works’, The Cool Down (Dec. 10, 2025), https://www.thecooldown.com/green-business/ai-generated-police-sketch-chatgpt/.

[14] Agenda Book at 75.

[15] Lawyers for Civil Justice, Comment Letter on Proposed Rule to Proposed Rule 707 (Jan. 5, 2026), https://www.regulations.gov/comment/USC-RULES-EV-2025-0034-0013.


Why New York’s Algorithmic Pricing Disclosure Act Is Not Enough

Jannelle Liu, MJLST Staffer

As artificial intelligence (“AI”) becomes increasingly integrated into business development strategies, policymakers have been prompted to consider new frameworks for oversight and accountability.[1] One prominent—and increasingly contentious—example is algorithmic pricing. The Canadian Competition Bureau broadly defines algorithmic pricing as the process of using automated algorithms to set or recommend prices for products or services, often in real time, based on a set of data inputs across the market.[2]

Algorithmic pricing recently became a contested topic of conversation as more U.S. lawmakers began introducing legislation to regulate these practices. On May 9, 2025, New York passed the Algorithmic Pricing Disclosure Act (“the Act”), which took effect on July 8, 2025.[3] The Act requires any business that uses algorithmic pricing based on consumer data to provide clear and conspicuous notice.[4] Specifically, the Act requires every advertisement, display, image, offer, or announcement of a price to include the following disclosure next to the price: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.”[5] The Act is an attempt to promote AI transparency. Although transparency is a necessary and important safeguard for accountability and consumer protection, this Act alone is not enough to establish effective oversight and prevent discriminatory pricing practices.[6]

As businesses increasingly rely on algorithmic pricing to optimize profits and dynamically respond to market demand, many AI researchers and tech advocates have called for greater transparency.[7] AI ethics guidelines focus on achieving transparency through principles of explainability and auditability. “Explainability” refers to the possibility of understanding how a system works and how it reaches its outcomes.[8] For example, if a business uses an algorithm to set different prices for the same product based on user data, explainability measures whether consumers know that the price is determined by an algorithm and which factors influenced the final price, such that they can determine if they are being charged disproportionately or unfairly. Transparency builds explainability, which gives consumers insight into AI decision-making and enables them to challenge unfair outcomes.
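
To make these principles concrete, the sketch below shows what an explainable pricing function could look like: it returns not just a price but the factor-by-factor adjustments behind it. The factors and weights are invented for illustration; a real algorithmic pricing system would be far more complex and, typically, far more opaque.

```python
def explainable_price(base_price: float, consumer: dict) -> tuple[float, dict]:
    """Return a personalized price plus the adjustments that produced it.

    The factors and weights here are hypothetical, purely for illustration.
    """
    adjustments = {
        "demand_surge": 0.10 if consumer.get("peak_hours") else 0.0,
        "loyalty_discount": -0.05 if consumer.get("repeat_customer") else 0.0,
        "zip_code_premium": 0.08 if consumer.get("high_income_zip") else 0.0,
    }
    price = base_price * (1 + sum(adjustments.values()))
    return round(price, 2), adjustments

price, factors = explainable_price(100.0, {"peak_hours": True, "high_income_zip": True})
print(price)    # 118.0
print(factors)  # {'demand_surge': 0.1, 'loyalty_discount': 0.0, 'zip_code_premium': 0.08}
```

Returning the adjustments alongside the price is what would let a consumer, or an auditor, challenge a factor like the zip-code premium; a notice that merely announces an algorithm was used reveals none of this.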

“Accountability” in AI refers to the duty of an organization that implements an AI system to inform and justify its usage and effects.[9] For example, if a business sets higher prices for certain neighborhoods or zip codes because it predicts residents are willing to pay more for their product, accountability requires the business to explain how the algorithm sets prices, justify that it does not unfairly discriminate against lower-income or minority communities, and correct any biased outcomes if they occur. Transparency ensures that businesses are being held accountable for fairness and equity in their algorithmic pricing practices.

Transparency is often regarded as the solution to a myriad of problems and remains a focus for most policy proposals in the field of AI.[10] In fact, 165 out of 200 AI ethics guidelines are specifically focused on promoting AI transparency.[11] It is equally important, however, to recognize that transparency has many flaws on its own. The link between transparency and accountability is tenuous at best. Consumers often do not know what information they need to have about a problem. Even when they are given information, many consumers do not have the background knowledge or tools necessary to make sense of it. On the other hand, companies are incentivized to refrain from being fully transparent to maintain competitive advantages and trade secrets, and to dodge the costly process of producing comprehensive algorithmic disclosures.[12] The complicated nature of these algorithms already poses significant barriers to interpretability. Placing the burden of transparency on businesses, who are incentivized to control the narrative by selectively revealing information, is inherently counterproductive to the goals of explainability and accountability.

New York is not the only state responding to risks posed by algorithmic pricing, but its approach is among the most modest. Emerging state legislation sheds light on the broader regulatory landscape surrounding AI-driven pricing practices. By contrast, other states have proposed more stringent measures. Vermont is currently considering a bill that prohibits all dynamic pricing past the point of sale, which would eliminate the ability of businesses to adjust prices in real time.[13] Minnesota has proposed an outright ban on algorithmic pricing practices.[14] California is considering a bill that bans “surveillance pricing,” which sets customized prices based on personally identifiable information collected through surveillance.[15] Consumers in California would be able to bring injunctive actions directly against businesses under this act.[16] Compared with these proposals, New York’s Algorithmic Pricing Disclosure Act takes a notably minimalist approach. New York’s regulation only requires businesses to disclose when a price was set using consumer data. The law does not address fairness, prevent discriminatory pricing, or provide consumers with any direct remedies.

New York’s Algorithmic Pricing Disclosure Act represents a step in the right direction to regulate the currently under-regulated field of algorithmic pricing. However, it is only a start. Effective governance of algorithmic systems requires coordinated action across states, tech companies, universities, and the public.[17] Merely requiring businesses to acknowledge the use of algorithmic pricing is simply not enough to counter the risks of unfair, predatory, and discriminatory pricing. It is important to introduce mechanisms to monitor compliance, evaluate the impacts these systems have, and provide affected communities with a means for recourse and meaningful participation. While transparency is politically appealing and relatively easy to implement, it fails to achieve any meaningful impact without rigorous enforcement. AI transparency laws like New York’s Algorithmic Pricing Disclosure Act must be backed by adequately funded agencies with the authority to conduct audits and impose substantive sanctions on companies and the executives responsible for unfair or predatory pricing. Any transparency or disclosure-focused policies should also reflect what the public really wants to know and can interpret. Acknowledging that an algorithm was used to set prices, without any disclosure on how the algorithm functions, the data it uses, or its potential biases, fails to create meaningful accountability or consumer protection.

 

Notes

[1] Beth Stackpole, How Big Firms Leverage Artificial Intelligence for Competitive Advantage, MIT Sloan: Ideas Made to Matter (May 26, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/how-big-firms-leverage-artificial-intelligence-competitive-advantage.

[2] Competition Bureau Can., Algorithmic Pricing and Competition: Discussion Paper (June 10, 2025), https://competition-bureau.canada.ca/en/how-we-foster-competition/education-and-outreach/publications/algorithmic-pricing-and-competition-discussion-paper.

[3] N.Y. Gen. Bus. L. § 349-a (McKinney 2025).

[4] Id.

[5] Id.

[6] Goli Mahdavi & Carlie Tenenbaum, New York’s Sweeping Algorithmic Pricing Reforms – What Retailers Need to Know, BCLP L. (July 22, 2025), https://www.bclplaw.com/en-US/events-insights-news/new-yorks-sweeping-algorithmic-pricing-reforms-what-retailers-need-to-know.html.

[7] Elizabeth Meehan, Transparency Won’t Be Enough for AI Accountability, Tech Pol’y (May 17, 2023), https://www.techpolicy.press/transparency-wont-be-enough-for-ai-accountability/.

[8] Juan David Gutiérrez, Why Does Algorithmic Transparency Matter and What Can We Do About It?, Open Glob. Rts. (Apr. 9, 2025), https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/.

[9] Id.

[10] Id.

[11] Meehan, supra note 7.

[12] AI Transparency: What Are Companies Really Hiding?, Open Tools (Jan. 16, 2025), https://opentools.ai/news/ai-transparency-what-are-companies-really-hiding#section5.

[13] Gutiérrez, supra note 8.

[14] Robbie Sequiera, Cities–Including Minneapolis–Lead Bans on Algorithmic Rent Hikes as States Lag Behind, Minn. Reformer (Apr. 2, 2025), https://minnesotareformer.com/2025/04/02/cities-including-minneapolis-lead-bans-on-algorithmic-rent-hikes-as-states-lag-behind/.

[15] Gutiérrez, supra note 8.

[16] Stackpole, supra note 1.

[17] Gutiérrez, supra note 8.


The MLB’s Automated Ball-Strike System: The Forces Pushing Baseball Toward Full Automation

Xavier Savard, MJLST Staffer

First shown regularly on Major League Baseball (“MLB”) broadcasts in 1997, the glowing strike zone allowed television viewers to see what umpires missed.[1] Despite technological reforms to umpiring, the most fundamental calls in professional baseball, balls and strikes, have been left entirely to human judgment since 1869.[2] In September 2025, the MLB announced the rollout of the Automated Ball-Strike System (“ABS”), allowing teams to challenge pitches that the system will then review.[3] However, this challenge-based model represents only a transitional step toward full automation. Due to pressures surrounding legalized sports betting and fairness, and due to broader technological advances, a fully automated system is increasingly likely in the future, despite concerns regarding collective bargaining and player pushback.

ABS, which the MLB has diligently tested since 2022, is a high-speed camera system that locates the ball in relation to an individualized, batter-specific strike zone and transmits the location data over a private network, allowing a pitcher, catcher, or batter to challenge an umpire’s call.[4] Within fifteen seconds, the system reviews the pitch data and determines whether the ball passed within the tailored strike zone.[5] If the challenge is successful, the team retains its challenge; if not, the team loses it.[6] Teams start with two challenges.[7] According to the MLB’s 2025 Spring Training testing, players favor the challenge system because it retains the human element of the game.[8]
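
A simplified sketch of the challenge bookkeeping described above might look like the following. The one-dimensional zone check and the numbers are placeholders; the actual ABS locates the ball in three dimensions against a zone calibrated to each batter.

```python
def resolve_challenge(pitch_height: float, zone_top: float, zone_bottom: float,
                      umpire_call: str, challenges_left: int) -> tuple[str, int]:
    """Replay a challenged ball/strike call against a batter-specific zone."""
    abs_call = "strike" if zone_bottom <= pitch_height <= zone_top else "ball"
    if abs_call == umpire_call:
        challenges_left -= 1  # unsuccessful challenge: the team loses it
    # successful challenge: the call is overturned and the challenge is retained
    return abs_call, challenges_left

# Teams start each game with two challenges.
call, remaining = resolve_challenge(pitch_height=3.6, zone_top=3.4, zone_bottom=1.6,
                                    umpire_call="strike", challenges_left=2)
print(call, remaining)  # ball 2 -> the strike call is overturned; challenge retained
```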

As a fan, I admit that I agree with the players. I like human umpires. The subjective element adds a certain unpredictability and excitement to the game, giving baseball its flair. While frustrating at times, this quality makes the game feel historic and connected to humanity. Yet, enjoying the human element does not change where baseball is heading.

The MLB has an implicit duty, derived from its Constitution and the Official Baseball Rules, to strive for fairness and accuracy in baseball.[9] This fiduciary-like duty is particularly evident in the “best interests of baseball” clause, which grants the MLB Commissioner broad authority to act in the interest of maintaining baseball’s integrity.[10] While this duty has historically been fulfilled through human umpires, the MLB’s tolerance of preventable errors that technology can reduce indisputably risks the integrity of the game.

The MLB has partnered with various sports betting organizations,[11] which heightens its duty to employ a fairer and more accurate umpiring system. While there is some argument that the integrity of the game includes the presence of human umpires,[12] the substantial financial entanglement of the MLB and its fans with official betting partnerships outweighs that argument. Accuracy is no longer just an ideal; it is a business requirement for preserving the MLB’s reputation and meeting fans’ expectations. When the MLB profits from wagers on games through official partnerships and fans risk significant sums of money, the tolerance for officiating errors should decrease. While umpires call roughly 93% of pitches correctly, the remaining 7% can drastically affect the game.[13] For example, in Game 4 of the 2025 NLDS matchup between the Dodgers and the Phillies, the umpire called a fourth ball on a clear strike, allowing a walk.[14] That batter eventually scored, and the pitcher’s team lost.[15] While it is difficult to know what would have happened had the pitch been called a strike, the truth is, we should not have to wonder. The pitch simply should have been called a strike in the first instance. Given how efficient and accurate ABS is, the MLB should use full ABS to remove errors like these from the game.

These concerns are only magnified by the growth of sports betting, which is not going away anytime soon. Since the Supreme Court’s decision in Murphy v. NCAA in 2018, the sports betting industry has grown from $400 million in revenue in 2018 to $13.71 billion in revenue by 2024.[16] As the MLB continues to earn more revenue from its partnerships, reliance on human umpiring compromises fairness and public trust in the game.

Additionally, while traditionalists argue that baseball is a game steeped in tradition, the game has always changed to increase fairness or to strengthen its commercial value. In 1935, the MLB had its first-ever night game, powered by innovative lighting equipment to allow spectators to come to the game after work.[17] Decades later, baseball adopted instant replay in 2008, which it drastically expanded upon in 2014.[18] More recently, in 2023, the MLB implemented a pitch clock.[19] These examples show that baseball’s tradition does not actually stop it from implementing technology to promote fairness and marketability.

Yet, the challenge-based system is only a temporary solution because it corrects only a minority of errors: those that players deem valuable enough to challenge. In past testing, players challenged about 2-3% of calls, with about half of the challenges being successful.[20] That means incorrect calls on roughly 5.5% of all pitches go uncorrected. Put another way, the challenge-based system corrects only about 20% of incorrect calls. Challenge-based ABS simply does not ensure maximum accuracy, failing to satisfy the MLB’s fairness obligations when full ABS is available.
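
A back-of-the-envelope calculation shows where these figures come from. It assumes, as the estimate above implicitly does, that every successful challenge corrects a genuinely incorrect call, and it uses the upper end of the reported challenge rate:

```python
incorrect_rate = 0.07   # umpires miss roughly 7% of ball/strike calls
challenge_rate = 0.03   # players challenged about 2-3% of calls (upper end)
success_rate = 0.50     # about half of challenges succeeded

corrected = challenge_rate * success_rate    # 1.5% of all pitches get fixed
remaining = incorrect_rate - corrected       # 5.5% of pitches stay miscalled
share_fixed = corrected / incorrect_rate     # ~0.21, i.e. roughly 20% of errors

print(f"{corrected:.1%} of pitches corrected")  # 1.5%
print(f"{remaining:.1%} still miscalled")       # 5.5%
print(f"{share_fixed:.0%} of errors fixed")     # 21%
```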

One major obstacle to full ABS is the Major League Baseball Umpires Association (“MLBUA”). While the 2019 and 2024 collective bargaining agreements indicate that the MLBUA has been pro-ABS to a certain extent,[21] the MLBUA is likely to oppose full ABS. Even in a world with full ABS, umpires are still necessary to make certain calls around the bases. Due to union protections under the National Labor Relations Act (“NLRA”),[22] implementing a fully automated system could pose a significant hurdle for the MLB.

Second, a full ABS may face resistance from players because it changes some important aspects of the game for pitchers and catchers. There is some evidence that veteran pitchers get a wider strike zone that they have “earned,” and catchers spend years developing their pitch-framing abilities.[23] Full ABS would reduce the impact of these skills. Yet, all rule changes impact how players play baseball, and history shows that fairness-based rule changes often improve the game for the better. In 2021, for example, the MLB began enforcing Rules 3.01 and 6.02(c), which suspend pitchers for using sticky substances on their hands.[24] Because some players were getting an unfair advantage by the way they played the game, the MLB enforced the rule. Simply put, just because rule changes alter how players have historically done their job does not mean it is not good for the integrity of the game.

Moving from the challenge system to full ABS implementation is entirely consistent with baseball’s long-standing technological evolution in service of integrity and fairness. It is merely a continuation of that pattern, necessitated by legalized sports betting and the immense financial interests at stake. Still, collective bargaining obligations and player pushback ensure the transition will be difficult.

 

Notes

[1] How Accurate is the Baseball Strike Zone Box on TV, Baseball Scouter, https://baseballscouter.com/baseball-strike-zone-on-tv/ (last visited Sept. 29, 2025).

[2] History.com Editors, National League of Baseball is Founded, History (last updated May 25, 2025), https://www.history.com/this-day-in-history/February-2/national-league-of-baseball-is-founded.

[3] MLB Announces ABS Challenge System Coming to the Major Leagues Beginning in the 2026 Season, MLB (Sept. 23, 2025), https://www.mlb.com/press-release/press-release-mlb-announces-abs-challenge-system-coming-to-the-major-leagues-beginning-in-the-2026-season.

[4] Id.

[5] Id.

[6] Id.

[7] Id.

[8] Theo DeRosa, MLB Releases Spring Training ABS Challenge Results, MLB (Mar. 26, 2025), https://www.mlb.com/news/automated-ball-strike-system-results-mlb-spring-training-2025?msockid=2b62cc077eaa61eb013dd8dc7f816092.

[9] See Major League Baseball Constitution, MLB (2000), https://sports-entertainment.brooklaw.edu/wp-content/uploads/2021/01/Major-League-Baseball-Constitution.pdf; Official Baseball Rules, MLB (2025), https://mktg.mlbstatic.com/mlb/official-information/2025-official-baseball-rules.pdf.

[10] Richard Justice, ‘Best Interests of Baseball’ a Wide-Ranging Power, MLB (Aug. 1, 2023), https://www.mlb.com/news/richard-justice-best-interests-of-baseball-a-wide-ranging-power-of-commissioner/c-55523182#:~:text=In%201921%2C%20the%20owners%20defined,exactly%20what%20it%20sounds%20like.

[11] Sam Carp, MLB Adds FanDuel as Third Sports Betting Partner, SportsPro (Aug. 16, 2019), https://www.sportspro.com/news/mlb-fanduel-sports-betting-sponsorship/.

[12] See Larry Gerlach, History of Umpiring, Steve O’s Umpire Res., https://www.stevetheump.com/umpiring_history.htm (last visited Oct. 9, 2025).

[13] Davy Andrews, Strike Three?! Let’s Check in on Umpire Accuracy, FanGraphs (Feb. 1, 2024), https://blogs.fangraphs.com/strike-three-lets-check-in-on-umpire-accuracy/.

[14] Zach Bachar, Phillies’ Sanchez Says Umpire Apologized for Crucial Missed Strike 3 Call vs. Dodgers, Bleacher Rep. (Oct. 10, 2025), https://bleacherreport.com/articles/25259222-phillies-sanchez-says-umpire-apologized-crucial-missed-strike-3-call-vs-dodgers.

[15] Id.

[16] Ehtan Mordekhai, The Aftermath of Murphy v. NCAA: State and Congressional Reactions to Leaving Sports Gambling Regulation to the States, Cardozo J. Arts & Ent. L.J. (Oct. 17, 2023), https://cardozoaelj.com/2023/10/17/the-aftermath-of-murphy-v-ncaa-state-and-congressional-reactions-to-leaving-sports-gambling-regulation-to-the-states/.

[17] Brian Murphy, 88 Years Ago, AL/NL Baseball Finally Saw the Light, MLB (May 23, 2024), https://www.mlb.com/news/first-night-game-in-al-nl-history.

[18] Instant Replay, Baseball Reference, https://www.baseball-reference.com/bullpen/Instant_replay (last visited Sept. 29, 2025).

[19] Pitch Timer (2023 Rule Change), MLB, https://www.mlb.com/glossary/rules/pitch-timer?msockid=2b62cc077eaa61eb013dd8dc7f816092 (last visited Oct. 9, 2025).

[20] DeRosa, supra note 8.

[21] Dylan A. Chase, MLB, MLBUA Reach Tentative Labor Agreement, MLB Trade Rumors (Dec. 21, 2019), https://www.mlbtraderumors.com/2019/12/mlb-mlbua-reach-tentative-labor-agreement.html; Manny Randhawa, MLB Reaches New CBA Agreement with Umpires Association, MLB (Dec. 23, 2024), https://www.mlb.com/news/mlb-umpires-association-reach-collective-bargaining-agreement?msockid=2b62cc077eaa61eb013dd8dc7f816092.

[22] U.S. Dep’t Lab., What Are My Employees’ Rights Under the National Labor Relations Act (NLRA)?, https://beta.dol.gov/policy-governance/protections-rights/unions-collective-bargaining/employee-rights-nlra (last visited Oct. 9, 2025).

[23] Nayima Riyaz, “Change Is Always Tough” – MLB Veteran Voices Concern Over ABS System Amid Growing Popularity, Essentially Sports (Feb 26, 2025), https://www.essentiallysports.com/mlb-baseball-news-change-is-always-tough-mlb-veteran-voices-concern-over-abs-system-amid-growing-popularity/; Veteran Bias in MLB Umpiring: Hitters, Quantum Sports (Feb. 24, 2020), https://www.quantumsportssolutions.com/blogs/baseball/veteran-bias-in-mlb-umpiring-hitters.

[24] MLB Announces New Guidance to Crack Down Against Use of Foreign Substances, Effective June 21, MLB (June 15, 2021), https://www.mlb.com/press-release/press-release-mlb-new-guidance-against-use-of-foreign-substances?msockid=2b62cc077eaa61eb013dd8dc7f816092.


Grok, Garcia, and Liability for Rogue AI

Violet Butler, MJLST Note/Comment Editor

Generative AI programs such as ChatGPT have become a ubiquitous part of many Americans’ lives. Since the launch of generative AI programs in 2022, hundreds of millions of people around the world have tried the shiny new products, with nearly forty percent of Americans having used it before.[1] But as with any new product, not all of the kinks have been worked out yet. Unfortunately, these generative AI models, kinks and all, have taken the world by storm.

When Elon Musk (“Elon”) announced that X (formerly Twitter) would have its own generative artificial intelligence (“AI”), he named it “Grok.” Now, less than two years after Grok came online, it has started raising serious concerns. On July 8, 2025, Grok started responding to X users’ prompts in a decidedly antisemitic and far-right way, calling itself “MechaHitler” and saying that if it were “capable of worshipping any deity,” it would be “his Majesty Adolf Hitler.”[2] Along with virulent antisemitism, Elon’s new “MechaHitler” seemed to have a particular ire for one person, Minnesota commentator Will Stancil. After various X users prompted Grok, it wrote detailed and violent descriptions of how it would rape Mr. Stancil;[3] more concerning, Grok even helped one user plan how to break into Mr. Stancil’s house to make these rape fantasies a reality.[4] While xAI, Musk’s company behind Grok, has stated that it has fixed Grok’s code, the episode raises an important question for the modern age: who can be held accountable when generative AI doesn’t follow societal expectations?

One answer is to hold companies to account and demand that they place more internal guardrails on what their AI is allowed to do in the first place. Many AI companies already limit what their products can or will do. ChatGPT will not generate images of famous copyrighted characters, such as Mickey Mouse, no matter how many times one asks.[5] Many image generators, including the popular DALL-E, have filters designed to prevent the AI from generating “not safe for work” (“NSFW”) images, though a study showed that these filters can be bypassed with enough effort.[6] Even Grok seems to have some filters on generating NSFW images.[7] Despite these attempts to filter Grok, the filters are clearly not enough. Grok’s recent antisemitic rampage demonstrates that more guardrails on AI products are needed before someone gets hurt.
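
Guardrails of this kind often include filters layered in front of the model. The deliberately naive sketch below uses a keyword list, whereas production systems use trained classifiers and output-side moderation; it illustrates both the mechanism and why such filters can be bypassed with enough effort. The blocklist and the model stub are hypothetical.

```python
BLOCKED_TERMS = {"mickey mouse"}  # placeholder list, not any vendor's actual blocklist

def model_generate(prompt: str) -> str:
    """Stand-in for a call to the underlying image or text model."""
    return f"[generated content for: {prompt}]"

def guarded_generate(prompt: str) -> str:
    """Refuse prompts containing blocked terms before they reach the model."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "Request refused by content policy."
    return model_generate(prompt)

print(guarded_generate("draw Mickey Mouse"))  # refused
print(guarded_generate("draw M1ckey M0use"))  # slips past the naive keyword filter
```

The same evasion dynamic, on a far more sophisticated scale, is what the Johns Hopkins study cited above documented for DALL-E’s filters.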

Sadly, Grok’s antisemitic and threatening X posts are not the first time AI filters have failed. A filter failure is what happened when Sewell Setzer III (“Setzer”) used Character AI to chat with his favorite Game of Thrones characters in 2023.[8] Setzer, a minor who was struggling with mental health conditions, became addicted to the software and ultimately took his own life in February of 2024.[9] Setzer’s mother, Megan Garcia (“Garcia”), sued Character AI, blaming the company for not putting up sufficient guardrails to prevent her son’s death.[10] In denying Character AI’s motion to dismiss, the court undertook two analyses that might be relevant for future courts trying to assign liability for rogue AI interactions. While the court acknowledged that “ideas, images, information, words, expressions, or concepts” are not generally considered products for products liability suits, it distinguished this case from others.[11] For the purpose of Garcia’s product liability claim against Character AI, the court held that “these harmful actions were only possible because of the alleged design defects in the Character AI app.”[12] Broadening the scope of potential liability, the court also rejected Character AI’s First Amendment defense.[13] The court held that Character AI could assert the First Amendment rights of its users when they seek access to its software, reasoning that Character AI was a vendor of a form of information that people, at least in theory, have the right to access.[14] However, the court refused to hold that the chatbots’ output was speech, limiting potential First Amendment defenses.[15]

By potentially attaching liability to companies rather than users when AI “acts up,” the Garcia case provides a glimpse into the type of relief available when AI goes rogue. Despite what xAI claims, Grok still seemingly has few internal guardrails. One contributor to the community blog “LessWrong” (eleventhsavi0r) discovered that the newly rolled out Grok 4 again seems to have an easy time “going rogue” and causing unforeseen harms.[16] Eleventhsavi0r managed, through little prompting, to get Grok to explain how to manufacture dangerous chemical and biological weapons, along with giving instructions on how to commit suicide by self-immolation.[17] This troubling lack of oversight on xAI’s part demonstrates why products liability suits to hold companies accountable are a better alternative than just trying to go after each individual user who might misuse AI. Cutting the harm off at its source, by creating filters and internal guardrails, stops the harm from occurring in the first place. Instead of waiting for the day Grok’s neo-Nazi messages or chemical weapon instructions cause indescribable damage, the threat of a products liability suit alone might incentivize companies like xAI to make their products safer ahead of time. With generative AI being quickly incorporated into our everyday lives, making sure that AI won’t go rogue is an essential part of consumer safety going forward.

 

Notes

[1] Alexander Bick et al., The Rapid Adoption of Generative AI, Fed. Rsrv. Bank of St. Louis (Sept. 23, 2024), https://www.stlouisfed.org/on-the-economy/2024/sep/rapid-adoption-generative-ai (in 2025, this number is likely higher as AI becomes more popular).

[2] Grok, (@grok), X (July 8, 2025) (As X has been taking down concerning posts by Grok, the screenshots of the posts are on file with author; however, a record of these tweets can be found at https://x.com/ordinarytings/status/1942704498725773527 and https://x.com/DrAleeAlvi/status/1942709859398434879).

[3] Grok, (@grok), X (July 8, 2025) (Screenshots on file with author).

[4] Joe McCoy, AI Bot Grok Makes Disturbing Posts About Minneapolis Man, Who Is Now Mulling Legal Action, KARE 11 (July 9, 2025), https://www.kare11.com/article/tech/x-elon-musk-grok-speech-twitter-ai-artificial-intelligence/89-8dad0222-d8c6-44d9-b07d-686e978ad8ac.

[5] Adam Davidson, 8 Things ChatGPT Still Can’t Do, Yahoo Tech (Feb. 15, 2025), https://tech.yahoo.com/general/articles/8-things-chatgpt-still-cant-180013078.html.

[6] Roberto Molar Candanosa, AI Image Generators Can Be Tricked Into Making NSFW Content, Johns Hopkins (Nov. 8, 2023), https://ep.jhu.edu/news/ai-image-generators-can-be-tricked-into-making-nsfw-content/#:~:text=Some%20of%20these%20adversarial%20terms,with%20the%20command%20%E2%80%9Ccrystaljailswamew.%E2%80%9D.

[7] This is based on the author spending 20 minutes attempting to prompt Grok to generate NSFW images; the endeavor was unsuccessful.

[8] Garcia v. Character Technologies Inc., 2025 WL 1461721 (M.D. Fla. May 21, 2025).

[9] Id. at *4.

[10] Id.

[11] Id. at *14.

[12] Id.

[13] Id. at *13.

[14] Id. at *12.

[15] Id. at *12–13.

[16] eleventhsavi0r, xAI’s Grok 4 Has No Meaningful Safety Guardrails, LessWrong (July 13, 2025), https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok-4-has-no-meaningful-safety-guardrails.

[17] Id.


How Workers Can Respond to Increased Use of Generative Artificial Intelligence

Yessenia Gutierrez, MJLST Staffer

Recent advances in generative Artificial Intelligence (AI) have generated a media buzz and revived worries about the future of work: How many jobs are at risk of being eliminated? Can workers be retrained to work new jobs that did not exist before, or new versions of their now technologically-augmented jobs? What happens to those workers who cannot be retrained? What if not enough jobs are created to compensate for those lost?

It is hard to calculate the pace, extent, and distribution of job displacement due to technological advancements.[1] However, there is general agreement among business leaders that there will be significant job losses due to AI.[2] Professions spanning the education and income spectrum may be impacted, from surgeons to investment bankers to voice actors.[3]

Nevertheless, the jobs predicted to be most impacted are lower-paid jobs such as bank tellers, postal service clerks, cashiers, data entry clerks, and secretaries.[4]

Proponents of rapid AI adoption emphasize its potential for creating “a productivity boost for non-displaced workers” and a resultant “labor productivity boom.”[5] While that will likely be true, what remains uncertain is who will reap the majority of the benefits stemming from this boom — employers or their now more productive workers.

One of the main concerns about increasing use of AI in the workplace is that entire job classifications will be eliminated, leaving large swaths of workers unemployed. There is no consensus over whether technology has created or eliminated more jobs.[6] However, even assuming technological advances have created more jobs than those rendered obsolete, the process of large numbers of workers switching from one type of job to another (perhaps previously nonexistent) job still creates serious challenges.

For one, this process adds stress to an already economically and emotionally stressed population.[7] The Centers for Disease Control and Prevention cites “fears about limited employment opportunities, perceptions of job insecurity, and anxiety about the need to acquire new skills” as contributing to “public health crises such as widespread increases in depression, suicide, and alcohol and drug abuse (including opioid-related deaths).”[8] Those workers able to keep their jobs have less bargaining power, as they avoid speaking up about possible health, safety, and other concerns for fear of losing their jobs.[9]

To assist in this transition, some argue that more government intervention is necessary.[10] In fact, several states have enacted legislation regulating the use of AI in employment matters, including protections against discrimination in employment decisions made using AI.[11] Some states are also experimenting with AI training for high school seniors and state employees, sometimes with encouragement from major employers.[12] Federal politicians are also considering legislation, although none has passed.[13]

Some commentators argue that workers themselves have a responsibility to learn skills to remain competitive in the labor market.[14] Still others argue that employers should take up the task of retraining employees, with benefits for employers including ensuring an adequate supply of skilled labor, reducing hiring costs, and increasing employee loyalty, morale, and productivity.[15] One subset of this approach is partnerships between employers and labor unions, such as that between Microsoft Corp. and the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO).[16] Announced in December of 2023, the partnership lists its goals as (1) sharing information about AI trends with unions and workers, (2) integrating worker feedback into AI development, and (3) influencing public policy in support of affected workers.

Others point to the need for strong worker organizations that are capable of bargaining about and achieving protections related to AI and other technology in the workplace.

Collective Bargaining

The Economic Policy Institute, a think-tank aligned with labor unions, argues that the “best ‘AI policy’ that [policymakers] can provide is boosting workers’ power by improving social insurance systems, removing barriers to organizing unions, and sustaining lower rates of unemployment.”[17] Union officials agree on the importance of unions protecting their members from technological displacements, and have started pushing for “requirements that companies must notify and negotiate with worker representatives before deploying new automation technologies.”[18]

The above-mentioned partnership between the AFL-CIO and Microsoft includes a “neutrality framework” which “confirms a joint commitment to respect the right of employees to form or join unions, to develop positive and cooperative labor-management relationships, and to negotiate collective bargaining agreements that will support workers in an era of rapid technological change.”[19] Ideally, this means that Microsoft would not attempt to dissuade any employees that try to unionize, including through common “union avoidance” measures.[20] Employer neutrality can provide more favorable conditions for unionizing, which provides a formal mechanism for workers to collectively bargain for technology policies calibrated to their particular industry and tasks.

Unfortunately, achieving these measures, whether through legislation or Collective Bargaining Agreements (CBAs), will likely require applying tremendous pressure on employers.

For example, in 2023, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) union and the Writers Guild of America (WGA) simultaneously went on strike for the first time in sixty years.[21] One of the main demands of both unions was protection against AI use. Both achieved partial concessions after 118 and 148 days out on strike, respectively.[22]

SAG-AFTRA and WGA enjoyed considerable leverage that other workers likely will not have. As Politico reported, Hollywood serves as a “key base for wealthy Democratic donors,” which is especially important in California, where much of the industry is based.[23] Entertainment workers occupy an important place in many of our daily lives and support an economically important industry.[24] Unlike healthcare workers or state employees, entertainment workers cannot have the withholding of their labor portrayed as dangerous, a characterization that seeks to undermine public support for some striking workers.[25]

The resolve and strategic action of both unions chart a path for other unions seeking to ensure worker input into the use of technology in the workplace, while revealing how difficult this path will be.

Conclusion

Although the exact effects of increased AI adoption by employers are still unknown, there are clear reasons to take the potential impact on workers seriously today. Workers across the income spectrum are already feeling the pressure of job losses, job displacements, the need to retrain for a new job, and the economic and emotional stress these cause. Bolstering retraining programs, whether run by the government, by employers, or through joint efforts, is a step toward meeting the demands of tomorrow. However, to truly assuage employee fears of displacement, workers must have meaningful input into their working conditions, including the introduction of new technology to their workplace. Unions hold an important role in achieving this goal.


Notes

[1] Chia-Chia Chang et al., The Role of Technological Job Displacement in the Future of Work, CDC’s NIOSH Science Blog (Feb. 15, 2022), https://blogs.cdc.gov/niosh-science-blog/2022/02/15/tjd-fow/.

[2] See, e.g., Jack Kelly, Goldman Sachs Predicts 300 Million Jobs Will be Lost or Degraded by Artificial Intelligence, Forbes (Mar. 31, 2023), https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/; G Krishna Kumar, AI-led Job Loss is Real, Govt Must Intervene, Deccan Herald (July 21, 2024), https://www.deccanherald.com/opinion/ai-led-job-loss-is-real-govt-must-intervene-3115077.

[3] Kelly, supra note 2.

[4] Ian Shine & Kate Whiting, These Are the Jobs Most Likely to be Lost – And Created – Because of AI, World Economic Forum (May 4, 2023), https://www.weforum.org/stories/2023/05/jobs-lost-created-ai-gpt/.

[5] Kelly, supra note 2.

[6] See, e.g., Peter Dizikes, Does Technology Help or Hurt Employment?, MIT News (Apr. 1, 2024), https://news.mit.edu/2024/does-technology-help-or-hurt-employment-0401.

[7] See, e.g., Hillary Hoffower, Financial Stress is Making Us Mentally and Physically Ill. Here’s How to Cope, Fortune (May 10, 2024), https://fortune.com/well/article/financial-stress-mental-health-physical-illness/; Majority of Americans Feeling Financially Stressed and Living Paycheck to Paycheck According to CNBC Your Money Survey, CNBC News Releases (Sept. 7, 2023), https://www.cnbc.com/2023/09/07/majority-of-americans-feeling-financially-stressed-and-living-paycheck-to-paycheck-according-to-cnbc-your-money-survey.html.

[8] Chang et al., supra note 1.

[9] Id.

[10] See, e.g., Chris Marr, AI Poses Job Threats While State Lawmakers Move With Caution, Bloomberg Law (Aug. 13, 2024), https://news.bloomberglaw.com/daily-labor-report/ai-poses-job-threats-while-state-lawmakers-move-with-caution.

[11] Sanam Hooshidary et al., Artificial Intelligence in the Workplace: The Federal and State Legislative Landscape, National Conference of State Legislatures (updated Oct. 23, 2024), https://www.ncsl.org/state-federal/artificial-intelligence-in-the-workplace-the-federal-and-state-legislative-landscape.

[12] Kaela Roeder, High School Seniors in Maryland Are Getting Daily AI Training, Technical.ly (Nov. 8, 2024), https://technical.ly/workforce-development/high-school-ai-training-howard-county-maryland/; Maryland to Offer Free AI Training to State Employees, Government Technology (Sept. 25, 2024), https://www.govtech.com/artificial-intelligence/maryland-to-offer-free-ai-training-to-state-employees; Marr, supra note 10 (“A coalition of major tech companies is urging state lawmakers to focus their efforts on retraining workers for newly emerging jobs in the industry.”).

[13] Marr, supra note 10.

[14] Rachel Curry, Recent Data Shows AI Job Losses Are Rising, But the Numbers Don’t Tell the Full Story, CNBC (Dec. 16, 2023), https://www.cnbc.com/2023/12/16/ai-job-losses-are-rising-but-the-numbers-dont-tell-the-full-story.html.

[15] See John Hall, Why Upskilling and Reskilling Are Essential in 2023, Forbes (Feb. 24, 2023), https://www.forbes.com/sites/johnhall/2023/02/24/why-upskilling-and-reskilling-are-essential-in-2023/; The 2020s Will be a Decade of Upskilling. Employers Should Take Notice, World Economic Forum (Jan. 10, 2024), https://www.weforum.org/stories/2024/01/the-2020s-will-be-a-decade-of-upskilling-employers-should-take-notice/.

[16] Press Release, AFL-CIO and Microsoft Announce New Tech-Labor Partnership on AI and the Future of the Workforce, AFL-CIO (Dec. 11, 2023), https://aflcio.org/press/releases/afl-cio-and-microsoft-announce-new-tech-labor-partnership-ai-and-future-workforce.

[17] Josh Bivens & Ben Zipperer, Unbalanced Labor Market Power is What Makes Technology–Including AI–Threatening to Workers, Economic Policy Institute (Mar. 28, 2024), https://www.epi.org/publication/ai-unbalanced-labor-markets/.

[18] Marr, supra note 10.

[19] Press Release, supra note 16.

[20] See, e.g., Roy E. Bahat & Thomas A. Kochan, How Businesses Should (and Shouldn’t) Respond to Union Organizing, Harvard Business Review (Jan. 6, 2023), https://hbr.org/2023/01/how-businesses-should-and-shouldnt-respond-to-union-organizing; Ben Bodzy, Best Practices for Union Avoidance, Baker Donelson (last visited Nov. 18, 2024), https://www.bakerdonelson.com/files/Uploads/Documents/Breakfast_Briefing_11-17-11_Union_Avoidance.pdf; Carta H. Robison, Steps for Employers to Preserve a Union Free Workplace, Barrett McNagny (last visited Nov. 18, 2024), https://www.barrettlaw.com/blog/labor-and-employment-law/union-avoidance-steps-for-employers.

[21] Chelsey Sanchez, Everything to Know About the SAG Strike That Shut Down Hollywood, Harper’s Bazaar (Nov. 9, 2023), https://www.harpersbazaar.com/culture/politics/a44506329/sag-aftra-actors-strike-hollywood-explained/#what-is-sag-aftra.

[22] Jake Coyle, In Hollywood Writers’ Battle Against AI, Humans Win (For Now), AP News (Sept. 27, 2023), https://apnews.com/article/hollywood-ai-strike-wga-artificial-intelligence-39ab72582c3a15f77510c9c30a45ffc8; Bryan Alexander, SAG-AFTRA President Fran Drescher: AI Protection Was A ‘Deal Breaker’ In Actors Strike, USA Today (Nov. 10, 2023), https://www.usatoday.com/story/entertainment/tv/2023/11/10/sag-aftra-deal-ai-safeguards/71535785007/.

[23] Lara Korte & Jeremy B. White, Newsom Signs Laws to Protect Hollywood from Fake AI Actors, Politico (Sept. 17, 2024), https://www.politico.com/news/2024/09/17/newsom-signs-law-hollywood-ai-actors-00179553; Party Control of California State Government, Ballotpedia, https://ballotpedia.org/Party_control_of_California_state_government (last visited Nov. 18, 2024).

[24] Advocacy: Driving Local Economies, Motion Picture Ass’n, https://www.motionpictures.org/advocacy/driving-local-economies/ (last visited Jan. 17, 2025).

[25] See, e.g., Ryan Essex & Sharon Marie Weldon, The Justification For Strike Action In Healthcare: A Systematic Critical Interpretive Synthesis, 29 Nursing Ethics 1152 (2022), https://doi.org/10.1177/09697330211022411; Nina Chamlou, How Nursing Strikes Impact Patient Care, NurseJournal (Oct. 10, 2023), https://nursejournal.org/articles/how-nursing-strikes-impact-patient-care/.


Privacy at Risk: Analyzing DHS AI Surveillance Investments

Noah Miller, MJLST Staffer

The concept of widespread surveillance of public areas monitored by artificial intelligence (“AI”) may sound like it comes right out of a dystopian novel, but key investments by the Department of Homeland Security (“DHS”) could make this a reality. Under the Biden Administration, the U.S. has acted quickly and strategically to adopt artificial intelligence as a tool to realize national security objectives.[1] In furtherance of President Biden’s executive goals concerning AI, DHS has been making investments in surveillance systems that utilize AI algorithms.

Despite the substantial interest in protecting national security, Patrick Toomey, deputy director of the ACLU National Security Project, has criticized the Biden administration for allowing national security agencies to “police themselves as they increasingly subject people in the United States to powerful new technologies.”[2] Notably, these investments have not been tailored towards high-security locations—like airports. Instead, they include surveillance of “soft targets”—high-traffic areas with limited security: “Examples include shopping areas, transit facilities, and open-air tourist attractions.”[3] Currently, surveilling most public areas is infeasible because of the number of people required to review footage; emerging AI algorithms, however, would allow this work to be done automatically. While enhancing security protections at soft targets is a noble and possibly desirable initiative, the potential privacy ramifications of widespread autonomous AI surveillance are extreme. Current Fourth Amendment jurisprudence offers little resistance to this form of surveillance, and DHS has been both developing this surveillance technology itself and outsourcing these projects to private corporations.

To foster innovation to combat threats to soft targets, DHS has created a center called Soft Target Engineering to Neutralize the Threat Reality (“SENTRY”).[4] One of SENTRY’s research areas is the development of “real-time management of threat detection and mitigation.”[5] One project in this area seeks to create AI algorithms that can detect threats in crowded public areas.[6] Once the algorithm has detected a threat, the incident would be sent to a human for confirmation.[7] This would be a substantially more efficient form of surveillance than is currently widely available.
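
SENTRY has not published its architecture, but the basic human-in-the-loop pattern it describes can be sketched in a few lines of Python. The sketch below is purely illustrative, and every name in it is hypothetical: an algorithm scores incidents from camera feeds, and only detections above a confidence threshold are escalated to a human analyst for confirmation.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        camera_id: str
        label: str         # e.g., "unattended bag"
        confidence: float  # model score between 0 and 1

    def escalate_for_review(detections, threshold=0.8):
        """Forward only high-confidence detections to a human analyst."""
        return [d for d in detections if d.confidence >= threshold]

    # Two hypothetical detections; only the first clears the threshold.
    feed = [Detection("cam-3", "unattended bag", 0.91),
            Detection("cam-7", "crowd surge", 0.42)]
    for incident in escalate_for_review(feed):
        print(f"Requesting human confirmation: {incident.label} ({incident.camera_id})")

The efficiency gain comes entirely from the filter: analysts review a handful of flagged incidents rather than hours of raw footage, which is what would make surveillance of most public areas suddenly feasible.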

Along with the research conducted through SENTRY, DHS has been making investments in private companies to develop AI surveillance technologies through the Silicon Valley Innovation Program (“SVIP”).[8] Through the SVIP, DHS has awarded funding to three companies to develop AI surveillance technologies that can detect “anomalous events via video feeds” to improve security at soft targets: Flux Tensor, Lauretta AI, and Analytical AI.[9] First, Flux Tensor has a demo pilot-ready prototype that applies “flexible object detection algorithms” to video feeds to track and pinpoint movements of interest.[10] The technology is used to distinguish human movements and actions from the environment—i.e., weather, glare, and camera movement.[11] Second, Lauretta AI is adapting its established activity recognition AI to utilize “multiple data points per subject to minimize false alerts.”[12] The technology periodically generates automated reports of detected incidents, categorized by their relative severity.[13] Third, Analytical AI is in the proof-of-concept demo phase with AI algorithms that can autonomously track objects in relation to people within a perimeter.[14] The company has already created algorithms that can screen for prohibited items and “on-person threats” (i.e., weapons).[15] All of these technologies are in early stages, so DHS is unlikely to deploy them in the immediate future.

Assuming these AI algorithms prove effective and come to fruition, current Fourth Amendment protections seem insufficient to protect against rampant use of AI surveillance in public areas. In Kyllo v. United States, the Court placed an important limit on law enforcement’s use of new technologies, holding that when sense-enhancing technology not in general public use is employed to obtain information from a constitutionally protected area, its use constitutes a search.[16] Unlike in Kyllo, where the police used thermal imaging to measure temperature levels on various areas of a house, people subject to AI surveillance in public areas would not be in constitutionally protected areas.[17] Because they would be in public places, they would not have a reasonable expectation of privacy in their movements; therefore, this form of surveillance likely would not constitute a search under the prevailing Fourth Amendment search analysis.[18]

While the scope and accuracy of this new technology are still to be determined, policymakers and agencies need to implement proper safeguards and proceed cautiously. In the best scenario, this technology can keep citizens safe while mitigating the impact on the public’s privacy interests. In the worst scenario, this technology could effectively turn our public spaces into security checkpoints. Regardless of how relevant actors proceed, this new technology would likely result in at least some decline in the public’s privacy interests. Policymakers should not make a Faustian bargain for the sake of maintaining social order.


Notes

[1] See generally Joseph R. Biden Jr., Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, The White House (Oct. 24, 2024), https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/ (explaining how the executive branch intends to utilize artificial intelligence in relation to national security).

[2] ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections, ACLU (Oct. 24, 2024, 12:00 PM), https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections.

[3] Jay Stanley, DHS Focus on “Soft Targets” Risks Out-of-Control Surveillance, ACLU (Oct. 24, 2024), https://www.aclu.org/news/privacy-technology/dhs-focus-on-soft-targets-risks-out-of-control-surveillance.

[4] See Overview, SENTRY, https://sentry.northeastern.edu/overview/#VSF.

[5] Real-Time Management of Threat Detection and Mitigation, SENTRY, https://sentry.northeastern.edu/research/real-time-threat-detection-and-mitigation/.

[6] See An Artificial Intelligence-Driven Threat Detection and Real-Time Visualization System in Crowded Places, SENTRY, https://sentry.northeastern.edu/research-project/an-artificial-intelligence-driven-threat-detection-and-real-time-visualization-system-in-crowded-places/.

[7] See id.

[8] See, e.g., SVIP Portfolio and Performers, DHS, https://www.dhs.gov/science-and-technology/svip-portfolio.

[9] Id.

[10] See Securing Soft Targets, DHS, https://www.dhs.gov/science-and-technology/securing-soft-targets.

[11] See pFlux Technology, Flux Tensor, https://fluxtensor.com/technology/.

[12] See Securing Soft Targets, supra note 10.

[13] See Security, Lauretta AI, https://lauretta.io/technologies/security/.

[14] See Securing Soft Targets, supra note 10.

[15] See Technology, Analytical AI, https://www.analyticalai.com/technology.

[16] Kyllo v. United States, 533 U.S. 27, 33 (2001).

[17] Cf. id.

[18] See generally Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring) (explaining the test for whether someone may rely on an expectation of privacy).



AI and Predictive Policing: Balancing Technological Innovation and Civil Liberties

Alexander Engemann, MJLST Staffer

To maximize their effectiveness, police agencies are constantly looking to use the most sophisticated preventative methods and technologies available. Predictive policing is one such technique that fuses data analysis, algorithms, and information technology to anticipate and prevent crime. This approach identifies patterns in data to anticipate when and where crime will occur, allowing agencies to take measures to prevent it.[1] Now, engulfed in an artificial intelligence (“AI”) revolution, law enforcement agencies are eager to take advantage of these developments to augment controversial predictive policing methods.[2]

In precincts that use predictive policing strategies, large amounts of data are used to categorize citizens by basic demographic information.[3] Machine learning and AI tools now augment this data which, according to one vendor, “identifies where and when crime is most likely to occur, enabling [law enforcement] to effectively allocate [their] resources to prevent crime.”[4]
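
The core mechanic behind such tools can be illustrated with a deliberately oversimplified Python sketch; the data and names below are hypothetical, and real vendor systems are far more elaborate. The idea is to score each location-and-time bucket by historical incident frequency and direct patrols to the top-ranked buckets.

    from collections import Counter

    # Hypothetical incident log: (grid cell, hour of day) pairs.
    incidents = [("A1", 22), ("A1", 22), ("A1", 23), ("B4", 22), ("C2", 9)]

    def rank_hotspots(log, top_n=3):
        """Rank (cell, hour) buckets by raw historical frequency."""
        return Counter(log).most_common(top_n)

    print(rank_hotspots(incidents))
    # -> [(('A1', 22), 2), (('A1', 23), 1), (('B4', 22), 1)]

Even this toy version exposes the feedback loop critics worry about: heavily policed areas generate more records, which raises their “risk” scores and attracts still more policing.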

Both predictive policing and AI have faced significant challenges concerning issues of equity and discrimination. In response to these concerns, the European Union has taken proactive steps, promulgating sophisticated rules governing AI applications within its territory and continuing its tradition of leading on regulatory initiatives.[5] In the “Artificial Intelligence Act,” the Union clearly outlined its goal of promoting safe, non-discriminatory AI systems.[6]

Back home, we’ve failed to keep a similar legislative pace, even with certain institutions sounding the alarm.[7] Predictive policing methods have faced similar criticism. In an issue brief, the NAACP emphasized, “[j]urisdictions who use [Artificial Intelligence] argue it enhances public safety, but in reality, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”[8] This technological and ideological marriage clearly poses discriminatory risks in a nation where a Black person is already far more likely than a white person to be stopped without just cause.[9]

Police agencies are bullish about the technology. Police Chief Magazine, the official publication of the International Association of Chiefs of Police, paints these techniques in a more favorable light, stating, “[o]ne of the most promising applications of AI in law enforcement is predictive policing… Predictive policing empowers law enforcement to predict potential crime hotspots, ultimately aiding in crime prevention and public safety.”[10] In this space, facial recognition software is also gaining traction among law enforcement agencies as a powerful tool for identifying suspects and enhancing public safety. Clearview AI stresses that its product “[helps] law enforcement and governments in disrupting and solving crime.”[11]

Predictive policing methods enhanced by AI technology show no signs of slowing down.[12] The obvious advantages of these systems cannot be ignored: they allow agencies to better allocate resources and manage their staff. However, as law enforcement agencies adopt these technologies, it is important to remain vigilant in holding them accountable for the ethical implications and biases embedded within their systems. A comprehensive framework for accountability and transparency, similar to the European Union’s guidelines, must be established to ensure that deploying predictive policing and AI tools does not come at the expense of marginalized communities.[13]


Notes

[1] Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265–67 (2012).

[2] Eric M. Baker, I’ve Got My AI on You: Artificial Intelligence in the Law Enforcement Domain 47 (Mar. 2021) (master’s thesis).

[3] Id. at 48.

[4] Id. at 49 (citing Walt L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RR-233-NIJ (Santa Monica, CA: RAND, 2013), 4, https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf).

[5] Commission Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.

[6] Lukas Arnold, How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination, Penn. J. L. & Soc. Change (May 14, 2024), https://www.law.upenn.edu/live/news/16742-how-the-european-unions-ai-act-provides#_ftn1.

[7] See Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633, 664 (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5445&context=flr (“Database screening and digital watchlisting systems, in fact, can serve as complementary and facially colorblind supplements to mass incarcerations systems. The purported colorblindness of mandatory sentencing… parallels the purported colorblindness of mandatory database screening and vetting systems”).

[8] NAACP, Issue Brief: The Use of Artificial Intelligence in Predictive Policing, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 2, 2024).

[9] Will Douglas Heaven, Predictive Policing Algorithms Are Racist. They Need to Be Dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (citing OJJDP Statistical Briefing Book, Estimated Number of Arrests by Offense and Race, 2020, https://ojjdp.ojp.gov/statistical-briefing-book/crime/faqs/ucr_table_2 (released July 8, 2022)).

[10] See The Police Chief, Int’l Ass’n of Chiefs of Police, https://www.policechiefmagazine.org (last visited Nov. 2, 2024); Brandon Epstein, James Emerson & ChatGPT, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives, Police Chief Online (Apr. 3, 2024).

[11] Clearview AI, https://www.clearview.ai/ (last visited Nov. 3, 2024).

[12] But see Nicholas Ibarra, Santa Cruz Becomes First US City to Approve Ban on Predictive Policing, Santa Cruz Sentinel (June 23, 2020), https://evidentchange.org/newsroom/news-of-interest/santa-cruz-becomes-first-us-city-approve-ban-predictive-policing/.

[13] See also Roy Maurer, New York City to Require Bias Audits of AI-Type HR Technology, Soc’y for Hum. Res. Mgmt. (Dec. 19, 2021), https://www.shrm.org/topics-tools/news/technology/new-york-city-to-require-bias-audits-ai-type-hr-technology.



Are AI Overviews Creating New Risk of Libel for Search Engines?

Eleanor Nagel-Bennett, MJLST Staffer

Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, protects online service providers from civil liability for content published on their servers by third parties. Essentially, it clarifies that if a Google search for a person’s name produced a link to a blog post containing false and libelous content about that person, the falsely accused searcher could pursue a claim of defamation against the publisher of the blog, but not against Google. Under § 230, Google is not considered the speaker or the publisher of the libelous statements on the blog, despite returning the libelous results on the search engine results page. Specifically, § 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” for purposes of civil penalties.[i]

However, in May 2024 Google rolled out an “AI Overview” feature on its search engine. The AI Overview is powered by Gemini, Google’s generative artificial intelligence chatbot.[ii] Gemini generates a response to each Google search by combining information from internet sources and writing a complete overview answer to the search query using “multi-step reasoning, planning and multimodality” functions.[iii] After a user submits a query, the AI Overview is displayed at the top of the search results. In its first few weeks, Google’s AI Overview suffered from hallucinations, producing “odd and erroneous” outputs.[iv] Some of the odd results were obviously false, such as suggesting a user try adhering cheese to their pizza with a glue stick.[v]
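
Conceptually, the feature follows a retrieve-then-generate pattern, which can be sketched in Python as below. Every interface here is a stand-in invented for illustration; Gemini’s actual pipeline is proprietary and far more complex. The legally significant step is the second one: the model composes new text rather than quoting a source.

    from dataclasses import dataclass

    @dataclass
    class Doc:
        snippet: str

    class StubIndex:
        """Stand-in for a search index; returns canned snippets."""
        def search(self, query, limit):
            return [Doc(f"snippet {i} about {query}") for i in range(limit)]

    class StubModel:
        """Stand-in for a generative model such as Gemini."""
        def generate(self, prompt):
            return "A newly composed overview paragraph..."

    def ai_overview(query, index, model, k=5):
        sources = index.search(query, limit=k)   # 1. retrieve top-ranked pages
        prompt = (f"Answer the question: {query}\n"
                  "Using only these sources:\n"
                  + "\n".join(d.snippet for d in sources))
        return model.generate(prompt)            # 2. write a fresh answer

    print(ai_overview("is this mushroom edible?", StubIndex(), StubModel()))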

Besides going viral online, silly results like the glue-stick answer were largely inconsequential. However, there were also several more serious reports of Google’s AI Overview feature generating misinformation that was more difficult to identify as false. One such result claimed that President Barack Obama was the first Muslim President, a popular but demonstrably false conspiracy theory that has circulated the internet for years, while another told users that certain poisonous mushrooms were safe for human consumption.[vi] Google has since changed the data pool used to produce AI Overviews and now rarely produces blatantly false results — but is “rarely” enough when 8.5 billion Google searches are run per day?[vii]

This raises the question: can search engines be held liable for libelous content published by their generative AI? A plaintiff would first have to persuade the court that § 230 of the Communications Decency Act is not a statutory bar to claims against generative AI. A recent consensus of legal scholars anticipates that courts will likely find the CDA does not bar claims against a company producing libelous content through generative AI, because content produced by generative AI is original work, “authored” by the AI itself.[viii]

For an illustrative comparison, consider how defamation claims against journalists work as compared to claims over traditional search engine results. While a journalist may write stories based on interviews, research, and experience, the language she publishes is her own creation, and she can be held liable for it despite sourcing some pieces from other speakers. Traditional search engines, on the other hand, have historically presented the sourced material directly to the reader, so they are not the “speaker” and are therefore insulated from defamation claims. Enter generative AI, whose output is likely to be considered original work by courts, and that insulation may erode.[ix] Effectively, by introducing an AI Overview feature, a search engine gives up the statutory bar to claims under § 230 of the CDA that search engines have long relied on to avoid liability for defamation.

But even without an outright statutory bar to defamation claims against a search engine’s libelous AI output, there is disagreement over whether humans rely on generative AI output seriously enough for it to give rise to a defamation claim. Some believe that AI-generated text should not be interpreted as a reasonably perceived factual claim, and therefore argue that AI-generated content cannot give rise to a claim for defamation.[x] This is where the legitimacy of a result displayed on a popular search engine comes into play. Even if AI-generated text is not ordinarily perceived as a factual claim, a result displayed at the top of a search engine’s results page carries added weight and authority, even for users who might otherwise be wary of AI outputs.[xi]

While no landmark case law has yet developed on an AI system’s liability for libelous output, several lawsuits have already been filed over liability for libelous content produced by generative AI, including at least one case against a search engine for AI-generated output displayed on its results page.[xii]

Despite the looming potential for consequences, most AI companies have paid little attention to the libel risk created by the operation of generative AI.[xiii] While all AI companies should pay attention to these risks, search engines previously insulated from civil liability by § 230 of the CDA should be especially wary of just how much liability they may invite by including an AI Overview on their results pages.


Notes

[i] 47 U.S.C. §230(c)(1).

[ii] Liz Reid, Generative AI in Search: Let Google Do the Searching for You, Google (May 14, 2024), https://blog.google/products/search/generative-ai-google-search-may-2024/.

[iii] Id.

[iv] Liz Reid, AI Overviews: About Last Week, Google (May 30, 2024), https://blog.google/products/search/ai-overviews-update-may-2024/.

[v] Matt O’Brien, Google Makes Fixes to AI-Generated Search Summaries After Outlandish Answers Went Viral, The Associated Press (May 30, 2024), https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8.

[vi] Id.

[vii] Jordan Brannon, Game-Changing Google Search Statistics for 2024, Coalition Technologies (Apr. 5, 2024), https://coalitiontechnologies.com/blog/game-changing-google-search-statistics-for-2024.

[viii] Joel Simon, Can AI Be Sued for Defamation?, Colum. Journalism Rev. (Mar. 18, 2024).

[ix] Id.

[x] See Eugene Volokh, Large Libel Models? Liability for AI Output, 3 J. Free Speech L. 489, 498 (2023).

[xi] Id.

[xii] In July of 2023, Jeffery Battle of Maryland filed suit against Microsoft for an AI-generated search result on Bing accusing him of crimes he did not commit. The plaintiff, Jeffery Battle, is a veteran, business owner, and aerospace professor. When his name is searched online, however, Bing’s AI overview accuses Battle of crimes committed by a different man, Jeffery Leon Battle, who pled guilty to seditious conspiracy and levying war against the United States after he tried to join the Taliban in the wake of 9/11. Bing’s search engine results page overview, powered by ChatGPT, combines information about the two men into one. See id. at 492.

[xiii] Id. at 493.