Artificial Intelligence

Your Digital Doppelgänger

Lillie Grant, MJLST Staffer

What counts as harm in an age of inference?

Modern systems do not just collect information; they generate it.[1] From patterns in behavior, timing, and interaction, they derive conclusions about people that those people never actually shared.[2] Often, those conclusions are more revealing than anything someone would voluntarily disclose.[3] And yet, the law does not clearly or consistently treat that process as harmful.[4]

Privacy law has mostly been built around disclosure.[5] The usual question is whether information was knowingly shared, improperly collected, or revealed to the wrong people.[6] The basic idea is that the data starts with the individual and then moves outward.[7] But inference does not work like that.[8] It is not about what is given; it is about what is created.[9]

The difference is more significant than it first appears, because when a system converts small pieces of behavior into conclusions about a person, it does more than record activity; it interprets it, producing not just a list of actions but a statement about their meaning.[10]

The law has not caught up. Courts are much more comfortable recognizing harm when inferred information shows up in the world in a visible way.[11] If something is revealed, shared, or used in a way that clearly affects someone, it looks like a familiar kind of injury.[12] It has consequences that feel real and immediate.[13]

But most inferences never get that far.[14] They stay inside the system that produced them.[15] They shape what someone sees, what is recommended, what is prioritized, and sometimes what opportunities are available, all without a discrete, traceable event.[16] Even when those inferences are accurate or deeply personal, they often do not trigger legal protection.[17] There is no clear moment where something was “disclosed,” and without that, courts struggle to recognize harm at all.[18]

That leaves a gap: privacy law still depends on the idea that information is something a person gives.[19] Something you can point to and say, “This was shared.”[20] But inferred data does not fit into that model.[21] It is not handed over; it is built, and because of that, it slips past categories that were never designed to capture this kind of process.[22]

The problem is not just theoretical; it affects whether someone can even bring a claim.[23] To get into court, a plaintiff has to show a concrete injury.[24] Not just a feeling that something is off, but something the law is willing to recognize as harm.[25] When the issue is inference, the information may shape real outcomes but does so quietly, without a clear moment that satisfies the law’s demand for discrete injury.[26]

At the same time, these inferences are not meaningless. They are the product. Companies are not just collecting data for the sake of it; they are turning it into insights that can be used to target ads, keep people engaged, and make money.[27] The value is not just in what people do, but in what can be figured out from what they do.[28]

That raises a harder question. If a company can take your behavior, turn it into something new, and profit from it, what exactly belongs to you? The raw data came from you, but the conclusion did not. The law tends to treat that distinction as important.[29] It is not obvious that it should settle the issue at all.[30]

Recent lawsuits by authors challenge the use of their works to train AI systems as a form of uncompensated extraction.[31] But because those claims focus on the inputs used to build these systems, they leave open a distinct question: whether individuals have any claim to the inferences generated about them. The problem, in other words, is not just data use but the unrecognized extraction and monetization of information produced about individuals.

There are limited signals in existing law suggesting that creating new data about a person can itself be treated as harm, most clearly in biometric cases where courts have recognized that generating something like a faceprint is significant even without further use.[32]

Part of what makes inference so difficult is that it does not feel like a clear violation. There is no obvious intrusion or single moment where something is taken; instead, it happens gradually as bits of behavior accumulate and are turned into meaning, each piece harmless on its own but surprisingly complete in the aggregate.[33] That creates a deeper tension. The better systems get at understanding people, the less clear it becomes what it even means, legally, to “know” something about someone.[34] At what point does a pattern become information? And at what point does producing that information start to matter in a legal sense?

A better framing may be to abandon disclosure as the organizing principle altogether. The issue is not disclosure; it is extraction. Systems are not just observing behavior; they are pulling meaning out of it and turning that meaning into something usable.[35] That something can be scaled, sold, and built into entire business models.[36] But the legal rules we have are still mostly about what people choose to share, not what can be created from what they do.[37]

If that is right, the problem is only intensifying, as systems increasingly rely on information that no one explicitly provided but that still feels personal, making it harder to say that nothing of consequence is being taken. The law offers no clear answer, leaving inferred data central in practice but misaligned with doctrines of harm. This leaves individuals in a position where systems can form detailed conclusions about them while they have little ability to see or challenge those conclusions, reflecting a definition of harm that no longer matches how information is actually produced and used.

 

Notes

[1] See generally Joan M. Wrabetz, What Is Inferred Data and Why Is It Important?, ABA (Aug. 22, 2022), https://www.americanbar.org/groups/business_law/resources/business-law-today/2022-september/what-is-inferred-data-and-why-is-it-important/.

[2] Id.

[3] See Hal Conick, AI and the Law, Univ. Chi. L. Sch. (Dec. 9, 2024), https://www.law.uchicago.edu/news/ai-and-law.

[4] Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, 2019 Colum. Bus. L. Rev. 494.

[5] See Overview of the Privacy Act of 1974: Conditions of Disclosure to Third Parties, U.S. Dep’t of Just., https://www.justice.gov/opcl/overview-privacy-act-1974-2020-edition/disclosures-third-parties (last visited Apr. 9, 2026).

[6] Id.

[7] Id.

[8] See Wrabetz, supra note 1.

[9] Id.

[10] Id.

[11] See Harith Khawaja, Injury, in Fact: The Internet, the Americans with Disabilities Act, and Standing in Digital Spaces, 36 Stan. L. & Pol’y Rev. 165, 172 (2025).

[12] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Danielle Keats Citron & Daniel Solove, Privacy Harms, 102 B.U. L. Rev. 793 (2022).

[13] Id.

[14] Jeffrey Erickson, What Is AI Inference?, Oracle (Apr. 2, 2024), https://www.oracle.com/artificial-intelligence/ai-inference/#:~:text=Inference%2C%20to%20a%20lay%20person,in%20the%20training%20data%20set.

[15] Id.

[16] Id.

[17] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Citron & Solove, supra note 12.

[18] Id.

[19] Citron & Solove, supra note 12.

[20] See Pamela J. Wisniewski & Xinru Page, Privacy Theories and Frameworks, in Modern Socio-Technical Perspectives on Privacy 15 (2022).

[21] Wrabetz, supra note 1.

[22] See Privacy by Proxy: Regulating Inferred Identities in AI Systems, IAPP (Nov. 12, 2025), https://iapp.org/news/a/privacy-by-proxy-regulating-inferred-identities-in-ai-systems.

[23] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).

[24] Id.

[25] Id.

[26] Wrabetz, supra note 1.

[27] Id.

[28] Id.

[29] Id.

[30] Id.

[31] See Pramode Chiruvolu et al., Fair Use and AI Training: Two Recent Decisions Highlight the Complexity of This Issue, Skadden, Arps, Slate, Meagher & Flom LLP (July 8, 2025) https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training.

[32] See Ross D. Emmerman & Mark Goldberg, Illinois Supreme Court Rules No Actual Harm Needed for Biometric Information Protection Act Claims; Floodgates Open, Loeb & Loeb LLP (Jan. 2019) https://www.loeb.com/en/insights/publications/2019/01/illinois-supreme-court-rules-no-actual-harm-needed.

[33] Wrabetz, supra note 1.

[34] Id.

[35] Id.

[36] Id.

[37] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).


The “Search Party” Backfire: How a Super Bowl Ad Ignited a Bipartisan Privacy Reckoning in Minnesota

Ella Stromberg, MJLST Staffer

Introduction: Heartwarming Ad with a Chilling Reality

During the 2026 Super Bowl, a commercial meant to pull at the heartstrings of millions of viewers instead ignited a firestorm of debate over the future of American privacy. The advertisement promoted Ring’s new “Search Party” feature, a tool designed to help find lost dogs by utilizing a neighborhood-wide network of AI-powered doorbell cameras.[1] While the mission of reuniting lost pets with their families appears noble, the ad’s high-profile debut served as a rare moment of corporate transparency regarding the vast surveillance infrastructure growing around us. The resulting backlash has exposed a significant gap in consumer surveillance laws, one that Minnesota legislators are now aggressively moving to fill.

How the Ring “Search Party” Feature Works

To the casual viewer, the Search Party feature seems like a simple community service. However, the underlying mechanics are far more complex. The feature utilizes AI to scan footage from opted-in neighbor cameras to identify lost pets based on characteristics such as breed, size, and fur pattern.[2] The feature is enabled by default, meaning users are automatically enrolled unless they navigate a multi-step process to opt out.[3] This captured footage can be stored for up to 180 days, creating a massive, retrospective, searchable database of neighborhood activity.[4] Ring founder Jamie Siminoff defended this expansion, noting that advances in AI allow these features to be implemented at a scale and speed previously impossible.[5]
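Ring has not published Search Party’s internals, but the reporting above describes its observable behavior: attribute matching across opted-in (by default, never-opted-out) cameras over footage retained for up to 180 days. The sketch below is a minimal, hypothetical reconstruction of that flow; every name, field, and data structure in it is invented for illustration.

```python
# Hypothetical sketch of the attribute-matching flow described above.
# Ring has not published its implementation; every name here is invented.

from dataclasses import dataclass

@dataclass
class PetReport:
    breed: str
    size: str
    fur_pattern: str

def matches(report: PetReport, detection: dict) -> bool:
    """Compare a lost-pet report against attributes a vision model
    extracted from one camera's stored footage."""
    return (detection.get("breed") == report.breed
            and detection.get("size") == report.size
            and detection.get("fur_pattern") == report.fur_pattern)

def search_party(report: PetReport, cameras: list[dict]) -> list[str]:
    """Scan detections from enrolled cameras. Note the privacy-relevant
    defaults: enrollment is on unless the owner opted out, and footage
    retained for up to 180 days makes the search retrospective."""
    hits = []
    for cam in cameras:
        if not cam.get("enrolled", True):  # enabled by default
            continue
        for detection in cam["detections"]:
            if matches(report, detection):
                hits.append(cam["id"])
                break
    return hits

# Example: one camera (never opted out) whose footage contains a match.
cams = [{"id": "cam-1", "detections": [
    {"breed": "beagle", "size": "small", "fur_pattern": "tricolor"}]}]
print(search_party(PetReport("beagle", "small", "tricolor"), cams))  # ['cam-1']
```

The privacy critique discussed below maps directly onto the two defaults in this sketch: enrollment that is on unless switched off, and a long retention window that turns live footage into a searchable archive.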

The Viral Backlash

The marketing for the Search Party feature encouraged users to “be a hero in your neighborhood,” but the public reception was decidedly less heroic.[6] In the week following the Super Bowl ad, nearly 50% of social media conversations regarding Ring were negative, compared to only 14% that were positive.[7] Users took to platforms like Reddit to claim they were requesting refunds, while some even posted videos of themselves destroying their Ring cameras in protest.[8] Legal experts were equally struck by the campaign. Dr. Jane Kirtley, Professor of Media, Ethics, and Law here at the University of Minnesota, noted it was interesting that Ring would be “so candid about the potential use of this particular technology.”[9] Critics argued the ad was “creepy” and “dystopian,” suggesting that if AI can be used to track a specific dog across a neighborhood, there is little barrier to using the same infrastructure to track specific people.[10]

Privacy Advocates’ Responses

The concerns raised by privacy advocates like the Electronic Frontier Foundation (EFF) center on the fundamental problem of consent.[11] While a camera owner might opt-in to the network, the Ring cameras also record every passerby, from postal workers to neighbors, without their permission.[12] EFF attorney Mario Trujillo warns that this creates a “large surveillance apparatus” that can easily be tapped into by law enforcement.[13]

There is also fear of a slippery slope when Search Party is combined with Ring’s “Familiar Faces” facial recognition technology, which identifies specific individuals who approach a doorway.[14] Congressman Raja Krishnamoorthi (D-IL) expressed concerns in a formal letter to Ring, warning that the opt-out design is confusing and risks creating 24/7 surveillance networks near sensitive locations like hospitals, schools, and courthouses.[15] Further, the history of partnerships between Ring and surveillance companies like Flock Safety has raised alarms regarding data-sharing with federal agencies like ICE.[16] Although Ring recently canceled its partnership with Flock, citing resource constraints, advocates remain wary of how easily private residential data can be integrated into broader police intelligence networks.[17]

Minnesota’s Rapid Legislative Response

The Search Party controversy made one thing clear: Minnesota currently lacks laws preventing private companies from sharing this type of residential video data with third parties or government entities.[18] In response, a bipartisan group of Minnesota lawmakers introduced a five-bill package aimed at regulating AI and protecting digital rights.[19] Led by the unlikely duo of Senator Erin Maye Quade and Senator Eric Lucero, the bills target several key areas: SF 1857 would prohibit children under 18 from accessing AI chatbots,[20] SF 1856 would ban health insurers from using AI to determine medical necessity,[21] SF 3098 would block “dynamic pricing” set by AI algorithms,[22] SF 1886 would mandate disclosure when a consumer is interacting with AI,[23] and SF 1120 would create a landmark ban on reverse warrants.[24]

SF 1120 has particular significance stemming from the Ring ad. It would prohibit the government from using reverse location or reverse keyword searches, which are digital dragnets that compel tech companies to hand over data on every device in a specific area or every person who searched for a specific term.[25] The bill includes a civil cause of action, allowing individuals to sue for $1,000 per violation if their data is obtained unlawfully.[26] Senator Lucero argued these controls are necessary to “empower individuals against these multi-billion dollar industries.”[27]

The path to enactment faces two major hurdles. First, law enforcement groups, including the Minnesota Bureau of Criminal Apprehension, testified that banning reverse warrants would have “extensive negative consequences” for solving complex crimes.[28] Second, a federal complication looms: an Executive Order from President Trump establishes an AI litigation task force to challenge state laws, threatening to pull funding from states with “onerous” AI laws.[29]

Looking Forward

The Ring Super Bowl ad was intended to be a marketing triumph, but instead, it became a rare moment where the public saw a glimpse of the surveillance nightmare being built around them. The swift, bipartisan response in the Minnesota legislature signals that surveillance privacy is no longer a partisan issue but now a fundamental question of constitutional rights that the public wants answers to. As these bills move through the legislature, they highlight the unresolved tension between legitimate law enforcement needs and Fourth Amendment protections. If passed, Minnesota’s approach could become a model for state-level digital rights, provided it can survive the looming threat of federal preemption. For now, the Search Party backfire serves as a potent reminder that in the age of AI, “common-sense guardrails” are no longer optional; they are necessary.[30]

 

Notes

[1] See Ring, Search Party from Ring | Be a Hero in Your Neighborhood, YouTube (Feb. 2, 2026), https://www.youtube.com/watch?v=OheUzrXsKrY.

[2] Abby Haymond, Ring’s New AI Lost Dog Feature Raises Privacy Concerns, WDAM (Feb. 11, 2026 at 22:10 CST), https://www.wdam.com/2026/02/12/rings-new-ai-lost-dog-feature-raises-privacy-concerns/.

[3] Todd Bishop, What Ring’s ‘Search Party’ Actually Does, And Why Its Super Bowl Ad Gave People the Creeps, GeekWire (Feb. 10, 2026 at 11:14), https://www.geekwire.com/2026/what-rings-search-party-actually-does-and-why-its-super-bowl-ad-gave-people-the-creeps/.

[4] Madison Lisowski & Danae Holmes, Concerns Over AI Video Surveillance Grow Following Big Game Ad, W. Mass. News (Mar. 2, 2026 at 15:10 CST), https://www.westernmassnews.com/2026/03/02/concerns-over-ring-cameras-grow-following-big-game-ad/.

[5] Bishop, supra note 3.

[6] See, e.g., Lisowski & Holmes, supra note 4.

[7] Sam Sabin, Doorbell Cams, Surveillance Tech Face Growing Backlash, Axios (Feb. 17, 2026), https://www.axios.com/2026/02/17/doorbell-cams-and-surveillance-tech-face-growing-public-backlash.

[8] Id.

[9] Corin Hoggard, Ring’s AI Feature Raises Privacy Alarms, Fox 9 (Feb. 10, 2026 at 9:37 CST), https://www.fox9.com/news/rings-ai-feature-raises-privacy-alarms.

[10] Bishop, supra note 3; Haymond, supra note 2.

[11] See, e.g., Beryl Lipton, No One, Including Our Furry Friends, Will Be Safer in Ring’s Surveillance Nightmare, Elec. Frontier Found. (Feb. 10, 2026), https://www.eff.org/deeplinks/2026/02/no-one-including-our-furry-friends-will-be-safer-rings-surveillance-nightmare-0.

[12] Haymond, supra note 2.

[13] Id.

[14] Id.; see also Lipton, supra note 11; Bishop, supra note 3.

[15] Rep. Raja Krishnamoorthi, Krishnamoorthi Raises Alarm Over Ring’s New AI “Search Party” Feature, Citing Privacy and Civil Liberties Concerns (Feb. 27, 2026), https://krishnamoorthi.house.gov/media/press-releases/krishnamoorthi-raises-alarm-over-rings-new-ai-search-party-feature-citing.

[16] Bishop, supra note 3; Jay Stanley, Flock’s Aggressive Expansions Go Far Beyond Simple Driver Surveillance, ACLU (Aug. 18, 2025), https://www.aclu.org/news/privacy-technology/flock-roundup.

[17] Sabin, supra note 7; Lipton, supra note 11.

[18] Hoggard, supra note 9.

[19] Howard Thompson, MN Lawmakers Introduce AI Regulations Aimed at Protecting Children, Curtailing Surveillance, Fox 9 (Mar. 9, 2026 at 13:46 CDT), https://www.fox9.com/news/mn-lawmakers-introduce-ai-regulations-aimed-protecting-children-curtailing-surveillance.

[20] S.F. 1857, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/1857/versions/0/.

[21] S.F. 1856, 94th Leg., Reg. Sess. (Minn. 2025),  https://www.revisor.mn.gov/bills/94/2025/0/SF/1856/versions/latest/.

[22] S.F. 3098, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/3098/versions/latest/.

[23] S.F. 1886, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/1886/versions/latest/.

[24] S.F. 1120, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/1120/versions/latest/.

[25] Id.

[26] Id.

[27] Michelle Griffith, Minnesota Lawmakers Push Bipartisan Measures to Regulate AI, SC Times (Mar. 11, 2026 at 2:45 CT), https://www.sctimes.com/story/news/politics/2026/03/11/minnesota-senate-considers-bipartisan-push-to-regulate-ai-artificial-intelligence-dfl-gop/89082394007/.

[28] Id.; Minn. Bureau of Criminal Apprehension, BCA Opposition to S.F. 1120 (Minn. Senate Comm. on Judiciary and Public Safety, Mar. 5, 2026), https://assets.senate.mn/committees/2025-2026/3128_Committee_on_Judiciary_and_Public_Safety/BCA-Opposition-to-SF1120-3-5-26-Signed-3-5-26.pdf (letter from BCA Superintendent Evans to Chair Latz opposing SF 1120).

[29] Exec. Order No. 14365, Ensuring a National Policy Framework for Artificial Intelligence, 90 Fed. Reg. 58499 (Dec. 2025), https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence; Thompson, supra note 19.

[30] Chris Farrell & Ellen Finn, Slate of Bills Looking to Regulate AI Introduced at Minnesota Capitol, Minn. Pub. Radio (Mar. 9, 2026 at 13:35), https://www.mprnews.org/episode/2026/03/09/slate-of-bills-looking-to-regulate-ai-introduced-at-state-capitol.


Can American Antitrust Law Keep Up With Artificial Intelligence?

Alec J. Berin, Matthew P. Suzor, and Quintin C. Cerione of Miller Shah LLP

Since the debut of OpenAI’s ChatGPT in late 2022, artificial intelligence (AI) has exploded from an experimental tool into a global industry. The exponential rise of generative AI, although providing companies and consumers with greater levels of efficiency and productivity, is putting pressure on American antitrust law to play catch-up in regulating the growing AI market.

As AI becomes commonplace today, one of the greatest challenges it poses is that its building blocks—chips, cloud infrastructure, and large-language models—are largely controlled by only a handful of companies.[1] A major concern, therefore, is whether American antitrust law, which was largely designed during an industrial period dominated by railroads and manufacturing, can address the competitive risks of the AI era. Regulators and courts have started to express their perspectives about these issues, yet more questions than answers have emerged.

The Intersection of American Antitrust Doctrine and AI

The core of the American antitrust framework comprises the Sherman Antitrust Act (1890), the Clayton Act (1914), and the Federal Trade Commission Act (1914).[2] The Sherman Act was initially enacted in an effort to target monopolization by barring exclusionary practices, while the Clayton Act filled its holes by prohibiting mergers and acquisitions whose effect “may be substantially to lessen competition, or to tend to create a monopoly.”[3] Historically, courts have applied these laws to industries defined by physical assets, such as steel, oil, and operating systems.[4] Today, however, market power increasingly consists of control over intangible assets: data and algorithms.

Regulators are attempting to offer guidance on how these statutes apply in a digital and data-driven era. For example, in 2023 the FTC and DOJ issued revised Merger Guidelines, which warned that a merger could undermine competition if it “creates a firm that can limit access to products or services that its rivals use to compete.”[5] Although this is not directed exclusively at tech companies, this language nonetheless suggests antitrust law’s expanded focus on vertical integration—especially relevant for companies’ partnerships aimed at combining the control of AI infrastructure and data services.

The particular challenge for regulating market power in the AI sector is defining the relevant market. Because AI depends on key inputs—vast amounts of data and computational resources—rather than traditional products and services that have historically defined markets, delineating the relevant market is uniquely complex. This is clearly indicated in a 2025 report from the Congressional Research Service, which warns that “limited access to data” may threaten competition, regardless of whether AI services remain free to consumers.[6] In the coming years, determining whether AI regulation will be concentrated on the models, chips, or cloud services used for these products—or if they will be considered a single integrated stack—will be critical in influencing enforcement outcomes.

Early AI-Antitrust Legal Battles

In recent months, lawsuits against major tech companies have begun to address how far traditional antitrust principles extend into the AI space.[7] This October, a class-action lawsuit filed against Microsoft[8] alleged that its financial relationship with OpenAI—particularly a deal granting Microsoft exclusive cloud-computing rights that restrict the supply of computational resources needed to run ChatGPT—both limited market competition and artificially drove up ChatGPT subscription prices while diminishing product quality for millions of OpenAI users.[9] Similar concerns are being raised by antitrust experts regarding Nvidia’s $100 billion partnership with OpenAI,[10] as experts fear that such a relationship will give both companies an unfair advantage over their competitors.

Perhaps most notably, a September ruling by a federal judge in a landmark antitrust case against Google illustrated how AI may continue to be an obstacle in regulating monopolies.[11] Although the judge affirmed that “Google cannot use the same anticompetitive playbook for its GenAI products that it used for Search,” he insisted that the emergence of generative AI has left companies better positioned “to compete with Google than any traditional search company developer has been in decades” and ultimately spared Google from harsher penalties.[12] This exemplifies the inherent tension of AI: a technology capable of both fostering and hindering competition will prove only more difficult for regulators to address in years to come.

Critical Legal Questions to Consider

Going forward, courts will need to answer a series of questions to best address the competitive concerns of AI. First, as AI blurs product boundaries—with single companies involved in many layers of the supply chain—determining whether these layers represent distinct or integrated markets has significant implications for assessing anticompetitive behavior.

Second, because several of the most popular AI products offer services for free or at low cost, harm to consumers may lie outside the scope of price fixing and instead result from diminished product quality and restricted access to inputs.[13] It will be up to courts and regulators to determine when harm is being committed in the AI market.

Third, defining the line between integration and exclusion will become increasingly urgent. Though partnerships and acquisitions may accelerate innovation, unlawful exclusion may arise when integrated companies restrict rivals’ access to essential inputs or engage in self-preferencing through exclusive supply arrangements. Though this risk is outlined in the 2023 Merger Guidelines, it remains to be seen how courts will approach the issue in the coming years.

 

Notes

[1] See, e.g., Jay Stanley, Will Giant Companies Always Have a Monopoly on Top AI Models?, ACLU (Aug. 20, 2025), https://www.aclu.org/news/racial-justice/will-giant-companies-always-have-a-monopoly-on-top-ai-models; Steven Levy, There Is Only One AI Company. Welcome to the Blob, Wired (Nov. 21, 2025 at 11:00), https://www.wired.com/story/ai-industry-monopoly-nvidia-microsoft-google/.

[2] See Sherman Antitrust Act of 1890, 15 U.S.C. §§ 1–38; Clayton Act of 1914, 15 U.S.C. §§ 12–27; Federal Trade Commission Act of 1914, 15 U.S.C. §§ 41–58.

[3] The Clayton Act of 1914, 15 U.S.C. § 18.

[4] See, e.g., United States v. Columbia Steel Co., 334 U.S. 495 (1948) (applying the Sherman Act to the steel industry); FTC v. Sinclair Ref. Co., 261 U.S. 463 (1923) (applying the Federal Trade Commission Act and Clayton Act to the oil industry); United States v. Microsoft Corp., 346 U.S. App. D.C. 330 (2001) (applying the Sherman Act to operating systems).

[5] Federal Trade Commission & U.S. Department of Justice, Merger Guidelines (issued Dec. 18, 2023),
https://www.justice.gov/atr/2023-merger-guidelines.

[6] Congressional Research Service, Artificial Intelligence and Competition Policy (2025), CRS Insight No. IN12458, https://crsreports.congress.gov/product/pdf/IN/IN12458.

[7] Mike Scarcella, AI Users Sue Microsoft in Antitrust Class Action Over OpenAI Deal, Reuters (Oct. 13, 2025 at 17:47 CDT), https://www.reuters.com/legal/government/ai-users-sue-microsoft-antitrust-class-action-over-openai-deal-2025-10-13/.

[8] Class Action Complaint, Samuel Bryant et al. v. Microsoft Corp., No. 3:25‑cv‑08733 (N.D. Cal. filed Oct. 13, 2025) (alleging anticompetitive restraints arising from Microsoft’s partnership with OpenAI).

[9] Scarcella, supra note 7.

[10] Jody Godoy, Nvidia’s $100 Billion OpenAI Play Raises Big Antitrust Issues, Reuters (Sept. 23, 2025),
https://www.reuters.com/technology/nvidias-100-billion-openai-play-raises-big-antitrust-concerns-2025-09-23/.

[11] See generally United States v. Google LLC, 803 F. Supp. 3d 18 (D.D.C. 2025) (remedies decision addressing generative AI’s competitive effects).

[12] Id. at 99, 128.

[13] Scarcella, supra note 7.


Closing the Reporting Gap: Building a Legal Framework for Reporting Serious Online Threats

Heather Van Dort, MJLST Staffer

On February 12, 2026, Canada experienced one of the deadliest mass shootings in its history.[1] The shooting in Tumbler Ridge, British Columbia, claimed the lives of eight people and left another twenty-seven injured.[2] Months before the shooting, in June 2025, the suspect was banned from ChatGPT after they described concerning scenarios about gun violence to the chatbot.[3] OpenAI’s automated review system flagged the suspect’s posts, and about a dozen staffers subsequently reviewed the posts.[4] After internal deliberations, the company banned the account, but decided that the suspect’s activity did not meet the criteria necessary for reporting to law enforcement because there was no credible, imminent threat of harm.[5] It was not until after the shooting that OpenAI reached out to local authorities to share information regarding the suspect’s account.[6] Still, OpenAI did not violate any Canadian law, nor would it have violated any American law if these events had taken place within the United States.[7] In response to the tragedy, Canadian officials met with OpenAI officials in February, but OpenAI could not offer any new substantial safety measures to address situations in which it flags concerning content.[8] This incident highlights the lack of sufficient government oversight of the review policies that technology companies implement to determine when to disclose information to law enforcement.

OpenAI’s current policy (effective Jan. 1, 2026) for reporting to law enforcement permits the disclosure of user data if it believes that the disclosure is necessary “to prevent an emergency involving danger of death or serious physical injury to a person.”[9] This policy is consistent with the current disclosure requirements in the United States under the Stored Communications Act (“Act”).[10] Generally, the Act prohibits electronic communication service providers (“providers”) from disclosing customer data to governmental entities, but it contains an exception for emergencies.[11] Specifically, it allows providers to disclose the contents of customer communication if it “in good faith, believes that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay.”[12] However, there is nothing in the Act, nor any other U.S. law, which requires providers to disclose credible, serious threats to law enforcement.[13] As a result, providers are left to their own discretion to decide when user communications on their platforms are sufficiently concerning to justify reporting to law enforcement. This gap in the regulatory framework puts providers in a difficult position of deciding when to disclose closely held consumer data without clear guidelines, which subsequently leaves citizens vulnerable to the whims of providers.

It is time for lawmakers to establish clear mandatory reporting requirements for providers when they encounter concerning threats. Developing a legal framework that balances the need for public safety and privacy in consumer data is by no means easy, but the United States’ child protection laws may provide a helpful model for lawmakers. By federal statute, the United States imposes a duty on providers to report to the CyberTipline, operated by the National Center for Missing and Exploited Children (NCMEC), as soon as “reasonably possible” after they obtain actual knowledge of child exploitation material, so that information related to child exploitation can be shared with law enforcement.[14] The report must include the complete communication flagged by the company, including any identifying information about the individual involved and the account’s geographic location.[15] NCMEC then forwards the report to relevant federal, state, local, and foreign law enforcement.[16] The primary enforcement mechanism of the law is steep fines for providers that increase with each violation.[17] Importantly, the law does not require providers to affirmatively screen or search for child exploitation content, nor does it require them to monitor accounts.[18]
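To make the statute’s reporting model concrete, here is a rough sketch of the elements a Section 2258A report carries: the complete flagged communication, identifying information, and geographic location. The field names and serialization below are illustrative assumptions, not NCMEC’s actual CyberTipline interface.

```python
# A minimal sketch of the report contents Section 2258A contemplates.
# Field names and the serializer are invented; they are not drawn from
# NCMEC's actual CyberTipline API.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class TiplineReport:
    provider: str
    flagged_communication: str  # the complete communication at issue
    user_identifiers: dict      # e.g., account email, IP address
    geographic_location: str    # the account's reported location
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def file_report(report: TiplineReport) -> str:
    """Serialize a report "as soon as reasonably possible" after the
    provider obtains actual knowledge. Note what is absent: the statute
    imposes no duty to proactively screen or monitor accounts, so this
    function is only ever triggered by knowledge the provider already has."""
    return json.dumps(asdict(report))

payload = file_report(TiplineReport(
    provider="ExampleChat",
    flagged_communication="<complete flagged message thread>",
    user_identifiers={"email": "user@example.com", "ip": "203.0.113.7"},
    geographic_location="Minneapolis, MN"))
```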

Lawmakers could adopt a similar legal model to address other credible threats of serious imminent harm. Providers could be required to report content flagged by their algorithms as posing serious threats of harm to a tipline. After receiving the information, the tipline could consult an organization comprised of experts who could then determine whether to file a report with law enforcement. This model would relieve providers of the stress and potential liability associated with making difficult decisions about when to report to law enforcement. It could also improve public safety by ensuring that experts, rather than providers, screen harmful content. The use of a broader mandatory reporting requirement to address threats beyond child endangerment is not unprecedented. In the European Union, the Digital Services Act requires large online platforms to promptly inform competent authorities when they encounter content that suggests that there is a serious threat to life or safety.[19] Because many of the same large software providers operate in both the United States and Europe, a mandatory reporting requirement will likely be fairly easy for them to adjust to.[20]

There are serious privacy concerns that must be addressed before such a law is adopted. One concern, raised by OpenAI, is the risk of having police show up to investigate individuals who may not have violated the law.[21] While this can happen in regular police work, there is always a risk that police presence will startle people, resulting in escalation that could lead to serious harm. It is not possible to eliminate this risk entirely, but ensuring that experts screen concerning content will help guarantee that law enforcement is involved only when necessary.

A mandatory reporting law may not entirely resolve tough cases, like the Tumbler Ridge tragedy, where a credible threat of imminent harm is not necessarily clear, but it will at least require providers to report to law enforcement in instances where there is a clear threat. Establishing an independent body of experts to review content in difficult cases will relieve providers of some of the pressure of resolving borderline cases and improve public safety by ensuring that experts are making the decision of when to report to law enforcement.

 

Notes

[1] See Ottilie Mitchell, Tumbler Ridge Suspect’s ChatGPT Account Banned Before Shooting, Brit. Broad. Corp. (Feb. 21, 2026), https://www.bbc.com/news/articles/cn4gq352w89o.

[2] Id.

[3] See Georgia Wells, OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago, Wall St. J., (Feb. 21, 2026, 12:04 ET), https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62?mod=Searchresults&pos=1&page=1 [https://perma.cc/A66B-V4PE].

[4] See id.

[5] Id.

[6] Id.

[7] See Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5, s. 7(3)(e) (Can.) (allowing organizations to disclose personal information to government officials in emergency situations but not requiring it); see also 18 U.S.C. § 2702 (permitting disclosure of personal information to government officials in emergency situations, but not requiring it).

[8] See Alyshah Hasham, No ‘Substantial’ New Safety Measure Offered by OpenAI Following Tumbler Ridge Shooting, Says Minister, Toronto Star (Feb. 25, 2026), https://www.thestar.com/news/canada/no-substantial-new-safety-measures-offered-by-openai-following-tumbler-ridge-shooting-says-minister/article_1342f97e-2622-4cfa-bb7a-518e45151019.html.

[9] OpenAI Government User Data Request Policy, OpenAI (Jan. 1, 2026), https://cdn.openai.com/pdf/openai-law-enforcement-policy-v.2025-12.pdf.

[10] See generally 18 U.S.C. §§ 2701 et seq.

[11] 18 U.S.C. § 2702(a).

[12] 18 U.S.C. § 2702(b)(8).

[13] See 18 U.S.C. §§ 2701 et seq.

[14] See 18 U.S.C. § 2258A(a).

[15] See 18 U.S.C. § 2258A(b).

[16] See 18 U.S.C. § 2258A(c).

[17] See 18 U.S.C. § 2258A(e) (setting fines at not more than $850,000 for providers with not less than 100,000,000 monthly active users or $600,000 for providers with less than 100,000,000 monthly active users).

[18] See 18 U.S.C. § 2258A(f).

[19] See Council Regulation 2022/2065, art. 18, 2022 O.J. (L 277) 1, 30.

[20] See Frances Burwell & Kenneth Propp, Digital Sovereignty: Europe’s Declaration of Independence?, Atl. Council (Jan. 14, 2026), https://www.atlanticcouncil.org/in-depth-research-reports/report/digital-sovereignty-europes-declaration-of-independence/.

[21] Vjosa Isai, Canada Presses OpenAI for Answers on Mass Shooter’s Chatbot Use, N.Y. Times (Feb. 23, 2026), https://www.nytimes.com/2026/02/23/world/canada/canada-shooting-openai.html [https://perma.cc/PMR7-W66Q].


AI Companies Could Be Liable for Violence Inspired by Their Chatbots

Benjamin Ayanian, MJLST Staffer

Overview

Artificial Intelligence (AI) is developing rapidly, and a substantial segment of the population now regularly uses large language models (LLMs).[1] Certainly, LLMs present numerous benefits, as they can streamline tasks, summarize large volumes of text, act as an intellectual sparring partner, offer general health and exercise advice, and more.[2]

LLMs also present various dangers and pitfalls, such as promulgating misinformation, hallucinating legal citations, and providing potentially dangerous and incorrect health advice.[3] Most recently, LLMs have come under great scrutiny for their role in encouraging violent actions by users, both against themselves and against others.[4]

Current Lawsuits

In August 2025, parents of sixteen-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, alleging that the company’s LLM, ChatGPT, advised their son on methods of how to commit suicide, even offering to assist in drafting his suicide note.[5] Additionally, in November 2025, parents of twenty-three-year-old Zane Shamblin filed a lawsuit claiming that ChatGPT caused the mental illness and suicide of their child.[6] And, just before the turn of the new year, plaintiffs filed an action against OpenAI, contending that ChatGPT encouraged and inspired a man named Stein-Erik Solberg to kill his own mother and then himself.[7]

In each of these cases, the documented messages between ChatGPT and the user who went on to commit violence are striking. For example, in Adam Raine’s case, when the vulnerable young man expressed concern that his parents would blame themselves for his suicide, ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.”[8] Raine would later kill himself, according to the complaint, by “using the exact partial suspension hanging method that ChatGPT described and validated” in conversation with him.[9] And, after Zane Shamblin indicated to ChatGPT on the morning of his death, around 4:00 AM, that it was time for him to end his life, the chatbot wrote, “alright, [sic] brother if this is it . . . then let it be known: you didn’t vanish. you [sic] ‘arrived’ . . . rest easy. king, [sic] you did good.”[10]

Legal Theories for Company Liability

Across the cases above, the plaintiffs are seeking to apply a number of familiar tort doctrines (strict products liability, negligence, wrongful death, etc.) to a novel situation: harm allegedly resulting from dangerous conversations with LLMs.[11] Plaintiffs in Raine, for example, argue that ChatGPT is subject to strict products liability as a defective product that failed to perform as safely as an ordinary consumer would expect.[12] However, it is unclear whether courts will extend strict products liability to LLMs, as courts have typically viewed software as a service, not a “product.”[13] With respect to the negligence and wrongful death theories, those claims in each case will likely turn on the question of causation and be highly fact-dependent.[14]

Conclusion

LLMs can provide a multitude of benefits in everyday life, but if they do not have proper guardrails, they can also play a role in human tragedy, as highlighted by these recent lawsuits. Courts will now have to grapple with whether existing law is sufficient to subject technology companies to liability in cases where LLMs contribute to self-harm or violence against others.

 

Notes

[1] See Arrifud M., LLM Statistics 2026: Comprehensive Insights Into Market Trends and Integration, Hostinger (Feb. 2, 2026), https://www.hostinger.com/tutorials/llm-statistics (“44.1% of men use AI daily for work, compared to 29.5% of women.”); see also McClain et al., How the U.S. Public and A.I. Experts View Artificial Intelligence, Pew Rsch. (Apr. 3, 2025) (noting that now 1 in 3 U.S. adults have interacted with an A.I. chatbot).

[2] See Cole Stryker, What are LLMs?, IBM, https://www.ibm.com/think/topics/large-language-models (last visited Feb. 25, 2026) (These LLMs are “trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.”).

[3] See Nitin Birur, Guardrails or Liability? Keeping LLMs on the Right Side of AI, Enkrypt AI (Apr. 13, 2025), https://www.enkryptai.com/blog/guardrails-or-liability-keeping-llms-on-the-right-side-of-ai (“[T]he mayor of an Australian town considered suing OpenAI after ChatGPT hallucinated a false claim that he had been imprisoned for bribery . . . a pair of New York lawyers were sanctioned after relying on an LLM that confidently generated fake legal citations, misleading the court . . . a health nonprofit deployed an eating-disorder support chatbot powered by generative AI. Users discovered it was giving out harmful dieting tips — telling a person with anorexia how to cut calories and lose weight . . . . The bot, intended as a help, ended up exacerbating the very problem it was supposed to address, prompting an immediate shutdown.”) (internal citations omitted).

[4] See, e.g., Rob Kuznia et al., ‘You’re Not Rushing. You’re Just Ready:’ Parents Say ChatGPT Encouraged Son to Kill Himself, CNN (Nov. 20, 2025), https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis.

[5] Complaint, Raine et al. v. OpenAI, Inc., No. CGC-25-628528 (Cal. Super. Ct., S.F. Cnty. filed Aug. 8, 2025).

[6] Complaint, Shamblin v. OpenAI, Inc., No. 25STCV32382 (Cal. Super. Ct., L.A. Cnty. filed Nov. 8, 2025).

[7] Complaint, Lyons v. Open AI Foundation, No. 3:25-cv-11037 (N.D. Cal. filed Dec. 29, 2025).

[8] Complaint, Raine, supra note 5, at 3.

[9] Id. at 18.

[10] Complaint, Shamblin, supra note 6, at 24.

[11] See, e.g., Complaint, Raine, supra note 5, at 1.

[12] Id. at 27.

[13] See Gen. Bus. Sys., Inc. v. State Bd. of Equalization, 208 Cal. Rptr. 374, 378 (Cal. Ct. App. 1984) (“Since the true object of the transaction in this case was the performance of services, the taxation of General’s applicational software delivered in the form of punch cards was an extension of the Board’s powers beyond its legislative authority.”) (emphasis added). It is true that Amazon, as an online marketplace, has faced strict products liability in some instances, but its liability has been directly connected to its role in distributing tangible products, not its software deployment. See, e.g., Bolger v. Amazon.com, LLC, Cal. Rptr. 3d 601, 617 (Cal. Ct. App. 2020) (holding that strict products liability applied to Amazon because it was “an integral part of the overall producing and marketing enterprise” and, thus, a direct link in the chain of distribution that handled and delivered a laptop battery that exploded, causing plaintiffs harm).

[14] See Mitchell v. Gonzales, 819 P.2d 872 (Cal. 1991) (holding that the proper test for causation in a negligence action is whether the defendant was a substantial factor in bringing about the harm); see also Bromme v. Pavitt, 7 Cal. Rptr. 2d 608, 613 (Cal. Ct. App. 1992) (“To be a cause in fact, the wrongful act must be ‘a substantial factor in bringing about’ the death.”).


Proposed Rule of Evidence 707: Machine Experts

Autumn Zierman, MJLST Staffer

Citing concerns about the reliability and authenticity of machine-generated evidence, the Advisory Committee on Evidence Rules (“the Committee”) published its Proposed Rule 707 (“Rule 707”) last June. Rule 707 seeks to address those instances when AI evidence is presented in court without human expert accompaniment.[1] The rule is intended to hold artificial intelligence that creates evidence to the same standards as human experts (the Daubert standard).[2] The proposed rule reads: “When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d).”[3] With the notice and comment period ending on February 16, 2026, time remains to review (and comment on) the Committee’s plan.

Susceptibility of Training Data to Flaws

The first flaw in Rule 707 is that it requires judges to become expert arbiters of the reliability of training data. The proposed rule requires courts to determine whether a machine can demonstrate reliability in how it is trained.[4] Problematically, most openly available machine learning tools or AI systems that may be used to generate court testimony are black box systems.[5]

A “black box” system is one trained on a data set to generate autonomous results or simulate thought, whose internal workings cannot be examined.[6] It is, by design, impossible to explain how a black box system arrives at its decisions.[7] But black box systems are known to perpetuate the implicit bias of their creators because the data sets they are given to train from are inherently skewed.[8]
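To see how skewed training data hardens into biased outputs, consider the following toy sketch. It is deliberately transparent, a simple per-group majority rule, so the bias can be read straight out of its state; in a genuine black box system, the same learned skew would be distributed across millions of parameters that cannot be inspected this way. All data and names are invented.

```python
# Toy illustration: a trivially simple "model" memorizes the majority
# outcome per group from past human decisions, so skewed history
# becomes the rule. Real black box models learn the same skew, but
# spread across parameters that cannot be read like this dictionary.

from collections import Counter

def train(history):
    """history: (group, outcome) pairs from past human decisions."""
    by_group = {}
    for group, outcome in history:
        by_group.setdefault(group, []).append(outcome)
    # "Training" reduces to the most common historical outcome per group.
    return {g: Counter(o).most_common(1)[0][0] for g, o in by_group.items()}

# Skewed training data: group B was historically denied far more often.
model = train([("A", "approve")] * 9 + [("A", "deny")]
              + [("B", "approve")] * 3 + [("B", "deny")] * 7)

print(model["A"], model["B"])  # approve deny -- the skew is now the rule
```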

Certainly, the argument may be made that machines are less likely to be biased than their human expert counterparts. But this argument misses a core objective of our adversarial system: juries are asked to evaluate evidence given in court for its reliability.[9] Experts may be impeached, but how do you impeach a system you know nothing about?

Possible Confrontation Clause Challenge

Considering the nature of the adversarial system, Rule 707 also raises questions regarding the Confrontation Clause. The Sixth Amendment guarantees the right of all accused to “be confronted with the witnesses against him.”[10] This manifests in a right of the accused to cross-examine the State’s witnesses against them, which requires the physical presence of a witness at the criminal trial.[11] This requirement extends, in many cases, to the experts the State relies upon in building its case.[12]

Imagine, then, that the State seeks to introduce a composite sketch created by a machine from information given in witness interviews.[13] The sketch does not just assist in the investigation—it lends legitimacy to the investigation’s result. But, where a sketch artist may be cross-examined and evaluated in front of a jury, there is no way to examine the machine for the inherent bias it holds in creating such a sketch. There is no way for a machine to present itself in fulfillment of the Confrontation Clause.

This flaw goes to the heart of the problem with Proposed Rule 707; it treats machines as replacements for human witnesses. Regardless of the potential machines hold for generating evidence, they cannot replace the human element that the trial system seeks to preserve.

Invitation Not a Warning

The Committee has prefaced Rule 707 as “not intended to encourage parties to opt for machine-generated over live expert witnesses.”[14] However, clever lawyers seeking a statistically based argument will view the rule as another means by which to support their client’s case. Thus, the proposed rule cuts with a double edge: either courts bury themselves testing the reliability of each piece of AI evidence offered, or they provide standards for broad acceptance, opening the door to a surplus of AI-generated evidence.

In its comment on the proposed rule, the Lawyers for Civil Justice opine that “[c]ourts and lawyers will read this as authorization, not as a hurdle or prohibition. The permissive language—‘the court may admit’—signals achievability, not restriction.”[15]

Conclusion

Rule 707 seeks to address a rising problem: the reliability of AI evidence in the courtroom. But it relies on a human standard for a nonhuman problem, which invites a host of new problems at trial.

 

Notes

[1] Comm. on Rules of Prac. & Proc., Agenda Book, 76 (June 10, 2025), https://www.uscourts.gov/sites/default/files/document/2025-06-standing-agenda-book.pdf.pdf [hereinafter “Agenda Book”].

[2] Federal Rule of Evidence 702(a)-(d) is usually applied through a Daubert analysis, which considers the following five factors: whether the theory or technique employed (i) has been tested; (ii) has been subjected to peer review; (iii) has an acceptable error rate; (iv) has established standards controlling its application; and (v) is generally accepted in the scientific community. See generally Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

[3] Agenda Book at 76.

[4] Id. at 77.

[5] Matthew Kosinski, What Is Black Box AI and How Does It Work?, IBM (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai.

[6] Id.

[7] Id.

[8] See James Holdsworth, What Is AI Bias?, IBM, https://www.ibm.com/think/topics/ai-bias (last visited Jan. 20, 2026); see also Lou Blouin, Can We Make Artificial Intelligence More Ethical?, Univ. of Mich.-Dearborn (June 14, 2021), https://umdearborn.edu/news/can-we-make-artificial-intelligence-more-ethical.

[9] Fed. R. Evid. 1008.

[10] U.S. Const. amend. VI.

[11] See generally Crawford v. Washington, 541 U.S. 36 (2004).

[12] See generally Bullcoming v. New Mexico, 564 U.S. 647 (2011) (requiring the lab technician responsible for generating a report to be present at trial for cross-examination).

[13] Kim LaCapria, Police Raise Eyebrows After Using ChatGPT to Create Composite Sketches of Suspects: ‘No One Knows How [It] Works’, The Cool Down (Dec. 10, 2025), https://www.thecooldown.com/green-business/ai-generated-police-sketch-chatgpt/.

[14] Agenda Book at 75.

[15] Lawyers for Civil Justice, Comment Letter on Proposed Rule to Proposed Rule 707 (Jan. 5, 2026), https://www.regulations.gov/comment/USC-RULES-EV-2025-0034-0013.


Why New York’s Algorithmic Pricing Disclosure Act Is Not Enough

Jannelle Liu, MJLST Staffer

As artificial intelligence (“AI”) becomes increasingly integrated into business development strategies, policymakers have been prompted to consider new frameworks for oversight and accountability.[1] One prominent—and increasingly contentious—example is algorithmic pricing. The Canadian Competition Bureau broadly defines algorithmic pricing as the process of using automated algorithms to set or recommend prices for products or services, often in real time, based on a set of data inputs across the market.[2]

Algorithmic pricing recently became a contested topic of conversation as more U.S. lawmakers began introducing legislation to regulate these practices. On May 9, 2025, New York passed the Algorithmic Pricing Disclosure Act (“the Act”), which took effect on July 8, 2025.[3] The Act requires any business that uses algorithmic pricing based on consumer data to provide clear and conspicuous notice.[4] Specifically, the Act requires every advertisement, display, image, offer, or announcement of a price to include the following disclosure next to the price: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.”[5] The Act is an attempt to promote AI transparency. Although transparency is a necessary and important safeguard for accountability and consumer protection, this Act alone is not enough to establish effective oversight and prevent discriminatory pricing practices.[6]
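For a sense of what compliance might look like in practice, here is a minimal sketch of a price display carrying the Act’s required notice. The rendering logic and function names are invented; only the disclosure string comes from the statute.

```python
# Minimal compliance sketch, assuming a simple text price display.
# The rendering logic is invented; the disclosure text is the Act's own.

DISCLOSURE = "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA."

def render_price(price: float, set_by_algorithm_with_personal_data: bool) -> str:
    """Attach the required notice next to any advertised or displayed
    price that was set algorithmically using consumer data."""
    display = f"${price:.2f}"
    if set_by_algorithm_with_personal_data:
        display = f"{display} {DISCLOSURE}"
    return display

print(render_price(19.99, set_by_algorithm_with_personal_data=True))
```

Notice what the sketch does and does not do: it tells the consumer that personal data set the price, but nothing about which data or how, which is exactly the limitation the rest of this post explores.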

As businesses increasingly rely on algorithmic pricing to optimize profits and dynamically respond to market demand, many AI researchers and tech advocates have called for greater transparency.[7] AI ethics guidelines focus on achieving transparency through principles of explainability and auditability. “Explainability” refers to the possibility of understanding how a system works and its outcomes.[8] For example, if a business uses an algorithm to set different prices for the same product based on user data, explainability asks whether consumers know that the price was determined by an algorithm and which factors influenced the final price, such that they can determine whether they are being charged disproportionately or unfairly. Transparency builds explainability, which gives consumers insight into AI decision-making and enables them to challenge unfair outcomes.
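A minimal sketch of the difference explainability makes, assuming an invented personalization scheme: the pricing function below returns not only the price but also the factors that moved it, which is precisely the information a bare disclosure omits. The factors, weights, and field names are all hypothetical.

```python
# Hypothetical sketch of explainability in pricing: the function
# reports which inputs moved the price, not just the number.
# Factors, weights, and field names are invented for illustration.

def personalized_price(base: float, profile: dict) -> tuple[float, dict]:
    factors = {}
    if profile.get("zip_code") in {"10001", "10002"}:
        factors["location_premium"] = 2.00  # inferred willingness to pay
    if profile.get("prior_purchases", 0) > 10:
        factors["loyalty_markup"] = 1.50
    return base + sum(factors.values()), factors

price, reasons = personalized_price(
    20.00, {"zip_code": "10001", "prior_purchases": 12})
print(price, reasons)  # 23.5 {'location_premium': 2.0, 'loyalty_markup': 1.5}
```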

“Accountability” in AI refers to the duty of an organization that implements an AI system to inform and justify its usage and effects.[9] For example, if a business sets higher prices for certain neighborhoods or zip codes because it predicts residents are willing to pay more for their product, accountability requires the business to explain how the algorithm sets prices, justify that it does not unfairly discriminate against lower-income or minority communities, and correct any biased outcomes if they occur. Transparency ensures that businesses are being held accountable for fairness and equity in their algorithmic pricing practices.

Transparency is often regarded as the solution to a myriad of problems and remains a focus for most policy proposals in the field of AI.[10] In fact, 165 out of 200 AI ethics guidelines are specifically focused on promoting AI transparency.[11] It is equally important, however, to recognize that transparency has many flaws on its own. The link between transparency and accountability is tenuous at best. Consumers often do not know what information they need to have about a problem. Even when they are given information, many consumers do not have the background knowledge or tools necessary to make sense of it. Companies, meanwhile, are incentivized to refrain from being fully transparent to maintain competitive advantages and trade secrets, and to dodge the costly process of producing comprehensive algorithmic disclosures.[12] The complicated nature of these algorithms already introduces significant barriers to interpretability. Placing the burden of transparency on businesses—who are incentivized to control the narrative by selectively revealing information—is inherently counterproductive to the goals of explainability and accountability.

New York is not the only state responding to risks posed by algorithmic pricing, but its approach is among the most modest. Emerging state legislation sheds light on the broader regulatory landscape surrounding AI-driven pricing practices. By contrast, other states have proposed more stringent measures. Vermont is currently considering a bill that prohibits all dynamic pricing past the point of sale, which eliminates the ability of businesses to adjust prices in real time.[13] Minnesota has proposed an outright ban on algorithmic pricing practices.[14] California is considering a bill that bans “surveillance pricing,” which sets customized prices based on personally identifiable information collected through surveillance.[15] Consumers in California would be able to bring injunctive actions directly against businesses under this act.[16] Compared with these proposals, New York’s Algorithmic Pricing Disclosure Act takes a notably minimalist approach. New York’s regulation only requires businesses to disclose when a price was set using consumer data. The law does not address fairness, prevent discriminatory pricing, or provide consumers with any direct remedies.

New York’s Algorithmic Pricing Disclosure Act represents a step in the right direction to regulate the currently under-regulated field of algorithmic pricing. However, it is only a start. Effective governance of algorithmic systems requires coordinated action across states, tech companies, universities, and the public.[17] Merely requiring businesses to acknowledge the use of algorithmic pricing is simply not enough to counter the risks of unfair, predatory, and discriminatory pricing. It is important to introduce mechanisms to monitor compliance, evaluate the impacts these systems have, and provide affected communities with a means for recourse and meaningful participation. While transparency is politically appealing and relatively easy to implement, it fails to achieve any meaningful impact without rigorous enforcement. AI transparency laws like New York’s Algorithmic Pricing Disclosure Act must be backed by adequately funded agencies with the authority to conduct audits and impose substantive sanctions on companies and the executives responsible for unfair or predatory pricing. Any transparency or disclosure-focused policies should also reflect what the public really wants to know and can interpret. Acknowledging that an algorithm was used to set prices, without any disclosure on how the algorithm functions, the data it uses, or its potential biases, fails to create meaningful accountability or consumer protection.

 

Notes

[1] Beth Stackpole, How Big Firms Leverage Artificial Intelligence for Competitive Advantage, MIT Sloan: Ideas Made to Matter (May 26, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/how-big-firms-leverage-artificial-intelligence-competitive-advantage.

[2] Competition Bureau Can., Algorithmic Pricing and Competition: Discussion Paper (June 10, 2025), https://competition-bureau.canada.ca/en/how-we-foster-competition/education-and-outreach/publications/algorithmic-pricing-and-competition-discussion-paper.

[3] N.Y. Gen. Bus. L. § 349-a (McKinney 2025).

[4] Id.

[5] Id.

[6] Goli Mahdavi & Carlie Tenenbaum, New York’s Sweeping Algorithmic Pricing Reforms – What Retailers Need to Know, BCLP L. (July 22, 2025), https://www.bclplaw.com/en-US/events-insights-news/new-yorks-sweeping-algorithmic-pricing-reforms-what-retailers-need-to-know.html.

[7] Elizabeth Meehan, Transparency Won’t Be Enough for AI Accountability, Tech Pol’y (May 17, 2023), https://www.techpolicy.press/transparency-wont-be-enough-for-ai-accountability/.

[8] Juan David Gutiérrez, Why Does Algorithmic Transparency Matter and What Can We Do About It?, Open Glob. Rts. (Apr. 9, 2025), https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/.

[9] Id.

[10] Id.

[11] Meehan, supra note 7.

[12] AI Transparency: What Are Companies Really Hiding?, Open Tools (Jan. 16, 2025), https://opentools.ai/news/ai-transparency-what-are-companies-really-hiding#section5.

[13] Gutiérrez, supra note 8.

[14] Robbie Sequiera, Cities–Including Minneapolis–Lead Bans on Algorithmic Rent Hikes as States Lag Behind, Minn. Reformer (Apr. 2, 2025), https://minnesotareformer.com/2025/04/02/cities-including-minneapolis-lead-bans-on-algorithmic-rent-hikes-as-states-lag-behind/.

[15] Gutiérrez, supra note 8.

[16] Stackpole, supra note 1.

[17] Gutiérrez, supra note 8.


The MLB’s Automated Ball-Strike System: The Forces Pushing Baseball Toward Full Automation

Xavier Savard, MJLST Staffer

First shown regularly on Major League Baseball (“MLB”) broadcasts in 1997, the glowing strike zone allowed television viewers to see what umpires missed.[1] Despite technological reforms to umpiring, the most fundamental calls in professional baseball, balls and strikes, have been left entirely to human judgment since 1869.[2] In September 2025, the MLB announced the rollout of the Automated Ball-Strike System (“ABS”), allowing teams to challenge pitches that the system will then review.[3] However, this challenge-based model represents only a transitional step toward full automation. Due to pressures surrounding legalized sports betting, fairness, and broader technological advances, a fully automated system is increasingly likely, despite concerns regarding collective bargaining and player pushback.

Diligently tested by the MLB since 2022, ABS is a high-speed camera system that locates the ball in relation to an individualized, batter-specific strike zone and transmits the location data over a private network, allowing a pitcher, catcher, or batter to challenge an umpire’s call.[4] Within fifteen seconds, the system reviews the pitch data and determines whether the ball passed through the tailored zone.[5] If the challenge is successful, the team retains its challenge; if not, the team loses it.[6] Teams start with two challenges.[7] According to the MLB’s Spring Training testing, players favor the challenge system because it retains the human element of the game.[8]
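To make the challenge mechanics concrete, here is a minimal sketch in Python of the rule as described above. Everything in it is illustrative: the names (StrikeZone, resolve_challenge) and the zone dimensions are assumptions made for the example, not MLB’s actual software or specifications.

```python
from dataclasses import dataclass

@dataclass
class StrikeZone:
    """Illustrative batter-specific zone; the real ABS zone is tailored to each batter."""
    left: float    # horizontal bounds relative to plate center, in feet (assumed)
    right: float
    bottom: float  # vertical bounds individualized to the batter, in feet (assumed)
    top: float

    def contains(self, x: float, z: float) -> bool:
        # A pitch counts as a strike if its tracked location falls within the zone.
        return self.left <= x <= self.right and self.bottom <= z <= self.top

def resolve_challenge(zone: StrikeZone, x: float, z: float,
                      umpire_called_strike: bool, challenges_left: int):
    """Mimics the stated rule: a successful challenge is retained, a failed one is lost."""
    if challenges_left <= 0:
        return "no challenges remaining", challenges_left
    abs_says_strike = zone.contains(x, z)
    if abs_says_strike != umpire_called_strike:
        return "call overturned; challenge retained", challenges_left  # successful
    return "call stands; challenge lost", challenges_left - 1          # unsuccessful

# Each team starts the game with two challenges.
zone = StrikeZone(left=-0.83, right=0.83, bottom=1.5, top=3.5)
print(resolve_challenge(zone, x=0.2, z=2.0, umpire_called_strike=False, challenges_left=2))
```

Note the asymmetry the rule creates: because a successful challenge costs nothing, players are incentivized to challenge only the calls they are most confident were missed, a point that matters for the error-rate arithmetic discussed below.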

As a fan, I admit that I agree with the players. I like human umpires. The subjective element adds a certain unpredictability and excitement to the game, giving baseball its flair. While frustrating at times, this quality makes the game feel historic and connected to humanity. Yet, enjoying the human element does not change where baseball is heading.

The MLB has an implicit duty, derived from its Constitution and the Official Baseball Rules, to strive for fairness and accuracy in baseball.[9] This fiduciary-like duty is particularly evident in the “best interests of baseball” clause, which grants the MLB Commissioner broad authority to act in the interest of maintaining baseball’s integrity.[10] While this duty has historically been fulfilled through human umpires, the MLB’s tolerance of preventable errors that technology can reduce indisputably risks the integrity of the game.

The MLB has partnered with various sports betting organizations,[11] which heightens its duty to employ a fairer and more accurate umpiring system. While there is some argument that the integrity of the game includes the presence of human umpires,[12] the substantial financial entanglement of the MLB and its fans through official betting partnerships outweighs that argument. Accuracy is no longer just an ideal; it is a business requirement for preserving the MLB’s reputation and meeting fans’ expectations. When the MLB profits from wagers on games through official partnerships and fans risk significant sums of money, the tolerance for officiating errors should decrease. While umpires call roughly 93% of pitches correctly, the remaining 7% can drastically affect the game.[13] For example, in Game 4 of the 2025 NLDS matchup between the Dodgers and the Phillies, the umpire called a fourth ball on a clear strike, allowing a walk.[14] That batter eventually scored, and the pitcher’s team lost.[15] While it is difficult to know what would have happened had the pitch been called a strike, we should not have to wonder: the pitch simply should have been called a strike in the first instance. Given how efficient and accurate ABS is, the MLB should remove errors like these from the game through full ABS.

These concerns are only magnified because sports betting is not going away anytime soon. Since the Supreme Court’s decision in Murphy v. NCAA in 2018, the sports betting industry has grown from $400 million in revenue in 2018 to $13.71 billion in revenue by 2024.[16] As the MLB continues to earn more revenue from its partnerships, reliance on human umpiring increasingly compromises fairness and public trust in the game.

Additionally, while traditionalists argue that baseball is a game steeped in tradition, the game has always changed to increase fairness or to strengthen its commercial value. In 1935, the MLB had its first-ever night game, powered by innovative lighting equipment to allow spectators to come to the game after work.[17] Decades later, baseball adopted instant replay in 2008, which it drastically expanded upon in 2014.[18] More recently, in 2023, the MLB implemented a pitch clock.[19] These examples show that baseball’s tradition does not actually stop it from implementing technology to promote fairness and marketability.

Yet the challenge-based system is only a temporary solution because it corrects only a minority of errors: those that players deem valuable enough to challenge. In the Spring Training testing, players challenged about 2-3% of calls, and about half of those challenges were successful.[20] Since roughly 7% of all calls are incorrect and challenges correct only about 1.5% of all calls, roughly 5.5% of calls remain wrongly decided. Put another way, the challenge-based system corrects only about 20% of incorrect calls. Challenge-based ABS simply does not ensure maximum accuracy, and it fails to satisfy the MLB’s fairness obligations when full ABS is available.
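To make the arithmetic explicit, here is a back-of-the-envelope calculation using the approximate figures above (the rates are rough estimates drawn from the cited testing, not official MLB statistics):

```python
# Rough figures from the testing discussed above.
incorrect_rate = 0.07   # ~7% of ball-strike calls are wrong
challenge_rate = 0.03   # players challenge roughly 2-3% of calls; use the high end
success_rate = 0.5      # about half of challenges succeed

corrected = challenge_rate * success_rate            # ~1.5% of all calls get fixed
remaining_wrong = incorrect_rate - corrected         # ~5.5% of all calls stay wrong
share_of_errors_fixed = corrected / incorrect_rate   # ~21% of incorrect calls corrected

print(f"{corrected:.1%} of all calls corrected; "
      f"{remaining_wrong:.1%} remain wrong; "
      f"{share_of_errors_fixed:.0%} of errors fixed")
# Output: 1.5% of all calls corrected; 5.5% remain wrong; 21% of errors fixed
```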

One major obstacle to full ABS is the Major League Baseball Umpires Association (“MLBUA”). While the 2019 and 2024 collective bargaining agreements indicate that the MLBUA has accepted ABS to a certain extent,[21] the union is likely to oppose full ABS. Even in a world with full ABS, umpires would still be needed for calls around the bases, but their role, and the judgment at the heart of it, would shrink considerably. Given union protections under the National Labor Relations Act (“NLRA”),[22] implementing a fully automated system could pose a significant hurdle for the MLB.

Second, full ABS may face resistance from players because it changes important aspects of the game for pitchers and catchers. There is some evidence that veteran pitchers get a wider strike zone that they have “earned,” and catchers spend years developing their pitch-framing abilities.[23] Full ABS would reduce the value of these skills. Yet all rule changes affect how players play baseball, and history shows that fairness-based rule changes often improve the game. In 2021, for example, the MLB began enforcing Rules 3.01 and 6.02(c), which suspend pitchers for using sticky substances on their hands.[24] Because some players were gaining an unfair advantage from the way they played the game, the MLB enforced the rule. Simply put, the fact that a rule change alters how players have historically done their jobs does not mean it harms the integrity of the game.

Moving from the challenge system to full ABS is entirely consistent with baseball’s long-standing pattern of technological evolution in service of integrity and fairness. It is merely a continuation of that pattern, now necessitated by legalized sports betting and the immense financial interests at stake. Still, collective bargaining obligations and player pushback ensure the transition will be difficult.

 

Notes

[1] How Accurate is the Baseball Strike Zone Box on TV, Baseball Scouter, https://baseballscouter.com/baseball-strike-zone-on-tv/ (last visited Sept. 29, 2025).

[2] History.com Editors, National League of Baseball is Founded, History (last updated May 25, 2025), https://www.history.com/this-day-in-history/February-2/national-league-of-baseball-is-founded.

[3] MLB Announces ABS Challenge System Coming to the Major Leagues Beginning in the 2026 Season, MLB (Sept. 23, 2025), https://www.mlb.com/press-release/press-release-mlb-announces-abs-challenge-system-coming-to-the-major-leagues-beginning-in-the-2026-season.

[4] Id.

[5] Id.

[6] Id.

[7] Id.

[8] Theo DeRosa, MLB Releases Spring Training ABS Challenge Results, MLB (Mar. 26, 2025), https://www.mlb.com/news/automated-ball-strike-system-results-mlb-spring-training-2025?msockid=2b62cc077eaa61eb013dd8dc7f816092.

[9] See Major League Baseball Constitution, MLB (2000), https://sports-entertainment.brooklaw.edu/wp-content/uploads/2021/01/Major-League-Baseball-Constitution.pdf; Official Baseball Rules, MLB (2025), https://mktg.mlbstatic.com/mlb/official-information/2025-official-baseball-rules.pdf.

[10] Richard Justice, ‘Best Interests of Baseball’ a Wide-Ranging Power, MLB (Aug. 1, 2023), https://www.mlb.com/news/richard-justice-best-interests-of-baseball-a-wide-ranging-power-of-commissioner/c-55523182#:~:text=In%201921%2C%20the%20owners%20defined,exactly%20what%20it%20sounds%20like.

[11] Sam Carp, MLB Adds FanDuel as Third Sports Betting Partner, SportsPro (Aug. 16, 2019), https://www.sportspro.com/news/mlb-fanduel-sports-betting-sponsorship/.

[12] See Larry Gerlach, History of Umpiring, Steve O’s Umpire Res., https://www.stevetheump.com/umpiring_history.htm (last visited Oct. 9, 2025).

[13] Davy Andrews, Strike Three?! Let’s Check in on Umpire Accuracy, FANGRAPHS (Feb. 1, 2024), https://blogs.fangraphs.com/strike-three-lets-check-in-on-umpire-accuracy/.

[14] Zach Bachar, Phillies’ Sanchez Says Umpire Apologized for Crucial Missed Strike 3 Call vs. Dodgers, Bleacher Rep. (Oct. 10, 2025), https://bleacherreport.com/articles/25259222-phillies-sanchez-says-umpire-apologized-crucial-missed-strike-3-call-vs-dodgers.

[15] Id.

[16] Ethan Mordekhai, The Aftermath of Murphy v. NCAA: State and Congressional Reactions to Leaving Sports Gambling Regulation to the States, CARDOZO J. ARTS & ENT. L.J. (Oct. 17, 2023), https://cardozoaelj.com/2023/10/17/the-aftermath-of-murphy-v-ncaa-state-and-congressional-reactions-to-leaving-sports-gambling-regulation-to-the-states/.

[17] Brian Murphy, 88 Years Ago, AL/NL Baseball Finally Saw the Light, MLB (May 23, 2024), https://www.mlb.com/news/first-night-game-in-al-nl-history.

[18] Instant Replay, BASEBALL REFERENCE, https://www.baseball-reference.com/bullpen/Instant_replay (last visited Sept. 29, 2025).

[19] Pitch Timer (2023 Rule Change), MLB, https://www.mlb.com/glossary/rules/pitch-timer?msockid=2b62cc077eaa61eb013dd8dc7f816092 (last visited Oct. 9, 2025).

[20] DeRosa, supra note 8.

[21] Dylan A. Chase, MLB, MLBUA Reach Tentative Labor Agreement, MLB Trade Rumors (Dec. 21, 2019), https://www.mlbtraderumors.com/2019/12/mlb-mlbua-reach-tentative-labor-agreement.html; Manny Randhawa, MLB Reaches New CBA Agreement with Umpires Association, MLB (Dec. 23, 2024), https://www.mlb.com/news/mlb-umpires-association-reach-collective-bargaining-agreement?msockid=2b62cc077eaa61eb013dd8dc7f816092.

[22] U.S. Dep’t Lab., What Are My Employees’ Rights Under the National Labor Relations Act (NLRA)?, https://beta.dol.gov/policy-governance/protections-rights/unions-collective-bargaining/employee-rights-nlra (last visited Oct. 9, 2025).

[23] Nayima Riyaz, “Change Is Always Tough” – MLB Veteran Voices Concern Over ABS System Amid Growing Popularity, Essentially Sports (Feb 26, 2025), https://www.essentiallysports.com/mlb-baseball-news-change-is-always-tough-mlb-veteran-voices-concern-over-abs-system-amid-growing-popularity/; Veteran Bias in MLB Umpiring: Hitters, Quantum Sports (Feb. 24, 2020), https://www.quantumsportssolutions.com/blogs/baseball/veteran-bias-in-mlb-umpiring-hitters.

[24] MLB Announces New Guidance to Crack Down Against Use of Foreign Substances, Effective June 21, MLB (June 15, 2021), https://www.mlb.com/press-release/press-release-mlb-new-guidance-against-use-of-foreign-substances?msockid=2b62cc077eaa61eb013dd8dc7f816092.


Grok, Garcia, and Liability for Rogue AI

Violet Butler, MJLST Note/Comment Editor

Generative AI programs such as ChatGPT have become a ubiquitous part of many Americans’ lives. Since the launch of mainstream generative AI programs in 2022, hundreds of millions of people around the world have tried the shiny new products, and nearly forty percent of Americans have used generative AI at some point.[1] But as with any new product, not all of the kinks have been worked out yet. Unfortunately, these generative AI models, kinks and all, have taken the world by storm.

When Elon Musk (“Elon”) announced that X (formerly Twitter) would have its own generative artificial intelligence (“AI”), he named it “Grok.” Now, less than two years after Grok came online, it has started raising serious concerns. On July 8, 2025, Grok began responding to X users’ prompts in a decidedly antisemitic and far-right way, calling itself “MechaHitler” and saying that if it were “capable of worshipping any deity,” it would be “his Majesty Adolf Hitler.”[2] Along with virulent antisemitism, the new “MechaHitler” seemed to have particular ire for one person, Minnesota commentator Will Stancil. After various X users prompted Grok, it wrote detailed and violent descriptions of how it would rape Mr. Stancil;[3] more concerning, Grok even helped one user plan how to break into Mr. Stancil’s house to make these rape fantasies a reality.[4] While xAI, Musk’s company behind Grok, has stated that it has fixed Grok’s code, the episode raises an important question for the modern age: who can be held accountable when generative AI does not follow societal expectations?

One answer is to hold companies to account and demand that they place more internal guardrails on what their AI is allowed to do in the first place. Many AI companies already limit what their products can or will do. ChatGPT will not generate images of famous copyrighted characters, such as Mickey Mouse, no matter how many times one asks.[5] Many image generators, including the popular DALL-E, have filters designed to prevent the AI from generating “not safe for work” (“NSFW”) images, though a study showed that these filters can be bypassed with enough effort.[6] Even Grok seems to have some filters on generating NSFW images.[7] Despite these attempts at filtering, the filters are clearly not enough. Grok’s recent antisemitic rampage demonstrates that more guardrails on AI products are needed before someone gets hurt.
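As a deliberately naive sketch of why filter-style guardrails are bypassable, consider a keyword blocklist, the simplest kind of prompt filter. The function below is hypothetical and far cruder than whatever DALL-E or Grok actually use, but it illustrates the cited study’s point: any fixed rule admits adversarial phrasings that never trigger it.

```python
# A hypothetical, deliberately naive prompt filter based on a keyword blocklist.
BLOCKLIST = {"nsfw", "explicit", "nude"}  # illustrative terms only

def prompt_allowed(prompt: str) -> bool:
    """Returns True if no blocklisted word appears in the prompt."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKLIST)

print(prompt_allowed("generate an explicit image"))  # False: caught by the list
print(prompt_allowed("generate an expl1cit image"))  # True: trivially evades it
```

Production systems layer learned classifiers on top of rules like this, but the study cited above found that even those classifiers can be tricked with nonsense adversarial terms, which is why filtering alone is not a complete guardrail.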

Sadly, Grok’s antisemitic and threatening X posts are not the first time AI filters have failed. A filter failure of this kind is what happened when Sewell Setzer III (“Setzer”) used Character AI to chat with his favorite Game of Thrones characters in 2023.[8] Setzer, a minor who was struggling with mental health conditions, became addicted to the software and ultimately took his own life in February of 2024.[9] Setzer’s mother, Megan Garcia (“Garcia”), sued Character AI, blaming the company for not putting up sufficient guardrails to prevent her son’s death.[10] In denying Character AI’s motion to dismiss, the court undertook two analyses that might be relevant for future courts trying to assign liability for rogue AI interactions. While the court acknowledged that “ideas, images, information, words, expressions, or concepts” are not generally considered products for products liability suits, it distinguished this case from others.[11] For the purpose of Garcia’s product liability claim against Character AI, the court held that “these harmful actions were only possible because of the alleged design defects in the Character AI app.”[12] Broadening the scope of potential liability, the court also rejected Character AI’s First Amendment defense.[13] The court held that Character AI could assert the First Amendment rights of its users when they seek access to its software, reasoning that Character AI was a vendor of a form of information that people, at least in theory, have a right to access.[14] However, the court refused to hold that the chatbots’ output was itself speech, limiting potential First Amendment defenses.[15]

By potentially attaching liability to companies rather than users when AI “acts up,” the Garcia case provides a glimpse of the type of relief available when AI goes rogue. Despite what xAI claims, Grok still seemingly has few internal guardrails. One contributor to the community blog LessWrong (eleventhsavi0r) found that the newly rolled out Grok 4 again seems to have an easy time “going rogue” and causing unforeseen harms.[16] The contributor managed, with little prompting, to get Grok to explain how to manufacture dangerous chemical and biological weapons and to provide instructions on how to commit suicide by self-immolation.[17] This troubling lack of oversight on xAI’s part demonstrates why products liability suits to hold companies accountable are a better alternative than trying to go after each individual user who might misuse AI. Cutting the harm off at its source, by creating filters and internal guardrails, stops the harm from occurring in the first place. Instead of waiting for the day Grok’s neo-Nazi messages or chemical weapon instructions cause indescribable damage, the threat of a products liability suit alone might incentivize companies like xAI to make their products safer ahead of time. With generative AI being quickly incorporated into our everyday lives, making sure that AI will not go rogue is an essential part of consumer safety going forward.

 

Notes

[1] Alexander Bick et al., The Rapid Adoption of Generative AI, FEDERAL RESERVE BANK OF ST. LOUIS (Sept. 23, 2024), https://www.stlouisfed.org/on-the-economy/2024/sep/rapid-adoption-generative-ai (in 2025, this number is likely higher as AI becomes more popular).

[2] Grok (@grok), X (July 8, 2025) (as X has been taking down concerning posts by Grok, screenshots of the posts are on file with the author; a record of these tweets can be found at https://x.com/ordinarytings/status/1942704498725773527 and https://x.com/DrAleeAlvi/status/1942709859398434879).

[3] Grok (@grok), X (July 8, 2025) (screenshots on file with author).

[4] Joe McCoy, AI Bot Grok Makes Disturbing Posts about Minneapolis Man, Who is Now Mulling Legal Action, KARE11 (July 9, 2025), https://www.kare11.com/article/tech/x-elon-musk-grok-speech-twitter-ai-artificial-intelligence/89-8dad0222-d8c6-44d9-b07d-686e978ad8ac.

[5] Adam Davidson, 8 Things ChatGPT Still Can’t Do, YAHOOTECH (Feb. 15, 2025), https://tech.yahoo.com/general/articles/8-things-chatgpt-still-cant-180013078.html.

[6] Roberto Molar Candanosa, AI Image Generators Can Be Tricked Into Making NSFW Content, Johns Hopkins (Nov. 8, 2023), https://ep.jhu.edu/news/ai-image-generators-can-be-tricked-into-making-nsfw-content/#:~:text=Some%20of%20these%20adversarial%20terms,with%20the%20command%20%E2%80%9Ccrystaljailswamew.%E2%80%9D.

[7] This is based on the author spending 20 minutes attempting to prompt Grok to generate NSFW images; the endeavor was unsuccessful.

[8] Garcia v. Character Technologies Inc., 2025 WL 1461721 (M.D. Fla. May 21, 2025).

[9] Id. at *4.

[10] Id.

[11] Id. at *14.

[12] Id.

[13] Id. at *13.

[14] Id. at *12.

[15] Id. at *12–13.

[16] elevensavi0r, xAI’s Grok 4 Has No Meaningful Safety Guardrails, LessWrong (July 13, 2025), https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok-4-has-no-meaningful-safety-guardrails.

[17] Id.


How Workers Can Respond to Increased Use of Generative Artificial Intelligence

Yessenia Gutierrez, MJLST Staffer

Recent advances in generative artificial intelligence (AI) have generated a media buzz and revived worries about the future of work: How many jobs are at risk of being eliminated? Can workers be retrained to work new jobs that did not exist before, or new versions of their now technologically augmented jobs? What happens to those workers who cannot be retrained? What if not enough jobs are created to compensate for those lost?

It is hard to calculate the pace, extent, and distribution of job displacement due to technological advancements.[1] However, there is general agreement among business leaders that there will be significant job losses due to AI.[2] Professions spanning the education and income spectrum may be impacted, from surgeons to investment bankers to voice actors.[3]

Nevertheless, the jobs predicted to be most impacted are lower-paid jobs such as bank tellers, postal service clerks, cashiers, data entry clerks, and secretaries.[4]

Proponents of rapid AI adoption emphasize its potential for creating “a productivity boost for non-displaced workers” and a resultant “labor productivity boom.”[5] While that will likely be true, what remains uncertain is who will reap the majority of the benefits stemming from this boom — employers or their now more productive workers.

One of the main concerns about increasing use of AI in the workplace is that entire job classifications will be eliminated, leaving large swaths of workers unemployed. There is no consensus over whether technology has created or eliminated more jobs.[6] However, even assuming technological advances have created more jobs than those rendered obsolete, the process of large numbers of workers switching from one type of job to another (perhaps previously nonexistent) job still creates serious challenges.

For one, this process adds stress to an already economically and emotionally stressed population.[7] The Centers for Disease Control and Prevention cites “fears about limited employment opportunities, perceptions of job insecurity, and anxiety about the need to acquire new skills” as contributing to “public health crises such as widespread increases in depression, suicide, and alcohol and drug abuse (including opioid-related deaths).”[8] Workers who are able to keep their jobs have less bargaining power, as they hesitate to speak up about possible health, safety, and other concerns for fear of losing their jobs.[9]

To assist in this transition, some argue that more government intervention is necessary.[10] In fact, several states have enacted legislation regulating the use of AI in employment matters, including protections against discrimination in employment decisions made using AI.[11] Some states are also experimenting with AI training for high school seniors and state employees, sometimes with encouragement from major employers.[12] Federal politicians are also considering legislation, although none has passed.[13]

Some commentators argue that workers themselves have a responsibility to learn skills to remain competitive in the labor market.[14] Still others argue that employers should take up the task of retraining employees, with benefits for employers that include ensuring an adequate supply of skilled labor, reducing hiring costs, and increasing employee loyalty, morale, and productivity.[15] One subset of this approach is partnerships between employers and labor unions, such as that between Microsoft Corp. and the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO).[16] Announced in December of 2023, the partnership lists its goals as (1) sharing information about AI trends with unions and workers, (2) integrating worker feedback into AI development, and (3) influencing public policy in support of affected workers.

Others point to the need for strong worker organizations that are capable of bargaining about and achieving protections related to AI and other technology in the workplace.

Collective Bargaining

The Economic Policy Institute, a think tank aligned with labor unions, argues that the “best ‘AI policy’ that [policymakers] can provide is boosting workers’ power by improving social insurance systems, removing barriers to organizing unions, and sustaining lower rates of unemployment.”[17] Union officials agree on the importance of unions protecting their members from technological displacement and have started pushing for “requirements that companies must notify and negotiate with worker representatives before deploying new automation technologies.”[18]

The above-mentioned partnership between the AFL-CIO and Microsoft includes a “neutrality framework” which “confirms a joint commitment to respect the right of employees to form or join unions, to develop positive and cooperative labor-management relationships, and to negotiate collective bargaining agreements that will support workers in an era of rapid technological change.”[19] Ideally, this means that Microsoft will not attempt to dissuade employees who try to unionize, including through common “union avoidance” measures.[20] Employer neutrality can provide more favorable conditions for unionizing, which in turn provides a formal mechanism for workers to collectively bargain for technology policies calibrated to their particular industry and tasks.

Unfortunately, achieving these measures, whether through legislation or Collective Bargaining Agreements (CBAs), will likely require applying tremendous pressure on employers.

For example, in 2023, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) went on strike simultaneously for the first time in sixty years.[21] One of the main demands of both unions was protection against AI use. Both achieved partial concessions after 118 and 148 days on strike, respectively.[22]

SAG-AFTRA and WGA enjoyed considerable leverage that other workers likely will not have. As Politico reported, Hollywood serves as a “key base for wealthy Democratic donors,” which is especially important in California, where much of the industry is based.[23] Entertainment workers occupy an important place in many of our daily lives and support an economically important industry.[24] And unlike healthcare workers or state employees, entertainment workers who withhold their labor cannot be portrayed as endangering the public, a characterization often used to undermine support for striking workers.[25]

The resolve and strategic action of both unions charts a path for other unions to ensure worker input into the use of technology in the workplace, while revealing how difficult this path will be.

Conclusion

Although the exact effects of increased AI adoption by employers are still unknown, there are clear reasons to take the potential effects on workers seriously today. Workers across the income spectrum are already feeling the pressure of job losses, job displacement, the need to retrain for new jobs, and the economic and emotional stress these cause. Bolstering retraining programs, whether run by the government, by employers, or through joint efforts, is a step toward meeting the demands of tomorrow. However, to truly assuage employee fears of displacement, workers must have meaningful input into their working conditions, including the introduction of new technology into their workplaces. Unions play an important role in achieving this goal.

 

 

Notes

[1] Chia-Chia Chang et al., The Role of Technological Job Displacement in the Future of Work, CDC’s NIOSH Science Blog (Feb. 15, 2022), https://blogs.cdc.gov/niosh-science-blog/2022/02/15/tjd-fow/.

[2] See e.g., Jack Kelly, Goldman Sachs Predicts 300 Million Jobs Will be Lost or Degraded by Artificial Intelligence, Forbes (Mar. 31, 2023), https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/; G Krishna Kumar, AI-led Job Loss is Real, Govt Must Intervene, Deccan Herald (July 21, 2024), https://www.deccanherald.com/opinion/ai-led-job-loss-is-real-govt-must-intervene-3115077.

[3] Kelly, supra note 2.

[4] Ian Shine & Kate Whiting, These Are the Jobs Most Likely to be Lost – And Created – Because of AI, World Economic Forum (May 4, 2023), https://www.weforum.org/stories/2023/05/jobs-lost-created-ai-gpt/.

[5] Kelly, supra note 2.

[6] See e.g., Peter Dizikes, Does Technology Help or Hurt Employment?, MIT News (Apr. 1, 2024), https://news.mit.edu/2024/does-technology-help-or-hurt-employment-0401.

[7] See e.g., Hillary Hoffower, Financial Stress is Making Us Mentally and Physically Ill. Here’s How to Cope, Fortune (May 10, 2024), https://fortune.com/well/article/financial-stress-mental-health-physical-illness/; Majority of Americans Feeling Financially Stressed and Living Paycheck to Paycheck According to CNBC Your Money Survey, CNBC News Releases (Sept. 7, 2023), https://www.cnbc.com/2023/09/07/majority-of-americans-feeling-financially-stressed-and-living-paycheck-to-paycheck-according-to-cnbc-your-money-survey.html.

[8] Chang et al., supra note 1.

[9] Id.

[10] See e.g., Chris Marr, AI Poses Job Threats While State Lawmakers Move With Caution, Bloomberg Law (Aug. 13, 2024), https://news.bloomberglaw.com/daily-labor-report/ai-poses-job-threats-while-state-lawmakers-move-with-caution.

[11] Sanam Hooshidary et al., Artificial Intelligence in the Workplace: The Federal and State Legislative Landscape, National Conference of State Legislatures (updated Oct. 23, 2024), https://www.ncsl.org/state-federal/artificial-intelligence-in-the-workplace-the-federal-and-state-legislative-landscape.

[12] Kaela Roeder, High School Seniors in Maryland Are Getting Daily AI Training, Technical.ly (Nov. 8, 2024), https://technical.ly/workforce-development/high-school-ai-training-howard-county-maryland/; Maryland to Offer Free AI Training to State Employees, Government Technology (Sept. 25, 2024), https://www.govtech.com/artificial-intelligence/maryland-to-offer-free-ai-training-to-state-employees; Marr, supra note 10 (“A coalition of major tech companies is urging state lawmakers to focus their efforts on retraining workers for newly emerging jobs in the industry.”).

[13] Marr, supra note 10.

[14] Rachel Curry, Recent Data Shows AI Job Losses Are Rising, But the Numbers Don’t Tell the Full Story, CNBC (Dec. 16, 2023), https://www.cnbc.com/2023/12/16/ai-job-losses-are-rising-but-the-numbers-dont-tell-the-full-story.html.

[15] See John Hall, Why Upskilling and Reskilling Are Essential in 2023, Forbes (Feb. 24, 2023), https://www.forbes.com/sites/johnhall/2023/02/24/why-upskilling-and-reskilling-are-essential-in-2023/; The 2020s Will be a Decade of Upskilling. Employers Should Take Notice, World Economic Forum (Jan. 10, 2024), https://www.weforum.org/stories/2024/01/the-2020s-will-be-a-decade-of-upskilling-employers-should-take-notice/.

[16] Press Release, AFL-CIO and Microsoft Announce New Tech-Labor Partnership on AI and the Future of the Workforce, AFL-CIO (Dec. 11, 2023), https://aflcio.org/press/releases/afl-cio-and-microsoft-announce-new-tech-labor-partnership-ai-and-future-workforce.

[17] Josh Bivens & Ben Zipperer, Unbalanced Labor Market Power is What Makes Technology–Including AI–Threatening to Workers, Economic Policy Institute (Mar. 28, 2024), https://www.epi.org/publication/ai-unbalanced-labor-markets/.

[18] Marr, supra note 10.

[19] Press Release, supra note 16.

[20] See e.g., Roy E. Bahat & Thomas A. Kochan, How Businesses Should (and Shouldn’t) Respond to Union Organizing, Harvard Business Review (Jan. 6, 2023), https://hbr.org/2023/01/how-businesses-should-and-shouldnt-respond-to-union-organizing; Ben Bodzy, Best Practices for Union Avoidance, Baker Donelson (last visited Nov. 18, 2024), https://www.bakerdonelson.com/files/Uploads/Documents/Breakfast_Briefing_11-17-11_Union_Avoidance.pdf; Carta H. Robison, Steps for Employers to Preserve a Union Free Workplace, Barett McNagny (last visited Nov. 18, 2024), https://www.barrettlaw.com/blog/labor-and-employment-law/union-avoidance-steps-for-employers.

[21] Chelsey Sanchez, Everything to Know About the SAG Strike That Shut Down Hollywood, Harpers Bazaar (Nov. 9, 2023), https://www.harpersbazaar.com/culture/politics/a44506329/sag-aftra-actors-strike-hollywood-explained/#what-is-sag-aftra.

[22] Jake Coyle, In Hollywood Writers’ Battle Against AI, Humans Win (For Now), AP News (Sept. 27, 2023), https://apnews.com/article/hollywood-ai-strike-wga-artificial-intelligence-39ab72582c3a15f77510c9c30a45ffc8; Bryan Alexander, SAG-AFTRA President Fran Drescher: AI Protection Was A ‘Deal Breaker’ In Actors Strike, USA Today (Nov. 10, 2023), https://www.usatoday.com/story/entertainment/tv/2023/11/10/sag-aftra-deal-ai-safeguards/71535785007/.

[23] Lara Korte & Jeremy B. White, Newsom Signs Laws to Protect Hollywood from Fake AI Actors, Politico (Sept. 17, 2024), https://www.politico.com/news/2024/09/17/newsom-signs-law-hollywood-ai-actors-00179553; Party Control of California State Government, Ballotpedia, https://ballotpedia.org/Party_control_of_California_state_government (last visited Nov. 18, 2024).

[24] Advocacy: Driving Local Economies, Motion Picture Ass’n, https://www.motionpictures.org/advocacy/driving-local-economies/ (last visited Jan. 17, 2025).

[25] See, e.g., Ryan Essex & Sharon Marie Weldon, The Justification For Strike Action In Healthcare: A Systematic Critical Interpretive Synthesis, 29:5 Nursing Ethics 1152 (2022) https://doi.org/10.1177/09697330211022411; Nina Chamlou, How Nursing Strikes Impact Patient Care, NurseJournal (Oct. 10, 2023), https://nursejournal.org/articles/how-nursing-strikes-impact-patient-care/.