Your Digital Doppelgänger

Lillie Grant, MJLST Staffer

What counts as harm in an age of inference?

Modern systems do not just collect information; they generate it.[1] From patterns in behavior, timing, and interaction, they derive conclusions about people that those people never actually shared.[2] Often, those conclusions are more revealing than anything someone would voluntarily disclose.[3] And yet, the law does not clearly or consistently treat that process as harmful.[4]

Privacy law has mostly been built around disclosure.[5] The usual question is whether information was knowingly shared, improperly collected, or revealed to the wrong people.[6] The basic idea is that the data starts with the individual and then moves outward.[7] But inference does not work like that.[8] It is not about what is given; it is about what is created.[9]

The difference is more significant than it first appears, because when a system converts small pieces of behavior into conclusions about a person, it does more than record activity; it interprets it, producing not just a list of actions but a statement about their meaning.[10]

The law has not caught up. Courts are much more comfortable recognizing harm when inferred information shows up in the world in a visible way.[11] If something is revealed, shared, or used in a way that clearly affects someone, it looks like a familiar kind of injury.[12] It has consequences that feel real and immediate.[13]

But most inferences never get that far.[14] They stay inside the system that produced them.[15] They shape what someone sees, what is recommended, what is prioritized, and sometimes what opportunities are available, all without a discrete, traceable event.[16] Even when those inferences are accurate or deeply personal, they often do not trigger legal protection.[17] There is no clear moment where something was “disclosed,” and without that, courts struggle to recognize harm at all.[18]

That leaves a gap: privacy law still depends on the idea that information is something a person gives.[19] Something you can point to and say, “This was shared.”[20] But inferred data does not fit into that model.[21] It is not handed over; it is built, and because of that, it slips past categories that were never designed to capture this kind of process.[22] The problem is not just theoretical; it affects whether someone can even bring a claim.[23] To get into court, a plaintiff has to show a concrete injury.[24] Not just a feeling that something is off, but something the law is willing to recognize as harm.[25] When the issue is inference, the information may shape real outcomes but does so quietly, without a clear moment that satisfies the law’s demand for discrete injury.[26]

At the same time, these inferences are not meaningless. They are the product. Companies are not just collecting data for the sake of it; they are turning it into insights that can be used to target ads, keep people engaged, and make money.[27] The value is not just in what people do, but in what can be figured out from what they do.[28]

That raises a harder question. If a company can take your behavior, turn it into something new, and profit from it, what exactly belongs to you? The raw data came from you, but the conclusion did not. The law tends to treat that distinction as important.[29] It is not obvious that the distinction should settle the issue at all.[30]

Recent lawsuits by authors challenge the use of their works to train AI systems as a form of uncompensated extraction.[31] Because those claims focus on the inputs used to build the systems, they leave open a distinct question: whether individuals have any claim to the inferences generated about them. The problem, in other words, is not just data use but the unrecognized extraction and monetization of information produced about individuals.

There are limited signals in existing law suggesting that creating new data about a person can itself be treated as harm. The clearest come from biometric cases, where courts have recognized that generating something like a faceprint is significant even without further use.[32]

Part of what makes inference so difficult is that it does not feel like a clear violation. There is no obvious intrusion or single moment where something is taken; instead, it happens gradually, as bits of behavior that appear harmless on their own accumulate and are turned into meaning that is surprisingly complete in the aggregate.[33] That creates a deeper tension. The better systems get at understanding people, the less clear it becomes what it even means, legally, to “know” something about someone.[34] At what point does a pattern become information? And at what point does producing that information start to matter in a legal sense?

The better framing may be to abandon disclosure as the organizing principle altogether. Perhaps the issue is not disclosure but extraction. Systems are not just observing behavior; they are pulling meaning out of it and turning that meaning into something usable.[35] That something can be scaled, sold, and built into entire business models.[36] But the legal rules we have are still mostly about what people choose to share, not what can be created from what they do.[37]

If that is right, the problem is only intensifying. Systems increasingly rely on information that no one explicitly provided but that still feels personal, making it harder to say that nothing of consequence is being taken. The law offers no clear answer, leaving inferred data central in practice but misaligned with doctrines of harm. Individuals are left in a position where systems can form detailed conclusions about them while they have little ability to see or challenge those conclusions, reflecting a definition of harm that no longer matches how information is actually produced and used.

 

Notes

[1] See generally Joan M. Wrabetz, What Is Inferred Data and Why Is It Important?, ABA (Aug. 22, 2022), https://www.americanbar.org/groups/business_law/resources/business-law-today/2022-september/what-is-inferred-data-and-why-is-it-important/.

[2] Id.

[3] See Hal Conick, AI and the Law, Univ. Chi. L. Sch. (Dec. 9, 2024), https://www.law.uchicago.edu/news/ai-and-law.

[4] Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, 2019 Colum. Bus. L. Rev. 494.

[5] See Overview of the Privacy Act of 1974: Conditions of Disclosure to Third Parties, U.S. Dep’t of Just., https://www.justice.gov/opcl/overview-privacy-act-1974-2020-edition/disclosures-third-parties (last visited Apr. 9, 2026).

[6] Id.

[7] Id.

[8] See Wrabetz, supra note 1.

[9] Id.

[10] Id.

[11] See Harith Khawaja, Injury, in Fact: The Internet, the Americans with Disabilities Act, and Standing in Digital Spaces, 36 Stan. L. & Pol’y Rev. 165, 172 (2025).

[12] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Danielle Keats Citron & Daniel Solove, Privacy Harms, 102 B.U. L. Rev. 793 (2022).

[13] Id.

[14] Jeffrey Erickson, What Is AI Inference?, Oracle (Apr. 2, 2024), https://www.oracle.com/artificial-intelligence/ai-inference/#:~:text=Inference%2C%20to%20a%20lay%20person,in%20the%20training%20data%20set.

[15] Id.

[16] Id.

[17] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Citron & Solove, supra note 12.

[18] Id.

[19] Citron & Solove, supra note 12.

[20] See Pamela J. Wisniewski & Xinru Page, Privacy Theories and Frameworks, in Modern Socio-Technical Perspectives on Privacy 15 (2022).

[21] Wrabetz, supra note 1.

[22] See Privacy by Proxy: Regulating Inferred Identities in AI Systems, IAPP (Nov. 12, 2025), https://iapp.org/news/a/privacy-by-proxy-regulating-inferred-identities-in-ai-systems.

[23] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).

[24] Id.

[25] Id.

[26] Wrabetz, supra note 1.

[27] Id.

[28] Id.

[29] Id.

[30] Id.

[31] See Pramode Chiruvolu et al., Fair Use and AI Training: Two Recent Decisions Highlight the Complexity of This Issue, Skadden, Arps, Slate, Meagher & Flom LLP (July 8, 2025), https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training.

[32] See Ross D. Emmerman & Mark Goldberg, Illinois Supreme Court Rules No Actual Harm Needed for Biometric Information Protection Act Claims; Floodgates Open, Loeb & Loeb LLP (Jan. 2019), https://www.loeb.com/en/insights/publications/2019/01/illinois-supreme-court-rules-no-actual-harm-needed.

[33] Wrabetz, supra note 1.

[34] Id.

[35] Id.

[36] Id.

[37] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).


Can American Antitrust Law Keep Up With Artificial Intelligence?

Alec J. Berin, Matthew P. Suzor, and Quintin C. Cerione of Miller Shah LLP

Since the debut of OpenAI’s ChatGPT in late 2022, artificial intelligence (AI) has exploded from an experimental tool into a global industry. The exponential rise of generative AI, although providing companies and consumers with greater levels of efficiency and productivity, is putting pressure on American antitrust law to play catch-up in regulating the growing AI market.

As AI becomes commonplace today, one of the greatest challenges it poses is that its building blocks—chips, cloud infrastructure, and large-language models—are largely controlled by only a handful of companies.[1] A major concern, therefore, is whether American antitrust law, which was largely designed during an industrial period dominated by railroads and manufacturing, can address the competitive risks of the AI era. Regulators and courts have started to express their perspectives about these issues, yet more questions than answers have emerged.

The Intersection of American Antitrust Doctrine and AI

The core of the American antitrust framework comprises the Sherman Antitrust Act (1890), the Clayton Act (1914), and the Federal Trade Commission Act (1914).[2] The Sherman Act was enacted to target monopolization by barring exclusionary practices, while the Clayton Act filled its holes by prohibiting mergers and acquisitions whose effect “may be substantially to lessen competition, or to tend to create a monopoly.”[3] Historically, courts have applied these laws to industries defined by physical assets, such as steel, oil, and operating systems.[4] Today, however, market power increasingly consists of control over intangible assets: data and algorithms.

Regulators are attempting to offer guidance on how these statutes apply in a digital and data-driven era. For example, in 2023 the FTC and DOJ issued revised Merger Guidelines, which warned that a merger could undermine competition if it “creates a firm that can limit access to products or services that its rivals use to compete.”[5] Although not directed exclusively at tech companies, this language nonetheless signals antitrust law’s expanded focus on vertical integration—especially relevant for partnerships aimed at combining control of AI infrastructure and data services.

The particular challenge for regulating market power in the AI sector is defining the relevant market. Because AI depends on key inputs—vast amounts of data and computational resources—rather than the traditional products and services that have historically defined markets, delineating the relevant market is uniquely complex. This is clearly indicated in a 2025 report from the Congressional Research Service, which warns that “limited access to data” may threaten competition, regardless of whether AI services remain free to consumers.[6] In the coming years, determining whether AI regulation will concentrate on the models, chips, or cloud services used for these products—or whether they will be treated as a single integrated stack—will be critical in shaping enforcement outcomes.

Early AI-Antitrust Legal Battles

In recent months, lawsuits against major tech companies have begun to address how far traditional antitrust principles extend into the AI space.[7] This October, a class-action lawsuit filed against Microsoft[8] alleged that its financial relationship with OpenAI—particularly a deal granting Microsoft exclusive cloud-computing rights that restrict the supply of computational resources needed to run ChatGPT—both limited market competition and artificially drove up ChatGPT subscription prices while diminishing product quality for millions of OpenAI users.[9] Similar concerns are being raised by antitrust experts regarding Nvidia’s $100 billion partnership with OpenAI,[10] as experts fear that such a relationship will give both companies an unfair advantage over their competitors.

Perhaps most notably, a September ruling by a federal judge in a landmark antitrust case against Google illustrated how AI may continue to complicate the regulation of monopolies.[11] Although the judge affirmed that “Google cannot use the same anticompetitive playbook for its GenAI products that it used for Search,” he insisted that the emergence of generative AI has given companies a greater ability “to compete with Google than any traditional search company developer has been in decades,” and he ultimately spared Google from the harshest proposed penalties.[12] This exemplifies the inherent tension of AI: a technology capable of both fostering and hindering competition will prove only more difficult for regulators to address in the years to come.

Critical Legal Questions to Consider

Going forward, courts will need to answer a series of questions to best address the competitive concerns raised by AI. First, as AI blurs product boundaries—with single companies involved in many layers of the supply chain—determining whether these layers represent distinct or integrated markets has significant implications for assessing anticompetitive behavior.

Second, because several of the most popular AI products offer services for free or at low cost, harm to consumers may lie outside the scope of price fixing and instead result from diminished product quality and restricted access to inputs.[13] It will be up to courts and regulators to determine when harm is being committed in the AI market.

Third, defining the line between integration and exclusion will become increasingly urgent. Though partnerships and acquisitions may accelerate innovation, unlawful exclusion may arise when integrated companies restrict rivals’ access to essential inputs or engage in self-preferencing through exclusive supply arrangements. Though this risk is outlined in the 2023 Merger Guidelines, it remains to be seen how courts will approach this issue in the coming years.

 

Notes

[1] See, e.g., Jay Stanley, Will Giant Companies Always Have a Monopoly on Top AI Models?, ACLU (Aug. 20, 2025), https://www.aclu.org/news/racial-justice/will-giant-companies-always-have-a-monopoly-on-top-ai-models; Steven Levy, There Is Only One AI Company. Welcome to the Blob, Wired (Nov. 21, 2025, 11:00 AM), https://www.wired.com/story/ai-industry-monopoly-nvidia-microsoft-google/.

[2] See Sherman Antitrust Act of 1890, 15 U.S.C. §§ 1–38; Clayton Act of 1914, 15 U.S.C. §§ 12–27; Federal Trade Commission Act of 1914, 15 U.S.C. §§ 41–58.

[3] Clayton Act § 7, 15 U.S.C. § 18.

[4] See, e.g., United States v. Columbia Steel Co., 334 U.S. 495 (1948) (applying the Sherman Act to the steel industry); FTC v. Sinclair Ref. Co., 261 U.S. 463 (1923) (applying the Federal Trade Commission Act and Clayton Act to the oil industry); United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001) (applying the Sherman Act to operating systems).

[5] Federal Trade Commission & U.S. Department of Justice, Merger Guidelines (issued Dec. 18, 2023),
https://www.justice.gov/atr/2023-merger-guidelines.

[6] Congressional Research Service, Artificial Intelligence and Competition Policy (2025), CRS Insight No. IN12458, https://crsreports.congress.gov/product/pdf/IN/IN12458.

[7] Mike Scarcella, AI Users Sue Microsoft in Antitrust Class Action Over OpenAI Deal, Reuters (Oct. 13, 2025, 5:47 PM CDT), https://www.reuters.com/legal/government/ai-users-sue-microsoft-antitrust-class-action-over-openai-deal-2025-10-13/.

[8] Class Action Complaint, Samuel Bryant et al. v. Microsoft Corp., No. 3:25‑cv‑08733 (N.D. Cal. filed Oct. 13, 2025) (alleging anticompetitive restraints arising from Microsoft’s partnership with OpenAI).

[9] Scarcella, supra note 7.

[10] Jody Godoy, Nvidia’s $100 Billion OpenAI Play Raises Big Antitrust Issues, Reuters (Sept. 23, 2025),
https://www.reuters.com/technology/nvidias-100-billion-openai-play-raises-big-antitrust-concerns-2025-09-23/.

[11] See generally United States v. Google LLC, 803 F. Supp. 3d 18 (D.D.C. 2025) (remedies decision addressing generative AI’s competitive effects).

[12] Id. at 99, 128.

[13] Scarcella, supra note 7.


Why New York’s Algorithmic Pricing Disclosure Act Is Not Enough

Jannelle Liu, MJLST Staffer

As artificial intelligence (“AI”) becomes increasingly integrated into business development strategies, policymakers have been prompted to consider new frameworks for oversight and accountability.[1] One prominent—and increasingly contentious—example is algorithmic pricing. The Canadian Competition Bureau broadly defines algorithmic pricing as the process of using automated algorithms to set or recommend prices for products or services, often in real time, based on a set of data inputs across the market.[2]

Algorithmic pricing recently became a contested topic of conversation as more U.S. lawmakers began introducing legislation to regulate these practices. On May 9, 2025, New York passed the Algorithmic Pricing Disclosure Act (“the Act”), which took effect on July 8, 2025.[3] The Act requires any business that uses algorithmic pricing based on consumer data to provide clear and conspicuous notice.[4] Specifically, the Act requires every advertisement, display, image, offer, or announcement of a price to include the following disclosure next to the price: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.”[5] The Act is an attempt to promote AI transparency. Although transparency is a necessary and important safeguard for accountability and consumer protection, this Act alone is not enough to establish effective oversight and prevent discriminatory pricing practices.[6]

As businesses increasingly rely on algorithmic pricing to optimize profits and dynamically respond to market demand, many AI researchers and tech advocates have called for greater transparency.[7] AI ethics guidelines focus on achieving transparency through principles of explainability and auditability. “Explainability” refers to the possibility of understanding how a system works and why it produces the outcomes it does.[8] For example, if a business uses an algorithm to set different prices for the same product based on user data, explainability asks whether consumers know that the price was determined by an algorithm and which factors influenced the final price, such that they can determine whether they are being charged disproportionately or unfairly. Transparency builds explainability, which gives consumers insight into AI decision-making and enables them to challenge unfair outcomes.

“Accountability” in AI refers to the duty of an organization that implements an AI system to inform and justify its usage and effects.[9] For example, if a business sets higher prices for certain neighborhoods or zip codes because it predicts residents are willing to pay more for their product, accountability requires the business to explain how the algorithm sets prices, justify that it does not unfairly discriminate against lower-income or minority communities, and correct any biased outcomes if they occur. Transparency ensures that businesses are being held accountable for fairness and equity in their algorithmic pricing practices.

Transparency is often regarded as the solution to a myriad of problems and remains a focus for most policy proposals in the field of AI.[10] In fact, 165 out of 200 AI ethics guidelines are specifically focused on promoting AI transparency.[11] It is equally important, however, to recognize that transparency has many flaws of its own. The link between transparency and accountability is tenuous at best. Consumers often do not know what information they need to have about a problem. Even when they are given information, many consumers lack the background knowledge or tools necessary to make sense of it. Companies, for their part, are incentivized to refrain from being fully transparent to maintain competitive advantages and trade secrets, and to dodge the costly process of producing comprehensive algorithmic disclosures.[12] The complicated nature of these algorithms already introduces significant barriers to interpretability. Placing the burden of transparency on businesses—which are incentivized to control the narrative by selectively revealing information—is inherently counterproductive to the goals of explainability and accountability.

New York is not the only state responding to risks posed by algorithmic pricing, but its approach is among the most modest. Emerging state legislation sheds light on the broader regulatory landscape surrounding AI-driven pricing practices. Other states have proposed more stringent measures. Vermont is currently considering a bill that prohibits all dynamic pricing past the point of sale, which would eliminate the ability of businesses to adjust prices in real time.[13] Minnesota has proposed an outright ban on algorithmic pricing practices.[14] California is considering a bill that bans “surveillance pricing,” which sets customized prices based on personally identifiable information collected through surveillance.[15] Consumers in California would be able to bring injunctive actions directly against businesses under this act.[16] Compared with these proposals, New York’s Algorithmic Pricing Disclosure Act takes a notably minimalist approach: it only requires businesses to disclose when a price was set using consumer data. The law does not address fairness, prevent discriminatory pricing, or provide consumers with any direct remedies.

New York’s Algorithmic Pricing Disclosure Act represents a step in the right direction to regulate the currently under-regulated field of algorithmic pricing. However, it is only a start. Effective governance of algorithmic systems requires coordinated action across states, tech companies, universities, and the public.[17] Merely requiring businesses to acknowledge the use of algorithmic pricing is simply not enough to counter the risks of unfair, predatory, and discriminatory pricing. It is important to introduce mechanisms to monitor compliance, evaluate the impacts these systems have, and provide affected communities with a means for recourse and meaningful participation. While transparency is politically appealing and relatively easy to implement, it fails to achieve any meaningful impact without rigorous enforcement. AI transparency laws like New York’s Algorithmic Pricing Disclosure Act must be backed by adequately funded agencies with the authority to conduct audits and impose substantive sanctions on companies and the executives responsible for unfair or predatory pricing. Any transparency or disclosure-focused policies should also reflect what the public really wants to know and can interpret. Acknowledging that an algorithm was used to set prices, without any disclosure on how the algorithm functions, the data it uses, or its potential biases, fails to create meaningful accountability or consumer protection.

 

Notes

[1] Beth Stackpole, How Big Firms Leverage Artificial Intelligence for Competitive Advantage, MIT Sloan: Ideas Made to Matter (May 26, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/how-big-firms-leverage-artificial-intelligence-competitive-advantage.

[2] Competition Bureau Can., Algorithmic Pricing and Competition: Discussion Paper (June 10, 2025), https://competition-bureau.canada.ca/en/how-we-foster-competition/education-and-outreach/publications/algorithmic-pricing-and-competition-discussion-paper.

[3] N.Y. Gen. Bus. L. § 349-a (McKinney 2025).

[4] Id.

[5] Id.

[6] Goli Mahdavi & Carlie Tenenbaum, New York’s Sweeping Algorithmic Pricing Reforms – What Retailers Need to Know, BCLP L. (July 22, 2025), https://www.bclplaw.com/en-US/events-insights-news/new-yorks-sweeping-algorithmic-pricing-reforms-what-retailers-need-to-know.html.

[7] Elizabeth Meehan, Transparency Won’t Be Enough for AI Accountability, Tech Pol’y (May 17, 2023), https://www.techpolicy.press/transparency-wont-be-enough-for-ai-accountability/.

[8] Juan David Gutiérrez, Why Does Algorithmic Transparency Matter and What Can We Do About It?, Open Glob. Rts. (Apr. 9, 2025), https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/.

[9] Id.

[10] Id.

[11] Meehan, supra note 7.

[12] AI Transparency: What Are Companies Really Hiding?, Open Tools (Jan. 16, 2025), https://opentools.ai/news/ai-transparency-what-are-companies-really-hiding#section5.

[13] Gutiérrez, supra note 8.

[14] Robbie Sequeira, Cities—Including Minneapolis—Lead Bans on Algorithmic Rent Hikes as States Lag Behind, Minn. Reformer (Apr. 2, 2025), https://minnesotareformer.com/2025/04/02/cities-including-minneapolis-lead-bans-on-algorithmic-rent-hikes-as-states-lag-behind/.

[15] Gutiérrez, supra note 8.

[16] Stackpole, supra note 1.

[17] Gutiérrez, supra note 8.


The Power of Preference or Monopoly? Unpacking Google’s Search Engine Domination

Donovan Ennevor, MJLST Staffer

When searching for an answer to a query online, would you ever use a different search engine than Google? The answer for most people is almost certainly no. Google’s search engine has achieved such market domination that “to Google” has become a verb in the English language.[1] Google controls 90% of the U.S. search engine market, with its closest competitors, Yahoo and Bing, holding around 3% each.[2] Is this simply because Google offers a superior product, or is there some other, more nefarious reason?

According to the Department of Justice (“DOJ”), the answer is the latter: Google has dominated its competitors by engaging in illegal practices and creating a monopoly. Federal Judge Amit Mehta agreed with the DOJ’s position and ruled in August 2024 that Google’s market domination was a monopoly achieved through improper means.[3] The remedies for Google’s breach of antitrust law have yet to be determined; however, their consequences could have far-reaching implications for the future of Google and Big Tech.

United States v. Google LLC

In October 2020, the DOJ and 11 states filed a civil suit against Google in the U.S. District Court for the District of Columbia, alleging violations of U.S. antitrust laws.[4] A coalition of 35 states, Guam, Puerto Rico, and Washington D.C. filed a similar lawsuit in December 2020.[5] In 2021, the cases were consolidated into a single proceeding to address the overlapping claims.[6] An antitrust case of this magnitude had not been brought in nearly two decades.[7]

The petitioners’ complaint argued that Google’s dominance did not arise solely through superior technology, but rather through exclusionary agreements designed to stifle competition in the online search engine and search advertising markets.[8] The complaint alleged that Google maintained its monopolies through practices such as entering into exclusivity agreements that prohibited the preinstallation of competitors’ search engines, forcing preinstallation of Google’s search engine in prime mobile device locations, and making it undeletable regardless of consumer preference.[9] For example, Google’s agreement with Apple required that all Apple products and tools have Google as the preinstalled default—essentially an exclusive—search engine.[10] Google also allegedly used its monopoly profits to fund payments securing preferential treatment on devices, web browsers, and other search access points, creating a self-reinforcing cycle of monopolization.[11]

According to the petitioners, these practices not only limited competitor opportunities, but also harmed consumers by reducing search engine options and diminishing quality, particularly in areas like privacy and data use.[12] Furthermore, Google’s dominance in search advertising has allowed it to charge higher prices, impacting advertisers and lowering service quality—outcomes unlikely in a more competitive market.[13]

Google rebutted the petitioners’ argument, asserting instead that its search product is preferred due to its superiority and is freely chosen by its consumers.[14] Google also noted that if users wish to switch to a different search engine, they can do so easily.[15]

However, Judge Mehta agreed with the arguments posed by the petitioners and held that Google’s market dominance in search and search advertising constituted a monopoly, achieved through exclusionary practices in violation of U.S. antitrust laws.[16] The case will now move to the remedy determination phase, where the DOJ and Google will argue over what remedies are appropriate to impose on Google during a hearing in April 2025.[17]

The Proposed Remedies and Implications

In November, the petitioners filed their final proposed remedies for Google—both behavioral and structural—with the court.[18] Behavioral remedies govern a company’s conduct, whereas structural remedies generally involve reorganization or divestment.[19] The proposed behavioral remedies include barring Google from entering exclusive preinstallation agreements and requiring Google to license certain indexes, data, and models that drive its search engine.[20] These remedies would help create more opportunities for competing search engines to gain visibility and improve their search capabilities and ad services. The petitioners’ filing noted that they would also pursue structural remedies, including forcing Google to break up or divest from its Chrome browser and Android mobile operating system.[21] To ensure Google adheres to these changes, the petitioners proposed appointing a court-monitored technical committee to oversee Google’s compliance.[22]

It could be many years before any of the proposed remedies are actually instituted, given that Google has indicated it will appeal Judge Mehta’s ruling.[23] Additionally, given precedent, it is unlikely that any structural remedies will be imposed or enforced.[24] However, any remedies ultimately approved would set a precedent for regulatory control over Big Tech, signaling that the U.S. government is willing to take strong steps to curb monopolistic practices. This could encourage further action against other tech giants and redefine regulatory expectations across the industry, particularly around data transparency and competition in digital advertising.

 

Notes

[1] See Virginia Heffernan, Just Google It: A Short History of a Newfound Verb, Wired (Nov. 15, 2017, 7:00 AM), https://www.wired.com/story/just-google-it-a-short-history-of-a-newfound-verb/.

[2] Justice Department Calls for Sanctions Against Google in Landmark Antitrust Case, Nat’l Pub. Radio (Oct. 9, 2024, 12:38 AM), https://www.npr.org/2024/10/09/nx-s1-5146006/justice-department-sanctions-google-search-engine-lawsuit [hereinafter Calls for Sanctions Against Google].

[3] United States v. Google LLC, No. 1:20-cv-03010 (APM), 2024 WL 3647498, at *1, *134 (D.D.C. Aug. 5, 2024).

[4] Justice Department Sues Monopolist Google For Violating Antitrust Laws, U.S. Dep’t of Just. (Oct. 20, 2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws [hereinafter Justice Department Calls for Sanctions].

[5] Dara Kerr, United States Takes on Google in Biggest Tech Monopoly Trial of 21st Century, Nat’l Pub. Radio (Sept. 12, 2023, 5:00 AM), https://www.npr.org/2023/09/12/1198558372/doj-google-monopoly-antitrust-trial-search-engine.

[6] Tracker Detail US v. Google LLC / State of Colorado v. Google LLC, TechPolicy.Press, https://www.techpolicy.press/tracker/us-v-google-llc/ (last visited Nov. 20, 2024).

[7] Calls for Sanctions Against Google, supra note 2 (“The last antitrust case of this magnitude to make it to trial was in 1998, when the Justice Department sued Microsoft.”).

[8] Justice Department Calls for Sanctions, supra note 4.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Kerr, supra note 5.

[15] Id.

[16] United States v. Google LLC, No. 1:20-cv-03010-APM, 2024 WL 3647498, at *1, *4 (D.D.C. Aug. 5, 2024).

[17] Calls for Sanctions Against Google, supra note 2.

[18] Steve Brachmann, DOJ, State AGs File Proposed Remedial Framework in Google Search Antitrust Case, IPWatchdog (Oct. 13, 2024, 12:15 PM), https://ipwatchdog.com/2024/10/13/doj-state-ags-file-proposed-remedial-framework-google-search-antitrust-case/id=182031/.

[19] Dan Robinson, Uncle Sam May Force Google to Sell Chrome Browser, or Android OS, The Reg. (Oct. 9, 2024, 12:56 PM), https://www.theregister.com/2024/10/09/usa_vs_google_proposed_remedies/.

[20] Brachmann, supra note 18.

[21] Exec. Summary of Plaintiffs’ Proposed Final Judgment at 3–4, United States v. Google LLC, No. 1:20-cv-03010-APM (D.D.C. Nov. 20, 2024).

[22] Id.

[23] See Jane Wolfe & Miles Kruppa, Google Loses Antitrust Case Over Search-Engine Dominance, Wall Street J. (Aug. 5, 2024, 5:02 PM), https://www.wsj.com/tech/google-loses-federal-antitrust-case-27810c43?mod=article_inline.

[24] See Makenzie Holland, Google Breakup Unlikely in Event of Guilty Verdict, TechTarget (Oct. 11, 2023), https://www.techtarget.com/searchcio/news/366555177/Google-breakup-unlikely-in-event-of-guilty-verdict; see also Michael Brick, U.S. Appeals Court Overturns Microsoft Antitrust Ruling, N.Y. Times (June 28, 2001), https://www.nytimes.com/2001/06/28/business/us-appeals-court-overturns-microsoft-antitrust-ruling.html (summarizing the U.S. Court of Appeals decision overturning the structural remedies imposed on Microsoft in an antitrust case).

 

 


The Introduction of “Buy Now, Pay Later” Products

Yanan Tang, MJLST Staffer

As of June 2024, it is estimated that more than half of Americans turn to Buy Now, Pay Later (“BNPL”) options to purchase products during financially stressful times.[1] BNPL allows customers to split the cost of a purchase into four equal payments: a down payment of 25 percent, followed by three periodic installments covering the remainder.[2]

 

Consumer Financial Protection Bureau’s Interpretive Rules

In response to the popularity of BNPL products, the Consumer Financial Protection Bureau (“CFPB”) took action to regulate them.[3] In issuing its interpretive rules for BNPL, the CFPB aims to outline how these products fit within existing credit regulations. The CFPB introduced the interpretive rules in May 2024, and after a 60-day review period that drew mixed feedback, the rules became effective in July, aiming to apply credit card-like consumer protections to BNPL services under the Truth in Lending Act (“TILA”).

Specifically, the interpretive rules assert that BNPL providers meet the criteria for being “card issuers” and “creditors,” and therefore should be subject to the relevant provisions of TILA governing credit card disputes and refund rights.[4] Under the CFPB’s interpretive rules, BNPL firms are required to investigate disputes, refund returned products or voided services, and provide billing statements.[5]

This blog will first explain the distinction between interpretive rules and notice-and-comment rulemaking to contextualize the CFPB’s regulatory approach. It will then explore the key consumer protections these rules aim to enforce and examine the mixed responses from various stakeholders. Finally, it will analyze the Financial Technology Association’s lawsuit challenging the CFPB’s rules and consider the broader implications for BNPL regulation.

 

Interpretive Rules and Notice-and-Comment Rulemaking Explained

In general, interpretive rules are non-binding and do not require public input, while notice-and-comment rules carry the force of law and must follow a formal process, including public feedback, as outlined in § 553 of the Administrative Procedure Act (“APA”).[6] The “legal effect test” from American Mining Congress v. MSHA helps determine whether a rule is interpretive or legislative by examining factors like legislative authority, the need for a legal basis for enforcement, and whether the rule amends an existing law.[7] While courts vary in the factors they use to distinguish legislative from interpretive rules, they generally agree that agencies cannot hide real regulations in interpretive rules.

 

Comments Received from Consumer Groups, Traditional Banks, and BNPL Providers

After soliciting comments, the CFPB received conflicting feedback on the proposed interpretive rules.[8] Consumer groups largely welcomed the rules but urged the agency to take further action to protect consumers who use BNPL credit.[9] Traditional banks also supported the rule, reasoning that BNPL’s digital user accounts are similar to those of credit cards and should be regulated similarly.[10] In contrast, major BNPL providers protested the rule.[11] Many of them, like PayPal, raised concerns about administrative procedure and urged the CFPB to proceed through notice-and-comment rulemaking.[12] In sum, the conflicting comments highlight the challenge of applying traditional credit regulations to innovative financial products and have fueled broader disputes over the rule’s implementation.

 

Financial Technology Association’s Lawsuit against CFPB’s New Rules

After the interpretive rules went into effect in July, the Financial Technology Association (“FTA”) filed a lawsuit against the agency to stop the rules.[13] In its complaint, FTA contends that the CFPB bypassed the APA’s notice-and-comment rulemaking process, despite the significant change imposed by the rule.[14] FTA argues that the agency exceeded its statutory authority under TILA because the act’s definition of “credit card” does not apply to BNPL products.[15] FTA also argues that the rule is arbitrary and capricious because it fails to account for the unique structure of BNPL products and their compliance challenges with Regulation Z.[16]

The ongoing case between FTA and the CFPB will likely turn on whether the CFPB’s rule is a permissible interpretation of existing law or a substantive rule requiring formal rulemaking under APA § 553. That determination should weigh the nature of BNPL products against the consumer protections traditionally associated with credit card-like products. In defending its interpretive rules against FTA, the CFPB could highlight TILA’s flexible legislative intent and its rationale for proceeding by interpretive rule.

 

Notes

[1] See Block, Inc., More than Half of Americans Turn to Buy Now, Pay Later During Financially Stressful Times (June 26, 2024), https://investors.block.xyz/investor-news/default.aspx.

[2] Id.

[3] See Paige Smith & Paulina Cachero, Buy Now, Pay Later Needs Credit Card-Like Oversight, CFPB Says, Bloomberg Law (May 22, 2024), https://news.bloomberglaw.com/banking-law/buy-now-pay-later-soon-will-be-treated-more-like-credit-cards.

[4] Id.

[5] Id.

[6] 5 U.S.C.A. § 553.

[7] Am. Mining Cong. v. Mine Safety & Health Admin., 995 F.2d 1106 (D.C. Cir. 1993).

[8] See Evan Weinberger, CFPB’s ‘Buy Now, Pay Later’ Rule Sparks Conflicting Reactions, Bloomberg Law (Aug. 1, 2024), https://news.bloomberglaw.com/banking-law/cfpbs-buy-now-pay-later-rule-sparks-conflicting-reactions.

[9] See New York City Dep’t of Consumer & Worker Prot., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (Aug. 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0027; see also Nat’l Consumer L. Ctr., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017, at 1 (Aug. 1, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0028.

[10] See Independent Community Bankers of Am., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0023.

[11] See Financial Technology Ass’n, Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 19, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0038.

[12] See PayPal, Inc., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0025.

[13] See Evan Weinberger, CFPB Buy Now, Pay Later Rule Hit With Fintech Group Lawsuit, Bloomberg Law (Oct. 18, 2024), https://news.bloomberglaw.com/banking-law/cfpbs-buy-now-pay-later-rule-hit-with-fintech-group-lawsuit.

[14] Complaint, Fin. Tech. Ass’n v. Consumer Fin. Prot. Bureau, No. 1:24-cv-02966 (D.D.C. Oct. 18, 2024).

[15] Id.

[16] Id.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new rules overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc. that allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the new reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

BIPA’s passage in 2008 made it one of the earliest consumer protection laws for biometric data collection. At that time, major corporations were piloting finger-scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers but also to prescribe preventive measures for companies to follow. BIPA’s protections are broad: companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected, why it is being collected, and how long they intend to store it.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as securely as a company’s other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce data privacy protections. Following its passage, waves of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was not mishandled.[10] Plaintiffs could recover for every violation of BIPA, and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Since litigants could collect damages for every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and if found liable, companies would owe, at minimum, $1,000 per violation. That lack of predictability often pushed companies’ liability insurers toward settling rather than risking such large payouts.

The 2023 ruling in Cothron implored the legislature to address concerns of disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller or mid-size companies.[14] The new provisions to BIPA target the Court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.

BIPA Reforms Mark a Trend Towards Laxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need them most. The stakes for mishandling biometric data are much higher than for other collected data. While social security numbers and credit card numbers can be canceled and changed, with varying degrees of ease, most constituents would be unwilling to change their faces and fingerprints for the sake of _____.[15] Ongoing and future technological developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane and our family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovering on a per-person basis instead of a per-violation basis means they could be harmed again, with no redress, after recovering against a company.

Corporations, however, have been calling for reforms for years, believing that these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers also encouraged companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may only recover once, they can still recover actual damages if a company’s actions resulted in harm beyond simply violating BIPA’s provisions. Awards on a per-person basis can still result in hefty settlements or verdicts that hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions are applied will be left for state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can only recover from an entity once, are past litigants barred from participating in future actions regarding similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator (Aug. 2, 2024, 3:13 PM), https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham.

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


Payment Pending: CFPB Proposes to Regulate Digital Wallets

Kevin Malecha, MJLST Staffer

Federal regulators are increasingly concerned about digital wallets and person-to-person (P2P) payment apps like Apple Pay, Google Pay, Cash App, and Venmo, and how such services might impact the rights of financial consumers. As many as three-quarters of American adults use digital wallets or payment apps, and in 2022 the total value of transactions was estimated at $893 billion, expected to increase to $1.6 trillion by 2027.[1] In November 2023, the Consumer Financial Protection Bureau (CFPB) proposed a rule that would expand its supervisory powers to cover certain nonbank providers of these services. The CFPB, an independent federal agency within the broader Federal Reserve System, was created by the Dodd-Frank Act in response to the 2007–2008 financial crisis and subsequent recession. The Bureau is tasked with protecting consumers in the financial space by promulgating and enforcing rules governing a wide variety of financial activities like mortgage lending, debt collection, and electronic payments.[2]

The CFPB has identified digital wallets and payment apps as products that threaten consumer financial rights and well-being.[3] First, because these services collect mass amounts of transaction and financial data, they pose a substantial risk to consumer data privacy.[4] Second, if the provider ceases operations or faces a “bank” run, any funds held in digital accounts may be lost because Federal Deposit Insurance Corporation (FDIC) protection, which insures deposits up to $250,000 in traditional banking institutions, is often unavailable for digital wallets.[5]

Enforcement and Supervision

The CFPB holds dual enforcement and supervisory roles. As one of the federal agencies charged with “implementing the Federal consumer financial laws,”[6] the enforcement powers of the CFPB are broad, but enforcement actions are relatively uncommon. In 2022, the Bureau brought twenty enforcement actions.[7] By contrast, the Commodity Futures Trading Commission (CFTC), which is also tasked in part with protecting financial consumers, brought eighty-two enforcement actions in the same period.[8] In contrast to the limited and reactionary nature of enforcement actions, the CFPB’s supervisory authority requires regulated entities to disclose certain documents and data, such as internal policies and audit reports, and allows CFPB examiners to proactively review their actions to ensure compliance.[9] The Bureau describes its supervisory process as a tool for identifying issues and addressing them before violations become systemic or cause significant harm to consumers.[10]

The CFPB already holds enforcement authority over all digital wallet and payment app services via its broad power to adjudicate violations of financial laws wherever they occur.[11] However, the Bureau has so far enjoyed only limited supervisory authority over the industry.[12] Currently, the CFPB only supervises digital wallets and payment apps when those services are provided by banks or when the provider falls under another CFPB supervision rule.[13] As tech companies like Apple and Google – which do not fall under other CFPB supervision rules – have increasingly entered the market, they have gone unsupervised.

Proposed Rule

Under its organic statute, the CFPB’s existing supervisory authority covers nonbank persons that offer certain financial services, including real estate and mortgage loans, private education loans, and payday loans.[14] In addition, the statute allows the Bureau to promulgate rules covering other entities that are “larger participant[s] of a market for other consumer financial products or services.”[15] The proposed rule takes advantage of this power to define “larger participants,” expanding the definition to include providers of “general-use digital consumer payment applications,” which the Bureau defines as applications providing funds transfer or wallet functionality that consumers use to make payments for personal, household, or family purposes.[16] An entity is a “larger participant” if it (1) provides general-use digital consumer payment applications with an annual volume of at least five million transactions and (2) is not a small business as defined by the Small Business Administration.[17] The Bureau will make determinations on an individualized basis and may request documents and information from an entity to determine whether it satisfies the requirements, which the entity can then dispute.

Implications for Digital Wallet and Payment App Providers

Major companies like Apple and Google can easily foresee that the CFPB intends to supervise them under the new rule. The Director of the CFPB recently compared the two American companies to Chinese tech companies Alibaba and WeChat that offer similar products and that, in the Director’s view, pose a similar risk to consumer data privacy and financial security.[18] For smaller firms, predicting the Bureau’s intentions is challenging, but existing regulations indicate that the Bureau will issue a written communication to initiate supervision.[19] The entity will then have forty-five days to dispute the finding that they meet the regulatory definition of a “larger participant.”[20] In their response, entities may include a statement of the reason for their objection and records, documents, or other information. Then the Assistant Director of the CFPB will review the response and make a determination. The regulation gives the Assistant Director the ability to request records and documents from the entity prior to the initial notification of intended supervision and throughout the determination process.[21] The Assistant Director also may extend the timeframe for determination beyond the forty-five-day window.[22]

If an entity becomes supervised, the Bureau will contact it for an initial conference.[23] The examiners will then determine the scope of future supervision, taking into consideration the responses at the conference, any records requested prior to or during the conference, and a review of the entity’s compliance management program.[24] The Bureau prioritizes its supervisory activities based on entity size, volume of transactions, size and risk of the relevant market, state oversight, and other market information to which the Bureau has access.[25] Ongoing supervision is likely to vary based on these factors, as well, but may include on-site or remote examination, review of documents and records, testing accounts and transactions for compliance with federal statutes and regulations, and continued review of the compliance management system.[26] The Bureau may then issue a confidential report or letter stating the examiner’s opinion that the entity has violated or is at risk of violating a statute or regulation.[27] While these findings are not final determinations, they do outline specific steps for the entity to regain or ensure compliance and should be taken seriously.[28] Supervisory reports or letters are distinct from enforcement actions and generally do not result in an enforcement action.[29] However, violations may be referred to the Bureau’s Office of Enforcement, which would then launch its own investigation.[30]

The likelihood of the proposed rule resulting in an enforcement action is, therefore, relatively low, but the exposure for regulated entities is difficult to measure because penalties in enforcement actions vary widely. From October 2022 to October 2023, amounts paid by regulated entities ranged from $730,000, paid by a remittance provider that violated electronic fund transfer rules,[31] to $3.7 billion in penalties and redress, paid by Wells Fargo for headline-making violations of the Consumer Financial Protection Act.[32]

Notes

[1] Analysis of Deposit Insurance Coverage on Funds Stored Through Payment Apps, Consumer Fin. Prot. Bureau (Jun. 1, 2023), https://www.consumerfinance.gov/data-research/research-reports/issue-spotlight-analysis-of-deposit-insurance-coverage-on-funds-stored-through-payment-apps/full-report.

[2] Final Rules, Consumer Fin. Prot. Bureau, https://www.consumerfinance.gov/rules-policy/final-rules (last visited Nov. 16, 2023).

[3] CFPB Proposes New Federal Oversight of Big Tech Companies and Other Providers of Digital Wallets and Payment Apps, Consumer Fin. Prot. Bureau (Nov. 7, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-new-federal-oversight-of-big-tech-companies-and-other-providers-of-digital-wallets-and-payment-apps.

[4] Id.

[5] Id.

[6] 12 U.S.C. § 5492.

[7] Enforcement by the numbers, Consumer Fin. Prot. Bureau (Nov. 8, 2023), https://www.consumerfinance.gov/enforcement/enforcement-by-the-numbers.

[8] CFTC Releases Annual Enforcement Results, Commodity Futures Trading Comm’n (Oct. 20, 2022), https://www.cftc.gov/PressRoom/PressReleases/8613-22.

[9] CFPB Supervision and Examination Manual, Consumer Fin. Prot. Bureau at Overview 10 (Mar. 2017), https://files.consumerfinance.gov/f/documents/cfpb_supervision-and-examination-manual_2023-09.pdf.

[10] An Introduction to CFPB’s Exams of Financial Companies, Consumer Fin. Prot. Bureau 4 (Jan. 9, 2023), https://files.consumerfinance.gov/f/documents/cfpb_an-introduction-to-cfpbs-exams-of-financial-companies_2023-01.pdf.

[11] 12 U.S.C. §5563(a).

[12] CFPB Proposes New Federal Oversight of Big Tech Companies and Other Providers of Digital Wallets and Payment Apps, Consumer Fin. Prot. Bureau (Nov. 7, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-new-federal-oversight-of-big-tech-companies-and-other-providers-of-digital-wallets-and-payment-apps.

[13] Id.

[14] 12 U.S.C. § 5514.

[15] Id.

[16] Defining Larger Participants of a Market for General-Use Digital Consumer Payment, Consumer Fin. Prot. Bureau 3 (Nov. 7, 2023), https://files.consumerfinance.gov/f/documents/cfpb_nprm-digital-payment-apps-lp-rule_2023-11.pdf.

[17] Id. at 4.

[18] Rohit Chopra, Prepared Remarks of CFPB Director Rohit Chopra at the Brookings Institution Event on Payments in a Digital Century, Consumer Fin. Prot. Bureau (Oct. 6, 2023), https://www.consumerfinance.gov/about-us/newsroom/prepared-remarks-of-cfpb-director-rohit-chopra-at-the-brookings-institution-event-on-payments-in-a-digital-century.

[19] 12 CFR § 1090.103(a).

[20] 12 CFR § 1090.103(b).

[21] 12 CFR § 1090.103(c).

[22] 12 CFR § 1090.103(d).

[23] Defining Larger Participants of a Market for General-Use Digital Consumer Payment, Consumer Fin. Prot. Bureau 6 (Nov. 7, 2023), https://files.consumerfinance.gov/f/documents/cfpb_nprm-digital-payment-apps-lp-rule_2023-11.pdf.

[24] Id.

[25] Id. at 5.

[26] Id. at 6.

[27] An Introduction to CFPB’s Exams of Financial Companies, Consumer Fin. Prot. Bureau 3 (Jan. 9, 2023), https://files.consumerfinance.gov/f/documents/cfpb_an-introduction-to-cfpbs-exams-of-financial-companies_2023-01.pdf.

[28] Id.

[29] Id.

[30] Id.

[31] CFPB Orders Servicio UniTeller to Refund Fees and Pay Penalty for Failing to Follow Remittance, Consumer Fin. Prot. Bureau (Dec. 22, 2022), https://www.consumerfinance.gov/enforcement/actions/servicio-uniteller-inc.

[32] CFPB Orders Wells Fargo to Pay $3.7 Billion for Widespread Mismanagement of Auto Loans, Mortgages, and Deposit Accounts, Consumer Fin. Prot. Bureau (Dec. 20, 2022), https://www.consumerfinance.gov/enforcement/actions/wells-fargo-bank-na-2022.


A Whistleblower Reveals…—How Can the Legal System Protect and Encourage Whistleblowing?

Vivian Lin, MJLST Staffer

In July 2022, Twitter’s former head of security, Peiter Zatko, filed a 200+ page complaint with Congress and several federal agencies, disclosing potential major security problems at Twitter that pose a threat to its users and to national security.[1] Though it remains unclear whether these allegations have been confirmed, the disclosure drew significant attention because of its data privacy implications and the calls for whistleblower protection it prompted. Whistleblowers play an important role in detecting major issues in corporations and the government. A 2007 survey reported that in private companies, professional auditors were able to detect only 19% of fraud incidents, while whistleblowers exposed 43% of them.[2] In fact, this recent Twitter scandal, along with Facebook’s online safety scandal in 2021[3] and the famous national security scandal disclosed by Edward Snowden, were all revealed by inside whistleblowers. Without these disclosures, the public may never learn of incidents that involve their personal information and security.

An Overview of the U.S. Whistleblower Protection Regulations

Whistleblower laws aim to protect individuals who report illegal or unethical activities in their workplace or government agency. The primary federal law protecting whistleblowers is the Whistleblower Protection Act (WPA), passed in 1989. The WPA protects federal employees who report violations such as gross mismanagement, gross waste of funds, abuse of authority, or dangers to public health or safety.[4]

In addition to the WPA, other federal laws provide industry-specific whistleblower protections in the private sector. For example, the Sarbanes-Oxley Act (SOX) was enacted in response to the corporate accounting scandals of the early 2000s. It requires public companies to establish and maintain internal controls to ensure the accuracy of their financial statements. Whistleblowers who report violations of securities law can receive protection against retaliation, including reinstatement, back pay, and special damages. To encourage more whistleblowers to come forward with potential securities violations, Congress passed the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) in 2010, which provides incentives and additional protections for whistleblowers. The Securities and Exchange Commission (SEC) established its whistleblower protection program under Dodd-Frank to reward qualified whistleblowers whose tips lead to a successful SEC sanction. Finally, the False Claims Act (FCA) allows individuals to file lawsuits on behalf of the government against entities that have committed fraud against the government. Whistleblowers who report fraud under the FCA can receive a percentage of the amount recovered by the government. In general, these laws protect whistleblowers in the private corporate setting, providing anti-retaliation safeguards and incentives for reporting violations.

Concerns Involved in Whistleblowing and Related Laws

While whistleblower laws in the United States provide important protections for individuals who speak out against illegal or unethical activities, there are still risks associated with whistleblowing. Even with the anti-retaliation provisions, whistleblowers still face retaliation from their employer, such as demotion or termination, and may face difficulties finding new employment in their field. For example, a 2011 report indicated that while the percentage of employees who noticed wrongdoings at their workplaces decreased from the 1992 survey, about one-third of those who called out wrongdoings and were identified as whistleblowers experienced retaliation in the form of threats and/or reprisals.[5]

Besides the fear of retaliation, another concern is the low success rate when whistleblowers step up to make a claim under the WPA. A 2015 study analyzed 151 cases in which employees sought protection under the WPA and found that 79% of the cases were decided in favor of the federal government.[6] Such a low success rate, in addition to potential retaliation, likely discourages employees from coming forward when they identify wrongdoing at their workplace.

A third problem with current whistleblowing law is that financial incentives do not work as effectively as expected and might negatively impact corporate governance. From an incentives perspective, bounty hunting can actually discourage whistleblowers when poorly designed. For example, Dodd-Frank provides monetary rewards for reporting financial fraud that allows the SEC to impose a sanction of more than $1 million on the violator; a study shows that an employee who discovers wrongdoing unlikely to lead to a sanction over $1 million is less likely to report it promptly.[7] From a corporate governance perspective, a potential whistleblower might turn to a regulatory agency for the reward rather than reporting through the company’s internal compliance program, which would give the company the opportunity to do the right thing on its own.[8]

Potential Changes

There are several ways in which current whistleblower regulations could be improved. First, to encourage employees to stand up and identify wrongdoing at the workplace, the SEC’s whistleblower protection program should drop the $1 million threshold for a potential reward: those who report illegal behavior that might not result in a $1 million sanction should also be rewarded.[9] Second, to deter retaliation, compensation for retaliation should be proportionate to the severity of the wrongdoing uncovered.[10] Currently, statutes mostly offer back pay, front pay, reinstatement, and the like as compensation for retaliation, while punitive damages beyond that are rare. This mechanism does not recognize the public interest at stake in retaliation cases—the public benefits from the whistleblower’s act while she bears the risk of retaliation. Finally, bounty programs might not be the right approach, given that many whistleblowers are motivated more by moral conviction than by money. A robust system that ensures whistleblowers’ reports are thoroughly investigated, coupled with stronger protections from retaliation, might work better than bounty programs.

In conclusion, whistleblowers play a crucial role in exposing illegal and unethical activities within organizations and government agencies. While current U.S. whistleblower protection regulations offer some safeguards, shortcomings remain that may discourage employees from reporting wrongdoing. Improving whistleblower protections against retaliation, expanding rewards to include a wider range of disclosures, and refining the approach to investigations are essential steps to strengthen the system. By ensuring that their disclosures are thoroughly investigated and their lives are not severely impacted, we can encourage more whistleblowers to come forward with useful information, which will better protect the public interest and maintain a higher standard of transparency, accountability, and corporate governance in society.

Notes

[1] Donie O’Sullivan et al., Ex-Twitter Exec Blows The Whistle, Alleging Reckless and Negligent Cybersecurity Policies, CNN (Aug. 24, 2022, 5:59 AM EDT), https://edition.cnn.com/2022/08/23/tech/twitter-whistleblower-peiter-zatko-security/index.html.

[2] Kai-D. Bussmann, Economic Crime: People, Culture, and Controls 10 (2007).

[3] Ryan Mac & Cecilia Kang, Whistle-Blower Says Facebook ‘Chooses Profits Over Safety’, N.Y. Times (Oct. 3, 2021), https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html.

[4] Whistleblower Protection, Office of Inspector General, https://www.oig.dhs.gov/whistleblower-protection#:~:text=The%20Whistleblower%20Protection%20Act%20 (last accessed: Mar. 5, 2023).

[5] U.S. Merit Systems Protection Board, Blowing the Whistle: Barriers to Federal Employees Making Disclosures 27 (2011).

[6] Shelley L. Peffer et al., Whistle Where You Work? The Ineffectiveness of the Federal Whistleblower Protection Act of 1989 and the Promise of the Whistleblower Protection Enhancement Act of 2012, 35 Review of Public Personnel Administration 70 (2015).

[7] Leslie Berger et al., Hijacking the Moral Imperative: How Financial Incentives Can Discourage Whistleblower Reporting, 36 Auditing: A Journal of Practice & Theory 1 (2017).

[8] Matt A. Vega, Beyond Incentives: Making Corporate Whistleblowing Moral in the New Era of Dodd-Frank Act “Bounty Hunting”, 45 Conn. L. Rev. 483.

[9] Geoffrey C. Rapp, Mutiny by the Bounties? The Attempt to Reform Wall Street by the New Whistleblower Provisions of the Dodd-Frank Act, 2012 B.Y.U.L. Rev. 73.

[10] David Kwok, The Public Wrong of Whistleblower Retaliation, 96 Hastings L.J. 1225.


Data Privacy Regulations in 2023: Is the New Standard Burdensome?

Yolanda Li, MJLST Staffer

Beginning in 2023, businesses will see enhanced regulations on data privacy. Legal requirements governing company-held data have increased, in protection of companies’ customers, as a number of proposed data security laws and regulations came into effect in 2023: specifically, the FTC Safeguards Rule and the NIS2 Directive.

The FTC Safeguards Rule

The FTC Safeguards Rule came into force in December 2022. The FTC requires non-banking financial institutions “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.”[1] Non-banking financial institutions affected by this rule include mortgage brokers, motor vehicle dealers, and payday lenders. The Safeguards Rule is promulgated under the Gramm-Leach-Bliley Act of 1999, which requires financial institutions to “explain their information-sharing practices to their customers and to safeguard sensitive data.”[2] Financial institutions include companies that offer consumer financial products or services like loans, insurance, and financial or investment advice.[3] Specifically, the rule requires that covered financial institutions “designate a qualified individual to oversee their information security program, develop a written risk assessment, limit and monitor who can access sensitive customer information, encrypt all sensitive information, train security personnel, develop an incident response plan, periodically assess the security practices of service providers, and implement multi-factor authentication or another method with equivalent protection for any individual accessing customer information.”

One specific question that arises is whether the FTC Safeguards Rule will truly elevate data privacy standards. On its face, the FTC Safeguards Rule does not run counter to the FTC’s mission of protecting consumers. However, the economic cost and effect behind the rule is debatable. One concern is that the rule may impose substantial costs, especially on small businesses, as compliance may carry costs that small businesses, with less capital than large companies, cannot bear. According to Commissioner Christine S. Wilson, although financial institutions are already implementing many of the requirements under the rule, or have sophisticated programs that are easily adaptable to new obligations, the FTC Safeguards Rule still underestimates significant burdens.[4] Specifically, labor shortages have hampered efforts by financial institutions to implement information security systems, and supply chain issues have delayed the acquisition of equipment needed to update information systems. What is important to note is that, according to Commissioner Wilson, most of these factors are outside the control of the financial institutions. Implementing a heightened standard would thus cause unfairness, especially to small financial institutions, which have even more trouble obtaining the necessary equipment during times of supply chain and labor shortages.

Recognizing such difficulties, the FTC did offer a degree of leniency in implementing the rule. Specifically, the FTC extended the compliance deadline by six months, primarily because of supply chain issues that could cause delays and a shortage of qualified personnel to implement information security programs. This extension benefits the Rule because it gives covered financial institutions time for adjustment and compliance.

Another concern is that the Rule’s mandates will not result in a significant reduction in data security risks for customers. The answer to this question is still uncertain, as the FTC Safeguards Rule only recently came into effect and the extension pushes implementation out even farther. One thing to note, however, is that during the rule-making process the FTC sought comments on the proposed Safeguards Rule and extended the public comment deadline by 60 days.[5] This fact may show that the FTC took careful consideration of how to most effectively reduce data security risks by giving the public ample time to weigh in.

NIS2 Directive

A corresponding law is the EU’s NIS2 Directive, which came into force on January 16, 2023. This EU-wide legislation provides a variety of legal measures to boost cybersecurity. Specifically, it requires member states to be appropriately equipped with response and information systems, sets up a Cooperation Group to facilitate the exchange of information among member states, and seeks to ensure a culture of security across sectors that rely heavily on critical infrastructure, including financial market infrastructure.[6] The Directive also contains a variety of security and notification requirements for service providers to comply with. The NIS2 Directive echoes the FTC Safeguards Rule to a large extent in its elevated standard of cybersecurity measures.

However, the NIS2 Directive takes a different approach by imposing duties on the European Union Agency for Cybersecurity (ENISA) itself. The Directive designates that ENISA assist Member States and the Cooperation Groups set up under the Directive by “identifying good practices in the Member States regarding the implementation of the NIS directive, supporting the EU-wide reporting process for cybersecurity incidents, by developing thresholds, templates and tools, agreeing on common approaches and procedures, and helping Member States to address common cybersecurity issues.”[7] Ordering the agency itself to facilitate the carrying out of the Directive may add to its likelihood of success. Although the outcome is uncertain, primarily because of the Directive’s broad language, the burdens on financial institutions will at least be lessened to a certain extent. What further distinguishes the NIS2 Directive from the FTC Safeguards Rule is that Member States are given 21 months to transpose the Directive into their national legislative frameworks.[8] This timeline offers more flexibility than the FTC Safeguards Rule’s extension. As the Directive passes through national legislative frameworks, financial institutions will have more time to prepare for and respond to the proposed changes.

In summary, data privacy laws are tightening globally, and the United States should look to and learn from the successes and failures of the EU’s Directive, as both jurisdictions are attempting to regulate a similar industry. That being said, regardless of the EU, financial institutions in the United States must begin paying attention to and complying with the FTC Safeguards Rule. Though the outcome of the Rule is uncertain, the six-month extension will at least offer a certain degree of flexibility.

Notes

[1] https://www.ftc.gov/news-events/news/press-releases/2022/11/ftc-extends-deadline-six-months-compliance-some-changes-financial-data-security-rule; 16 CFR 314.

[2] https://www.ftc.gov/business-guidance/privacy-security/gramm-leach-bliley-act.

[3] Id.

[4] Concurring Statement of Commissioner Christine S. Wilson, Regarding Delaying the Effective Date of Certain Provisions of the Recently Amended Safeguards Rule (Nov 2022).

[5] https://www.ftc.gov/news-events/news/press-releases/2019/05/ftc-extends-comment-deadline-proposed-changes-safeguards-rule.

[6] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive.

[7] https://www.enisa.europa.eu/topics/cybersecurity-policy/nis-directive-new#:~:text=On%2016%20January%202023%2C%20the,cyber%20crisis%20management%20structure%20(CyCLONe).

[8] Id.

 


Emptying the Nest: Recent Events at Twitter Prompt Class-Action Litigation, Among Other Things

Ted Mathiowetz, MJLST Staffer

You’d be forgiven if you thought the circumstances that led to Elon Musk ultimately acquiring Twitter would be the end of the drama for the social media company. In the past seven months, Musk went from becoming the largest shareholder of the company, to publicly feuding with then-CEO Parag Agrawal, to making an offer to take the company private for $44 billion, to deciding he didn’t want to purchase the company, to being sued by Twitter to force him to complete the deal. Eventually, two weeks before trial was scheduled, Musk purchased the company for the original, agreed-upon price.[1] Yet within the first two-and-a-half weeks of Musk taking Twitter private, the drama has continued, if not ramped up, with one lawsuit already filed and the specter of additional litigation looming.[2]

There’s been the highly controversial rollout and almost immediate suspension of Twitter Blue—Musk’s idea for increasing the reliability of information on Twitter while simultaneously helping ameliorate Twitter’s financial woes.[3] Essentially, users were able to pay $8 a month for verification, albeit without actually verifying their identity. Instead, their username would remain frozen at the time they paid for the service.[4] Users quickly created fake “verified” accounts for real companies and spread misinformation while armed with the “verified” check mark, duping both the public and investors. For example, a newly created account with the handle “@EliLillyandCo” paid for Twitter Blue and tweeted “We are excited to announce insulin is free now.”[5] Eli Lilly’s actual Twitter account, “@LillyPad,” had to tweet a message apologizing to those “who have been served a misleading message” from the fake account, after the pharmaceutical company’s shares dipped around 5% following the tweet.[6] In addition to Eli Lilly, several other companies, like Lockheed Martin, faced similar impersonation.[7] Twitter Blue was quickly suspended in the wake of these viral impersonations, and advertisers have continued to flee the company, affecting its revenue.[8]

Musk also pulled over 50 engineers from Tesla, the vehicle manufacturing company of which he is CEO, to help him in his reimagining of Twitter.[9] Among those engineers are the director of software development and the senior director of software engineering.[10] Pulling engineers from his publicly traded company to work on his separately owned private company almost assuredly raises questions about whether he is violating his fiduciary duty to Tesla’s shareholders, especially with Tesla’s share price falling 13% over the last week (as of November 9, 2022).[11]

The bulk of Twitter’s current legal issues reside in Musk’s decision to engage in mass-layoffs of employees at Twitter.[12] After his first week in charge, he sent out notices to around half of Twitter’s 7500 employees that they would be laid off, reasoning that cutbacks were necessary because Twitter was losing over $4 million per day.[13] Soon after the layoffs, a group of employees filed suit alleging that Twitter violated the Worker Adjustment and Retraining Act (WARN) by failing to give adequate notice.[14]

The Worker Adjustment and Retraining Notification (WARN) Act, passed in 1988, applies to employers with 100 or more employees[15] and mandates that an “employer shall not order a [mass layoff]” until it gives sixty days’ notice to the state and affected employees.[16] Compliance can also be achieved if, in lieu of giving notice, the employee is paid for the sixty-day notice period. In Twitter’s case, some employees were offered pay to comply with the sixty-day period after the initial lawsuit was filed,[17] though the lead plaintiff in the class action suit was allegedly laid off on November 1st with no notice or offer of severance pay.[18] Additionally, it appears that Twitter is now offering severance to employees in return for a signature releasing the company from liability in a WARN action.[19]

With regard to those who have not yet signed releases and were not given notice of a layoff, there is a question of what the penalties may be for Twitter and what potential defenses it may have. Each employee is entitled to “back pay for each day of violation” as well as benefits under their respective plan.[20] Furthermore, the employer is subject to a civil penalty of “not more than $500 for each day of violation” unless it pays its liability to each employee within three weeks of the layoff.[21] One possible defense Twitter may assert in response to this suit is that of “unforeseeable business circumstances.”[22] Considering Musk’s recent comments that Twitter may be headed for bankruptcy, as well as the saddling of the company with debt to purchase it (reportedly $13 billion, with $1 billion per year in interest payments),[23] there is a chance this defense could suffice. However, an unforeseen circumstance is strongly indicated when the circumstance is “outside the employer’s control,”[24] something that is arguable given the company’s recent conduct.[25] Additionally, Twitter would have to show that it has been exercising “commercially reasonable business judgment as would a similarly situated employer,” another burden that may be hard to overcome. In sum, it’s quite clear why Twitter is trying to keep this lawsuit from gaining traction by securing release waivers. It’s also clear that Twitter has learned its lesson from initially not offering severance, but it may be wading into other areas of employment law with its recent conduct.[26]
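To get a rough sense of the stakes, the statutory exposure described above can be sketched as simple arithmetic. This is an illustrative back-of-the-envelope calculation only, not legal analysis: the daily pay figure is hypothetical, and the sketch ignores benefits owed under an employee benefit plan and the statute's various offsets and caps beyond the sixty-day notice period.

```python
# Illustrative sketch of per-employee WARN Act exposure, based on the
# provisions cited above (29 U.S.C. § 2104). All dollar figures are
# hypothetical; real liability depends on offsets, benefits, and caps
# not modeled here.

def warn_exposure(daily_pay: float, days_without_notice: int,
                  paid_within_three_weeks: bool) -> dict:
    """Estimate back pay owed to one employee plus the civil penalty."""
    # Back pay accrues for each day of violation, up to the sixty-day
    # notice period the statute requires.
    violation_days = min(days_without_notice, 60)
    back_pay = daily_pay * violation_days

    # The $500-per-day civil penalty does not apply if the employer
    # pays its liability to each employee within three weeks of the layoff.
    civil_penalty = 0 if paid_within_three_weeks else 500 * violation_days
    return {"back_pay": back_pay, "civil_penalty": civil_penalty}

# Hypothetical employee earning $400/day, laid off with no notice,
# where the employer does not settle within three weeks.
print(warn_exposure(400.0, 60, False))
```

Under these assumed numbers, a single employee's back pay would be $24,000, with a further $30,000 civil penalty accruing if the employer misses the three-week safe harbor, which illustrates why securing release waivers matters so much to Twitter across thousands of laid-off employees.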

Notes

[1] Timeline of Billionaire Elon Musk’s to Control Twitter, Associated Press (Oct. 28, 2022), https://apnews.com/article/twitter-elon-musk-timeline-c6b09620ee0905e59df9325ed042a609.

[2] Annie Palmer, Twitter Sued by Employees After Mass Layoffs Begin, CNBC (Nov. 4, 2022), https://www.cnbc.com/2022/11/04/twitter-sued-by-employees-after-mass-layoffs-begin.html.

[3] Siladitya Ray, Twitter Blue: Signups for Paid Verification Appear Suspended After Impersonator Chaos, Forbes (Nov. 11, 2022), https://www.forbes.com/sites/siladityaray/2022/11/11/twitter-blue-new-signups-for-paid-verification-appear-suspended-after-impersonator-chaos/?sh=14faf76c385c; see also Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:43 PM), https://twitter.com/elonmusk/status/1589403131770974208?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[4] Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:35 PM), https://twitter.com/elonmusk/status/1589401231545741312?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[5] Steve Mollman, No, Insulin is not Free: Eli Lilly is the Latest High-Profile Casualty of Elon Musk’s Twitter Verification Mess, Fortune(Nov. 11, 2022), https://fortune.com/2022/11/11/no-free-insulin-eli-lilly-casualty-of-elon-musk-twitter-blue-verification-mess/.

[6] Id. Eli Lilly and Company (@LillyPad), Twitter (Nov. 10, 2022, 3:09 PM), https://twitter.com/LillyPad/status/1590813806275469333?s=20&t=4XvAAidJmNLYwSCcWtd4VQ.

[7] Mollman, supra note 5 (showing Lockheed Martin’s stock dipped around 5% as well following a tweet from a “verified” account saying arms sales were being suspended to various countries went viral).

[8] Herb Scribner, Twitter Suffers “Massive Drop in Revenue,” Musk Says, Axios (Nov. 4, 2022), https://www.axios.com/2022/11/04/elon-musk-twitter-revenue-drop-advertisers.

[9] Lora Kolodny, Elon Musk has Pulled More Than 50 Tesla Employees into his Twitter Takeover, CNBC (Oct. 31, 2022), https://www.cnbc.com/2022/10/31/elon-musk-has-pulled-more-than-50-tesla-engineers-into-twitter.html.

[10] Id.

[11] Trefis Team, Tesla Stock Falls Post Elon Musk’s Twitter Purchase. What’s Next?, NASDAQ (Nov. 9, 2022), https://www.nasdaq.com/articles/tesla-stock-falls-post-elon-musks-twitter-purchase.-whats-next.

[12] Dominic Rushe, et al., Twitter Slashes Nearly Half its Workforce as Musk Admits ‘Massive Drop’ in Revenue, The Guardian (Nov. 4, 2022), https://www.theguardian.com/technology/2022/nov/04/twitter-layoffs-elon-musk-revenue-drop.

[13] Id.

[14] Phil Helsel, Twitter Sued Over Short-Notice Layoffs as Elon Musk’s Takeover Rocks Company, NBC News (Nov. 4, 2022), https://www.nbcnews.com/business/business-news/twitter-sued-layoffs-days-elon-musk-purchase-rcna55619.

[15] 29 USC § 2101(a)(1).

[16] 29 USC § 2102(a).

[17] On Point, Boston Labor Lawyer Discusses her Class Action Lawsuit Against Twitter, WBUR Radio Boston (Nov. 10, 2022), https://www.wbur.org/radioboston/2022/11/10/shannon-liss-riordan-musk-class-action-twitter-suit (discussing recent developments in the case with attorney Shannon Liss-Riordan).

[18] Complaint at 5, Cornet et al. v. Twitter, Inc., Docket No. 3:22-cv-06857 (N.D. Cal. 2022).

[19] Id. at 6 (outlining previous attempts by another Musk company, Tesla, to get around WARN Act violations by tying severance agreements to waiver of litigation rights); see also On Point, supra note 17.

[20] 29 USC § 2104.

[21] Id.

[22] 20 CFR § 639.9 (2012).

[23] Hannah Murphy, Musk Warns Twitter Bankruptcy is Possible as Executives Exit, Financial Times (Nov. 10, 2022), https://www.ft.com/content/85eaf14b-7892-4d42-80a9-099c0925def0.

[24] Id.

[25] See e.g., Murphy supra note 22.

[26] See Pete Syme, Elon Musk Sent a Midnight Email Telling Twitter Staff to Commit to an ‘Extremely Hardcore’ Work Schedule – or Get Laid off with Three Months’ Severance, Business Insider (Nov. 16, 2022), https://www.businessinsider.com/elon-musk-twitter-staff-commit-extremely-hardcore-work-laid-off-2022-11; see also Jaclyn Diaz, Fired by Tweet: Elon Musk’s Latest Actions are Jeopardizing Twitter, Experts Say. NPR (Nov. 17, 2022), https://www.npr.org/2022/11/17/1137265843/elon-musk-fires-employee-by-tweet (discussing firing of an employee for correcting Musk on Twitter and potential liability for a retaliation claim under California law).