November 2024

The Power of Preference or Monopoly? Unpacking Google’s Search Engine Domination

Donovan Ennevor, MJLST Staffer

When searching for an answer to a query online, would you ever use a different search engine than Google? For most people, the answer is almost certainly no. Google’s search engine has achieved such market domination that “to Google” has become a verb in the English language.[1] Google controls 90% of the U.S. search engine market, with its closest competitors, Yahoo and Bing, holding around 3% each.[2] Is this simply because Google offers a superior product, or is there some other, more nefarious reason?

According to the Department of Justice (“DOJ”), the answer is the latter: Google has dominated its competitors by engaging in illegal practices and creating a monopoly. Federal Judge Amit Mehta agreed with the DOJ’s position and ruled in August 2024 that Google’s market domination was a monopoly achieved through improper means.[3] The remedies for Google’s breach of antitrust law are yet to be determined; however, their consequences could have far-reaching implications for the future of Google and Big Tech.

United States v. Google LLC

In October 2020, the DOJ and 11 states filed a civil suit against Google in the U.S. District Court for the District of Columbia, alleging violations of U.S. antitrust laws.[4] A coalition of 35 states, Guam, Puerto Rico, and Washington D.C. filed a similar lawsuit in December 2020.[5] In 2021, the cases were consolidated into a single proceeding to address the overlapping claims.[6] An antitrust case of this magnitude had not been brought in nearly two decades.[7]

The petitioners’ complaint argued that Google’s dominance did not arise solely through superior technology, but rather through exclusionary agreements designed to stifle competition in the online search engine and search advertising markets.[8] The complaint alleged that Google maintained its monopolies through practices such as entering into exclusivity agreements that prohibited the preinstallation of competitors’ search engines, forcing preinstallation of Google’s search engine in prime mobile device locations, and making it undeletable regardless of consumer preference.[9] For example, Google’s agreement with Apple required that all Apple products and tools have Google as the preinstalled default—essentially an exclusive—search engine.[10] Google also allegedly used its monopoly profits to fund payments securing preferential treatment on devices, web browsers, and other search access points, creating a self-reinforcing cycle of monopolization.[11]

According to the petitioners, these practices not only limited competitor opportunities, but also harmed consumers by reducing search engine options and diminishing quality, particularly in areas like privacy and data use.[12] Furthermore, Google’s dominance in search advertising has allowed it to charge higher prices, impacting advertisers and lowering service quality—outcomes unlikely in a more competitive market.[13]

Google rebutted the petitioners’ argument, asserting instead that its search product is preferred due to its superiority and is freely chosen by its consumers.[14] Google also noted that if users wish to switch to a different search engine, they can do so easily.[15]

However, Judge Mehta agreed with the petitioners and held that Google’s market dominance in search and search advertising constituted a monopoly achieved through exclusionary practices that violate U.S. antitrust laws.[16] The case now moves to the remedy determination phase: at a hearing in April 2025, the DOJ and Google will argue over what remedies are appropriate to impose.[17]

The Proposed Remedies and Implications

In November, the petitioners filed their final proposed remedies—both behavioral and structural—with the court.[18] Behavioral remedies govern a company’s conduct, whereas structural remedies generally refer to reorganization and/or divestment.[19] The proposed behavioral remedies include barring Google from entering exclusive preinstallation agreements and requiring Google to license certain indexes, data, and models that drive its search engine.[20] These remedies would help create more opportunities for competing search engines to gain visibility and improve their search capabilities and ad services. The petitioners’ filing noted that they would also pursue structural remedies, including forcing Google to break up or divest its Chrome browser and Android mobile operating system.[21] To ensure Google adheres to these changes, the petitioners proposed appointing a court-monitored technical committee to oversee Google’s compliance.[22]

It could be many years before any of the proposed remedies are actually instituted, given that Google has indicated it will appeal Judge Mehta’s ruling.[23] Additionally, given precedent, it is unlikely that any structural remedies will be imposed or enforced.[24] However, any remedies ultimately approved would set a precedent for regulatory control over Big Tech, signaling that the U.S. government is willing to take strong steps to curb monopolistic practices. This could encourage further action against other tech giants and redefine regulatory expectations across the industry, particularly around data transparency and competition in digital advertising.

 

Notes

[1] See Virginia Heffernan, Just Google It: A Short History of a Newfound Verb, Wired (Nov. 15, 2017, 7:00 AM), https://www.wired.com/story/just-google-it-a-short-history-of-a-newfound-verb/.

[2] Justice Department Calls for Sanctions Against Google in Landmark Antitrust Case, Nat’l Pub. Radio (Oct. 9, 2024, 12:38 AM), https://www.npr.org/2024/10/09/nx-s1-5146006/justice-department-sanctions-google-search-engine-lawsuit [hereinafter Calls for Sanctions Against Google].

[3] United States v. Google LLC, No. 1:20-cv-03010-APM, 2024 WL 3647498, at *1, *134 (D.D.C. Aug. 5, 2024).

[4] Justice Department Sues Monopolist Google For Violating Antitrust Laws, U.S. Dep’t of Just. (Oct. 20, 2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws [hereinafter Justice Department Calls for Sanctions].

[5] Dara Kerr, United States Takes on Google in Biggest Tech Monopoly Trial of 21st Century, Nat’l Pub. Radio (Sept. 12, 2023, 5:00 AM), https://www.npr.org/2023/09/12/1198558372/doj-google-monopoly-antitrust-trial-search-engine.

[6] Tracker Detail US v. Google LLC / State of Colorado v. Google LLC, TechPolicy.Press, https://www.techpolicy.press/tracker/us-v-google-llc/ (last visited Nov. 20, 2024).

[7] Calls for Sanctions Against Google, supra note 2 (“The last antitrust case of this magnitude to make it to trial was in 1998, when the Justice Department sued Microsoft.”).

[8] Justice Department Calls for Sanctions, supra note 4.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Kerr, supra note 5.

[15] Id.

[16] United States v. Google LLC, No. 1:20-cv-03010-APM, 2024 WL 3647498, at *1, *4 (D.D.C. Aug. 5, 2024).

[17] Calls for Sanctions Against Google, supra note 2.

[18] Steve Brachmann, DOJ, State AGs File Proposed Remedial Framework in Google Search Antitrust Case, IPWatchdog (Oct. 13, 2024, 12:15 PM), https://ipwatchdog.com/2024/10/13/doj-state-ags-file-proposed-remedial-framework-google-search-antitrust-case/id=182031/.

[19] Dan Robinson, Uncle Sam May Force Google to Sell Chrome Browser, or Android OS, The Reg. (Oct. 9, 2024, 12:56 PM), https://www.theregister.com/2024/10/09/usa_vs_google_proposed_remedies/.

[20] Brachmann, supra note 18.

[21] Exec. Summary of Plaintiffs’ Proposed Final Judgment at 3–4, United States v. Google LLC, No. 1:20-cv-03010-APM (D.D.C. Nov. 20, 2024).

[22] Id.

[23] See Jane Wolfe & Miles Kruppa, Google Loses Antitrust Case Over Search-Engine Dominance, Wall Street J. (Aug. 5, 2024, 5:02 PM), https://www.wsj.com/tech/google-loses-federal-antitrust-case-27810c43?mod=article_inline.

[24] See Makenzie Holland, Google Breakup Unlikely in Event of Guilty Verdict, TechTarget (Oct. 11, 2023), https://www.techtarget.com/searchcio/news/366555177/Google-breakup-unlikely-in-event-of-guilty-verdict; see also Michael Brick, U.S. Appeals Court Overturns Microsoft Antitrust Ruling, N.Y. Times (June 28, 2001), https://www.nytimes.com/2001/06/28/business/us-appeals-court-overturns-microsoft-antitrust-ruling.html (summarizing the U.S. Court of Appeals decision overturning the structural remedies imposed on Microsoft in an antitrust case).

 

 


Privacy at Risk: Analyzing DHS AI Surveillance Investments

Noah Miller, MJLST Staffer

The concept of widespread surveillance of public areas monitored by artificial intelligence (“AI”) may sound like it comes right out of a dystopian novel, but key investments by the Department of Homeland Security (“DHS”) could make this a reality. Under the Biden Administration, the U.S. has acted quickly and strategically to adopt artificial intelligence as a tool to realize national security objectives.[1] In furtherance of President Biden’s executive goals concerning AI, the DHS has been making investments in surveillance systems that utilize AI algorithms.

Despite the substantial interest in protecting national security, Patrick Toomey, deputy director of the ACLU National Security Project, has criticized the Biden administration for allowing national security agencies to “police themselves as they increasingly subject people in the United States to powerful new technologies.”[2] Notably, these investments have not been tailored towards high-security locations—like airports. Instead, they include surveillance of “soft targets”—high-traffic areas with limited security: “Examples include shopping areas, transit facilities, and open-air tourist attractions.”[3] Currently, surveilling most public areas is infeasible due to the number of people required to review footage; however, emerging AI algorithms would allow this work to be done automatically. While enhancing security protections in soft targets is a noble and possibly desirable initiative, the potential privacy ramifications of widespread autonomous AI surveillance are extreme. Current Fourth Amendment jurisprudence offers little resistance to this form of surveillance, and the DHS has been both developing this surveillance technology itself and outsourcing these projects to private corporations.

To foster innovation in combating threats to soft targets, the DHS has created a center called Soft Target Engineering to Neutralize the Threat Reality (“SENTRY”).[4] One of SENTRY’s research areas involves developing “real-time management of threat detection and mitigation.”[5] One project in this research area seeks to create AI algorithms that can detect threats in public and crowded areas.[6] Once the algorithm has detected a threat, the particular incident would be sent to a human for confirmation.[7] This would be a substantially more efficient form of surveillance than is currently widely available.

Along with the research conducted through SENTRY, the DHS has been investing in private companies to develop AI surveillance technologies through the Silicon Valley Innovation Program (“SVIP”).[8] Through the SVIP, the DHS has awarded three companies funding to develop AI surveillance technologies that can detect “anomalous events via video feeds” to improve security in soft targets: Flux Tensor, Lauretta AI, and Analytical AI.[9] First, Flux Tensor has a pilot-ready prototype that applies “flexible object detection algorithms” to video feeds to track and pinpoint movements of interest.[10] The technology is used to distinguish human movements and actions from the environment—e.g., weather, glare, and camera movements.[11] Second, Lauretta AI is adjusting its established activity-recognition AI to utilize “multiple data points per subject to minimize false alerts.”[12] The technology periodically generates automated reports of detected incidents, categorized by relative severity.[13] Third, Analytical AI is in the proof-of-concept demo phase with AI algorithms that can autonomously track objects in relation to people within a perimeter.[14] The company has already created algorithms that can screen for prohibited items and “on-person threats” (e.g., weapons).[15] All of these technologies are in early stages, so the DHS is unlikely to deploy them in the imminent future.

Assuming these AI algorithms are effective and come to fruition, current Fourth Amendment protections seem insufficient to protect against rampant use of AI surveillance in public areas. In Kyllo v. United States, the Court placed an important limit on law enforcement’s use of new technologies, holding that when new sense-enhancing technology, not in general public use, is utilized to obtain information from a constitutionally protected area, its use constitutes a search.[16] Unlike in Kyllo, where the police used thermal imaging to obtain temperature readings from various areas of a house, people subject to AI surveillance in public areas would not be in constitutionally protected areas.[17] Because people subject to this surveillance would be in public places, they would not have a reasonable expectation of privacy in their movements; therefore, this form of surveillance likely would not constitute a search under the prevailing Fourth Amendment search analysis.[18]

While the scope and accuracy of this new technology are still to be determined, policymakers and agencies need to implement proper safeguards and proceed cautiously. In the best scenario, this technology can keep citizens safe while mitigating the impact on the public’s privacy interests. In the worst scenario, this technology could effectively turn our public spaces into security checkpoints. Regardless of how relevant actors proceed, this new technology would likely result in at least some decline in the public’s privacy interests. Policymakers should not make a Faustian bargain for the sake of maintaining social order.

 

Notes

[1] See generally Joseph R. Biden Jr., Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, The White House (Oct. 24, 2024), https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/ (explaining how the executive branch intends to utilize artificial intelligence in relation to national security).

[2] ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections, ACLU (Oct. 24, 2024, 12:00 PM), https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections.

[3] Jay Stanley, DHS Focus on “Soft Targets” Risks Out-of-Control Surveillance, ACLU (Oct. 24, 2024), https://www.aclu.org/news/privacy-technology/dhs-focus-on-soft-targets-risks-out-of-control-surveillance.

[4] See Overview, SENTRY, https://sentry.northeastern.edu/overview/#VSF.

[5] Real-Time Management of Threat Detection and Mitigation, SENTRY, https://sentry.northeastern.edu/research/real-time-threat-detection-and-mitigation/.

[6] See An Artificial Intelligence-Driven Threat Detection and Real-Time Visualization System in Crowded Places, SENTRY, https://sentry.northeastern.edu/research-project/an-artificial-intelligence-driven-threat-detection-and-real-time-visualization-system-in-crowded-places/.

[7] See id.

[8] See, e.g., SVIP Portfolio and Performers, DHS, https://www.dhs.gov/science-and-technology/svip-portfolio.

[9] Id.

[10] See Securing Soft Targets, DHS, https://www.dhs.gov/science-and-technology/securing-soft-targets.

[11] See pFlux Technology, Flux Tensor, https://fluxtensor.com/technology/.

[12] See Securing Soft Targets, supra note 10.

[13] See Security, Lauretta AI, https://lauretta.io/technologies/security/.

[14] See Securing Soft Targets, supra note 10.

[15] See Technology, Analytical AI, https://www.analyticalai.com/technology.

[16] Kyllo v. United States, 533 U.S. 27, 33 (2001).

[17] Cf. id.

[18] See generally, Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring) (explaining the test for whether someone may rely on an expectation of privacy).

 

 


AI and Predictive Policing: Balancing Technological Innovation and Civil Liberties

Alexander Engemann, MJLST Staffer

To maximize their effectiveness, police agencies are constantly looking to use the most sophisticated preventative methods and technologies available. Predictive policing is one such technique, fusing data analysis, algorithms, and information technology to anticipate and prevent crime. The approach identifies patterns in data to forecast when and where crime will occur, allowing agencies to take measures to prevent it.[1] Now, engulfed in an artificial intelligence (“AI”) revolution, law enforcement agencies are eager to take advantage of these developments to augment controversial predictive policing methods.[2]

In precincts that use predictive policing strategies, ample amounts of data are used to categorize citizens with basic demographic information.[3] Now, machine learning and AI tools are augmenting this data, which, according to one vendor, “identifies where and when crime is most likely to occur, enabling [law enforcement] to effectively allocate [their] resources to prevent crime.”[4]

Both predictive policing and AI have faced significant challenges concerning issues of equity and discrimination. In response to these concerns, the European Union has taken proactive steps promulgating sophisticated rules governing AI applications within its territory, continuing its tradition of leading in regulatory initiatives.[5] Dubbed the “Artificial Intelligence Act”, the Union clearly outlined its goal of promoting safe, non-discriminatory AI systems.[6]

Back home, we’ve failed to keep a similar legislative pace, even with certain institutions sounding the alarm.[7] Predictive policing methods have faced similar criticism. In an issue brief, the NAACP emphasized, “[j]urisdictions who use [Artificial Intelligence] argue it enhances public safety, but in reality, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”[8] This technological and ideological marriage clearly poses discriminatory risks for law enforcement agencies in a nation where a black person is already far more likely to be stopped without just cause than their white counterparts.[9]

Police agencies are bullish about the technology. Police Chief Magazine, the official publication of the International Association of Chiefs of Police, paints these techniques in a more favorable light, stating, “[o]ne of the most promising applications of AI in law enforcement is predictive policing… Predictive policing empowers law enforcement to predict potential crime hotspots, ultimately aiding in crime prevention and public safety.”[10] In this space, facial recognition software is gaining traction among law enforcement agencies as a powerful tool for identifying suspects and enhancing public safety. Clearview AI stresses that its product “[helps] law enforcement and governments in disrupting and solving crime.”[11]

Predictive policing methods enhanced by AI technology show no signs of slowing down.[12] The obvious advantages of these systems cannot be ignored: they allow agencies to better allocate resources and manage their staff. However, as law enforcement agencies adopt these technologies, it is important to remain vigilant in holding them accountable for the potential ethical implications and biases embedded within their systems. A comprehensive framework for accountability and transparency, similar to the European Union’s guidelines, must be established to ensure that deploying predictive policing and AI tools does not come at the expense of marginalized communities.[13]

 

Notes

[1] Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265–67 (2012).

[2] Eric M. Baker, I’ve got my AI on You: Artificial Intelligence in the Law Enforcement Domain, 47 (Mar. 2021) (Master’s thesis).

[3] Id. at 48.

[4] Id. at 49 (citing Walt L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RR-233-NIJ (Santa Monica, CA: RAND, 2013), 4, https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf).

[5] Commission Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.

[6] Lukas Arnold, How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination, Penn. J. L. & Soc. Change (May 14, 2024), https://www.law.upenn.edu/live/news/16742-how-the-european-unions-ai-act-provides#_ftn1.

[7] See Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633, 664 (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5445&context=flr (“Database screening and digital watchlisting systems, in fact, can serve as complementary and facially colorblind supplements to mass incarcerations systems. The purported colorblindness of mandatory sentencing… parallels the purported colorblindness of mandatory database screening and vetting systems”).

[8] NAACP, Issue Brief: The Use of Artificial Intelligence in Predictive policing, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 2, 2024).

[9] Will Douglas Heaven, Predictive Policing Algorithms Are Racist. They Need to Be Dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (citing OJJDP Statistical Briefing Book, Estimated Number of Arrests by Offense and Race, 2020 (July 8, 2022), https://ojjdp.ojp.gov/statistical-briefing-book/crime/faqs/ucr_table_2).

[10] See The Police Chief, Int’l Ass’n of Chiefs of Police, https://www.policechiefmagazine.org (last visited Nov. 2, 2024); Brandon Epstein, James Emerson & ChatGPT, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives, Police Chief Online (Apr. 3, 2024).

[11] Clearview AI, https://www.clearview.ai/ (last visited Nov. 3, 2024).

[12] But see Nicholas Ibarra, Santa Cruz Becomes First US City to Approve Ban on Predictive Policing, Santa Cruz Sentinel (June 23, 2020), https://evidentchange.org/newsroom/news-of-interest/santa-cruz-becomes-first-us-city-approve-ban-predictive-policing/.

[13] See also Roy Maurer, New York City to Require Bias Audits of AI-Type HR Technology, Society for Human Resource Management (Dec. 19, 2021), https://www.shrm.org/topics-tools/news/technology/new-york-city-to-require-bias-audits-ai-type-hr-technology.

 


The Introduction of “Buy Now, Pay Later” Products

Yanan Tang, MJLST Staffer

As of June 2024, it is estimated that more than half of Americans turn to Buy Now, Pay Later (“BNPL”) options to purchase products during financially stressful times.[1] BNPL allows customers to split the cost of a purchase into four equal payments: a down payment of 25 percent, with the remaining cost covered by three periodic installments.[2]

 

Consumer Financial Protection Bureau’s Interpretive Rules

In response to the popularity of BNPL products, the Consumer Financial Protection Bureau (“CFPB”) took action to regulate them.[3] In issuing its interpretive rules for BNPL, the CFPB aims to outline how these products fit within existing credit regulations. The rules were introduced in May 2024, drew mixed feedback during a 60-day review period, and became effective in July, applying credit card-like consumer protections to BNPL services under the Truth in Lending Act (“TILA”).

Specifically, the interpretive rules assert that these BNPL providers meet the criteria for being “card issuers” and “creditors”, and therefore should be subject to relevant regulations of TILA, which govern credit card disputes and refund rights.[4] Under CFPB’s interpretive rules, BNPL firms are required to investigate disputes, refund returned products or voided services, and provide billing statements.[5]

This blog will first explain the distinction between interpretive rules and notice-and-comment rulemaking to contextualize the CFPB’s regulatory approach. It will then explore the key consumer protections these rules aim to enforce and examine the mixed responses from various stakeholders. Finally, it will analyze the Financial Technology Association’s lawsuit challenging the CFPB’s rules and consider the broader implications for BNPL regulation.

 

Interpretive Rules and Notice-and-Comment Rulemaking Explained

In general, interpretive rules are non-binding and do not require public input, while notice-and-comment rules are binding with the force of law and must follow a formal process, including public feedback, as outlined in § 553 of the Administrative Procedure Act (“APA”).[6] The “legal effect test” from American Mining Congress v. MSHA helps determine whether a rule is interpretive or legislative by examining factors like legislative authority, the need for a legal basis for enforcement, and whether the rule amends an existing law.[7] While courts vary in the factors they use to distinguish legislative from interpretive rules, they generally agree that agencies cannot hide real regulations in interpretive rules.

 

Comments Received from Consumer Groups, Traditional Banks, and BNPL Providers

After soliciting comments, the CFPB received conflicting feedback on the proposed interpretive rules.[8] Consumer groups generally supported the rules but urged the agency to take further action to protect consumers who use BNPL credit.[9] Traditional banks also largely supported the rule, reasoning that BNPL’s digital user accounts are similar to those of credit cards and should be regulated similarly.[10] In contrast, major BNPL providers protested the CFPB’s rule.[11] Many BNPL providers, like PayPal, raised concerns about administrative procedure and urged the CFPB to proceed through notice-and-comment rulemaking.[12] In sum, the conflicting comments highlight the challenge of applying traditional credit regulations to innovative financial products, a challenge that has led to broader disputes over the rule’s implementation.

 

Financial Technology Association’s Lawsuit against CFPB’s New Rules

After the interpretive rules went into effect in July, the Financial Technology Association (“FTA”) filed a lawsuit against the agency to stop them.[13] In its complaint, the FTA contends that the CFPB bypassed the APA’s notice-and-comment rulemaking process, despite the significant changes imposed by the rule.[14] The FTA argues that the agency exceeded its statutory authority under TILA because the act’s definition of “credit card” does not apply to BNPL products.[15] The FTA also argues that the rule is arbitrary and capricious because it fails to account for the unique structure of BNPL products and their compliance challenges with Regulation Z.[16]

The ongoing case between the FTA and the CFPB will likely focus on whether the CFPB’s rule is a permissible interpretation of existing law or a substantive rule requiring formal rulemaking under APA § 553. That decision should weigh the nature of BNPL products in relation to consumer protections traditionally associated with credit card-like products. In defending its interpretive rules against the FTA, the CFPB could consider highlighting TILA’s legislative intent, its flexibility, and the rationale for proceeding by interpretive rule.

 

Notes

[1] See Block, Inc., More than Half of Americans Turn to Buy Now, Pay Later During Financially Stressful Times (June 26, 2024), https://investors.block.xyz/investor-news/default.aspx.

[2] Id.

[3] See Paige Smith & Paulina Cachero, Buy Now, Pay Later Needs Credit Card-Like Oversight, CFPB Says, Bloomberg Law (May 22, 2024), https://news.bloomberglaw.com/banking-law/buy-now-pay-later-soon-will-be-treated-more-like-credit-cards.

[4] Id.

[5] Id.

[6] 5 U.S.C.A. § 553.

[7] Am. Mining Cong. v. Mine Safety & Health Admin., 995 F.2d 1106 (D.C. Cir. 1993).

[8] See Evan Weinberger, CFPB’s ‘Buy Now, Pay Later’ Rule Sparks Conflicting Reactions, Bloomberg Law (Aug. 1, 2024), https://news.bloomberglaw.com/banking-law/cfpbs-buy-now-pay-later-rule-sparks-conflicting-reactions.

[9] See New York City Dep’t of Consumer & Worker Prot., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (Aug. 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0027; see also Nat’l Consumer L. Ctr., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017, at 1 (Aug. 1, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0028.

[10] See Independent Community Bankers of Am., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0023.

[11] See Financial Technology Ass’n, Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 19, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0038.

[12] See PayPal, Inc., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0025.

[13] See Evan Weinberger, CFPB Buy Now, Pay Later Rule Hit With Fintech Group Lawsuit, Bloomberg Law (Oct. 18, 2024), https://news.bloomberglaw.com/banking-law/cfpbs-buy-now-pay-later-rule-hit-with-fintech-group-lawsuit.

[14] Complaint, Fin. Tech. Ass’n v. Consumer Fin. Prot. Bureau, No. 1:24-cv-02966 (D.D.C. Oct. 18, 2024).

[15] Id.

[16] Id.


Modern Misinformation: Tort Law’s Limitations

Anzario Serrant, MJLST Staffer

Since the ushering in of the new millennium, there has been an over thousand percent increase in the number of active internet users, defined as those who have had access to the internet in the last month.[1] The internet, and technology as a whole, has planted its roots deeply into our everyday lives and morphed the world into what it is today. As the internet transformed, so did our society, shifting from a time when the internet was used solely by government entities and higher-learning institutions[2] to now, when over 60% of the world’s population has regular access to cyberspace.[3] The ever-evolving nature of the internet and technology has brought an ease and convenience like never imagined while also fostering global connectivity. Although this connection may bring the immediate gratification of instantaneously communicating with friends hundreds of miles away, it has also created an arena conducive to the spread of false or inaccurate information, both deliberate and otherwise.

The evolution of misinformation and disinformation has radically changed how societies interact with information, posing new challenges to individuals, governments, and legal systems. Misinformation, the sharing of a verifiably false statement without intent to deceive, and disinformation, a subset of misinformation distinguished by intent to mislead and actual knowledge that the information is false, are not new phenomena.[4] They have existed throughout history, from the spread of rumors during the Black Death[5] to misinformation about HIV/AIDS in the 1980s.[6] In both examples, misinformation promoted ineffective measures, increased ostracization, and inevitably allowed for the loss of countless lives. Today, the internet has exponentially increased the speed and scale at which misinformation spreads, making our society even more vulnerable to associated harms. But who should bear the liability for these harms—individuals, social media companies, both? Additionally, does existing tort law provide adequate remedies to offset these harms?

The Legal Challenge

Given the global reach of social media and the proliferation of both misinformation and disinformation, one critical question arises: Who should be held legally responsible when misinformation causes harm? This question is becoming more pressing, particularly in light of “recent” events like the COVID-19 pandemic, during which unproven treatments were promoted on social media, leading to widespread confusion and, in some cases, physical harm.[7]

Under tort law, legal remedies exist that could potentially address the spread and use of inaccurate information in situations involving a risk of physical harm. These include fraudulent or negligent misrepresentation, conscious misrepresentation involving risk of physical harm, and negligent misrepresentation involving risk of physical harm.[8] However, these legal concepts were developed prior to the internet and applying them to the realm of social media remains challenging.

Fraudulent Misrepresentation and Disinformation

Current tort law provides limited avenues for addressing disinformation, especially on social media. Fraudulent misrepresentation, however, can help tackle cases involving deliberate financial deception, such as social media investment scams. These scams arguably meet the criteria for fraudulent misrepresentation—false promises meant to induce investment, resulting in financial losses for victims.[9] Yet the broad, impersonal nature of social media complicates proving “justifiable reliance.” For instance, would a reasonable person rely on an Instagram post from a stranger to make an investment decision?

In limited instances, courts applying a more subjective analysis might be willing to find the victim’s reliance justifiable, but that still leaves many victims unprotected.[10] Given these challenges and the limited prospect for success, it may be more effective to consider the role of social media platforms in spreading disinformation.

Conscious Misrepresentation Involving Risk of Physical Harm (CMIRPH)

Another tort that applies in limited circumstances is CMIRPH. This tort applies when false or unverified information is knowingly spread to induce action, or with disregard for the likelihood of inducing action, that carries an unreasonable risk of physical harm.[11] The most prominent example occurred during the COVID-19 pandemic, when false information about hydroxychloroquine and chloroquine spread online, with some public figures promoting the drugs as cures.[12] In such cases, those spreading false information knew, or should have known, that they were not competent to make those statements and that the statements posed serious risks to public health.

While this tort could be instrumental in holding individuals accountable for spreading harmful medical misinformation, challenges arise in establishing intent and reliance, and the broad scope of social media’s reach can make it difficult to apply traditional legal remedies. Moreover, because representations of opinions are covered by the tort,[13] First Amendment arguments would likely be raised if liability were placed on people who publicly posted their inaccurate opinions.

Negligent Misrepresentation and Misinformation

While fraudulent misrepresentation applies to disinformation, negligent misrepresentation is better suited to misinformation. A case for negligent misrepresentation must demonstrate (1) that the declarant had a pecuniary interest in the transaction, (2) that false information was supplied for the guidance of others, (3) justifiable reliance, and (4) a breach of reasonable care.[14]

Applying negligent misrepresentation to online misinformation proves difficult. For one, the tort requires that the defendant have a pecuniary interest in the transaction. Much of the misinformation inadvertently spread on social media does not involve financial gain for the poster. Moreover, negligent misrepresentation is limited to cases where misinformation was directed at a specific individual or a defined group, making it hard to apply to content posted on public platforms meant to reach as many people as possible.[15]

Even if these obstacles are overcome, the problem of contributory negligence remains. Courts may find that individuals who act on information from social media without verifying its accuracy bear some responsibility for the harm they suffer.

Negligent Misrepresentation Involving Risk of Physical Harm (NMIRPH)

In cases where there is risk of physical harm, but no financial loss, NMIRPH applies.[16] This tort is particularly relevant in the context of social media, where misinformation about health treatments can spread rapidly—often without monetary motives.

A notable example involves the spread of false claims about natural remedies in African and Caribbean cultures. In these communities, it is common to see misinformation about the health benefits of certain fruits—such as soursop—which is widely believed to have cancer-curing properties. Social media posts frequently promote such claims, leading individuals to rely on these remedies instead of seeking conventional medical treatment, sometimes with harmful results.

In these cases, the tort’s elements are met. False information is shared, individuals reasonably rely on it—within their cultural context—and physical harm follows. However, applying this tort to social media cases is challenging. Courts must assess whether reliance on such information is reasonable and whether the sharer breached a duty of care. Causation is also difficult to prove given the multiple sources of misinformation online. Moreover, the argument for subjective reliance is strongest within the context of smaller communities—leaving the vast majority of social media posts from strangers unprotected.

The Role of Social Media Platforms

One potential solution is to shift the focus of liability from individuals to the platforms themselves. Social media companies have largely been shielded from liability for user-generated content by Section 230 of the U.S. Communications Decency Act, which grants them immunity from being held responsible for third-party content. It can be argued that this immunity, which was granted to aid their development,[17] is no longer necessary given the vast power and resources these companies now hold. Moreover, blanket immunity may remove any incentive for these companies to innovate and develop a solution that only they can provide. There is also an ability-to-pay quandary: individuals may be unable to compensate victims for the widespread harm that social media platforms enable them to inflict.

While this approach may offer a more practical means of addressing misinformation at scale, it raises concerns about free speech and the feasibility of monitoring all content posted on large platforms like Facebook, Instagram, or Twitter. Additionally, imposing liability on social media companies could incentivize them to over-censor, potentially stifling legitimate expression.[18]

Conclusion

The legal system must evolve to address the unique challenges posed by online platforms. While existing tort remedies like fraudulent misrepresentation and negligent misrepresentation offer potential avenues for redress, their application to social media is limited by questions of reliance, scope, and practicality. To better protect individuals from the harms caused by misinformation, lawmakers may need to consider updating existing laws or creating new legal frameworks tailored to the realities of the digital world. At the same time, social media companies must be encouraged to take a more active role in curbing the spread of false information, while balancing the need to protect free speech.

Solving the problem of misinformation requires a comprehensive approach, combining legal accountability, platform responsibility, and public education to ensure a more informed and resilient society.

 

Notes

[1] Hannah Ritchie et al., Internet, Our World in Data (2023) ourworldindata.org/internet.

[2] See generally Barry Leiner et al., The Past and Future History of the Internet, 40 Commc’ns ACM 102 (1997) (discussing the origins of the internet).

[3] Lexie Pelchen, Internet Usage Statistics In 2024, Forbes Home (Mar. 1, 2024) https://www.forbes.com/home-improvement/internet/internet-statistics/#:~:text=There%20are%205.35%20billion%20internet%20users%20worldwide.&text=Out%20of%20the%20nearly%208,the%20internet%2C%20according%20to%20Statista.

[4] Audrey Normandin, Redefining “Misinformation,” “Disinformation,” and “Fake News”: Using Social Science Research to Form an Interdisciplinary Model of Online Limited Forums on Social Media Platforms, 44 Campbell L. Rev. 289, 293 (2022).

[5] Melissa De Witte, For Renaissance Italians, Combating Black Plague Was as Much About Politics as It Was Science, According to Stanford Scholar, Stan. Rep. (Mar. 17, 2020) https://news.stanford.edu/stories/2020/05/combating-black-plague-just-much-politics-science (discussing that poor people and foreigners were believed to be the cause—at least partially—of the plague).

[6] 40 Years of HIV Discovery: The First Cases of a Mysterious Disease in the Early 1980s, Institut Pasteur (May 5, 2023) https://www.pasteur.fr/en/research-journal/news/40-years-hiv-discovery-first-cases-mysterious-disease-early-1980s (“This syndrome is then called the ‘4H disease’ to designate Homosexuals, Heroin addicts, Hemophiliacs and Haitians, before we understand that it does not only concern ‘these populations.’”).

[7] See generally Kacper Niburski & Oskar Niburski, Impact of Trump’s Promotion of Unproven COVID-19 Treatments and Subsequent Internet Trends: Observational Study, J. Med. Internet Rsch., Nov. 22, 2020 (discussing the impact of former President Trump’s promotion of hydroxychloroquine); Matthew Cohen et al., When COVID-19 Prophylaxis Leads to Hydroxychloroquine Poisoning, 10 Sw. Respiratory & Critical Care Chrons. 52 (discussing increase in hydroxychloroquine overdoses following its brief emergency use authorization).

[8] Geelan Fahimy, Liable for Your Lies: Misrepresentation Law as a Mechanism for Regulating Behavior on Social Networking Sites, 39 Pepp. L. Rev. 367, 370–79 (2012).

[9] See Restatement (Second) of Torts § 525 (Am. L. Inst. 1977) (“One who fraudulently makes a misrepresentation of fact, opinion, intention or law for the purpose of inducing another to act or to refrain from action in reliance upon it, is subject to liability to the other in deceit for pecuniary loss caused to him by his justifiable reliance upon the misrepresentation.”).

[10] Justifiable reliance can be proven through either a subjective or objective standard. Restatement (Second) of Torts § 538 (Am. L. Inst. 1977).

[11] Restatement (Second) of Torts § 310 (Am. L. Inst. 1965) (“An actor who makes a misrepresentation is subject to liability to another for physical harm which results from an act done by the other or a third person in reliance upon the truth of the representation, if the actor (a) intends his statement to induce or should realize that it is likely to induce action by the other, or a third person, which involves an unreasonable risk of physical harm to the other, and (b) knows (i) that the statement is false, or (ii) that he has not the knowledge which he professes.”).

[12] See Niburski, supra note 7, for a discussion of former President Trump’s statements.

[13] Restatement (Second) of Torts § 310 cmt. b (Am. L. Inst. 1965).

[14] Restatement (Second) of Torts § 552(1) (Am. L. Inst. 1977) (“One who, in the course of his business, profession or employment, or in any other transaction in which he has a pecuniary interest, supplies false information for the guidance of others in their business transactions, is subject to liability for pecuniary loss caused to them by their justifiable reliance upon the information, if he fails to exercise reasonable care or competence in obtaining or communicating the information.”).

[15] Liability under negligent misrepresentation is limited to the person or group that the declarant intended to guide by supplying the information. Restatement (Second) of Torts § 552(2)(a)(1) (Am. L. Inst. 1977).

[16] Restatement (Second) of Torts § 311 (Am. L. Inst. 1965) (“One who negligently gives false information to another is subject to liability for physical harm caused by action taken by the other in reasonable reliance upon such information, where such harm results (a) to the other, or (b) to such third persons as the actor should expect to be put in peril by the action taken. Such negligence may consist of failure to exercise reasonable care (a) in ascertaining the accuracy of the information, or (b) in the manner in which it is communicated.”).

[17] See George Fishback, How the Wolf of Wall Street Shaped the Internet: A Review of Section 230 of the Communications Decency Act, 28 Tex. Intell. Prop. L.J. 275, 276 (2020) (“Section 230 promoted websites to grow without [the] fear . . . of liability for content beyond their control.”).

[18] See Section 230, Elec. Frontier Found. https://www.eff.org/issues/cda230#:~:text=Section%20230%20allows%20for%20web,what%20content%20they%20will%20distribute (last visited Oct. 23, 2024) (“In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects.”).

 


What Happens to Your Genetic Data in a Sale or Acquisition?

Colin Loyd, MJLST Staffer

Remember 23andMe—the genetic testing company that once skyrocketed in publicity in the 2010s due to its relatively inexpensive access to genetic testing? It’s now heading toward disaster. This September, its board of directors saw all but one member tender their resignation.[1] At the close of that day’s trading, 23andMe’s share price was $0.35, representing a 99.9% decline in valuation from its peak in 2021.[2] This decline in valuation suggests the company may declare bankruptcy, which often leads to a sale of a company’s assets. Bankruptcy or a sale of assets presents a host of complex privacy and regulatory issues, particularly concerning the sale of 23andMe’s most valuable asset—its vast collection of consumer DNA data.[3] This uncertain situation underscores serious concerns surrounding the state of comprehensive privacy protections for genetic information that leave consumers’ sensitive genetic data vulnerable to misuse and exploitation.

23andMe collects and stores massive amounts of user genetic information. However, unlike healthcare providers, 23andMe does not have to comply with the stringent privacy regulations set out in the Health Insurance Portability and Accountability Act (HIPAA).[4] While HIPAA is designed to protect sensitive health data, its protections apply only to a small subset of healthcare-related entities.[5] HIPAA only regulates the use of genetic information by “group health plan[s], health insurance issuer[s] that issue[] health insurance coverage, or issuer[s] of a medicare supplemental policy.”[6] 23andMe does not fit into any of these categories and therefore operates outside the scope of HIPAA protections with respect to genetic information, leaving any genetic information it holds largely unregulated.

The Genetic Information Nondiscrimination Act (GINA), enacted in 2008, offers consumer protections by prohibiting discrimination based on an individual’s genetic information with respect to health insurance premium amounts or eligibility requirements for health insurance.[7] GINA also prohibits any deprivation of employment opportunities based on genetic information.[8] However, GINA’s protections do not extend to life insurance, disability insurance, or long-term care insurance.[9] This leaves a gap where genetic information may be used against individuals by entities not subject to GINA.

This regulatory gap is a major concern for consumers, especially with a potential bankruptcy sale looming. If 23andMe sells its assets, including its database of genetic information, the new owner would not have to adhere to the same privacy commitments made by 23andMe. For example, 23andMe promises not to use genetic information it receives for personalized or targeted marketing/advertising without a user’s express consent.[10] This policy likely reflects 23andMe’s efforts to comply with the California Privacy Rights Act (CPRA), which grants consumers the right to direct a business not to share or sell their personal information.[11] However, this right under the CPRA is an opt-out right—not an opt-in right—meaning consumers can stop a future sale of their information, but by default there is no regulatory limit on the initial sale of their personal information.[12] As a result, there’s nothing stopping 23andMe from altering its policies and changing how it uses genetic information. In fact, 23andMe’s Privacy Statement states it “may make changes to this Privacy Statement from time to time.”[13] Any such change would likely be binding if it is clearly communicated to users.[14] 23andMe currently lists email or an in-app notification as methods it may use to notify users of any change to the Privacy Statement.[15] If it does so, it’s highly possible a court would view this as “clear communication,” and there would be little legal recourse for users to prevent their genetic information from being used in ways they did not anticipate, such as for research or commercial purposes.

For example, say a life insurance company acquires an individual’s genetic data through the purchase of 23andMe’s assets. It could potentially use that data to make decisions about coverage or premiums, even though GINA prohibits health insurers from doing the same.[16] This loophole highlights the dangers of having genetic information in the hands of entities not bound by strict privacy protections.

In the event of an acquisition or bankruptcy, 23andMe’s Privacy Statement outlines that personal information, including genetic data, may be among the assets sold or transferred to the new entity.[17] In such a case, the new owner could inherit both the data and the rights to use it under the existing terms, including the ability to modify how the data is used. This could result in uses not originally intended by the user so long as the change is communicated to the user.[18] This transfer clause highlights a key concern for users because it allows their deeply personal genetic data to be passed to another company without additional consent, potentially subjecting them to exploitation by organizations with different data usage policies or commercial interests. While 23andMe must notify users about any changes to the privacy statement or its use of genetic information, it does not specify whether the notice will be given in advance.[19] Any new entity could plan a change to the privacy statement terms, altering how it uses the genetic information while leaving users in the dark until the change is communicated to them, at which point the user’s information may have already been shared with third parties.

The potential 23andMe bankruptcy and sale of assets reveals deep flaws in the current regulatory system governing genetic data privacy. Without HIPAA protections, consumers risk their sensitive genetic information being sold or misused in ways they cannot control. GINA, while offering some protections, still leaves significant gaps, especially in life and disability insurance. As the demand for genetic testing continues to grow, the vulnerabilities exposed by 23andMe’s potential financial troubles highlight the urgent need for better privacy protections. Consumers must be made aware of the risks involved in sharing their genetic data, and regulatory measures must be strengthened to ensure this sensitive information is not misused or sold without their explicit consent.

 

Notes

[1] Independent Directors of 23andMe Resign from Board, 23andMe (Sept. 17, 2024) https://investors.23andme.com/news-releases/news-release-details/independent-directors-23andme-resign-board.

[2] Rolfe Winkler, 23andMe Board Resigns in New Blow to DNA-Testing Company, Wall St. J. (Sept. 18, 2024) https://www.wsj.com/tech/biotech/23andme-board-resigns-in-new-blow-to-dna-testing-company-12f1a355.

[3] Anne Wojcicki (the last remaining board member) has consistently publicized her plan to take the company private, which is looming larger given the current state of the business financials. Id.

[4] See 42 U.S.C. § 1320d-9(a)(2).

[5] See generally 42 U.S.C. § 1320d et seq.

[6] 42 U.S.C. § 1320d-9(a)(2).

[7] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881.

[8] Id.

[9] Jessica D. Tenenbaum & Kenneth W. Goodman, Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-term Care and Life Insurance, 14 Personalized Med. 153, 154 (2017).

[10] How We Use Your Information, 23andMe, https://www.23andme.com/legal/how-we-use-info/ (last visited Oct. 14, 2024).

[11] Cal. Civ. Code § 1798.120(a) (Deering 2024).

[12] Id.

[13] Privacy Statement, 23andMe (Sept. 24, 2024) https://www.23andme.com/legal/privacy/full-version/.

[14] See Lee v. Ticketmaster LLC, 817 Fed. App’x 393 (9th Cir. 2019) (upholding terms of use where notice was clearly given to the user, even if the user didn’t check a box to assent to the terms).

[15] Privacy Statement, supra note 13.

[16] See K.S.A. § 40-2259(c)-(d) (carving out the ability for life insurance policies to take into account genetic information when underwriting the policy).

[17] Privacy Statement, supra note 13.

[18] See Lee, 817 Fed. App’x 393.

[19] Privacy Statement, supra note 13.