Internet

The Power of Preference or Monopoly? Unpacking Google’s Search Engine Domination

Donovan Ennevor, MJLST Staffer

When searching for an answer to a query online, would you ever use a search engine other than Google? The answer for most people is almost certainly no. Google’s search engine has achieved such market domination that “to Google” has become a verb in the English language.[1] Google controls 90% of the U.S. search engine market, with its closest competitors Yahoo and Bing holding around 3% each.[2] Is this simply because Google offers a superior product, or is there some other, more nefarious reason?

According to the Department of Justice (“DOJ”), the answer is the latter: Google has dominated its competitors by engaging in illegal practices and creating a monopoly. Federal Judge Amit Mehta agreed with the DOJ’s position and ruled in August 2024 that Google’s market domination was a monopoly achieved through improper means.[3] The remedies for Google’s breach of antitrust law are yet to be determined; however, their consequences could have far-reaching implications for the future of Google and Big Tech.

United States v. Google LLC

In October 2020, the DOJ and 11 states filed a civil suit against Google in the U.S. District Court for the District of Columbia, alleging violations of U.S. antitrust laws.[4] A coalition of 35 states, Guam, Puerto Rico, and Washington D.C. filed a similar lawsuit in December 2020.[5] In 2021, the cases were consolidated into a single proceeding to address the overlapping claims.[6] An antitrust case of this magnitude had not been brought in more than two decades.[7]

The petitioners’ complaint argued that Google’s dominance did not arise solely through superior technology but rather through exclusionary agreements designed to stifle competition in the online search engine and search advertising markets.[8] The complaint alleged that Google maintained its monopolies by engaging in practices such as entering into exclusivity agreements that prohibited the preinstallation of competitors’ search engines, forcing preinstallation of Google’s search engine in prime mobile device locations, and making it undeletable regardless of consumer preference.[9] For example, Google’s agreement with Apple required that all Apple products and tools have Google as the preinstalled default—essentially an exclusive—search engine.[10] Google also allegedly used its monopoly profits to fund payments that secured preferential treatment on devices, web browsers, and other search access points, creating a self-reinforcing cycle of monopolization.[11]

According to the petitioners, these practices not only limited competitor opportunities, but also harmed consumers by reducing search engine options and diminishing quality, particularly in areas like privacy and data use.[12] Furthermore, Google’s dominance in search advertising has allowed it to charge higher prices, impacting advertisers and lowering service quality—outcomes unlikely in a more competitive market.[13]

Google rebutted the petitioners’ argument, asserting instead that its search product is preferred due to its superiority and is freely chosen by its consumers.[14] Google also noted that if users wish to switch to a different search engine, they can do so easily.[15]

However, Judge Mehta agreed with the arguments posed by the petitioners and held that Google’s market dominance in search and search advertising constituted a monopoly achieved through exclusionary practices violating U.S. antitrust laws.[16] The case will now move to the remedy phase, in which the DOJ and Google will argue over the appropriate remedies at a hearing in April 2025.[17]

The Proposed Remedies and Implications

In November, the petitioners filed their final proposed remedies—both behavioral and structural—for Google with the court.[18] Behavioral remedies govern a company’s conduct, whereas structural remedies generally refer to reorganization and/or divestment.[19] The proposed behavioral remedies include barring Google from entering exclusive preinstallation agreements and requiring Google to license certain indexes, data, and models that drive its search engine.[20] These remedies would help create more opportunities for competing search engines to gain visibility and improve their search capabilities and ad services. The petitioners’ filing stated they would also pursue structural remedies, including forcing Google to break up or divest its Chrome browser and Android mobile operating system.[21] To ensure Google adheres to these changes, the petitioners proposed appointing a court-monitored technical committee to oversee Google’s compliance.[22]

It could be many years before any of the proposed remedies are actually instituted, given that Google has indicated it will appeal Judge Mehta’s ruling.[23] Additionally, given precedent, it is unlikely that any structural remedies will be imposed or enforced.[24] However, any remedies ultimately approved would set a precedent for regulatory control over Big Tech, signaling that the U.S. government is willing to take strong steps to curb monopolistic practices. This could encourage further action against other tech giants and redefine regulatory expectations across the industry, particularly around data transparency and competition in digital advertising.

 

Notes

[1] See Virginia Heffernan, Just Google It: A Short History of a Newfound Verb, Wired (Nov. 15, 2017, 7:00 AM), https://www.wired.com/story/just-google-it-a-short-history-of-a-newfound-verb/.

[2] Justice Department Calls for Sanctions Against Google in Landmark Antitrust Case, Nat’l Pub. Radio, (Oct. 9, 2024, 12:38 AM), https://www.npr.org/2024/10/09/nx-s1-5146006/justice-department-sanctions-google-search-engine-lawsuit [hereinafter Calls for Sanctions Against Google].

[3] United States v. Google LLC, 2024 WL 3647498, at *1, *134 (D.D.C. Aug. 5, 2024).

[4] Justice Department Sues Monopolist Google For Violating Antitrust Laws, U.S. Dep’t of Just. (Oct. 20, 2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws [hereinafter Justice Department Sues Google].

[5] Dara Kerr, United States Takes on Google in Biggest Tech Monopoly Trial of 21st Century, Nat’l Pub. Radio, (Sept. 12, 2023, 5:00 AM), https://www.npr.org/2023/09/12/1198558372/doj-google-monopoly-antitrust-trial-search-engine.

[6] Tracker Detail US v. Google LLC / State of Colorado v. Google LLC, TechPolicy.Press, https://www.techpolicy.press/tracker/us-v-google-llc/ (last visited Nov. 20, 2024).

[7] Calls for Sanctions Against Google, supra note 2 (“The last antitrust case of this magnitude to make it to trial was in 1998, when the Justice Department sued Microsoft.”).

[8] Justice Department Sues Google, supra note 4.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Kerr, supra note 5.

[15] Id.

[16] United States v. Google LLC, 2024 WL 3647498, at *1, *4 (D.D.C. Aug. 5, 2024).

[17] Calls for Sanctions Against Google, supra note 2.

[18] Steve Brachmann, DOJ, State AGs File Proposed Remedial Framework in Google Search Antitrust Case, IPWatchdog (Oct. 13, 2024, 12:15 PM), https://ipwatchdog.com/2024/10/13/doj-state-ags-file-proposed-remedial-framework-google-search-antitrust-case/id=182031/.

[19] Dan Robinson, Uncle Sam may force Google to sell Chrome browser, or Android OS, The Register (Oct. 9, 2024, 12:56 PM), https://www.theregister.com/2024/10/09/usa_vs_google_proposed_remedies/.

[20] Brachmann, supra note 18.

[21] Exec. Summary of Plaintiffs’ Proposed Final Judgment at 3–4, United States v. Google LLC, No. 1:20-cv-03010-APM (D.D.C. Nov. 20, 2024).

[22] Id.

[23] See Jane Wolfe & Miles Kruppa, Google Loses Antitrust Case Over Search-Engine Dominance, Wall Street J. (Aug. 5, 2024, 5:02 PM), https://www.wsj.com/tech/google-loses-federal-antitrust-case-27810c43?mod=article_inline.

[24] See Makenzie Holland, Google Breakup Unlikely in Event of Guilty Verdict, Tech Target (Oct. 11, 2023), https://www.techtarget.com/searchcio/news/366555177/Google-breakup-unlikely-in-event-of-guilty-verdict. See also Michael Brick, U.S. Appeals Court Overturns Microsoft Antitrust Ruling, N.Y. Times (June 28, 2001), https://www.nytimes.com/2001/06/28/business/us-appeals-court-overturns-microsoft-antitrust-ruling.html (summarizing the U.S. Court of Appeals decision overturning the structural remedies imposed on Microsoft in an antitrust case).

 

 


Modern Misinformation: Tort Law’s Limitations

Anzario Serrant, MJLST Staffer

Since the ushering in of the new millennium, there has been an increase of over one thousand percent in the number of active internet users, defined as those who have had access to the internet in the last month.[1] The internet–and technology as a whole–has planted its roots deeply in our everyday lives and morphed the world into what it is today. As the internet transformed, so did our society, shifting from a time when the internet was used solely by government entities and higher-learning institutions[2] to now, when over 60% of the world’s population has regular access to cyberspace.[3] The ever-evolving nature of the internet and technology has brought an ease and convenience never before imagined while also fostering global connectivity. Although this connection may bring the immediate gratification of instantaneously communicating with friends hundreds of miles away, it has also created an arena conducive to the spread of false or inaccurate information—both deliberate and otherwise.

The evolution of misinformation and disinformation has radically changed how societies interact with information, posing new challenges to individuals, governments, and legal systems. Misinformation, the sharing of a verifiably false statement without intent to deceive, and disinformation, a subset of misinformation distinguished by intent to mislead and actual knowledge that the information is false, are not new phenomena.[4] They have existed throughout history, from the spread of rumors during the Black Death[5] to misinformation about HIV/AIDS in the 1980s.[6] In both examples, misinformation promoted ineffective measures, increased ostracization, and inevitably allowed for the loss of countless lives. Today, the internet has exponentially increased the speed and scale at which misinformation spreads, making our society even more vulnerable to associated harms. But who should bear the liability for these harms—individuals, social media companies, both? Additionally, does existing tort law provide adequate remedies to offset these harms?

The Legal Challenge

Given the global reach of social media and the proliferation of both misinformation and disinformation, one critical question arises: Who should be held legally responsible when misinformation causes harm? This question is becoming more pressing, particularly in light of “recent” events like the COVID-19 pandemic, during which unproven treatments were promoted on social media, leading to widespread confusion and, in some cases, physical harm.[7]

Under tort law, legal remedies exist that could potentially address the spread and use of inaccurate information in situations involving a risk of physical harm. These include fraudulent or negligent misrepresentation, conscious misrepresentation involving risk of physical harm, and negligent misrepresentation involving risk of physical harm.[8] However, these legal concepts were developed prior to the internet and applying them to the realm of social media remains challenging.

Fraudulent Misrepresentation and Disinformation

Current tort law provides limited avenues for addressing disinformation, especially on social media. However, fraudulent misrepresentation can help tackle cases involving deliberate financial deception, such as social media investment scams. These scams arguably meet the fraudulent misrepresentation criteria—false promises meant to induce investment, resulting in financial losses for victims.[9] However, the broad, impersonal nature of social media complicates proving “justifiable reliance.” For instance, would a reasonable person rely on an Instagram post from a stranger to make an investment decision?

In limited instances, courts applying a more subjective analysis might be willing to find the victim’s reliance justifiable, but that still leaves many victims unprotected.[10] Given these challenges and the limited prospect of success, it may be more effective to consider the role of social media platforms in spreading disinformation.

Conscious Misrepresentation Involving Risk of Physical Harm (CMIRPH)

Another tort that applies in limited circumstances is CMIRPH. This tort applies when a person knowingly spreads false or unverified information to induce action, or with disregard for the likelihood of inducing action, that carries an unreasonable risk of physical harm.[11] The most prominent example of this occurred during the COVID-19 pandemic, when false information about hydroxychloroquine and chloroquine spread online, with some public figures promoting the drugs as cures.[12] In such cases, those spreading false information knew, or should have known, that they were not competent to make those statements and that the statements posed serious risks to public health.

While this tort could be instrumental in holding individuals accountable for spreading harmful medical misinformation, challenges arise in establishing intent and reliance, and the broad scope of social media’s reach can make it difficult to apply traditional legal remedies. Moreover, because representations of opinions are covered by the tort,[13] First Amendment arguments would likely be raised if liability were placed on people who publicly posted their inaccurate opinions.

Negligent Misrepresentation and Misinformation

While fraudulent misrepresentation applies to disinformation, negligent misrepresentation is better suited to misinformation. A case for negligent misrepresentation must demonstrate (1) the declarant’s pecuniary interest in the transaction, (2) false information supplied for the guidance of others, (3) justifiable reliance, and (4) a breach of reasonable care.[14]

Applying negligent misrepresentation to online misinformation proves difficult. For one, the tort requires that the defendant have a pecuniary interest in the transaction. Much of the misinformation inadvertently spread on social media does not involve financial gain for the poster. Moreover, negligent misrepresentation is limited to cases where misinformation was directed at a specific individual or a defined group, making it hard to apply to content posted on public platforms meant to reach as many people as possible.[15]

Even if these obstacles are overcome, the problem of contributory negligence remains. Courts may find that individuals who act on information from social media without verifying its accuracy bear some responsibility for the harm they suffer.

Negligent Misrepresentation Involving Risk of Physical Harm (NMIRPH)

In cases where there is risk of physical harm, but no financial loss, NMIRPH applies.[16] This tort is particularly relevant in the context of social media, where misinformation about health treatments can spread rapidly—often without monetary motives.

A notable example involves the spread of false claims about natural remedies in African and Caribbean cultures. In these communities, it is common to see misinformation about the health benefits of certain fruits—such as soursop—which is widely believed to have cancer-curing properties. Social media posts frequently promote such claims, leading individuals to rely on these remedies instead of seeking conventional medical treatment, sometimes with harmful results.

In these cases, the tort’s elements are met. False information is shared, individuals reasonably rely on it—within their cultural context—and physical harm follows. However, applying this tort to social media cases is challenging. Courts must assess whether reliance on such information is reasonable and whether the sharer breached a duty of care. Causation is also difficult to prove given the multiple sources of misinformation online. Moreover, the argument for subjective reliance is strongest within the context of smaller communities—leaving the vast majority of social media posts from strangers unprotected.

The Role of Social Media Platforms

One potential solution is to shift the focus of liability from individuals to the platforms themselves. Social media companies have largely been shielded from liability for user-generated content by Section 230 of the U.S. Communications Decency Act, which grants them immunity from being held responsible for third-party content. It can be argued that this immunity, which was granted to aid their development,[17] is no longer necessary, given the vast power and resources these companies now hold. Moreover, blanket immunity may remove the incentive for these companies to innovate and find a solution that only they can provide. There is also an ability-to-pay quandary: individual posters might not be able to compensate victims for the widespread harm that social media platforms enable them to cause.

While this approach may offer a more practical means of addressing misinformation at scale, it raises concerns about free speech and the feasibility of monitoring all content posted on large platforms like Facebook, Instagram, or Twitter. Additionally, imposing liability on social media companies could incentivize them to over-censor, potentially stifling legitimate expression.[18]

Conclusion

The legal system must evolve to address the unique challenges posed by online platforms. While existing tort remedies like fraudulent misrepresentation and negligent misrepresentation offer potential avenues for redress, their application to social media is limited by questions of reliance, scope, and practicality. To better protect individuals from the harms caused by misinformation, lawmakers may need to consider updating existing laws or creating new legal frameworks tailored to the realities of the digital world. At the same time, social media companies must be encouraged to take a more active role in curbing the spread of false information, while balancing the need to protect free speech.

Solving the problem of misinformation requires a comprehensive approach, combining legal accountability, platform responsibility, and public education to ensure a more informed and resilient society.

 

Notes

[1] Hannah Ritchie et al., Internet, Our World in Data (2023), https://ourworldindata.org/internet.

[2] See generally Barry Leiner et al., The Past and Future History of the Internet, 40 Commc’ns ACM 102 (1997) (discussing the origins of the internet).

[3] Lexie Pelchen, Internet Usage Statistics In 2024, Forbes Home, (Mar. 1, 2024) https://www.forbes.com/home-improvement/internet/internet-statistics/#:~:text=There%20are%205.35%20billion%20internet%20users%20worldwide.&text=Out%20of%20the%20nearly%208,the%20internet%2C%20according%20to%20Statista.

[4] Audrey Normandin, Redefining “Misinformation,” “Disinformation,” and “Fake News”: Using Social Science Research to Form an Interdisciplinary Model of Online Limited Forums on Social Media Platforms, 44 Campbell L. Rev. 289, 293 (2022).

[5] Melissa De Witte, For Renaissance Italians, Combating Black Plague Was as Much About Politics as It Was Science, According to Stanford Scholar, Stan. Rep., (Mar. 17, 2020) https://news.stanford.edu/stories/2020/05/combating-black-plague-just-much-politics-science (discussing that poor people and foreigners were believed to be the cause—at least partially—of the plague).

[6] 40 Years of HIV Discovery: The First Cases of a Mysterious Disease in the Early 1980s, Institut Pasteur, (May 5, 2023) https://www.pasteur.fr/en/research-journal/news/40-years-hiv-discovery-first-cases-mysterious-disease-early-1980s (“This syndrome is then called the ‘4H disease’ to designate Homosexuals, Heroin addicts, Hemophiliacs and Haitians, before we understand that it does not only concern ‘these populations.’”).

[7] See generally Kacper Niburski & Oskar Niburski, Impact of Trump’s Promotion of Unproven COVID-19 Treatments and Subsequent Internet Trends: Observational Study, J. Med. Internet Rsch., Nov. 22, 2020 (discussing the impact of former President Trump’s promotion of hydroxychloroquine); Matthew Cohen et al., When COVID-19 Prophylaxis Leads to Hydroxychloroquine Poisoning, 10 Sw. Respiratory & Critical Care Chrons., 52 (discussing increase in hydroxychloroquine overdoses following its brief emergency use authorization).

[8] Geelan Fahimy, Liable for Your Lies: Misrepresentation Law as a Mechanism for Regulating Behavior on Social Networking Sites, 39 Pepp. L. Rev. 367, 370–79 (2012).

[9] See Restatement (Second) of Torts § 525 (Am. L. Inst. 1977) (“One who fraudulently makes a misrepresentation of fact, opinion, intention or law for the purpose of inducing another to act or to refrain from action in reliance upon it, is subject to liability to the other in deceit for pecuniary loss caused to him by his justifiable reliance upon the misrepresentation.”).

[10] Justifiable reliance can be proven through either a subjective or objective standard. Restatement (Second) of Torts § 538 (Am. L. Inst. 1977).

[11] Restatement (Second) of Torts § 310 (Am. L. Inst. 1965) (“An actor who makes a misrepresentation is subject to liability to another for physical harm which results from an act done by the other or a third person in reliance upon the truth of the representation, if the actor (a) intends his statement to induce or should realize that is likely to induce action by the other, or a third person, which involves an unreasonable risk of physical harm to the other, and (b) knows (i) that the statement is false, or (ii) that he has not the knowledge which he professes.”).

[12] See Niburski, supra note 7, for a discussion of former President Trump’s statements.

[13] Restatement (Second) of Torts § 310 cmt. b (Am. L. Inst. 1965).

[14] Restatement (Second) of Torts § 552(1) (Am. L. Inst. 1977) (“One who, in the course of his business, profession or employment, or in any other transaction in which he has a pecuniary interest, supplies false information for the guidance of others in their business transactions, is subject to liability for pecuniary loss caused to them by their justifiable reliance upon the information, if he fails to exercise reasonable care or competence in obtaining or communicating the information.”).

[15] Liability under negligent misrepresentation is limited to the person or group that the declarant intended to guide by supplying the information. Restatement (Second) of Torts § 552(2)(a)(1) (Am. L. Inst. 1977).

[16] Restatement (Second) of Torts § 311 (Am. L. Inst. 1965) (“One who negligently gives false information to another is subject to liability for physical harm caused by action taken by the other in reasonable reliance upon such information, where such harm results (a) to the other, or (b) to such third persons as the actor should expect to be put in peril by the action taken. Such negligence may consist of failure to exercise reasonable care (a) in ascertaining the accuracy of the information, or (b) in the manner in which it is communicated.”).

[17] See George Fishback, How the Wolf of Wall Street Shaped the Internet: A Review of Section 230 of the Communications Decency Act, 28 Tex. Intell. Prop. L.J. 275, 276 (2020) (“Section 230 promoted websites to grow without [the] fear . . . of liability for content beyond their control.”).

[18] See Section 230, Elec. Frontier Found. https://www.eff.org/issues/cda230#:~:text=Section%20230%20allows%20for%20web,what%20content%20they%20will%20distribute (last visited Oct. 23, 2024) (“In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects.”).

 


Enriching and Undermining Justice: The Risks of Zoom Court

Matthew Prager, MJLST Staffer

In the spring of 2020, the United States shut down public spaces in response to the COVID-19 pandemic. The court system did not escape this process, seeing all jury trials paused in March 2020.[1] In this rapidly changing environment, courts scrambled to adjust, using a slew of modern telecommunication and video conferencing systems to resume the various aspects of the courtroom system in the virtual world. Despite this radical upheaval to traditional courtroom structure, this new form of court appears here to stay.[2]

Much has been written about the benefits of telecommunication services like Zoom and similar software to the courtroom system.[3]  However, while Zoom court has been a boon to many, Zoom-style virtual court appearances also present legal challenges.[4] Some of these problems affect all courtroom participants, while others disproportionally affect highly vulnerable individuals’ ability to participate in the legal system.

Telecommunications, like all forms of technology, is vulnerable to malfunctions and ‘glitches,’ and these glitches can significantly disadvantage a party’s engagement with the legal system. In the most direct sense, glitches–be they video malfunctions, audio or microphone failures, or unstable internet connections–can limit a party’s ability to hear and be heard by their attorneys, opposing parties, or the judge, ultimately compromising their legitimate participation in the legal process.[5]

But these glitches can have effects beyond direct communications. One study found that participants evaluated individuals suffering from connection issues as less likable.[6] Another study found that mock jurors shown a video on a broken VCR recommended higher prison terms than a group shown the video on a functional VCR.[7] In effect, technology can act as a third party in the courtroom, and when that third party misbehaves, frustrations can unjustly prejudice a party, with deleterious consequences.

Even absent glitches, observing a person through a screen can have a negative impact on how that person is perceived.[8] Researchers noted this issue even before the pandemic. Online bail hearings conducted by closed-circuit camera led to significantly higher bond amounts than those conducted in person.[9] Simply adjusting the camera angle can alter the perception of a witness in the eyes of the observer.[10]

These issues represent a universal problem for any party in the legal system, but they are especially impactful on the elderly population.[11] Senior citizens often lack digital literacy with modern and emerging technologies, and some may find that their first experience with these telecommunications systems is in a courtroom hearing–that is, if they even have access to the necessary technology.[12] These issues can have extreme consequences: in one case, an elderly defendant violated his probation because he failed to navigate a faulty Zoom link.[13] The elderly are especially vulnerable because issues with technical literacy can be compounded by sensory difficulties. One party with poor eyesight found that being required to communicate through a screen functionally deprived him of any communication at all.[14]

While there has been some effort to return to the in-person court experience, the benefits of virtual trials are too significant to ignore.[15] Virtual court minimizes transportation costs, allows vulnerable parties to engage with the legal system from the safety and familiarity of their own homes, and simplifies the logistics of the courtroom process. These benefits are indisputable for many participants in the legal system. But they are accompanied by drawbacks, and, practicalities aside, the adverse and disproportionate impact of virtual courtrooms on senior citizens should be seen as a problem to solve, not simply endure.

Notes

[1] Debra Cassens Weiss, A slew of federal and state courts suspend trials or close for coronavirus threat, ABA JOURNAL (March 18, 2020) (https://www.abajournal.com/news/article/a-slew-of-federal-and-state-courts-jump-on-the-bandwagon-suspending-trials-for-coronavirus-threat)

[2] How Courts Embraced Technology, Met the Pandemic Challenge, and Revolutionized Their Operations, PEW, December 1, 2021 (https://www.pewtrusts.org/en/research-and-analysis/reports/2021/12/how-courts-embraced-technology-met-the-pandemic-challenge-and-revolutionized-their-operations).

[3] See Amy Petkovsek, A Virtual Path to Justice: Paving Smoother Roads to Courtroom Access, ABA (June 3, 2024) (https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/technology-and-the-law/a-virtual-path-to-justice) (finding that Zoom court: minimizes transportation costs for low-income, disabled or remote parties; allows parties to participate in court from a safe or trusted environment; minimizes disruptions for children who would otherwise miss entire days of school; protects undocumented individuals from the risk of deportation; diminishes courtroom reschedulings from parties lacking access to childcare or transportation and allows immune-compromised and other high health-risk parties to engage in the legal process without exposure to transmittable illnesses).

[4] Daniel Gielchinsky, Returning to Court in a Post-COVID Era: The Pros and Cons of a Virtual Court System, LAW.com (https://www.law.com/dailybusinessreview/2024/03/15/returning-to-court-in-a-post-covid-era-the-pros-and-cons-of-a-virtual-court-system/)

[5] Benefits & Disadvantages of Zoom Court Hearings, APPEL & MORSE, (https://www.appelmorse.com/blog/2020/july/benefits-disadvantages-of-zoom-court-hearings/) (last visited Oct. 7, 2024).

[6] Angela Chang, Zoom Trials as the New Normal: A Cautionary Tale, U. CHI. L. REV. (https://lawreview.uchicago.edu/online-archive/zoom-trials-new-normal-cautionary-tale) (“Participants in that study perceived their conversation partners as less friendly, less active and less cheerful when there were transmission delays. . . .compared to conversations without delays.”).

[7] Id.

[8]  Id. “Screen” interactions are remembered less vividly and obscure important nonverbal social cues.

[9] Id.

[10] Shannon Havener, Effects of Videoconferencing on Perception in the Courtroom (2014) (Ph.D. dissertation, Arizona State University).

[11] Virtual Justice? A National Study Analyzing the Transition to Remote Criminal Court, STANFORD CRIMINAL JUSTICE CENTER, Aug. 2021, at 78.

[12] Id. at 79 (describing how some parties lack access to phones, Wi-Fi or any methods of electronic communication).

[13] Ivan Villegas, Elderly Accused Violates Probation, VANGUARD NEWS GROUP (October 21, 2022) (https://davisvanguard.org/2022/10/elderly-accused-violates-probation-zoom-problems-defense-claims/)

[14] John Seasly, Challenges arise as the courtroom goes virtual, Injustice Watch (April 22, 2020) (https://www.injusticewatch.org/judges/court-administration/2020/challenges-arise-as-the-courtroom-goes-virtual/)

[15] Kara Berg, Leading Michigan judges call for return to in-person court proceedings, Detroit News (Oct. 2, 2024, 9:36 PM) (https://detroitnews.com/story/news/local/michigan/2024/10/02/leading-michigan-judges-call-for-return-to-in-person-court-proceedings/75484358007/#:~:text=Courts%20began%20heavily%20using%20Zoom,is%20determined%20by%20individual%20judges).


Are AI Overviews Creating New Risk of Libel for Search Engines?

Eleanor Nagel-Bennett, MJLST Staffer

47 U.S.C. § 230 of the Communications Decency Act (“CDA”) protects online service providers from civil liability for content published on their servers by third parties. Essentially, it clarifies that if a Google search for one’s name produced a link to a blog post containing false and libelous content about that person, the falsely accused searcher could pursue a claim of defamation against the publisher of the blog, but not against Google. Under § 230, Google is not considered the speaker or the publisher of the libelous statements on the blog, despite returning the libelous results on the search engine results page. Specifically, § 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” for purposes of civil penalties.[i]

However, in May 2024 Google rolled out an “AI Overview” feature on its search engine. The AI Overview is powered by Gemini, Google’s generative artificial intelligence chatbot.[ii] Gemini generates a response to each Google search by combining information from internet sources and writing a complete overview answer to the search query using “multi-step reasoning, planning and multimodality” functions.[iii] After a user submits a query, the AI Overview is displayed at the top of the search results. In the first few weeks, Google’s AI Overview suffered from hallucinations, producing “odd and erroneous” outputs.[iv] Some of the odd results were obviously false, such as suggesting a user try adhering cheese to their pizza with a glue stick.[v]

Besides going viral online, the silly results were largely inconsequential. However, there were also several more serious reports of Google’s AI Overview feature generating misinformation that was more difficult to identify as false. One such result claimed that President Barack Obama was the first Muslim President, a popular but demonstrably false conspiracy theory that has circulated on the internet for years, while another told users that certain poisonous mushrooms were safe for human consumption.[vi] Google has since changed the data pool used to produce AI Overviews, and now rarely produces blatantly false results — but is “rarely” enough when 8.5 billion searches are run on Google per day?[vii]

This raises the question: can search engines be held liable for libelous content published by their generative AI? A plaintiff will have to prove to the court that § 230 of the Communications Decency Act is not a statutory bar to claims against generative AI. A recent consensus of legal scholars anticipates that courts will likely find that the CDA would not bar claims against a company producing libelous content through generative AI because content produced by generative AI is original work, “authored” by the AI itself.[viii]

For an illustrative comparison, consider how defamation claims against journalists work as compared to defamation claims over traditional search engine results. While a journalist may write stories based on interviews, research, and experience, the words she publishes are her own creation, and she can be held liable for them despite sourcing some material from other speakers. Traditional search engines, on the other hand, have historically posted the sourced material directly to the reader, so they are not the “speaker” and are therefore insulated from defamation claims. Enter generative AI, the output of which is likely to be considered original work by courts, and that insulation may erode.[ix] Effectively, introducing an AI Overview feature may waive the statutory bar to claims under § 230 of the CDA that search engines have relied upon to avoid liability for defamation.

But even without an outright statutory bar to defamation claims over a search engine’s libelous AI output, there is disagreement over whether generative AI output is relied upon seriously enough by humans to give rise to a defamation claim. Some believe that AI-generated text should not be interpreted as a reasonably perceived factual claim, and therefore argue that AI-generated content cannot give rise to a claim for defamation.[x] This is where the legitimacy of a result displayed on a popular search engine comes into play. Even if AI-generated text is not ordinarily perceived as a factual claim, displaying it at the top of a search engine’s results page lends the result a weight and authority that users might not otherwise give to AI outputs.[xi]

While no landmark case law on the liability of an AI machine for libelous output has been developed to date, several lawsuits have already been filed on the question of liability assignment for libelous content produced by generative AI, including at least one case against a search engine for AI generated output displayed on a search engine results page.[xii]

Despite the looming potential for consequences, most AI companies have neglected to give attention to the risk of libel created by the operation of generative AI.[xiii] While all AI companies should pay attention to the risks, search engines previously insulated from civil liability by § 230 of the CDA should be especially wary of just how much liability they may be opening themselves up to by including an AI Overview on their results pages.

 

Notes

[i] 47 U.S.C. §230(c)(1).

[ii] Liz Reid, Generative AI in Search: Let Google do the searching for you, Google (May 14, 2024), https://blog.google/products/search/generative-ai-google-search-may-2024/.

[iii] Id.

[iv] Liz Reid, AI Overviews: About last week, Google (May 30, 2024), https://blog.google/products/search/ai-overviews-update-may-2024/.

[v] Matt O’Brien, Google makes fixes to AI-generated search summaries after outlandish answers went viral, The Associated Press (May 30, 2024), https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8.

[vi] Id.

[vii] Jordan Brannon, Game-Changing Google Search Statistics for 2024, Coalition (Apr. 5, 2024), https://coalitiontechnologies.com/blog/game-changing-google-search-statistics-for-2024.

[viii] Joel Simon, Can AI be sued for defamation?, Colum. Journalism Rev. (March 18, 2024).

[ix] Id.

[x]  See Eugene Volokh, Large Libel Models? Liability For AI Output, 3 J. Free Speech L. 489, 498 (2023).

[xi] Id.

[xii] In July of 2023, Jeffery Battle of Maryland filed suit against Microsoft over an AI-generated search result on Bing accusing him of crimes he did not commit. The plaintiff, Jeffery Battle, is a veteran, business owner, and aerospace professor. When his name is searched online, however, Bing’s AI overview accuses him of crimes committed by a different man, Jeffrey Leon Battle, who pled guilty to seditious conspiracy and levying war against the United States after he tried to join the Taliban in the wake of 9/11. Bing’s search engine results page overview, powered by ChatGPT, combines information about the two men into one. See id. at 492.

[xiii] Id. at 493.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new rules overturned the Illinois Supreme Court ruling in Cothron v. White Castle System Inc. that allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the new reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

Passed in 2008, BIPA was one of the earliest consumer protection laws governing biometric data collection. At that time, major corporations were piloting finger scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers but also to prescribe preventative measures for companies to follow. BIPA’s protections are broad: companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected and why, and state how long they intend to store it.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as well as a company stores other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce data privacy protections. Following its passage, waves of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was never actually misused.[10] Plaintiffs could recover for every violation under BIPA, and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Since litigants could collect damages on every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and if found liable, companies would owe, at minimum, $1,000 per violation. This lack of predictability often pushed corporate liability insurers toward settling rather than risking such large payouts.

The 2023 ruling in Cothron implored the legislature to address concerns about disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller or mid-size companies.[14] The new provisions amending BIPA target the Court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.

BIPA Reforms Mark a Trend Towards Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need it most. The stakes for mishandling biometric data are much higher than those for other collected data. While social security numbers and credit card numbers can be canceled and changed – with varying degrees of ease – most constituents would be unwilling to change their faces and fingerprints for the sake of _____.[15] Ongoing and future technology developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane, and our family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovery on a per-person basis instead of a per-violation basis means they could be harmed again by the same company after recovering once, with no further redress.

Corporations, however, have been calling for reforms for years and believe that these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers also encouraged companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may only recover once, they can still recover for actual damages if a company’s actions resulted in more harm than simply violating BIPA’s provisions.  Awards on a per-person basis can still result in hefty settlements or awards that will hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions are applied will be left for state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can only recover from an entity once, are past litigants barred from participating in future actions regarding similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator, https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham (Aug. 2, 2024, 3:13PM).

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


Moderating Social Media Content: A Comparative Analysis of European Union and United States Policy

Jaxon Hill, MJLST Staffer

In the wake of the Capitol Hill uprising, former President Donald Trump had several of his social media accounts suspended.1 Twitter explained that its decision to suspend Trump’s account was “due to the risk of further incitement of violence.”2 Though this decision caught a lot of attention in the public eye, Trump was not the first figure in the political sphere to have his account suspended.3 In response to the social media platforms’ alleged censorship, some states, mainly Florida and Texas, passed anti-censorship laws that limit the ability of social media companies to moderate content.4 

Now, as litigation ensues for Trump and the social media companies fighting the Texas and Florida legislation, the age-old question rears its ugly head: what is free speech?5 Do social media companies have a right to limit free speech? Social media companies are not bound by the First Amendment.6 Thus, barring valid legislation that says otherwise, they are allowed to restrict or moderate content on their platforms. But should they, and, if so, how? How does the answer to these questions differ for public officials on social media? To analyze these considerations, it is worthwhile to look beyond the borders of the United States. This analysis is not meant to presuppose that there is any wrongful conduct on the part of social media companies. Rather, this serves as an opportunity to examine an alternative approach to social media content moderation that could provide more clarity to all interested parties. 

In the European Union, social media companies are required to provide clear and specific information whenever they restrict content on their platforms.7 These statements are called “Statements of Reasons” (“SoRs”), and they must include some reference to the law that the moderated post violated.8 All SoRs are made publicly available to ensure transparency between users and the platforms.9 

An analysis of these SoRs yielded mixed results as to their efficacy, but it opened the door to potential improvements.10 Ultimately, the analysis showed inconsistencies among the various platforms in how and why they moderate content, but those inconsistencies may create an opening for legislators to clarify social media guidelines.11 

Applying this same principle domestically could allow for greater transparency between consumers, social media companies, and the government. By providing publicly available rationales for any moderation, social media companies could continue to remove illegal content without crossing the line into censorship. It is worth noting, though, that this policy likely carries negative financial implications. With states potentially implementing vastly different policies, social media companies may face increased costs to ensure they are in compliance wherever they operate.12 Nevertheless, absorbing these costs up front may be preferable to “censorship” or “extremism, hatred, [or] misinformation and disinformation.”13 

In terms of the specific application to government officials, it may seem this alternative fails to offer any clarity to the current state of affairs. This assertion may have some merit, as government officials in the EU have still been able to post harmful social media content without it being moderated.14 With that being said, politicians engaging with social media is a newer development—domestically and internationally—so more research needs to be conducted to determine best practices. Regardless, increasing transparency should bar social media companies from making moderation choices that are unfounded in the law.

 

Notes

1 Bobby Allyn & Tamara Keith, Twitter Permanently Suspends Trump, Citing ‘Risk Of Further Incitement Of Violence’, NPR (Jan. 8, 2021), https://www.npr.org/2021/01/08/954760928/twitter-bans-president-trump-citing-risk-of-further-incitement-of-violence.

2 Id.

3 See Christian Shaffer, Deplatforming Censorship: How Texas Constitutionally Barred Social Media Platform Censorship, 55 Tex. Tech L. Rev. 893, 903-04 (2023) (giving an example of both conservative and liberal users that had their accounts suspended).

4 See Daveed Gartenstein-Ross et al., Anti-Censorship Legislation: A Flawed Attempt to Address a Legitimate Problem, Lawfare (July 27, 2022), https://www.lawfaremedia.org/article/anti-censorship-legislation-flawed-attempt-address-legitimate-problem (explaining the Texas and Florida legislation in-depth).

5 See, e.g., Trump v. United States, 219 L. Ed. 2d 991, 1034 (2024) (remanding the case to the lower courts); Moody v. NetChoice, LLC, 219 L. Ed. 2d 1075, 1104 (2024) (remanding the case to the lower courts).

6 Evelyn Mary Aswad, Taking Exception to Assessments of American Exceptionalism: Why the United States Isn’t Such an Outlier on Free Speech, 126 Dick. L. Rev. 69, 72 (2021).

7 Chiara Drolsbach & Nicolas Pröllochs, Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (2023), https://arxiv.org/html/2312.04431v1/#bib.bib56.

8  Id.

9 Id.

10 Id. This analysis showed that (1) content moderation varies across platforms in number, (2) content moderation is most often applied to videos and text, whereas images are moderated much less, (3) most rule-breaking content is decided via automated means (except X), (4) there is much variation among how often the platforms choose to moderate illegal content, and (5) the primary reasons for moderation include falling out of the scope of the platform’s services, illegal or harmful speech, and sexualized content. Misinformation was very rarely cited as the reason for moderation.

11 Id.

12 Perkins Coie LLP, More State Content Moderation Laws Coming to Social Media Platforms (November 17, 2022), https://perkinscoie.com/insights/update/more-state-content-moderation-laws-coming-social-media-platforms (recommending social media companies to hire counsel to ensure they are complying with various state laws). 

13 See, e.g., Shaffer, supra note 3 (detailing the harms of censorship); Gartenstein-Ross, supra note 4 (outlining the potential harms of restrictive content moderation).

14 Goujard et al., Europe’s Far Right Uses TikTok to Win Youth Vote, Politico (Mar. 17, 2024), https://www.politico.eu/article/tiktok-far-right-european-parliament-politics-europe/ (“Without evidence, [Polish far-right politician, Patryk Jaki] insinuated the person who carried out the attack was a migrant”).

 


A Nation of Misinformation? The Attack on the Government’s Efforts to Stop Social Media Misinformation

Alex Mastorides, MJLST Staffer

Whether and how misinformation on social media can be curtailed has long been the subject of public debate. This debate has increasingly gained momentum since the beginning of the COVID-19 pandemic, at a time when uncertainty was the norm and people across the nation scrambled for information to help them stay safe. Misinformation regarding things like the origin of the pandemic, the treatment that should be administered to COVID-positive people, and the safety of the vaccine has been widely disseminated via social media platforms like TikTok, Facebook, Instagram, and X (formerly known as Twitter). The federal government under the Biden Administration has sought to curtail this wave of misinformation, characterizing it as a threat to public health. However, many have accused it of unconstitutional acts of censorship in violation of the First Amendment.

The government cannot directly interfere with the content posted on social media platforms; the right to moderate that content belongs to the private companies that own the platforms. Instead, the government’s approach has been to communicate with social media companies, encouraging them to address misinformation promulgated on their sites. Per the Biden Administration: “The President’s view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections.”[1]

Lower Courts Have Ruled that the Government May Not Communicate with Social Media Companies for Purposes of Curtailing Online Misinformation

The case of Murthy v. Missouri may result in further clarity from the Supreme Court regarding the powers of the federal government to combat misinformation on social media platforms. The case began in the United States District Court for the Western District of Louisiana when two states–Missouri and Louisiana–along with several private parties filed suit against numerous federal government entities, including the White House and agencies such as the Federal Bureau of Investigation, the Centers for Disease Control & Prevention, and the Cybersecurity & Infrastructure Security Agency.[2] These entities have repeatedly communicated with social media companies, allegedly encouraging them to remove or censor the plaintiffs’ online content due to misinformation about the COVID-19 pandemic (including content discussing “the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story.”)[3] The plaintiffs allege that these government entities “‘coerced, threatened, and pressured [the] social-media platforms to censor [them]’ through private communications and legal threats” in violation of the plaintiffs’ First Amendment rights.[4]

The District Court agreed with the plaintiffs, issuing a preliminary injunction on July 4, 2023, that greatly restricted the entities’ ability to contact social media companies (especially with regard to misinformation).[5] This approach was predicated on the idea that government communications with social media companies about misinformation on their platforms are essentially coercive, forcing the companies to censor speech at the government’s demand. The injunction was appealed to the Fifth Circuit, which narrowed the injunction’s scope to just the White House, the Surgeon General’s office, and the FBI.[6]

Following the Fifth Circuit’s ruling on the preliminary injunction, the government parties to the Murthy case applied for a stay of the injunction with the United States Supreme Court.[7] The government further requested that the Court grant certiorari with regard to the questions presented by the injunction. The government attacked the injunction on three grounds. The first is that the plaintiffs did not have standing to sue under Article III because they did not show that the censoring effect on their posts was “fairly traceable” to the government or “redressable by injunctive relief.”[8]

The second argument is that the conduct at issue does not constitute a First Amendment free speech violation.[9] This claim is based on the state action doctrine, which outlines the circumstances in which the decisions of private entities are considered to be “state action.” If a private social media company’s decisions to moderate content are sufficiently “coerced” by the government, the law treats those decisions as if they were made by the government directly.[10] In that situation, the First Amendment would apply.[11] The Supreme Court has advocated for a strict evaluation of what kind of conduct might be considered “coercive” under this doctrine in an effort to avoid infringing upon the rights of private companies to moderate speech on their platforms.[12] The government’s Application for Stay argues that the Fifth Circuit’s decision is an overly broad application of the doctrine in light of the government’s conduct.[13]

Third, the government maintains that the preliminary injunction is overly broad because it “covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics.”[14]

The Supreme Court Granted the Requested Stay and Granted Certiorari Regarding Three Key Questions

The Supreme Court granted the government’s request for a stay on the preliminary injunction. The Court simultaneously granted certiorari with respect to the questions posed in the government’s Application for Stay: “(1) Whether respondents have Article III standing; (2) Whether the government’s challenged conduct transformed private social-media companies’ content-moderation decisions into state action and violated respondents’ First Amendment rights; and (3) Whether the terms and breadth of the preliminary injunction are proper.”[15]

The Court gave no explanation for its grant of the request for stay or for its grant of certiorari. However, Justice Alito, joined by Justice Thomas and Justice Gorsuch, dissented from the grant of the application for stay, arguing that the government has not shown a likelihood that denial of a stay will result in irreparable harm.[16] He contends that the government’s argument about irreparable harm rests on hypotheticals rather than on actual “concrete” proof that harm is imminent.[17] The dissent also expresses disapproval of the government’s approach to social media misinformation: “At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”[18]

Justice Alito noted in his dissent that the Court’s review of the case may not be complete until the spring of 2024.[19] The stay on the preliminary injunction will remain in effect until that time.

Notes

[1] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[2] Missouri v. Biden, 83 F.4th 350, 359 (5th Cir. 2023).

[3] Id. at 359.

[4] Id. at 359-60.

[5] Id. at 360.

[6] Id.

[7] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[8] Id. at 2.

[9] Id. at 3.

[10] Id. at 10.

[11] Id.

[12] Id. at 4 (citing Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019)).

[13] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[14] Id. at 5.

[15] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[16] On Application for Stay at 3, Murthy v. Missouri, No. 23A243 (23-411) (October 20, 2023) (Alito, J. dissenting) (citing Hollingsworth v. Perry, 558 U.S. 183, 190 (2010)).

[17] Id. at 3-4.

[18] Id. at 5.

[19] Id. at 2.


Will Moody v. NetChoice, LLC End Social Media?

Aidan Vogelson, MJLST Staffer

At first, the concept that social media’s days may be numbered seems outlandish. Billions of people utilize social media every day and, historically, social media companies and other internet services have enjoyed virtually unfettered editorial control over how they manage their services. This freedom stems from 47 U.S.C. § 230.[1] Section 230 shields providers from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[2] In other words, if someone makes an obscene post on Facebook and Facebook removes the post, Facebook cannot be held liable for any violation of protected speech. Section 230 has long allowed social media companies to self-regulate by removing posts that violate their terms of service, but on September 29, the Supreme Court granted a writ of certiorari in Moody v. NetChoice, LLC, a case that may fundamentally change how social media companies operate by allowing the government at the state or federal level to regulate around their § 230 protections.

At issue in Moody is whether the methods social media companies use to moderate their content are permissible under the First Amendment and whether social media companies may be classified as common carriers.[3] Common carriers are services that hold themselves out to the public and transport people or goods.[4] While the term “common carrier” once referred only to public transportation services like railroads and airlines, the definition now encompasses communications services such as radio and telephone companies.[5] Common carriers are subject to greater regulation, including anti-discrimination rules, due to their market domination of a necessary public service.[6] For example, given our reliance on airlines and telephone companies in performing necessary services, common carrier regulations ensure that an airline cannot decline to sell tickets to passengers because of their religious beliefs and a cellular network cannot bar service to customers because it disapproves of the content of their phone conversations. If social media companies are held to be common carriers, the federal government and the state governments could impose regulations on what content those companies restrict.

Moody stems from state efforts to do just that. The Florida legislature passed Senate Bill 7072 to curtail what it saw as social media censorship of conservative voices.[7] The Florida law allows for significant fines against social media companies that engage in “unfair censorship” or “deplatform” political candidates, as Twitter (now X) did when it removed former President Trump from its platform for falsely claiming that the 2020 election was stolen.[8] Florida is not the only state to pursue a common carrier designation for social media. Texas passed a similar law in 2021 (which is currently enjoined in NetChoice, LLC v. Paxton and will be addressed alongside Moody), and the attorney general of Ohio has sued Google, asking the court to declare that Google is a common carrier in order to prevent the company from prioritizing its own products in search results.[9] Ohio v. Google LLC is ongoing, and while the judge partially granted Google’s motion to dismiss, he found that Ohio’s claim that Google is a common carrier is cognizable.[10] Given states’ increasing efforts to regulate social media, the Supreme Court’s ruling is necessary to settle this vital issue.

Supporters of classifying social media companies as common carriers argue that social media is simply the most recent advancement in communication and should accordingly be designated a common carrier, just as telephone operators and cellular networks are. They explain that designating social media companies as common carriers is actually consistent with the broad protections of § 230, as regulating speech on a social media site regulates the speech of users, not the speech of the company.[11]

However, these supporters ignore that social media companies rely on First Amendment and § 230 protections when they curate the content on their sites. Without the ability to promote or suppress posts and users, these companies would not be able to provide the personalized content that attracts users, and social media would likely become an even greater hotbed of misinformation and hate speech than it already is. The purpose of § 230 is to encourage the development of a thriving online community, which is why Congress chose to shield internet services from liability for content. Treating social media companies as common carriers would stifle that aim.

It is unclear how the Court will rule. In his concurrence in Biden v. Knight First Amend. Inst., Justice Thomas indicated that he may be willing to consider social media companies as common carriers.[12] The other justices have yet to write or comment on this issue, but whatever their decision may be, the ramifications of this case will be significant. The conservative politicians behind the Florida and Texas laws have specifically decried what they argue is partisan censorship of conservative views about the Covid-19 pandemic and the 2020 election, yet these very complaints demonstrate the need for social media companies to exercise editorial control over their content. Covid-19 misinformation unquestionably led to unnecessary deaths during the pandemic.[13] Misinformation about the 2020 election led to a violent attempted overthrow of our government. These threats of violence and dangerous misinformation are the harms that Congress created § 230 to avoid. Without the ability for social media companies to curate content, social media will assuredly contain more racism, misinformation, and calls for violence. Given the omnipresence of social media in our modern world, our reliance on it for communication, and the misinformation it spreads, few would argue that social media needs no regulation at all. But if the Court allows the Florida and Texas laws at issue in Moody and NetChoice to stand, it will pave the way for a patchwork of state laws that may render social media unworkable.

Notes

[1] See 47 U.S.C. § 230.

[2] 47 U.S.C. §230(c)(2)(A).

[3] Moody v. NetChoice, LLC, SCOTUSblog, https://www.scotusblog.com/case-files/cases/moody-v-netchoice-llc/.

[4] Alison Frankel, Are Internet Companies ‘Common Carriers’ of Content? Courts Diverge on Key Question, REUTERS, (May 31, 2022, 5:52 PM), https://www.reuters.com/legal/transactional/are-internet-companies-common-carriers-content-courts-diverge-key-question-2022-05-31/.

[5] Id.

[6] Id.

[7] David Savage, Supreme Court Will Decide if Texas and Florida Can Regulate Social Media to Protect ‘Conservative Speech’, LA TIMES (Sept. 29, 2023, 8:33 AM), https://www.msn.com/en-us/news/us/supreme-court-will-decide-if-texas-and-florida-can-regulate-social-media-to-protect-conservative-speech/ar-AA1hrE2s.

[8] Id.

[9] AG Yost Files Landmark Lawsuit to Declare Google a Public Utility, OHIO ATTORNEY GENERAL’S OFFICE (June 8, 2021), https://www.ohioattorneygeneral.gov/Media/News-Releases/June-2021/AG-Yost-Files-Landmark-Lawsuit-to-Declare-Google-a.

[10] Ohio v. Google LLC, No. 21-CV-H-06-0274 (Ohio Misc. 2022), https://fingfx.thomsonreuters.com/gfx/legaldocs/gdpzyeakzvw/frankel-socialmediacommoncarrier–ohioruling.pdf.

[11] John Villasenor, Social Media Companies and Common Carrier Status: A Primer, BROOKINGS INST. (Oct. 27, 2022), https://www.brookings.edu/articles/social-media-companies-and-common-carrier-status-a-primer/.

[12] Biden v. Knight First Amend. Inst., 141 S. Ct. 1220 (2021),  https://www.law.cornell.edu/supremecourt/text/20-197.

[13] Alistair Coleman, ’Hundreds Dead’ Because of Covid-19 Misinformation, BBC (Aug. 12, 2020), https://www.bbc.com/news/world-53755067.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as a violation of First Amendment rights and as unnecessary, given private companies’ incentives to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1st, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations, while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election”.[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, civil remedies can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years’ imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years’ imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed”.[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as viewers may find it difficult to dissociate their recollection of the deepfake imagery from the victim.

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology, the law leaves the door open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical, worried both about First Amendment rights and about broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who act anonymously. Proposed federal legislation would require that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements”.[13] However, opponents view this as useless legislation. Deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first place.
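To see why opponents consider such a mandate easy to circumvent, consider a minimal, purely hypothetical Python sketch. The bill does not specify a watermarking mechanism, and the metadata field below is invented for illustration; the point is simply that a disclosure flag stored alongside the file is trivial to strip.

```python
# Purely illustrative sketch: a hypothetical metadata-based "altered media" flag.
# The proposed bill does not specify a mechanism; the field name here is invented.
# Watermarks embedded in the audio or pixel data are harder, but not impossible,
# to remove for someone capable of producing an advanced deepfake.

def label_as_altered(metadata: dict) -> dict:
    """Embed a disclosure flag, as a watermark mandate might require."""
    tagged = dict(metadata)
    tagged["altered_media"] = True  # hypothetical disclosure field
    return tagged

def strip_label(metadata: dict) -> dict:
    """What a bad actor would do: drop the disclosure flag before re-uploading."""
    cleaned = dict(metadata)
    cleaned.pop("altered_media", None)
    return cleaned

video_metadata = {"title": "statement.mp4", "duration_s": 42}
tagged = label_as_altered(video_metadata)
print(tagged)               # {'title': 'statement.mp4', 'duration_s': 42, 'altered_media': True}
print(strip_label(tagged))  # {'title': 'statement.mp4', 'duration_s': 42}
```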

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X’s “Community Notes” feature, which allows videos created using deepfake technology to remain on the platform but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without feeling that their free speech is being infringed. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


Perhaps Big Tech Regulation Belongs on Congress’s For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a Congressional panel for five hours in order to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States was threatening a ban unless TikTok’s parent company, ByteDance, sold its stake in the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are allegedly concerned with the possibility of the Chinese government manipulating users’ experience on the platform or threatening the security of the data of its more than 150 million users in the United States.[2] Despite Chew’s testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, preventing Chinese interference on the platform and recommending content to U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

In terms of what’s to come for TikTok’s future in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023, with information on “when TikTok will be banned,” claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] In order to cut off the platform’s access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation “requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern.”[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok was successful in quashing the effort, and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, “To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can’t be addressed in narrower ways. The government hasn’t demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly.”[10]

Despite what Congress may want the public to think, it certainly has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing is stopping U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the main purpose of the Act is to limit anticompetitive conduct by large technology companies, it includes several provisions protecting the personal data of users of defined “gatekeeper” firms. Under the Act, a gatekeeper is a company that provides services such as online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communications services; operating systems; web browsers; and online advertising services that are gateways for businesses to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertisement services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data, unless the user gives explicit consent.[13]
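To make those consent obligations concrete, here is a minimal illustrative sketch of how a gatekeeper might gate the three restricted data practices on explicit opt-in consent. The practice names and the consent model are assumptions made for illustration only, not language from the regulation.

```python
# Illustrative sketch only: gating the three data practices described above on
# explicit end-user consent. Practice names and the consent model are invented
# for illustration and are not terms from the Digital Markets Act itself.

RESTRICTED_PRACTICES = {
    "process_for_ads",        # processing personal data to provide online advertising services
    "combine_personal_data",  # combining or cross-using personal data across services
    "cross_service_sign_in",  # signing users into other services to combine their data
}

def practice_allowed(practice: str, user_consents: dict) -> bool:
    """A restricted practice is allowed only with the user's explicit opt-in consent."""
    if practice not in RESTRICTED_PRACTICES:
        return True  # not one of the practices the Act restricts
    return user_consents.get(practice, False)  # no recorded consent means not allowed

consents = {"process_for_ads": True}  # user opted in to ad personalization only
print(practice_allowed("process_for_ads", consents))        # True
print(practice_allowed("combine_personal_data", consents))  # False
```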

The penalties associated with violations of the Act give it some serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper’s total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement laid out in specific articles at some point in the eight preceding years.[14] For any company, not limited to gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles in the Act. Finally, in order to compel any company to comply with specific decisions of the Commission and other articles in the regulation, the Commission may impose periodic penalty payments of up to 5% of the average daily worldwide turnover in the preceding year, per day.[15]
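To put those percentages in perspective, here is a short worked example using an entirely hypothetical gatekeeper with EUR 50 billion in prior-year worldwide turnover; the turnover figure is assumed purely for illustration.

```python
# Worked example of the fine caps described above, using a hypothetical gatekeeper
# with EUR 50 billion in prior-year worldwide turnover (an assumed figure).

annual_turnover_eur = 50_000_000_000            # hypothetical prior-year worldwide turnover
avg_daily_turnover_eur = annual_turnover_eur / 365

first_infringement_cap = 0.10 * annual_turnover_eur   # up to 10% for noncompliance
repeat_infringement_cap = 0.20 * annual_turnover_eur  # up to 20% for repeat infringements
info_failure_cap = 0.01 * annual_turnover_eur         # up to 1% for failing to provide information
daily_penalty_cap = 0.05 * avg_daily_turnover_eur     # up to 5% of average daily turnover, per day

print(f"First infringement cap:  EUR {first_infringement_cap:,.0f}")    # EUR 5,000,000,000
print(f"Repeat infringement cap: EUR {repeat_infringement_cap:,.0f}")   # EUR 10,000,000,000
print(f"Information failure cap: EUR {info_failure_cap:,.0f}")          # EUR 500,000,000
print(f"Daily penalty cap:       EUR {daily_penalty_cap:,.0f} per day") # about EUR 6,849,315 per day
```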

If U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about preventing the spread of misinformation on the platform, and truly believe, as Representative Gus Bilirakis claims, that it is “literally leading to death” and that “[w]e must save our children from big tech companies” who allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply: regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies “to collect or provide data from abroad to the Chinese government in a way that violated local laws…”[17] During his testimony, Chew also argued that TikTok is no different from other social media giants, and has even sought to put in place stronger safeguards than its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on doing so.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id., Art. 3, 2022 O.J. L 265/28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[17] David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.