First Amendment

Modern Misinformation: Tort Law’s Limitations

Anzario Serrant, MJLST Staffer

Since the turn of the millennium, the number of active internet users, defined as those who have accessed the internet in the last month, has increased by more than one thousand percent.[1] The internet–and technology as a whole–has planted its roots deeply in our everyday lives and shaped the world into what it is today. As the internet transformed, so did our society, shifting from a time when the internet was used solely by government entities and higher-learning institutions[2] to now, when over 60% of the world’s population has regular access to cyberspace.[3] The ever-evolving nature of the internet and technology has brought an ease and convenience never before imagined while also fostering global connectivity. Although this connection offers the immediate gratification of instantaneously communicating with friends hundreds of miles away, it has also created an arena conducive to the spread of false or inaccurate information—both deliberate and otherwise.

The evolution of misinformation and disinformation has radically changed how societies interact with information, posing new challenges to individuals, governments, and legal systems. Misinformation, the sharing of a verifiably false statement without intent to deceive, and disinformation, a subset of misinformation distinguished by intent to mislead and actual knowledge that the information is false, are not new phenomena.[4] They have existed throughout history, from the spread of rumors during the Black Death[5] to misinformation about HIV/AIDS in the 1980s.[6] In both examples, misinformation promoted ineffective measures, increased ostracization, and ultimately allowed for the loss of countless lives. Today, the internet has exponentially increased the speed and scale at which misinformation spreads, making our society even more vulnerable to the associated harms. But who should bear liability for these harms—individuals, social media companies, or both? And does existing tort law provide adequate remedies to offset these harms?

The Legal Challenge

Given the global reach of social media and the proliferation of both misinformation and disinformation, one critical question arises: Who should be held legally responsible when misinformation causes harm? This question is becoming more pressing, particularly in light of “recent” events like the COVID-19 pandemic, during which unproven treatments were promoted on social media, leading to widespread confusion and, in some cases, physical harm.[7]

Under tort law, legal remedies exist that could potentially address the spread and use of inaccurate information in situations involving a risk of physical harm. These include fraudulent or negligent misrepresentation, conscious misrepresentation involving risk of physical harm, and negligent misrepresentation involving risk of physical harm.[8] However, these legal concepts were developed prior to the internet and applying them to the realm of social media remains challenging.

Fraudulent Misrepresentation and Disinformation

Current tort law provides limited avenues for addressing disinformation, especially on social media. However, fraudulent misrepresentation can help tackle cases involving deliberate financial deception, such as social media investment scams. These scams arguably meet the fraudulent misrepresentation criteria—false promises meant to induce investment, resulting in financial losses for victims.[9] However, the broad, impersonal nature of social media complicates proving “justifiable reliance.” For instance, would a reasonable person rely on an Instagram post from a stranger to make an investment decision?

In limited instances, courts applying a more subjective analysis might be willing to find the victim’s reliance justifiable, but that still leaves many victims unprotected.[10] Given these challenges and the limited prospects for success, it may be more effective to consider the role of social media platforms in spreading disinformation.

Conscious Misrepresentation Involving Risk of Physical Harm (CMIRPH)

Another tort that applies in limited circumstances is CMIRPH. This tort applies when false or unverified information is knowingly spread to induce action, or with disregard for the likelihood of inducing action, that carries an unreasonable risk of physical harm.[11] The most prominent example occurred during the COVID-19 pandemic, when false information about hydroxychloroquine and chloroquine spread online, with some public figures promoting the drugs as cures.[12] In such cases, those spreading false information knew, or should have known, that they were not competent to make those statements and that the statements posed serious risks to public health.

While this tort could be instrumental in holding individuals accountable for spreading harmful medical misinformation, challenges arise in establishing intent and reliance, and the broad scope of social media’s reach makes it difficult to apply traditional legal remedies. Moreover, because representations of opinion are covered by the tort,[13] First Amendment arguments would likely be raised if liability were placed on people who publicly posted their inaccurate opinions.

Negligent Misrepresentation and Misinformation

While fraudulent misrepresentation applies to disinformation, negligent misrepresentation is better suited to misinformation. A case for negligent misrepresentation must demonstrate (1) the declarant’s pecuniary interest in the transaction, (2) false information supplied for the guidance of others, (3) justifiable reliance, and (4) breach of reasonable care.[14]

Applying negligent misrepresentation to online misinformation proves difficult. For one, the tort requires that the defendant have a pecuniary interest in the transaction. Much of the misinformation inadvertently spread on social media does not involve financial gain for the poster. Moreover, negligent misrepresentation is limited to cases where misinformation was directed at a specific individual or a defined group, making it hard to apply to content posted on public platforms meant to reach as many people as possible.[15]

Even if these obstacles are overcome, the problem of contributory negligence remains. Courts may find that individuals who act on information from social media without verifying its accuracy bear some responsibility for the harm they suffer.

Negligent Misrepresentation Involving Risk of Physical Harm (NMIRPH)

In cases where there is risk of physical harm, but no financial loss, NMIRPH applies.[16] This tort is particularly relevant in the context of social media, where misinformation about health treatments can spread rapidly—often without monetary motives.

A notable example involves the spread of false claims about natural remedies in African and Caribbean cultures. In these communities, it is common to see misinformation about the health benefits of certain fruits—such as soursop—which is widely believed to have cancer-curing properties. Social media posts frequently promote such claims, leading individuals to rely on these remedies instead of seeking conventional medical treatment, sometimes with harmful results.

In these cases, the tort’s elements are met. False information is shared, individuals reasonably rely on it—within their cultural context—and physical harm follows. However, applying this tort to social media cases is challenging. Courts must assess whether reliance on such information is reasonable and whether the sharer breached a duty of care. Causation is also difficult to prove given the multiple sources of misinformation online. Moreover, the argument for subjective reliance is strongest within the context of smaller communities—leaving the vast majority of social media posts from strangers unprotected.

The Role of Social Media Platforms

One potential solution is to shift the focus of liability from individuals to the platforms themselves. Social media companies have largely been shielded from liability for user-generated content by Section 230 of the U.S. Communications Decency Act, which grants them immunity from being held responsible for third-party content. It can be argued that this immunity, which was granted to aid their development,[17] is no longer necessary given the vast power and resources these companies now hold. Moreover, blanket immunity may remove the incentive for these companies to innovate and find a solution that only they can provide. There is also an ability-to-pay problem: individual users may be unable to compensate victims for the widespread harm that social media platforms enable them to cause.

While this approach may offer a more practical means of addressing misinformation at scale, it raises concerns about free speech and the feasibility of monitoring all content posted on large platforms like Facebook, Instagram, or Twitter. Additionally, imposing liability on social media companies could incentivize them to over-censor, potentially stifling legitimate expression.[18]

Conclusion

The legal system must evolve to address the unique challenges posed by online platforms. While existing tort remedies like fraudulent misrepresentation and negligent misrepresentation offer potential avenues for redress, their application to social media is limited by questions of reliance, scope, and practicality. To better protect individuals from the harms caused by misinformation, lawmakers may need to consider updating existing laws or creating new legal frameworks tailored to the realities of the digital world. At the same time, social media companies must be encouraged to take a more active role in curbing the spread of false information, while balancing the need to protect free speech.

Solving the problem of misinformation requires a comprehensive approach, combining legal accountability, platform responsibility, and public education to ensure a more informed and resilient society.

 

Notes

[1] Hannah Ritchie et al., Internet, Our World in Data, (2023) ourworldindata.org/internet.

[2] See generally Barry Leiner et al., The Past and Future History of the Internet, 40 Commc’ns ACM 102 (1997) (discussing the origins of the internet).

[3] Lexie Pelchen, Internet Usage Statistics In 2024, Forbes Home, (Mar. 1, 2024) https://www.forbes.com/home-improvement/internet/internet-statistics/#:~:text=There%20are%205.35%20billion%20internet%20users%20worldwide.&text=Out%20of%20the%20nearly%208,the%20internet%2C%20according%20to%20Statista.

[4] Audrey Normandin, Redefining “Misinformation,” “Disinformation,” and “Fake News”: Using Social Science Research to Form an Interdisciplinary Model of Online Limited Forums on Social Media Platforms, 44 Campbell L. Rev. 289, 293 (2022).

[5] Melissa De Witte, For Renaissance Italians, Combating Black Plague Was as Much About Politics as It Was Science, According to Stanford Scholar, Stan. Rep., (Mar. 17, 2020) https://news.stanford.edu/stories/2020/05/combating-black-plague-just-much-politics-science (discussing that poor people and foreigners were believed to be the cause—at least partially—of the plague).

[6] 40 Years of HIV Discovery: The First Cases of a Mysterious Disease in the Early 1980s, Institut Pasteur, (May 5, 2023) https://www.pasteur.fr/en/research-journal/news/40-years-hiv-discovery-first-cases-mysterious-disease-early-1980s (“This syndrome is then called the ‘4H disease’ to designate Homosexuals, Heroin addicts, Hemophiliacs and Haitians, before we understand that it does not only concern ‘these populations.’”).

[7] See generally Kacper Niburski & Oskar Niburski, Impact of Trump’s Promotion of Unproven COVID-19 Treatments and Subsequent Internet Trends: Observational Study, J. Med. Internet Rsch., Nov. 22, 2020 (discussing the impact of former President Trump’s promotion of hydroxychloroquine); Matthew Cohen et al., When COVID-19 Prophylaxis Leads to Hydroxychloroquine Poisoning, 10 Sw. Respiratory & Critical Care Chrons. 52 (discussing increase in hydroxychloroquine overdoses following its brief emergency use authorization).

[8] Geelan Fahimy, Liable for Your Lies: Misrepresentation Law as a Mechanism for Regulating Behavior on Social Networking Sites, 39 Pepp. L. Rev. 367, 370–79 (2012).

[9] See Restatement (Second) of Torts § 525 (Am. L. Inst. 1977) (“One who fraudulently makes a misrepresentation of fact, opinion, intention or law for the purpose of inducing another to act or to refrain from action in reliance upon it, is subject to liability to the other in deceit for pecuniary loss caused to him by his justifiable reliance upon the misrepresentation.”).

[10] Justifiable reliance can be proven through either a subjective or objective standard. Restatement (Second) of Torts § 538 (Am. L. Inst. 1977).

[11] Restatement (Second) of Torts § 310 (Am. L. Inst. 1965) (“An actor who makes a misrepresentation is subject to liability to another for physical harm which results from an act done by the other or a third person in reliance upon the truth of the representation, if the actor (a) intends his statement to induce or should realize that is likely to induce action by the other, or a third person, which involves an unreasonable risk of physical harm to the other, and (b) knows (i) that the statement is false, or (ii) that he has not the knowledge which he professes.”).

[12] See Niburski, supra note 7, for a discussion of former President Trump’s statements.

[13] Restatement (Second) of Torts § 310 cmt. b (Am. L. Inst. 1965).

[14] Restatement (Second) of Torts § 552(1) (Am. L. Inst. 1977) (“One who, in the course of his business, profession or employment, or in any other transaction in which he has a pecuniary interest, supplies false information for the guidance of others in their business transactions, is subject to liability for pecuniary loss caused to them by their justifiable reliance upon the information, if he fails to exercise reasonable care or competence in obtaining or communicating the information.”).

[15] Liability under negligent misrepresentation is limited to the person or group that the declarant intended to guide by supplying the information. Restatement (Second) of Torts § 552(2)(a)(1) (Am. L. Inst. 1977).

[16] Restatement (Second) of Torts § 311 (Am. L. Inst. 1965) (“One who negligently gives false information to another is subject to liability for physical harm caused by action taken by the other in reasonable reliance upon such information, where such harm results (a) to the other, or (b) to such third persons as the actor should expect to be put in peril by the action taken. Such negligence may consist of failure to exercise reasonable care (a) in ascertaining the accuracy of the information, or (b) in the manner in which it is communicated.”).

[17] See George Fishback, How the Wolf of Wall Street Shaped the Internet: A Review of Section 230 of the Communications Decency Act, 28 Tex. Intell. Prop. L.J. 275, 276 (2020) (“Section 230 promoted websites to grow without [the] fear . . . of liability for content beyond their control.”).

[18] See Section 230, Elec. Frontier Found. https://www.eff.org/issues/cda230#:~:text=Section%20230%20allows%20for%20web,what%20content%20they%20will%20distribute (last visited Oct. 23, 2024) (“In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects.”).

 


A Nation of Misinformation? The Attack on the Government’s Efforts to Stop Social Media Misinformation

Alex Mastorides, MJLST Staffer

Whether and how misinformation on social media can be curtailed has long been the subject of public debate. This debate has increasingly gained momentum since the beginning of the COVID-19 pandemic, at a time when uncertainty was the norm and people across the nation scrambled for information to help them stay safe. Misinformation regarding things like the origin of the pandemic, the treatment that should be administered to COVID-positive people, and the safety of the vaccine has been widely disseminated via social media platforms like TikTok, Facebook, Instagram, and X (formerly known as Twitter). The federal government under the Biden Administration has sought to curtail this wave of misinformation, characterizing it as a threat to public health. However, many have accused it of unconstitutional acts of censorship in violation of the First Amendment.

The government cannot directly interfere with the content posted on social media platforms; this right is held by the private companies that own the platforms. Instead, the government’s approach has been to communicate with social media companies, encouraging them to address misinformation that is promulgated on their sites. Per the Biden Administration: “The President’s view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections.”[1]

Lower Courts Have Ruled That the Government May Not Communicate with Social Media Companies for Purposes of Curtailing Online Misinformation

The case of Murthy v. Missouri may result in further clarity from the Supreme Court regarding the powers of the federal government to combat misinformation on social media platforms. The case began in the United States District Court for the Western District of Louisiana when two states–Missouri and Louisiana–along with several private parties filed suit against numerous federal government entities, including the White House and agencies such as the Federal Bureau of Investigation, the Centers for Disease Control & Prevention, and the Cybersecurity & Infrastructure Security Agency.[2] These entities have repeatedly communicated with social media companies, allegedly encouraging them to remove or censor the plaintiffs’ online content due to misinformation about the COVID-19 pandemic (including content discussing “the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story.”)[3] The plaintiffs allege that these government entities “‘coerced, threatened, and pressured [the] social-media platforms to censor [them]’ through private communications and legal threats” in violation of the plaintiffs’ First Amendment rights.[4]

The District Court agreed with the plaintiffs, issuing a preliminary injunction on July 4, 2023 that greatly restricted the entities’ ability to contact social media companies (especially with regard to misinformation).[5] This approach was predicated on the idea that government communications with social media companies about misinformation on their platforms are essentially coercive, forcing the companies to censor speech at the government’s demand. The injunction was appealed to the Fifth Circuit, which narrowed its scope to just the White House, the Surgeon General’s office, and the FBI.[6]

Following the Fifth Circuit’s ruling on the preliminary injunction, the government parties to the Murthy case applied for a stay of the injunction with the United States Supreme Court.[7] The government further requested that the Court grant certiorari with regard to the questions presented by the injunction. The government attacked the injunction on three grounds. The first is that the plaintiffs did not have standing to sue under Article III because they did not show that the censoring effect on their posts was “fairly traceable” to the government or “redressable by injunctive relief.”[8]

The second argument is that the conduct at issue does not constitute a First Amendment free speech violation.[9] This claim is based on the state action doctrine, which outlines the circumstances in which the decisions of private entities are considered to be “state action.” If a private social media company’s decisions to moderate content are sufficiently “coerced” by the government, the law treats those decisions as if they were made by the government directly.[10] In that situation, the First Amendment would apply.[11] The Supreme Court has advocated for a strict evaluation of what kind of conduct might be considered “coercive” under this doctrine in an effort to avoid infringing upon the rights of private companies to modulate speech on their platforms.[12] The government’s Application for Stay argues that the Fifth Circuit’s decision is an overly broad application of the doctrine in light of the government’s conduct.[13]

Third, the government maintains that the preliminary injunction is overly broad because it “covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics.”[14]

The Supreme Court Granted the Requested Stay and Granted Certiorari Regarding Three Key Questions

The Supreme Court granted the government’s request for a stay on the preliminary injunction. The Court simultaneously granted certiorari with respect to the questions posed in the government’s Application for Stay: “(1) Whether respondents have Article III standing; (2) Whether the government’s challenged conduct transformed private social-media companies’ content-moderation decisions into state action and violated respondents’ First Amendment rights; and (3) Whether the terms and breadth of the preliminary injunction are proper.”[15]

The Court gave no explanation for its grant of the request for stay or for its grant of certiorari. However, Justice Alito, joined by Justice Thomas and Justice Gorsuch, issued a dissent from the grant of application for stay, arguing that the government has not shown a likelihood that denial of a stay will result in irreparable harm.[16] He contends that the government’s argument about irreparable harm comes from hypotheticals rather than from actual “concrete” proof that harm is imminent.[17] The dissent further displays a disapproving attitude of the government’s actions toward social media misinformation: “At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”[18]

Justice Alito noted in his dissent that the completion of the Court’s review of the case may not come until spring of next year.[19] The stay on the preliminary injunction will hold until that time.

Notes

[1] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[2] State v. Biden, 83 F.4th 350, 359 (5th Cir. 2023).

[3] Id. at 359.

[4] Id. at 359-60.

[5] Id. at 360.

[6] Id.

[7] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[8] Id. at 2.

[9] Id. at 3.

[10] Id. at 10.

[11] Id.

[12] Id. at 4 (citing Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019)).

[13] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[14] Id. at 5.

[15] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[16] On Application for Stay at 3, Murthy v. Missouri, No. 23A243 (23-411) (October 20, 2023) (Alito, J. dissenting) (citing Hollingsworth v. Perry, 558 U.S. 183, 190 (2010)).

[17] Id. at 3-4.

[18] Id. at 5.

[19] Id. at 2.


Will Moody v. NetChoice, LLC End Social Media?

Aidan Vogelson, MJLST Staffer

At first, the concept that social media’s days may be numbered seems outlandish. Billions of people use social media every day and, historically, social media companies and other internet services have enjoyed virtually unfettered editorial control over how they manage their services. This freedom stems from 47 U.S.C. § 230.[1] Section 230 shields providers from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[2]  In other words, if someone makes an obscene post on Facebook and Facebook removes the post, Facebook cannot be held liable for removing that speech, even if it is constitutionally protected. Section 230 has long allowed social media companies to self-regulate by removing posts that violate their terms of service, but on September 29, the Supreme Court granted a writ of certiorari in Moody v. NetChoice, LLC, a case that may fundamentally change how social media companies operate by allowing state and federal governments to regulate around their § 230 protections.

At issue in Moody is whether the methods social media companies use to moderate their content are permissible under the First Amendment and whether social media companies may be classified as common carriers.[3] Common carriers are services that hold themselves out to the public and transport people or goods.[4] While the term “common carrier” once referred only to public transportation services like railroads and airlines, the definition now encompasses communications services such as radio and telephone companies.[5] Common carriers are subject to greater regulation, including anti-discrimination regulations, due to their market domination of a necessary public service.[6] For example, given our reliance on airlines and telephone companies to perform necessary services, common carrier regulations ensure that an airline cannot decline to sell tickets to passengers because of their religious beliefs and that a cellular network cannot bar service to customers because it disapproves of the content of their phone conversations. If social media companies are held to be common carriers, the federal government and the state governments could impose regulations on what content those companies restrict.

Moody stems from state efforts to do just that. The Florida legislature passed State Bill 7072 to curtail what it saw as social media censorship of conservative voices.[7] The Florida law allows for significant fines against social media companies that engage in “unfair censorship” or “deplatform” political candidates, as X (formerly Twitter) did when it removed former President Trump from its platform for falsely claiming that the 2020 election was stolen.[8] Florida is not the only state to pursue a common carrier designation for social media. Texas passed a similar law in 2021 (which is currently enjoined by NetChoice, LLC v. Paxton and will be addressed alongside Moody), and the attorney general of Ohio has sued Google, asking the court to declare that Google is a common carrier in order to prevent the company from prioritizing its own products in search results.[9] Ohio v. Google LLC is ongoing, and while the judge partially granted Google’s motion to dismiss, he found that Ohio’s claim that Google is a common carrier is cognizable.[10] Given the increasing propensity with which states are attempting to regulate social media, the Supreme Court’s ruling is necessary to settle this vital issue.

Supporters of classifying social media companies as common carriers argue that social media is simply the most recent advancement in communication and should accordingly be designated a common carrier, just as telephone operators and cellular networks are. They explain that designating social media companies as common carriers is actually consistent with the broad protections of § 230, as regulating speech on a social media site regulates the speech of users, not the speech of the company.[11]

However, they ignore that social media companies rely on First Amendment and § 230 protections when they curate the content on their sites. Without the ability to promote or suppress posts and users, these companies would not be able to provide the personalized content that attracts users, and social media would likely become an even greater hotbed of misinformation and hate speech than it already is. The purpose of § 230 is to encourage the development of a thriving online community, which is why Congress chose to shield internet services from liability for content. Treating social media companies as common carriers would stifle that aim.

It is unclear how the Court will rule. In his concurrence in Biden v. Knight First Amend. Inst., Justice Thomas indicated he may be willing to consider social media companies as common carriers.[12] The other justices have yet to write or comment on this issue, but whatever their decision may be, the ramifications of this case will be significant. The conservative politicians behind the Florida and Texas laws have specifically decried what they argue is partisan censorship of conservative views about the Covid-19 pandemic and the 2020 election, yet these very complaints demonstrate the need for social media companies to exercise editorial control over their content. Covid-19 misinformation unquestionably led to unnecessary deaths during the pandemic.[13] Misinformation about the 2020 election led to a violent attempted overthrow of our government. These threats of violence and dangerous misinformation are the harms that Congress created § 230 to avoid. Without the ability of social media companies to curate content, social media will assuredly contain more racism, misinformation, and calls for violence. Given the omnipresence of social media in our modern world, our reliance on it for communication, and the misinformation it spreads, few would argue that social media does not need some form of regulation. But if the Court allows the Florida and Texas laws implicated in Moody and NetChoice to stand, it will pave the way for a patchwork of laws in every state that may render social media unworkable.

Notes

[1] See 47 U.S.C. § 230.

[2] 47 U.S.C. §230(c)(2)(A).

[3] Moody v. Netchoice, LLC, SCOTUSblog, https://www.scotusblog.com/case-files/cases/moody-v-netchoice-llc/.

[4] Alison Frankel, Are Internet Companies ‘Common Carriers’ of Content? Courts Diverge on Key Question, REUTERS, (May 31, 2022, 5:52 PM), https://www.reuters.com/legal/transactional/are-internet-companies-common-carriers-content-courts-diverge-key-question-2022-05-31/.

[5] Id.

[6] Id.

[7] David Savage, Supreme Court Will Decide if Texas and Florida Can Regulate Social Media to Protect ‘Conservative Speech’, LA TIMES (Sept. 29, 2023, 8:33 AM), https://www.msn.com/en-us/news/us/supreme-court-will-decide-if-texas-and-florida-can-regulate-social-media-to-protect-conservative-speech/ar-AA1hrE2s.

[8] Id.

[9] AG Yost Files Landmark Lawsuit to Declare Google a Public Utility, OHIO ATTORNEY GENERAL’S OFFICE (June 8, 2021), https://www.ohioattorneygeneral.gov/Media/News-Releases/June-2021/AG-Yost-Files-Landmark-Lawsuit-to-Declare-Google-a.

[10] Ohio v. Google LLC, No. 21-CV-H-06-0274 (Ohio Misc. 2022), https://fingfx.thomsonreuters.com/gfx/legaldocs/gdpzyeakzvw/frankel-socialmediacommoncarrier–ohioruling.pdf.

[11] John Villasenor, Social Media Companies and Common Carrier Status: A Primer, BROOKINGS INST. (Oct. 27, 2022), https://www.brookings.edu/articles/social-media-companies-and-common-carrier-status-a-primer/.

[12] Biden v. Knight First Amend. Inst., 141 S. Ct. 1220 (2021),  https://www.law.cornell.edu/supremecourt/text/20-197.

[13] Alistair Coleman, ’Hundreds Dead’ Because of Covid-19 Misinformation, BBC (Aug. 12, 2020), https://www.bbc.com/news/world-53755067.


Are Social Media Empires Liable for “Terror” Organizations?

Ray Mestad, MJLST Staffer

The practicality, ease of use, and sheer addictiveness of social media have led to its massive expansion around the world. Approximately 65% of the world uses the internet, and of that group, only 5% does not use social media.[1] So 60% of the world is on social media, around 4.76 billion people.[2] For most, social media is one of the simplest ways to stay connected and communicate with friends, family, and other people in their circle. But along with the growing use of social media, questions have been raised regarding the potential liability social media corporations may have for the content that is posted on their platforms. Recently, lawsuits have been filed against companies like Google, Twitter, and Facebook for allegedly allowing groups accused of terrorism to spread their message or plan attacks on their platforms.[3] The question we are left with is: to what extent are social media companies responsible for posts on their sites that lead to violence?

The family of Nohemi Gonzalez, an American student killed in Paris during a 2015 Islamic State attack, is suing Google for platforming the Islamic State by allowing it to post videos on YouTube and then recommending those videos to users through Google’s algorithm.[4] And the family of Nawras Alassaf, a Jordanian citizen killed in a 2017 Istanbul Islamic State attack, is suing Twitter, Google, and Facebook for not doing more to prevent the organization from using their platforms as communications and messaging tools.[5] Gonzalez v. Google and Twitter v. Taamneh will both be argued before the Supreme Court this month, February 2023.[6]

The legal issues in these cases are rooted in Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996.[7] 47 U.S.C. 230 intends to protect freedom of expression by protecting intermediaries that publish information posted by users.[8] Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] This protects web platforms from liability for the content that users post.

Further, Section 230(c)(2) states that “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[10] This is known as the “Good Samaritan” provision. Like 230(c)(1), Section 230(c)(2) gives internet providers liability protection, allowing them to moderate content in certain circumstances and then providing a safeguard from the free speech claims that would be made against them.[11]

The question is whether defendant social media platforms should be shielded from liability for platforming content that has allegedly led to or facilitated violent attacks. In Gonzalez, the Justice Department stated that although the company is protected against claims for hosting ISIS videos, a claim may be brought against Google for YouTube’s targeted recommendations of those videos.[12] And in Taamneh, the 9th Circuit agreed with the plaintiffs that there was room for the claim to go forward under the Anti-Terrorism Act because Twitter had generalized knowledge of the Islamic State’s use of its services.[13]

Section 230 has drawn an eclectic mix of critics and supporters. For example, although many conservatives and free speech advocates support the protections of Section 230, some conservatives oppose the provision due to perceived restriction of conservative viewpoints on social media platforms. Prominent Republican Senator Josh Hawley of Missouri has come out against Section 230, stating that the tech platforms ought to be treated as distributors and lose Section 230 protections.[14] In fact, Hawley introduced a piece of legislation opposing Section 230, the Federal Big Tech Tort Act, to impose liability on tech platforms.[15] And on the left, Section 230 is supported by those who believe the voices of the marginalized are protected by it and would otherwise be at the whim of tech companies, but opposed by those who fear that it enables political violence and hate speech.[16]

The Supreme Court has now granted certiorari in both Gonzalez and Taamneh. In Gonzalez, the plaintiffs argue that Section 230 should not protect Google because the events occurred outside the US, because Section 230 is preempted by the Justice Against Sponsors of Terrorism Act (JASTA), and because the algorithmic recommendations transform Google/YouTube from an interactive computer service into an information content provider.[17] Google argues that it should be protected by Section 230, particularly 230(c)(1).[18] The 9th Circuit held that although Section 230 applies abroad, JASTA does not supersede it; instead, the two statutes run parallel to each other. The 9th Circuit further held that the claims based on revenue sharing (rather than ad targeting) should be dismissed. It did not think Google was contributing to terrorism, because Google was motivated by financial enrichment rather than ideology, and it affirmed the dismissal, partially because there was not clear enough information about how much support Google had provided to ISIS.[19] Future decisions in this case will implicate questions like whether algorithmic recommendations fall within Section 230.[20]

In Taamneh, the defendants argued that there was no proximate cause, as well as that Section 230 was inapplicable.[21] Unlike in Gonzalez, the Taamneh plaintiffs had adequately stated a claim for aiding and abetting because the social media companies had more explicit knowledge of how their platforms were being used by these groups. The Taamneh dismissal was reversed. The Supreme Court’s review of this case will have implications for what it means to support or have a relationship with a group via a social media platform. In both of these cases, fears regarding the scope of Section 230 have been expressed, which could reflect poorly on its applicability going forward.[24]

Gonzalez and Taamneh will reach the Supreme Court soon. If Section 230 is restricted, the change could enable greater free speech but risks exposing more people to harms like hate speech or violence. However, if Section 230 is preserved as is, it could restrict the accessibility and openness that have made the internet what it is today. Whichever decision is made, there will be massive implications for what the internet looks like in the future.

Notes

[1] https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/#:~:text=The%20number%20of%20social%20media,growth%20of%20%2B137%20million%20users.

[2] Id.

[3] https://apnews.com/article/islamic-state-group-us-supreme-court-technology-france-social-media-6bee9b5adf33dd15ee64b0d4d4e5ec78

[4] Id.

[5] Id.

[6] https://www.washingtonpost.com/politics/2023/01/03/2023-is-poised-be-landmark-year-tech-legal-bouts/

[7] https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

[8] https://www.eff.org/issues/cda230

[9] https://casetext.com/statute/united-states-code/title-47-telecommunications/chapter-5-wire-or-radio-communication/subchapter-ii-common-carriers/part-i-common-carrier-regulation/section-230-protection-for-private-blocking-and-screening-of-offensive-material

[10] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[11] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[12] https://www.washingtonpost.com/politics/2022/12/09/tech-critics-urge-supreme-court-narrow-section-230/

[13] https://knightcolumbia.org/blog/twitter-v-taamneh-in-the-supreme-court-whats-at-stake

[14] Supra Washington Post.

[15] https://www.hawley.senate.gov/hawley-files-gonzalez-v-google-amicus-brief-supreme-court-challenging-big-techs-section-230

[16] Supra Washington Post.

[17] https://www.lawfareblog.com/supreme-court-grants-certiorari-gonzalez-v-google-and-twitter-v-taamneh-overview

[18] Id.

[19] Id.

[20]

[21] Id.

[22] Id.

[23] Id.

[24] Id.


Freedom to Moderate? Circuits Split Over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws that restrict social media platforms’ ability to moderate posts expressing “viewpoints” and require platforms to explain why they chose to censor certain content. These laws seemingly stem from the perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional for violating social media platforms’ First Amendment rights in May, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning the previous injunction made by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies that ban certain content, or at least require a sensitivity warning to be posted before viewing certain content. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed in the controversial 2010 Supreme Court decision Citizens United, which established the right of corporations to make political donations as a demonstration of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through the establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected arguments challenging social media platforms’ ability to set and uphold their own content guidelines, upholding social media platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media’s place as private businesses which hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence. 

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiff’s argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also invoked the common carrier doctrine—which empowers states to enforce nondiscriminatory practices for services that the public uses en masse (a classification that the 11th Circuit explicitly rejected)—embracing it in the context of social media platforms.[5] The court therefore held with “no doubts” that section 7 of the Texas law—which prevents platforms from censoring “viewpoints” (with exceptions for blatantly illegal speech provoking violence, etc.) of users—was constitutional.[6] Section 2 of the contested statute, requiring social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest and not being unduly burdensome to the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

Florida, on September 21st, 2022, petitioned for a writ of certiorari asking the Supreme Court to review the May 2022 decision. The petition included reference to the 5th Circuit opinion, calling for the Supreme Court to weigh in on the circuit split. Considering recent Supreme Court decisions cutting down Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these Circuit Court decisions were Republican proposed bills, a Supreme Court decision would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would impact both Republican and Democratic legislatures’ ability to regulate social media platforms in any way.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] Netchoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sep. 16, 2022).

[5] Id. at 59.

[6] Id. at 52.

[7]  Id. at 102.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report it to the police and hold the criminal accountable. When someone is wronged, they can seek retribution in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. However, what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those involved in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it will not be abused.

The Metaverse is a term that has recently gained a lot of attention, although by no means is the concept new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment which takes a person out of the real world. On the other hand, augmented reality uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think about the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is PokemonGo, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse to look like. The game, Second Life, is a simulation that allows users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities that the Metaverse will bring in the future, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person assaulted in a virtual world would feel as though they actually experienced the assault in real life. This should be raising red flags. However, the problem arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this is something that will need to be addressed, as there need to be laws that prevent this kind of behavior from happening. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse due to users’ ability to mask their identities and remain anonymous. Therefore, it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or any rules regarding conduct in Horizon Worlds. However, the problem remains how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place in order to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice cites to teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate, algorithm-free versions of their platforms for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents note that some platforms actually use algorithms to present appropriate content to minors. TikTok, for example, has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


Holy Crap: The First Amendment, Septic Systems, and the Strict Scrutiny Standard in Land Use Law

Sarah Bauer, MJLST Staffer

In the Summer of 2021, the U.S. Supreme Court released a bevy of decisions favoring religious freedom. Among these was Mast v. Fillmore County, a case about, well, septic systems and the First Amendment. But Mast is about so much more than that: it showcases the Court’s commitment to free exercise in a variety of contexts and Justice Gorsuch as a champion of Western sensibilities. It also demonstrates that, moving forward, the government is going to need to work harder to show that its compelling interest in land use regulation trumps an individual’s free exercise rights.

The Facts of Mast

To understand how septic systems and the First Amendment can even exist in the same sentence, it’s important to know the facts of Mast. In the state of Minnesota, the Pollution Control Agency (MPCA) is responsible for maintaining water quality. It promulgates regulations accordingly, then local governments adopt those regulations into ordinances. Among those are prescriptive regulations about wastewater treatment. At issue is one such ordinance adopted by Fillmore County, Minnesota, that requires most homes to have a modern septic system for the disposal of gray water.

The plaintiffs in the case are Swartzentruber Amish. They sought a religious exemption from the ordinance, saying that their religion forbade the use of that technology. The MPCA instead demanded the installation of the modern system under threat of criminal penalty, civil fines, and eviction from their farms. When the MPCA rejected a low-tech alternative offered by the plaintiffs, a mulch basin system not uncommon in other states, the Amish sought relief on grounds that the ordinance violated the Religious Land Use and Institutionalized Persons Act (RLUIPA). After losing the battle in state courts, the Mast plaintiffs took it to the Supreme Court, where the case was decided in their favor last summer.

The First Amendment and Strict Scrutiny

Mast’s issue is a land use remix of Fulton v. City of Philadelphia, another free exercise case decided the same term. Fulton, the more controversial and well-known of the two, involved the City of Philadelphia’s decision to discontinue contracts with Catholic Social Services (CSS) for placement of children in foster homes. The City said that CSS’s refusal to place children with same-sex couples violated a non-discrimination provision in both the contract and the citywide Fair Practices Ordinance. The Supreme Court didn’t buy it, holding instead that the City’s policy impermissibly burdened CSS’s free exercise of religion.

The Fulton decision was important for refining the legal analysis and standards when a law burdens free exercise of religion. First, if a law incidentally burdens religion but is both 1) neutral and 2) generally applicable, then courts will not ordinarily apply a strict scrutiny standard on review. If one of those elements is not met, courts will apply strict scrutiny, and the government will need to show that the law 1) advances a compelling interest and 2) is narrowly tailored to achieve those interests. The trick to strict scrutiny is this: the government’s compelling interest in denying an exception needs to apply specifically to those requesting the religious exception. A law examined under strict scrutiny will not survive if the State only asserts that it has a compelling interest in enforcing its laws generally.

Strict Scrutiny, RLUIPA, and Mast

The Mast Plaintiffs sought relief under RLUIPA. RLUIPA isn’t just a contender for Congress’s “Most Difficult to Pronounce Acronym” Award. It’s a choice legal weapon for those claiming that a land use regulation restricts free exercise of religion. The strict scrutiny standard is built into RLUIPA, meaning that courts skip straight to the question of whether 1) the government had a compelling government interest, and 2) whether the rule was the least restrictive means of furthering that compelling government interest. And now, post-Fulton, that first inquiry involves looking at whether the government had a compelling interest in denying an exception specifically as it applies to plaintiffs.

So that is how we end up with septic systems and the First Amendment in the same case. The Amish sued under RLUIPA, the Court applied strict scrutiny, and the government failed to show that it had a compelling interest in denying the Amish an exception to the rule that they install a septic system for their gray water. Particularly convincing, at least from Coloradan Justice Gorsuch’s perspective, were the facts that 1) Minnesota law allowed exemptions for campers and outdoorsmen, 2) other jurisdictions allowed gray water disposal in the same alternative manner the plaintiffs proposed, and 3) the government couldn’t show that the alternative method wouldn’t effectively filter the water.

So what does this ultimately mean for land use regulation? It means that in the niche area of RLUIPA litigation, religious groups have a stronger strict scrutiny standard to lean on, forcing governments to present more evidence justifying a refusal to extend religious exemptions. And government can’t bypass the standard by making regulations more “generally applicable,” for example by removing exemptions for campers. Strict scrutiny still applies under RLUIPA, and governments are stuck with it, resulting in a possible windfall of exceptions for the religious.


The First Amendment and Trademarks: Are Offensive Trademarks Registrable?

Kelly Brandenburg, MJLST Staffer 

The Lanham Act, the federal statute that governs trademarks, had a disparagement clause that prohibited the registration of a trademark “which may disparage . . . persons, living or dead, institutions, beliefs, or national symbols, or bring them into contempt, or disrepute.” This provision was the focal issue in several cases over the years before finally reaching the Supreme Court of the United States (“SCOTUS”), which held the clause unconstitutional. In that case, Matal v. Tam, an Asian-American dance-rock band with the name “The Slants” was originally denied registration of its name because “slant” is a derogatory term for people of Asian descent. The Court found that the disparagement clause violated the Free Speech Clause of the First Amendment, running afoul of the basic principle that “speech may not be banned on the ground that it expresses ideas that offend.”

This decision had a significant impact on a well-known case involving the Washington Redskins. The team had six trademarks that were cancelled by the Trademark Office in 2014, but after the Matal decision, the U.S. Court of Appeals for the Fourth Circuit vacated that prior decision, since the disparagement clause was the basis of the Native American challengers’ argument for revoking the Redskins registrations.

Currently, a case raising a closely related question is awaiting a Supreme Court hearing. In re Brunetti involves a trademark for the word “Fuct,” the name of a clothing brand. The Trademark Trial and Appeal Board found the word to be “vulgar” and refused registration under the immoral or scandalous provision of the same statute at issue in Matal. On appeal, the Federal Circuit reversed. Matal was discussed as support for holding the immoral or scandalous clause unconstitutional, and the court ultimately determined that the clause “impermissibly discriminates based on content in violation of the First Amendment,” so the refusal could not stand. SCOTUS has since granted certiorari and may now have to address the constitutionality of the immoral or scandalous provision head-on. One argument made at the Federal Circuit was that the clause, unlike the disparagement clause, is “viewpoint neutral” and therefore constitutional. That distinction was not the basis of the Federal Circuit’s decision, but it could be addressed at the upcoming SCOTUS hearing. If so, will SCOTUS find enough of a difference between the disparagement clause and the immoral or scandalous clause to uphold it, or will the same free speech problems doom it as well? Oral argument is scheduled for April 15, 2019, so stay tuned!


E-Threat: Imminent Danger in the Information Age

Jacob Weindling, MJLST Staffer


One of the basic guarantees of the First Amendment is the right to free speech. This right protects the individual from restrictions on speech by the government, but it is often invoked as a rhetorical weapon against private individuals or organizations that decline to publish another’s words. On the internet, these organizations include some of the most popular discussion platforms in the U.S., including Facebook, Reddit, Yahoo, and Twitter. A key feature of these organizations is their lack of government control. As recently as 2017, the Supreme Court identified First Amendment grounds for overturning prohibitions on social media access. Indeed, one of the few major government prohibitions on speech currently in force is the ban on child pornography. Violent rhetoric, meanwhile, continues to fall under the constitutional protections identified by the Court.

Historically, the Supreme Court has taken a nuanced view of violent speech as it relates to the First Amendment. The Court held in Brandenburg v. Ohio that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Contrast this with Noto v. United States, in which the Court held that abstract discussion of a moral responsibility to resort to violence is distinct from preparing a group for imminent violent acts.

With the rise and maturation of the internet, public discourse has entered relatively uncharted territory that the Supreme Court would have been hard-pressed to anticipate at the time of the Brandenburg and Noto decisions. Where geography once isolated Neo-Nazi groups and the Ku Klux Klan into small local chapters, the internet now provides a centralized meeting place for the dissemination and discussion of violent rhetoric. Historically, the Supreme Court concerned itself mightily with the distinction between an imminent call to action and a general discussion of moral imperatives, drawing clear delineations between the two.

The context of the Brandenburg decision was a pre-information-age telecommunications regime. While large amounts of information could be transmitted around the world in relatively short order thanks to the development of international commercial air travel, real-time communication was generally limited to telephone conversations between two individuals. An imminent call to action would require substantial real-world logistics, meetings, and preparation, all of which provide significant opportunities for detection and disruption by law enforcement. By comparison, internet forums today provide near-instant communication among large groups of individuals across the entire world, likely narrowing the window that law enforcement would have to identify and act upon a credible, imminent threat.

At what point does Islamic State recruitment or militant Neo-Nazi organizing on the internet rise to the level of an imminent threat? The Supreme Court has not yet decided the issue, but many internet businesses have recently begun to take matters into their own hands. Facebook and YouTube have reportedly become more active in policing Islamic State propaganda, while Reddit has taken steps to remove communities that advocate for rape and violence. Consequently, while the Supreme Court has not yet elected to draw (or redraw) a bright red line in the internet age, many businesses appear to be taking the first steps to draw the line themselves, on their own terms.