Social Media

A Nation of Misinformation? The Attack on the Government’s Efforts to Stop Social Media Misinformation

Alex Mastorides, MJLST Staffer

Whether and how misinformation on social media can be curtailed has long been the subject of public debate. This debate has increasingly gained momentum since the beginning of the COVID-19 pandemic, at a time when uncertainty was the norm and people across the nation scrambled for information to help them stay safe. Misinformation regarding things like the origin of the pandemic, the treatment that should be administered to COVID-positive people, and the safety of the vaccine has been widely disseminated via social media platforms like TikTok, Facebook, Instagram, and X (formerly known as Twitter). The federal government under the Biden Administration has sought to curtail this wave of misinformation, characterizing it as a threat to public health. However, many have accused it of unconstitutional acts of censorship in violation of the First Amendment.

The government cannot directly interfere with the content posted on social media platforms; the right to moderate content belongs to the private companies that own the platforms. Instead, the government’s approach has been to communicate with social media companies, encouraging them to address misinformation promulgated on their sites. Per the Biden Administration: “The President’s view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections.”[1]

Lower Courts Have Ruled That the Government May Not Communicate with Social Media Companies for Purposes of Curtailing Online Misinformation

The case of Murthy v. Missouri may result in further clarity from the Supreme Court regarding the powers of the federal government to combat misinformation on social media platforms. The case began in the United States District Court for the Western District of Louisiana when two states–Missouri and Louisiana–along with several private parties filed suit against numerous federal government entities, including the White House and agencies such as the Federal Bureau of Investigation, the Centers for Disease Control & Prevention, and the Cybersecurity & Infrastructure Security Agency.[2] These entities have repeatedly communicated with social media companies, allegedly encouraging them to remove or censor the plaintiffs’ online content due to misinformation about the COVID-19 pandemic (including content discussing “the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story.”)[3] The plaintiffs allege that these government entities “‘coerced, threatened, and pressured [the] social-media platforms to censor [them]’ through private communications and legal threats” in violation of the plaintiffs’ First Amendment rights.[4]

The District Court agreed with the plaintiffs, issuing a preliminary injunction on July 4, 2023 that greatly restricted the entities’ ability to contact social media companies (especially with regard to misinformation).[5] This approach was predicated on the idea that government communications with social media companies about misinformation on their platforms are essentially coercive, forcing the companies to censor speech at the government’s demand. The injunction was appealed to the Fifth Circuit, which narrowed the injunction’s scope to just the White House, the Surgeon General’s office, and the FBI.[6]

Following the Fifth Circuit’s ruling on the preliminary injunction, the government parties to the Murthy case applied for a stay of the injunction with the United States Supreme Court.[7] The government further requested that the Court grant certiorari with regard to the questions presented by the injunction. The government attacked the injunction on three grounds. The first is that the plaintiffs did not have standing to sue under Article III because they did not show that the censoring effect on their posts was “fairly traceable” to the government or “redressable by injunctive relief.”[8]

The second argument is that the conduct at issue does not constitute a First Amendment free speech violation.[9] This claim is based on the state action doctrine, which outlines the circumstances in which the decisions of private entities are considered to be “state action.” If a private social media company’s decisions to moderate content are sufficiently “coerced” by the government, the law treats those decisions as if they were made by the government directly.[10] In that situation, the First Amendment would apply.[11] The Supreme Court has advocated for a strict evaluation of what kind of conduct might be considered “coercive” under this doctrine in an effort to avoid infringing upon the rights of private companies to moderate speech on their platforms.[12] The government’s Application for Stay argues that the Fifth Circuit’s decision is an overly broad application of the doctrine in light of the government’s conduct.[13]

Third, the government maintains that the preliminary injunction is overly broad because it “covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics.”[14]

The Supreme Court Granted the Requested Stay and Granted Certiorari Regarding Three Key Questions

The Supreme Court granted the government’s request for a stay on the preliminary injunction. The Court simultaneously granted certiorari with respect to the questions posed in the government’s Application for Stay: “(1) Whether respondents have Article III standing; (2) Whether the government’s challenged conduct transformed private social-media companies’ content-moderation decisions into state action and violated respondents’ First Amendment rights; and (3) Whether the terms and breadth of the preliminary injunction are proper.”[15]

The Court gave no explanation for its grant of the application for stay or its grant of certiorari. However, Justice Alito, joined by Justice Thomas and Justice Gorsuch, dissented from the grant of the application for stay, arguing that the government had not shown a likelihood that denial of a stay would result in irreparable harm.[16] He contends that the government’s argument about irreparable harm rests on hypotheticals rather than actual “concrete” proof that harm is imminent.[17] The dissent further displays a disapproving attitude toward the government’s actions on social media misinformation: “At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”[18]

Justice Alito noted in his dissent that the completion of the Court’s review of the case may not come until spring of next year.[19] The stay on the preliminary injunction will hold until that time.

Notes

[1] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[2] Missouri v. Biden, 83 F.4th 350, 359 (5th Cir. 2023).

[3] Id. at 359.

[4] Id. at 359-60.

[5] Id. at 360.

[6] Id.

[7] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[8] Id. at 2.

[9] Id. at 3.

[10] Id. at 10.

[11] Id.

[12] Id. at 4 (citing Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019)).

[13] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[14] Id. at 5.

[15] Murthy v. Missouri, No. 23A243 (23-411) (Oct. 20, 2023) (order granting stay and certiorari).

[16] On Application for Stay at 3, Murthy v. Missouri, No. 23A243 (23-411) (October 20, 2023) (Alito, J. dissenting) (citing Hollingsworth v. Perry, 558 U.S. 183, 190 (2010)).

[17] Id. at 3-4.

[18] Id. at 5.

[19] Id. at 2.


Will Moody v. NetChoice, LLC End Social Media?

Aidan Vogelson, MJLST Staffer

At first, the concept that social media’s days may be numbered seems outlandish. Billions of people use social media every day and, historically, social media companies and other internet services have enjoyed virtually unfettered editorial control over how they manage their services. This freedom stems from 47 U.S.C. § 230.[1] Section 230 shields providers from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[2] In other words, if someone makes an obscene post on Facebook and Facebook removes the post, Facebook cannot be held liable for any violation of protected speech. Section 230 has long allowed social media companies to self-regulate by removing posts that violate their terms of service, but on September 29, the Supreme Court granted a writ of certiorari in Moody v. NetChoice, LLC, a case that may fundamentally change how social media companies operate by allowing state and federal governments to regulate around their § 230 protections.

At issue in Moody is whether the methods social media companies use to moderate their content are permissible under the First Amendment and whether social media companies may be classified as common carriers.[3] Common carriers are services that hold themselves open to the public and transport people or goods.[4] While the term “common carrier” once referred only to public transportation services like railroads and airlines, the definition now encompasses communications services such as radio and telephone companies.[5] Common carriers are subject to greater regulation, including anti-discrimination regulations, due to their market domination of a necessary public service.[6] For example, given our reliance on airlines and telephone companies to perform necessary services, common carrier regulations ensure that an airline cannot decline to sell tickets to passengers because of their religious beliefs and a cellular network cannot bar service to customers because it disapproves of the content of their phone conversations. If social media companies are held to be common carriers, the federal and state governments could impose regulations on what content those companies restrict.

Moody stems from state efforts to do just that. The Florida legislature passed State Bill 7072 to curtail what it saw as social media censorship of conservative voices.[7] The Florida law allows significant fines against social media companies that engage in “unfair censorship” or “deplatform” political candidates, as X (formerly Twitter) did when it removed former President Trump from its platform for falsely claiming that the 2020 election was stolen.[8] Florida is not the only state to pursue a common carrier designation for social media. Texas passed a similar law in 2021 (currently enjoined in NetChoice, LLC v. Paxton, which will be addressed alongside Moody), and the attorney general of Ohio has sued Google, asking the court to declare that Google is a common carrier in order to prevent the company from prioritizing its own products in search results.[9] Ohio v. Google LLC is ongoing, and while the judge partially granted Google’s motion to dismiss, he found that Ohio’s claim that Google is a common carrier is cognizable.[10] Given states’ increasing attempts to regulate social media, the Supreme Court’s ruling is necessary to settle this vital issue.

Supporters of classifying social media companies as common carriers argue that social media is simply the most recent advancement in communication and should accordingly be designated a common carrier, just as telephone operators and cellular networks are. They explain that designating social media companies as common carriers is actually consistent with the broad protections of § 230, as regulating speech on a social media site regulates the speech of users, not the speech of the company.[11]

However, they ignore that social media companies rely on First Amendment and § 230 protections when they curate the content on their sites. Without the ability to promote or suppress posts and users, these companies would not be able to provide the personalized content that attracts users, and social media would likely become an even greater hotbed of misinformation and hate speech than it already is. The purpose of § 230 is to encourage the development of a thriving online community, which is why Congress chose to shield internet services from liability for content. Treating social media companies as common carriers would stifle that aim.

It is unclear how the Court will rule. In his concurrence in Biden v. Knight First Amend. Inst., Justice Thomas indicated he may be willing to consider social media companies as common carriers.[12] The other justices have yet to write or comment on this issue, but whatever their decision may be, the ramifications of this case will be significant. The conservative politicians behind the Florida and Texas laws have specifically decried what they argue is partisan censorship of conservative views about the Covid-19 pandemic and the 2020 election, yet these very complaints demonstrate the need for social media companies to exercise editorial control over their content. Covid-19 misinformation unquestionably led to unnecessary deaths during the pandemic.[13] Misinformation about the 2020 election led to a violent attempted overthrow of our government. These threats of violence and dangerous misinformation are the harms that Congress created § 230 to avoid. Without the ability of social media companies to curate content, social media will assuredly contain more racism, misinformation, and calls for violence. Given the omnipresence of social media in our modern world, our reliance on it for communication, and the misinformation it spreads, few would argue that social media does not need some form of regulation. But if the Court allows the Florida and Texas laws implicated in Moody and NetChoice to stand, it will pave the way for a patchwork of laws in every state that may render social media unworkable.

Notes

[1] See 47 U.S.C. § 230.

[2] 47 U.S.C. § 230(c)(2)(A).

[3] Moody v. NetChoice, LLC, SCOTUSblog, https://www.scotusblog.com/case-files/cases/moody-v-netchoice-llc/.

[4] Alison Frankel, Are Internet Companies ‘Common Carriers’ of Content? Courts Diverge on Key Question, REUTERS, (May 31, 2022, 5:52 PM), https://www.reuters.com/legal/transactional/are-internet-companies-common-carriers-content-courts-diverge-key-question-2022-05-31/.

[5] Id.

[6] Id.

[7] David Savage, Supreme Court Will Decide if Texas and Florida Can Regulate Social Media to Protect ‘Conservative Speech’, LA TIMES (Sept. 29, 2023, 8:33 AM), https://www.msn.com/en-us/news/us/supreme-court-will-decide-if-texas-and-florida-can-regulate-social-media-to-protect-conservative-speech/ar-AA1hrE2s.

[8] Id.

[9] AG Yost Files Landmark Lawsuit to Declare Google a Public Utility, OHIO ATTORNEY GENERAL’S OFFICE (June 8, 2021), https://www.ohioattorneygeneral.gov/Media/News-Releases/June-2021/AG-Yost-Files-Landmark-Lawsuit-to-Declare-Google-a.

[10] Ohio v. Google LLC, No. 21-CV-H-06-0274 (Ohio Misc. 2022), https://fingfx.thomsonreuters.com/gfx/legaldocs/gdpzyeakzvw/frankel-socialmediacommoncarrier–ohioruling.pdf.

[11] John Villasenor, Social Media Companies and Common Carrier Status: A Primer, BROOKINGS INST. (Oct. 27, 2022), https://www.brookings.edu/articles/social-media-companies-and-common-carrier-status-a-primer/.

[12] Biden v. Knight First Amend. Inst., 141 S. Ct. 1220 (2021),  https://www.law.cornell.edu/supremecourt/text/20-197.

[13] Alistair Coleman, ’Hundreds Dead’ Because of Covid-19 Misinformation, BBC (Aug. 12, 2020), https://www.bbc.com/news/world-53755067.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as a violation of First Amendment rights and as unnecessary, given private companies’ incentives to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations, while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed”.[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as recollection of the deepfake imagery can be difficult for viewers to dissociate from the victim. 

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who post anonymously. Proposed federal regulation suggests a requirement that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements.”[13] However, opponents view this as useless legislation. Deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first place.

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X implementing “Community Notes”, which allows videos created using deepfake technology to remain on the platform, but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


Are Social Media Empires Liable for “Terror” Organizations?

Ray Mestad, MJLST Staffer

The practicality, ease of use, and sheer addictiveness of social media has led to its massive expansion around the world. Approximately 65% of the world uses the internet, and of that group, only 5% does not use social media.[1] So roughly 60% of the world is on social media, around 4.76 billion people.[2] For most, social media is one of the simplest ways to stay connected and communicate with friends, family, and other people in their circle. But along with the growing use of social media, questions have been raised regarding the potential liability social media corporations may have for the content that is posted on their platforms. Recently, lawsuits have been filed against companies like Google, Twitter, and Facebook for allegedly allowing groups accused of terrorism to spread their message or plan on their platforms.[3] The question we are left with is this: to what extent are social media companies responsible for posts on their sites that lead to violence?

The family of Nohemi Gonzalez, an American student killed in Paris during a 2015 Islamic State attack, is suing Google for platforming the Islamic State by allowing it to post videos on YouTube and then recommending those videos to users through Google’s algorithm.[4] And the family of Nawras Alassaf, a Jordanian citizen killed in a 2017 Istanbul Islamic State attack, is suing Twitter, Google, and Facebook for not doing more to prevent the organization from using their platforms as communications and messaging tools.[5] Gonzalez v. Google and Twitter v. Taamneh will both be argued before the Supreme Court this month, February 2023.[6]

The legal issues in these cases are rooted in Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996.[7] 47 U.S.C. § 230 is intended to protect freedom of expression by shielding intermediaries that publish information posted by users.[8] Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] This protects web platforms from liability for the content that users post.

Further, Section 230(c)(2) states that “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[10] This is known as the “Good Samaritan” provision. Like 230(c)(1), Section 230(c)(2) gives internet providers liability protection, allowing them to moderate content in certain circumstances and then providing a safeguard from the free speech claims that would be made against them.[11]

The question is whether defendant social media platforms should be shielded from liability for platforming content that has allegedly led to or facilitated violent attacks. In Gonzalez, the Justice Department stated that although the company is protected against claims for hosting ISIS videos, a claim may proceed against Google for allowing YouTube to provide targeted recommendations of the videos.[12] And in Taamneh, the 9th Circuit agreed with the plaintiffs that there was room for the claim to go forward under the Anti-Terrorism Act because Twitter had generalized knowledge of the Islamic State’s use of its services.[13]

Section 230 has drawn an eclectic mix of critics and supporters. Although many conservatives and free speech advocates support the protections of Section 230, some conservatives oppose the provision due to perceived restriction of conservative viewpoints on social media platforms. For example, Senator Josh Hawley of Missouri has come out against Section 230, stating that tech platforms ought to be treated as distributors and lose their protections.[14] In fact, Hawley introduced a piece of legislation opposing Section 230 called the Federal Big Tech Tort Act to impose liability on tech platforms.[15] And on the left, Section 230 is supported by those who believe it protects the voices of the marginalized, who would otherwise be at the whim of tech companies, but opposed by people who fear that it enables political violence and hate speech.[16]

The Supreme Court has now granted certiorari in both Gonzalez and Taamneh. In Gonzalez, the plaintiffs argue that Section 230 should not protect Google’s actions because the events occurred outside the US, Section 230 is preempted by the Justice Against Sponsors of Terrorism Act (JASTA), and the algorithmic recommendations transform Google/YouTube from an interactive computer service into an information content provider.[17] Google argues that it should be protected by Section 230, particularly 230(c)(1).[18] The 9th Circuit stated that although Section 230 did apply abroad, JASTA should not supersede it; instead, the two statutes should run parallel to each other. The 9th Circuit further held that the claims based on revenue sharing (rather than ad targeting) should be dismissed. It did not think Google was contributing to terrorism, because Google was motivated by financial enrichment rather than ideology, and it affirmed the dismissal, partially because there was not clear enough information about how much support Google had provided to ISIS.[19] Future decisions in this case will implicate questions like whether algorithmic recommendations fall within Section 230.[20]

In Taamneh, the defendants argued that there was no proximate cause, as well as that Section 230 was inapplicable.[21] Unlike in Gonzalez, the Taamneh plaintiffs had adequately stated a claim for aiding and abetting because the social media companies had more explicit knowledge of how their platforms were being used by these groups, and the dismissal was reversed. The Supreme Court’s review of this case will have implications for what it means to support or have a relationship with a group via a social media platform. In both of these cases, fears regarding the scope of Section 230 were expressed, which could reflect poorly on its applicability going forward.[24]

Gonzalez and Taamneh will hit the Supreme Court soon. If Section 230 is restricted, it would enable greater free speech but risks exposing more people to harms like hate speech or violence. However, if Section 230 is preserved as is, it could restrict the accessibility and openness that have made the internet what it is today. Whichever decision is made, there will be massive implications for what the internet looks like in the future.

Notes

[1] https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/#:~:text=The%20number%20of%20social%20media,growth%20of%20%2B137%20million%20users.

[2] Id.

[3] https://apnews.com/article/islamic-state-group-us-supreme-court-technology-france-social-media-6bee9b5adf33dd15ee64b0d4d4e5ec78

[4] Id.

[5] Id.

[6] https://www.washingtonpost.com/politics/2023/01/03/2023-is-poised-be-landmark-year-tech-legal-bouts/

[7] https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

[8] https://www.eff.org/issues/cda230

[9] https://casetext.com/statute/united-states-code/title-47-telecommunications/chapter-5-wire-or-radio-communication/subchapter-ii-common-carriers/part-i-common-carrier-regulation/section-230-protection-for-private-blocking-and-screening-of-offensive-material

[10] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[11] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[12] https://www.washingtonpost.com/politics/2022/12/09/tech-critics-urge-supreme-court-narrow-section-230/

[13] https://knightcolumbia.org/blog/twitter-v-taamneh-in-the-supreme-court-whats-at-stake

[14] Washington Post, supra.

[15] https://www.hawley.senate.gov/hawley-files-gonzalez-v-google-amicus-brief-supreme-court-challenging-big-techs-section-230

[16] Washington Post, supra.

[17] https://www.lawfareblog.com/supreme-court-grants-certiorari-gonzalez-v-google-and-twitter-v-taamneh-overview

[18] Id.

[19] Id.

[20]

[21] Id.

[22] Id.

[23] Id.

[24] Id.


Emptying the Nest: Recent Events at Twitter Prompt Class-Action Litigation, Among Other Things

Ted Mathiowetz, MJLST Staffer

You’d be forgiven if you thought the circumstances that led to Elon Musk ultimately acquiring Twitter would be the end of the drama for the social media company. In the past seven months, Musk went from becoming the largest shareholder of the company, to publicly feuding with then-CEO Parag Agrawal, to making an offer to take the company private for $44 billion, to deciding he didn’t want to purchase the company, to being sued by Twitter to force him to complete the deal. Eventually, two weeks before trial was scheduled, Musk purchased the company for the original, agreed-upon price.[1] However, within the first two-and-a-half weeks of Musk taking Twitter private, the drama has continued, if not ramped up, with one lawsuit already filed and the specter of additional litigation looming.[2]

There’s been the highly controversial rollout and almost immediate suspension of Twitter Blue—Musk’s idea for increasing the reliability of information on Twitter while simultaneously helping ameliorate Twitter’s financial woes.[3] Essentially, users were able to pay $8 a month for verification, albeit without actually verifying their identity; instead, their username would remain frozen at the time they paid for the service.[4] Users quickly created fake “verified” accounts for real companies and spread misinformation while armed with the “verified” check mark, duping both the public and investors. For example, a newly created account with the handle “@EliLillyandCo” paid for Twitter Blue and tweeted “We are excited to announce insulin is free now.”[5] Eli Lilly’s actual Twitter account, “@LillyPad,” had to tweet a message apologizing to those “who have been served a misleading message” from the fake account after the pharmaceutical company’s shares dipped around 5% following the tweet.[6] In addition to Eli Lilly, several other companies, like Lockheed Martin, faced similar impersonation.[7] Twitter Blue was quickly suspended in the wake of these viral impersonations, and advertisers have continued to flee the company, affecting its revenue.[8]

Musk also pulled over 50 engineers from Tesla, the vehicle manufacturing company of which he is CEO, to help him in his reimagining of Twitter.[9] Among those 50 engineers are the director of software development and the senior director of software engineering.[10] Pulling engineers from his publicly traded company to work on his separately owned private company almost assuredly raises questions of a violation of his fiduciary duty to Tesla’s shareholders, especially with Tesla’s share price falling 13% over the last week (as of November 9, 2022).[11]

The bulk of Twitter’s current legal issues reside in Musk’s decision to engage in mass layoffs of employees at Twitter.[12] After his first week in charge, he sent out notices to around half of Twitter’s 7,500 employees that they would be laid off, reasoning that cutbacks were necessary because Twitter was losing over $4 million per day.[13] Soon after the layoffs, a group of employees filed suit alleging that Twitter violated the Worker Adjustment and Retraining Notification (WARN) Act by failing to give adequate notice.[14]

The WARN Act, passed in 1988, applies to employers with 100 or more employees[15] and mandates that an “employer shall not order a [mass layoff]” until it gives sixty days’ notice to the state and affected employees.[16] Compliance can also be achieved if, in lieu of notice, employees are paid for the sixty-day notice period. In Twitter’s case, some employees were offered pay to comply with the sixty-day period after the initial lawsuit was filed,[17] though the lead plaintiff in the class action suit was allegedly laid off on November 1st with no notice or offer of severance pay.[18] Additionally, it appears as though Twitter is now offering severance to employees in return for a signature releasing the company from liability in a WARN action.[19]

With regard to those who have not yet signed releases and were not given notice of a layoff, there is a question of what the penalties to Twitter may be and what potential defenses it may have. Each employee is entitled to “back pay for each day of violation” as well as benefits under their respective plan.[20] Furthermore, the employer is subject to a civil penalty of “not more than $500 for each day of violation” unless it pays its liability to each employee within three weeks of the layoff.[21] One possible defense that Twitter may assert in response to this suit is that of “unforeseeable business circumstances.”[22] Considering Musk’s recent comments that Twitter could be headed for bankruptcy, as well as the saddling of the company with debt to purchase it (reportedly $13 billion, with $1 billion per year in interest payments),[23] there is a chance this defense could suffice. However, an unforeseen circumstance is strongly indicated when the circumstance is “outside the employer’s control,”[24] something that is arguable given the company’s recent conduct.[25] Additionally, Twitter would have to show that it has been exercising “commercially reasonable business judgment as would a similarly situated employer,” another burden that may be hard to overcome. In sum, it’s quite clear why Twitter is trying to keep this lawsuit from gaining traction by securing release waivers. It’s also clear that Twitter has learned its lesson about not offering severance, but it may be wading into other areas of employment law with its recent conduct.[26]

Notes

[1] Timeline of Billionaire Elon Musk’s Bid to Control Twitter, Associated Press (Oct. 28, 2022), https://apnews.com/article/twitter-elon-musk-timeline-c6b09620ee0905e59df9325ed042a609.

[2] Annie Palmer, Twitter Sued by Employees After Mass Layoffs Begin, CNBC (Nov. 4, 2022), https://www.cnbc.com/2022/11/04/twitter-sued-by-employees-after-mass-layoffs-begin.html.

[3] Siladitya Ray, Twitter Blue: Signups for Paid Verification Appear Suspended After Impersonator Chaos, Forbes (Nov. 11, 2022), https://www.forbes.com/sites/siladityaray/2022/11/11/twitter-blue-new-signups-for-paid-verification-appear-suspended-after-impersonator-chaos/?sh=14faf76c385c; see also Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:43 PM), https://twitter.com/elonmusk/status/1589403131770974208?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[4] Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:35 PM), https://twitter.com/elonmusk/status/1589401231545741312?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[5] Steve Mollman, No, Insulin is not Free: Eli Lilly is the Latest High-Profile Casualty of Elon Musk’s Twitter Verification Mess, Fortune(Nov. 11, 2022), https://fortune.com/2022/11/11/no-free-insulin-eli-lilly-casualty-of-elon-musk-twitter-blue-verification-mess/.

[6] Id. Eli Lilly and Company (@LillyPad), Twitter (Nov. 10, 2022, 3:09 PM), https://twitter.com/LillyPad/status/1590813806275469333?s=20&t=4XvAAidJmNLYwSCcWtd4VQ.

[7] Mollman, supra note 5 (showing Lockheed Martin’s stock dipped around 5% as well following a tweet from a “verified” account saying arms sales were being suspended to various countries went viral).

[8] Herb Scribner, Twitter Suffers “Massive Drop in Revenue,” Musk Says, Axios (Nov. 4, 2022), https://www.axios.com/2022/11/04/elon-musk-twitter-revenue-drop-advertisers.

[9] Lora Kolodny, Elon Musk has Pulled More Than 50 Tesla Employees into his Twitter Takeover, CNBC (Oct. 31, 2022), https://www.cnbc.com/2022/10/31/elon-musk-has-pulled-more-than-50-tesla-engineers-into-twitter.html.

[10] Id.

[11] Trefis Team, Tesla Stock Falls Post Elon Musk’s Twitter Purchase. What’s Next?, NASDAQ (Nov. 9, 2022), https://www.nasdaq.com/articles/tesla-stock-falls-post-elon-musks-twitter-purchase.-whats-next.

[12] Dominic Rushe, et al., Twitter Slashes Nearly Half its Workforce as Musk Admits ‘Massive Drop’ in Revenue, The Guardian (Nov. 4, 2022), https://www.theguardian.com/technology/2022/nov/04/twitter-layoffs-elon-musk-revenue-drop.

[13] Id.

[14] Phil Helsel, Twitter Sued Over Short-Notice Layoffs as Elon Musk’s Takeover Rocks Company, NBC News (Nov. 4, 2022), https://www.nbcnews.com/business/business-news/twitter-sued-layoffs-days-elon-musk-purchase-rcna55619.

[15] 29 USC § 2101(a)(1).

[16] 29 USC § 2102(a).

[17] On Point, Boston Labor Lawyer Discusses her Class Action Lawsuit Against Twitter, WBUR Radio Boston (Nov. 10, 2022), https://www.wbur.org/radioboston/2022/11/10/shannon-liss-riordan-musk-class-action-twitter-suit (discussing recent developments in the case with attorney Shannon Liss-Riordan).

[18] Complaint at 5, Cornet et al. v. Twitter, Inc., Docket No. 3:22-cv-06857 (N.D. Cal. 2022).

[19] Id. at 6 (outlining previous attempts by another Musk company, Tesla, to get around WARN Act violations by tying severance agreements to waiver of litigation rights); see also On Point, supra note 17.

[20] 29 USC § 2104.

[21] Id.

[22] 20 CFR § 639.9 (2012).

[23] Hannah Murphy, Musk Warns Twitter Bankruptcy is Possible as Executives Exit, Financial Times (Nov. 10, 2022), https://www.ft.com/content/85eaf14b-7892-4d42-80a9-099c0925def0.

[24] Id.

[25] See, e.g., Murphy, supra note 23.

[26] See Pete Syme, Elon Musk Sent a Midnight Email Telling Twitter Staff to Commit to an ‘Extremely Hardcore’ Work Schedule – or Get Laid off with Three Months’ Severance, Business Insider (Nov. 16, 2022), https://www.businessinsider.com/elon-musk-twitter-staff-commit-extremely-hardcore-work-laid-off-2022-11; see also Jaclyn Diaz, Fired by Tweet: Elon Musk’s Latest Actions are Jeopardizing Twitter, Experts Say. NPR (Nov. 17, 2022), https://www.npr.org/2022/11/17/1137265843/elon-musk-fires-employee-by-tweet (discussing firing of an employee for correcting Musk on Twitter and potential liability for a retaliation claim under California law).



Twitter Troubles: The Upheaval of a Platform and Lessons for Social Media Governance

Gordon Unzen, MJLST Staffer

Elon Musk’s Tumultuous Start

On October 27, 2022, Elon Musk officially completed his $44 billion deal to purchase the social media platform, Twitter.[1] When Musk’s bid to buy Twitter was initially accepted in April 2022, proponents spoke of a grand ideological vision for the platform under Musk. Musk himself emphasized the importance of free speech to democracy and called Twitter “the digital town square where matters vital to the future of humanity are debated.”[2] Twitter co-founder Jack Dorsey called Twitter the “closest thing we have to a global consciousness,” and expressed his support of Musk: “I trust his mission to extend the light of consciousness.”[3]

Yet only two weeks into Musk’s rule, the tone has quickly shifted towards doom, with advertisers fleeing the platform, talk of bankruptcy, and the Federal Trade Commission (“FTC”) expressing “deep concern.” What happened?

Free Speech or a Free for All?

Critics were quick to read Musk’s pre-purchase remarks about improving ‘free speech’ on Twitter to mean he would change how the platform would regulate hate speech and misinformation.[4] This fear was corroborated by the stream of racist slurs and memes from anonymous trolls ‘celebrating’ Musk’s purchase of Twitter.[5] However, Musk’s first major change to the platform came in the form of a new verification service called ‘Twitter Blue.’

Musk took control of Twitter during a substantial pullback in advertisement spending in the tech industry, a problem that has impacted other tech giants like Meta, Spotify, and Google.[6] His solution was to seek revenue directly from consumers through Twitter Blue, a program where users could pay $8 a month for verification with the ‘blue check’ that previously served to tell users whether an account of public interest was authentic.[7] Musk claimed this new system would give ‘power to the people,’ which proved correct in an ironic and unintended fashion.

Twitter Blue allowed users to pay $8 for a blue check and impersonate politicians, celebrities, and company media accounts—which is exactly what happened. Musk, Rudy Giuliani, O.J. Simpson, LeBron James, and even the Pope were among the many impersonated by Twitter users.[8] Companies received the same treatment, with an impersonation Eli Lilly and Company account writing “We are excited to announce insulin is free now,” causing its stock to drop 2.2%.[9] This has led advertising firms like Omnicom and IPG’s Mediabrands to conclude that brand safety measures are currently impeded on Twitter, and advertisers have subsequently begun to announce pauses on ad spending.[10] Musk responded by suspending Twitter Blue only 48 hours after it launched, but the damage may already be done for Twitter, a company whose revenue was 90% ad sales in the second quarter of this year.[11] During his first mass call with employees, Musk said he could not rule out bankruptcy in Twitter’s future.[12]

It also remains to be seen whether the Twitter impersonators will escape civil liability under theories of defamation[13] or misappropriation of name or likeness,[14] or criminal liability under state identity theft[15] or false representation of a public employee statutes,[16] which have been legal avenues used to punish instances of social media impersonation in the past.

FTC and Twitter’s Consent Decree

On the first day of Musk’s takeover of Twitter, he immediately fired the CEO, CFO, head of legal policy, trust and safety, and general counsel.[17] By the following week, mass layoffs were in full swing with 3,700 Twitter jobs, or 50% of its total workforce, to be eliminated.[18] This move has already landed Twitter in legal trouble for potentially violating the California WARN Act, which requires 60 days advance notice of mass layoffs.[19] More ominously, however, these layoffs, as well as the departure of the company’s head of trust and safety, chief information security officer, chief compliance officer and chief privacy officer, have attracted the attention of the FTC.[20]

In 2011, Twitter entered a consent decree with the FTC in response to data security lapses requiring the company to establish and maintain a program that ensured its new features do not misrepresent “the extent to which it maintains and protects the security, privacy, confidentiality, or integrity of nonpublic consumer information.”[21] Twitter also agreed to implement two-factor authentication without collecting personal data, limit employee access to information, provide training for employees working on user data, designate executives to be responsible for decision-making regarding sensitive user data, and undergo a third-party audit every six months.[22] Twitter was most recently fined $150 million back in May for violating the consent decree.[23]

With many of Twitter’s former executives gone, the company may be at an increased risk for violating regulatory orders and may find itself lacking the necessary infrastructure to comply with the consent decree. Musk also reportedly urged software engineers to “self-certify” legal compliance for the products and features they deployed, which may already violate the court-ordered agreement.[24] In response to these developments, Douglas Farrar, the FTC’s director of public affairs, said the commission is watching “Twitter with deep concern” and added that “No chief executive or company is above the law.”[25] He also noted that the FTC had “new tools to ensure compliance, and we are prepared to use them.”[26] Whether and how the FTC will employ regulatory measures against Twitter remains uncertain.

Conclusions

The fate of Twitter is by no means set in stone—in two weeks the platform has lost advertisers, key employees, and some degree of public legitimacy. However, at the speed Musk has moved so far, in two more weeks the company could likely be in a very different position. Beyond the immediate consequences to the company, Musk’s leadership of Twitter illuminates some important lessons about social media governance, both internal and external to a platform.

First, social media is foremost a business and not the ‘digital town square’ Musk imagines. Twitter’s regulation of hate speech and verification of public accounts served an important role in maintaining community standards, promoting brand safety for advertisers, and protecting users. Loosening regulatory control runs a great risk of delegitimizing a platform that corporations and politicians alike took seriously as a tool for public communication.

Second, social media stability is important to government regulators, and further oversight may not be far off on the horizon. Musk is setting a precedent and bringing the spotlight onto the dangers of a destabilized social media platform and the risks this may pose to data privacy, efforts to curb misinformation, and even the stock market. In addition to the FTC, the Senate Majority Whip and chair of the Senate Judiciary Committee, Dick Durbin, has already commented negatively on the Twitter situation.[27] Musk may have given powerful regulators, and even legislators, the opportunity they were looking for to impose greater control over social media. For better or worse, Twitter’s present troubles could lead to a new era of government involvement in digital social spaces.

Notes

[1] Adam Bankhurst, Elon Musk’s Twitter Takeover and the Chaos that Followed: The Complete Timeline, IGN (Nov. 11, 2022), https://www.ign.com/articles/elon-musks-twitter-takeover-and-the-chaos-that-followed-the-complete-timeline.

[2] Monica Potts & Jean Yi, Why Twitter is Unlikely to Become the ‘Digital Town Square’ Elon Musk Envisions, FiveThirtyEight (Apr. 29, 2022), https://fivethirtyeight.com/features/why-twitter-is-unlikely-to-become-the-digital-town-square-elon-musk-envisions/.

[3] Bankhurst, supra note 1.

[4] Potts & Yi, supra note 2.

[5] Drew Harwell et al., Racist Tweets Quickly Surface After Musk Closes Twitter Deal, Washington Post (Oct. 28, 2022), https://www.washingtonpost.com/technology/2022/10/28/musk-twitter-racist-posts/.

[6] Bobby Allyn, Elon Musk Says Twitter Bankruptcy is Possible, But is That Likely?, NPR (Nov. 12, 2022), https://www.wglt.org/2022-11-12/elon-musk-says-twitter-bankruptcy-is-possible-but-is-that-likely.

[7] Id.

[8] Keegan Kelly, We Will Never Forget These Hilarious Twitter Impersonations, Cracked (Nov. 12, 2022), https://www.cracked.com/article_35965_we-will-never-forget-these-hilarious-twitter-impersonations.html; Shirin Ali, The Parody Gold Created by Elon Musk’s Twitter Blue, Slate (Nov. 11, 2022), https://slate.com/technology/2022/11/parody-accounts-of-twitter-blue.html.

[9] Ali, supra note 8.

[10] Mehnaz Yasmin & Kenneth Li, Major Ad Firm Omnicom Recommends Clients Pause Twitter Ad Spend – Memo, Reuters (Nov. 11, 2022), https://www.reuters.com/technology/major-ad-firm-omnicom-recommends-clients-pause-twitter-ad-spend-verge-2022-11-11/; Rebecca Kern, Top Firm Advises Pausing Twitter Ads After Musk Takeover, Politico (Nov. 1, 2022), https://www.politico.com/news/2022/11/01/top-marketing-firm-recommends-suspending-twitter-ads-with-musk-takeover-00064464.

[11] Yasmin & Li, supra note 10.

[12] Katie Paul & Paresh Dave, Musk Warns of Twitter Bankruptcy as More Senior Executives Quit, Reuters (Nov. 10, 2022), https://www.reuters.com/technology/twitter-information-security-chief-kissner-decides-leave-2022-11-10/.

[13] Dorrian Horsey, How to Deal With Defamation on Twitter, Minc, https://www.minclaw.com/how-to-report-slander-on-twitter/ (last visited Nov. 12, 2022).

[14] Maksim Reznik, Identity Theft on Social Networking Sites: Developing Issues of Internet Impersonation, 29 Touro L. Rev. 455, 456 n.12 (2013), https://digitalcommons.tourolaw.edu/cgi/viewcontent.cgi?article=1472&context=lawreview.

[15] Id. at 455.

[16] Brett Snider, Can a Fake Twitter Account Get You Arrested?, FindLaw Blog (April 22, 2014), https://www.findlaw.com/legalblogs/criminal-defense/can-a-fake-twitter-account-get-you-arrested/.

[17] Bankhurst, supra note 1.

[18] Sarah Perez & Ivan Mehta, Twitter Sued in Class Action Lawsuit Over Mass Layoffs Without Proper Legal Notice, Techcrunch (Nov. 4, 2022), https://techcrunch.com/2022/11/04/twitter-faces-a-class-action-lawsuit-over-mass-employee-layoffs-with-proper-legal-notice/.

[19] Id.

[20] Natasha Lomas & Darrell Etherington, Musk’s Lawyer Tells Twitter Staff They Won’t be Liable if Company Violates FTC Consent Decree (Nov. 11, 2022), https://techcrunch.com/2022/11/11/musks-lawyer-tells-twitter-staff-they-wont-be-liable-if-company-violates-ftc-consent-decree/.

[21] Id.

[22] Scott Nover, Elon Musk Might Have Already Broken Twitter’s Agreement With the FTC, Quartz (Nov. 11, 2022), https://qz.com/elon-musk-might-have-already-broken-twitter-s-agreement-1849771518.

[23] Tom Espiner, Twitter Boss Elon Musk ‘Not Above the Law’, Warns US Regulator, BBC (Nov. 11, 2022), https://www.bbc.com/news/business-63593242.

[24] Nover, supra note 22.

[25] Espiner, supra note 23.

[26] Id.

[27] Kern, supra note 10.


It’s Social Media – A Big Lump of Unregulated Child Influencers!

Tessa Wright, MJLST Staffer

If you’ve been on TikTok lately, you’re probably familiar with the Corn Kid. Seven-year-old Tariq went viral on TikTok in August after appearing in an 85-second video clip professing his love of corn.[1] Due to his accidental viral popularity, Tariq has become a social media celebrity. He has been featured in content collaborations with notable influencers, starred in a social media ad for Chipotle, and even created an account on Cameo.[2] At seven-years-old, he has become a child influencer, a minor celebrity, and a major financial contributor for his family. Corn Kid is not alone. There are a growing number of children rising to fame via social media. In fact, today child influencers have created an eight-billion-dollar social media advertising industry, with some children generating as much as $26 million a year through advertising and sponsored content.[3] Yet, despite this rapidly growing industry, there are still very few regulations protecting the financial earnings of children entertainers in the social media industry.[4]

What Protects Children’s Financial Earnings in the Entertainment Industry?

Normally, children in the entertainment industry have their financial earnings protected under the California Child Actor’s Bill (also known as the Coogan Law).[5] The Coogan Law was passed in 1939 by the state of California in response to the plight of Jackie Coogan.[6] Coogan was a child star who earned millions of dollars as a child actor only to discover upon reaching adulthood that his parents had spent almost all of his money.[7] Over the years the law has evolved, and today it upholds that earnings by minors in the entertainment industry are the property of the minor.[8] Specifically, the California law creates a fiduciary relationship between the parent and child and requires that 15% of all earnings must be set aside in a blocked trust.[9]

What Protections do Child Social Media Stars Have? 

Social media stars are not legally considered to be actors, so the Coogan Law does not apply to their earnings.[10] So, are there other laws protecting these social media stars? The short answer is no.

Technically, there are laws that prevent children under the age of thirteen from using social media apps, which in theory should protect the youngest social media stars.[11] However, even though these social media platforms claim to require users to be at least thirteen years old to create accounts, there are still ways children end up working in content creation jobs.[12] The most common scenario is that parents make content in which they feature their children.[13] These “family vloggers” are a popular genre of YouTube videos in which parents frequently feature their children and share major life events; sometimes they even feature the birth of their children. Often these parents also make separate social media accounts for their children, which are technically run by the parents and are therefore allowed despite the age restrictions.[14] There are no restrictions or regulations preventing parents from making social media accounts for their children, and therefore no restriction on the parents’ collection of the income generated from such accounts.[15]

New Attempts at Legislation 

So far, there has been very little intervention by lawmakers. The state of Washington has attempted to turn the tide by proposing a new state bill that attempts to protect children working in social media.[16] The bill was introduced in January of 2022 and, if passed, would offer protection to children living within the state of Washington who are on social media.[17] Specifically, the bill introduction reads, “Those children are generating interest in and revenue for the content, but receive no financial compensation for their participation. Unlike in child acting, these children are not playing a part, and lack legal protections.”[18] The bill would hopefully help protect the finances of these child influencers. 

Additionally, California passed a similar bill in 2018.[19] Unfortunately, it only applies to videos that are longer than one hour and have direct payment to the child.[20] What this means is that a child who, for example, is a Twitch streamer that posts a three-hour livestream and receives direct donations during the stream, would be covered by the bill; however, a child featured in a 10-minute YouTube video or a 15-second TikTok would not be financially protected under the bill.

The Difficulties in Regulating Social Media Earnings for Children

Currently, France is the only country in the world with regulations for children working in the social media industry.[21] There, children working in the entertainment industry (whether as child actors, models, or social media influencers) have to register for a license and their earnings must be put into a dedicated bank account for them to access when they’re sixteen.[22] However, the legislation is still new and it is too soon to see how well these regulations will work. 

The problem with creating legislation in this area is attributable to the ad hoc nature of making social media content.[23] It is not realistic to simply extend existing legislation applicable to child entertainers to child influencers[24] as their work differs greatly. Moreover, it becomes extremely difficult to attempt to regulate an industry when influencers can post content from any location at any time, and when parents may be the ones filming and posting the videos of their children in order to boost their household income. For example, it would be hard to draw a clear line between when a child is being filmed casually for a home video and when it is being done for work, and when an entire family is featured in a video it would be difficult to determine how much money is attributable to each family member. 

Is There a Solution?

While there is no easy solution, changing the current regulations or creating new ones is the clearest route. Traditionally, tech platforms have taken the view that governments should make rules and that they will then enforce them.[25] All major social media sites have their own safety rules, but the extent to which they are responsible for the oversight of child influencers is not clearly defined.[26] However, if any new regulation is going to be effective, big tech companies will need to get involved. As it stands today, parents have found loopholes that allow them to feature their child stars on social media without violating age restrictions. To prevent similar loopholes from undermining new regulations, it will be essential that big tech companies work in collaboration with legislators to create technical features that close them off.

The hope is that one day, children like Corn Kid will have total control of their financial earnings, and will not reach adulthood only to discover their money has already been spent by their parents or guardians. The future of entertainment is changing every day, and the laws need to keep up. 

Notes

[1] Madison Malone Kircher, N.Y. Times (Sept. 21, 2022), https://www.nytimes.com/2022/09/21/style/corn-kid-tariq-tiktok.html.

[2] Id.

[3] Marina Masterson, When Play Becomes Work: Child Labor Laws in the Era of ‘Kidfluencers’, 169 U. Pa. L. Rev. 577, 577 (2021).

[4] Coogan Accounts: Protecting Your Child Star’s Earnings, Morgan Stanley (Jan. 10, 2022), https://www.morganstanley.com/articles/trust-account-for-child-performer.

[5] Coogan Law, https://www.sagaftra.org/membership-benefits/young-performers/coogan-law (last visited Oct. 16, 2022).

[6] Id.

[7] Id.

[8] Cal. Fam. Code § 6752.

[9] Id.

[10] Morgan Stanley, supra note 4.

[11] Sapna Maheshwari, Online and Making Thousands, at Age 4: Meet the Kidfluencers, N.Y. Times, (March 1, 2019) https://www.nytimes.com/2019/03/01/business/media/social-media-influencers-kids.html.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Katie Collins, TikTok Kids Are Being Exploited Online, but Change is Coming, CNET (Aug. 8, 2022 9:00 AM), https://www.cnet.com/news/politics/tiktok-kids-are-being-exploited-online-but-change-is-coming/.

[17] Id.

[18] Id.

[19] E.W. Park, Child Influencers Have No Child Labor Regulations. They Should, Lavoz News (May 16, 2022) https://lavozdeanza.com/opinions/2022/05/16/child-influencers-have-no-child-labor-regulations-they-should/.

[20] Id.

[21] Collins, supra note 16.

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Collins, supra note 16.


Freedom to Moderate? Circuits Split Over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws that restrict social media platforms’ ability to moderate posts expressing “viewpoints” and require platforms to explain why they chose to censor certain content. These laws seemingly stem from the perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional in May for violating social media platforms’ First Amendment rights, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning a previous injunction issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition for review of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies that ban certain content, or at least require a sensitivity warning to be posted before certain content can be viewed. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive-content warnings on adult content. Facebook sets Community Standards, and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent and was further confirmed in the controversial 2010 Citizens United decision, which recognized corporations’ independent political expenditures as protected speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this freedom by establishing and enforcing internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected challenges to social media platforms’ ability to set and enforce their own content guidelines, upholding the platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split that will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s May opinion was consistent with the general understanding of social media platforms as private businesses that hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and held that the platforms’ editorial discretion is protected by the First Amendment.[2] The court recognized the platforms’ freedom to abide by their own community guidelines and to choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] The 5th Circuit’s later decision attacked this opinion directly, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence.

In its September 16 opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs’ argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also embraced the common carrier doctrine, which empowers states to enforce nondiscriminatory practices for services the public uses en masse, in the context of social media platforms (a classification the 11th Circuit explicitly rejected).[5] It therefore held with “no doubts” that Section 7 of the Texas law, which prevents platforms from censoring users’ “viewpoints” (with exceptions for blatantly illegal speech, speech provoking violence, and the like), is constitutional.[6] Section 2 of the contested statute, which requires social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without unduly burdening the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

On September 21, 2022, Florida petitioned for a writ of certiorari asking the Supreme Court to review the May 2022 decision. The petition referenced the 5th Circuit opinion and called on the Supreme Court to resolve the circuit split. Considering recent Supreme Court decisions cutting back Fourth and Fifth Amendment rights, some anticipate that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws at issue in these circuit court decisions were Republican-proposed bills, a Supreme Court decision would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would impact both Republican and Democratic legislatures’ ability to regulate social media platforms in any way.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] NetChoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sept. 16, 2022).

[5] Id. at *59.

[6] Id. at *52.

[7] Id. at *102.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice notes that teenagers would no longer be able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate, algorithm-free platforms for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism from the bill’s opponents by exempting algorithms used to filter out age-inappropriate content. A companion bill to HF 3724, SF 3922, is also being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into one class action overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes: one covers TikTok users nationwide, and the other includes only TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. This improper use includes accusations that TikTok, without consent, shared consumer data with third parties. These third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Computer Data Access and Fraud Act and of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations of the Biometric Information Privacy Act (BIPA). Under BIPA, before collecting a user’s biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident and must have used the app prior to October 2021. Eligible individuals must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible for six shares of the settlement, compared to the one share available to members of the nationwide class. This difference is due to the added protection the Illinois subclass has under BIPA.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed-upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said it would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. District Judge.

Conclusion

The lawyers representing TikTok users remarked that this settlement was “among the largest privacy-related payouts in history.” And, as noted by NPR, this settlement is similar to the one agreed to by Facebook in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively seek out and cease practices that may violate privacy law.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their “For You” page – the videos that appear when they open the app and scroll without searching for anything in particular – seems to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier knowing TikTok has agreed to the settlement reforms.