Moderating Social Media Content: A Comparative Analysis of European Union and United States Policy

Jaxon Hill, MJLST Staffer

In the wake of the Capitol Hill uprising, former President Donald Trump had several of his social media accounts suspended.1 Twitter explained that its decision to suspend Trump’s account was “due to the risk of further incitement of violence.”2 Though this decision attracted significant public attention, Trump was not the first political figure to have his account suspended.3 In response to the platforms’ alleged censorship, some states, most notably Florida and Texas, enacted anti-censorship laws that limit the ability of social media companies to moderate content.4 

Now, as litigation unfolds over Trump’s suspensions and over the Texas and Florida legislation, the age-old question rears its ugly head: what is free speech?5 Do social media companies have a right to limit it? Social media companies are not bound by the First Amendment.6 Thus, barring valid legislation that says otherwise, they may restrict or moderate content on their platforms. But should they, and, if so, how? And how do the answers to these questions differ for public officials on social media? To analyze these considerations, it is worthwhile to look beyond the borders of the United States. This analysis is not meant to presuppose any wrongful conduct on the part of social media companies. Rather, it serves as an opportunity to examine an alternative approach to content moderation that could provide more clarity to all interested parties. 

In the European Union, social media companies are required to provide clear and specific information whenever they restrict content on their platforms.7 These statements, called “Statements of Reasons” (“SoRs”), must include a reference to the law the restricted post violated.8 All SoRs are made publicly available to ensure transparency between users and the platform.9 

An analysis of these SoRs yielded mixed results as to their efficacy, but it opened the door to potential improvements.10 Ultimately, the analysis showed inconsistencies among the various platforms in how and why they moderate content, and those inconsistencies may give legislators an opening to clarify social media guidelines.11 

Applying this same principle domestically could allow for greater transparency among consumers, social media companies, and the government. By making the rationale behind any moderation decision publicly available, social media companies could continue to remove illegal content without straddling the line of censorship. This policy would likely carry negative financial implications, though. With states potentially implementing vastly different policies, social media companies may face increased compliance costs wherever they operate.12 Nevertheless, absorbing these costs up front may be preferable to “censorship” or “extremism, hatred, [or] misinformation and disinformation.”13 

As applied specifically to government officials, this alternative may seem to offer no more clarity than the current state of affairs. That assertion has some merit, as government officials in the EU have still been able to post harmful social media content without it being moderated.14 That said, politicians’ engagement with social media is a relatively new development, both domestically and internationally, so more research is needed to establish best practices. Regardless, increased transparency should bar social media companies from making moderation choices unfounded in law.


Notes

1 Bobby Allyn & Tamara Keith, Twitter Permanently Suspends Trump, Citing ‘Risk Of Further Incitement Of Violence’, NPR (Jan. 8, 2021), https://www.npr.org/2021/01/08/954760928/twitter-bans-president-trump-citing-risk-of-further-incitement-of-violence.

2 Id.

3 See Christian Shaffer, Deplatforming Censorship: How Texas Constitutionally Barred Social Media Platform Censorship, 55 Tex. Tech L. Rev. 893, 903-04 (2023) (giving an example of both conservative and liberal users that had their accounts suspended).

4 See Daveed Gartenstein-Ross et al., Anti-Censorship Legislation: A Flawed Attempt to Address a Legitimate Problem, Lawfare (July 27, 2022), https://www.lawfaremedia.org/article/anti-censorship-legislation-flawed-attempt-address-legitimate-problem (explaining the Texas and Florida legislation in-depth).

5 See, e.g., Trump v. United States, 219 L. Ed. 2d 991, 1034 (2024) (remanding the case to the lower courts); Moody v. NetChoice, LLC, 219 L. Ed. 2d 1075, 1104 (2024) (remanding the case to the lower courts).

6 Evelyn Mary Aswad, Taking Exception to Assessments of American Exceptionalism: Why the United States Isn’t Such an Outlier on Free Speech, 126 Dick. L. Rev. 69, 72 (2021).

7 Chiara Drolsbach & Nicolas Pröllochs, Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (2023), https://arxiv.org/html/2312.04431v1/#bib.bib56.

8 Id.

9 Id.

10 Id. This analysis showed that (1) the volume of content moderation varies across platforms, (2) moderation is most often applied to videos and text, while images are moderated far less, (3) most rule-breaking content is identified via automated means (except on X), (4) platforms vary widely in how often they moderate illegal content, and (5) the primary reasons for moderation are content falling outside the scope of the platform’s services, illegal or harmful speech, and sexualized content. Misinformation was rarely cited as the reason for moderation.

11 Id.

12 Perkins Coie LLP, More State Content Moderation Laws Coming to Social Media Platforms (Nov. 17, 2022), https://perkinscoie.com/insights/update/more-state-content-moderation-laws-coming-social-media-platforms (recommending that social media companies hire counsel to ensure they are complying with various state laws).

13 See, e.g., Shaffer, supra note 3 (detailing the harms of censorship); Gartenstein-Ross et al., supra note 4 (outlining the potential harms of restrictive content moderation).

14 Goujard et al., Europe’s Far Right Uses TikTok to Win Youth Vote, Politico (Mar. 17, 2024), https://www.politico.eu/article/tiktok-far-right-european-parliament-politics-europe/ (“Without evidence, [Polish far-right politician, Patryk Jaki] insinuated the person who carried out the attack was a migrant”).