November 2021

With Lull in Deepfake Legislation, Questions Loom Large as Ever

Alex O’Connor, MJLST Staffer

In 2019 and 2020, remarkably realistic, politically motivated forged content went viral on social media. The content, known as “deepfakes,” included photorealistic images of political figures such as Kim Jong Un, Vladimir Putin, Matt Gaetz, and Barack Obama. Also in 2019, a woman was conned out of nearly $300,000 by a scammer who posed as a U.S. Navy admiral using deepfake technology. These stories, and others, catapulted online forgeries to the front pages of newspapers, as observers were both intrigued and frightened by this novel technology.

While the potential for deepfake technology to deceive political leaders and provoke conflict helped bring deepfakes into the public consciousness, individuals — and particularly women — have been victimized by deepfakes since as early as 2017. Even today, research suggests that 96% of deepfake content available online is nonconsensual pornography. While early targets of deepfakes were mostly celebrity women, nonpublic figures have been victimized as well. Indeed, deepfake technology is becoming increasingly sophisticated and user-friendly, giving anyone so inclined the ability to forge pornography by transposing a woman’s photograph onto explicit content in order to harass, blackmail, or embarrass her. For example, one deepfake app allowed users to strip a subject’s clothing from photos, creating a photorealistic nude image. After widespread outcry, the developers shut the app down only hours after its launch.

The political implications of deepfakes alarmed lawmakers as well, and Congress leapt into action. Beginning in 2020, the National Defense Authorization Act (NDAA) required the Department of Homeland Security (DHS) to issue an annual report on the threats that deepfake technology poses to national security. The following year’s NDAA broadened the DHS report to cover threats to individuals as well. Another piece of legislation, the Identifying Outputs of Generative Adversarial Networks Act, directed the National Institute of Standards and Technology to support research on developing standards related to deepfake content.

A much more controversial bill goes beyond research and reports. The DEEP FAKES Accountability Act would require any producer of deepfake content to include a watermark over the image notifying viewers that it is a forgery. Producers of unwatermarked content containing “sexual content of a visual nature” would be subject to criminal penalties, while anyone who merely violates the watermark requirement would face civil penalties of $150,000 per image.

While many have celebrated the bill for its potential to protect individuals and the political process, others have criticized it as an overbroad and ineffective infringement on free speech. Producers of political satire in particular may find the watermark requirement a joke killer. Further, some worry that the pace of deepfake technology development could expose websites to interminable litigation, as the proliferation of deepfake content would make enforcing the act on platforms impossible. Originally introduced in June 2019 by Representative Yvette Clarke (D-NY-9), the bill languished in committee. Representative Clarke reintroduced it in April of this year before the 117th Congress, and it is currently being considered by three committees: Energy and Commerce, Judiciary, and Homeland Security.

The flurry of legislative activity at the federal level has been mirrored in the states. Five states have enacted deepfake legislation targeting political interference, nonconsensual pornography, or both, while another four have introduced similar bills. As with the federal legislation, opposition to state deepfake laws is grounded in First Amendment concerns: defenders of civil liberties such as the ACLU sent a letter to the California governor asking him to veto that state’s legislation. He declined.

Deepfake-related legislative activity has stalled during the coronavirus pandemic, but the questions around how to craft legislation that strikes the right balance between privacy and dignity on the one hand, and free expression and satire on the other, loom as large as ever. These questions will only become more pressing with the rapid growth of deepfake technology and growing concerns about governmental overreach in good-faith efforts to protect citizens’ privacy and the democratic process.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought much about how growing up in the digital age affected me, but looking back, the cultural red flags are easy to see. It came as no surprise to me when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files,” and among them found an internal company study showing that Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late Millennials have known for years. The “Facebook Files” contain another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal trouble the company may find itself in and of the proposed solutions to the “whitelisting” problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” Those exempted span a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, filed a complaint with the Securities and Exchange Commission (SEC) alleging that Facebook “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 C.F.R. § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements about Facebook’s neutral application of its standards that are directly at odds with the Facebook Files. Regardless of any potential SEC investigation, the whitelist has opened a conversation about the need for serious reform in the big tech arena to ensure that no company can keep lists of privileged users again. All of the leading proposals involve 47 U.S.C. § 230, known colloquially as “Section 230.”

Section 230 allows big tech companies to moderate content while still being treated as platforms rather than publishers (which would incur liability for what appears on their websites). Specifically, § 230(c)(2)(A) provides that no “interactive computer service” shall be held liable for acting in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have used as justification for removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which have in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from Section 230 itself. The proposed “Stop the Censorship Act of 2020,” introduced by Republican Paul Gosar of Arizona, does just that. Proponents argue that it would force tech companies to be neutral or lose their liability protections. Thus, no big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the standard would hew close to First Amendment protections—problem solved! However, the current governing majority has serious concerns about forced neutrality, which would ignore problems of misinformation and the mental health effects of social media in the aftermath of January 6th.

Elizabeth Warren, like a recent proposal in the House Judiciary Committee, takes a different approach: breaking up big tech. Warren proposes legislation to limit big tech companies from competing with the small businesses that use their platforms and to reverse or block mergers, such as Facebook’s purchase of Instagram. Her plan does not necessarily stop companies from keeping whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before applying their rules unevenly. Furthermore, Warren has called for regulators to use “every tool in the toolbox” with regard to Facebook.

Third, some have argued that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The government, the argument goes, cannot “induce” or “encourage” private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Because some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block constitutionally protected speech. See Railway Employes’ Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite the absence of an actual government mandate). If the Supreme Court were to adopt this reasoning, Facebook might be forced to take a First Amendment-centric approach, since its current hate speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be barred from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach—needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Beyond the potential SEC violations, Section 230 is a complicated but necessary issue that Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play Farmville has become a machine with rules for me, but not for thee.