Alex O’Connor, MJLST Staffer
In 2019 and 2020, remarkably realistic forged political content went viral on social media. The forgeries, known as “deepfakes,” included photorealistic images of political figures such as Kim Jong Un, Vladimir Putin, Matt Gaetz, and Barack Obama. Also in 2019, a woman was conned out of nearly $300,000 by a scammer who posed as a U.S. Navy admiral using deepfake technology. These stories, and others like them, catapulted online forgeries to the front page of newspapers, as observers were both intrigued and frightened by this novel technology.
While the potential for deepfake technology to deceive political leaders and provoke conflict helped bring deepfakes into the public consciousness, individuals, and particularly women, have been victimized by deepfakes since as early as 2017. Even today, research suggests that 96% of deepfake content available online is nonconsensual pornography. While early targets of deepfakes were mostly celebrity women, nonpublic figures have been victimized as well. Indeed, deepfake technology is becoming increasingly sophisticated and user friendly, giving anyone so inclined the ability to transpose a woman’s photograph onto explicit content, creating forged pornography to harass, blackmail, or embarrass her. For example, one deepfake app allowed users to strip a subject’s clothing from photos, creating a photorealistic nude image. After widespread outcry, the developers shut the app down only hours after its launch.
The political implications of deepfakes alarmed lawmakers as well, and Congress leapt into action. Beginning in 2020, the National Defense Authorization Act (NDAA) required the Department of Homeland Security (DHS) to issue an annual report on the threats that deepfake technology poses to national security. The following year, the NDAA broadened the required report to cover threats to individuals as well. Another piece of legislation, the Identifying Outputs of Generative Adversarial Networks Act, directed the National Institute of Standards and Technology to support research on developing standards related to deepfake content.
A much more controversial bill goes beyond research and reporting. The DEEP FAKES Accountability Act would require any producer of deepfake content to include a watermark over the image notifying viewers that it is a forgery. If the content contains “sexual content of a visual nature,” producers who omit the watermark would face criminal penalties; anyone who merely violates the watermark requirement would face civil penalties of $150,000 per image.
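For readers curious what compliance with the watermark requirement might look like in practice, below is a minimal sketch in Python using the Pillow imaging library. The banner wording, its placement, and the function name add_disclosure_watermark are illustrative assumptions; the bill itself does not prescribe any particular format for the disclosure.

```python
from PIL import Image, ImageDraw, ImageFont

def add_disclosure_watermark(input_path: str, output_path: str,
                             label: str = "NOTICE: THIS CONTENT HAS BEEN DIGITALLY ALTERED") -> None:
    """Stamp a visible disclosure banner across the bottom of an image.

    A hypothetical illustration of the kind of visible notice the
    DEEP FAKES Accountability Act contemplates; the wording and
    placement here are assumptions, not the statutory standard.
    """
    image = Image.open(input_path).convert("RGBA")
    width, height = image.size

    # Draw a semi-transparent banner so the notice stays legible
    # without completely obscuring the underlying image.
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    banner_height = max(32, height // 12)
    draw.rectangle(
        [(0, height - banner_height), (width, height)],
        fill=(0, 0, 0, 180),
    )

    # The default bitmap font keeps the sketch dependency-free;
    # a real tool would scale the type to the image size.
    font = ImageFont.load_default()
    draw.text((10, height - banner_height + 8), label,
              fill=(255, 255, 255, 255), font=font)

    watermarked = Image.alpha_composite(image, overlay)
    watermarked.convert("RGB").save(output_path)

if __name__ == "__main__":
    add_disclosure_watermark("synthetic.jpg", "synthetic_labeled.jpg")
```

Of course, a visible banner like this is trivial to crop out, which is one reason critics question whether a watermark mandate can be meaningfully enforced at all.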
While many have celebrated the bill for its potential to protect individuals and the political process, others have criticized it as an overbroad and ineffective infringement on free speech. Producers of political satire in particular may find the watermark requirement a joke killer. Further, some worry that the rapid pace of deepfake development could expose websites to interminable litigation, as the sheer proliferation of deepfake content would make enforcing the act on platforms impossible. Originally introduced in June 2019 by Representative Yvette Clarke (D-NY-9), the bill languished in committee. Representative Clarke reintroduced it in April of this year before the 117th Congress, and it is currently being considered by three committees: Energy and Commerce, Judiciary, and Homeland Security.
The flurry of legislative activity at the federal level has been mirrored in the states. Five states have enacted deepfake legislation targeting political interference, nonconsensual pornography, or both, while another four states have introduced similar bills. As with the federal legislation, opposition to the state deepfake laws is grounded in First Amendment concerns; civil liberties advocates such as the ACLU sent a letter to the California governor asking him to veto that state’s legislation. He declined.
Deepfake-related legislative activity has stalled during the coronavirus pandemic, but the question of how to craft legislation that strikes the right balance between privacy and dignity on one hand, and free expression and satire on the other, looms as large as ever. That question will only become more pressing as deepfake technology grows more capable and concerns mount about governmental overreach in good-faith efforts to protect citizens’ privacy and the democratic process.