
Are AI Overviews Creating New Risk of Libel for Search Engines?

Eleanor Nagel-Bennett, MJLST Staffer

Section 230 of the Communications Decency Act (“CDA”), codified at 47 U.S.C. § 230, protects online service providers from civil liability for content published on their servers by third parties. Essentially, if a Google search for one’s name produces a link to a blog post containing false and libelous content about that person, the falsely accused searcher can pursue a defamation claim against the publisher of the blog, but not against Google. Under § 230, Google is not considered the speaker or the publisher of the libelous statements on the blog, despite returning the libelous results on the search engine results page. Specifically, § 230 provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” foreclosing civil liability on that basis.[i]

However, in May 2024 Google rolled out an “AI Overview” feature on its search engine. The AI Overview is powered by Gemini, Google’s generative artificial intelligence chatbot.[ii] Gemini generates a response to each Google search by combining information from internet sources and writing a complete overview answer to the search query using “multi-step reasoning, planning and multimodality” functions.[iii] After a user submits a query, the AI Overview is displayed at the top of the search results. In its first few weeks, Google’s AI Overview suffered from hallucinations, producing “odd and erroneous” outputs.[iv] Some of the odd results were obviously false, such as suggesting a user try adhering cheese to their pizza with a glue stick.[v]

Besides going viral online, the silly results were largely inconsequential. However, there were also several more serious reports of Google’s AI Overview feature generating misinformation that was harder to identify as false. One such result claimed that President Barack Obama was the first Muslim President, a popular but demonstrably false conspiracy theory that has circulated the internet for years, while another told users that certain poisonous mushrooms were safe for human consumption.[vi] Google has since changed the data pool used to produce AI Overviews and now rarely produces blatantly false results. But is “rarely” enough when 8.5 billion searches are run per day on Google?[vii] Even an error rate of one in a million would still yield thousands of false overviews every single day.

This raises the question: can search engines be held liable for libelous content published by their generative AI? A plaintiff would first have to persuade the court that § 230 of the Communications Decency Act is not a statutory bar to claims arising from generative AI output. A recent consensus among legal scholars anticipates that courts will likely find the CDA does not bar claims against a company producing libelous content through generative AI, because content produced by generative AI is original work, “authored” by the AI itself.[viii]

For an illustrative comparison, consider how defamation claims against journalists work as compared to defamation claims over traditional search engine results. While a journalist may write stories based on interviews, research, and experience, the language she publishes is her own creation, and she can be held liable for it despite sourcing some pieces from other speakers. Traditional search engines, on the other hand, have historically pointed readers directly to the sourced material, so they are not the “speaker” and are therefore insulated from defamation claims. With generative AI, whose output is likely to be considered original work by courts, that insulation may erode.[ix] Effectively, introducing an AI Overview feature forfeits the statutory bar to claims under § 230 of the CDA that search engines have relied upon to avoid liability for defamation.

But even assuming § 230 poses no statutory bar to defamation claims over a search engine’s libelous AI output, there is disagreement over whether generative AI output is relied upon seriously enough by humans to give rise to a defamation claim. Some believe that AI-generated text should not be interpreted as a reasonably perceived factual claim, and therefore argue that AI-generated content cannot give rise to a claim for defamation.[x] This is where the legitimacy of a result displayed on a popular search engine comes into play. Even if AI-generated text is not ordinarily perceived as a factual claim, displaying it at the top of a search engine’s results page lends it added weight and authority, even for users who might otherwise be wary of AI outputs.[xi]

While no landmark case law on liability for libelous AI output has developed to date, several lawsuits have already been filed over who bears responsibility for libelous content produced by generative AI, including at least one case against a search engine for AI-generated output displayed on a search engine results page.[xii]

Despite the looming potential for consequences, most AI companies have neglected to give attention to the risk of libel created by the operation of generative AI.[xiii] While all AI companies should pay attention to the risks, search engines previously insulated from civil liability by § 230 of the CDA should be especially wary of just how much liability they may be opening themselves up to by including an AI Overview on their results pages.

 

Notes

[i] 47 U.S.C. §230(c)(1).

[ii] Liz Reid, Generative AI in Search: Let Google do the searching for you, Google (May 14, 2024), https://blog.google/products/search/generative-ai-google-search-may-2024/.

[iii] Id.

[iv] Liz Reid, AI Overviews: About last week, Google (May 30, 2024), https://blog.google/products/search/ai-overviews-update-may-2024/.

[v] Matt O’Brien, Google makes fixes to AI-generated search summaries after outlandish answers went viral, The Associated Press (May 30, 2024), https://apnews.com/article/google-ai-overviews-hallucination-33060569d6cc01abe6c63d21665330d8.

[vi] Id.

[vii] Jordan Brannon, Game-Changing Google Search Statistics for 2024, Coalition (Apr. 5, 2024), https://coalitiontechnologies.com/blog/game-changing-google-search-statistics-for-2024.

[viii] Joel Simon, Can AI be sued for defamation?, Colum. Journalism Rev. (Mar. 18, 2024).

[ix] Id.

[x]  See Eugene Volokh, Large Libel Models? Liability For AI Output, 3 J. Free Speech L. 489, 498 (2023).

[xi] Id.

[xii] In July of 2023, Jeffery Battle of Maryland filed suit against Microsoft over an AI-generated search result on Bing accusing him of crimes he did not commit. The plaintiff, Jeffery Battle, is a veteran, business owner, and aerospace professor. When his name is searched online, however, Bing’s AI overview accuses him of crimes committed by a different man, Jeffrey Leon Battle, who pled guilty to seditious conspiracy and levying war against the United States after he tried to join the Taliban in the wake of 9/11. Bing’s search engine results page overview, powered by ChatGPT, combines information about the two men into one. See id. at 492.

[xiii] Id. at 493.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protects consumers from private entities that deceptively or illegally collect biometric data.[3] The new rules overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc., which allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the new reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

Passed in 2008, BIPA was one of the earliest consumer protection laws addressing biometric data collection. At that time, major corporations were piloting finger-scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers but also to prescribe preventive measures for companies to follow. BIPA’s protections are broad: companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected, why it is being collected, and how long they intend to store it.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as securely as a company’s other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce its data privacy protections. Following its passage, waves of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was not mishandled.[10] Plaintiffs could recover for every violation under BIPA, and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Since litigants could collect damages on every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and if found liable, companies would owe, at minimum, $1,000 per violation. The lack of predictability often pushed corporate liability insurers toward settling rather than risking such large payouts.

The 2023 ruling in Cothron implored the legislature to address concerns of disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller or mid-size companies.[14] The new provisions to BIPA target the Court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.

BIPA Reforms Mark a Trend Toward Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need it most. The stakes for mishandling biometric data are much higher than for other collected data. While social security numbers and credit card numbers can be canceled and changed, with varying degrees of ease, most people would be unwilling, and largely unable, to change their faces and fingerprints for the sake of data security.[15] Ongoing and future technology developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane and our own family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovering on a per-person basis instead of a per-violation basis means that if their data is mishandled again after they have already recovered against a company, they are left without redress.

Corporations, however, have been calling for reforms for years, and believe that these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers would also encourage companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may only recover once, they can still recover for actual damages if a company’s actions resulted in more harm than simply violating BIPA’s provisions.  Awards on a per-person basis can still result in hefty settlements or awards that will hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions are applied will be left for state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can only recover from an entity once, are past litigants barred from participating in future actions regarding similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the thankless task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator, https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham (Aug. 2, 2024, 3:13PM).

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


Moderating Social Media Content: A Comparative Analysis of European Union and United States Policy

Jaxon Hill, MJLST Staffer

In the wake of the Capitol Hill uprising, former President Donald Trump had several of his social media accounts suspended.1 Twitter explained that its decision to suspend Trump’s account was “due to the risk of further incitement of violence.”2 Though this decision caught a lot of attention in the public eye, Trump was not the first figure in the political sphere to have his account suspended.3 In response to the social media platforms’ alleged censorship, some states, mainly Florida and Texas, attempted to pass anti-censorship laws limiting the ability of social media companies to moderate content.4

Now, as litigation unfolds for Trump and for the social media companies fighting the Texas and Florida legislation, the age-old question rears its ugly head: what is free speech?5 Do social media companies have a right to limit it? Social media companies are not bound by the First Amendment.6 Thus, barring valid legislation that says otherwise, they are allowed to restrict or moderate content on their platforms. But should they, and, if so, how? How does the answer to these questions differ for public officials on social media? To analyze these considerations, it is worthwhile to look beyond the borders of the United States. This analysis is not meant to presuppose any wrongful conduct on the part of social media companies. Rather, it serves as an opportunity to examine an alternative approach to social media content moderation that could provide more clarity to all interested parties.

In the European Union, social media companies are required to provide clear and specific information whenever they restrict content on their platforms.7 These statements are called “Statements of Reasons” (“SoRs”), and they must include some reference to whatever law the post violated.8 All SoRs are made publicly available to ensure transparency between users and the organization.9

An analysis of these SoRs yielded mixed results as to their efficacy, but it opened the door for potential improvements.10 Ultimately, the analysis showed inconsistencies among the various platforms in how and why they moderate content, but those inconsistencies may create an opening for legislators to clarify social media guidelines.11

Applying this same principle domestically could allow for greater transparency among consumers, social media companies, and the government. By providing a publicly available rationale for any moderation decision, social media companies could continue to remove illegal content without crossing the line into censorship. It is worth noting, though, that this policy likely carries negative financial implications. With states potentially implementing vastly different policies, social media companies may have to increase spending to ensure they are in compliance wherever they operate.12 Nevertheless, absorbing these costs up front may be preferable to “censorship” or “extremism, hatred, [or] misinformation and disinformation.”13

In terms of the specific application to government officials, it may seem that this alternative fails to offer any clarity on the current state of affairs. That assertion has some merit, as government officials have still been able to post harmful social media content in the EU without it being moderated.14 That said, politicians’ engagement with social media is a newer development, both domestically and internationally, so more research is needed to determine best practices. Regardless, increasing transparency should keep social media companies from making moderation choices unfounded in the law.

 

Notes

1 Bobby Allyn & Tamara Keith, Twitter Permanently Suspends Trump, Citing ‘Risk Of Further Incitement Of Violence’, NPR (Jan. 8, 2021), https://www.npr.org/2021/01/08/954760928/twitter-bans-president-trump-citing-risk-of-further-incitement-of-violence.

2 Id.

3 See Christian Shaffer, Deplatforming Censorship: How Texas Constitutionally Barred Social Media Platform Censorship, 55 Tex. Tech L. Rev. 893, 903-04 (2023) (giving an example of both conservative and liberal users that had their accounts suspended).

4 See Daveed Gartenstein-Ross et al., Anti-Censorship Legislation: A Flawed Attempt to Address a Legitimate Problem, Lawfare (July 27, 2022), https://www.lawfaremedia.org/article/anti-censorship-legislation-flawed-attempt-address-legitimate-problem (explaining the Texas and Florida legislation in-depth).

5 See, e.g., Trump v. United States, 219 L. Ed. 2d 991, 1034 (2024) (remanding the case to the lower courts); Moody v. NetChoice, LLC, 219 L. Ed. 2d 1075, 1104 (2024) (remanding the case to the lower courts).

6 Evelyn Mary Aswad, Taking Exception to Assessments of American Exceptionalism: Why the United States Isn’t Such an Outlier on Free Speech, 126 Dick. L. R. 69, 72 (2021).

7 Chiara Drolsbach & Nicolas Pröllochs, Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (2023), https://arxiv.org/html/2312.04431v1/#bib.bib56.

8  Id.

9 Id.

10 Id. This analysis showed that (1) content moderation varies across platforms in number, (2) content moderation is most often applied to videos and text, whereas images are moderated much less, (3) most rule-breaking content is decided via automated means (except X), (4) there is much variation among how often the platforms choose to moderate illegal content, and (5) the primary reasons for moderation include falling out of the scope of the platform’s services, illegal or harmful speech, and sexualized content. Misinformation was very rarely cited as the reason for moderation.

11 Id.

12 Perkins Coie LLP, More State Content Moderation Laws Coming to Social Media Platforms (November 17, 2022), https://perkinscoie.com/insights/update/more-state-content-moderation-laws-coming-social-media-platforms (recommending social media companies to hire counsel to ensure they are complying with various state laws). 

13 See e.g. Shaffer, supra note 3 (detailing the harms of censorship); Gartenstein-Ross, supra note 4 (outlining the potential harms of restrictive content moderation).

14 Goujard et al., Europe’s Far Right Uses TikTok to Win Youth Vote, Politico (Mar. 17, 2024), https://www.politico.eu/article/tiktok-far-right-european-parliament-politics-europe/ (“Without evidence, [Polish far-right politician, Patryk Jaki] insinuated the person who carried out the attack was a migrant”).

 


A Nation of Misinformation? The Attack on the Government’s Efforts to Stop Social Media Misinformation

Alex Mastorides, MJLST Staffer

Whether and how misinformation on social media can be curtailed has long been the subject of public debate. This debate has increasingly gained momentum since the beginning of the COVID-19 pandemic, at a time when uncertainty was the norm and people across the nation scrambled for information to help them stay safe. Misinformation regarding things like the origin of the pandemic, the treatment that should be administered to COVID-positive people, and the safety of the vaccine has been widely disseminated via social media platforms like TikTok, Facebook, Instagram, and X (formerly known as Twitter). The federal government under the Biden Administration has sought to curtail this wave of misinformation, characterizing it as a threat to public health. However, many have accused it of unconstitutional acts of censorship in violation of the First Amendment.

The government cannot directly interfere with the content posted on social media platforms; this right is held by the private companies that own the platforms. Instead, the government’s approach has been to communicate with social media companies, encouraging them to address misinformation that is promulgated on their sites. Per the Biden Administration: “The President’s view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections.”[1]

Lower Courts Have Ruled that the Government May Not Communicate with Social Media Companies for Purposes of Curtailing Online Misinformation

The case of Murthy v. Missouri may result in further clarity from the Supreme Court regarding the powers of the federal government to combat misinformation on social media platforms. The case began in the United States District Court for the Western District of Louisiana when two states–Missouri and Louisiana–along with several private parties filed suit against numerous federal government entities, including the White House and agencies such as the Federal Bureau of Investigation, the Centers for Disease Control & Prevention, and the Cybersecurity & Infrastructure Security Agency.[2] These entities have repeatedly communicated with social media companies, allegedly encouraging them to remove or censor the plaintiffs’ online content due to misinformation about the COVID-19 pandemic (including content discussing “the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story.”)[3] The plaintiffs allege that these government entities “‘coerced, threatened, and pressured [the] social-media platforms to censor [them]’ through private communications and legal threats” in violation of the plaintiffs’ First Amendment rights.[4]

The District Court agreed with the plaintiffs, issuing a preliminary injunction on July 4, 2023 that greatly restricted the entities’ ability to contact social media companies (especially with regard to misinformation).[5] This approach was predicated on the idea that government communications with social media companies about misinformation on their platforms are essentially coercive, forcing the companies to censor speech at the government’s demand. The injunction was appealed to the Fifth Circuit, which narrowed the injunction’s scope to just the White House, the Surgeon General’s office, and the FBI.[6]

Following the Fifth Circuit’s ruling on the preliminary injunction, the government parties to the Murthy case applied for a stay of the injunction with the United States Supreme Court.[7] The government further requested that the Court grant certiorari with regard to the questions presented by the injunction. The government attacked the injunction on three grounds. The first is that the plaintiffs did not have standing to sue under Article III because they did not show that the censoring effect on their posts was “fairly traceable” to the government or “redressable by injunctive relief.”[8]

The second argument is that the conduct at issue does not constitute a First Amendment free speech violation.[9] This claim is based on the state action doctrine, which outlines the circumstances in which the decisions of private entities are considered to be “state action.” If a private social media company’s decisions to moderate content are sufficiently “coerced” by the government, the law treats those decisions as if they were made by the government directly.[10] In that situation, the First Amendment would apply.[11] The Supreme Court has advocated for a strict evaluation of what kind of conduct might be considered “coercive” under this doctrine in an effort to avoid infringing upon the rights of private companies to modulate speech on their platforms.[12] The government’s Application for Stay argues that the Fifth Circuit’s decision is an overly broad application of the doctrine in light of the government’s conduct.[13]

Third, the government maintains that the preliminary injunction is overly broad because it “covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics.”[14]

The Supreme Court Granted the Requested Stay and Granted Certiorari Regarding Three Key Questions

The Supreme Court granted the government’s request for a stay on the preliminary injunction. The Court simultaneously granted certiorari with respect to the questions posed in the government’s Application for Stay: “(1) Whether respondents have Article III standing; (2) Whether the government’s challenged conduct transformed private social-media companies’ content-moderation decisions into state action and violated respondents’ First Amendment rights; and (3) Whether the terms and breadth of the preliminary injunction are proper.”[15]

The Court gave no explanation for its grant of the application for stay or its grant of certiorari. However, Justice Alito, joined by Justice Thomas and Justice Gorsuch, dissented from the grant of the application for stay, arguing that the government had not shown a likelihood that denial of a stay would result in irreparable harm.[16] He contends that the government’s argument about irreparable harm rests on hypotheticals rather than on “concrete” proof that harm is imminent.[17] The dissent further displays a disapproving attitude toward the government’s actions on social media misinformation: “At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”[18]

Justice Alito noted in his dissent that the completion of the Court’s review of the case may not come until spring of next year.[19] The stay on the preliminary injunction will hold until that time.

Notes

[1] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[2] State v. Biden, 83 F.4th 350, 359 (5th Cir. 2023).

[3] Id. at 359.

[4] Id. at 359-60.

[5] Id. at 360.

[6] Id.

[7] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[8] Id. at 2.

[9] Id. at 3.

[10] Id. at 10.

[11] Id.

[12] Id. at 4 (citing Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019)).

[13] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[14] Id. at 5.

[15] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[16] On Application for Stay at 3, Murthy v. Missouri, No. 23A243 (23-411) (October 20, 2023) (Alito, J. dissenting) (citing Hollingsworth v. Perry, 558 U.S. 183, 190 (2010)).

[17] Id. at 3-4.

[18] Id. at 5.

[19] Id. at 2.


Will Moody v. NetChoice, LLC End Social Media?

Aidan Vogelson, MJLST Staffer

At first, the concept that social media’s days may be numbered seems outlandish. Billions of people use social media every day and, historically, social media companies and other internet services have enjoyed virtually unfettered editorial control over how they manage their services. This freedom stems from 47 U.S.C. § 230.[1] Section 230 shields providers from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[2]  In other words, if someone makes an obscene post on Facebook and Facebook removes the post, Facebook cannot be held liable for restricting that speech. Section 230 has long allowed social media companies to self-regulate by removing posts that violate their terms of service, but on September 29, the Supreme Court granted a writ of certiorari in Moody v. NetChoice, LLC, a case that may fundamentally change how social media companies operate by allowing the government, at the state or federal level, to regulate around their § 230 protections.

At issue in Moody is whether the methods social media companies use to moderate their content are permissible under the First Amendment and whether social media companies may be classified as common carriers.[3] Common carriers are services which hold themselves open to the public and transport people or goods.[4] While the term “common carrier” once referred only to public transportation services like railroads and airlines, the definition now encompasses communications services such as radio and telephone companies.[5] Common carriers are subjected to greater regulations, including anti-discrimination regulations, due to their market domination of a necessary public service.[6]  For example, given our reliance on airlines and telephone companies in performing necessary services, common carrier regulations ensure that an airline cannot decline to sell tickets to passengers because of their religious beliefs and a cellular network cannot bar service to customers because it disapproves of the content of their phone conversations. If social media companies are held to be common carriers, the federal government and the state governments could impose regulations on what content those companies restrict.

Moody stems from state efforts to do just that. The Florida legislature passed State Bill 7072 to curtail what it saw as social media censorship of conservative voices.[7] The Florida law allows for significant fines against social media companies that engage in “unfair censorship” or “deplatform” political candidates, as X (formerly Twitter) did when it removed former President Trump from its platform for falsely claiming that the 2020 election was stolen.[8] Florida is not the only state to pursue a common carrier designation for social media. Texas passed a similar law in 2021 (which is currently enjoined and will be addressed alongside Moody in NetChoice, LLC v. Paxton), and the attorney general of Ohio has sued Google, asking the court to declare that Google is a common carrier in order to prevent the company from prioritizing its own products in search results.[9] Ohio v. Google LLC is ongoing, and while the judge partially granted Google’s motion to dismiss, he found that Ohio’s claim that Google is a common carrier is cognizable.[10] Given the increasing propensity with which states are attempting to regulate social media, the Supreme Court’s ruling is necessary to settle this vital issue.

Supporters of classifying social media companies as common carriers argue that social media is simply the most recent advancement in communication and should accordingly be designated a common carrier, just as telephone operators and cellular networks are. They explain that designating social media companies as common carriers is actually consistent with the broad protections of § 230, as regulating speech on a social media site regulates the speech of users, not the speech of the company.[11]

However, they ignore that social media companies rely on First Amendment and § 230 protections when they curate the content on their sites. Without the ability to promote or suppress posts and users, these companies would not be able to provide the personalized content that attracts users, and social media would likely become an even greater hotbed of misinformation and hate speech than it already is. The purpose of § 230 is to encourage the development of a thriving online community, which is why Congress chose to shield internet services from liability for content. Treating social media companies as common carriers would stifle that aim.

It is unclear how the Court will rule. In his concurrence in Biden v. Knight First Amend. Inst., Justice Thomas indicated he may be willing to consider social media companies common carriers.[12] The other justices have yet to write or comment on this issue, but whatever their decision may be, the ramifications of this case will be significant. The conservative politicians behind the Florida and Texas laws have specifically decried what they argue is partisan censorship of conservative views about the Covid-19 pandemic and the 2020 election, yet these very complaints demonstrate the need for social media companies to exercise editorial control over their content. Covid-19 misinformation unquestionably led to unnecessary deaths during the pandemic.[13] Misinformation about the 2020 election led to a violent attempted overthrow of our government. These threats of violence and dangerous misinformation are the harms that Congress created § 230 to avoid. Without the ability of social media companies to curate content, social media will assuredly contain more racism, misinformation, and calls for violence. Given the omnipresence of social media in our modern world, our reliance on it for communication, and the misinformation it spreads, few would argue that social media does not need some form of regulation. But if the Court allows the Florida and Texas laws implicated in Moody and NetChoice to stand, it will pave the way for a patchwork quilt of laws in every state that may render social media unworkable.

Notes

[1] See 47 U.S.C. § 230.

[2] 47 U.S.C. §230(c)(2)(A).

[3] Moody v. Netchoice, LLC, SCOTUSblog, https://www.scotusblog.com/case-files/cases/moody-v-netchoice-llc/.

[4] Alison Frankel, Are Internet Companies ‘Common Carriers’ of Content? Courts Diverge on Key Question, REUTERS, (May 31, 2022, 5:52 PM), https://www.reuters.com/legal/transactional/are-internet-companies-common-carriers-content-courts-diverge-key-question-2022-05-31/.

[5] Id.

[6] Id.

[7] David Savage, Supreme Court Will Decide if Texas and Florida Can Regulate Social Media to Protect ‘Conservative Speech’, LA TIMES (Sept. 29, 2023, 8:33 AM), https://www.msn.com/en-us/news/us/supreme-court-will-decide-if-texas-and-florida-can-regulate-social-media-to-protect-conservative-speech/ar-AA1hrE2s.

[8] Id.

[9] AG Yost Files Landmark Lawsuit to Declare Google a Public Utility, OHIO ATTORNEY GENERAL’S OFFICE (June 8, 2021), https://www.ohioattorneygeneral.gov/Media/News-Releases/June-2021/AG-Yost-Files-Landmark-Lawsuit-to-Declare-Google-a.

[10] Ohio v. Google LLC, No. 21-CV-H-06-0274 (Ohio Misc. 2022), https://fingfx.thomsonreuters.com/gfx/legaldocs/gdpzyeakzvw/frankel-socialmediacommoncarrier–ohioruling.pdf.

[11] John Villasenor, Social Media Companies and Common Carrier Status: A Primer, BROOKINGS INST. (Oct. 27, 2022), https://www.brookings.edu/articles/social-media-companies-and-common-carrier-status-a-primer/.

[12] Biden v. Knight First Amend. Inst., 141 S. Ct. 1220 (2021),  https://www.law.cornell.edu/supremecourt/text/20-197.

[13] Alistair Coleman, ’Hundreds Dead’ Because of Covid-19 Misinformation, BBC (Aug. 12, 2020), https://www.bbc.com/news/world-53755067.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as violations of First Amendment rights and as unnecessary, given private companies’ incentives to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed”.[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but they typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. That image association understandably creates greater harm, as viewers may find the deepfake imagery difficult to dissociate from the victim.

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty in tracking down malicious deepfake uploaders, who do so anonymously. Proposed federal regulation suggests a requirement that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements”.[13] However, opponents view this as useless legislation. Deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first instance.  

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X implementing “Community Notes”, which allows videos created using deepfake technology to remain on the platform, but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


Perhaps Big Tech Regulation Belongs on Congress’s For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a congressional panel for five hours in an effort to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States was threatening a ban unless TikTok’s parent company, ByteDance, sold its stake in the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are allegedly concerned with the possibility of the Chinese government manipulating users’ experience on the platform or threatening the security of the data of its more than 150 million users in the United States.[2] Despite Chew’s testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, preventing Chinese interference on the platform and recommending content to U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

In terms of what’s to come for TikTok’s future in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023 with information on “when TikTok will be banned,” claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] In order to cut off the platform’s access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation “requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern.”[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok was successful in quashing the effort, and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, “To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can’t be addressed in narrower ways. The government hasn’t demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly.”[10]

Despite what Congress may want the public to think, it certainly has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing is stopping U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the main purpose of the Act is to limit anticompetitive conduct by large technology companies, it includes several provisions on protecting the personal data of users of defined “gatekeeper” firms. Under the Act, a gatekeeper is a company that provides services such as online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communications services; operating systems; web browsers; and online advertising services that are gateways for businesses to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertisement services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data without their explicit consent.[13]

The penalties associated with violations of the Act give it some serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper’s total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement laid out in specific articles at some point in the eight preceding years.[14] For any company, not limited to gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles in the Act. Finally, in order to compel any company to comply with specific decisions of the Commission and other articles in the regulation, the Commission may impose periodic penalty payments of up to 5% of the average daily worldwide turnover in the preceding year, per day.[15]

If the U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about preventing the spread of misinformation on the platform, and if they truly believe, as Representative Gus Bilirakis claims to, that it is “literally leading to death” and that “[w]e must save our children from big tech companies” that allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies “to collect or provide data from abroad to the Chinese government in a way that violated local laws…”[17] During his testimony, Chew also argued that TikTok is no different than other social media giants, and has even sought to put stronger safeguards in place as compared to its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on doing so.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id. art. 3, 2022 O.J. L 265/28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[17] David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, overflowing emergency rooms were coupled with doctor shortages.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth visits accounted for less than 1% of health visits prior to the pandemic and increased to as much as 80% of visits at the pandemic’s peak.[4] And while the use of telehealth services has decreased slightly in recent years, it appears to be here to stay. Nowhere has the use of telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still represented over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three mental health outpatient visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, the shift in the way healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth for mental health services is privacy, and there are several reasons for this. The primary concern is that telehealth takes place over the phone or on personal computers, and it is nearly impossible to ensure HIPAA compliance when personal devices are used. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, which allows for more secure data transmission.[8] While some concerns remain, these secure servers have helped mitigate much of the risk.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth for mental health services is more difficult to address: the growing use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] As telehealth has expanded into mental health services, the use of these apps has grown with it. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for our medical records and other individually identifiable health information during the flow of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning that these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, and credit score.[16]

Indeed, according to a 2022 study, the popular therapy app BetterHelp was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

One example of information that does get shared is the intake questionnaire.[19] Customers must fill out an intake questionnaire on BetterHelp, or other therapy apps, in order to be matched with a provider.[20] BetterHelp was found to have shared the answers to these questionnaires with an analytics company, along with each user’s approximate location and device.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share information about how long someone uses the app, how long the therapy sessions are, how long someone spends sending messages on the app, what times someone logs into the app, what times someone sends a message or speaks to their therapists, the approximate location of the user, how often someone opens the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were found to be among the recipients of other information shared from BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA has issued guidance stating that it will exercise enforcement discretion when dealing with mental health apps.[27] This means that if the privacy risk appears low, the FDA will not pursue enforcement against these companies.[28]

Ultimately, if mental health telehealth services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps offer telehealth services much like any healthcare provider that is covered by HIPAA. Knowing that their personal data is shared so freely, many users have lost confidence in mental health apps. In the long run, regulatory oversight would increase the pressure on these companies to show that their services can be trusted, and earning that public trust could ultimately make them more successful.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes, (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth has Played an Outsized Role Meeting Mental Health Needs During the Covid-19 Pandemic, Kaiser Family Foundation, (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, do Mental Wellness Apps Need More Oversight?, Fierce Healthcare, (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information is for Sale, Mad in America, (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes, (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.


A Manhattan Federal Jury Found Trademark Rights to Extend to the Metaverse. Why Should You Care?

Carlisle Ghirardini, MJLST Staffer

Earlier this month, a jury in the federal court for the Southern District of New York reached a verdict regarding a luxury fashion brand’s trademark rights in the Metaverse – the first trial verdict concerning trademarks in non-fungible tokens (NFTs).[1] The suit was brought in January of 2022 by the Parisian fashion giant Hermès after a digital artist created NFTs of the brand’s iconic “Birkin bag” and made a profit selling these “MetaBirkins.”[2]

The key question in the suit came down to whether the NFT was more like art, which would receive First Amendment protection, or a consumer product, which would be subject to trademark infringement liability.[3] The federal jury found the artist’s use of the Birkin name and style to be more commercial than artistic in nature, and, therefore, potentially infringing on Hermès’ trademarks depending on public perception.[4]

Trademark infringement is the unauthorized use of a mark in a way that would confuse a consumer as to the source of the product or service connected to the mark.[5] Surveys and social media evidence in this case showed confusion among NFT consumers as to Hermès’ involvement with the MetaBirkins, which led the jury to find that the use of the mark was infringing and capitalized on the Hermès brand’s goodwill for profit.[6] Hermès was awarded $133,000 in total damages – a small win for the fashion powerhouse, but a huge win for brand owners across many different industries who now know their trademark rights may be protectable in the Metaverse.[7]

I don’t use or understand the Metaverse – why should I care about this decision?

Even for those who don’t know what an NFT is, this decision to extend trademark rights to the Metaverse is still important. First, many brands are now registering trademarks for use in the Metaverse, so when consumers see a brand in this realm, they are more likely to assume the brand is associated with that virtual good or service. If people connect a brand to the unauthorized use of its mark, the brand is at risk of significant damage. For example, if an unauthorized user opened a Metaverse McDonald’s that gave out racy or controversial happy meal prizes, McDonald’s could face serious backlash if its consumers believed it condoned such activities.[8] Although this kind of connection may seem less convincing or harmful for a big brand like McDonald’s, it was enough to compel Hermès to protect the integrity of its brand and its customers.[9] It is not only big brands that can be victims of such infringement, however. While it is easy to understand why someone would take advantage of a more recognized company with greater traffic, this could just as easily happen to smaller brands we know and love. If the little coffee shop chain you frequent is hurt by such virtual infringement, perhaps by a local competitor, it could be run out of business. Connecting a brand in the Metaverse to products or values it is not aligned with could have damaging real-world effects.[10]

Just as brand exposure in the Metaverse can cause harm, it also has the potential to benefit businesses. Virtual brand display, which is cheaper than buying advertising or opening a new brick-and-mortar store, can translate to more business in the real world.[11] Brands have started creating virtual experiences that have driven in-store sales and served as powerful marketing. The shoe and skateboard company Vans, for example, made a Metaverse skatepark in which users could earn points while “boarding” that were redeemable for discounts inside real Vans stores.[12] Chipotle released a burrito-making game that yielded “burrito bucks” for exchange in its actual restaurants.[13] As use of NFTs grows, and as brands recognize the ramifications of the Hermès lawsuit, we will likely continue to see more trademarks used in the Metaverse. Brand owners should keep in mind the dangers of failing to sufficiently protect their trademarks in the virtual space, as well as the potential benefits of using that space strategically.

Notes

[1] Reed Clancy and Alexander Curylo, Verdict Reached in MetaBirkin NFT Case, AIPLA NEWSSTAND (Feb. 9, 2023), https://www.lexology.com/library/detail.aspx?g=0faf6e67-38b4-4add-971d-badd08199c0c&utm_source=Lexology+Daily+Newsfeed&utm_medium=HTML+email+-+Body+-+General+section&utm_campaign=AIPLA+2013+subscriber+daily+feed&utm_content=Lexology+Daily+Newsfeed+2023-02-13&utm_term=.

[2] Muzamil Abdul Huq et al., Hermès Successfully Defends its Trademark in the Metaverse, AIPLA NEWSSTAND (Feb. 9, 2023), https://www.lexology.com/library/detail.aspx?g=6dba3b12-030d-41ff-98c6-1c2aad6468ce&utm_source=Lexology+Daily+Newsfeed&utm_medium=HTML+email+-+Body+-+General+section&utm_campaign=AIPLA+2013+subscriber+daily+feed&utm_content=Lexology+Daily+Newsfeed+2023-02-13&utm_term=.

[3] Id.

[4] Id.

[5] About Trademark Infringement, U.S. PATENT AND TRADEMARK OFFICE, https://www.uspto.gov/page/about-trademark-infringement (last visited Feb. 17, 2023).

[6] Huq et al., Hermès Successfully Defends its Trademark in the Metaverse, AIPLA NEWSSTAND (Feb. 9, 2023).

[7] Id.

[8] Joanna Fantozzi, Why Every Restaurant Operator Should Care About NFTs and the Metaverse Right Now, NATION’S RESTAURANT NEWS (Feb. 25, 2022), https://www.nrn.com/technology/why-every-restaurant-operator-should-care-about-nfts-and-metaverse-right-now.

[9] Zachary Small, Hermès Wins MetaBirkins Lawsuit; Jurors Not Convinced NFTs Are Art, N.Y. TIMES (Feb. 8, 2023), https://www.nytimes.com/2023/02/08/arts/hermes-metabirkins-lawsuit-verdict.html.

[10] Fantozzi, Why Every Restaurant Operator Should Care About NFTs and the Metaverse Right Now, NATION’S RESTAURANT NEWS (Feb. 25, 2022).

[11] Id.

[12] Andrew Hanson, Understanding the Metaverse and its Impact on the Future of Digital Marketing, CUKER (Mar. 29, 2022), https://www.cukeragency.com/understanding-metaverse-and-its-impact-future-digi/.

[13] Dani James, How Retailers are Connecting the Metaverse to Real World Sales and Revenues, RETAIL DIVE (Nov. 14, 2022), https://www.retaildive.com/news/retailers-connecting-metaverse-roblox-real-world-revenue/636209/.


Are Social Media Empires Liable for “Terror” Organizations?

Ray Mestad, MJLST Staffer

The practicality, ease of use, and sheer addictiveness of social media have led to its massive expansion around the world. Approximately 65% of the world uses the internet, and of that group, only 5% does not use social media.[1] That means roughly 60% of the world is on social media, around 4.76 billion people.[2] For most, social media is one of the simplest ways to stay connected and communicate with friends, family, and other people in their circle. But along with the growing use of social media, questions have been raised regarding the potential liability social media corporations may have for the content posted on their platforms. Recently, lawsuits have been filed against companies like Google, Twitter, and Facebook for allegedly allowing groups accused of terrorism to spread their messages or organize on their platforms.[3] The question we are left with is: to what extent are social media companies responsible for posts on their sites that lead to violence?

The family of Nohemi Gonzalez, an American student killed in Paris during a 2015 Islamic State attack, is suing Google for platforming the Islamic State by allowing the group to post videos on YouTube and then recommending those videos to users through Google’s algorithm.[4] And the family of Nawras Alassaf, a Jordanian citizen killed in a 2017 Istanbul Islamic State attack, is suing Twitter, Google, and Facebook for not doing more to prevent the organization from using their platforms as communications and messaging tools.[5] Gonzalez v. Google and Twitter v. Taamneh will both be argued before the Supreme Court this month, February 2023.[6]

The legal issues in these cases are rooted in Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996.[7] 47 U.S.C. § 230 is intended to safeguard freedom of expression by protecting intermediaries that publish information posted by users.[8] Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] This protects web platforms from liability for the content that users post.

Further, Section 230(c)(2) states that “[n]o provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[10] This is known as the “Good Samaritan” provision. Like 230(c)(1), Section 230(c)(2) gives internet providers liability protection, allowing them to moderate content in certain circumstances while shielding them from the free speech claims that might otherwise be brought against them.[11]

The question is whether defendant social media platforms should be shielded from liability for platforming content that has allegedly led to or facilitated violent attacks. In Gonzalez, the Justice Department stated that although the company is protected against claims for hosting ISIS videos, a claim may be brought against Google for YouTube’s targeted recommendations of those videos.[12] And in Taamneh, the 9th Circuit agreed with the plaintiffs that there was room for the claim to go forward under the Anti-Terrorism Act because Twitter had generalized knowledge of the Islamic State’s use of its services.[13]

Section 230 has drawn an eclectic mix of critics and supporters. Although many conservatives and free speech advocates support the protections of Section 230, some conservatives oppose the provision because of perceived restrictions on conservative viewpoints on social media platforms. Prominent Republican Senator Josh Hawley of Missouri, for example, has come out against it, arguing that tech platforms ought to be treated as distributors and lose Section 230 protections.[14] Hawley has introduced legislation opposing Section 230, the Federal Big Tech Tort Act, which would impose liability on tech platforms.[15] On the left, Section 230 is supported by those who believe it protects the voices of the marginalized, who would otherwise be at the whim of tech companies, and opposed by those who fear it enables political violence and hate speech.[16]

The Supreme Court has now granted certiorari in both Gonzalez and Taamneh. In Gonzalez, the plaintiffs argue that Section 230 should not protect Google’s actions because the events occurred outside the US, because the statute is preempted by the Justice Against Sponsors of Terrorism Act (JASTA), and because algorithmic recommendations transform Google/YouTube from an interactive computer service into an information content provider.[17] Google argues that it should be protected by Section 230, particularly 230(c)(1).[18] The 9th Circuit held that although Section 230 does apply abroad, JASTA does not supersede it; instead, the two statutes run parallel to each other. The 9th Circuit further held that the claims based on revenue sharing (rather than ad targeting) should be dismissed. It did not think Google was contributing to terrorism, because Google was motivated by financial enrichment rather than ideology, and it affirmed the dismissal in part because there was not clear enough information about how much support Google had provided to ISIS.[19] The Supreme Court’s decision will bear on questions such as whether algorithmic recommendations fall within Section 230.[20]

In Taamneh, the defendants argued that there was no proximate cause, and the parties also disputed the applicability of Section 230.[21] Unlike in Gonzalez, the Taamneh plaintiffs had adequately stated a claim for aiding and abetting because the social media companies had more explicit knowledge of how their platforms were being used by these groups, and the dismissal was reversed. The Supreme Court’s review of this case will have implications for what it means to support or have a relationship with a group via a social media platform. In both cases, fears regarding the scope of Section 230 have been expressed, which could affect its applicability going forward.[24]

Gonzalez and Taamneh will reach the Supreme Court soon. If Section 230 is restricted, the accessibility and openness that have made the internet what it is today could be curtailed. If it is preserved as is, free expression will remain broadly protected, but more people risk exposure to harms like hate speech or violence. Whichever decision is made, there will be massive implications for what the internet looks like in the future.

Notes

[1] https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/#:~:text=The%20number%20of%20social%20media,growth%20of%20%2B137%20million%20users.

[2] Id.

[3] https://apnews.com/article/islamic-state-group-us-supreme-court-technology-france-social-media-6bee9b5adf33dd15ee64b0d4d4e5ec78

[4] Id.

[5] Id.

[6] https://www.washingtonpost.com/politics/2023/01/03/2023-is-poised-be-landmark-year-tech-legal-bouts/

[7] https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

[8] https://www.eff.org/issues/cda230

[9] https://casetext.com/statute/united-states-code/title-47-telecommunications/chapter-5-wire-or-radio-communication/subchapter-ii-common-carriers/part-i-common-carrier-regulation/section-230-protection-for-private-blocking-and-screening-of-offensive-material

[10] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[11] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[12] https://www.washingtonpost.com/politics/2022/12/09/tech-critics-urge-supreme-court-narrow-section-230/

[13] https://knightcolumbia.org/blog/twitter-v-taamneh-in-the-supreme-court-whats-at-stake

[14] Supra, Washington Post.

[15] https://www.hawley.senate.gov/hawley-files-gonzalez-v-google-amicus-brief-supreme-court-challenging-big-techs-section-230

[16] Supra, Washington Post.

[17] https://www.lawfareblog.com/supreme-court-grants-certiorari-gonzalez-v-google-and-twitter-v-taamneh-overview

[18] Id.

[19] Id.

[20]

[21] Id.

[22] Id.

[23] Id.

[24]Id.