Social Media

Emptying the Nest: Recent Events at Twitter Prompt Class-Action Litigation, Among Other Things

Ted Mathiowetz, MJLST Staffer

You’d be forgiven if you thought the circumstances that led to Elon Musk ultimately acquiring Twitter would be the end of the drama for the social media company. In the past seven months, Musk went from becoming the largest shareholder of the company, to publicly feuding with then-CEO Parag Agrawal, to making an offer to take the company private for $44 billion, to deciding he didn’t want to purchase the company, to being sued by Twitter to force him to complete the deal. Eventually, two weeks before trial was scheduled, Musk purchased the company for the original, agreed-upon price.[1] In the first two-and-a-half weeks since Musk took Twitter private, however, the drama has continued, if not ramped up, with one lawsuit already filed and the specter of additional litigation looming.[2]

There’s been the highly controversial rollout and almost immediate suspension of Twitter Blue—Musk’s idea for increasing the reliability of information on Twitter while simultaneously helping ameliorate Twitter’s financial woes.[3] Essentially, users were able to pay $8 a month for verification, albeit without actually verifying their identity. Instead, their username would remain frozen as it was at the time they paid for the service.[4] Users quickly created fake “verified” accounts for real companies and spread misinformation while armed with the “verified” check mark, duping both the public and investors. For example, a newly created account with the handle “@EliLillyandCo” paid for Twitter Blue and tweeted “We are excited to announce insulin is free now.”[5] Eli Lilly’s actual Twitter account, “@LillyPad,” had to tweet a message apologizing to those “who have been served a misleading message” from the fake account after the pharmaceutical company’s shares dipped around 5% following the tweet.[6] In addition to Eli Lilly, several other companies, like Lockheed Martin, faced similar impersonation.[7] Twitter Blue was quickly suspended in the wake of these viral impersonations, and advertisers have continued to flee the company, affecting its revenue.[8]

Musk also pulled over 50 engineers from Tesla, the vehicle manufacturer of which he is CEO, to help him in his reimagining of Twitter.[9] Among those 50 engineers are the director of software development and the senior director of software engineering.[10] Pulling engineers from his publicly traded company to work on his separately owned private company almost assuredly raises questions about whether he is violating his fiduciary duty to Tesla’s shareholders, especially with Tesla’s share price falling 13% over the last week (as of November 9, 2022).[11]

The bulk of Twitter’s current legal issues stem from Musk’s decision to engage in mass layoffs at Twitter.[12] After his first week in charge, he sent notices to around half of Twitter’s 7,500 employees that they would be laid off, reasoning that cutbacks were necessary because Twitter was losing over $4 million per day.[13] Soon after the layoffs, a group of employees filed suit alleging that Twitter violated the Worker Adjustment and Retraining Notification (WARN) Act by failing to give adequate notice.[14]

The WARN Act, passed in 1988, applies to employers with 100 or more employees[15] and mandates that an “employer shall not order a [mass layoff]” until it gives sixty days’ notice to the state and affected employees.[16] Compliance can also be achieved if, in lieu of notice, the employee is paid for the sixty-day notice period. In Twitter’s case, some employees were offered pay to comply with the sixty-day period after the initial lawsuit was filed,[17] though the lead plaintiff in the class action suit was allegedly laid off on November 1st with no notice or offer of severance pay.[18] Additionally, it appears as though Twitter is now offering severance to employees in return for a signature releasing the company from liability in a WARN action.[19]

With regard to those who have not yet signed releases and were not given notice of a layoff, there is a question of what the penalties to Twitter may be and what potential defenses it may have. Each employee is entitled to “back pay for each day of violation” as well as benefits under their respective plan.[20] Furthermore, the employer is subject to a civil penalty of “not more than $500 for each day of violation” unless it pays its liability to each employee within three weeks of the layoff.[21] One possible defense that Twitter may assert in response to this suit is that of “unforeseeable business circumstances.”[22] Considering Musk’s recent comments that Twitter could be headed for bankruptcy, as well as the debt with which the company was saddled to finance the purchase (reportedly $13 billion, with $1 billion per year in interest payments),[23] there is a chance this defense could suffice. However, an unforeseen circumstance is most strongly indicated when the circumstance is “outside the employer’s control,”[24] something that is arguable given the company’s recent conduct.[25] Additionally, Twitter would have to show that it exercised “commercially reasonable business judgment as would a similarly situated employer,” another burden that may be hard to overcome. In sum, it is quite clear why Twitter is trying to keep this lawsuit from gaining traction by securing release waivers. It is also clear that Twitter has learned its lesson about not offering severance, but it may be wading into other areas of employment law with its recent conduct.[26]
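To make the stakes concrete, the WARN Act’s damages provisions lend themselves to a rough back-of-the-envelope calculation. The sketch below uses hypothetical figures (the $400/day wage and $50/day benefits are illustrative assumptions, not numbers from the litigation):

```python
# Illustrative WARN Act exposure calculation (hypothetical figures,
# not drawn from the Twitter litigation).
NOTICE_PERIOD_DAYS = 60       # 29 U.S.C. § 2102(a) notice window
CIVIL_PENALTY_PER_DAY = 500   # 29 U.S.C. § 2104 per-day civil penalty cap

def warn_exposure(daily_pay, daily_benefits, days_of_violation):
    """Back pay plus benefits owed to one employee for the violation period."""
    return (daily_pay + daily_benefits) * days_of_violation

# Hypothetical: an employee earning $400/day plus $50/day in benefits,
# laid off with no notice at all (60 days of violation).
back_pay = warn_exposure(400, 50, NOTICE_PERIOD_DAYS)
civil_penalty = CIVIL_PENALTY_PER_DAY * NOTICE_PERIOD_DAYS

print(back_pay)       # 27000
print(civil_penalty)  # 30000
```

Multiplied across the thousands of employees laid off, even these modest per-employee assumptions suggest why Twitter would rather secure release waivers than litigate.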

Notes

[1] Timeline of Billionaire Elon Musk’s Bid to Control Twitter, Associated Press (Oct. 28, 2022), https://apnews.com/article/twitter-elon-musk-timeline-c6b09620ee0905e59df9325ed042a609.

[2] Annie Palmer, Twitter Sued by Employees After Mass Layoffs Begin, CNBC (Nov. 4, 2022), https://www.cnbc.com/2022/11/04/twitter-sued-by-employees-after-mass-layoffs-begin.html.

[3] Siladitya Ray, Twitter Blue: Signups for Paid Verification Appear Suspended After Impersonator Chaos, Forbes (Nov. 11, 2022), https://www.forbes.com/sites/siladityaray/2022/11/11/twitter-blue-new-signups-for-paid-verification-appear-suspended-after-impersonator-chaos/?sh=14faf76c385c; see also Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:43 PM), https://twitter.com/elonmusk/status/1589403131770974208?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[4] Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:35 PM), https://twitter.com/elonmusk/status/1589401231545741312?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[5] Steve Mollman, No, Insulin is not Free: Eli Lilly is the Latest High-Profile Casualty of Elon Musk’s Twitter Verification Mess, Fortune (Nov. 11, 2022), https://fortune.com/2022/11/11/no-free-insulin-eli-lilly-casualty-of-elon-musk-twitter-blue-verification-mess/.

[6] Id.; Eli Lilly and Company (@LillyPad), Twitter (Nov. 10, 2022, 3:09 PM), https://twitter.com/LillyPad/status/1590813806275469333?s=20&t=4XvAAidJmNLYwSCcWtd4VQ.

[7] Mollman, supra note 5 (noting that Lockheed Martin’s stock also dipped around 5% after a tweet from a “verified” account claiming arms sales to various countries were being suspended went viral).

[8] Herb Scribner, Twitter Suffers “Massive Drop in Revenue,” Musk Says, Axios (Nov. 4, 2022), https://www.axios.com/2022/11/04/elon-musk-twitter-revenue-drop-advertisers.

[9] Lora Kolodny, Elon Musk has Pulled More Than 50 Tesla Employees into his Twitter Takeover, CNBC (Oct. 31, 2022), https://www.cnbc.com/2022/10/31/elon-musk-has-pulled-more-than-50-tesla-engineers-into-twitter.html.

[10] Id.

[11] Trefis Team, Tesla Stock Falls Post Elon Musk’s Twitter Purchase. What’s Next?, NASDAQ (Nov. 9, 2022), https://www.nasdaq.com/articles/tesla-stock-falls-post-elon-musks-twitter-purchase.-whats-next.

[12] Dominic Rushe, et al., Twitter Slashes Nearly Half its Workforce as Musk Admits ‘Massive Drop’ in Revenue, The Guardian (Nov. 4, 2022), https://www.theguardian.com/technology/2022/nov/04/twitter-layoffs-elon-musk-revenue-drop.

[13] Id.

[14] Phil Helsel, Twitter Sued Over Short-Notice Layoffs as Elon Musk’s Takeover Rocks Company, NBC News (Nov. 4, 2022), https://www.nbcnews.com/business/business-news/twitter-sued-layoffs-days-elon-musk-purchase-rcna55619.

[15] 29 U.S.C. § 2101(a)(1).

[16] 29 U.S.C. § 2102(a).

[17] On Point, Boston Labor Lawyer Discusses her Class Action Lawsuit Against Twitter, WBUR Radio Boston (Nov. 10, 2022), https://www.wbur.org/radioboston/2022/11/10/shannon-liss-riordan-musk-class-action-twitter-suit (discussing recent developments in the case with attorney Shannon Liss-Riordan).

[18] Complaint at 5, Cornet et al. v. Twitter, Inc., Docket No. 3:22-cv-06857 (N.D. Cal. 2022).

[19] Id. at 6 (outlining previous attempts by another Musk company, Tesla, to get around WARN Act violations by tying severance agreements to waiver of litigation rights); see also On Point, supra note 17.

[20] 29 U.S.C. § 2104.

[21] Id.

[22] 20 C.F.R. § 639.9 (2012).

[23] Hannah Murphy, Musk Warns Twitter Bankruptcy is Possible as Executives Exit, Financial Times (Nov. 10, 2022), https://www.ft.com/content/85eaf14b-7892-4d42-80a9-099c0925def0.

[24] Id.

[25] See, e.g., Murphy, supra note 23.

[26] See Pete Syme, Elon Musk Sent a Midnight Email Telling Twitter Staff to Commit to an ‘Extremely Hardcore’ Work Schedule – or Get Laid off with Three Months’ Severance, Business Insider (Nov. 16, 2022), https://www.businessinsider.com/elon-musk-twitter-staff-commit-extremely-hardcore-work-laid-off-2022-11; see also Jaclyn Diaz, Fired by Tweet: Elon Musk’s Latest Actions are Jeopardizing Twitter, Experts Say, NPR (Nov. 17, 2022), https://www.npr.org/2022/11/17/1137265843/elon-musk-fires-employee-by-tweet (discussing firing of an employee for correcting Musk on Twitter and potential liability for a retaliation claim under California law).

 


Twitter Troubles: The Upheaval of a Platform and Lessons for Social Media Governance

Gordon Unzen, MJLST Staffer

Elon Musk’s Tumultuous Start

On October 27, 2022, Elon Musk officially completed his $44 billion deal to purchase the social media platform, Twitter.[1] When Musk’s bid to buy Twitter was initially accepted in April 2022, proponents spoke of a grand ideological vision for the platform under Musk. Musk himself emphasized the importance of free speech to democracy and called Twitter “the digital town square where matters vital to the future of humanity are debated.”[2] Twitter co-founder Jack Dorsey called Twitter the “closest thing we have to a global consciousness,” and expressed his support of Musk: “I trust his mission to extend the light of consciousness.”[3]

Yet only two weeks into Musk’s rule, the tone has quickly shifted towards doom, with advertisers fleeing the platform, talk of bankruptcy, and the Federal Trade Commission (“FTC”) expressing “deep concern.” What happened?

Free Speech or a Free for All?

Critics were quick to read Musk’s pre-purchase remarks about improving ‘free speech’ on Twitter to mean he would change how the platform would regulate hate speech and misinformation.[4] This fear was corroborated by the stream of racist slurs and memes from anonymous trolls ‘celebrating’ Musk’s purchase of Twitter.[5] However, Musk’s first major change to the platform came in the form of a new verification service called ‘Twitter Blue.’

Musk took control of Twitter during a substantial pullback in advertisement spending in the tech industry, a problem that has impacted other tech giants like Meta, Spotify, and Google.[6] His solution was to seek revenue directly from consumers through Twitter Blue, a program where users could pay $8 a month for verification with the ‘blue check’ that previously served to tell users whether an account of public interest was authentic.[7] Musk claimed this new system would give ‘power to the people,’ which proved correct in an ironic and unintended fashion.

Twitter Blue allowed users to pay $8 for a blue check and impersonate politicians, celebrities, and company media accounts—which is exactly what happened. Musk, Rudy Giuliani, O.J. Simpson, LeBron James, and even the Pope were among the many impersonated by Twitter users.[8] Companies received the same treatment, with an account impersonating Eli Lilly and Company writing “We are excited to announce insulin is free now,” causing its stock to drop 2.2%.[9] This has led advertising firms like Omnicom and IPG’s Mediabrands to conclude that brand safety measures are currently impeded on Twitter, and advertisers have subsequently begun to announce pauses on ad spending.[10] Musk responded by suspending Twitter Blue only 48 hours after it launched, but the damage may already be done for Twitter, a company whose revenue was 90% ad sales in the second quarter of this year.[11] During his first mass call with employees, Musk said he could not rule out bankruptcy in Twitter’s future.[12]

It also remains to be seen whether the Twitter impersonators will escape civil liability under theories of defamation[13] or misappropriation of name or likeness,[14] or criminal liability under state identity theft[15] or false representation of a public employee statutes,[16] which have been legal avenues used to punish instances of social media impersonation in the past.

FTC and Twitter’s Consent Decree

On the first day of Musk’s takeover of Twitter, he immediately fired the CEO, CFO, head of legal policy, trust and safety, and general counsel.[17] By the following week, mass layoffs were in full swing, with 3,700 Twitter jobs, or 50% of its total workforce, to be eliminated.[18] This move has already landed Twitter in legal trouble for potentially violating the California WARN Act, which requires 60 days’ advance notice of mass layoffs.[19] More ominously, however, these layoffs, as well as the departure of the company’s head of trust and safety, chief information security officer, chief compliance officer, and chief privacy officer, have attracted the attention of the FTC.[20]

In 2011, Twitter entered a consent decree with the FTC in response to data security lapses, requiring the company to establish and maintain a program ensuring that its new features do not misrepresent “the extent to which it maintains and protects the security, privacy, confidentiality, or integrity of nonpublic consumer information.”[21] Twitter also agreed to implement two-factor authentication without collecting personal data, limit employee access to information, provide training for employees working on user data, designate executives to be responsible for decision-making regarding sensitive user data, and undergo a third-party audit every six months.[22] Twitter was most recently fined $150 million in May for violating the consent decree.[23]

With many of Twitter’s former executives gone, the company may be at an increased risk for violating regulatory orders and may find itself lacking the necessary infrastructure to comply with the consent decree. Musk also reportedly urged software engineers to “self-certify” legal compliance for the products and features they deployed, which may already violate the court-ordered agreement.[24] In response to these developments, Douglas Farrar, the FTC’s director of public affairs, said the commission is watching “Twitter with deep concern” and added that “No chief executive or company is above the law.”[25] He also noted that the FTC had “new tools to ensure compliance, and we are prepared to use them.”[26] Whether and how the FTC will employ regulatory measures against Twitter remains uncertain.

Conclusions

The fate of Twitter is by no means set in stone—in two weeks the platform has lost advertisers, key employees, and some degree of public legitimacy. However, at the speed Musk has moved so far, the company could well be in a very different position in two more weeks. Beyond the immediate consequences to the company, Musk’s leadership of Twitter illuminates some important lessons about social media governance, both internal and external to a platform.

First, social media is foremost a business and not the ‘digital town square’ Musk imagines. Twitter’s regulation of hate speech and verification of public accounts served an important role in maintaining community standards, promoting brand safety for advertisers, and protecting users. Loosening regulatory control runs a great risk of delegitimizing a platform that corporations and politicians alike took seriously as a tool for public communication.

Second, social media stability is important to government regulators, and further oversight may not be far off on the horizon. Musk is setting a precedent and bringing a spotlight to the dangers of a destabilized social media platform and the risks this may pose to data privacy, efforts to curb misinformation, and even the stock market. In addition to the FTC, Senate Majority Whip and chair of the Senate Judiciary Committee Dick Durbin has already commented negatively on the Twitter situation.[27] Musk may have given powerful regulators, and even legislators, the opportunity they were looking for to impose greater control over social media. For better or worse, Twitter’s present troubles could lead to a new era of government involvement in digital social spaces.

Notes

[1] Adam Bankhurst, Elon Musk’s Twitter Takeover and the Chaos that Followed: The Complete Timeline, IGN (Nov. 11, 2022), https://www.ign.com/articles/elon-musks-twitter-takeover-and-the-chaos-that-followed-the-complete-timeline.

[2] Monica Potts & Jean Yi, Why Twitter is Unlikely to Become the ‘Digital Town Square’ Elon Musk Envisions, FiveThirtyEight (Apr. 29, 2022), https://fivethirtyeight.com/features/why-twitter-is-unlikely-to-become-the-digital-town-square-elon-musk-envisions/.

[3] Bankhurst, supra note 1.

[4] Potts & Yi, supra note 2.

[5] Drew Harwell et al., Racist Tweets Quickly Surface After Musk Closes Twitter Deal, Washington Post (Oct. 28, 2022), https://www.washingtonpost.com/technology/2022/10/28/musk-twitter-racist-posts/.

[6] Bobby Allyn, Elon Musk Says Twitter Bankruptcy is Possible, But is That Likely?, NPR (Nov. 12, 2022), https://www.wglt.org/2022-11-12/elon-musk-says-twitter-bankruptcy-is-possible-but-is-that-likely.

[7] Id.

[8] Keegan Kelly, We Will Never Forget These Hilarious Twitter Impersonations, Cracked (Nov. 12, 2022), https://www.cracked.com/article_35965_we-will-never-forget-these-hilarious-twitter-impersonations.html; Shirin Ali, The Parody Gold Created by Elon Musk’s Twitter Blue, Slate (Nov. 11, 2022), https://slate.com/technology/2022/11/parody-accounts-of-twitter-blue.html.

[9] Ali, supra note 8.

[10] Mehnaz Yasmin & Kenneth Li, Major Ad Firm Omnicom Recommends Clients Pause Twitter Ad Spend – Memo, Reuters (Nov. 11, 2022), https://www.reuters.com/technology/major-ad-firm-omnicom-recommends-clients-pause-twitter-ad-spend-verge-2022-11-11/; Rebecca Kern, Top Firm Advises Pausing Twitter Ads After Musk Takeover, Politico (Nov. 1, 2022), https://www.politico.com/news/2022/11/01/top-marketing-firm-recommends-suspending-twitter-ads-with-musk-takeover-00064464.

[11] Yasmin & Li, supra note 10.

[12] Katie Paul & Paresh Dave, Musk Warns of Twitter Bankruptcy as More Senior Executives Quit, Reuters (Nov. 10, 2022), https://www.reuters.com/technology/twitter-information-security-chief-kissner-decides-leave-2022-11-10/.

[13] Dorrian Horsey, How to Deal With Defamation on Twitter, Minc, https://www.minclaw.com/how-to-report-slander-on-twitter/ (last visited Nov. 12, 2022).

[14] Maksim Reznik, Identity Theft on Social Networking Sites: Developing Issues of Internet Impersonation, 29 Touro L. Rev. 455, 456 n.12 (2013), https://digitalcommons.tourolaw.edu/cgi/viewcontent.cgi?article=1472&context=lawreview.

[15] Id. at 455.

[16] Brett Snider, Can a Fake Twitter Account Get You Arrested?, FindLaw Blog (April 22, 2014), https://www.findlaw.com/legalblogs/criminal-defense/can-a-fake-twitter-account-get-you-arrested/.

[17] Bankhurst, supra note 1.

[18] Sarah Perez & Ivan Mehta, Twitter Sued in Class Action Lawsuit Over Mass Layoffs Without Proper Legal Notice, TechCrunch (Nov. 4, 2022), https://techcrunch.com/2022/11/04/twitter-faces-a-class-action-lawsuit-over-mass-employee-layoffs-with-proper-legal-notice/.

[19] Id.

[20] Natasha Lomas & Darrell Etherington, Musk’s Lawyer Tells Twitter Staff They Won’t be Liable if Company Violates FTC Consent Decree, TechCrunch (Nov. 11, 2022), https://techcrunch.com/2022/11/11/musks-lawyer-tells-twitter-staff-they-wont-be-liable-if-company-violates-ftc-consent-decree/.

[21] Id.

[22] Scott Nover, Elon Musk Might Have Already Broken Twitter’s Agreement With the FTC, Quartz (Nov. 11, 2022), https://qz.com/elon-musk-might-have-already-broken-twitter-s-agreement-1849771518.

[23] Tom Espiner, Twitter Boss Elon Musk ‘Not Above the Law’, Warns US Regulator, BBC (Nov. 11, 2022), https://www.bbc.com/news/business-63593242.

[24] Nover, supra note 22.

[25] Espiner, supra note 23.

[26] Id.

[27] Kern, supra note 10.


It’s Social Media – A Big Lump of Unregulated Child Influencers!

Tessa Wright, MJLST Staffer

If you’ve been on TikTok lately, you’re probably familiar with the Corn Kid. Seven-year-old Tariq went viral on TikTok in August after appearing in an 85-second video clip professing his love of corn.[1] Due to his accidental viral popularity, Tariq has become a social media celebrity. He has been featured in content collaborations with notable influencers, starred in a social media ad for Chipotle, and even created an account on Cameo.[2] At seven years old, he has become a child influencer, a minor celebrity, and a major financial contributor to his family. Corn Kid is not alone. A growing number of children are rising to fame via social media. In fact, child influencers today have created an eight-billion-dollar social media advertising industry, with some children generating as much as $26 million a year through advertising and sponsored content.[3] Yet, despite this rapidly growing industry, there are still very few regulations protecting the financial earnings of child entertainers in the social media industry.[4]

What Protects Children’s Financial Earnings in the Entertainment Industry?

Normally, children in the entertainment industry have their financial earnings protected under the California Child Actor’s Bill (also known as the Coogan Law).[5] The Coogan Law was passed in 1939 by the state of California in response to the plight of Jackie Coogan.[6] Coogan was a child star who earned millions of dollars as a child actor, only to discover upon reaching adulthood that his parents had spent almost all of his money.[7] Over the years the law has evolved, and today it provides that earnings by minors in the entertainment industry are the property of the minor.[8] Specifically, the California law creates a fiduciary relationship between the parent and child and requires that 15% of all earnings be set aside in a blocked trust.[9]
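The mechanics of the set-aside are simple arithmetic. The sketch below illustrates the 15% split using a hypothetical $100,000 contract (the dollar figure is an assumption for illustration, not drawn from any actual case):

```python
# Hypothetical Coogan Law set-aside (Cal. Fam. Code § 6752):
# 15% of a minor's gross earnings must go into a blocked trust
# the minor controls upon reaching adulthood.
COOGAN_SET_ASIDE_RATE = 0.15

def coogan_split(gross_earnings):
    """Return (blocked_trust_deposit, remainder) for a minor's earnings."""
    trust = round(gross_earnings * COOGAN_SET_ASIDE_RATE, 2)
    return trust, gross_earnings - trust

trust, remainder = coogan_split(100_000)  # hypothetical $100,000 contract
print(trust)      # 15000.0
print(remainder)  # 85000.0
```

Note that even under the Coogan Law the remaining 85% stays with the family, which is why the fiduciary-duty requirement matters as much as the trust deposit itself.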

What Protections do Child Social Media Stars Have? 

Social media stars are not legally considered to be actors, so the Coogan Law does not apply to their earnings.[10] So, are there other laws protecting these social media stars? The short answer is no.

Technically, there are laws that prevent children under the age of 13 from using social media apps, which in theory should protect the youngest of social media stars.[11] However, even though these social media platforms claim that they require users to be at least thirteen years old to create accounts, there are still ways children end up working in content creation jobs.[12] The most common scenario is that parents make content in which they feature their children.[13] These “family vloggers” are a popular genre of YouTube channel in which parents frequently feature their children and share major life events; sometimes they even feature the birth of their children. Often these parents also make separate social media accounts for their children, which are technically run by the parents and are therefore allowed despite the age restrictions.[14] There are no restrictions or regulations preventing parents from making social media accounts for their children, and therefore no restriction on the parents’ collection of the income generated from such accounts.[15]

New Attempts at Legislation 

So far, there has been very little intervention by lawmakers. The state of Washington has attempted to turn the tide with a proposed state bill that would protect children working in social media.[16] The bill was introduced in January 2022 and, if passed, would offer protection to children on social media who live in the state of Washington.[17] Specifically, the bill’s introduction reads, “Those children are generating interest in and revenue for the content, but receive no financial compensation for their participation. Unlike in child acting, these children are not playing a part, and lack legal protections.”[18] The bill would help protect the finances of these child influencers.

Additionally, California passed a similar bill in 2018.[19] Unfortunately, it only applies to videos that are longer than one hour and involve direct payment to the child.[20] This means that a child Twitch streamer who, for example, posts a three-hour livestream and receives direct donations during the stream would be covered by the bill; however, a child featured in a 10-minute YouTube video or a 15-second TikTok would not be financially protected under the bill.

The Difficulties in Regulating Social Media Earnings for Children

Currently, France is the only country in the world with regulations for children working in the social media industry.[21] There, children working in the entertainment industry (whether as child actors, models, or social media influencers) have to register for a license, and their earnings must be put into a dedicated bank account that they can access when they turn sixteen.[22] However, the legislation is still new, and it is too soon to see how well these regulations will work.

The problem with creating legislation in this area is attributable to the ad hoc nature of making social media content.[23] It is not realistic to simply extend existing legislation applicable to child entertainers to child influencers,[24] as their work differs greatly. Moreover, it is extremely difficult to regulate an industry in which influencers can post content from any location at any time, and in which parents may be the ones filming and posting the videos of their children in order to boost their household income. For example, it would be hard to draw a clear line between when a child is being filmed casually for a home video and when it is being done for work, and when an entire family is featured in a video it would be difficult to determine how much money is attributable to each family member.

Is There a Solution?

While there is no easy solution, changing the current regulations or creating new ones is the clearest route. Traditionally, tech platforms have taken the view that governments should make rules and that the platforms will then enforce them.[25] All major social media sites have their own safety rules, but the extent to which they are responsible for the oversight of child influencers is not clearly defined.[26] However, if any new regulation is going to be effective, big tech companies will need to get involved. As it stands today, parents have found loopholes that allow them to feature their child stars on social media without violating age restrictions. To close such loopholes, it will be essential that big tech companies work in collaboration with legislators to create technical features that prevent them.

The hope is that one day, children like Corn Kid will have total control of their financial earnings, and will not reach adulthood only to discover their money has already been spent by their parents or guardians. The future of entertainment is changing every day, and the laws need to keep up. 

Notes

[1] Madison Malone Kircher, N.Y. Times (Sept. 21, 2022), https://www.nytimes.com/2022/09/21/style/corn-kid-tariq-tiktok.html.

[2] Id.

[3] Marina Masterson, When Play Becomes Work: Child Labor Laws in the Era of ‘Kidfluencers’, 169 U. Pa. L. Rev. 577, 577 (2021).

[4] Coogan Accounts: Protecting Your Child Star’s Earnings, Morgan Stanley (Jan. 10, 2022), https://www.morganstanley.com/articles/trust-account-for-child-performer.

[5] Coogan Law, https://www.sagaftra.org/membership-benefits/young-performers/coogan-law (last visited Oct. 16, 2022).

[6] Id.

[7] Id.

[8] Cal. Fam. Code § 6752.

[9] Id.

[10] Morgan Stanley, supra note 4.

[11] Sapna Maheshwari, Online and Making Thousands, at Age 4: Meet the Kidfluencers, N.Y. Times (Mar. 1, 2019), https://www.nytimes.com/2019/03/01/business/media/social-media-influencers-kids.html.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Katie Collins, TikTok Kids Are Being Exploited Online, but Change is Coming, CNET (Aug. 8, 2022 9:00 AM), https://www.cnet.com/news/politics/tiktok-kids-are-being-exploited-online-but-change-is-coming/.

[17] Id.

[18] Id.

[19] E.W. Park, Child Influencers Have No Child Labor Regulations. They Should, La Voz News (May 16, 2022), https://lavozdeanza.com/opinions/2022/05/16/child-influencers-have-no-child-labor-regulations-they-should/.

[20] Id.

[21] Collins, supra note 16.

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Collins, supra note 16.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. Meta’s “Pixel” tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may violate a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel’s use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] In theory, the tracked actions include purchases, registrations, cart additions, searches, and more. This information can then be used by website owners to better understand user behavior. Website owners can also spend ad budgets more efficiently by tailoring ads to relevant users and finding more receptive users based on Pixel’s analysis.[5]

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between those tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to social profiles on Meta’s flagship website, Facebook. Whether intentionally or accidentally, Pixel has been found to have grabbed personal information beyond the simple user web actions it was supposed to be limited to and connected it to Facebook profiles.[6]
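To see why that second difference matters, it helps to look at the general mechanics of a tracking pixel. The sketch below is a conceptual illustration of the technique, not Meta’s actual Pixel code; the endpoint, parameter names, and cookie ID are all hypothetical:

```python
# Conceptual sketch of how a pixel-style tag reports an event.
# This illustrates the general tracking-pixel technique only; it is
# NOT Meta's actual Pixel implementation, and every name below
# (endpoint, parameters, cookie ID) is a made-up placeholder.
from urllib.parse import urlencode

def pixel_request_url(endpoint, event, page, user_cookie_id):
    """Build the request URL a pixel-style tag would fire on a user action.

    The event payload alone (a "Search" on a hospital site) is ordinary
    anonymous analytics; pairing it with a cookie that can be matched to
    a logged-in social profile is what makes it personally identifiable.
    """
    params = {"ev": event, "dl": page, "uid": user_cookie_id}
    return f"{endpoint}?{urlencode(params)}"

url = pixel_request_url(
    "https://tracker.example/collect",                # hypothetical endpoint
    "Search",
    "https://example-hospital.org/find-a-doctor",     # hypothetical page
    "cookie-abc123",                                  # hypothetical cookie ID
)
print(url)
```

The privacy question in the Meta litigation turns on that last parameter: an analytics provider that discards or never receives the identifier sees only aggregate behavior, while one that can join it to a named account profile can build the kind of identifiable record at issue in the hospital and video cases below.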

Pixel and Patient Healthcare Information

It’s estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] However, that number may decrease after Meta’s recent data privacy issues. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel improperly retrieved private patient information, including names, conditions, email addresses, and more. Meta then targeted hospital website users with ads on Facebook, using the information Pixel collected from hospital websites and patient portals and matching it with users’ Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and then added Pixel code to its website to evaluate the campaign’s effectiveness. Pixel proceeded to send private and identifiable user information to Meta.[9] Meta’s co-defendants in another lawsuit, the University of California San Francisco and Dignity Health, were accused of illegally gathering patient information via Pixel code on their patient portal, with private medical information then distributed to Meta. At some point, pharmaceutical companies are alleged to have gained access to this medical information and sent out targeted ads based on it.[10] That is just one example – all in all, more than 1 million patients have been affected by this Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions against Meta for violations of the Video Privacy Protection Act (the “VPPA”). The VPPA was enacted in the 1980s to cover videotapes and similar audio-visual materials. No longer confined to the rental store, the statute has taken on a much broader reach with the growth of the internet.

These class actions stem from use of the Pixel tool to take video user data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law, and many more. The classes allege that collecting video viewing activity in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than recording it anonymously), so that Pixel users could target their ads at the viewers, violated the VPPA. Notably, under the VPPA, Meta is not the defendant in these lawsuits; the defendants are the companies that shared user information with Meta.[12]

Causes of Action

Data privacy is a relatively new area that is scarcely litigated by the federal government, owing to the lack of federal statutes protecting consumer privacy. Because of that, the number of civil data protection litigants can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act, was enacted in 1996 to protect patient information from disclosure without patient consent, but HIPAA actions must be initiated by the U.S. government. In the patient portal cases, claimants are therefore suing Meta under consumer protection and other privacy laws, such as the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and the Comprehensive Computer Data Access and Fraud Act.[14] These laws allow individuals to sue directly, whereas enforcement under federal statutes like HIPAA may move slowly, or not at all. And in the video tracking cases, litigants may only sue the video provider, not Meta itself.[15] Despite that wrinkle of benefit to Meta, involvement in more privacy disputes is not ideal for the tech giant, as it may hurt the trustworthiness of Meta’s platforms in the eyes of the public.

Possible Outcomes

If the defendants are found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run into the millions as well, and would vary from state to state because the claims are brought under a variety of acts.[17] In the UCSF case specifically, class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, those terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent must come from forms separate from online user agreements.[19] If more lawsuits emerge from Pixel’s conduct, it is quite possible that companies will move away from web analytics tools to avoid potential liability. It remains to be seen whether the convenience and utility of Meta Pixel stops being worth the risk such tools present to websites.
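To see how quickly per-member statutory damages compound, a back-of-envelope calculation helps; the class sizes below are hypothetical illustrations, not figures from any of the pending suits:

```python
# VPPA statutory damages per class member, per note 16 of this post.
VPPA_STATUTORY_DAMAGES = 2_500  # dollars

def total_exposure(class_size: int, per_member: int = VPPA_STATUTORY_DAMAGES) -> int:
    # Aggregate exposure is simply class size times the per-member award.
    return class_size * per_member

# Even a modest hypothetical class reaches nine figures.
print(f"${total_exposure(100_000):,}")    # $250,000,000
# At the scale of the reported Pixel breach (over 1 million patients),
# a comparable per-member award would reach ten figures.
print(f"${total_exposure(1_000_000):,}")  # $2,500,000,000
```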

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel’s Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the Latest Hit With a Class Action Over Meta’s Alleged Patient Data Mining, Fierce Healthcare (Aug. 12, 2022, 10:30 AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




After Hepp: Section 230 and State Intellectual Property Law

Kelso Horne IV, MJLST Staffer

Although hardly a competitive arena, Section 230(c) of the Communications Decency Act (the “CDA”) is almost certainly the best known of all telecommunications laws in the United States. Shielding Internet Service Providers (“ISPs”) and websites from liability for the content published by their users, § 230(c)’s policy goals are laid out succinctly, if a bit grandly, in § 230(a) and § 230(b).[1] These two sections speak about the internet as a force for economic and social good, characterizing it as a “vibrant and competitive free market” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”[2] But where §§ 230(a),(b) both speak broadly of a utopian vision for the internet, and (c) grants websites substantial privileges, § 230(e) gets down to brass tacks.[3]

CDA: Goals and Text

The CDA lays out certain limitations on the shield protections provided by § 230(c).[4] Among these is § 230(e)(2), which states in full: “Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.”[5] This particular section, despite its seeming clarity, has been the subject of litigation for over a decade, and in 2021 a clear circuit split opened between the 9th and 3rd Circuits over how this short sentence applies to state intellectual property laws. The 9th Circuit takes the position that the policy portions of § 230, stated in §§ 230(a),(b), should control, and that state intellectual property claims are therefore barred. The 3rd Circuit takes the position that the plain text of § 230(e)(2) unambiguously allows state intellectual property claims.

Who Got There First? Lycos and Perfect 10

In Universal Commc’n Sys., Inc. v. Lycos, Inc., the 1st Circuit faced this question obliquely; the court assumed, without deciding, that § 230 did not immunize the defendant from state intellectual property claims, and dismissed the claims on other grounds.[6] Consequently, when the 9th Circuit released its opinion in Perfect 10, Inc. v. CCBILL LLC only one month later, it felt free to craft its own rule on the issue.[7] Spanning only a few short paragraphs, the court’s decision on state intellectual property rights is neatly summarized in a single sentence: “As a practical matter, inclusion of rights protected by state law within the ‘intellectual property’ exemption would fatally undermine the broad grant of immunity provided by the CDA.”[8] The court’s analysis in Perfect 10 was based almost entirely on what allowing state intellectual property claims would do to the policy goals stated in §§ 230(a),(b), and did not attempt, or rely on, a particularly thorough reading of § 230(e)(2). The court looked at both the stated policy and the text of § 230(e)(2) and attempted to reconcile them, and it clearly saw the problems that could arise from allowing plaintiffs to bring claims through fifty different state systems against websites and ISPs for the postings of their users. Given the date of the CDA’s drafting, however, that insight may owe more to hindsight than to the drafters’ foresight.

Hepp Solidifies a Split

Perfect 10 remained the authoritative appellate case on the issue of the CDA and state intellectual property law until 2021, when the 3rd Circuit stepped into the ring.[9] In Hepp v. Facebook, Pennsylvania newsreader Karen Hepp sued Facebook for hosting advertisements, promoting a dating website and other services, that used her likeness without her permission.[10] In a much longer analysis, the 3rd Circuit held that the 9th Circuit’s interpretation, which Facebook argued for, “stray[ed] too far from the natural reading of § 230(e)(2).”[11] Instead, the 3rd Circuit argued for a closer reading of the text of § 230(e)(2), which it said aligned with a more balanced set of policy goals, including room for state intellectual property law.[12] The court also addressed the structural arguments Facebook relied on, mostly examining how narrow the other exceptions in § 230(e) are; the majority concluded that this evidence “cuts both ways,” since Congress easily cabined meanings when it wanted to.[13]

The dissent in Hepp agreed with the 9th Circuit that the policy goals stated in §§ 230(a),(b) should be considered controlling.[14] It also noted two cases in other circuits in which courts had shown hesitancy toward allowing state intellectual property claims to proceed under the CDA, although both claims were dismissed on other grounds.[15] Perhaps unsurprisingly, the dissent found the structural arguments compelling, and in Facebook’s favor.[16] With the circuits now definitively split over the meaning of § 230(e)(2), it would certainly seem that the Supreme Court, or Congress, must step in and provide a clear standard.

What Next? Analyzing the CDA

Despite being a pair of decisions ostensibly focused on parsing what Congress intended when it drafted § 230, both Perfect 10 and Hepp omit any citation to legislative history in discussing the § 230(e)(2) issue. That is not as odd as it seems at first glance. The Communications Decency Act is large, over a hundred pages in length, and § 230 makes up about a page and a half.[17] Most of the legislative reports published after the CDA was passed focused instead on its landmark provisions which attempted, mostly unsuccessfully, to regulate obscene materials on the internet.[18] Section 230 gets a passing mention, less than a page, some of which is taken up with assurances that it would not interfere with civil liability for those engaged in “cancelbotting,” a controversial anti-spam method of the Usenet era.[19] It is perhaps unfair to say that § 230 was an afterthought, but lawmakers likely did not understand its importance at the time of passage. That history counsels against the 9th Circuit’s analysis, which seemingly credits the CDA’s drafters with an unusually high degree of foresight into § 230’s use by internet companies over a decade later.

Indeed, although one may wish that Congress had drafted it differently, the text of § 230(e)(2) is clear, and the inclusion of “any” as a modifier of “law” makes it difficult to argue that state intellectual property claims are not exempted from the general grant of immunity in § 230.[20] Congressional inaction should not invite courts to step in and fashion what they believe would be a better Act; indeed, the 3rd Circuit majority in Hepp may be correct that Congress did in fact want state intellectual property claims to stand. Either way, there is no easy judicial answer: following the clear text of the section undermines what many in the e-commerce industry see as an important protection, while following the purported vision of the Act stated in §§ 230(a),(b) strips away a protection that victims of infringement may use to defend themselves. The circuit split has made clear that this is a question on which reasonable jurists can disagree. Congress, as an elected body, is in the best position to balance these equities, and it should use its lawmaking powers to definitively resolve the issue.

Notes

[1] 47 U.S.C. § 230.

[2] Id.

[3] 47 U.S.C. § 230(e).

[4] Id.

[5] 47 U.S.C. § 230(e)(2).

[6] Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413 (1st Cir. 2007) (“UCS’s remaining claim against Lycos was brought under Florida trademark law, alleging dilution of the ‘UCSY’ trade name under Fla. Stat. § 495.151. Claims based on intellectual property laws are not subject to Section 230 immunity.”).

[7] 488 F.3d 1102 (9th Cir. 2007).

[8] Id. at 1119 n.5.

[9] Kyle Jahner, Facebook Ruling Splits Courts Over Liability Shield Limits for IP, Bloomberg Law, (Sep. 28, 2021, 11:32 AM).

[10] 14 F.4th 204, 206-7 (3d Cir. 2021).

[11] Id. at 210.

[12] Id. at 211.

[13] Hepp v. Facebook, 14 F.4th 204 (3d Cir. 2021)(“[T]he structural evidence it cites cuts both ways. Facebook is correct that the explicit references to state law in subsection (e) are coextensive with federal laws. But those references also suggest that when Congress wanted to cabin the interpretation about state law, it knew how to do so—and did so explicitly.”).

[14] 14 F.4th at 216-26 (Cowen, J., dissenting).

[15] Almeida v. Amazon.com, Inc., 456 F.3d 1316 (11th Cir. 2006); Doe v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016).

[16] 14 F.4th at 220 (Cowen, J., dissenting) (“[T]he codified findings and policies clearly tilt the balance in Facebook’s favor.”).

[17] Communications Decency Act of 1996, Pub. L. 104-104, § 509, 110 Stat. 56, 137-39.

[18] H.R. REP. NO. 104-458 at 194 (1996) (Conf. Rep.); S. Rep. No. 104-230 at 194 (1996) (Conf. Rep.).

[19] Benjamin Volpe, From Innovation to Abuse: Does the Internet Still Need Section 230 Immunity?, 68 Cath. U. L. Rev. 597, 602 n.27 (2019); see Denise Pappalardo & Todd Wallack, Antispammers Take Matters Into Their Own Hands, Network World, Aug. 11, 1997, at 8 (“cancelbots are programs that automatically delete Usenet postings by forging cancel messages in the name of the authors. Normally, they are used to delete postings by known spammers. . . .”).

[20] 47 U.S.C. § 230(e)(2).


Freedom to Moderate? Circuits Split over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws which restrict social media platforms’ ability to moderate posts expressing “viewpoints” and require platforms to provide explanations for why they chose to censor certain content. These laws seemingly stem from the perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional in May for violating social media platforms’ First Amendment rights, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning the injunction issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition for review of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies which ban certain content, or at least require a sensitivity warning to be posted before viewing certain content. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards, and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed by the controversial 2010 decision in Citizens United, which established corporations’ right to make independent political expenditures as an exercise of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through the establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected arguments challenging social media platforms’ ability to set and uphold their own content guidelines, upholding social media platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media’s place as private businesses which hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence. 

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs’ argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also invoked the common carrier doctrine, which empowers states to enforce nondiscriminatory practices for services that the public uses en masse, embracing it in the context of social media platforms even though the 11th Circuit had explicitly rejected that classification.[5] The court therefore held with “no doubts” that section 7 of the Texas law, which prevents platforms from censoring users’ “viewpoints” (with exceptions for blatantly illegal speech, speech provoking violence, etc.), was constitutional.[6] Section 2 of the contested statute, requiring social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without unduly burdening the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

Florida, on September 21st, 2022, petitioned for a writ of certiorari asking the Supreme Court to review the May 2022 decision. The petition included reference to the 5th Circuit opinion, calling for the Supreme Court to weigh in on the circuit split. Considering recent Supreme Court decisions cutting down Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these Circuit Court decisions were Republican proposed bills, a Supreme Court decision would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would impact both Republican and Democratic legislatures’ ability to regulate social media platforms in any way.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] Netchoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sep. 16, 2022).

[5] Id. at 59.

[6] Id. at 52.

[7]  Id. at 102.


“I Don’t Know What To Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report to the police and hold the criminal accountable. When someone is wronged, they can seek retribution in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those in the legal profession to imagine applying the law to a technology that is not yet fully developed. Nevertheless, Congress and other law-making bodies will need to consider how to regulate how people use the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although by no means is the concept new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment which takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is PokemonGo, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As venture capitalist Matthew Ball has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse to look like. The game, Second Life, is a simulation that allows users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities that the Metaverse will bring in the future, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This horrifying experience showcases the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person assaulted in a virtual world feels as though they actually experienced the assault in real life. This should raise red flags. The problem arises, however, when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people know they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse because users can mask their identities and remain anonymous, so it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual reality platforms have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies, such as banning, for violations. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. Even where such terms exist, the problem remains how to enforce them; banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.
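The dynamic the Wall Street Journal describes is, at bottom, an engagement feedback loop: whatever a user interacts with is what the ranker serves more of. The toy recommender below illustrates only that general mechanism; it is not TikTok's actual For You algorithm, and the video catalog and topic labels are invented:

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Rank catalog videos by how often the user engaged with each topic.

    history: list of topic labels the user has interacted with.
    catalog: dict mapping video id -> topic label.
    Returns the k video ids whose topics the user engaged with most --
    a naive feedback loop that amplifies whatever the user touches.
    """
    topic_weight = Counter(history)
    ranked = sorted(catalog, key=lambda vid: topic_weight[catalog[vid]],
                    reverse=True)
    return ranked[:k]

catalog = {"v1": "cooking", "v2": "weightloss", "v3": "news",
           "v4": "weightloss", "v5": "sports"}
# A user who engaged twice with weight-loss content and once with news...
history = ["weightloss", "weightloss", "news"]
print(recommend(history, catalog))  # weight-loss videos now dominate the feed
```

Even this crude ranker shows why engaging with a topic once or twice can flood a feed with it, which is the pattern the Journal's automated teen accounts surfaced.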

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that, per TikTok’s terms of service, users must be at least 13 to use the platform and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice notes that teenagers would no longer be able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate, algorithm-free platforms for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. The committee also addressed a criticism from the bill's opponents by exempting algorithms used to filter out age-inappropriate content. A companion bill to HF 3724, SF 3922, is being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its impact on users under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


Social Media Influencers Ask What “Intellectual Property” Means

Henry Killen, MJLST Staffer

Today, just about anyone can name their favorite social media influencer. The most popular influencers are athletes, musicians, politicians, entrepreneurs, or models. Ultra-famous influencers, such as Kylie Jenner, can charge over $1 million for a single post featuring a company's product. So what are the risks of being an influencer? TikTok star Charli D'Amelio has been on both sides of intellectual property disputes. A photo of Charli was included in media mogul Sheeraz Hasan's video promoting his ability to "make anyone famous." The video featured many other celebrities, such as Logan Paul and Zendaya. Charli's legal team sent a cease-and-desist letter to Sheeraz demanding that her portion of the promotional video be scrubbed. Her lawyers assert that her presence in the promo "is not approved and will not be approved." Charli has also been on the other side of celebrity intellectual property issues. The star published her first book in December and has come under fire from photographer Jake Doolittle for allegedly using photos he took without his permission. Though no lawsuit has been filed, Jake posted a series of Instagram posts blaming Charli's team for not compensating him for his work.

Charli's controversies highlight a bigger question society is facing: is content shared on social media platforms intellectual property? A good place to begin is figuring out what exactly intellectual property is. Intellectual property "refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names, and images used in commerce." Social media platforms make it possible to access endless displays of content – from images to ideas – creating a cultural norm of sharing many aspects of life. Legal teams at the major social media platforms already have policies in place that make it against the rules to take images from a social media feed and use them as one's own. Bloggers, for example, may not be aware that what they write may already be trademarked or copyrighted, or that the images they pull off the internet for their posts may not be freely reposted. Influencers get reposted on sites like Instagram all the time, and not just by loyal fans. These reposts may seem harmless to many influencers, but it is actually against Instagram's policy to repost a photo without the creator's consent. This may not seem like a big deal, since what influencer doesn't want more attention? However, sometimes influencers' work gets taken and then becomes a sensation. A group of BIPOC TikTok users are fighting to copyright a dance they created that eventually became one of the biggest dances in TikTok history. A key fact in their case is that the dance only became wildly popular after the most famous TikTok users began doing it.

There are few examples of social media copyright issues being litigated, but in August 2021, a Manhattan federal judge ruled that the practice of embedding social media posts on third-party websites without permission from the content owner could violate the owner's copyright. In reaching this decision, the judge rejected the "server test" from the 9th Circuit, which holds that embedding content from a third party's social media account violates the content owner's copyright only if a copy is stored on the defendant's servers. Copyright law lays out four considerations when deciding if a work should be granted copyright protection: originality, fixation, idea versus expression, and functionality. These considerations notably leave a gray area in determining whether dances or expressions on social media sites can be copyrighted. Congress should enact a more comprehensive law to better address intellectual property as it relates to social media.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought too much of how growing up in the digital age affected me, but looking back, it is easy to see the cultural red flags. It came as no surprise to me when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files,” and in them found an internal study from the company showing Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late-Millennials have known for years. However, in the “Facebook Files” there is another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal troubles the company may find itself in and the proposed solutions to the “whitelisting” problem.

The Wall Street Journal's reporting describes an internal Facebook process called "whitelisting," in which the company "exempted high-profile users from some or all of its rules, according to company documents . . . ." This includes individuals from a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, submitted a complaint to the Securities and Exchange Commission (SEC) alleging that Facebook has "violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . ." See 17 C.F.R. § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements regarding Facebook's neutral application of its standards that are directly at odds with the Facebook Files. Regardless of any potential SEC investigation, the whitelist has opened up a conversation about the serious reform needed in the big tech arena to ensure that no company can maintain lists of privileged users again. All of the proposed solutions deal with 47 U.S.C. § 230, known colloquially as "Section 230."

Section 230 allows big tech companies to moderate content while still being treated as platforms rather than publishers (which would incur liability for what appears on their websites). Specifically, § 230(c)(2)(A) provides that no "interactive computer service" shall be held liable for acting in good faith to restrict "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . ." It is the last phrase, "otherwise objectionable," that tech companies have used as justification for removing "hate speech" or "misinformation" from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase "otherwise objectionable" from Section 230 itself. The proposed "Stop the Censorship Act of 2020," introduced by Republican Paul Gosar of Arizona, does just that. Proponents argue that it would force tech companies to be neutral or lose their liability protections. Thus, no big tech company would ever create standards stringent enough to require a "whitelist" or an exempted class, because the standard would hew close to First Amendment protections—problem solved! However, the current governing majority has serious concerns about forced neutrality, which would leave platforms unable to address misinformation or the mental health effects of social media in the aftermath of January 6th.

Elizabeth Warren, like a recent proposal in the House Judiciary Committee, takes a different approach: breaking up big tech. Warren proposes legislation to limit big tech companies from competing with the small businesses that use their platforms, and to reverse or block mergers, such as Facebook's purchase of Instagram. Her plan doesn't necessarily stop companies from having whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before unevenly applying their rules. Furthermore, Warren has called for regulators to use "every tool in the toolbox" with regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The argument goes that the government cannot "induce" or "encourage" private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Since some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block constitutionally protected speech. See Railway Employes' Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual mandate by the government for action). If the Supreme Court were to adopt this reasoning, Facebook might be forced to adopt a First Amendment-centric approach, since its current hate speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be barred from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach; needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Outside of the potential SEC violations, Section 230 is a complicated but necessary issue Congress must confront in the coming months. "The Facebook Files" have exposed the need for systemic change in social media. What I once used to play FarmVille has become a machine with rules for me, but not for thee.