
Will Artificial Intelligence Surpass Human Intelligence Sooner Than Expected? Taking a Look at ChatGPT

Alex Zeng, MJLST Staffer

The fear of robots taking over the world and making humans obsolete has permeated the fabric of human society in recent history. With advances in technology blurring the line between human art and artificial intelligence (“AI”) art, and a study predicting that 800 million workers across the globe will be replaced by robots by 2030, it may be hard to remain optimistic about humanity’s role in an increasingly automated society. Indeed, films such as 2001: A Space Odyssey (1968) and I, Robot (2004) take what awaits humans in a society ruled by robots to its logical conclusion, and—spoiler alert—it is not great for humans. This blog post discusses ChatGPT, its achievements, and its potential consequences for human society. ChatGPT, a point for the robots, embodies people’s fear of a bleak, fully automated future.

What Is ChatGPT?

ChatGPT is a chatbot launched by OpenAI in November of 2022. It uses natural language processing to engage in realistic conversations with humans, and it can generate articles, fictional stories, poems, and computer code in response to user prompts. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned using supervised and reinforcement learning techniques. The GPT model is also autoregressive, meaning that it predicts the next word given a body of text. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is not without its limitations, however. OpenAI says these include: (1) writing plausible-sounding but incorrect or nonsensical answers, (2) being sensitive to tweaks in input phrasing and to repeated attempts at the same prompt, (3) being excessively verbose and overusing certain phrases, (4) failing to ask clarifying questions when the user provides an ambiguous query, and (5) responding to harmful instructions or exhibiting biased behavior.
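The autoregressive idea, predicting the next word from the words seen so far, can be illustrated with a toy bigram model. This is a hypothetical sketch for intuition only; it bears no resemblance to OpenAI’s actual code or to the scale of a large language model:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it: a toy stand-in for
    the statistics a large language model learns from its training text."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, text):
    """One autoregressive step: predict the most likely next word
    given the body of text so far (here, just its last word)."""
    last = text.split()[-1]
    followers = model.get(last)
    if not followers:
        return None  # word never seen mid-sentence in training
    return followers.most_common(1)[0][0]

corpus = "the model predicts the next word given the next prompt"
model = train_bigram_model(corpus)
print(predict_next(model, "given the"))  # prints "next"
```

A real model conditions on the entire preceding context with learned weights rather than raw counts, but the loop is the same: generate a word, append it, and predict again.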

Uses For ChatGPT

The main distinction between ChatGPT and other chatbots and natural language processing systems is its ultra-realistic conversational skill. Professor Ethan Mollick, writing in the Harvard Business Review, claims that ChatGPT is a tipping point for AI because of this difference in quality: it can write weight-loss plans and children’s books, and even offer advice on how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. I even attempted to use ChatGPT to write this blog post for me, although it produced only 347 words, nowhere near the 1,000-word minimum I had set for it. What is evident through these cases, however, is a level of quality that sounds remarkably human.

ChatGPT’s uses are not limited to answering absurd prompts, however. Professor Mollick had a student use ChatGPT to complete a four-hour project in less than an hour, generating computer code for a startup prototype from code libraries the student had never seen before. Additionally, ChatGPT was able to pass graduate business and law exams, albeit by the skin of its silicon teeth. Indeed, it passed Constitutional Law, Employee Benefits, Taxation, and Torts exams administered by University of Minnesota Law School professors Jonathan Choi, Kristin Hickman, Amy Monahan, and Daniel Schwarcz. Of course, while ChatGPT would not graduate at the top of its class (it would actually be placed on academic probation), it would, notably, still graduate with a degree based on these results.

Implications of ChatGPT

ChatGPT’s application to tasks that require creativity and expression, such as answering exam questions, producing computer code, and being this generation’s Dr. Seuss, reveals an important yet potentially perilous step forward in how AI is used. Rather than being deployed only in areas where failure is expensive and intolerable—such as autonomous driving—AI is now being used in tasks where some failure is acceptable. In these tasks, AI such as ChatGPT already performs well enough that it has taken over online customer service roles, and it threatens to replace humans in any task that requires simple execution, such as following a script or whipping up a legal document. In fact, an AI-powered robot lawyer was about to represent a defendant in court before prosecutors threatened the person behind the chatbot with prison time.

When used as a tool rather than a standalone replacement for humans, however, the realm of possibilities regarding productivity expands exponentially. Businesses and individuals can save time and resources by having AI handle menial tasks such as drafting letters and writing emails. Writers with writer’s block, for example, can gain inspiration by having a conversation with ChatGPT. On the other hand, students can use ChatGPT to finish their assignments and write their exams for them. And while ChatGPT has filters that prevent it from producing offensive language, those filters can be bypassed so that it responds to queries that may facilitate crime. ChatGPT also raises big questions regarding, for example, copyright law and who owns the responses it generates.

One drawback to using AI like ChatGPT for these tasks is that while it gives human-like answers, it does not necessarily give the right answer. ChatGPT also cannot explain what it does or how it does it, making it difficult to verify how it arrives at its answers. Finally, and perhaps most critically, ChatGPT cannot explain why something is meaningful and thus cannot replicate human judgment. In other words, ChatGPT can describe data but cannot explain why the data matters.

Conclusion

In a more positive light, some may herald the improvements in AI and ChatGPT as the dawn of a new human-machine hybrid Industrial Revolution, in which humans become vastly more efficient and effective at their jobs. ChatGPT is, in some ways, the culmination of current efforts in AI to produce human-like intelligence. However, as advancements in AI continue to replace human functions in society, it may no longer be a question of if humans will be replaced entirely by robots, but when. Although it was previously believed that AI could never replicate art, for example, today’s discussions about AI-generated art suggest that AI may achieve what was thought impossible sooner rather than later. Viewed this way, AI like ChatGPT is not the harbinger of a human-machine society but an omen of the obsolescence of human function in society. Reassuringly, however, AI like ChatGPT has not yet reached the logical conclusion contemplated in dystopian films.


A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] To teach AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the model huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to use its interactions with users to fine-tune itself. This author asked ChatGPT how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps much more unsettling is the notion that this AI is indiscriminately sucking in user data like a black hole.

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting advertisements based on user data or Google recommending restaurants based on GPS data. But the way our data is being used by ChatGPT is in a league of its own. User data is iterated upon and, most importantly, shapes what ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

Meanwhile, the general public may not fully appreciate what privacy protections—or lack thereof—are in place in the United States. In brief, American law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern Internet, and its amendments have been meager changes that have not kept pace with technological advancement. Most of the ECPA addresses interception of communications (wiretapping, for example) or government access to electronic communications via warrants. “Electronic communications” may be a concept broad enough to include the Internet, yet the Internet is far too amorphous to be governed by this outdated Act, and AI tools operating on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data through the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible to those in the EU, interesting questions arise from the fact that the use and collection of data is this AI’s base function. Does the GDPR even allow for the use of ChatGPT, considering that user data is constantly being used to evolve the technology?[7] Collecting and using European citizens’ personal data without a lawful basis violates the GDPR, but how “use” applies to ChatGPT is not clear. The incorporation of user data into ChatGPT’s fine-tuning process could arguably be a violation.

Though a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information in the process.[8] That information is stored by ChatGPT, and the disclosure could potentially violate ABA confidentiality rules. As ChatGPT stirs even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they input into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI stipulates that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), with affiliates and subsidiaries of the company, with the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if it happens at all. If users of ChatGPT can be cognizant of what they input into the tool and stay informed about OpenAI’s obligations to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Infosecurity Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Emptying the Nest: Recent Events at Twitter Prompt Class-Action Litigation, Among Other Things

Ted Mathiowetz, MJLST Staffer

You’d be forgiven if you thought the circumstances that led to Elon Musk ultimately acquiring Twitter would be the end of the drama for the social media company. In the past seven months, Musk went from becoming the largest shareholder of the company, to publicly feuding with then-CEO Parag Agrawal, to making an offer to take the company private for $44 billion, to deciding he didn’t want to purchase the company, to being sued by Twitter to force him to complete the deal. Eventually, two weeks before trial was scheduled, Musk purchased the company for the original, agreed-upon price.[1] However, within the first two-and-a-half weeks of Musk taking Twitter private, the drama has continued, if not ramped up, with one lawsuit already filed and the specter of additional litigation looming.[2]

There’s been the highly controversial rollout and almost immediate suspension of Twitter Blue—Musk’s idea for increasing the reliability of information on Twitter while simultaneously helping ameliorate Twitter’s financial woes.[3] Essentially, users could pay $8 a month for verification, albeit without actually verifying their identity; instead, their username would be frozen at the time they paid for the service.[4] Users quickly created fake “verified” accounts for real companies and spread misinformation while armed with the “verified” check mark, duping both the public and investors. For example, a newly created account with the handle “@EliLillyandCo” paid for Twitter Blue and tweeted “We are excited to announce insulin is free now.”[5] Eli Lilly’s actual Twitter account, “@LillyPad,” had to tweet a message apologizing to those “who have been served a misleading message” from the fake account after the pharmaceutical company’s shares dipped around 5% following the tweet.[6] In addition to Eli Lilly, several other companies, like Lockheed Martin, faced similar identity theft.[7] Twitter Blue was quickly suspended in the wake of these viral impersonations, and advertisers have continued to flee the company, affecting its revenue.[8]

Musk also pulled over 50 engineers from Tesla, the vehicle manufacturing company of which he is CEO, to help him in his reimagining of Twitter.[9] Among those 50 engineers are the director of software development and the senior director of software engineering.[10] Pulling engineers from his publicly traded company to work on his separately owned private company almost assuredly raises questions of a violation of his fiduciary duty to Tesla’s shareholders, especially with Tesla’s share price falling 13% over the last week (as of November 9, 2022).[11]

The bulk of Twitter’s current legal issues stem from Musk’s decision to engage in mass layoffs at Twitter.[12] After his first week in charge, he sent notices to around half of Twitter’s 7,500 employees that they would be laid off, reasoning that cutbacks were necessary because Twitter was losing over $4 million per day.[13] Soon after the layoffs, a group of employees filed suit alleging that Twitter violated the Worker Adjustment and Retraining Notification (WARN) Act by failing to give adequate notice.[14]

The WARN Act, passed in 1988, applies to employers with 100 or more employees[15] and mandates that an “employer shall not order a [mass layoff]” until it gives sixty days’ notice to the state and affected employees.[16] Compliance can also be achieved if, in lieu of notice, the employee is paid for the sixty-day notice period. In Twitter’s case, some employees were offered pay to comply with the sixty-day period after the initial lawsuit was filed,[17] though the lead plaintiff in the class action suit was allegedly laid off on November 1st with no notice or offer of severance pay.[18] Additionally, it appears that Twitter is now offering severance to employees in return for a signature releasing it from liability in a WARN action.[19]

With regard to those who have not yet signed releases and were not given notice of a layoff, there is a question of what the penalties to Twitter may be and what potential defenses it may have. Each employee is entitled to “back pay for each day of violation” as well as benefits under their respective plan.[20] Furthermore, the employer is subject to a civil penalty of “not more than $500 for each day of violation” unless it pays its liability to each employee within three weeks of the layoff.[21] One possible defense that Twitter may assert in response to this suit is that of “unforeseeable business circumstances.”[22] Considering Musk’s recent comments that Twitter may be headed for bankruptcy, as well as the debt taken on to purchase the company (reportedly $13 billion, with $1 billion per year in interest payments),[23] there is a chance this defense could suffice. However, an unforeseen circumstance is strongly indicated when the circumstance is “outside the employer’s control,”[24] something that is arguable given the company’s recent conduct.[25] Additionally, Twitter would have to show that it exercised “commercially reasonable business judgment as would a similarly situated employer,” another burden that may be hard to overcome. In sum, it is quite clear why Twitter is trying to keep this lawsuit from gaining traction by securing release waivers. It is also clear that Twitter has learned its lesson about withholding severance, but it may be wading into other areas of employment law with its recent conduct.[26]
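The statute’s arithmetic can be roughed out per employee. The sketch below is a simplified, hypothetical illustration (the function name and inputs are the author’s, and it is not a damages model: actual WARN liability also involves benefits, offsets, and good-faith reductions, and the $500-per-day civil penalty runs to the local government rather than to employees):

```python
def warn_act_exposure(daily_wage, days_without_notice, paid_within_three_weeks=False):
    """Back-of-the-envelope per-employee WARN Act exposure (29 U.S.C. § 2104).

    Simplified assumptions: back pay accrues for each day of violation,
    capped at the sixty-day notice period, and a civil penalty of up to
    $500 per day applies unless the employer pays its liability to each
    employee within three weeks of the layoff."""
    days = min(days_without_notice, 60)
    back_pay = daily_wage * days
    civil_penalty = 0 if paid_within_three_weeks else 500 * days
    return back_pay, civil_penalty

# A hypothetical employee paid $400/day, laid off with no notice at all:
print(warn_act_exposure(400, 60))  # (24000, 30000)
# The same employee if the employer pays up within three weeks:
print(warn_act_exposure(400, 60, paid_within_three_weeks=True))  # (24000, 0)
```

Even this rough figure makes clear why an employer facing thousands of affected employees would rather secure release waivers than litigate.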

Notes

[1] Timeline of Billionaire Elon Musk’s Bid to Control Twitter, Associated Press (Oct. 28, 2022), https://apnews.com/article/twitter-elon-musk-timeline-c6b09620ee0905e59df9325ed042a609.

[2] Annie Palmer, Twitter Sued by Employees After Mass Layoffs Begin, CNBC (Nov. 4, 2022), https://www.cnbc.com/2022/11/04/twitter-sued-by-employees-after-mass-layoffs-begin.html.

[3] Siladitya Ray, Twitter Blue: Signups for Paid Verification Appear Suspended After Impersonator Chaos, Forbes (Nov. 11, 2022), https://www.forbes.com/sites/siladityaray/2022/11/11/twitter-blue-new-signups-for-paid-verification-appear-suspended-after-impersonator-chaos/?sh=14faf76c385c; see also Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:43 PM), https://twitter.com/elonmusk/status/1589403131770974208?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[4] Elon Musk (@elonmusk), Twitter (Nov. 6, 2022, 5:35 PM), https://twitter.com/elonmusk/status/1589401231545741312?s=20&t=bkkh_m5EgMreMCU-GWxXrQ.

[5] Steve Mollman, No, Insulin is not Free: Eli Lilly is the Latest High-Profile Casualty of Elon Musk’s Twitter Verification Mess, Fortune(Nov. 11, 2022), https://fortune.com/2022/11/11/no-free-insulin-eli-lilly-casualty-of-elon-musk-twitter-blue-verification-mess/.

[6] Id.; Eli Lilly and Company (@LillyPad), Twitter (Nov. 10, 2022, 3:09 PM), https://twitter.com/LillyPad/status/1590813806275469333?s=20&t=4XvAAidJmNLYwSCcWtd4VQ.

[7] Mollman, supra note 5 (showing Lockheed Martin’s stock dipped around 5% as well following a tweet from a “verified” account saying arms sales were being suspended to various countries went viral).

[8] Herb Scribner, Twitter Suffers “Massive Drop in Revenue,” Musk Says, Axios (Nov. 4, 2022), https://www.axios.com/2022/11/04/elon-musk-twitter-revenue-drop-advertisers.

[9] Lora Kolodny, Elon Musk has Pulled More Than 50 Tesla Employees into his Twitter Takeover, CNBC (Oct. 31, 2022), https://www.cnbc.com/2022/10/31/elon-musk-has-pulled-more-than-50-tesla-engineers-into-twitter.html.

[10] Id.

[11] Trefis Team, Tesla Stock Falls Post Elon Musk’s Twitter Purchase. What’s Next?, NASDAQ (Nov. 9, 2022), https://www.nasdaq.com/articles/tesla-stock-falls-post-elon-musks-twitter-purchase.-whats-next.

[12] Dominic Rushe, et al., Twitter Slashes Nearly Half its Workforce as Musk Admits ‘Massive Drop’ in Revenue, The Guardian (Nov. 4, 2022), https://www.theguardian.com/technology/2022/nov/04/twitter-layoffs-elon-musk-revenue-drop.

[13] Id.

[14] Phil Helsel, Twitter Sued Over Short-Notice Layoffs as Elon Musk’s Takeover Rocks Company, NBC News (Nov. 4, 2022), https://www.nbcnews.com/business/business-news/twitter-sued-layoffs-days-elon-musk-purchase-rcna55619.

[15] 29 USC § 2101(a)(1).

[16] 29 USC § 2102(a).

[17] On Point, Boston Labor Lawyer Discusses her Class Action Lawsuit Against Twitter, WBUR Radio Boston (Nov. 10, 2022), https://www.wbur.org/radioboston/2022/11/10/shannon-liss-riordan-musk-class-action-twitter-suit (discussing recent developments in the case with attorney Shannon Liss-Riordan).

[18] Complaint at 5, Cornet et al. v. Twitter, Inc., Docket No. 3:22-cv-06857 (N.D. Cal. 2022).

[19] Id. at 6 (outlining previous attempts by another Musk company, Tesla, to get around WARN Act violations by tying severance agreements to waiver of litigation rights); see also On Point, supra note 17.

[20] 29 USC § 2104.

[21] Id.

[22] 20 CFR § 639.9 (2012).

[23] Hannah Murphy, Musk Warns Twitter Bankruptcy is Possible as Executives Exit, Financial Times (Nov. 10, 2022), https://www.ft.com/content/85eaf14b-7892-4d42-80a9-099c0925def0.

[24] Id.

[25] See, e.g., Murphy, supra note 23.

[26] See Pete Syme, Elon Musk Sent a Midnight Email Telling Twitter Staff to Commit to an ‘Extremely Hardcore’ Work Schedule – or Get Laid off with Three Months’ Severance, Business Insider (Nov. 16, 2022), https://www.businessinsider.com/elon-musk-twitter-staff-commit-extremely-hardcore-work-laid-off-2022-11; see also Jaclyn Diaz, Fired by Tweet: Elon Musk’s Latest Actions are Jeopardizing Twitter, Experts Say, NPR (Nov. 17, 2022), https://www.npr.org/2022/11/17/1137265843/elon-musk-fires-employee-by-tweet (discussing firing of an employee for correcting Musk on Twitter and potential liability for a retaliation claim under California law).



Twitter Troubles: The Upheaval of a Platform and Lessons for Social Media Governance

Gordon Unzen, MJLST Staffer

Elon Musk’s Tumultuous Start

On October 27, 2022, Elon Musk officially completed his $44 billion deal to purchase the social media platform, Twitter.[1] When Musk’s bid to buy Twitter was initially accepted in April 2022, proponents spoke of a grand ideological vision for the platform under Musk. Musk himself emphasized the importance of free speech to democracy and called Twitter “the digital town square where matters vital to the future of humanity are debated.”[2] Twitter co-founder Jack Dorsey called Twitter the “closest thing we have to a global consciousness,” and expressed his support of Musk: “I trust his mission to extend the light of consciousness.”[3]

Yet only two weeks into Musk’s rule, the tone has quickly shifted towards doom, with advertisers fleeing the platform, talk of bankruptcy, and the Federal Trade Commission (“FTC”) expressing “deep concern.” What happened?

Free Speech or a Free for All?

Critics were quick to read Musk’s pre-purchase remarks about improving ‘free speech’ on Twitter to mean he would change how the platform would regulate hate speech and misinformation.[4] This fear was corroborated by the stream of racist slurs and memes from anonymous trolls ‘celebrating’ Musk’s purchase of Twitter.[5] However, Musk’s first major change to the platform came in the form of a new verification service called ‘Twitter Blue.’

Musk took control of Twitter during a substantial pullback in advertisement spending in the tech industry, a problem that has impacted other tech giants like Meta, Spotify, and Google.[6] His solution was to seek revenue directly from consumers through Twitter Blue, a program where users could pay $8 a month for verification with the ‘blue check’ that previously served to tell users whether an account of public interest was authentic.[7] Musk claimed this new system would give ‘power to the people,’ which proved correct in an ironic and unintended fashion.

Twitter Blue allowed users to pay $8 for a blue check and impersonate politicians, celebrities, and company media accounts—which is exactly what happened. Musk, Rudy Giuliani, O.J. Simpson, LeBron James, and even the Pope were among the many impersonated by Twitter users.[8] Companies received the same treatment, with an account impersonating Eli Lilly and Company writing “We are excited to announce insulin is free now,” causing its stock to drop 2.2%.[9] This has led advertising firms like Omnicom and IPG’s Mediabrands to conclude that brand safety measures are currently impeded on Twitter, and advertisers have subsequently begun to announce pauses on ad spending.[10] Musk responded by suspending Twitter Blue only 48 hours after it launched, but the damage may already be done for Twitter, a company whose revenue was 90% ad sales in the second quarter of this year.[11] During his first mass call with employees, Musk said he could not rule out bankruptcy in Twitter’s future.[12]

It also remains to be seen whether the Twitter impersonators will escape civil liability under theories of defamation[13] or misappropriation of name or likeness,[14] or criminal liability under state identity theft[15] or false representation of a public employee statutes,[16] which have been legal avenues used to punish instances of social media impersonation in the past.

FTC and Twitter’s Consent Decree

On the first day of Musk’s takeover of Twitter, he immediately fired the CEO, the CFO, the head of legal, policy, and trust, and the general counsel.[17] By the following week, mass layoffs were in full swing, with 3,700 Twitter jobs, or 50% of its total workforce, to be eliminated.[18] This move has already landed Twitter in legal trouble for potentially violating the California WARN Act, which requires 60 days’ advance notice of mass layoffs.[19] More ominously, however, these layoffs, as well as the departures of the company’s head of trust and safety, chief information security officer, chief compliance officer, and chief privacy officer, have attracted the attention of the FTC.[20]

In 2011, Twitter entered a consent decree with the FTC in response to data security lapses requiring the company to establish and maintain a program that ensured its new features do not misrepresent “the extent to which it maintains and protects the security, privacy, confidentiality, or integrity of nonpublic consumer information.”[21] Twitter also agreed to implement two-factor authentication without collecting personal data, limit employee access to information, provide training for employees working on user data, designate executives to be responsible for decision-making regarding sensitive user data, and undergo a third-party audit every six months.[22] Twitter was most recently fined $150 million back in May for violating the consent decree.[23]

With many of Twitter’s former executives gone, the company may be at an increased risk for violating regulatory orders and may find itself lacking the necessary infrastructure to comply with the consent decree. Musk also reportedly urged software engineers to “self-certify” legal compliance for the products and features they deployed, which may already violate the court-ordered agreement.[24] In response to these developments, Douglas Farrar, the FTC’s director of public affairs, said the commission is watching “Twitter with deep concern” and added that “No chief executive or company is above the law.”[25] He also noted that the FTC had “new tools to ensure compliance, and we are prepared to use them.”[26] Whether and how the FTC will employ regulatory measures against Twitter remains uncertain.

Conclusions

The fate of Twitter is by no means set in stone—in two weeks the platform has lost advertisers, key employees, and some degree of public legitimacy. However, at the speed Musk has moved so far, in two more weeks the company could likely be in a very different position. Beyond the immediate consequences to the company, Musk’s leadership of Twitter illuminates some important lessons about social media governance, both internal and external to a platform.

First, social media is foremost a business and not the ‘digital town square’ Musk imagines. Twitter’s regulation of hate speech and verification of public accounts served an important role in maintaining community standards, promoting brand safety for advertisers, and protecting users. Loosening regulatory control runs a great risk of delegitimizing a platform that corporations and politicians alike took seriously as a tool for public communication.

Second, social media stability is important to government regulators, and further oversight may not be far off on the horizon. Musk is setting a precedent and bringing the spotlight to the dangers of a destabilized social media platform and the risks this may pose to data privacy, efforts to curb misinformation, and even the stock market. In addition to the FTC, Senate Majority Whip and Senate Judiciary Committee chair Dick Durbin has already commented negatively on the Twitter situation.[27] Musk may have given powerful regulators, and even legislators, the opportunity they were looking for to impose greater control over social media. For better or worse, Twitter’s present troubles could lead to a new era of government involvement in digital social spaces.

Notes

[1] Adam Bankhurst, Elon Musk’s Twitter Takeover and the Chaos that Followed: The Complete Timeline, IGN (Nov. 11, 2022), https://www.ign.com/articles/elon-musks-twitter-takeover-and-the-chaos-that-followed-the-complete-timeline.

[2] Monica Potts & Jean Yi, Why Twitter is Unlikely to Become the ‘Digital Town Square’ Elon Musk Envisions, FiveThirtyEight (Apr. 29, 2022), https://fivethirtyeight.com/features/why-twitter-is-unlikely-to-become-the-digital-town-square-elon-musk-envisions/.

[3] Bankhurst, supra note 1.

[4] Potts & Yi, supra note 2.

[5] Drew Harwell et al., Racist Tweets Quickly Surface After Musk Closes Twitter Deal, Washington Post (Oct. 28, 2022), https://www.washingtonpost.com/technology/2022/10/28/musk-twitter-racist-posts/.

[6] Bobby Allyn, Elon Musk Says Twitter Bankruptcy is Possible, But is That Likely?, NPR (Nov. 12, 2022), https://www.wglt.org/2022-11-12/elon-musk-says-twitter-bankruptcy-is-possible-but-is-that-likely.

[7] Id.

[8] Keegan Kelly, We Will Never Forget These Hilarious Twitter Impersonations, Cracked (Nov. 12, 2022), https://www.cracked.com/article_35965_we-will-never-forget-these-hilarious-twitter-impersonations.html; Shirin Ali, The Parody Gold Created by Elon Musk’s Twitter Blue, Slate (Nov. 11, 2022), https://slate.com/technology/2022/11/parody-accounts-of-twitter-blue.html.

[9] Ali, supra note 8.

[10] Mehnaz Yasmin & Kenneth Li, Major Ad Firm Omnicom Recommends Clients Pause Twitter Ad Spend – Memo, Reuters (Nov. 11, 2022), https://www.reuters.com/technology/major-ad-firm-omnicom-recommends-clients-pause-twitter-ad-spend-verge-2022-11-11/; Rebecca Kern, Top Firm Advises Pausing Twitter Ads After Musk Takeover, Politico (Nov. 1, 2022), https://www.politico.com/news/2022/11/01/top-marketing-firm-recommends-suspending-twitter-ads-with-musk-takeover-00064464.

[11] Yasmin & Li, supra note 10.

[12] Katie Paul & Paresh Dave, Musk Warns of Twitter Bankruptcy as More Senior Executives Quit, Reuters (Nov. 10, 2022), https://www.reuters.com/technology/twitter-information-security-chief-kissner-decides-leave-2022-11-10/.

[13] Dorrian Horsey, How to Deal With Defamation on Twitter, Minc, https://www.minclaw.com/how-to-report-slander-on-twitter/ (last visited Nov. 12, 2022).

[14] Maksim Reznik, Identity Theft on Social Networking Sites: Developing Issues of Internet Impersonation, 29 Touro L. Rev. 455, 456 n.12 (2013), https://digitalcommons.tourolaw.edu/cgi/viewcontent.cgi?article=1472&context=lawreview.

[15] Id. at 455.

[16] Brett Snider, Can a Fake Twitter Account Get You Arrested?, FindLaw Blog (April 22, 2014), https://www.findlaw.com/legalblogs/criminal-defense/can-a-fake-twitter-account-get-you-arrested/.

[17] Bankhurst, supra note 1.

[18] Sarah Perez & Ivan Mehta, Twitter Sued in Class Action Lawsuit Over Mass Layoffs Without Proper Legal Notice, Techcrunch (Nov. 4, 2022), https://techcrunch.com/2022/11/04/twitter-faces-a-class-action-lawsuit-over-mass-employee-layoffs-with-proper-legal-notice/.

[19] Id.

[20] Natasha Lomas & Darrell Etherington, Musk’s Lawyer Tells Twitter Staff They Won’t be Liable if Company Violates FTC Consent Decree, TechCrunch (Nov. 11, 2022), https://techcrunch.com/2022/11/11/musks-lawyer-tells-twitter-staff-they-wont-be-liable-if-company-violates-ftc-consent-decree/.

[21] Id.

[22] Scott Nover, Elon Musk Might Have Already Broken Twitter’s Agreement With the FTC, Quartz (Nov. 11, 2022), https://qz.com/elon-musk-might-have-already-broken-twitter-s-agreement-1849771518.

[23] Tom Espiner, Twitter Boss Elon Musk ‘Not Above the Law’, Warns US Regulator, BBC (Nov. 11, 2022), https://www.bbc.com/news/business-63593242.

[24] Nover, supra note 22.

[25] Espiner, supra note 23.

[26] Id.

[27] Kern, supra note 10.


Target Number One, the Consequences of Being the Best

Ben Lauter, MJLST Staffer

The World of Chess

Since 2013, Norwegian Magnus Carlsen has been the reigning World Champion in chess. This achievement came as no shock to many; Magnus has been an elite chess prodigy and Grandmaster since the age of thirteen (nine years before his eventual champion title). Many regard Magnus as the best chess player ever, surpassing the legends Fischer and Kasparov,[1] two former great World Champions. Kasparov himself drew, or tied, a classical game[2] of chess against Magnus when Magnus was just thirteen. It seems impossible to quantify the talent and genius that Magnus possesses and continues to refine in chess. However, that is exactly what the ELO rating system intends to do.

An ELO rating is a calculation of a chess player's current skill level. Magnus boasts the highest classical ELO rating ever recorded: 2882. Along the way to that all-time high was a stretch of nearly two and a half years in which Magnus went unbeaten across 125 consecutive classical games. All of this is to say, Magnus Carlsen is an unstoppable force in chess. However, on September 4th, 2022, Magnus played a game that snapped his then-current 53-game unbeaten streak. On that date, at the St. Louis-based Sinquefield Cup tournament, he lost to Hans Niemann, a 19-year-old San Francisco-born prodigy then ranked 49th in the world with an ELO rating of 2688.
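The arithmetic behind an ELO rating is simple enough to sketch. Below is a minimal Python illustration of the standard Elo update: a player's expected score against an opponent is derived from the rating gap, and the rating then moves in proportion to how the actual result differs from that expectation. The K-factor and the exact ratings used here are illustrative assumptions, not official FIDE figures.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (between 0 and 1) for player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))


def update_rating(rating: float, expected: float, actual: float, k: float = 10) -> float:
    """New rating after one game; actual is 1 for a win, 0.5 for a draw, 0 for a loss."""
    return rating + k * (actual - expected)


# Illustrative ratings roughly matching the players discussed above.
carlsen, niemann = 2860, 2688
exp = expected_score(carlsen, niemann)          # the higher-rated player is a heavy favorite
after_loss = update_rating(carlsen, exp, actual=0)  # an upset loss costs the favorite points
print(round(exp, 2), round(after_loss, 1))
```

This asymmetry is why an upset loss by the favorite costs far more rating points than a routine win over a lower-rated player would earn.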

The Match

This match had anything but a quiet result, despite the silence in the interviews afterwards. All the reigning World Champion offered was a tweet stating that he would be withdrawing from the tournament, a measure that is nearly unprecedented for a World Champion at such a major world tournament. Attached to the tweet was a clip of the famous soccer (football) manager Jose Mourinho saying, "If I speak, I will be in big trouble." The chess world speculated that this was Magnus's informal way of accusing the teenage Hans of cheating in an "over the board" chess match, a conjecture with which the chess world has not yet made peace, with article after article, interview after interview, and Grandmaster after Grandmaster giving their two cents.

There were many aftershocks to Magnus's tweet, but the legal ones, namely a potential defamation case for slander or libel, may prove the worst for Magnus. For the past several weeks Hans Niemann has been under the magnifying glass. He has faced harassment, attacks on his character, and irreparable reputational damage. Yet Magnus has still failed to present any evidence as to why he withdrew or sent that tweet out to the world, and he has not clarified or disclaimed any of the rumors that shadow Hans.

For a while, it looked like Hans would have only actions and innuendos as his evidence in a slander or libel case. Then, after an online chess tournament in which both Magnus and Hans were participants, Magnus put out his official position on the matter. Magnus declared that, on top of cheating in his match in St. Louis, Hans was a serial chess cheater and should be punished proportionately to the crime he committed. In his declaration, Magnus said that he believed his accusation whole-heartedly and would never again participate in an invitational event in which Hans plays. Throughout the rest of the statement Magnus provided zero evidence of the alleged cheating, stating that he could not release his evidence without the approval of the player he accused.

Consequences

There are two massive consequences likely to result from Magnus's statement. The first is that Hans's professional career will likely be in ruins. Invitationals are a priority for top-ranked chess professionals, allowing them to play official, rated matches in addition to receiving prize money. If an invitational must choose between a candidate for the best player of all time, Magnus, and a rising teenager, Hans, there might not be a long discussion. The second consequence is that because no evidence has been released to validate the statements Magnus made on his gut feeling, Hans may have a case for slander or libel.

There are four elements to prove in a slander case: the plaintiff must show a false statement purporting to be fact, publication of that statement to a third person, fault amounting to at least negligence, and damages incurred. Two of these elements are quite clear and likely provable: there was publication of a statement, and there were damages to Hans's reputation. The other two elements require further analysis. The fault element looks to Magnus's state of mind when he made his statements; to make out a prima facie case, Hans would need evidence that Magnus spoke to tarnish Hans's name, or was at the very least negligent in making the statements. This standard is notoriously hard to prove and will undoubtedly act as a roadblock to a slander case. However, it will likely be even harder for Hans to prove the first element, that the statement was false while purporting to be fact. This element poses a problem because of the difficulty of proving that something that didn't happen, didn't happen. Specifically, Hans would have to show that he did not cheat in order to prove that Magnus's cheating accusation was false.

Further complicating the issue is surfacing evidence from other sources that makes Magnus's claim of cheating more believable. Statistical analysis of Hans's performances shows that he has been playing games matching computer moves 90% of the time or more, compared to the likes of Fischer, Kasparov, or Magnus, who were only around 70% at their all-time peaks, and to typical 2700-rated Grandmasters, who average between 50% and 60%. Reports indicate that, based on Hans's last 18 months of performance, the chance that he played at that rate without computer assistance is one in over 60,000. Unless Hans can show that Magnus's statements are false, or at least unlikely to be true, he will likely fail to prove slander, and his career will likely be derailed after the events of September.

Notes

[1] Kasparov is among the longest-reigning World Champions to date.

[2] A “Classical Game” is a time format of chess that allows for 120 minutes of play per person for the first forty moves; it allows for the deepest level of consideration on every move. As a result, classical games of chess are an incredibly accurate and sound measure of a player’s talent. They are used to determine the World Champion every two years.


It’s Social Media – A Big Lump of Unregulated Child Influencers!

Tessa Wright, MJLST Staffer

If you’ve been on TikTok lately, you’re probably familiar with the Corn Kid. Seven-year-old Tariq went viral on TikTok in August after appearing in an 85-second video clip professing his love of corn.[1] Due to his accidental viral popularity, Tariq has become a social media celebrity. He has been featured in content collaborations with notable influencers, starred in a social media ad for Chipotle, and even created an account on Cameo.[2] At seven years old, he has become a child influencer, a minor celebrity, and a major financial contributor to his family. Corn Kid is not alone. There are a growing number of children rising to fame via social media. In fact, today child influencers have created an eight-billion-dollar social media advertising industry, with some children generating as much as $26 million a year through advertising and sponsored content.[3] Yet, despite this rapidly growing industry, there are still very few regulations protecting the financial earnings of child entertainers in the social media industry.[4]

What Protects Children’s Financial Earnings in the Entertainment Industry?

Normally, children in the entertainment industry have their financial earnings protected under the California Child Actor’s Bill (also known as the Coogan Law).[5] The Coogan Law was passed in 1939 by the state of California in response to the plight of Jackie Coogan.[6] Coogan was a child star who earned millions of dollars as a child actor only to discover upon reaching adulthood that his parents had spent almost all of his money.[7] Over the years the law has evolved, and today it provides that earnings by minors in the entertainment industry are the property of the minor.[8] Specifically, the California law creates a fiduciary relationship between the parent and child and requires that 15% of all earnings be set aside in a blocked trust.[9]

What Protections do Child Social Media Stars Have? 

Social media stars are not legally considered to be actors, so the Coogan Law does not apply to their earnings.[10] So, are there other laws protecting these social media stars? The short answer is no.

Technically, there are laws that restrict children under the age of 13 from using social media apps, which in theory should protect the youngest social media stars.[11] However, even though these social media platforms claim to require users to be at least thirteen years old to create accounts, there are still ways children end up working in content-creation jobs.[12] The most common scenario is that parents of these children make content in which they feature their children.[13] These “family vloggers” are a popular genre of YouTube videos in which parents frequently feature their children and share major life events; sometimes they even feature the birth of their children. Often these parents also make separate social media accounts for their children, which are technically run by the parents and are therefore allowed despite the age restrictions.[14] There are no restrictions or regulations preventing parents from making social media accounts for their children, and therefore no restriction on the parents’ collection of the income generated from such accounts.[15]

New Attempts at Legislation 

So far, there has been very little intervention by lawmakers. The state of Washington has attempted to turn the tide by proposing a new state bill that attempts to protect children working in social media.[16] The bill was introduced in January of 2022 and, if passed, would offer protection to children living within the state of Washington who are on social media.[17] Specifically, the bill introduction reads, “Those children are generating interest in and revenue for the content, but receive no financial compensation for their participation. Unlike in child acting, these children are not playing a part, and lack legal protections.”[18] The bill would hopefully help protect the finances of these child influencers. 

Additionally, California passed a similar bill in 2018.[19] Unfortunately, it only applies to videos that are longer than one hour and involve direct payment to the child.[20] What this means is that a child Twitch streamer who posts a three-hour livestream and receives direct donations during the stream would be covered by the bill; however, a child featured in a 10-minute YouTube video or a 15-second TikTok would not be financially protected under the bill.

The Difficulties in Regulating Social Media Earnings for Children

Currently, France is the only country in the world with regulations for children working in the social media industry.[21] There, children working in the entertainment industry (whether as child actors, models, or social media influencers) have to register for a license and their earnings must be put into a dedicated bank account for them to access when they’re sixteen.[22] However, the legislation is still new and it is too soon to see how well these regulations will work. 

The problem with creating legislation in this area is attributable to the ad hoc nature of making social media content.[23] It is not realistic to simply extend existing legislation applicable to child entertainers to child influencers[24] as their work differs greatly. Moreover, it becomes extremely difficult to attempt to regulate an industry when influencers can post content from any location at any time, and when parents may be the ones filming and posting the videos of their children in order to boost their household income. For example, it would be hard to draw a clear line between when a child is being filmed casually for a home video and when it is being done for work, and when an entire family is featured in a video it would be difficult to determine how much money is attributable to each family member. 

Is There a Solution?

While there is no easy solution, changing the current regulations or creating new ones is the clearest route. Traditionally, tech platforms have taken the view that governments should make rules and that they will then enforce them.[25] All major social media sites have their own safety rules, but the extent to which they are responsible for the oversight of child influencers is not clearly defined.[26] However, if any new regulation is going to be effective, big tech companies will need to get involved. As it stands today, parents have found loopholes that allow them to feature their child stars on social media without violating age restrictions. To prevent similar loopholes in new regulations, it will be essential that big tech companies work in collaboration with legislators to create technical features that close them.

The hope is that one day, children like Corn Kid will have total control of their financial earnings, and will not reach adulthood only to discover their money has already been spent by their parents or guardians. The future of entertainment is changing every day, and the laws need to keep up. 

Notes

[1] Madison Malone Kircher, New York Times (Online), New York: New York Times Company (September 21, 2022) https://www.nytimes.com/2022/09/21/style/corn-kid-tariq-tiktok.html.

[2] Id.

[3] Marina Masterson, When Play Becomes Work: Child Labor Laws in the Era of ‘Kidfluencers’, 169 U. Pa. L. Rev. 577, 577 (2021).

[4] Coogan Accounts: Protecting Your Child Star’s Earnings, Morgan Stanley (Jan. 10, 2022), https://www.morganstanley.com/articles/trust-account-for-child-performer.

[5] Coogan Law, https://www.sagaftra.org/membership-benefits/young-performers/coogan-law (last visited Oct. 16, 2022).

[6] Id.

[7] Id.

[8] Cal. Fam. Code § 6752.

[9] Id.

[10] Morgan Stanley, supra note 4.

[11] Sapna Maheshwari, Online and Making Thousands, at Age 4: Meet the Kidfluencers, N.Y. Times, (March 1, 2019) https://www.nytimes.com/2019/03/01/business/media/social-media-influencers-kids.html.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Katie Collins, TikTok Kids Are Being Exploited Online, but Change is Coming, CNET (Aug. 8, 2022 9:00 AM), https://www.cnet.com/news/politics/tiktok-kids-are-being-exploited-online-but-change-is-coming/.

[17] Id.

[18] Id.

[19] E.W. Park, Child Influencers Have No Child Labor Regulations. They Should, Lavoz News (May 16, 2022) https://lavozdeanza.com/opinions/2022/05/16/child-influencers-have-no-child-labor-regulations-they-should/.

[20] Id.

[21] Collins, supra note 16.

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Collins, supra note 16.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. The Meta “Pixel” tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may violate a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel’s use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] In theory, the actions that are tracked include purchases, registrations, cart additions, searches and more. This information can then be used by the website owners to better understand user behavior. Website owners can more efficiently use ad spend by tailoring ads to relevant users and finding more receptive users based on Pixel’s analysis.[5]
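To make the mechanism concrete, here is a minimal, hypothetical Python sketch of how a tracking pixel works in general: the page requests a tiny resource whose URL encodes the event, and the analytics server recovers the event by parsing the query string. This is an illustration of the generic technique only; the endpoint and parameter names are invented for this example and are not Meta's actual code or API.

```python
from urllib.parse import parse_qs, urlencode, urlparse

# Hypothetical tracking endpoint, for illustration only.
TRACKING_ENDPOINT = "https://tracker.example.com/collect"


def build_pixel_url(site_id: str, event: str, **params: str) -> str:
    """URL a page would request (e.g., as a 1x1 image) to report an event."""
    query = urlencode({"id": site_id, "ev": event, **params})
    return f"{TRACKING_ENDPOINT}?{query}"


def parse_pixel_hit(url: str) -> dict:
    """What the analytics server recovers from that request."""
    qs = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in qs.items()}


hit = build_pixel_url("site-123", "AddToCart", item="sku-42", value="19.99")
print(parse_pixel_hit(hit))
```

The key point for the litigation discussed below is that whatever a site places into those parameters travels to the tracker's servers, which is how identifiable details (a name or a search term typed into a patient portal, for instance) can leak if a site is configured carelessly.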

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between those tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to social profiles on Meta’s flagship website, Facebook. Whether intentionally or accidentally, Pixel has been found to have grabbed personal information beyond the simple user web actions it was supposed to be limited to and connected it to Facebook profiles.[6]

Pixel and Patient Healthcare Information

It’s estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] That number may decrease after Meta’s recent data privacy issues, however. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel improperly retrieved private patient information, including names, conditions, email addresses, and more. Meta then targeted hospital website users with ads on Facebook, matching the information Pixel collected from hospital websites and patient portals with those users’ Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and added Pixel code to its website to evaluate the effectiveness of the campaign; Pixel proceeded to send private and identifiable user information to Meta.[9] In another lawsuit, Meta’s co-defendants, the University of California San Francisco and Dignity Health (“UCSF”), were accused of illegally gathering patient information via Pixel code on their patient portal. Private medical information was then distributed to Meta, and pharmaceutical companies are alleged to have gained access to this information and sent out targeted ads based thereon.[10] That is just one example; all in all, more than 1 million patients have been affected by this Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions against Meta for violations of the Video Privacy Protection Act (the “VPPA”). The VPPA was enacted in the 1980s to cover videotapes and audio-visual materials. No longer confined to the rental store, the VPPA has taken on a much broader meaning with the growth of the internet.

These class actions accuse Meta of using the Pixel tool to take video user data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law, and many more. The classes allege that the VPPA was violated when video viewing activity was collected in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than collecting it anonymously) so that Pixel users could target their ads at the viewers. Notably, the defendants in these VPPA lawsuits are not Meta itself but the companies that shared user information with Meta.[12]

Causes of Action

The relatively new area of data privacy is scarcely litigated by the federal government due to the lack of federal statutes protecting consumer privacy. Because of that, the number of data protection civil litigants can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act, was created in 1996 to protect patient information from disclosure without patient consent, but in the patient portal cases HIPAA actions would have to be initiated by the US government. Claimants are therefore suing Meta under consumer protection and other privacy laws like the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and California’s Comprehensive Computer Data Access and Fraud Act instead.[14] These laws allow individuals to sue directly, whereas under federal acts like HIPAA the government may move slowly, or not at all. And in the cases of video tracking, the litigants may only sue the video provider, not Meta itself.[15] Despite that wrinkle of benefit to Meta, involvement in still more privacy disputes is not ideal for the tech giant, as it may erode public trust in Meta Platforms.

Possible Outcomes

If found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run into the millions as well and would vary from state to state given the variety of acts under which the claims are brought.[17] In the UCSF data case specifically, class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, those terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent needs to come from forms separate from online user agreements.[19] If more lawsuits emerge over Pixel’s conduct, it is quite possible that companies will move away from such web analytics tools to avoid potential liability. It remains to be seen whether Meta Pixel’s convenience and utility will continue to outweigh the risk such tools pose to the websites that use them.

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel’s Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the latest hit with a class action over Meta’s alleged patient data mining, Fierce Healthcare (Aug. 12, 2022 10:30AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




After Hepp: Section 230 and State Intellectual Property Law

Kelso Horne IV, MJLST Staffer

Although hardly a competitive arena, Section 230(c) of the Communications Decency Act (the “CDA”) is almost certainly the best known of all telecommunications laws in the United States. It shields Internet Service Providers (“ISPs”) and websites from liability for content published by their users, and its policy goals are laid out succinctly, if a bit grandly, in § 230(a) and § 230(b).[1] These two sections speak about the internet as a force for economic and social good, characterizing it as a “vibrant and competitive free market” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”[2] But where §§ 230(a),(b) speak broadly of a utopian vision for the internet, and (c) grants websites substantial privileges, § 230(e) gets down to brass tacks.[3]

CDA: Goals and Text

The CDA lays out certain limitations on the shield protections provided by § 230(c).[4] Among these is § 230(e)(2), which states in full: “Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.”[5] Despite its seeming clarity, this section has been the subject of litigation for over a decade, and in 2021 a clear circuit split opened between the 9th and 3rd Circuits over how this short sentence applies to state intellectual property laws. The 9th Circuit holds that the policy portions of § 230, as stated in §§ 230(a),(b), should be controlling and that, as a consequence, state intellectual property claims are barred. The 3rd Circuit holds that the plain text of § 230(e)(2) unambiguously allows for state intellectual property claims.

Who Got There First? Lycos and Perfect 10

In Universal Commc’n Sys., Inc. v. Lycos, Inc., the 1st Circuit faced this question obliquely; the court assumed, without deciding, that § 230 did not immunize the defendant from state intellectual property law, and the claims were dismissed on other grounds.[6] Consequently, when the 9th Circuit released its opinion in Perfect 10, Inc. v. CCBill LLC only one month later, it felt free to craft its own rule on the issue.[7] Consisting of a few short paragraphs, the court’s decision on state intellectual property rights is nicely summarized in a single sentence: “As a practical matter, inclusion of rights protected by state law within the ‘intellectual property’ exemption would fatally undermine the broad grant of immunity provided by the CDA.”[8] The court’s analysis in Perfect 10 was almost entirely based on what allowing state intellectual property claims would do to the policy goals stated in § 230(a) and § 230(b); it did not attempt, or rely on, a particularly thorough reading of § 230(e)(2). The court clearly saw the possibility of issues arising from allowing plaintiffs to bring claims through fifty different state systems against websites and ISPs for the postings of their users. That insight may be little more than hindsight, however, given the date of the CDA’s drafting.

Hepp Solidifies a Split

Perfect 10 would remain the authoritative appellate-level case on the issue of the CDA and state intellectual property law until 2021, when the 3rd Circuit stepped into the ring.[9] In Hepp v. Facebook, Pennsylvania newsreader Karen Hepp sued Facebook for hosting advertisements promoting a dating website and other services that had used her likeness without her permission.[10] In a much longer analysis, the 3rd Circuit held that the 9th Circuit’s interpretation, argued for by Facebook, “stray[ed] too far from the natural reading of § 230(e)(2)”.[11] Instead, the 3rd Circuit argued for a closer reading of the text of § 230(e)(2), which it said aligned with a more balanced selection of policy goals, including allowance for state intellectual property law.[12] The court also addressed structural arguments relied on by Facebook, mostly examining how narrow the other exceptions in § 230(e) are, which the majority said “cuts both ways” since Congress easily cabined meanings when it wanted to.[13]

The dissent in Hepp agreed with the 9th Circuit that the policy goals stated in §§ 230(a),(b) should be considered controlling.[14] It also noted two cases in other circuits where courts had shown hesitancy toward allowing state intellectual property claims under the CDA to go forward, although both claims had been dismissed on other grounds.[15] Perhaps unsurprisingly, the dissent saw the structural arguments as compelling, and in Facebook’s favor.[16] With the circuits now definitively split, the tension between the policy of §§ 230(a),(b) and the text of § 230(e)(2) would certainly seem to demand that the Supreme Court, or Congress, step in and provide a clear standard.

What Next? Analyzing the CDA

Although both Perfect 10 and Hepp are ostensibly focused on parsing what Congress intended when it drafted § 230, neither cites legislative history in discussing the § 230(e)(2) issue. This is not as odd as it first appears. The Communications Decency Act is over a hundred pages long, and § 230 makes up about a page and a half of it.[17] The legislative reports published after the CDA passed focused mostly on its landmark provisions, which attempted, largely unsuccessfully, to regulate obscene materials on the internet.[18] Section 230 receives a passing mention of less than a page, some of which is taken up with assurances that it would not interfere with civil liability for those engaged in “cancelbotting,” a controversial anti-spam method of the Usenet era.[19] It is perhaps unfair to call § 230 an afterthought, but lawmakers likely did not grasp its importance at the time of passage. This counsels against the 9th Circuit’s analysis, which seemingly imparts to the CDA’s drafters an improbably high degree of foresight into § 230’s use by internet companies over a decade later.

Indeed, however one might wish Congress had drafted it, the text of § 230(e)(2) is clear, and the inclusion of “any” as a modifier to “law” makes it difficult to argue that state intellectual property claims are not exempted from the general grant of immunity in § 230.[20] Congressional inaction should not invite courts to step in and fashion what they believe would be a better Act. Indeed, the 3rd Circuit majority in Hepp may be correct that Congress did in fact want state intellectual property claims to stand. Either way, there is no easy judicial answer: following the clear text of the section would undermine what many in the e-commerce industry see as an important protection, while following the purported vision of the Act stated in §§ 230(a) and (b) would strip away a tool that victims of infringement may use to defend themselves. The circuit split has made clear that this is a question on which reasonable jurists can disagree. Congress, as an elected body, is best positioned to balance these equities, and it should use its lawmaking powers to definitively resolve the issue.

Notes

[1] 47 U.S.C. § 230.

[2] Id.

[3] 47 U.S.C. § 230(e).

[4] Id.

[5] 47 U.S.C. § 230(e)(2).

[6] Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413 (1st Cir. 2007) (“UCS’s remaining claim against Lycos was brought under Florida trademark law, alleging dilution of the ‘UCSY’ trade name under Fla. Stat. § 495.151. Claims based on intellectual property laws are not subject to Section 230 immunity.”).

[7] 488 F.3d 1102 (9th Cir. 2007).

[8] Id. at 1119 n.5.

[9] Kyle Jahner, Facebook Ruling Splits Courts Over Liability Shield Limits for IP, Bloomberg Law (Sept. 28, 2021, 11:32 AM).

[10] 14 F.4th 204, 206-7 (3d Cir. 2021).

[11] Id. at 210.

[12] Id. at 211.

[13] Hepp v. Facebook, 14 F.4th 204 (3d Cir. 2021)(“[T]he structural evidence it cites cuts both ways. Facebook is correct that the explicit references to state law in subsection (e) are coextensive with federal laws. But those references also suggest that when Congress wanted to cabin the interpretation about state law, it knew how to do so—and did so explicitly.”).

[14] 14 F.4th at 216-26 (Cowen, J., dissenting).

[15] Almeida v. Amazon.com, Inc., 456 F.3d 1316 (11th Cir. 2006); Doe v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016).

[16] 14 F.4th at 220 (Cowen, J., dissenting) (“[T]he codified findings and policies clearly tilt the balance in Facebook’s favor.”).

[17] Communications Decency Act of 1996, Pub. L. 104-104, § 509, 110 Stat. 56, 137-39.

[18] H.R. Rep. No. 104-458, at 194 (1996) (Conf. Rep.); S. Rep. No. 104-230, at 194 (1996) (Conf. Rep.).

[19] Benjamin Volpe, From Innovation to Abuse: Does the Internet Still Need Section 230 Immunity?, 68 Cath. U. L. Rev. 597, 602 n.27 (2019); see Denise Pappalardo & Todd Wallack, Antispammers Take Matters Into Their Own Hands, Network World, Aug. 11, 1997, at 8 (“cancelbots are programs that automatically delete Usenet postings by forging cancel messages in the name of the authors. Normally, they are used to delete postings by known spammers. . . .”).

[20] 47 U.S.C. § 230(e)(2).


iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 

But I Thought my Texts Were Private?

Regardless of the passcode on your phone or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), authentication requires only evidence sufficient to support a finding that the item is what its proponent claims it is.[3] Absent access to the defendant’s phone, a key way to authenticate texts is to demonstrate the personal nature of the messages and their consistency with the parties’ earlier communications.[4] For texts to be admitted as evidence and survive hearsay objections, however, preserving the messages through screenshots, printouts, or other tangible records is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several harmful effects. First, the mere availability of the capability may make perpetrators more willing to harass by text, knowing that disappearing messages are easier to get away with. Further, victims are less likely to capture the evidence in the short window before the message is rescinded but after the damage has already been done. Attorney Michelle Simpson Tuegal, who spoke out against the software, noted that “victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings.”[6] Finally, when victims lack proof and the perpetrator denies sending anything, the resulting “gaslighting” and undermining of the victim’s experience can inflict further psychological pain.[7]

Why are Text Messages so Important?

Text messages have been critical evidence in proving guilt in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her 18-year-old boyfriend to commit suicide.[8] Not only were these texts of value in proving reckless conduct, they also proved Carter guilty of involuntary manslaughter, as her words were shown to be the cause of the victim’s death. Without evidence of this communication, the case may have turned out very differently. Who is to say Carter’s abuse would not have gone unpunished had she been able to unsend or edit her messages?

Text messaging is also a popular tool for perpetrators of sexual harassment, and such abuse happens every day. In one Rhode Island Supreme Court case, communication via iMessage was central to a conviction for first-degree sexual assault, as the 17-year-old victim had felt too afraid to undergo a hospital examination after her attack.[9] Fortunately, she had saved photos of inappropriate messages the perpetrator sent after the incident, among other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and with the help of a family member. Immediately memorializing evidence is often not a victim’s first instinct, especially when the content carries shame or trauma. The new iOS feature may take away this opportunity to support one’s case with messages that paint a picture of the incident or of the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to unsend messages and 15 minutes to edit them, is actually an amendment to Apple’s original design, which allowed 15 minutes to unsend. The change came after an advocate for survivors of sexual harassment and assault wrote a letter to Apple’s CEO warning of the dangers of the new unsending capability.[10] While the shortened timeframe leaves less room for abuse, editing is just as dangerous as unsending. With no limit on how much text can be changed, one could send full sentences of verbal abuse only to later replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window simply gives them less time to react and to save the messages as evidence. We might hope that the narrower window makes perpetrators think harder before sending a text they may not be able to delete, but this is wishful thinking. Almost half of young people report having been victims of cyberbullying even when there was no option to rescind or edit one’s messages, which suggests the length of the window hardly matters.[11] The abilities of the new Apple software should be disabled; Apple’s “fix” to the update is not enough. The costs of such a feature to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend iMessages on Your iPhone. Here’s How to Do It, CNBC (Sept. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 Fed. Appx. 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, Digital Trends (July 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2019).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SlickText (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Freedom to Moderate? Circuits Split over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws that restrict social media platforms’ ability to moderate posts expressing “viewpoints” and require platforms to explain why they chose to censor certain content. The laws seemingly stem from a perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. They are in direct conflict with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an exercise of free speech.

While the 11th Circuit declared the Florida law unconstitutional in May for violating social media platforms’ First Amendment rights, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning a previous injunction issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. In the wake of this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition for certiorari challenging the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies that ban certain content or at least require a sensitivity warning before it can be viewed. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards, and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed in the controversial 2010 Citizens United decision, which recognized corporations’ independent political expenditures as protected speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through their establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected challenges to social media platforms’ ability to set and enforce their own content guidelines, upholding the platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split that will spawn further litigation and leave platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media platforms as private businesses holding First Amendment rights. It rejected Florida’s argument that the platforms are common carriers and held that their editorial discretion is protected by the First Amendment.[2] The court recognized the platforms’ freedom to abide by their own community guidelines and to choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] The 5th Circuit’s later decision attacked this opinion directly, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence.

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs’ argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also embraced, in the context of social media platforms, the common carrier doctrine, which empowers states to enforce nondiscriminatory practices for services the public uses en masse and which the 11th Circuit had explicitly rejected.[5] The court therefore held with “no doubts” that section 7 of the Texas law, which prevents platforms from censoring users’ “viewpoints” (with exceptions for blatantly illegal speech such as incitement to violence), is constitutional.[6] Section 2 of the statute, which requires platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without unduly burdening the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

On September 21st, 2022, Florida petitioned for a writ of certiorari asking the Supreme Court to review the 11th Circuit’s May 2022 decision. The petition referenced the 5th Circuit opinion, calling on the Supreme Court to resolve the circuit split. Considering recent Supreme Court decisions cutting back Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these Circuit Court decisions were Republican proposed bills, a Supreme Court decision would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would impact both Republican and Democratic legislatures’ ability to regulate social media platforms in any way.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] Netchoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sep. 16, 2022).

[5] Id. at 59.

[6] Id. at 52.

[7]  Id. at 102.