Internet

iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators.

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 

But I Thought My Texts Were Private?

Regardless of the passcode on your phone, or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), such authentication requires only proof sufficient to support a finding that the evidence at issue is what you claim it is.[3] Absent access to the defendant’s phone, a key way to authenticate texts is to demonstrate the personal nature of the messages, showing that they emulate the parties’ earlier communications.[4] However, for texts to be admitted as evidence and survive hearsay objections, preserving the messages through screenshots, printouts, or other tangible means of authentication is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several negative effects. First, the availability of this capability may increase perpetrators’ use of text messages, since disappearing harassment will be easier to get away with. Further, victims will be less likely to capture the evidence in the short window before the proof is rescinded but after the damage has already been done. Attorney Michelle Simpson Tuegel, who spoke out against this software, shared how “victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings.”[6] Finally, when victims are left without proof and the perpetrator denies ever sending the messages, such “gaslighting” and undermining of the victim’s experience may inflict psychological pain.[7]

Why are Text Messages so Important?

Text messages have been critical evidence in proving defendants’ guilt in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her 18-year-old boyfriend to commit suicide.[8] Not only were these texts valuable in proving reckless conduct; they also proved Carter guilty of involuntary manslaughter, as her words were shown to be the cause of the victim’s death. Without evidence of this communication, the case may have turned out very differently. Who is to say that Carter would not have succeeded in her abuse by sending and then unsending or editing her messages later?

Text messaging is also a popular tool for perpetrators of sexual harassment, and such harassment happens every day. In a Rhode Island Supreme Court case, communication via iMessage was central to a finding of first-degree sexual assault, as the 17-year-old victim felt too afraid to undergo a hospital examination after her attack.[9] Fortunately, the victim had saved photos of inappropriate messages the perpetrator sent after the incident, among other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and only with the help of a family member. This demonstrates that immediately memorializing evidence is often not a victim’s first instinct, especially when the content may be associated with shame or trauma. The new iOS feature may take away this opportunity to support one’s case with messages that can paint a picture of the incident or of the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to recall messages and 15 minutes to edit them, is actually an amendment to Apple’s originally offered timeframe of 15 minutes to unsend. This change came in light of efforts from an advocate for survivors of sexual harassment and assault, who wrote a letter to Apple’s CEO warning of the dangers of the new unsending capability.[10] While the decreased timeframe that resulted leaves less room for abuse of the feature, editing is just as dangerous as unsending. With no limit on how much text you can edit, one could send full sentences of verbal abuse only to later replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window only gives them less time to react – less time to save the messages for evidence. While we can hope that the newly decreased window makes perpetrators think harder before sending a text they may not be able to delete, this is wishful thinking. The fact that almost half of young people have reported being victims of cyberbullying even when there was no option to rescind or edit one’s messages suggests that the length of the window likely does not matter.[11] The abilities of the new Apple software should be disabled; Apple’s “fix” to the update is not enough. The costs of what such a feature will do to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend iMessages on Your iPhone. Here’s How to Do It, CNBC (Sept. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 F. App’x 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, DIGITALTRENDS (July 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2019).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SLICKTEXT (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Freedom to Moderate? Circuits Split Over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws which restrict social media platforms’ ability to moderate posts expressing “viewpoints” and require platforms to explain why they chose to censor certain content. These laws seemingly stem from the perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the currently prevalent understanding of social media platforms’ First Amendment protections, under which content moderation is itself an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional in May for violating social media platforms’ First Amendment rights, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning the previous injunction issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition for review of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies which ban certain content, or at least require a sensitivity warning to be posted before viewing certain content. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed in the controversial 2010 Citizens United decision, which established corporations’ right to make independent political expenditures as an exercise of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through their establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected challenges to social media platforms’ ability to set and enforce their own content guidelines, treating moderation as speech protected by the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media platforms as private businesses that hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and to choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, which challenged the 11th Circuit’s adherence to existing First Amendment jurisprudence.

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiff’s argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also embraced the common carrier doctrine in the context of social media platforms, a doctrine that empowers states to enforce nondiscriminatory practices for services the public uses en masse (and a classification that the 11th Circuit explicitly rejected).[5] The court therefore held with “no doubts” that section 7 of the Texas law—which prevents platforms from censoring users’ “viewpoints” (with exceptions for blatantly illegal speech provoking violence, etc.)—was constitutional.[6] Section 2 of the contested statute, requiring social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without unduly burdening the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

Florida, on September 21st, 2022, petitioned for a writ of certiorari asking the Supreme Court to review the May 2022 decision. The petition referenced the 5th Circuit opinion, calling for the Supreme Court to weigh in on the circuit split. Considering recent Supreme Court decisions cutting back Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these decisions were Republican-proposed bills, a Supreme Court ruling would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would shape both Republican and Democratic legislatures’ ability to regulate social media platforms.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] NetChoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sept. 16, 2022).

[5] Id. at *59.

[6] Id. at *52.

[7] Id. at *102.


Digital Literacy, a Problem for Americans of All Ages and Experiences

Justice Shannon, MJLST Staffer

According to the American Library Association, “digital literacy” is “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.” The term has existed since 1997, when Paul Gilster coined digital literacy as “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers.” In this way, the definition of digital literacy has broadened from how a person absorbs digital information to how one develops, absorbs, and critiques digital information.

The Covid-19 pandemic taught Americans of all ages the value of digital literacy. Elderly populations were forced online without prior training due to the health risks presented by Covid-19, and digitally illiterate parents were unable to help their children with classes.

Separate from Covid-19, the rise of cryptocurrency has created a need for digital literacy in spaces that are not federally regulated.

Elderly

The Covid-19 pandemic did not create the need for digital literacy training for the elderly. However, the pandemic highlighted a national need to address digital literacy among America’s oldest population. Elderly family members quarantined during the pandemic were quickly separated from their families. Teaching family members how to use Zoom and Facebook Messenger became a substitute for some, but not all, forms of connectivity. However, teaching an elderly family member how to use Facebook Messenger to speak to loved ones does not enable them to communicate with peers or teach them other digital literacy skills.

To address digital literacy issues within the elderly population, states have approved Senior Citizen Technology grants. Pennsylvania’s Department of Aging has granted funds to adult education centers to provide technology training for senior citizens. Programs like this have been developing throughout the nation. For example, Prince George’s Community College in Maryland uses state funds to teach technology skills to its older population.

It is difficult to tell if these programs are working. States like Pennsylvania and Maryland had programs before the pandemic. Still, these programs alone did not reduce the distance between America’s aging population and the rest of the nation during the pandemic. However, when looking at the scale of the program in Prince George’s County, this likely was not the goal. Beyond that, there is a larger question: Is the purpose of digital literacy for the elderly to ensure that they can connect with the world during a pandemic, or is the goal simply ensuring that the elderly have the skills to communicate with the world? With this in mind, programs that predate the pandemic, such as the programs in Pennsylvania and Maryland, likely had the right approach even if they weren’t of a large enough scale to ensure digital literacy for the entirety of our elderly population.

Parents

The pandemic highlighted a similar problem for many American families. While state, federal, and local governments stepped up to provide laptops and access to the internet, many families still struggled to get their children into online classes; this is an issue of what is known as “last mile infrastructure.” During the pandemic, the nation quickly provided families with access to the internet without ensuring they were ready to navigate it. This left families feeling ill-prepared to support their children’s educational growth from home. Providing families with access to broadband without digital literacy training disproportionately impacted families of color by limiting their children’s capacity for growth online compared to their peers. While this wasn’t an intended result, it is a result of hasty bureaucracy in response to a national emergency. Nationally, the 2022 Workforce Innovation and Opportunity Act aims to address digital literacy issues among adults by increasing funding for teaching workplace technology skills to working adults. However, this will not ensure that American parents can manage their children’s technological needs.

Crypto

Separate from the issues created by Covid-19 is cryptocurrency. One of the largest selling points of cryptocurrency is that it is largely unregulated. Users see it as “digital gold, free from hyper-inflation.” While these claims can be valid, consumers frequently are not aware of the risks. Last year the Chair of the SEC called cryptocurrencies “the wild west of finance rife with fraud, scams, and abuse.” This year the Department of the Treasury announced it would release instructional materials to explain how cryptocurrencies work. While this will not directly regulate cryptocurrencies, providing Americans with more tools to understand them may help reduce cryptocurrency scams.

Conclusion

Addressing digital literacy was a problem for years before the Covid-19 pandemic. Additionally, when new technologies become popular, there are new lessons to learn for all age groups. Covid-19 appropriately shined a light on the need to address digital literacy issues within our borders. However, if we only go so far as to get Americans networked and prepared for the next national emergency, we’ll find that there are disparities between those who excel online and those who are ill-equipped to use the internet to connect with family, educate their kids, and participate in e-commerce.


Extending Trademark Protections to the Metaverse

Alex O’Connor, MJLST Staffer

After a 2020 bankruptcy and steadily decreasing revenue that the company attributes to the Coronavirus pandemic, restaurant and arcade center Chuck E. Cheese is hoping to revitalize its business model by entering the metaverse, a pandemic-proof virtual world. In February, Chuck E. Cheese filed two intent-to-use trademark applications with the USPTO, for the marks “CHUCK E. VERSE” and “CHUCK E. CHEESE METAVERSE.”

Under Section 1 of the Lanham Act, the two most common types of applications for registration of a mark on the Principal Register are (1) a use-based application, for which the applicant must have used the mark in commerce, and (2) an “intent to use” (ITU) application, for which the applicant must possess a bona fide intent to use the mark in trade in the near future. Chuck E. Cheese has filed ITU applications for its two marks.

The metaverse is a still-developing virtual and immersive world that will be inhabited by digital representations of people, places, and things. Its appeal lies in the possibility of living a parallel, virtual life. The pandemic has provoked a wave of investment into virtual technologies, and brands are hurrying to extend protection to virtual renditions of their marks by registering specifically for the metaverse. A series of lawsuits over allegedly infringing use of registered marks via still-developing technology has spooked mark holders into taking preemptive action. In the face of this uncertainty, the USPTO could provide mark holders with a measure of predictability by extending analogue protections of marks used in commerce to substantially similar virtual renditions.

Most notably, Hermes International S.A. sued the artist Mason Rothschild for both infringement and dilution over the use of the term “METABIRKINS” in his collection of Non-Fungible Tokens (NFTs). Hermes alleges that the NFTs are confusing customers about the source of the digital artwork and diluting the distinctive quality of Hermes’ popular line of handbags. The argument continues that “META” is merely a generic prefix, so that the mark simply means “BIRKINS in the metaverse,” and that Rothschild’s use of it constitutes trading on Hermes’ reputation as a brand.

Many companies and individuals are rushing to the USPTO to register trademarks for their brands to use in virtual reality. Household names such as McDonald’s (“MCCAFE” for a virtual restaurant featuring actual and virtual goods), Panera Bread (“PANERAVERSE” for virtual food and beverage items), and others have recently filed applications for registration with the USPTO for virtual marks. The rush of filings signals a recognition among companies that the digital marketplace presents countless opportunities to expand their brand awareness, or, if they’re not careful, for trademark copycats to trade on their hard-earned good will among consumers.

Luckily for Chuck E. Cheese and other companies that seek to extend their brands into the metaverse, trademark protection there is governed by the same set of rules governing regular analogue trademark protection. That is, the mark the company is seeking to protect must be distinctive, it must be used in commerce, and it must not be covered by a statutory bar to protection. For example, if a mark’s exclusive use by one firm would leave other firms at a significant non-reputation-related disadvantage, the mark is said to be functional, and it can’t be protected. The metaverse does not present any additional obstacles to trademark protection, so as long as Chuck E. Cheese eventually uses its two marks, it will enjoy their exclusive use among consumers in the metaverse.

However, the relationship between new virtual marks and analogue marks is a subject of some uncertainty. Most notably, should a mark find broad success and achieve fame in the metaverse, would that virtual fame confer fame in the real world? What will trademark expansion into the metaverse mean for licensing agreements? Clarification from the USPTO could help put mark holders at ease as they venture into the virtual market. 

Additionally, trademarks in the metaverse present another venue in which trademark trolls can attempt to register an already well-known mark with no actual intent to use it, although the requirement under U.S. law that mark holders either use or possess a bona fide intent to use the mark can help mitigate this problem. Finally, observers contend that the expansion of commerce into the virtual marketplace will present opportunities for copycats to exploit marks. Already, third parties are seeking to register marks for virtual renditions of existing brands. In response, trademark lawyers are encouraging their clients to register their virtual marks as quickly as possible to head off any potential copycat users. The USPTO could ensure brands’ security by providing more robust protections to virtual trademarks based on a substantially similar, already registered analogue trademark.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report it to the police and hold the criminal accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and know the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. However, what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those involved in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it will not be abused.

The Metaverse is a term that has recently gained a lot of attention, although by no means is the concept new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment which takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think about the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
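To make the “distributed ledger” idea concrete, here is a minimal sketch, in Python, of how a blockchain-style ledger can record the provenance of a digital asset. This is a toy illustration only, not any real blockchain: actual systems add decentralized consensus, cryptographic signatures, and much more, and every name here (the asset ID, the owners) is hypothetical.

```python
import hashlib
import json
import time

def make_block(asset_id: str, owner: str, prev_hash: str) -> dict:
    """Record one transfer of a digital asset, chained to the prior record."""
    block = {
        "asset_id": asset_id,
        "owner": owner,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Hash this block's contents together with the previous block's hash, so
    # tampering with any past record changes every hash that follows it.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# A toy provenance chain: a virtual land parcel is minted, then sold twice.
chain = [make_block("parcel-42", "creator", prev_hash="0" * 64)]
for buyer in ["alice", "bob"]:
    chain.append(make_block("parcel-42", buyer, prev_hash=chain[-1]["hash"]))

# Verifying provenance means re-walking the chain of hashes.
for prev, cur in zip(chain, chain[1:]):
    assert cur["prev_hash"] == prev["hash"]
print([b["owner"] for b in chain])  # ['creator', 'alice', 'bob']
```

The hash chaining is what makes such a ledger a trustworthy record of ownership history: no single entry can be quietly rewritten without breaking every link that follows it.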

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring in the future, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person assaulted in a virtual world feels as if they actually experienced the assault in real life. This should be raising red flags. The problem arises, however, when trying to regulate activities in the Metaverse. Sexually assaulting someone in a virtual reality is different than assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws regarding conduct in the Metaverse. Certainly, this is something that will need to be addressed, as there need to be laws that prevent this kind of behavior from happening. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse due to users’ ability to mask their identities and remain anonymous. Therefore, it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual realities have terms of service which attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or any rules regarding conduct in Horizon Worlds. However, the problem remains how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place in order to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.
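The dynamic the Journal describes is a feedback loop: the more a user engages with a topic, the more of that topic the feed serves back. TikTok’s actual recommendation system is proprietary, but a minimal sketch of the general engagement-weighting pattern, with every name and number here invented purely for illustration, might look like this:

```python
import random

# Hypothetical catalog: each video carries a single topic tag.
catalog = [
    {"id": i, "topic": random.choice(["pets", "sports", "dieting"])}
    for i in range(1000)
]

def recommend(engagement: dict[str, int], k: int = 10) -> list[dict]:
    """Weight each video by how often the user engaged with its topic."""
    weights = [1 + 10 * engagement.get(video["topic"], 0) for video in catalog]
    return random.choices(catalog, weights=weights, k=k)

# Simulate a user who engages with only one topic: each session's views
# feed back into the weights used for the next session.
engagement: dict[str, int] = {}
for _ in range(20):
    for video in recommend(engagement):
        if video["topic"] == "dieting":
            engagement["dieting"] = engagement.get("dieting", 0) + 1

sample = recommend(engagement, k=100)
share = sum(video["topic"] == "dieting" for video in sample)
print(f"{share}% of the sampled feed is now a single topic")
```

After a handful of simulated sessions, one topic dominates the feed, which is the inundation effect the articles documented and the behavior HF 3724 targets.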

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that, per TikTok’s terms of service, users must be at least 13 to use the platform and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”


Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice cites to teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, who would essentially have to run separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have also noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF 3922, being considered in the Senate.

It will be interesting to see if legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


Social Media Influencers Ask What “Intellectual Property” Means

Henry Killen, MJLST Staffer

Today, just about anyone can name their favorite social media influencer. The most popular influencers are athletes, musicians, politicians, entrepreneurs, or models. Ultra-famous influencers, such as Kylie Jenner, can charge over $1 million for a single post featuring a company’s product. So what are the risks of being an influencer? TikTok star Charli D’Amelio has been on both sides of intellectual property disputes. A photo of Charli was included in media mogul Sheeraz Hasan’s video promoting his ability to “make anyone famous.” The video featured many other celebrities, such as Logan Paul and Zendaya. Charli’s legal team sent a cease-and-desist letter to Sheeraz demanding that her portion of the promotional video be scrubbed. Her lawyers assert that her presence in the promo “is not approved and will not be approved.” Charli has also been on the other side of celebrity intellectual property issues. The star published her first book in December and has come under fire from photographer Jake Doolittle for allegedly using photos he took without his permission. Though no lawsuit has been filed, Jake posted a series of Instagram posts blaming Charli’s team for not compensating him for his work.

Charli’s controversies highlight a bigger question society is facing: is content shared on social media platforms considered intellectual property? A good place to begin is figuring out what exactly intellectual property is. Intellectual property “refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names, and images used in commerce.” Social media platforms make it possible to access endless displays of content – from images to ideas – creating a cultural norm of sharing many aspects of life. Legal teams at the major social media platforms already have policies in place that make it against the rules to take images from a social media feed and use them as one’s own. Bloggers, for example, may not be aware that what they write may already be trademarked or copyrighted, or that the images they pull off the internet for their posts may not be freely reposted. Influencers get reposted on sites like Instagram all the time, and not just by loyal fans. These reposts may seem harmless to many influencers, but it is actually against Instagram’s policy to repost a photo without the creator’s consent. This may not seem like a big deal, because what influencer doesn’t want more attention? However, sometimes influencers’ work gets taken and then becomes a sensation. A group of BIPOC TikTok users are fighting to copyright a dance they created that eventually became one of the biggest dances in TikTok history. A key fact in their case is that the dance only became wildly popular after the most famous TikTok users began doing it.

There are few examples of social media copyright issues being litigated, but in August 2021, a Manhattan federal judge ruled that the practice of embedding social media posts on third-party websites, without permission from the content owner, could violate the owner’s copyright. In reaching this decision, the judge rejected the 9th Circuit’s “server test,” which holds that embedding content from a third party’s social media account only violates the content owner’s copyright if a copy is stored on the defendant’s servers. General copyright law lays out four considerations when deciding if a work should be granted copyright protection: originality, fixation, idea versus expression, and functionality. These considerations notably leave a gray area in determining if dances or expressions on social media sites can be copyrighted. Congress should enact a more comprehensive law to better address intellectual property as it relates to social media.


The Uniform Domain Name Dispute Resolution Policy (“UDRP”): Not a Trademark Court but a Narrow Administrative Procedure Against Abusive Registrations

Thao Nguyen, MJLST Staffer

Anyone can register a domain name through one of the thousands of registrars on a first-come, first-served basis at a low cost. This ease of entry has created so-called “cybersquatters,” who register domain names that reflect trademarks before the true trademark owners are able to do so. Cybersquatters often aim to profit from cybersquatting, whether by selling the domain names back to the trademark holders at a higher price, by generating confusion in order to take advantage of the trademark’s goodwill, or by diluting the trademark and disrupting the business of a competitor. A single cybersquatter can squat on several thousand domain names that incorporate well-known trademarks.

Paragraph 4(a) of the UDRP provides that the complainant must successfully establish all three of the following elements: (i) that the disputed domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; (ii) that the registrant has no rights or legitimate interests in respect of the domain name; and (iii) that the registrant registered and is using the domain name in bad faith. Remedies for a successful complainant include cancellation of the disputed domain name or its transfer to the complainant.

Although prized for being focused, expedient, and inexpensive, the UDRP is not without criticism, the bulk of which focuses on the issue of fairness. The frequent charge is that the UDRP is inherently biased in favor of trademark owners and against domain name holders, not all of whom are “cybersquatters.” This bias is suggested by the statistics: 75% to 90% of UDRP decisions each year are decided against the domain name owner.

Nonetheless, the asymmetry of outcomes, rather than being a sign of an unfair arbitration process, may simply reflect the reality that most UDRP complaints are brought when there is a clear case of abuse, and most respondents in the proceeding are true cybersquatters who knowingly and willfully violated the UDRP. Therefore, what may appear to be the UDRP’s shortcomings are in fact signs that the UDRP is fulfilling its primary purpose. Furthermore, to appreciate the UDRP proceeding and understand the asymmetry that might normally raise red flags in an adjudication, one must understand that the UDRP is not meant to resolve trademark disputes. A representative case where this purpose is addressed is Cameron & Company, Inc. v. Patrick Dudley, FA1811001818217 (FORUM Dec. 26, 2018), where the Panel wrote, “cases involving disputes regarding trademark rights and usage, trademark infringement, unfair competition, deceptive trade practices and related U.S. law issues are beyond the scope of the Panel’s limited jurisdiction under the Policy.” In other words, the UDRP’s scope is limited to detecting and reversing the damage of cybersquatting, and the administrative dispute-resolution procedure is streamlined for this purpose.[1]

That the UDRP is not a trademark court is evident in its refusal to handle cases where multiple legitimate complainants assert rights to a single domain name registered by a cybersquatter. UDRP Rule 3(a) states: “Any person or entity may initiate an administrative proceeding by submitting a complaint.” The Forum’s Supplemental Rule 1(e) defines “The Party Initiating a Complaint Concerning a Domain Name Registration” as a “single person or entity claiming to have rights in the domain name, or multiple persons or entities who have a sufficient nexus who can each claim to have rights to all domain names listed in the Complaint.” UDRP cases with two or more complainants in a proceeding are possible only when the complainants are affiliated with each other so as to share a single license to a trademark,[2] for example, when the complainant is assigned rights to a trademark registered by another entity,[3] or when the complainant has a subsidiary relationship with the trademark registrant.[4]

Since the UDRP does not resolve good faith trademark disputes but intervenes only when there is clear abuse, the respondent’s bad faith is central: a domain name may be confusingly similar or even identical to a trademark, and yet a complainant cannot prevail if the respondent has rights and legitimate interests in the domain name and/or did not register and use the domain name in bad faith.[5] For this reason, the UDRP sets a high standard for the complainant to establish the respondent’s bad faith. For example, the UDRP provides a defense if the domain name registrant has made demonstrable preparations to use the domain name in a bona fide offering of goods or services. By contrast, the Anticybersquatting Consumer Protection Act (“ACPA”) only provides a defense if there is prior good faith use of the domain name, not mere preparation to use. Another distinction between the UDRP and the ACPA is that the UDRP requires the complainant to prove bad faith in both the registration and the use of the disputed domain in order to prevail, whereas the ACPA only requires the complainant to prove bad faith in either registration or use.

Such a high standard for bad faith indicates that the UDRP is not equipped to resolve issues where both parties dispute their respective rights in the trademark. In fact, when abuse is non-existent or not obvious, the UDRP Panel will refuse to transfer the disputed domain name from the respondent to the complainant.[6] Instead, the parties would need to resolve these claims in regular courts under either the ACPA or the Lanham Act. Limiting itself to addressing cybersquatting allows the UDRP to be extremely efficient in dealing with cybersquatting practices, a widespread and highly damaging abuse of the Internet age. This efficiency and ease of the UDRP process is appreciated by trademark-owning businesses and individuals, who prefer that disputes be handled promptly and economically. From the time of the UDRP’s creation until now, ICANN has shown no intention of reforming the Policy despite existing criticisms,[7] and for good reason.


Notes

[1] Gerald M. Levine, Domain Name Arbitration: Trademarks, Domain Names, and Cybersquatting at 102 (2019).

[2] Tasty Baking, Co. & Tastykake Invs., Inc. v. Quality Hosting, FA 208854 (FORUM Dec. 28, 2003) (treating the two complainants as a single entity where both parties held rights in trademarks contained within the disputed domain names.)

[3] Golden Door Properties, LLC v. Golden Beauty / goldendoorsalon, FA 1668748 (FORUM May 7, 2016) (finding rights in the GOLDEN DOOR mark where Complainant provided evidence of assignment of the mark, naming Complainant as assignee); Remithome Corp v. Pupalla, FA 1124302 (FORUM Feb. 21, 2008) (finding the complainant held the trademark rights to the federally registered mark REMITHOME, by virtue of an assignment); Stevenson v. Crossley, FA 1028240 (FORUM Aug. 22, 2007) (“Per the annexed U.S.P.T.O. certificates of registration, assignments and license agreement executed on May 30, 1997, Complainants have shown that they have rights in the MOLD-IN GRAPHIC/MOLD-IN GRAPHICS trademarks, whether as trademark holder, or as a licensee. The Panel concludes that Complainants have established rights to the MOLD-IN GRAPHIC SYSTEMS mark pursuant to Policy ¶ 4(a)(i).”)

[4] Provide Commerce, Inc v Amador Holdings Corp / Alex Arrocha, FA 1529347 (FORUM Jan. 3, 2014) (finding that the complainant shared rights in a mark through its subsidiary relationship with the trademark holder); Toyota Motor Sales, U.S.A., Inc. v. Indian Springs Motor, FA 157289 (FORUM June 23, 2003) (“Complainant has established that it has rights in the TOYOTA and LEXUS marks through TMC’s registration with the USPTO and Complainant’s subsidiary relationship with TMC.”)

[5] Levine, supra note 1, at 99; see e.g., Dr. Alan Y. Chow, d/b/a Optobionics v. janez bobnik, FA2110001967817 (FORUM Nov. 23, 2021) (refusing to transfer the <optobionics.com> domain name despite its being identical to Complainant’s OPTOBIONICS mark and formerly owned by Complainant, since “[t]he Panel finds no evidence in the Complainant’s submissions . . . [that] the Respondent a) does not have a legitimate interest in the domain name and b) registered and used the domain name in bad faith.”).

[6] Swisher International, Inc. v. Hempire State Smoke Shop, FA2106001952939 (FORUM July 27, 2021).

[7] Levine, supra note 1, at 359.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into one class action to be overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes: one covers TikTok users nationwide, and the other includes only TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. These allegations include accusations that TikTok shared consumer data with third parties without consent. The third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Data Access and Fraud Act and a violation of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations under the Biometric Information Privacy Act (BIPA). Under the BIPA, before collecting user biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident and must have used the app prior to October of 2021. If an individual meets these criteria, they must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, as compared to the one share the nationwide class is eligible for. This difference is due to the added protection the Illinois subclass has under BIPA.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed-upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said it would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. District Judge.

Conclusion

The lawyers representing TikTok users remarked that this settlement was “among the largest privacy-related payouts in history.” And, as noted by NPR, this settlement is similar to the one agreed to by Facebook in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively search out and cease practices that may violate users’ privacy.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their “For You Page” – the videos that appear when you open the app and scroll without doing any particular search – seems to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier now knowing TikTok has agreed to the settlement reforms.


Counter Logic Broadband

Justice C. Shannon, MJLST Staffer

In 2015, Zaqueri “Aphromoo” Black won his first North American League of Legends Championship Series (“LCS”) championship playing support for Counter Logic Gaming. Since 2013, at least forty players have made the starting lineups of the league’s eight to ten LCS teams. Aphromoo is the only African American to win an LCS MVP award. Aphromoo is the only African American player to win multiple LCS finals. Aphromoo is the only African American player to win a single LCS final. Aphromoo is the only African American player to make it to an LCS final. Aphromoo is the only African American player to participate in the LCS playoffs. Indeed, Aphromoo is the only African American player to have held a starting role on an LCS team. Why? At least in part, because of the digital divide.

More than a quarter of African Americans do not have broadband. Further, nearly 40% of African Americans in the rural South do not have broadband. One quarter of the Latinx population does not have broadband. These discrepancies mean that fewer African Americans and Latinx Americans can play online video games like League of Legends. Okay, but if the digital divide only affected esports, why should the nation care? The digital divide seen in esports is also seen in the American educational system. More than 15% of American households lacked broadband at the start of the pandemic. This gap was more pronounced in African American and Latinx households. These statistics demonstrate a national need to address the digital divide for entertainment purposes and, more importantly, educational purposes. So, what are some legal solutions to the digital divide? Municipal internet, subsidies, and low-income broadband laws.

Municipal Internet

Municipal broadband is not a new concept, but recently it has been seen as a solution to help address the digital divide. While the up-front cost to a city may be substantial, the long-term advantages can be significant. Highland, IL, and other communities across the United States provide high-speed internet for as low as $35 a month. Cities providing low-cost broadband through municipalities frequently offer competitive prices for gigabit speeds as well. The most significant downside to this solution is that these cities are frequently in rural locations that do not serve large populations. In addition, when municipalities attempt to provide broadband outside of their borders, state laws preempt them to protect ISPs. ISPs lobby for laws to deter or prevent municipal internet on the basis that such laws are necessary to prevent unfair competition; this fear of unfair competition, however, keeps communities from getting connected.

To avoid the preemption issue during the pandemic, some cities have established narrow versions of municipal broadband, providing free connectivity in heavily populated communities. For example, during the pandemic, Chattanooga, Tennessee, offered free broadband to low-income students. If these solutions stay in place, they will set an industry precedent for providing broadband to low-income communities.

Subsidies

The Emergency Broadband Benefit provides up to $50 per month toward broadband services for eligible households and $75 per month for households on tribal lands. To qualify for the program, a household must meet one of five standards. Congress created the program to help low-income households stay connected during the pandemic, allocating $3.2 billion to the FCC to enable the agency to provide the discount. The benefit also comes with a one-time device discount of up to $100, so that users not only have broadband but also have the tools to utilize it. The advantage of this subsidy is that it directly addresses low-income recipients’ inability to afford broadband, which can immediately help the more than 15% of American households without broadband.

The downside of this solution is that, to qualify, a recipient must share their income information on an unfamiliar webpage, which can feel invasive. Further, this plan does not permanently address the cost of broadband, and once it ends, it is possible that the same groups of Americans who could not afford broadband before will again lose access to the internet. Additionally, when the average cost of a laptop in America is $700, a discount of $100 does not do very much to ensure that users can properly benefit from their new broadband connection. If the goal is to ensure that users can attend classes, complete homework assignments, and maybe play esports on the side, then even a lower-cost tablet ($350 on average) would not solve the problem of needing adequate hardware to access broadband.

However, a program like this could be valued as a reasonable start if things continue to go in the right direction. A fair price for broadband is $60 a month; after the $50 subsidy, that leaves a recipient paying just $10 a month for competitive speeds and reliability. Reducing the cost of broadband that far could be a great tool for eliminating the digital divide, so long as the program persists after the pandemic.

Low-Income Broadband Laws

Low-cost broadband laws would require internet service providers to offer broadband plans to low-income recipients at a low price. This approach would directly help Americans who have physical access to broadband but cannot pay for it, thus helping to bridge the digital divide. Low-cost broadband plans such as New York’s proposed Affordable Broadband Act would require all internet service providers serving more than 20,000 households to provide two low-cost plans to qualifying (low-income) customers. However, New York’s law was stymied by ISPs arguing that it is an illegal way to close the digital divide, as states are preempted from rate regulation of broadband by the Federal Communications Commission.

The ISPs argued that the Affordable Broadband Act operated within the field of interstate commerce and was thus likely preempted by the Communications Act of 1934. Because broadband is almost always interstate commerce, other state laws similar to New York’s Affordable Broadband Act would probably run into the same issue. Thus, a low-income broadband law would likely need to come from the federal level to avoid the same road bumps.

The Future of Broadband and the Digital Divide

An overlapping theme among many of these solutions is that they were implemented during the pandemic; this raises the question: are these short-term responses to an unexpected, life-changing event, or rational long-term solutions to long-standing problems the pandemic merely exposed? If cities, states, and the nation stay the course and implement more low-cost broadband solutions such as municipal internet, subsidies, and low-income broadband laws, it will be possible to address the digital divide. However, if jurisdictions treat these solutions like short-term stopgaps, communities that cannot afford traditional broadband will again lose access. Students will again go to McDonald’s to do homework assignments, and Aphromoo may continue to be the only African American player in the LCS.