Data Privacy

iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators.

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 

But I Thought My Texts Were Private?

Regardless of the passcode on your phone, or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), authentication requires only proof sufficient to support a finding that the evidence at issue is what you claim it is.[3] Absent access to the defendant’s phone, a key way to authenticate texts is demonstrating the personal nature of the messages, which mirror the parties’ earlier communications.[4] However, for texts to be admitted as evidence over a hearsay objection, preserving the messages through screenshots, printouts, or other tangible forms of proof is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several negative effects. First, the mere availability of this capability may increase perpetrators’ use of text messages, since disappearing harassment is easier to get away with. Further, victims will be less likely to capture the evidence in the short window before the proof is rescinded but after the damage has already been done. Attorney Michelle Simpson Tuegel, who spoke out against this software, shared how “victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings.”[6] Finally, when victims are left without proof and the perpetrator denies ever sending the messages, psychological pain may result from such “gaslighting” and undermining of the victim’s experience.[7]

Why Are Text Messages So Important?

Text messages have been critical evidence in proving a defendant’s guilt in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her then 18-year-old boyfriend to commit suicide.[8] Not only were these texts of value in proving reckless conduct, but they also proved Carter guilty of involuntary manslaughter, as her words were shown to be the cause of the victim’s death. Without evidence of this communication, the case may have turned out very differently. Who is to say that Carter would not have gotten away with her abuse had she been able to unsend or edit her messages?

Text messaging is also a popular tool for perpetrators of sexual harassment, and such abuse happens every day. In a Rhode Island Supreme Court case, communication via iMessage was central to a finding of first-degree sexual assault, as the 17-year-old victim felt too afraid to receive a hospital examination after her attack.[9] Fortunately, she had saved photos of inappropriate messages the perpetrator sent after the incident, among other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and only with the help of a family member. This demonstrates that immediately memorializing evidence is often not a victim’s first instinct, especially when the content may be associated with shame or trauma. The new iOS feature may take away this opportunity to help one’s case through messages that can paint a picture of the incident or the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to recall messages and 15 minutes to edit them, is actually an amendment to Apple’s originally offered timeframe of 15 minutes to unsend. This change came in light of efforts by an advocate for survivors of sexual harassment and assault, who wrote a letter to Apple’s CEO warning of the dangers of the new unsending capability.[10] While the decreased timeframe that resulted leaves less room for abuse of the feature, editing is just as dangerous as unsending. With no limit on how much text can be changed, one could send full sentences of verbal abuse only to later replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window only gives them less time to react – less time to save the messages for evidence. While we can hope that the newly decreased window makes perpetrators think harder before sending a text they may not be able to delete, this is wishful thinking. The fact that almost half of young people have reported being victims of cyberbullying even when there was no option to rescind or edit one’s messages suggests that the length of the window likely does not matter.[11] The abilities of the new Apple software should be disabled; the “fix” to the update is not enough. The costs of what such a feature will do to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend iMessages on Your iPhone. Here’s How to Do It, CNBC (Sept. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 F. App’x 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, DIGITALTRENDS (July 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2018).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SLICKTEXT (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Would Autonomous Vehicles (AVs) Interfere With Our Fourth Amendment Rights?

Thao Nguyen, MJLST Staffer

Traffic accidents are a major issue in the U.S. and around the world. Although car safety features are continuously enhanced and improved, traffic crashes remain the leading cause of non-natural death for U.S. citizens. Most of the time, the primary causes are human errors rather than instrument failures. Therefore, autonomous vehicles (“AVs”), which promise to operate themselves without a human driver, are an exciting up-and-coming technology, studied and developed in both academia and industry.[1]

To drive themselves, AVs must be able to perform two key tasks: sensing the surrounding environment and “driving”—essentially replacing the eyes and hands of the human driver.[2] The standard AV design today includes a sensing system that collects information from the outside world, assisting the “driving” function. The sensing system is composed of a variety of sensors,[3] most commonly a Light Detection and Ranging (LiDAR) unit and cameras.[4] A LiDAR is a device that emits laser pulses and applies echo-ranging principles analogous to sonar, but with light, to obtain a depth estimation of the surroundings: the emitted laser pulses travel forward, hit an object, then bounce back to the receivers; the time taken for the pulses to travel back is measured, and the distance is computed. With this information about distance and depth, a 3D point cloud map of the surrounding environment is generated. In addition to precise 3D coordinates, most LiDAR systems also record “intensity,” the measure of the return strength of the laser pulse, which is based, in part, on the reflectivity of the surface struck by the pulse. LiDAR “intensity” data thus reveal helpful information about the surface characteristics of the surroundings. The two sensors complement each other: the camera conveys rich appearance data with more details on the objects, whereas the LiDAR captures 3D measurements.[5] Fusing the information acquired by each allows the sensing system to gain a reliable perception of the environment.[6]
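To make the time-of-flight idea concrete, here is a minimal Python sketch of how a single LiDAR range measurement can be computed. It is purely illustrative: the function name and the example timing are invented for this post and are not taken from any vendor’s actual LiDAR software.

```python
# Minimal sketch of LiDAR time-of-flight ranging. Illustrative only; real
# LiDAR firmware must also handle noise, multiple returns, and intensity
# calibration, none of which is modeled here.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """The pulse travels out to the object and back, so the one-way
    distance is half of the total distance the light covered."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns 400 nanoseconds after emission hit a surface ~60 m away.
print(distance_from_echo(400e-9))  # ~59.96 (meters)
```

Repeating this computation for millions of pulses per second, each fired at a known angle, is what yields the 3D point cloud described above.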

LiDAR sensing technology is usually combined with artificial intelligence, as its goal is to imitate and eventually replace human perception in driving. Today, most artificial intelligence systems use “machine learning,” a method that gives computers the ability to learn without being explicitly programmed. With machine learning, computers train themselves to do new tasks much as humans do: by exploring data, identifying patterns, and improving upon past experiences. Applied machine learning is data-driven: the greater the breadth and depth of the data supplied to the computer, the greater the variety and complexity of the tasks the computer can program itself to do. Since “driving” is a combination of multiple high-complexity tasks, such as object detection, path planning, localization, and lane detection, an AV that drives itself requires voluminous data in order to operate properly and effectively.
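The toy sketch below illustrates the “learning from data rather than explicit rules” idea with a one-nearest-neighbor classifier. The data points and labels are entirely invented for illustration and have nothing to do with any production AV system.

```python
# Toy illustration of learning from data: a one-nearest-neighbor classifier
# labels a new observation by copying the label of the most similar training
# example. No rule for "moving" vs. "stopped" is ever written by hand.
import math

# Invented (feature_vector, label) training examples.
training_data = [
    ((0.0, 0.0), "stopped"),
    ((0.9, 1.1), "moving"),
    ((1.0, 0.9), "moving"),
]

def classify(point):
    # The answer comes entirely from which past observation the new
    # point most resembles, i.e., from the data, not from coded rules.
    nearest = min(training_data, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((0.1, 0.2)))  # "stopped"
print(classify((1.2, 1.0)))  # "moving"
```

Adding more and better-labeled examples improves the classifier without changing a line of its logic, which is why data volume matters so much for AVs.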

“Big data” is already considered a valuable commodity in the modern world. In the case of AVs, however, the data will be of public streets and road users, and its large-scale collection is further empowered by various technologies that detect and identify, track and trace, and mine and profile data. When profiles of a person’s traffic movements and behaviors exist in a database somewhere, there is great temptation for the information to be used for purposes other than those for which it was originally collected, as has happened with much other “big data” today. Law enforcement officers who get their hands on AV data can track and monitor people’s whereabouts, pinpointing individuals whose trajectories touch on suspicious locations at a high frequency. The trajectories can be matched to identified individuals through car models and license plates. The police may then identify crime suspects by viewing the trajectories of everyone in the same town, rather than taking the trouble to identify and physically track each suspect. Can this use of data by law enforcement be sufficiently justified?

As we know, use of “helpful” police tools may be restricted by the Fourth Amendment, and for good reasons. Although surveillance helps police officers detect criminals,[7] extraneous surveillance has its social costs: restricted privacy and a sense of being “watched” by the government inhibit citizens’ productivity, creativity, and spontaneity, and cause other psychological effects.[8] Case law has given us guidance on how to interpret and apply the Fourth Amendment standards of “trespass” and “unreasonable searches and seizures” by the police. Two early cases, Olmstead v. United States, 277 U.S. 438 (1928), and Goldman v. United States, 316 U.S. 129 (1942), along with the modern case United States v. Jones, 565 U.S. 400 (2012), tie Fourth Amendment protection to physical intrusion into private homes and properties. Such protection would not be helpful in the case of LiDAR, which operates on public streets as a remote sensing technology. Nonetheless, despite the Jones case, the broader “reasonable expectation of privacy” test established by Katz v. United States, 389 U.S. 347 (1967) is more widely accepted. Instead of tracing the physical boundaries of “persons, houses, papers, and effects,” the Katz test asks whether there is an expectation of privacy that is socially recognized as “reasonable.” The Fourth Amendment “protects people, not places,” wrote the Katz Court.[9]

United States v. Knotts, 460 U.S. 276 (1983) was a public street surveillance case that invoked the Katz test. In Knotts, the police installed a beeper in a container carried in the defendant’s vehicle in order to track it. The Court found that such tracking on public streets was not prohibited by the Fourth Amendment: “A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[10] The Knotts Court thus applied the Katz test and considered whether there was an expectation of privacy that society recognizes as “reasonable.”[11] The Court’s answer was in the negative: unlike a person in his dwelling place, a person traveling on public streets “voluntarily conveyed to anyone who wanted to look the fact that he was traveling over particular roads in a particular direction.”[12]

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010), another public street surveillance case, this time from the twenty-first century, reconsidered the Knotts holding regarding the “reasonable expectation of privacy” on public streets. The Maynard defendant argued that the district court erred in admitting evidence acquired by the police’s warrantless use of a Global Positioning System (GPS) device to track his movements continuously for a month.[13] The Government invoked United States v. Knotts and its holding that “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[14] The D.C. Circuit, however, distinguished Knotts, pointing out that the Government in Knotts used a beeper that tracked a single journey, whereas the GPS monitoring in Maynard was sustained 24 hours a day, continuously, for one month.[15] The use of the GPS device over the course of one month did more than simply track the defendant’s “movements from one place to another”; it resulted in the discovery of the “totality and pattern” of his movements.[16] The court was willing to draw a distinction between “one path” and “the totality of one’s movements”: since the totality of one’s movements is much less exposed to the view of the public, and much more revealing of one’s personal life, it is constitutional for the police to track an individual on “one path,” but not to capture that same individual’s “totality of movement.”

Thus, with time the courts appear to be recognizing that when it comes to modern surveillance technology, the sheer quantity and the revealing nature of data collected on the movements of public street users ought to raise concerns. The straightforward application of these precedents to AV sensing data would be that data concerning a person’s “one path” can be obtained and used, but not the totality of a person’s movements. It is unclear where to draw the line between “one path” and “the totality of movement.” The surveillance in Knotts was intermittent over the course of three days,[17] whereas the defendant in Maynard was tracked for over one month. The limit would perhaps fall somewhere in between.

Furthermore, this straightforward application is complicated by the fact that the sensors utilized by AVs do not pick up merely locational information. As discussed above, AV sensing systems, composed of multiple sensors, capture both camera images and information about the speed, texture, and depth of surrounding objects. In other words, AVs do not merely track a vehicle’s location like a beeper or GPS; they “see” the vehicle through their cameras, LiDAR, and radar devices, gaining a wealth of information. This means that even if only data about “one path” of a person’s movement is extracted, that “one path” data as processed by AV sensing systems is far more in-depth than what a beeper or cell-site location information (CSLI) can communicate. Adding to this, current developers are proposing AV networks that share data among many vehicles, so that data on “one path” can be combined with other data of the same vehicle’s movements, or multiple views of the same “one path” from different perspectives can be combined. The extensiveness of these data goes far beyond the precedents in Knotts and Maynard. Thus, it is foreseeable that warrantless subpoenaing of AV sensing data would fall firmly within the Supreme Court’s definition of a “trespass.”

[1] Tri Nguyen, Fusing LIDAR sensor and RGB camera for object detection in autonomous vehicle with fuzzy logic approach, 2021 International Conference on Information Networking (ICOIN) 788, 788 (2021).

[2] Id. (“An autonomous vehicle or self-driving car is a vehicle having the ability to sense the surrounding environment and capable of operation on its own without any human interference. The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounting on it.”)

[3] Id. (“The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounted on it.”).

[4] Heng Wang and Xiaodong Zhang, Real-time vehicle detection and tracking using 3D LiDAR, Asian Journal of Control 1, 1 (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”)

[5] Id. (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”) (“Conversely, LiDARs are able to produce 3D measurements and are not affected by the illumination of the environment [9,10].”).

[6] Nguyen, supra note 1, at 788 (“Due to the complementary of two sensors, it is necessary to gain a more reliable environment perception by fusing the information acquired from these two sensors.”).

[7] Raymond P. Siljander & Darin D. Fredrickson, Fundamentals of Physical Surveillance: A Guide for Uniformed and Plainclothes Personnel, Second Edition (2002) (abstract).

[8] Tamara Dinev et al., Internet Privacy Concerns and Beliefs About Government Surveillance – An Empirical Investigation, 17 Journal of Strategic Information Systems 214, 221 (2008) (“Surveillance has social costs (Rosen, 2000) and inhibiting effects on spontaneity, creativity, productivity, and other psychological effects.”).

[9] Katz v. United States, 389 U.S. 347, 351 (1967).

[10] United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”).

[11] Id. at 282.

[12] Id.

[13] United States v. Maynard, 615 F.3d 544, 549 (D.C. Cir. 2010).

[14] Id. at 557.

[15] Id. at 556.

[16] Id. at 558 (“[O]ne’s movements 24 hours a day for 28 days as he moved among scores of places, thereby discovering the totality and pattern of his movements.”).

[17] Knotts, 460 U.S. at 276.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report it to the police and hold the criminal accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people generally understand that they can hold wrongdoers accountable and know the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. However, what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can regulate how people use the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things in it, often through a camera. Both virtual and augmented reality are used today, often in video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality games such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon GO, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
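For readers unfamiliar with the ledger concept, the toy Python sketch below shows the core idea behind a blockchain: a chain of records where each record embeds the hash of the one before it, so altering any past entry is detectable. It is purely illustrative and omits everything (consensus, signatures, networking) that real blockchain systems add.

```python
# Toy sketch of a hash-linked ledger. Each block stores the hash of the
# previous block, so tampering with any past entry breaks the chain.
import hashlib
import json

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)

chain = []
add_block(chain, "asset minted by Alice")       # hypothetical provenance entries
add_block(chain, "asset transferred to Bob")

# Tampering with the first block invalidates the link stored in the second.
chain[0]["data"] = "asset minted by Mallory"
recomputed = hashlib.sha256(
    json.dumps({"data": chain[0]["data"], "prev_hash": chain[0]["prev_hash"]},
               sort_keys=True).encode()
).hexdigest()
print(recomputed == chain[1]["prev_hash"])  # False -> tamper detected
```

This tamper-evidence is what lets platforms like Decentraland treat a ledger entry as a reliable record of who owns a digital asset.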

The Metaverse will allow people to do the activities they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, some examples come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring, such as new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman reported being gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world would feel as if they had actually experienced the assault in real life. This should raise red flags. The problem arises, however, when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this is something that will need to be addressed, as laws are needed to prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse, given users’ ability to mask their identity and remain anonymous. Therefore, it could be difficult to figure out who committed certain prohibited acts. At the moment, some of the virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. The problem remains, however, how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse exists outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”


Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice points to teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF 3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into one class action overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes – one covers TikTok users nationwide, and the other includes only TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. This improper use includes accusations that TikTok, without consent, shared consumer data with third parties. These third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Data Access and Fraud Act and a violation of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations of the Biometric Information Privacy Act (BIPA). Under BIPA, before collecting user biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident and must have used the app prior to October of 2021. If an individual meets these criteria, they must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, as compared to the one share the nationwide class is eligible for. This difference is due to the added protection the Illinois subclass has under BIPA.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said they would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. District Judge.

Conclusion

The lawyers representing TikTok users remarked that this settlement was “among the largest privacy-related payouts in history.” And, as noted by NPR, this settlement is similar to the one agreed to by Facebook in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively search out and cease practices that may be privacy violative.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their “For You Page” – the videos that appear when you open the app and scroll without doing any particular search – seem to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier now knowing TikTok has agreed to the settlement reforms.


The StingRay You’ve Never Heard Of: How One of the Most Effective Tools in Law Enforcement Operates Behind a Veil of Secrecy

Dan O’Dea, MJLST Staffer

One of the most effective investigatory tools in law enforcement has operated behind a veil of secrecy for over 15 years. “StingRay” cell phone tower simulators are used by law enforcement agencies to locate and apprehend violent offenders, track persons of interest, monitor crowds when intelligence suggests threats, and intercept signals that could activate devices. When operating passively, StingRays mimic cell phone towers, forcing all nearby cell phones to connect to them while extracting data in the form of call metadata, text messages, internet traffic, and location information, even when a connected phone is powered off. They can also inject spying software into phones and prevent phones from accessing cellular data. StingRays were initially used overseas by federal law enforcement agencies to combat terrorism before spreading into the hands of the Department of Justice and Department of Homeland Security, and they are now actively used by local law enforcement agencies in 27 states to solve everything from missing persons cases to thefts of chicken wings.

The use of StingRay devices is highly controversial due to their intrusive nature. Not only does the use of StingRays raise privacy concerns, but tricking phones into connecting to StingRays mimicking cell phone towers prevents those phones from accessing legitimate cell service towers, which can obstruct access to 911 and other emergency hotlines. Perplexingly, the use of StingRay technology by law enforcement is almost entirely unregulated. Local law enforcement agencies frequently cite secrecy agreements with the FBI and the need to protect an investigatory tool as means of denying the public information about how StingRays operate, and criminal defense attorneys have almost no means of challenging their use without this information. While the Department of Justice now requires federal agents to obtain a warrant to use StingRay technology in criminal cases, an exception is made for matters relating to national security, and the technology may have been used to spy on racial-justice protestors during the summer of 2020 under this exception. Local law enforcement agencies are almost completely unrestricted in their use of StingRays and may even conceal their use in criminal prosecutions by tagging their findings as those of a “confidential source,” rather than admitting the use of a controversial investigatory tool. Doing so allows prosecutors to avoid battling Fourth Amendment arguments characterizing data obtained by StingRays as the product of an unlawful search and seizure.

After existing in a “legal no-man’s land” since the technology’s inception, StingRays drew congressional attention in June 2021, when Senator Ron Wyden (D-OR) and Representative Ted Lieu (D-CA) sought to put an end to their secrecy by introducing the Cell-Site Simulator Warrant Act of 2021. The bill would have mandated that law enforcement agencies obtain a warrant to investigate criminal activity before deploying StingRay technology, while also requiring law enforcement agencies to delete the data of phones other than those of investigative targets. Further, the legislation would have required agencies to demonstrate a need to use StingRay technology that outweighs any potential harm to the community impacted by the technology. Finally, the bill would have limited authorized use of StingRay technology to the minimum amount of time necessary to conduct an investigation. However, the Cell-Site Simulator Warrant Act of 2021 appears to have died in committee after failing to garner significant legislative support.

Ultimately, no device with the intrusive capabilities of StingRays should be allowed to operate free from the constraints of regulation. While StingRays are among the most effective tools utilized by law enforcement, they are also among the most intrusive into the privacy of the general public. It logically follows that agencies seeking to operate StingRays should be required to make a showing of a need to utilize such an intrusive investigatory tool. In certain situations, it may be easy to establish the need to deploy a StingRay, such as doing so to further the investigation of a missing persons case. In others, law enforcement agencies would correctly find their hands tied should they wish to utilize a StingRay to catch a chicken wing thief.


What the SolarWinds Hack Means for the Future of Law Firm Cybersecurity

Sam Sylvan, MJLST Staffer

Last December, the massive software company SolarWinds acknowledged that its popular IT-monitoring software, Orion, had been hacked earlier in the year. The software was sold to thousands of SolarWinds’ clients, including government agencies and Fortune 500 companies. A software update of Orion provided Russian-backed hackers with a backdoor into the internal systems of approximately 18,000 SolarWinds customers—a number likely to increase over time as more organizations discover that they too are victims of the hack. Even FireEye, the cybersecurity company that first identified the hack, learned that its own systems were compromised.

The hack has widespread implications for the future of cybersecurity in the legal field. Courts and government attorneys were not able to avoid the Orion hack. The cybercriminals were able to hack into the DOJ’s internal systems, leading the agency to report that the hackers might have breached 3,450 DOJ email inboxes. The Administrative Office of the U.S. Courts is working with DHS to audit vulnerabilities in the CM/ECF system, where highly sensitive non-public documents are filed under seal. As of late February, no law firms had announced that they too were victims of the hack, likely because law firms do not typically use Orion software for their IT management. Even so, the Orion hack is a wake-up call to law firms across the country regarding their cybersecurity. There have been hacks before, including hacks of law firms, but nothing of this magnitude or potential level of sabotage. Now more than ever, law firms must contemplate and implement preventative measures and response plans.

Law firms of all sizes handle confidential and highly sensitive client documents and data. Oftentimes, firms have IT specialists but lack cybersecurity experts on the payroll—somebody internal who can continually develop the firm’s cybersecurity defenses. The SolarWinds hack shows why this needs to change, particularly for law firms that handle an enormous volume of highly confidential and sensitive client documents and can afford to add these experts to their ranks. Relying exclusively on consultants or other third parties for cybersecurity only further jeopardizes the security of law firms’ document management systems and caches of electronically stored client documents. Indeed, it is reliance on third-party vendors that enabled the SolarWinds hack in the first place.

In addition to adding a specialist to the payroll, there are a number of other specific measures that law firms can take in order to address and bolster their cybersecurity defenses. For those of us who think it is not a matter of “if” but rather “when,” law firms should have an incident response plan ready to go. According to Jim Turner, chief operating officer of Hilltop Consultants, many law firms do not even have an incident response plan in place.

Further, because complacency and outdated IT software are of particular concern for law firms, “vendor vulnerability assessments” should become commonplace across all law firms. False senses of protection need to be discarded, and constant reassessment should become the norm. Moreover, firms should upgrade the type of software protection they have in place to include endpoint detection and response (EDR), which uses AI to detect hacking activity on systems. Lastly, purchasing cyber insurance is a strong safety measure in the event a law firm has to respond to a breach, as it would provide the additional resources needed to respond effectively to hacks.


I’ve Been Shot! Give Me a Donut!: Linking Vaccine Verification Apps to Existing State Immunization Registries

Ian Colby, MJLST Staffer

The gold rush for vaccination appointments is in full swing. After Governor Walz and many other governors announced an acceleration of vaccine eligibility in their states, the newly eligible desperately sought vaccinations to help the world achieve herd immunity to the SARS-CoV-2 virus (“COVID”) and get back to normal life.

The organization administering a person’s initial dose typically gives the recipient an approximately 4” x 3” card that provides the vaccine manufacturer, the date and location of inoculation, and the Centers for Disease Control (“CDC”) logo. The CDC website does not specify what, exactly, this card is for. Likely reasons include informing the patient about the healthcare they just received, a reminder card for a second dose, or providing batch numbers in case a manufacturing issue arises. Maybe they did it for the ‘Gram. However, regardless of the CDC’s reason for the card, many news outlets have latched onto the most likely future use for them: as a passport to get the post-pandemic party started.

Airlines, sports venues, schools, and donut shops are desperate to return to safe mass gatherings and close contact without needing to enforce as many protective measures. These organizations, in the short term, will likely seek assurance of a person’s vaccination status. Aside from the equitable and scientific issues with requiring this assurance, these businesses will likely get “proof” via these CDC vaccination cards. The cardboard-and-ink security of these cards rivals social security cards in the “high importance – zero protection” category. Warnings of scammers providing blank CDC cards or stealing the vaccinated person’s name and birthdate hit the web last week (no scammers needed: you can get Missouri’s PDF to print one for free).

With so little security, but with a business-need to reopen the economy to vaccinated folks, businesses and governments have turned to digital vaccine passports. Generically named “digital health passes,” these apps will allow a person to show proof of their vaccination status securely. They “provide a path to reviving the economy and getting Americans back to work and play” according to a New York Times article. “For any such certificate or passport to work, it is going to need two things – access to a country’s official records of vaccinations and a secure method of identifying an individual and linking them to their health record.”

A variety of actors, both governments and private firms, have undertaken the development of these digital health passes. Israel already provides a nationwide digital proof of vaccination known as a Green Pass. Denmark followed suit with the Coronapas. In addition, a number of private companies and nonprofits are vying to become the preeminent vaccine status app for the world’s smartphones. While governments, such as Israel’s, have preexisting authority to access immunization and identification records, private firms do not. Private firms would require authorization to access your medical records.

So, in the United States, who would run these apps? Not the U.S. federal government. The Biden Administration unequivocally denied that it would ever require vaccine status checks or keep a vaccination database. The federal government does not need to, though. Most states already manage a digital vaccination database pursuant to laws authorizing them; every other state maintains a digital database anyway, without direct authorization. These immunization information systems (“IIS”) provide quick access to a person’s vaccination status. A state’s resident can request their status for myriad vaccinations for free and receive the results via email. Texas and Florida, which made big hubbubs about restricting any use of vaccine passports, both have registries that provide proof of vaccination. So does New York, which has already published an app, known as the Excelsior Pass, that does this for the COVID vaccine. The State’s app pulls information from New York’s immunization registry, providing a quick, simple yes-no result for those requiring proof. The app uses IBM’s blockchain technology, which is “designed to enable the secure verification of health credentials such as test results and vaccination records without the need to share underlying medical and personal information.”
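To illustrate the general shape of such a yes-no verification, here is a minimal Python sketch of a digitally signed health credential. This is an assumption-laden toy, not IBM’s or New York’s actual protocol: all names and claim formats are invented, and it relies on the third-party cryptography package.

```python
# Minimal sketch of a "yes/no" health pass: the state registry signs only a
# minimal claim, and a venue verifies the signature without ever seeing the
# underlying medical record. Illustrative only; claim format is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# State registry side: sign the minimal claim, not the medical history.
state_key = Ed25519PrivateKey.generate()
claim = b"holder_id=1234;vaccinated=yes"
signature = state_key.sign(claim)

# Venue side: holds only the state's public key, the claim, and the signature.
public_key = state_key.public_key()
try:
    public_key.verify(signature, claim)
    print("Pass accepted")  # a yes/no result; no medical records exchanged
except InvalidSignature:
    print("Pass rejected")
```

The design point is that the verifier learns a single bit, vaccinated or not, rather than anything from the holder’s broader immunization record.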

With so many options, consumers of vaccine status apps could become overwhelmed. A vaccinated person may need to download innumerable apps to enter myriad activities. “Fake” apps could ask for additional medical information from the unwary. Private app developers may try to justify continued use of the app after the need for COVID vaccination proof passes.

In this competitive atmosphere, apps that partner with state governments likely provide the best form of digital vaccination verification. These apps have direct approval from the states that are required by law to maintain vaccination records, and that official backing lends them authority that helps users avoid scams. Cooperation to achieve standardization of these apps across states may also facilitate greater use. States seeking to reopen their economies should authorize digital interfaces with their pre-existing immunization registries. Now that the gold rush for vaccinations has started, the gold rush for vaccine passports is something to keep an eye on.



Ways to Lose Our Virtual Platforms: From TikTok to Parler

Mengmeng Du, MJLST Staffer

Many Americans bid farewell to the somewhat rough 2020 but found the beginning of 2021 rather shocking. After President Trump’s followers stormed the Capitol Building on January 6, 2021, major U.S. social media platforms, including Twitter, Facebook, Instagram, and Snapchat, moved fast to block the nation’s president. While everybody was still in shock, a second wave hit. Apple’s iOS App Store, Google’s Android Play Store, Amazon Web Services, and other service providers decided to remove Parler, an app used by Trump supporters in the riot and favored mostly by conservatives. Finding himself virtually homeless, President Trump relocated to TikTok, a Chinese-owned short-video sharing app that his own administration had relentlessly sought to ban ever since July 2020. Ironically, but not unexpectedly, TikTok banned President Trump before he could even ban TikTok.

Dating back to June 2020, the fight between TikTok and President Trump germinated when the app’s Chinese parent company, ByteDance, was accused of discreetly accessing the clipboard content on users’ iOS devices. Although the company argued that the accused technical feature was set up as an “anti-spam” measure and would be immediately removed, the Trump administration signed Executive Order 13942 on August 6, 2020, citing national security concerns to ban the app in five stages. TikTok responded swiftly with a lawsuit, and the District Court for the District of Columbia issued a preliminary injunction on September 27, 2020. At the same time, knowing that the root of the problem lay in its “Chinese nationality,” ByteDance desperately sought acquisition by U.S. corporations to make TikTok U.S.-owned and dodge the banishment, even at the cost of billions of dollars and, worse, its future in the U.S. market. The sale soon drew qualified bidders including Microsoft, Oracle, and Walmart, but has not advanced far since September due to pressure coming from both Washington and Beijing.

Targeted alongside TikTok was another Chinese app called WeChat. If banning TikTok means that American teens will lose their favorite virtual platform for life-sharing amid the pandemic, blocking WeChat means much more. It heavily burdens one particular minority group––the hundreds of thousands of Chinese Americans and Chinese citizens in America who use WeChat. This group fears losing connection with their families and becoming disengaged from the social networks they have built once the vital social platform disappears. For more insight, this is a blog post that talks about the impact of the WeChat ban on Chinese students studying in the United States.

In response to the WeChat ban, several Chinese American lawyers led the creation of the U.S. WeChat Users Alliance. Supported by thousands of U.S. WeChat users, the Alliance is a non-profit organization independent of Tencent, the owner of WeChat, and was formed on August 8, 2020 to advocate for all who are affected by the ban. Subsequently, the Alliance brought suit in the United States District Court for the Northern District of California against the Trump administration and received its first victory in court on September 20, 2020, when Judge Laurel Beeler issued a preliminary injunction against Trump’s executive order.

Law is powerful. Article Two of the United States Constitution vests broad executive power in the president of this country to discretionarily determine how to enforce the law via issuance of executive orders. Therefore, President Trump was able to seize on a cause that seemed satisfying to him and ban TikTok and WeChat for their Chinese “nationality.” Likewise, the First Amendment of the Constitution and section 230 of the Communications Decency Act empower private Internet forum providers to screen and block offensive material. Thus, TikTok, following its peers, found legal justification to ban President Trump, and Apple can keep Parler out of the reach of Trump supporters. But power can corrupt. It is true that TikTok and WeChat are owned by Chinese companies, but an app, a technology, does not take on a nationality from its ownership. What happened on January 6, 2021 in the Capitol Building was shameful, but it does not justify the removal of Parler. Admittedly, regulations and even censorship on private virtual platforms are necessary for national security and other public interest purposes. But the solution shouldn’t be simply making platforms unavailable.

As a Chinese student studying in the United States, I personally felt the impact of the WeChat ban. I feel fortunate that the judicial check the U.S. legal system puts on the executive power saved WeChat this time, but I do fear for the future of internet forum regulation.



Becoming “[COVID]aware” of the Debate Around Contact Tracing Apps

Ellie Soskin, MJLST Staffer

As COVID-19 cases continue to surge, states have ramped up containment efforts in the form of mask mandates, business closures, and other public health interventions. Contact tracing is a vital part of those efforts: health officials identify those who have been in close contact with individuals diagnosed with COVID-19 and alert them of their potential exposure to the virus, while withholding identifying information. But traditional contact tracing for a true global pandemic requires a lot of resources. Accordingly, a number of regions have looked to smartphone-based exposure notification technology as an innovative way to both supplement and automate containment efforts.

Minnesota is one of the latest states to adopt this approach: on November 23rd, the state released “COVIDaware,” a phone application designed to notify individuals if they have been exposed to someone diagnosed with COVID-19. Minnesota’s application utilizes notification technology developed jointly by Apple and Google, joining sixteen other states and the District of Columbia, with more expected to roll out in the coming weeks. The nature of the technology raises a number of complex concerns over data protection and privacy. Additionally, these apps are more effective the more people use them, and lingering questions remain as to compliance and the feasibility of mandating use.

The joint Apple/Google notification software used in Minnesota is designed with an emphasis on privacy. The software uses anonymous identifying numbers (“keys”) that change rapidly, does not solicit identifying information, does not access GPS data, and stores data only locally on each user’s phone, rather than on a server. The keys are exchanged via a localized Bluetooth connection operating in the background. The system can also be turned off, and it relies wholly on self-reports. For Minnesota, accurate reports come in the form of state-issued verification codes provided with positive test results. The COVIDaware app checks daily to see if any keys contacted within the last 14 days have recorded positive test results. Minnesota policymakers, likely aware of the intense privacy concerns triggered by contact tracing apps, have emphasized the minimal data collection required by COVIDaware.
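The daily matching step can be sketched in a few lines of Python. This is a deliberately simplified model of the Apple/Google design, not its actual implementation: the key strings, dates, and function names below are all invented for illustration.

```python
# Simplified sketch of the exposure-notification matching step: the phone
# compares the anonymous keys it overheard via Bluetooth in the last 14 days
# against keys published for users who entered a verification code with a
# positive test. Everything stays local; nothing about the user is uploaded.
from datetime import date, timedelta

RETENTION_DAYS = 14

# Keys this phone overheard nearby, with the date each was heard (local only).
heard_keys = {
    ("a3f1...", date(2020, 11, 20)),
    ("9bc2...", date(2020, 11, 22)),
}

def check_exposure(published_positive_keys: set, today: date) -> bool:
    """Return True if any recently heard key matches a published positive key."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    recent = {key for key, seen in heard_keys if seen >= cutoff}
    return bool(recent & published_positive_keys)

# Daily check against the keys published for verified positive tests.
print(check_exposure({"9bc2..."}, date(2020, 11, 23)))  # True -> notify user
```

Because only ever-changing random keys are compared, a match tells the phone “you were near a positive case,” not who that case was or where the contact happened.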

The data privacy regulatory scheme in the United States is incredibly complex, as there is no single unified federal data protection policy. Instead, the sphere is dominated by individual states. Federal law enters the picture primarily via the Health Insurance Portability and Accountability Act (“HIPAA”), which does not apply to patients voluntarily giving health information to third parties. In response to concerns over contact tracing app data, multiple data privacy bills were introduced in Congress, but even the bipartisan “Exposure Notification Privacy Act” remains unpassed.

Given the decentralized nature of the internet, applications tend to be designed to comply with all 50 states’ policies. However, in this case, state-created contact tracing applications are designed for local use, so from a practical perspective states may only have to worry about compliance with neighboring states’ data privacy acts. The Minnesota Government Data Practices Act, passed in 1974, is the only substantive Minnesota statute affecting data collection, and neighboring states’ (Wisconsin, Iowa, North Dakota, and South Dakota) laws have similarly limited or dated schemes. In this specific case, the privacy-focused Apple/Google API that forms the backbone of COVIDaware, together with the design of the app itself, described briefly above, likely keeps it compliant. In fact, some states have expressed frustration at the degree of individual privacy afforded by the Apple/Google API, saying it can stymie coordinated public health efforts.

Of course, one solution to even minimal data privacy concerns is simply not to use the application. But the efficacy of contact tracing apps depends entirely on whether people actually download and use them. Some countries have opted for degrees of mandatory use: China has mandated adoption of its contact tracing app for every citizen, utilizing unprecedented government surveillance to flag individuals potentially exposed, and India has made employers responsible for ensuring every employee download its government-developed contact tracing app. While a similar employer-based approach is not legally impossible in the United States, any such mandate would be legally complex, and anyone following the controversy over mask mandates should instinctively recognize that a mandated government tracking app is a hard sell (to put it lightly).

But mandates may not even be necessary. Experts have emphasized that universal compliance isn’t necessary for an app to be effective: every user helps. Germany and Ireland have not mandated use, but have download rates of 20% and 37%, respectively. Some have proposed small, community-focused launches of tracking apps, similar to successful start-ups. With proper marketing and transparency, states need not even enter the sticky legal mess that is mandating compliance.

Virtually every policy response to COVID in the United States has been met with heated controversy, and tracking apps are no different. As these apps are in their infancy, legal challenges have yet to emerge, but the area in general is something of a minefield. The limited and voluntary nature of Minnesota’s COVIDaware app likely places it outside the realm of significant legal challenges and significant data privacy concerns, at least for the moment. The general conversation around contact tracing apps is a much larger one, however, and has helped put data privacy and end-user control into the global conversation.