Data Privacy

A “Living” AI: How ChatGPT Raises Novel Data Privacy Issues

Alexa Johnson-Gomez, MJLST Staffer

At the end of 2022, ChatGPT arrived on the scene with tremendous buzz and discourse to follow. “Is the college essay dead?”[1] “Can AI write my law school exams for me?”[2] “Will AI like ChatGPT take my job?”[3] While the public has been grappling with the implications of this new technology, an area that has been a bit less buzzy is how this massive boom in AI technology inextricably involves data privacy.

ChatGPT is a machine learning model that constantly evolves through a process of collecting and training on new data.[4] In teaching AI to generate text in a natural language style, computer scientists engage in “generative pre-training,” feeding the model huge swaths of unlabeled text, followed by repeated rounds of “fine-tuning.”[5] Since its public launch, that process has only grown in scale; the chatbot continues to utilize its interactions with users to fine-tune itself. This author asked ChatGPT itself how its machine learning implements user data, and it described itself as a “living” AI—one that is constantly growing with new user input. While such a statement might evoke dystopian sci-fi themes, perhaps much more unsettling is the concept that this AI is indiscriminately sucking in user data like a black hole.
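OpenAI has not published ChatGPT's training pipeline, but the core idea described above, that the model is a product of the data it ingests and that user interactions are folded back in as further training data, can be illustrated with a toy sketch. The following Python example is purely illustrative; all names and data are invented, and a real language model is vastly more complex than a bigram counter.

```python
from collections import defaultdict

class ToyLanguageModel:
    """A toy next-word predictor based on bigram counts."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text: str) -> None:
        # Count how often each word follows another.
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word: str):
        # Return the most frequent continuation seen so far, if any.
        followers = self.counts.get(word.lower())
        if not followers:
            return None
        return max(followers, key=followers.get)

model = ToyLanguageModel()
# "Pre-training": a pass over existing text.
model.train("the court held that the statute was valid")
# "Fine-tuning": later user interactions keep reshaping the model.
model.train("the statute was outdated the statute was outdated")
print(model.predict("statute"))  # prints "was"
```

Before the second `train` call, `model.predict("was")` would return "valid"; afterward it returns "outdated". That shift is the sense in which user input continuously becomes part of the model itself.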

In an era where “I didn’t read the privacy policy” is the default attitude, understanding what an AI might be able to glean from user data seems far beyond the purview of the general public. Yet this collection of user data is more salient than ever. Sure, one might worry about Meta targeting its advertisements based on user data or Google recommending restaurants based on users’ GPS data. In comparison, the way our data is being used by ChatGPT is in a league of its own. User data is being iterated upon, and most importantly, it is dispositive in how ChatGPT learns about us and our current moment in human culture.[6] User data is creating ChatGPT; it is ChatGPT.

At the same time, the general public may not have full awareness of what privacy protections—or lack thereof—are in place in the United States. In brief, American law tends to favor free expression over the protection of individual privacy. The statute that regulates information sent over the Internet is the Electronic Communications Privacy Act (ECPA), 18 U.S.C. §§ 2510–2523. Enacted in 1986, the bulk of the ECPA predates the modern internet, and its amendments have been meager changes that do not keep up with technological advancement. Most of the ECPA addresses interception of communications (for example, by wiretapping) and government access to electronic communications via warrants. “Electronic communications” may be a concept that includes the Internet, yet the Internet is far too amorphous to be regulated by this outdated Act, and AI tools existing on the Internet are several technological steps beyond its scope.

In contrast, the European Union regulates online data with the General Data Protection Regulation (GDPR), which governs the collection, use, and storage of personal data of people in the EU. The GDPR applies to all companies whose services reach individuals within the EU, regardless of where the company is based, and non-compliance can result in significant fines and legal penalties. It is considered one of the most comprehensive privacy regulations in the world. Since ChatGPT is accessible by those in the EU, interesting questions arise from the fact that the use and collection of data is the base function of this AI. Does the GDPR even allow for the use of ChatGPT, considering how user data is constantly used to evolve the technology?[7] Collecting and using EU residents’ personal data without a lawful basis violates the GDPR, but whether ChatGPT’s fine-tuning process constitutes such a “use” is not clear; if it does, that process could arguably violate the regulation.

While a somewhat unique use case, a particularly troubling example raised by a recent Forbes article is a lawyer using ChatGPT to generate a contract and inputting confidential information into the chatbot in the process.[8] That information is stored by ChatGPT, and the disclosure could potentially violate ABA rules. As ChatGPT stirs even more public fervor, professionals are likely to try to use the tool to make their work more efficient or thorough. But individuals should think long and hard about what kind of information they are inputting into the tool, especially if confidential or personally identifying information is at play.

The privacy policy of OpenAI, the company responsible for ChatGPT, governs ChatGPT’s data practices. OpenAI stipulates that it collects information including contact information (name, email, etc.), profiles, technical information (IP address, browser, device), and interactions with ChatGPT. OpenAI “may” share data with third parties that perform services for the company (e.g., website hosting, conducting research, customer service), affiliates and subsidiaries of the company, the government and law enforcement, “or other third parties as required by law.” OpenAI explicitly claims to comply with the GDPR and other privacy laws like the California Consumer Privacy Act (CCPA), in that transparency is a priority and users can access and delete data upon request. However, such compliance may be in name only, as these regulations did not even contemplate what it means for user data to form the foundation of a machine learning model.

In conclusion, the rapid growth of AI technology presents important data privacy issues that must be addressed by lawmakers, policy experts, and the public alike. The development and use of AI arguably should be guided by regulations that balance innovation with privacy concerns. Yet public education is perhaps the most vital element of all, as regulation of this sort of technology is likely to take a long time in the U.S., if ever. If users of ChatGPT can be cognizant of what they are inputting into the tool, and stay informed about what kind of obligation OpenAI has to its users’ privacy, then perhaps privacy can be somewhat protected.

Notes

[1] Stephen Marche, The College Essay is Dead, The Atlantic (Dec. 6, 2022), https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/.

[2] Jonathan H. Choi et al., ChatGPT Goes to Law School (2023).

[3] Megan Cerullo, AI ChatGPT Is Helping CEOs Think. Will It Also Take Your Job?, CBS News (Jan. 24, 2023), https://www.cbsnews.com/news/chatgpt-chatbot-artificial-intelligence-job-replacement/.

[4] Richie Koch, ChatGPT, AI, and the Future of Privacy, Proton (Jan. 27, 2023), https://proton.me/blog/privacy-and-chatgpt.

[5] Alec Radford & Karthik Narasimhan, Improving Language Understanding by Generative Pre-Training (2018).

[6] Lance Eliot, Some Insist That Generative AI ChatGPT Is a Mirror Into the Soul of Humanity, Vexing AI Ethics and AI Law, Forbes (Jan. 29, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/?sh=1f2940bd12db.

[7] Kevin Poireault, #DataPrivacyWeek: Addressing ChatGPT’s Shortfalls in Data Protection Law Compliance, Info Security Magazine (Jan. 28, 2023), https://www.infosecurity-magazine.com/news-features/chatgpt-shortfalls-data-protection/.

[8] Lance Eliot, Generative AI ChatGPT Can Disturbingly Gobble Up Your Private and Confidential Data, Forewarns AI Ethics and AI Law, Forbes (Jan. 27, 2023), https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=9f856a47fdb1.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. Meta’s “Pixel” tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may be in violation of a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel’s use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] In theory, the actions that are tracked include purchases, registrations, cart additions, searches and more. This information can then be used by the website owners to better understand user behavior. Website owners can more efficiently use ad spend by tailoring ads to relevant users and finding more receptive users based on Pixel’s analysis.[5]

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between these other tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to social profiles on Meta’s flagship website, Facebook. Whether intentionally or accidentally, Pixel has been found to have grabbed personal information beyond the simple user web actions it was supposed to be limited to and connected that information to Facebook profiles.[6]
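To make that second difference concrete, here is a hypothetical sketch, in Python, of the kind of join that turns "anonymous" analytics events into identified browsing histories once the tracker and the social network share a common identifier. All data, field names, and identifiers below are invented; this is not Meta's actual code or data model.

```python
# Hypothetical events recorded by a tracking script on a website.
site_events = [
    {"user_id": "abc123", "action": "ViewContent", "page": "/patient-portal"},
    {"user_id": "abc123", "action": "Search", "page": "/find-a-doctor"},
    {"user_id": "def456", "action": "Purchase", "page": "/checkout"},
]

# Hypothetical social-network accounts keyed by the same identifier.
social_profiles = {
    "abc123": {"name": "Jane Doe"},
    "def456": {"name": "John Smith"},
}

# The join that links web activity to a named profile.
matched = []
for event in site_events:
    profile = social_profiles.get(event["user_id"])
    if profile:
        matched.append((profile["name"], event["action"], event["page"]))

for name, action, page in matched:
    print(f"{name}: {action} on {page}")
```

Each analytics event looks harmless on its own; it is the shared identifier that lets the event be attached to a real name, which is the crux of the privacy complaints against Pixel.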

Pixel and Patient Healthcare Information

It’s estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] However, that number may decrease after Meta’s recent data privacy issues. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel incorrectly retrieved private patient information, including names, conditions, email addresses, and more. Meta then targeted hospital website users with ads on Facebook, using the information Pixel collected from hospital websites and patient portals and matching that information with users’ Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and then added Pixel code to its website to evaluate the effectiveness of the campaign. Pixel proceeded to send private and identifiable user information to Meta.[9] Another hospital system (and Meta’s co-defendant in the lawsuit), the University of California San Francisco and Dignity Health (“UCSF”), was accused of illegally gathering patient information via Pixel code on its patient portal, and private medical information was then distributed to Meta. At some point, it is claimed, pharmaceutical companies gained access to this medical information and sent out targeted ads based thereon.[10] That is just one example; all in all, more than 1 million patients have been affected by the Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions against Meta for violations of the Video Privacy Protection Act (the “VPPA”). The VPPA was enacted in 1988 to cover the rental records of videotapes and similar audio-visual materials. No longer confined to the rental store, the VPPA has taken on a much broader meaning with the growth of the internet.

These class actions accuse Meta of using the Pixel tool to take video user data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law, and many more. The classes allege that Pixel violated the VPPA by collecting video viewing activity in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than collecting it anonymously) so that Pixel users could target their ads at the viewers. Under the VPPA, the defendants in these lawsuits are not Meta but the companies that shared user information with Meta.[12]

Causes of Action

The relatively new area of data privacy is scarcely litigated by the federal government due to the lack of federal statutes protecting consumer privacy, so the number of civil data protection litigants can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act of 1996, protects patient information from disclosure without patient consent, but in the patient portal cases, HIPAA actions would have to be initiated by the U.S. government. Claimants are therefore suing Meta under consumer protection and other privacy laws, like the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and the Comprehensive Computer Data Access and Fraud Act, instead.[14] These laws allow individuals to sue directly, whereas under a law like HIPAA, the government may move slowly, or not at all. And in the cases of video tracking, the litigants may only sue the video provider, not Meta itself.[15] Despite that wrinkle of benefit to Meta, its involvement in more privacy disputes is not ideal for the tech giant, as it may hurt the trustworthiness of Meta Platforms in the eyes of the public.

Possible Outcomes

If found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run in the millions as well and would vary state to state due to the variety of statutes under which the claims are brought.[17] Specifically, in the UCSF data case, class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of either hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, these terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent needs to come from forms separate from online user agreements.[19] If more lawsuits emerge from the actions of Pixel, it is quite possible that companies will move away from web analytics tools to avoid potential liability. It remains to be seen whether the convenience and utility of Meta Pixel stops being worth the risk such tools present to websites.

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel’s Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the latest hit with a class action over Meta’s alleged patient data mining, Fierce Healthcare (Aug. 12, 2022, 10:30 AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 
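The time limits above can be expressed as a pair of simple rules. The sketch below is inferred from Apple's public description of the feature (a 15-minute window and five-edit cap for editing, a two-minute window for unsending), not from Apple's actual implementation.

```python
EDIT_WINDOW_SECONDS = 15 * 60    # editable for up to 15 minutes
UNSEND_WINDOW_SECONDS = 2 * 60   # unsendable for up to 2 minutes
MAX_EDITS = 5                    # at most five edits per message

def can_edit(seconds_since_delivery: float, edits_so_far: int) -> bool:
    # A message is editable while inside the window and under the edit cap.
    return (seconds_since_delivery <= EDIT_WINDOW_SECONDS
            and edits_so_far < MAX_EDITS)

def can_unsend(seconds_since_delivery: float) -> bool:
    # Unsending is allowed only within the shorter two-minute window.
    return seconds_since_delivery <= UNSEND_WINDOW_SECONDS

print(can_edit(10 * 60, 2))  # prints True: within 15 minutes, under 5 edits
print(can_unsend(3 * 60))    # prints False: the 2-minute window has passed
```

The asymmetry matters for the evidentiary concerns discussed in this post: a message can still be rewritten long after the unsend window has closed.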

But I Thought my Texts Were Private?

Regardless of the passcode on your phone or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), such authentication only requires proof sufficient to support a finding that the evidence at issue is what you claim it is.[3] Absent access to the defendant’s phone, a key way to authenticate texts is demonstrating the personal nature of the messages, which mirror earlier communications.[4] However, for texts to be admitted as evidence, preserving the messages through screenshots, printouts, or other tangible methods is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several negative effects. First, the mere availability of this capability may increase perpetrators’ use of text messages, since disappearing harassment will be easier to get away with. Further, victims will be less likely to capture the evidence in the short window before the proof is rescinded but after the damage has already been done. Attorney Michelle Simpson Tuegel, who spoke out against this software, shared how “victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings.”[6] Finally, when victims are without proof and the perpetrator denies sending anything, psychological pain may result from such “gaslighting” and undermining of the victim’s experience.[7]

Why are Text Messages so Important?

Text messages have been critical evidence in proving the guilt of the defendant in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her boyfriend to commit suicide.[8] Not only were these texts of value in proving reckless conduct, they also proved Carter guilty of involuntary manslaughter, as her words were shown to be the cause of the victim’s death. Without evidence of this communication, the case may have turned out very differently. Who is to say that Carter would not have escaped accountability by sending and then unsending or editing her messages?

Text messaging is also a popular tool for perpetrators of sexual harassment, and such abuse happens every day. In a Rhode Island Supreme Court case, communication via iMessage was central to a finding of first-degree sexual assault, as the 17-year-old victim felt too afraid to receive a hospital examination after her attack.[9] Fortunately, the victim had saved photos of inappropriate messages the perpetrator sent after the incident, amongst other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and with the help of a family member. This demonstrates that immediately memorializing evidence is often not a victim’s first instinct, especially when the content may be associated with shame or trauma. The new iOS feature may take away this opportunity to help one’s case through messages that can paint a picture of the incident or of the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to recall messages and 15 minutes to edit them, is actually an amendment to Apple’s originally offered timeframe of 15 minutes to unsend. This change came in light of efforts by an advocate for survivors of sexual harassment and assault, who wrote a letter to Apple’s CEO warning of the dangers of the new unsending capability.[10] While the decreased timeframe leaves less room for abuse of the feature, editing is just as dangerous as unsending. With no limit to how much text you can edit, one could send full sentences of verbal abuse only to later replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window only gives them less time to react and less time to save the messages for evidence. While we can hope that the newly decreased window makes perpetrators think harder before sending a text they may not be able to delete, this is wishful thinking. The fact that almost half of young people have reported being victims of cyberbullying even when there was no option to rescind or edit one’s messages shows that the length of the iOS window likely does not matter. The abilities of the new Apple software should be disabled; the “fix” to the update is not enough. The costs of what such a feature will do to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend Imessages on Your Iphone. Here’s How to Do It, CNBC (Sep. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 Fed. Appx. 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. Sup. Ct. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, DigitalTrends (Jul. 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. Sup. Ct. 2018).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SlickText (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Would Autonomous Vehicles (AVs) Interfere With Our Fourth Amendment Rights?

Thao Nguyen, MJLST Staffer

Traffic accidents are a major issue in the U.S. and around the world. Although car safety features are continuously enhanced and improved, traffic crashes continue to be a leading cause of non-natural death for U.S. citizens. Most of the time, the primary causes are human errors rather than instrumental failures. Therefore, autonomous vehicles (“AVs”), which promise to be automobiles that operate themselves without a human driver, are an exciting up-and-coming technology, studied and developed in both academia and industry.[1]

To drive themselves, AVs must be able to perform two key tasks: sensing the surrounding environment and “driving”—essentially replacing the eyes and hands of the human driver.[2] The standard AV design today includes a sensing system that collects information from the outside world, assisting the “driving” function. The sensing system is composed of a variety of sensors,[3] most commonly Light Detection and Ranging (LiDAR) and cameras.[4] A LiDAR is a device that emits laser pulses and estimates depth using time-of-flight principles analogous to sonar: the emitted laser pulses travel forward, hit an object, then bounce back to the receivers; the time taken for the pulses to travel back is measured, and the distance is computed. With this information about distance and depth, a 3D point cloud map of the surrounding environment is generated. In addition to precise 3D coordinates, most LiDAR systems also record “intensity,” the measure of the return strength of the laser pulse, which is based in part on the reflectivity of the surface struck by the pulse. LiDAR “intensity” data thus reveal helpful information about the surface characteristics of the surroundings. The two sensors complement each other: the camera conveys rich appearance data with more details on the objects, whereas the LiDAR is able to capture 3D measurements.[5] Fusing the information acquired by each allows the sensing system to gain a reliable environmental perception.[6]
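The time-of-flight arithmetic described above is simple enough to show directly. The following Python snippet is a generic illustration of the principle (distance equals the speed of light times the round-trip time, divided by two), not code from any actual LiDAR system.

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Distance to the reflecting surface, in meters.

    The pulse travels out and back, so the one-way distance is half
    the total path length covered in the elapsed time.
    """
    return SPEED_OF_LIGHT * seconds / 2

# A return received 200 nanoseconds after emission corresponds to an
# object roughly 30 meters away.
print(round(distance_from_round_trip(200e-9), 2))  # prints 29.98
```

Repeating this computation for millions of pulses per second, each fired in a known direction, is what yields the 3D point cloud described above.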

LiDAR sensing technology is usually combined with artificial intelligence, as its goal is to imitate and eventually replace human perception in driving. Today, the majority of artificial intelligence systems use “machine learning,” a method that gives computers the ability to learn without being explicitly programmed. With machine learning, computers train themselves to do new tasks in a manner similar to humans: by exploring data, identifying patterns, and improving upon past experiences. Applied machine learning is data-driven: the greater the breadth and depth of the data supplied to the computer, the greater the variety and complexity of the tasks the computer can program itself to do. Since “driving” is a combination of multiple high-complexity tasks, such as object detection, path planning, localization, and lane detection, an AV that drives itself requires voluminous data in order to operate properly and effectively.

“Big data” is already considered a valuable commodity in the modern world. In the case of AVs, however, the data will be of public streets and road users, and its large-scale collection is empowered further by various technologies that detect and identify, track and trace, and mine and profile data. When profiles of a person’s traffic movements and behaviors exist in a database somewhere, there is a great temptation for the information to be used for purposes other than those for which it was originally collected, as has been the case with much other “big data” today. Law enforcement officers who get their hands on this AV data can track and monitor people’s whereabouts, pinpointing individuals whose trajectories touch on suspicious locations at a high frequency. The trajectories can be matched to identified individuals via car models and license plates. The police may then identify crime suspects by viewing the trajectories of everyone in the same town, rather than taking the trouble to identify and physically track each suspect. Can this use of data by law enforcement be sufficiently justified?
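A hypothetical sketch of the kind of query the paragraph above contemplates: given a log of vehicle sightings, flag plates that repeatedly appear at a location of interest. The data, threshold, and place names are all invented for illustration; no real system or dataset is depicted.

```python
from collections import Counter

# Invented log of (license plate, location) sightings, of the sort that
# could be derived from pooled AV sensor data.
sightings = [
    ("plate-111", "warehouse district"),
    ("plate-111", "warehouse district"),
    ("plate-111", "downtown"),
    ("plate-222", "warehouse district"),
    ("plate-333", "suburb"),
]

SUSPICIOUS_LOCATIONS = {"warehouse district"}
THRESHOLD = 2  # minimum visits before a plate is flagged

# Count visits to suspicious locations per plate, then apply the threshold.
visits = Counter(
    plate for plate, place in sightings if place in SUSPICIOUS_LOCATIONS
)
flagged = sorted(plate for plate, n in visits.items() if n >= THRESHOLD)
print(flagged)  # prints ['plate-111']
```

The ease of this query is precisely the concern: no officer needs to follow anyone, because the database does the tailing.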

As we know, use of “helpful” police tools may be restricted by the Fourth Amendment, and for good reasons. Although surveillance helps police officers detect criminals,[7] extraneous surveillance has its social costs: restricted privacy and a sense of being “watched” by the government inhibit citizens’ productivity, creativity, and spontaneity, and cause other psychological effects.[8] Case law gives us guidance on interpreting and applying the Fourth Amendment standards of “trespass” and “unreasonable searches and seizures” by the police. Three principal cases, Olmstead v. United States, 277 U.S. 438 (1928), Goldman v. United States, 316 U.S. 129 (1942), and, more recently, United States v. Jones, 565 U.S. 400 (2012), tie the trespass theory of Fourth Amendment protection to physical intrusion into private homes and properties. Such protection would not be helpful in the case of LiDAR, which operates on public streets as a remote sensing technology. Nonetheless, despite Jones, the broader “reasonable expectation of privacy” test established by Katz v. United States, 389 U.S. 347 (1967) is more widely accepted. Instead of tracing physical boundaries of “persons, houses, papers, and effects,” the Katz test asks whether there is an expectation of privacy that society recognizes as “reasonable.” The Fourth Amendment “protects people, not places,” wrote the Katz Court.[9]

United States v. Knotts, 460 U.S. 276 (1983) was a public street surveillance case that invoked the Katz test. In Knotts, the police used a beeper to track the defendant’s vehicle. The Court found that such tracking on public streets was not prohibited by the Fourth Amendment: “A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[10] The Knotts Court thus applied the Katz test and considered whether there was an expectation of privacy that society recognized as “reasonable.”[11] The Court’s answer was in the negative: unlike a person in his dwelling place, a person traveling on public streets “voluntarily conveyed to anyone who wanted to look the fact that he was traveling over particular roads in a particular direction.”[12]

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010), another public street surveillance case, this time from the twenty-first century, reconsidered the Knotts holding regarding the “reasonable expectation of privacy” on public streets. The Maynard defendant argued that the district court erred in admitting evidence acquired by the police’s warrantless use of a Global Positioning System (GPS) device to track his movements continuously for a month.[13] The Government invoked United States v. Knotts and its holding that “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[14] The D.C. Circuit, however, distinguished Knotts, pointing out that the Government in Knotts used a beeper that tracked a single journey, whereas the Government’s GPS monitoring in Maynard was sustained 24 hours a day, continuously, for one month.[15] The use of the GPS device over the course of one month did more than simply track the defendant’s “movements from one place to another”; it revealed the “totality and pattern” of his movement.[16] The court was willing to make a distinction between “one path” and “the totality of one’s movement”: since the totality of someone’s movement is much less exposed to the view of the public and much more revealing of that person’s personal life, it is constitutional for the police to track an individual on “one path,” but not to capture that same individual’s “totality of movement.”

Thus, with time the courts appear to be recognizing that when it comes to modern surveillance technology, the sheer quantity and revealing nature of data collected on the movements of public street users ought to raise concerns. The straightforward application of these holdings to AV sensing data would be that data concerning a person’s “one path” can be obtained and used, but not the totality of a person’s movement. It is unclear where to draw the line between “one path” and “the totality of movement.” The surveillance in Knotts was intermittent over the course of three days,[17] whereas the defendant in Maynard was tracked for over one month. The limit would perhaps fall somewhere in between.

Furthermore, this straightforward application is complicated by the fact that the sensors utilized by AVs do not pick up mere locational information. As discussed above, AV sensing systems, composed of multiple sensors, capture both camera images and information about the speed, texture, and depth of objects. In other words, AVs do not merely track a vehicle’s location like a beeper or GPS device; they “see” the vehicle through their cameras, LiDAR, and radar devices, gaining a wealth of information. This means that even if only data about “one path” of a person’s movement is extracted, that “one path” data as processed by AV sensing systems is far more in-depth than what a beeper or cell-site location information can communicate. Adding to this, current developers are proposing AV networks that share data among many vehicles, so that data on “one path” can potentially be combined with other data on the same vehicle’s movement, or multiple views of the same “one path” from different perspectives can be combined. The extensiveness of these data goes far beyond the precedents in Knotts and Maynard. Thus, it is foreseeable that subpoenaing AV sensing data without a warrant would constitute an unreasonable search under the Fourth Amendment.

[1] Tri Nguyen, Fusing LIDAR sensor and RGB camera for object detection in autonomous vehicle with fuzzy logic approach, 2021 International Conference on Information Networking (ICOIN) 788, 788 (2021).

[2] Id. (“An autonomous vehicle or self-driving car is a vehicle having the ability to sense the surrounding environment and capable of operation on its own without any human interference. The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounting on it.”)

[3] Id. (“The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounted on it.”).

[4] Heng Wang & Xiaodong Zhang, Real-time vehicle detection and tracking using 3D LiDAR, Asian Journal of Control 1, 1 (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”).

[5] Id. (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”) (“Conversely, LiDARs are able to produce 3D measurements and are not affected by the illumination of the environment [9,10].”).

[6] Nguyen, supra note 1, at 788 (“Due to the complementary of two sensors, it is necessary to gain a more reliable environment perception by fusing the information acquired from these two sensors.”).

[7] Raymond P. Siljander & Darin D. Fredrickson, Fundamentals of Physical Surveillance: A Guide for Uniformed and Plainclothes Personnel, Second Edition (2002) (abstract).

[8] Tamara Dinev et al., Internet Privacy Concerns and Beliefs About Government Surveillance – An Empirical Investigation, 17 Journal of Strategic Information Systems 214, 221 (2008) (“Surveillance has social costs (Rosen, 2000) and inhibiting effects on spontaneity, creativity, productivity, and other psychological effects.”).

[9] Katz v. United States, 389 U.S. 347, 351 (1967).

[10] United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”).

[11] Id. at 282.

[12] Id.

[13] United States v. Maynard, 615 F.3d 544, 549 (2010).

[14] Id. at 557.

[15] Id. at 556.

[16] Id. at 558 (“[O]ne’s movements 24 hours a day for 28 days as he moved among scores of places, thereby discovering the totality and pattern of his movements.”).

[17] Knotts, 460 U.S. at 276.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report the crime to the police and hold the offender accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. Nevertheless, Congress and other lawmaking bodies will need to consider how they can regulate use of the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although by no means is the concept new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality games such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
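For readers curious about that third aspect, the hash-chaining idea behind a ledger’s tamper evidence can be sketched in a few lines of Python. This toy example (all data invented) shows only the chaining property; real blockchains add decentralization and consensus on top of it.

```python
# Toy hash-chained ledger illustrating the tamper-evident "provenance"
# property attributed to blockchains. Purely illustrative; not a real
# blockchain (no decentralization, no consensus).
import hashlib

def make_block(prev_hash, data):
    """Create a block whose hash commits to its data AND its predecessor."""
    payload = prev_hash + "|" + data
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain):
    """Recompute every hash and check each block links to the one before it."""
    for i, block in enumerate(chain):
        payload = block["prev"] + "|" + block["data"]
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Record the provenance of a digital asset as a chain of transfers.
genesis = make_block("0" * 64, "asset minted to Alice")
transfer = make_block(genesis["hash"], "Alice transfers asset to Bob")
ledger = [genesis, transfer]
assert chain_is_valid(ledger)

# Rewriting history invalidates the chain: the tampered block's stored
# hash no longer matches its contents.
ledger[0]["data"] = "asset minted to Mallory"
assert not chain_is_valid(ledger)
```

Because each block’s hash depends on the previous block’s hash, quietly rewriting an old entry would require recomputing every later block, which is what makes the ledger useful for recording who owned a digital asset and when.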

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, some examples come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual world where they can eat, shop, work, and do other real-world activities. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world would feel as though they actually experienced the assault in real life. This should be raising red flags. However, the problem arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people know they are in a virtual world, they think they can do whatever they want without consequences.

At present, there are no laws regarding conduct in the Metaverse. Certainly, this will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse because users can mask their identities and remain anonymous, so it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual reality platforms have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. Even where terms of service exist, the problem remains how to enforce them. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse lies outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice cites to teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.

 

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.

 

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into one class action overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes – one covers TikTok users nationwide, and the other includes only TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. This improper use includes accusations that TikTok, without consent, shared consumer data with third parties. These third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Computer Data Access and Fraud Act and a violation of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations of the Biometric Information Privacy Act (BIPA). Under BIPA, before collecting user biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident and must have used the app prior to October of 2021. If an individual meets these criteria, they must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, compared to the one share the nationwide class is eligible for. This difference is due to the added protection the Illinois subclass has under BIPA.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said they would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. District Judge.

Conclusion

The lawyers representing TikTok users remarked that this settlement was “among the largest privacy-related payouts in history.” And, as noted by NPR, this settlement is similar to the one agreed to by Facebook in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively seek out and cease practices that may violate user privacy.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their “For You Page” – the videos that appear when you open the app and scroll without doing any particular search – seem to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier now knowing TikTok has agreed to the settlement reforms.


The StingRay You’ve Never Heard Of: How One of the Most Effective Tools in Law Enforcement Operates Behind a Veil of Secrecy

Dan O’Dea, MJLST Staffer

One of the most effective investigatory tools in law enforcement has operated behind a veil of secrecy for over 15 years. “StingRay” cell phone tower simulators are used by law enforcement agencies to locate and apprehend violent offenders, track persons of interest, monitor crowds when intelligence suggests threats, and intercept signals that could activate devices. When operating passively, StingRays mimic cell phone towers, forcing all nearby cell phones to connect to them while extracting data in the form of call metadata, text messages, internet traffic, and location information, even when a connected phone is powered off. They can also inject spying software into phones and prevent phones from accessing cellular data. StingRays were initially used overseas by federal law enforcement agencies to combat terrorism before spreading into the hands of the Department of Justice and Department of Homeland Security, and they are now actively used by local law enforcement agencies in 27 states to solve everything from missing persons cases to thefts of chicken wings.

The use of StingRay devices is highly controversial due to their intrusive nature. Not only does the use of StingRays raise privacy concerns, but tricking phones into connecting to simulated cell towers also prevents those phones from accessing legitimate service towers, which can obstruct access to 911 and other emergency hotlines. Perplexingly, the use of StingRay technology by law enforcement is almost entirely unregulated. Local law enforcement agencies frequently cite secrecy agreements with the FBI and the need to protect an investigatory tool as a means of denying the public information about how StingRays operate, and criminal defense attorneys have almost no means of challenging their use without this information. While the Department of Justice now requires federal agents to obtain a warrant to use StingRay technology in criminal cases, an exception is made for matters relating to national security, and the technology may have been used to spy on racial-justice protestors during the summer of 2020 under this exception. Local law enforcement agencies are almost completely unrestricted in their use of StingRays and may even conceal their use in criminal prosecutions by tagging their findings as those of a “confidential source” rather than admitting the use of a controversial investigatory tool. Doing so allows prosecutors to avoid battling Fourth Amendment arguments characterizing data obtained by StingRays as the product of an unlawful search and seizure.

After StingRays had existed in a “legal no-man’s land” since the technology’s inception, Senator Ron Wyden (D-OR) and Representative Ted Lieu (D-CA) sought to put an end to their secrecy by introducing the Cell-Site Simulator Warrant Act of 2021 in June of that year. The bill would have mandated that law enforcement agencies obtain a warrant to investigate criminal activity before deploying StingRay technology, while also requiring agencies to delete the data of phones other than those of investigative targets. Further, the legislation would have required agencies to demonstrate a need to use StingRay technology that outweighs any potential harm to the community impacted by the technology. Finally, the bill would have limited authorized use of StingRay technology to the minimum amount of time necessary to conduct an investigation. However, the Cell-Site Simulator Warrant Act of 2021 appears to have died in committee after failing to garner significant legislative support.

Ultimately, no device with the intrusive capabilities of StingRays should be allowed to operate free from the constraints of regulation. While StingRays are among the most effective tools utilized by law enforcement, they are also among the most intrusive into the privacy of the general public. It logically follows that agencies seeking to operate StingRays should be required to make a showing of a need to utilize such an intrusive investigatory tool. In certain situations, it may be easy to establish the need to deploy a StingRay, such as doing so to further the investigation of a missing persons case. In others, law enforcement agencies would correctly find their hands tied should they wish to utilize a StingRay to catch a chicken wing thief.


What the SolarWinds Hack Means for the Future of Law Firm Cybersecurity?

Sam Sylvan, MJLST Staffer

Last December, the massive software company SolarWinds acknowledged that its popular IT-monitoring software, Orion, was hacked earlier in the year. The software was sold to thousands of SolarWinds’ clients, including government and Fortune 500 companies. A software update of Orion provided Russian-backed hackers with a backdoor into the internal systems of approximately 18,000 SolarWinds customers—a number that is likely to increase over time as more organizations discover that they also are victims of the hack. Even the cybersecurity company FireEye that first identified the hack had learned that its own systems were compromised.

The hack has widespread implications for the future of cybersecurity in the legal field. Courts and government attorneys were not able to avoid the Orion hack. The cybercriminals hacked into the DOJ’s internal systems, leading the agency to report that the hackers might have breached 3,450 DOJ email inboxes. The Administrative Office of the U.S. Courts is working with DHS to audit vulnerabilities in the CM/ECF system, where highly sensitive non-public documents are filed under seal. As of late February, no law firms had announced that they too were victims of the hack, likely because law firms do not typically use Orion software for their IT management. Still, the Orion hack is a wake-up call to law firms across the country regarding their cybersecurity. There have been hacks before, including hacks of law firms, but nothing of this magnitude or potential level of sabotage. Now more than ever, law firms must contemplate and implement preventative measures and response plans.

Law firms of all sizes handle confidential and highly sensitive client documents and data. Oftentimes, firms have IT specialists but lack cybersecurity experts on the payroll—somebody internal who can continually develop the firm’s cybersecurity defenses. The SolarWinds hack shows why this needs to change, particularly for law firms that handle an exorbitant amount of highly confidential and sensitive client documents and can afford to add these experts to their ranks. Relying exclusively on consultants or other third parties for cybersecurity only further jeopardizes the security of law firms’ document management systems and caches of electronically stored client documents. Indeed, it was reliance on a third-party vendor that enabled the SolarWinds hack in the first place.

In addition to adding a specialist to the payroll, there are a number of other specific measures that law firms can take in order to address and bolster their cybersecurity defenses. For those of us who think it is not a matter of “if” but rather “when,” law firms should have an incident response plan ready to go. According to Jim Turner, chief operating officer of Hilltop Consultants, many law firms do not even have an incident response plan in place.

Further, because complacency and outdated IT software are of particular concern for law firms, “vendor vulnerability assessments” should become commonplace across all law firms. False senses of protection need to be discarded, and constant reassessment should become the norm. Moreover, firms should upgrade the type of software protection they have in place to include endpoint detection and response (EDR), which uses AI to detect hacking activity on systems. Last, purchasing cyber insurance is a strong safety measure in the event a law firm has to respond to a breach, as it would provide the additional resources needed to respond effectively to hacks.


I’ve Been Shot! Give Me a Donut!: Linking Vaccine Verification Apps to Existing State Immunization Registries

Ian Colby, MJLST Staffer

The gold rush for vaccination appointments is in full swing. After Governor Walz and many other governors announced an acceleration of vaccine eligibility in their states, the newly eligible desperately sought vaccinations to help the world achieve herd immunity to the SARS-CoV-2 virus (“COVID”) and get back to normal life.

The organization administering a person’s initial dose typically gives the recipient an approximately 4” x 3” card that lists the vaccine manufacturer, the date and location of inoculation, and the Centers for Disease Control and Prevention (“CDC”) logo. The CDC website does not specify what, exactly, this card is for. Likely reasons include informing the patient about the healthcare they just received, reminding them about a second dose, or providing batch numbers in case a manufacturing issue arises. Maybe they did it for the ‘Gram. However, regardless of the CDC’s reason for the card, many news outlets have latched onto its most likely future use: as a passport to get the post-pandemic party started.

Airlines, sports venues, schools, and donut shops are desperate to return to safe mass gatherings and close contact without needing to enforce as many protective measures. These organizations, in the short term, will likely seek assurance of a person’s vaccination status. Aside from the equitable and scientific issues with requiring this assurance, these businesses will likely accept these CDC vaccination cards as “proof.” The cardboard-and-ink security of these cards rivals that of social security cards in the “high importance – zero protection” category. Warnings of scammers providing blank CDC cards or stealing vaccinated persons’ names and birthdates hit the web last week (no scammers needed: you can get Missouri’s PDF to print one for free).

With so little security, but with a business-need to reopen the economy to vaccinated folks, businesses and governments have turned to digital vaccine passports. Generically named “digital health passes,” these apps will allow a person to show proof of their vaccination status securely. They “provide a path to reviving the economy and getting Americans back to work and play” according to a New York Times article. “For any such certificate or passport to work, it is going to need two things – access to a country’s official records of vaccinations and a secure method of identifying an individual and linking them to their health record.”

A variety of actors, both governments and private firms, have undertaken development of these digital health passes. Israel already provides a nationwide digital proof of vaccination known as a Green Pass. Denmark followed suit with the Coronapas. In addition, a number of private companies and nonprofits are vying to become the preeminent vaccine status app for the world’s smartphones. While governments such as Israel’s have preexisting authority to access immunization and identification records, private firms do not; they would require authorization to access your medical records.

So, in the United States, who would run these apps? Not the U.S. federal government. The Biden Administration unequivocally denied that it would ever require vaccine status checks or keep a vaccination database. The federal government does not need to, though. Most states already manage a digital vaccination database pursuant to laws authorizing one; the remaining states, without direct statutory authorization, maintain digital databases anyway. These immunization information systems (“IIS”) provide quick access to a person’s vaccination status. A state’s resident can request their vaccination status for myriad vaccinations for free and receive the results via email. Texas and Florida, which made big hubbubs about restricting any use of vaccine passports, both have registries that provide proof of vaccination. So does New York, which has already published an app, known as the Excelsior Pass, that does this for the COVID vaccine. The State’s app pulls information from New York’s immunization registry, providing a quick, simple yes-no result for those requiring proof. The app uses IBM’s blockchain technology, which is “designed to enable the secure verification of health credentials such as test results and vaccination records without the need to share underlying medical and personal information.”
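The verification model described here — a registry-backed credential that yields a simple yes-no answer without exposing underlying medical records — can be sketched in miniature. The code below is a hypothetical illustration, not IBM’s actual blockchain implementation or any state’s real system: it assumes a state registry that signs a minimal credential (name, birthdate, and a single vaccination boolean) with a secret key, which a venue can then verify without ever seeing the person’s fuller health record.

```python
import hmac
import hashlib
import json

# Hypothetical signing key held only by the state registry.
REGISTRY_KEY = b"state-registry-secret"

def issue_credential(name: str, birthdate: str, vaccinated: bool) -> dict:
    """Registry side: sign only a yes/no status -- no underlying medical records."""
    payload = json.dumps(
        {"name": name, "dob": birthdate, "vaccinated": vaccinated},
        sort_keys=True,
    )
    signature = hmac.new(REGISTRY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """Venue side: check the signature, then read only the yes/no answer."""
    expected = hmac.new(
        REGISTRY_KEY, credential["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # tampered or forged credential
    return json.loads(credential["payload"])["vaccinated"]

# A venue scanning a pass sees only a yes or a no:
cred = issue_credential("Jane Doe", "1990-01-01", vaccinated=True)
print(verify_credential(cred))  # True
```

A real deployment would use public-key signatures (so venues never hold the registry’s signing key) rather than this shared-secret toy, but the design point is the same one the blog describes: the credential carries only the minimal answer, and tampering with it invalidates the signature.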

With so many options, consumers of vaccine status apps could become overwhelmed. A vaccinated person may need to download a different app for each activity they wish to enter. “Fake” apps could ask the unwary for additional medical information. Private app developers may try to justify continued use of their apps after the need for COVID vaccination proof passes.

In this competitive atmosphere, apps that partner with state governments likely offer the best form of digital vaccination verification. These apps have direct approval from the states that are legally required to maintain vaccination records, and that official backing lends an authority that helps users avoid scams. Cooperation among states to standardize these apps may also facilitate wider use. States seeking to reopen their economies should authorize digital interfaces with their pre-existing immunization registries. Now that the gold rush for vaccinations has started, the gold rush for vaccine passports is something to keep an eye on.