Privacy

AI and Predictive Policing: Balancing Technological Innovation and Civil Liberties

Alexander Engemann, MJLST Staffer

To maximize their effectiveness, police agencies are constantly looking to use the most sophisticated preventative methods and technologies available. Predictive policing is one such technique that fuses data analysis, algorithms, and information technology to anticipate and prevent crime. This approach identifies patterns in data to anticipate when and where crime will occur, allowing agencies to take measures to prevent it.[1] Now, engulfed in an artificial intelligence (“AI”) revolution, law enforcement agencies are eager to take advantage of these developments to augment controversial predictive policing methods.[2]

In precincts that use predictive policing strategies, vast amounts of data are used to categorize citizens with basic demographic information.[3] Now, machine learning and AI tools are augmenting this data which, according to one software vendor, “identifies where and when crime is most likely to occur, enabling [law enforcement] to effectively allocate [their] resources to prevent crime.”[4]
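
The mechanics behind such claims can be made concrete with a minimal, hypothetical sketch of grid-based hotspot scoring, the simplest form of the technique these tools build on. Everything below, from the cell names to the counting logic, is invented for illustration and is not drawn from any vendor’s actual product.

```python
# Hypothetical sketch of grid-based "hotspot" scoring; all data are invented.
from collections import Counter

# Each record is (grid_cell_id, hour_of_day) for one past recorded incident.
historical_incidents = [
    ("cell_12", 22), ("cell_12", 23), ("cell_12", 21),
    ("cell_07", 14), ("cell_33", 2), ("cell_12", 22),
]

def score_cells(incidents):
    """Count incidents per grid cell. A real system would weight by recency,
    layer in demographic or economic data, and fit a statistical model."""
    return Counter(cell for cell, _ in incidents)

def top_hotspots(incidents, k=2):
    """Return the k highest-scoring cells, i.e., where patrols get sent."""
    return [cell for cell, _ in score_cells(incidents).most_common(k)]

print(top_hotspots(historical_incidents))  # e.g., ['cell_12', 'cell_07']
```

Even this toy version exposes the feedback loop behind the bias critique discussed below: patrols dispatched to flagged cells record more incidents there, which in turn raises those cells’ future scores.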

Both predictive policing and AI have faced significant challenges concerning issues of equity and discrimination. In response to these concerns, the European Union has taken proactive steps, promulgating sophisticated rules governing AI applications within its territory and continuing its tradition of leading in regulatory initiatives.[5] In the law, dubbed the “Artificial Intelligence Act,” the Union clearly outlined its goal of promoting safe, non-discriminatory AI systems.[6]

Back home, we’ve failed to keep a similar legislative pace, even with certain institutions sounding the alarm.[7] Predictive policing methods have faced similar criticism. In an issue brief, the NAACP emphasized, “[j]urisdictions who use [Artificial Intelligence] argue it enhances public safety, but in reality, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”[8] This technological and ideological marriage clearly poses discriminatory risks in a nation where a black person is already far more likely than their white counterparts to be stopped without just cause.[9]

Police agencies are bullish about the technology. Police Chief Magazine, the official publication of the International Association of Chiefs of Police, paints these techniques in a more favorable light, stating, “[o]ne of the most promising applications of AI in law enforcement is predictive policing…Predictive policing empowers law enforcement to predict potential crime hotspots, ultimately aiding in crime prevention and public safety.”[10] In this space, facial recognition software is gaining traction among law enforcement agencies as a powerful tool for identifying suspects and enhancing public safety. Clearview AI stresses that its product “[helps] law enforcement and governments in disrupting and solving crime.”[11]

Predictive policing methods enhanced by AI technology show no signs of slowing down.[12] The obvious advantages of these systems cannot be ignored: they allow agencies to better allocate resources and manage their staff. However, as law enforcement agencies adopt these technologies, it is important to remain vigilant in holding them accountable for the ethical implications and biases embedded within their systems. A comprehensive framework for accountability and transparency, similar to the European Union’s guidelines, must be established to ensure that deploying predictive policing and AI tools does not come at the expense of marginalized communities.[13]

 

Notes

[1] Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265–67 (2012).

[2] Eric M. Baker, I’ve got my AI on You: Artificial Intelligence in the Law Enforcement Domain, 47 (Mar. 2021) (Master’s thesis).

[3] Id. at 48.

[4] Id. at 49 (citing Walt L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RR-233-NIJ (Santa Monica, CA: RAND, 2013), 4, https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf).

[5] Commission Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.

[6] Lukas Arnold, How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination, Penn. J. L. & Soc. Change (May 14, 2024), https://www.law.upenn.edu/live/news/16742-how-the-european-unions-ai-act-provides#_ftn1.

[7] See Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633, 664 (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5445&context=flr (“Database screening and digital watchlisting systems, in fact, can serve as complementary and facially colorblind supplements to mass incarcerations systems. The purported colorblindness of mandatory sentencing… parallels the purported colorblindness of mandatory database screening and vetting systems”).

[8] NAACP, Issue Brief: The Use of Artificial Intelligence in Predictive Policing, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 2, 2024).

[9] Will Douglas Heaven, Predictive Policing Algorithms Are Racist. They Need to Be Dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (citing OJJDP Statistical Briefing Book, Estimated Number of Arrests by Offense and Race (2020), https://ojjdp.ojp.gov/statistical-briefing-book/crime/faqs/ucr_table_2 (released July 8, 2022)).

[10] See The Police Chief, Int’l Ass’n of Chiefs of Police, https://www.policechiefmagazine.org (last visited Nov. 2, 2024); Brandon Epstein, James Emerson & ChatGPT, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives, Police Chief Online (Apr. 3, 2024).

[11] Clearview AI, https://www.clearview.ai/ (last visited Nov. 3, 2024).

[12] But see Nicholas Ibarra, Santa Cruz Becomes First US City to Approve Ban on Predictive Policing, Santa Cruz Sentinel (June 23, 2020), https://evidentchange.org/newsroom/news-of-interest/santa-cruz-becomes-first-us-city-approve-ban-predictive-policing/.

[13] See also Roy Maurer, New York City to Require Bias Audits of AI-Type HR Technology, Society for Human Resource Management (Dec. 19, 2021), https://www.shrm.org/topics-tools/news/technology/new-york-city-to-require-bias-audits-ai-type-hr-technology.

 


What Happens to Your Genetic Data in a Sale or Acquisition?

Colin Loyd, MJLST Staffer

Remember 23andMe—the genetic testing company that once skyrocketed in publicity in the 2010s due to its relatively inexpensive access to genetic testing? It’s now heading toward disaster. This September, its board of directors saw all but one member tender their resignation.[1] At the close of that day’s trading, 23andMe’s share price was $0.35, representing a 99.9% decline in valuation from its peak in 2021.[2] This decline suggests the company may declare bankruptcy, which often leads to a sale of a company’s assets. Bankruptcy or a sale of assets presents a host of complex privacy and regulatory issues, particularly concerning the sale of 23andMe’s most valuable asset—its vast collection of consumer DNA data.[3] This uncertain situation underscores the weak state of comprehensive privacy protections for genetic information, which leaves consumers’ sensitive genetic data vulnerable to misuse and exploitation.

23andMe collects and stores massive amounts of user genetic information. However, unlike healthcare providers, 23andMe does not have to comply with the stringent privacy regulations set out in the Health Insurance Portability and Accountability Act (HIPAA).[4] While HIPAA is designed to protect sensitive health data, its protections apply only to a small subset of healthcare-related entities.[5] HIPAA regulates the use of genetic information only by “group health plan[s], health insurance issuer[s] that issue[] health insurance coverage, or issuer[s] of a medicare supplemental policy.”[6] 23andMe does not fit into any of these categories and therefore operates outside the scope of HIPAA’s protections, leaving any genetic information it holds largely unregulated.

The Genetic Information Nondiscrimination Act (GINA), enacted in 2008, offers consumer protections by prohibiting discrimination based on an individual’s genetic information with respect to health insurance premium amounts or eligibility requirements for health insurance.[7] GINA also prohibits any deprivation of employment opportunities based on genetic information.[8] However, GINA’s protections do not extend to life insurance, disability insurance, or long-term care insurance.[9] This leaves a gap where genetic information may be used against individuals by entities not subject to GINA.

This regulatory gap is a major concern for consumers, especially with a potential bankruptcy sale looming. If 23andMe sells its assets, including its database of genetic information, the new owner would not have to adhere to the same privacy commitments made by 23andMe. For example, 23andMe promises not to use genetic information it receives for personalized or targeted marketing/advertising without a user’s express consent.[10] This policy likely reflects 23andMe’s efforts to comply with the California Privacy Rights Act (CPRA), which grants consumers the right to direct a business to not share or sell their personal information.[11] However, this right under the CPRA is an opt-out right—not an opt-in right—meaning consumers can stop a future sale of their information but by default there is no initial, regulatory limit on the sale of their personal information.[12] As a result, there’s nothing stopping 23andMe from altering its policies and changing how it uses genetic information. In fact, 23andMe’s Privacy Statement states it “may make changes to this Privacy Statement from time to time.”[13] Any such change would likely be binding if it is clearly communicated to users.[14] 23andMe currently lists email or an in-app notification as methods it may notify its users of any change to the Privacy Statement.[15] If it does so, it’s highly possible a court would view this as “clear communication” and there would be little legal recourse for users to prevent their genetic information from being used in ways they did not anticipate, such as for research or commercial purposes.

For example, say a life insurance company acquires an individual’s genetic data through the purchase of 23andMe’s assets. It could potentially use that data to make decisions about coverage or premiums, even though GINA prohibits health insurers from doing the same.[16] This loophole highlights the dangers of having genetic information in the hands of entities not bound by strict privacy protections.

In the event of an acquisition or bankruptcy, 23andMe’s Privacy Statement outlines that personal information, including genetic data, may be among the assets sold or transferred to the new entity.[17] In such a case, the new owner could inherit both the data and the rights to use it under the existing terms, including the ability to modify how the data is used. This could result in uses not originally intended by the user, so long as the change is communicated to the user.[18] This transfer clause highlights a key concern for users because it allows their deeply personal genetic data to be passed to another company without additional consent, potentially subjecting them to exploitation by organizations with different data usage policies or commercial interests. While 23andMe must notify users about any changes to the Privacy Statement or its use of genetic information, it does not specify whether the notice will be given in advance.[19] Any new entity could plan a change to the Privacy Statement’s terms, altering how it uses the genetic information while leaving users in the dark until the change is communicated to them, at which point the users’ information may have already been shared with third parties.

The potential 23andMe bankruptcy and sale of assets reveal deep flaws in the current regulatory system governing genetic data privacy. Without HIPAA protections, consumers risk their sensitive genetic information being sold or misused in ways they cannot control. GINA, while offering some protections, still leaves significant gaps, especially in life and disability insurance. As the demand for genetic testing continues to grow, the vulnerabilities exposed by 23andMe’s potential financial troubles highlight the urgent need for better privacy protections. Consumers must be made aware of the risks involved in sharing their genetic data, and regulatory measures must be strengthened to ensure this sensitive information is not misused or sold without their explicit consent.

 

Notes

[1] Independent Directors of 23andMe Resign from Board, 23andMe (Sept. 17, 2024) https://investors.23andme.com/news-releases/news-release-details/independent-directors-23andme-resign-board.

[2] Rolfe Winkler, 23andMe Board Resigns in New Blow to DNA-Testing Company, WALL ST. J. (Sept. 18, 2024) https://www.wsj.com/tech/biotech/23andme-board-resigns-in-new-blow-to-dna-testing-company-12f1a355.

[3] Anne Wojcicki (the last remaining board member) has consistently publicized her plan to take the company private, which is looming larger given the current state of the business financials. Id.

[4] See 42 U.S.C. § 1320d-9(a)(2).

[5] See generally 42 U.S.C. §1320d et seq.

[6] 42 U.S.C. § 1320d-9(a)(2).

[7] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881.

[8] Id.

[9] Jessica D. Tenenbaum & Kenneth W. Goodman, Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-term Care and Life Insurance, 14 Personalized Med. 153, 154 (2017).

[10] How We Use Your Information, 23andMe, https://www.23andme.com/legal/how-we-use-info/ (last visited Oct. 14, 2024).

[11] Cal. Civ. Code § 1798.120(a) (Deering 2024).

[12] Id.

[13] Privacy Statement, 23andMe (Sept. 24, 2024) https://www.23andme.com/legal/privacy/full-version/.

[14] See Lee v. Ticketmaster LLC, 817 Fed. App’x 393 (9th Cir. 2019) (upholding terms of use where notice was clearly given to the user, even if the user didn’t check a box to assent to the terms).

[15] Privacy Statement, supra note 13.

[16] See K.S.A. § 40-2259(c)-(d) (carving out the ability for life insurance policies to take into account genetic information when underwriting the policy).

[17] Privacy Statement, supra note 13.

[18] See Ticketmaster, 817 Fed. App’x 393 (2019).

[19] Privacy Statement, supra note 13.


The Double-Helix Dilemma: Navigating Privacy Pitfalls in Direct-to-Consumer Genetic Testing

Ethan Wold, MJLST Staffer

Introduction

On October 22, direct-to-consumer genetic testing (DTC-GT) company 23andMe sent emails to a number of its customers informing them of a data breach into the company’s “DNA Relatives” feature, which allows customers to compare ancestry information with other users worldwide.[1] While 23andMe and other similar DTC-GT companies offer a number of positive benefits to consumers, such as testing for health predispositions and carrier statuses of certain genes, this latest data breach is a reminder that before choosing to opt into these sorts of services one should be aware of the potential risks they present.

Background

DTC-GT companies such as 23andMe and Ancestry.com have proliferated and blossomed in recent years. It is estimated that over 100 million people have utilized some form of direct-to-consumer genetic testing.[2] Using biospecimens submitted by consumers, these companies sequence and analyze an individual’s genetic information to provide a range of services pertaining to one’s health and ancestry.[3] The October 22 data breach specifically pertained to 23andMe’s “DNA Relatives” feature.[4] The DNA Relatives feature can identify relatives on any branch of one’s family tree by taking advantage of the autosomal chromosomes, the 22 chromosomes that are passed down from your ancestors on both sides of your family, and one’s X chromosome(s).[5] Relatives are identified by comparing the customer’s submitted DNA with the DNA of other 23andMe members who are participating in the DNA Relatives feature.[6] When two people are found to have an identical DNA segment, it is likely they share a recent common ancestor.[7] The DNA Relatives feature even uses the length and number of these identical segments to attempt to predict the relationship between genetic relatives.[8] Given the sensitive nature of sharing genetic information, there are often privacy concerns regarding practices such as the DNA Relatives feature. Yet despite this, the legislation and regulations surrounding DTC-GT are somewhat limited.
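
For readers curious about the mechanics, the sketch below shows in simplified form how a relative-matching feature of this general kind might work. The segment data, the centimorgan (cM) cutoffs, and the function names are illustrative assumptions, not 23andMe’s actual values or algorithm.

```python
# Hypothetical sketch of relationship prediction from identical DNA segments.
# Segment lengths and cM cutoffs are illustrative, not 23andMe's actual values.

def total_shared_cm(segments):
    """Sum the lengths (in centimorgans) of identical segments detected
    between two users' genotypes."""
    return sum(length for _, length in segments)

def predict_relationship(segments):
    """Map total shared cM to a rough relationship label; real tools also
    weigh the number of segments and the longest single segment."""
    shared = total_shared_cm(segments)
    if shared > 2500:
        return "parent/child or full sibling"
    if shared > 800:
        return "first cousin range"
    if shared > 200:
        return "second cousin range"
    return "distant or no detectable relation"

# Example: three shared segments found on chromosomes 2, 8, and 15.
segments = [("chr2", 95.0), ("chr8", 78.5), ("chr15", 62.3)]
print(predict_relationship(segments))  # "second cousin range"
```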

Legislation

The Health Insurance Portability and Accountability Act (HIPAA) provides the baseline privacy and data security rules for the healthcare industry.[9] HIPAA’s Privacy Rule regulates the use and disclosure of a person’s “protected health information” by a “covered entity.”[10] Under the Act, the type of genetic information collected by 23andMe and other DTC-GT companies does constitute “protected health information.”[11] However, because HIPAA defines a “covered entity” as a health plan, healthcare clearinghouse, or healthcare provider, DTC-GT companies do not constitute covered entities and therefore are not under the umbrella of HIPAA’s Privacy Rule.[12]

Thus, the primary source of regulation for DTC-GT companies appears to be the Genetic Information Nondiscrimination Act (GINA). GINA was enacted in 2008 for the purpose of protecting the public from genetic discrimination, alleviating concerns about such discrimination, and thereby encouraging individuals to take advantage of genetic testing, technologies, research, and new therapies.[13] GINA defines genetic information as information from genetic tests of an individual or family members and includes information from genetic services or genetic research.[14] Therefore, DTC-GT companies fall under GINA’s jurisdiction. However, GINA only applies to the employment and health insurance industries and thus neglects many other potential arenas where privacy concerns may arise.[15] This is especially relevant for 23andMe customers, as signing up for the service serves as consent for the company to use and share your genetic information with its associated third-party providers.[16] As a case in point, in 2018 the pharmaceutical giant GlaxoSmithKline purchased a $300 million stake in 23andMe for the purpose of gaining access to the company’s trove of genetic information for use in its drug development trials.[17]

Executive Regulation

In addition to the legislation above, three federal administrative agencies primarily regulate the DTC-GT industry: the Food and Drug Administration (FDA), the Centers for Medicare and Medicaid Services (CMS), and the Federal Trade Commission (FTC). The FDA has jurisdiction over DTC-GT companies because the genetic tests they use are labeled as “medical devices,”[18] and in 2013 it exercised this authority over 23andMe by sending a letter to the company that resulted in the suspension of one of its health-related genetic tests.[19] However, the FDA only has jurisdiction over diagnostic tests and therefore does not regulate any of the DTC-GT services related to genealogy, such as 23andMe’s DNA Relatives feature.[20] Moreover, the FDA does not have jurisdiction to regulate the other aspects of DTC-GT companies’ activities or data practices.[21] CMS has the ability to regulate DTC-GT companies through enforcement of the Clinical Laboratory Improvement Amendments (CLIA), which require that genetic testing laboratories ensure the accuracy, precision, and analytical validity of their tests.[22] But, like the FDA, CMS only has jurisdiction over tests that diagnose a disease or assess health.[23]

Lastly, the FTC has broad authority to regulate unfair or deceptive business practices under the Federal Trade Commission Act (FTCA) and has wielded this authority against DTC-GT companies in the past. For example, in 2014 the agency brought an action against two DTC-GT companies that were using genetic tests to match consumers to their nutritional supplements and skincare products.[24] The FTC alleged that the companies’ data security practices were unfair and deceptive because the companies failed to implement reasonable policies and procedures to protect consumers’ personal information and created unnecessary risks to the personal information of nearly 30,000 consumers.[25] This resulted in the companies entering into an agreement with the FTC whereby they agreed to establish and maintain comprehensive data security programs and submit to yearly security audits by independent auditors.[26]

Potential Harms

As the above passages illustrate, the federal government appears to recognize and has at least attempted to mitigate the privacy concerns associated with DTC-GT. Additionally, a number of states have passed their own laws that limit DTC-GT in certain respects.[27] Nevertheless, given the potential magnitude and severity of harm associated with DTC-GT, one must question whether it is enough. Data breaches involving health-related data are growing in frequency and now account for 40% of all reported data breaches.[28] These breaches expose DTC-GT consumer-submitted data to unauthorized access and can violate an individual’s genetic privacy. Though GINA aims to prevent it, genetic discrimination, in the form of insurers raising health insurance premiums or denying coverage based on genetic predispositions, remains one of the leading concerns associated with these violations. What’s more, by obtaining genetic information from DTC-GT databases, it is possible for someone to recover a consumer’s surname and combine it with other metadata, such as age and state, to identify the specific consumer.[29] This may in turn lead to identity theft, with accounts opened, loans taken out, or purchases made in the consumer’s name, potentially damaging their financial well-being and credit score. Dealing with the aftermath of a genetic data breach can also be expensive: victims may incur legal fees, credit monitoring costs, or other financial burdens in an attempt to mitigate the damage.
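
The surname re-identification risk described above is a classic linkage attack, and a small hypothetical sketch makes it concrete. Every record, name, and field below is invented for illustration.

```python
# Hypothetical linkage attack: an inferred surname plus quasi-identifiers
# (age, state) can single out one person. All records here are invented.

breached_genetic_record = {"inferred_surname": "Olson", "age": 34, "state": "MN"}

public_directory = [
    {"name": "A. Olson", "age": 34, "state": "MN"},
    {"name": "B. Olson", "age": 61, "state": "MN"},
    {"name": "C. Olson", "age": 34, "state": "TX"},
]

def link(record, directory):
    """Return directory entries matching every quasi-identifier; a single
    match means the 'anonymous' genetic record now has a name attached."""
    return [p for p in directory
            if record["inferred_surname"] in p["name"]
            and p["age"] == record["age"]
            and p["state"] == record["state"]]

matches = link(breached_genetic_record, public_directory)
if len(matches) == 1:
    print("Re-identified:", matches[0]["name"])  # Re-identified: A. Olson
```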

Conclusion

As it stands now, the genetic data submitted to DTC-GT companies already carries a significant volume of consequential information. As technology continues to develop and research presses forward, the volume and utility of this information will only grow. Thus, it is crucially important to be aware of the risks associated with DTC-GT services.

This discussion is not intended to discourage individuals from participating in DTC-GT. These companies and the services they offer provide a host of benefits, such as allowing consumers to access genetic testing without the healthcare system acting as a gatekeeper, providing more autonomy and often a lower price.[30] Furthermore, the information provided can empower consumers to mitigate the risks of certain diseases, allow for more informed family planning, or gain a better understanding of their heritage.[31] DTC-GT has revolutionized the way individuals access and understand their genetic information. However, this accessibility and convenience come with disadvantages that must be carefully weighed against the benefits.

Notes

[1] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/

[2] https://www.ama-assn.org/delivering-care/patient-support-advocacy/protect-sensitive-individual-data-risk-dtc-genetic-tests

[3] https://go-gale-com.ezp3.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[4] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/

[5] https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics

[6] Id.

[7] Id.

[8] Id.

[9] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[10] https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf

[11] Id.

[12] Id; https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[13] https://www.eeoc.gov/statutes/genetic-information-nondiscrimination-act-2008

[14] Id.

[15] https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3035561&blobtype=pdf

[16] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[17] https://news.yahoo.com/news/major-drug-company-now-access-194758309.html

[18] https://uscode.house.gov/view.xhtml?req=(title:21%20section:321%20edition:prelim)

[19] https://core.ac.uk/download/pdf/33135586.pdf

[20] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[21] Id.

[22] https://www.law.cornell.edu/cfr/text/42/493.1253

[23] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[24] https://www.ftc.gov/system/files/documents/cases/140512genelinkcmpt.pdf

[25] Id.

[26] Id.

[27] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[28] Id.

[29] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[30] Id.

[31] Id.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as a violation of First Amendment rights and as unnecessary given the incentives private companies have to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations, while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed”.[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as recollection of the deepfake imagery can be difficult for viewers to dissociate from the victim. 

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with further enhanced penalties for certain uses of deepfakes.

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who post anonymously. Proposed federal legislation suggests a requirement that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements.”[13] However, opponents view this as useless legislation: deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first instance.

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally want to be free from harassment and misinformation. This has led to solutions such as X’s “Community Notes” feature, which allows videos created using deepfake technology to remain on the platform but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise: viewers are able to understand the media is fake, while creators are still able to share their work without feeling their free speech has been infringed upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12] Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


Mental Health Telehealth Services May Not Be Protecting Your Data

Tessa Wright, MJLST Staffer

The COVID-19 pandemic changed much about our daily lives, and nowhere have those changes been more visible than in the healthcare industry. During the pandemic, overflowing emergency rooms were coupled with doctor shortages.[1] In-person medical appointments were canceled, and non-emergency patients had to wait months for appointments.[2] In response, the use of telehealth services began to increase rapidly.[3] In fact, one 2020 study found that telehealth visits accounted for less than 1% of health visits prior to the pandemic and increased to as much as 80% of visits when the pandemic was at its peak.[4] And while the use of telehealth services has decreased slightly in recent years, it appears to be here to stay. Nowhere has the use of telehealth been more prevalent than in mental health services.[5] Indeed, as of 2022, telehealth still represented over 36% of outpatient mental health visits.[6] Moreover, a recent study found that since 2020, over one in three mental health outpatient visits have been delivered by telehealth.[7] And while this increased use of telehealth has helped make mental health services more affordable and accessible to many Americans, this shift in the way healthcare is provided also comes with new legal concerns that have yet to be fully addressed.

Privacy Concerns for Healthcare Providers

One of the largest concerns surrounding the increased use of telehealth for mental health services is privacy. There are several reasons for this. The primary concern is that telehealth takes place over the phone or via personal computers, and when personal devices are used, it is nearly impossible to ensure HIPAA compliance. However, the majority of healthcare providers now offer telehealth options that connect directly to their private healthcare systems, which allows for more secure data transmission.[8] While some concerns remain, these secure servers have helped mitigate much of the risk.[9]

Privacy Concerns with Mental Health Apps

The other privacy concern surrounding the use of telehealth services for mental health is a little more difficult to address. This concern comes from the increased use of mental health apps. Mental health apps are mobile apps that allow users to access online talk therapy and psychiatric care.[10] With the increased use of telehealth for mental health services, there has also been an increase in the use of these mental health apps. Americans are used to their private medical information being protected by the Health Insurance Portability and Accountability Act (HIPAA).[11] HIPAA is a federal law that creates privacy rules for our medical records and other individually identifiable health information during the flow of certain health care transactions.[12] But HIPAA wasn’t designed to handle modern technology.[13] The majority of mental health apps are not covered by HIPAA rules, meaning that these tech companies can sell the private health data from their apps to third parties, with or without consent.[14] In fact, a recent study that analyzed 578 mental health-related apps found that nearly half (44%) of the apps shared users’ personal health information with third parties.[15] This personal health information can include psychiatric diagnoses and medication prescriptions, as well as other identifiers including age, gender, ethnicity, religion, credit score, etc.[16]

In fact, according to a 2022 study, a popular therapy app, BetterHelp, was among the worst offenders in terms of privacy.[17] “BetterHelp has been caught in various controversies, including a ‘bait and switch’ scam where it advertised therapists that weren’t actually on its service, poor quality of care (including trying to provide gay clients with conversion therapy), and paying YouTube influencers if their fans sign up for therapy through the app.”[18]

An example of information that does get shared is the intake questionnaire.[19] An intake questionnaire needs to be filled out on BetterHelp, or other therapy apps, in order for the customer to be matched with a provider.[20] The answers to these intake questionnaires were specifically found to have been shared by BetterHelp with an analytics company, along with the approximate location and device of the user.[21]

Another example of the type of data that is shared is metadata.[22] BetterHelp can share information about how long someone uses the app, how long the therapy sessions are, how long someone spends sending messages on the app, what times someone logs into the app, what times someone sends a message or speaks to their therapists, the approximate location of the user, how often someone opens the app, and so on.[23] According to the ACLU, data brokers, Facebook, and Google were found to be among the recipients of other information shared from BetterHelp.[24]

It is also important to note that deleting an account may not remove all of your personal information, and there is no way of knowing what data will remain.[25] It remains unclear how long sensitive information that has been collected and retained could be available for use by the app.

What Solutions Are There?

The U.S. Department of Health and Human Services recently released updated guidance on HIPAA, confirming that the HIPAA Privacy Rule does not apply to most health apps because they are not “covered entities” under the law.[26] Additionally, the FDA has put out guidance saying that it will use its enforcement discretion when dealing with mental health apps.[27] This means that if the privacy risk seems low, the FDA is not going to pursue enforcement against these companies.[28]

Ultimately, if telehealth mental health services are here to stay, HIPAA will need to be expanded to cover the currently unregulated field of mental health apps. HIPAA and state laws would need to be specifically amended to include digital app-based platforms as covered entities.[29] These mental health apps offer telehealth services much like any healthcare provider that is covered by HIPAA. Knowledge that personal data is being shared so freely by mental health apps breeds distrust, and due to those privacy concerns, many users have lost confidence in the apps. In the long run, regulatory oversight would increase the pressure on these companies to show that their services can be trusted, potentially increasing their success by rebuilding trust with the public.

Notes

[1] Gary Drenik, The Future of Telehealth in a Post-Pandemic World, Forbes, (Jun. 2, 2022), https://www.forbes.com/sites/garydrenik/2022/06/02/the-future-of-telehealth-in-a-post-pandemic-world/?sh=2ce7200526e1.

[2] Id.

[3] Id.

[4] Madjid Karimi et al., National Survey Trends in Telehealth Use in 2021: Disparities in Utilization and Audio vs. Video Services, Office of Health Policy (Feb. 1, 2022).

[5] Shreya Tewari, How to Navigate Mental Health Apps that May Share Your Data, ACLU (Sep. 28, 2022).

[6] Justin Lo et al., Telehealth Has Played an Outsized Role Meeting Mental Health Needs During the COVID-19 Pandemic, Kaiser Family Foundation (Mar. 15, 2022), https://www.kff.org/coronavirus-covid-19/issue-brief/telehealth-has-played-an-outsized-role-meeting-mental-health-needs-during-the-covid-19-pandemic/.

[7] Id.

[8] Supra note 1.

[9] Id.

[10] Heather Landi, With Consumers’ Health and Privacy on the Line, do Mental Wellness Apps Need More Oversight?, Fierce Healthcare, (Apr. 21, 2021), https://www.fiercehealthcare.com/tech/consumers-health-and-privacy-line-does-digital-mental-health-market-need-more-oversight.

[11] Peter Simons, Your Mental Health Information is for Sale, Mad in America, (Feb. 20, 2023), https://www.madinamerica.com/2023/02/mental-health-information-for-sale/.

[12] Supra note 5.

[13] Supra note 11.

[14] Id.

[15] Deb Gordon, Using a Mental Health App? New Study Says Your Data May Be Shared, Forbes, (Dec. 29, 2022), https://www.forbes.com/sites/debgordon/2022/12/29/using-a-mental-health-app-new-study-says-your-data-may-be-shared/?sh=fe47a5fcad2b.

[16] Id.

[17] Supra note 11.

[18] Id.

[19] Supra note 5.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Id.

[25] Supra note 5.

[26] Id.

[27] Supra note 10.

[28] Id.

[29] Supra note 11.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. Meta’s “Pixel” tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may violate a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel’s use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] In theory, the actions that are tracked include purchases, registrations, cart additions, searches and more. This information can then be used by the website owners to better understand user behavior. Website owners can more efficiently use ad spend by tailoring ads to relevant users and finding more receptive users based on Pixel’s analysis.[5]

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between these other tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to social profiles on Meta’s flagship website, Facebook. Whether intentionally or accidentally, Pixel has been found to have grabbed personal information beyond the simple user web actions it was supposed to be limited to and to have connected that information to Facebook profiles.[6]
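
The privacy problem is easiest to see as a join between event data and a profile table. The sketch below is a hypothetical illustration of that matching step, not Meta’s actual pipeline; every field name and record is invented.

```python
# Hypothetical sketch of why Pixel-style events become personally identifiable:
# events carrying a persistent identifier can be joined to a profile table.
# Field names and data are invented; this is not Meta's actual pipeline.

pixel_events = [
    {"cookie_id": "abc123", "site": "hospital.example",
     "action": "ViewContent", "detail": "cardiology-appointment"},
]

social_profiles = {
    "abc123": {"user": "jane.doe", "ad_signals": []},
}

def join_events_to_profiles(events, profiles):
    """Attach each event to the profile sharing its identifier, turning
    'anonymous' browsing activity into targeting signals on a named account."""
    for event in events:
        profile = profiles.get(event["cookie_id"])
        if profile is not None:
            profile["ad_signals"].append(event["detail"])

join_events_to_profiles(pixel_events, social_profiles)
print(social_profiles["abc123"]["ad_signals"])  # ['cardiology-appointment']
```

The same join is innocuous when it only counts page views in aggregate; the controversies described below arise when the event details themselves are sensitive, as with patient portals and video viewing.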

Pixel and Patient Healthcare Information

It’s estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] However, that number may decrease after Meta’s recent data privacy issues. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel improperly retrieved private patient information, including names, conditions, email addresses and more. Meta then targeted hospital website users with ads on Facebook, using the information Pixel collected from hospital websites and patient portals by matching user information with their Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and then added Pixel code to its website to evaluate the effectiveness of the campaign; Pixel proceeded to send private and identifiable user information to Meta.[9] Meta’s co-defendants in another lawsuit, the University of California San Francisco and Dignity Health (“UCSF”), were accused of illegally gathering patient information via Pixel code on their patient portal. Private medical information was then distributed to Meta, and it is claimed that pharmaceutical companies subsequently gained access to this information and sent out targeted ads based on it.[10] That is just one example; all in all, more than 1 million patients have been affected by this Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions alleging that Meta’s Pixel was used in violation of the Video Privacy Protection Act (the “VPPA”). The VPPA was created in the 1980s to cover videotape and audio-visual materials. No longer confined to the rental store, the VPPA has taken on a much broader meaning with the growth of the internet.

These class actions arise from the Pixel tool’s collection of video viewing data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law and many more. The classes allege that the VPPA was violated when video viewing activity was collected in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than collecting it anonymously) so that Pixel users could target their ads at the viewers. Under the VPPA, the defendant in these lawsuits is not Meta but rather each company that shared user information with Meta.[12]

Causes of Action

The relatively new area of data privacy is scarcely litigated by the federal government due to the lack of federal statutes protecting consumer privacy. Because of that, the number of data protection civil litigants can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act, was created in 1996 to protect patient information from disclosure without patient consent. In the patient portal cases, HIPAA actions would have to be initiated by the US government. Claimants are therefore suing Meta under consumer protection and other privacy laws like the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and the Comprehensive Computer Data Access and Fraud Act instead.[14] These state acts allow individuals to sue, whereas under federal acts like HIPAA, the government may move slowly, or not at all. And in the cases of video tracking, the litigants may only sue the video provider, not Meta itself.[15] Despite that wrinkle of benefit to Meta, involvement in more privacy disputes is not ideal for the tech giant, as it may hurt the trustworthiness of Meta Platforms in the eyes of the public.

Possible Outcomes

If found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run in the millions as well and would vary state to state due to the variety of acts under which the claims are brought.[17] Specifically, in the UCSF data case, class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of either hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, these terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent needs to come from forms separate from online user agreements.[19] If more lawsuits emerge due to the actions of Pixel, it is quite possible that companies will move away from such web analytics tools to avoid potential liability. It remains to be seen whether the convenience and utility of Meta Pixel stop being worth the risk that web analytics tools present to websites.

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel’s Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the Latest Hit With a Class Action Over Meta’s Alleged Patient Data Mining, Fierce Healthcare (Aug. 12, 2022, 10:30 AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




Reconsidering Roe: Has the Line of Fetal Viability Moved?

Claire Colby, MJLST Staffer

After the Supreme Court heard arguments in Dobbs v. Jackson Women’s Health on December 1, legal commentators began to speculate that the case could be a vehicle for overturning Roe v. Wade. The Mississippi statute at issue in Dobbs bans nearly all abortions after 15 weeks. In questioning Mississippi Solicitor General Scott Stewart, Justice Sonia Sotomayor asked about the “advancements in medicine” that have changed the lines of viability since the Court last considered a major challenge to Roe with Planned Parenthood v. Casey in 1992. “What has changed in science to show that the viability line is not a real line…?” she asked.

Roe v. Wade was a landmark 1973 decision in which the Supreme Court adopted a trimester framework for abortion. During the first trimester, the Court held that “the abortion decision and its effectuation must be left to the medical judgment of the pregnant woman’s attending physician.” The Court held that states could adopt regulations “reasonably related to maternal health” for abortions after the first trimester, and that in the third trimester, upon viability, states may “regulate, and even proscribe, abortion except where necessary, in appropriate medical judgment, for the preservation of the life or health of the mother.” In 1992, the Court rejected this “rigid trimester” framework in Planned Parenthood v. Casey. In Casey, the Court turned to a viability framework and found that pre-viability, states may not prohibit abortion or impose “a substantial obstacle to the woman’s effective right to elect the procedure.” The Court adopted an “undue burden” standard to determine whether state regulations of pre-viability abortion are unconstitutional.

In Casey, the Court defined viability as “the time at which there is a realistic possibility of maintaining and nourishing a life outside the womb.” So when do medical professionals consider a fetus viable? The threshold has moved earlier in the gestation period since the 1970s, but experts disagree on where to draw the line. According to a journal article published in 2018 in Women’s Health Issues, in 1971, fetal age of approximately 28 weeks was “widely used as the criterion of viability.” The article said that until recently, 24 weeks of gestation was the “widely accepted cutoff for viability in the highest acuity neonatal intensive care units.” According to the article, babies born as early as 22 weeks of gestation had an “overall survival rate of 23%” with “the most aggressive medical management available.” The article rejected the idea of tying abortion restrictions to viability at all: “Tying abortion provisions to the word viability today is as misguided as it was to tie it to a specific trimester in 1973,” the article stated. “There was no true definition of viability then, and as long as medicine strives to treat every patient uniquely, there will never be one.”

A 2017 practice alert published in the official journal of the American College of Obstetricians and Gynecologists defined “periviable” births—births occurring “near the limit of viability”—as births occurring between 20 and 26 weeks gestation.

According to a 2020 New York Times article, determinations of the gestational age at which a baby is likely to survive outside of the womb are “in a complex moment of transition.” Though technology has improved, “even top academic institutions disagree about the right approach to treating 22- and 23-week babies.” The article reported that the University of California, San Francisco, “a top-tier, high resource hospital,” is “transparent about its policy of offering only comfort care for babies that are born up to the first day of the 23rd week, down to the hour.”

In June 2020, a baby born at the Children’s Hospital and Clinics of Minnesota set the world record for the world’s most premature baby to survive, the Washington Post reported. He was born at 21 weeks and two days gestation.

Several medical developments help to explain this earlier period of viability.

According to a 2020 Nature article, “the biggest difference to survival came in the early 1990s with surfactant treatment.” Surfactant is a “slippery substance” that prevents airways from collapsing upon exhalation. According to Kaiser, premature babies with underdeveloped lungs often lack the substance. “When premature lungs are treated with surfactant after birth, the infant’s blood oxygen levels usually improve within minutes.”

According to a 2018 study published in the Journal of the American Medical Association, administering prenatal steroids to mothers between 22 and 25 weeks gestation prior to delivery led to a “significantly higher” survival rate, but “survival without major morbidities remains low at 22 and 23 weeks.”

The Dobbs ruling is not expected until this summer, when the Court tends to release its major decisions. Even if the Court maintains the viability standard set forth in Casey, recent medical advances may warrant more consideration about where to draw this line.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into a single class action overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs are from either Illinois or California, and many are minors. The class action comprises two classes: one covering TikTok users nationwide and another limited to TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. Among other things, plaintiffs accuse TikTok of sharing consumer data with third parties without consent. These third parties allegedly include companies based in China as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Computer Data Access and Fraud Act and a violation of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations of the Biometric Information Privacy Act (BIPA). Under BIPA, before collecting a user's biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and obtain the consumer's written consent to the collection. The complaint alleges TikTok did not provide the required notice or obtain the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident who used the app prior to October 2021. Eligible individuals must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, compared to the one share available to members of the nationwide class. This difference reflects the added protection the Illinois subclass has under BIPA.
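Because the settlement is divided by shares rather than fixed amounts, what any claimant actually receives depends on how many claims are filed in each class. Below is a minimal sketch of that arithmetic. The $92 million fund and the 6:1 share ratio come from the settlement as described above; the fee deduction and claim counts are purely hypothetical placeholders, since the actual net fund and claim rates were not public at the time of writing.

```python
# Hedged sketch of the share-based allocation described above.
# The $92 million fund and 6:1 share ratio are from the settlement;
# the fee percentage and claim counts below are hypothetical.

GROSS_FUND = 92_000_000        # total settlement fund
NET_FUND = GROSS_FUND * 0.70   # hypothetical fund after fees and costs

NATIONWIDE_SHARES = 1          # one share per nationwide class claimant
ILLINOIS_SHARES = 6            # six shares per Illinois (BIPA) subclass claimant

def per_claimant_payout(nationwide_claims: int, illinois_claims: int):
    """Return (nationwide payout, Illinois payout) per claimant."""
    total_shares = (nationwide_claims * NATIONWIDE_SHARES
                    + illinois_claims * ILLINOIS_SHARES)
    share_value = NET_FUND / total_shares
    return share_value * NATIONWIDE_SHARES, share_value * ILLINOIS_SHARES

# Hypothetical example: 1.2 million nationwide claims, 300,000 Illinois claims.
nat, ill = per_claimant_payout(1_200_000, 300_000)
print(f"Nationwide claimant: ${nat:.2f}; Illinois claimant: ${ill:.2f}")
```

The sketch illustrates why individual payouts shrink as the claim rate rises: the fund is fixed, so every additional claim dilutes the value of a single share, while Illinois claimants always receive six times whatever a share turns out to be worth.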

In addition to the payout, the settlement requires TikTok to revise its practices. Under the agreed-upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said it would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. district judge overseeing the case.

Conclusion

The lawyers representing TikTok users remarked that this settlement was "among the largest privacy-related payouts in history." And, as noted by NPR, this settlement is similar to the $650 million settlement Facebook agreed to in 2020. It is possible the size of these settlements will push technology companies to preemptively seek out and cease practices that may violate privacy law.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their "For You Page" – the videos that appear when you open the app and scroll without doing any particular search – seems to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier knowing TikTok has agreed to the settlement reforms.


The StingRay You’ve Never Heard Of: How One of the Most Effective Tools in Law Enforcement Operates Behind a Veil of Secrecy

Dan O’Dea, MJLST Staffer

One of the most effective investigatory tools in law enforcement has operated behind a veil of secrecy for over 15 years. "StingRay" cell phone tower simulators are used by law enforcement agencies to locate and apprehend violent offenders, track persons of interest, monitor crowds when intelligence suggests threats, and intercept signals that could activate devices. When passively operating, StingRays mimic cell phone towers, forcing all nearby cell phones to connect to them while extracting data such as call metadata, text messages, internet traffic, and location information, even when a connected phone is powered off. They can also inject spying software into phones and prevent phones from accessing cellular data. StingRays were initially used overseas by federal law enforcement agencies to combat terrorism before spreading into the hands of the Department of Justice and the Department of Homeland Security; they are now actively used by local law enforcement agencies in 27 states to solve everything from missing persons cases to thefts of chicken wings.

The use of StingRay devices is highly controversial due to their intrusive nature. Not only does the use of StingRays raise privacy concerns, but tricking phones into connecting to StingRays mimicking cell phone towers also prevents those phones from accessing legitimate cell service towers, which can obstruct access to 911 and other emergency hotlines. Perplexingly, the use of StingRay technology by law enforcement is almost entirely unregulated. Local law enforcement agencies frequently cite secrecy agreements with the FBI and the need to protect an investigatory tool as means of denying the public information about how StingRays operate, and criminal defense attorneys have almost no means of challenging their use without this information. While the Department of Justice now requires that federal agents obtain a warrant to use StingRay technology in criminal cases, an exception is made for matters relating to national security, and the technology may have been used under this exception to spy on racial-justice protestors during the summer of 2020. Local law enforcement agencies are almost completely unrestricted in their use of StingRays, and may even conceal their use in criminal prosecutions by tagging their findings as those of a "confidential source" rather than admitting the use of a controversial investigatory tool. Doing so allows prosecutors to avoid battling Fourth Amendment arguments characterizing data obtained by StingRays as the product of an unlawful search and seizure.

After the technology had existed in a "legal no-man's land" since its inception, Senator Ron Wyden (D-OR) and Representative Ted Lieu (D-CA) sought to put an end to the secrecy of StingRays by introducing the Cell-Site Simulator Warrant Act of 2021 in June of 2021. The bill would have mandated that law enforcement agencies obtain a warrant to investigate criminal activity before deploying StingRay technology, while also requiring law enforcement agencies to delete the data of phones other than those of investigative targets. Further, the legislation would have required agencies to demonstrate a need to use StingRay technology that outweighs any potential harm to the community impacted by the technology. Finally, the bill would have limited authorized use of StingRay technology to the minimum amount of time necessary to conduct an investigation. However, the Cell-Site Simulator Warrant Act of 2021 appears to have died in committee after failing to garner significant legislative support.

Ultimately, no device with the intrusive capabilities of StingRays should be allowed to operate free from the constraints of regulation. While StingRays are among the most effective tools utilized by law enforcement, they are also among the most intrusive into the privacy of the general public. It logically follows that agencies seeking to operate StingRays should be required to make a showing of a need to utilize such an intrusive investigatory tool. In certain situations, it may be easy to establish the need to deploy a StingRay, such as doing so to further the investigation of a missing persons case. In others, law enforcement agencies would correctly find their hands tied should they wish to utilize a StingRay to catch a chicken wing thief.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought much about how growing up in the digital age affected me, but looking back, it is easy to see the cultural red flags. It came as no surprise to me when, this fall, the Wall Street Journal broke what has been dubbed "The Facebook Files," reporting that included an internal company study showing Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late Millennials have known for years. However, the "Facebook Files" contain another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal troubles the company may find itself in and of the proposed solutions to the "whitelisting" problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” Those exempted span a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, submitted a complaint to the Securities and Exchange Commission (SEC) alleging that Facebook “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 CFR § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements regarding Facebook’s neutral application of standards that are directly at odds with the Facebook Files. Regardless of any potential SEC investigation, the whitelist has opened up a conversation about the need for serious reform in the big tech arena to ensure no company can maintain lists of privileged users again. All of the potential solutions deal with 47 U.S.C. § 230, known colloquially as “Section 230.”

Section 230 allows big tech companies to censor content while still being treated as platforms rather than publishers (who would incur liability for what is on their websites). Specifically, § 230(c)(2)(A) provides that no “interactive computer service” shall be held liable for taking action in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have used as justification for removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from Section 230 itself. The proposed “Stop the Censorship Act of 2020,” brought by Republican Paul Gosar of Arizona, does just that. Proponents argue it would force tech companies to be neutral or lose liability protections. Thus, no big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the standard would hew close to First Amendment protections – problem solved! However, the current governing majority has serious concerns about forced neutrality, which would ignore problems of misinformation and the mental health effects of social media in the aftermath of January 6th.

Elizabeth Warren, similar to a recent proposal in the House Judiciary Committee, takes a different approach: breaking up big tech. Warren proposes passing legislation to limit big tech companies from competing with the small businesses that use their platforms and to reverse or block mergers, such as Facebook’s purchase of Instagram. Her plan does not necessarily stop companies from keeping whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before unevenly applying their rules. Furthermore, Warren has called for regulators to use “every tool in the toolbox” with regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. The argument goes that the government cannot “induce” or “encourage” private persons to do what the government itself cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Because some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block out constitutionally protected speech. See Railway Employes’ Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual government mandate). If the Supreme Court were to adopt this reasoning, Facebook might be forced to adopt a First Amendment-centric approach, since its current hate speech and misinformation rules would constitute state action; whitelists would no longer be needed because companies would be barred from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach; needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear whether the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Beyond the potential SEC violations, Section 230 is a complicated but necessary issue Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play Farmville has become a machine with rules for me, but not for thee.