Privacy

Examining the Constitutionality of Ohio’s New Obscene Material Age Verification Law

Fide Valverde-Rivera, MJLST Staffer

In September 2025, an Ohio law requiring websites that purvey obscene material to verify users’ ages went into effect.[1] Although the law sought to regulate pornographic material and the platforms that distribute it, it erroneously exempts some of the largest pornographic websites while mandating compliance by general-purpose social media sites. Because of this unintended consequence, the law is very likely unconstitutional.

 

General Overview of the New Law and Implementation Problems

Ohio’s new age verification law requires platforms that provide “any material or performance that is obscene or harmful to juveniles” to verify users’ ages.[2] The law exempts “providers of ‘an interactive computer service,’ which is defined . . . as having the same meaning as it does under federal law” from having to comply with the age verification requirements.[3] Federal law defines an “interactive computer service” to include “any platform where third parties can create accounts and can generate content, from social media sites to dating apps, message boards, classified ads, search engines, comment sections, and much more.”[4] Platforms like Pornhub and OnlyFans, two major pornography websites, arguably fall within this definition and qualify for the exemption.[5] Accordingly, Pornhub and OnlyFans are not conducting age verification for Ohio users.[6] However, general-purpose social media platforms like Bluesky—a type of platform lawmakers said would be outside of the law’s scope—have been mandated to begin age verification.[7]

 

Constitutional Considerations

The first step in evaluating the constitutionality of this law requires determining the appropriate level of scrutiny with which it should be examined. In Free Speech Coalition, Inc. v. Paxton, the Supreme Court held that “because accessing material obscene to minors without [age verification] is not [a] constitutionally protected [activity], any burden [an age verification law] imposes on protected activity is only incidental, and the statute triggers only intermediate scrutiny.”[8] It held that it was not subject to strict scrutiny because “speech that is obscene to minors is unprotected to the extent that [a] State imposes an age-verification requirement” and “where the speech in question is unprotected, States may impose ‘restrictions’ based on ‘content’ without triggering strict scrutiny.”[9]

Under intermediate scrutiny, the Supreme Court in Paxton found the Texas age-verification law constitutional for two reasons.[10] First, the law served an important government interest: shielding children from sexual content.[11] Second, the law was adequately tailored in that “the government’s interest ‘would [have been] achieved less effectively absent the regulation’ and the regulation ‘[did] not burden substantially more speech than is necessary to further that interest.’”[12] Age verification laws are a constitutionally settled way to protect children from obscene material, and Texas’s preferred approach was valid.[13] The Supreme Court in Paxton also held the statute’s targeting of certain sites did not render it unconstitutional because “it [was] reasonable for Texas to conclude that websites with a higher portion of sexual content are more inappropriate for children to visit than those with a lower proportion.”[14]

 

Bottom Line

Here, Ohio’s age-verification law is very likely unconstitutional because it fails to shield children from sexual content. The law is underinclusive: the platforms with higher proportions of sexual content, its intended targets, fall outside its scope, so it is not adequately tailored to survive intermediate scrutiny. It is also overinclusive: social media sites on which obscene content generally represents a minority of the content are bound by the law. Based on these shortcomings, lawmakers and judicial officers alike should anticipate interested parties advancing a facial challenge to the law’s constitutionality under the First Amendment. Further, platforms like Bluesky may advance an as-applied challenge by noting that the law—although written to target pornography websites without “ensnar[ing] social media platforms”—fails to achieve its articulated objectives.[15]

 

Notes

[1] Ohio Rev. Code § 1349.10(B) (2025).

[2] Id.

[3] Elizabeth Nolan Brown, Whoops—Ohio Accidentally Excludes Most Major Porn Platforms from Anti-Porn Law, Reason (Oct. 6, 2025, 11:45 AM), https://reason.com/2025/10/06/whoops-ohio-accidentally-excludes-most-major-porn-platforms-from-anti-porn-law/.

[4] Id.

[5] Id.

[6] See id. (“I’m assuming that the exclusion of Pornhub was not intentional, given the way this law’s supporters talked about as a shield against Ohio minors being able to see any sexually oriented material online. One of the law’s biggest proponents, state Rep. Josh Williams (R-Sylvania), has talked about how it would not ensnare social media platforms even though they may contain porn, so perhaps the exclusion of interactive computer services was intended for that purpose. But most major web-porn access points, including OnlyFans and webcamming platforms, also fall under the definition of interactive computer service.”).

[7] See Morgan Trau, Do You Live in Ohio? Do You Watch Porn Online? Your State Legislature Wants to See Some ID, Ohio Cap. J. (Oct. 1, 2025, 4:45 AM), https://ohiocapitaljournal.com/2025/10/01/do-you-live-in-ohio-do-you-watch-porn-online-your-state-legislature-wants-to-see-some-id/ (“[Rep. Josh] Williams said that this [law] won’t impact social media sites like X (formerly known as Twitter) and Reddit, even though both of those platforms contain easily-accessible pornography”); @psychic_twin, Reddit (Sept. 29, 2025, 2:00 PM), https://www.reddit.com/r/Ohio/comments/1ntqr4w/ohio_age_verification_notice_on_bluesky/ (sharing how Bluesky required Ohio users to complete age assurances because “[t]he laws in [the user’s] location require[d] [them] to verify [they’re] an adult before accessing certain features on Bluesky, like adult content and direct messaging”).

[8] Free Speech Coalition, Inc. v. Paxton, 606 U.S. 461, 483 (2025).

[9] Id. at 492.

[10] Id. at 495–96.

[11] Id. at 496.

[12] Id.

[13] Id. at 496–97 (“The specific verification methods that H.B. 1181 permits are also plainly legitimate. At present, H.B. 1181 allows for verification using government-issued identification or transactional data. Verification can take place on the covered website itself or through a third-party service. Other age-restricted services, such as online gambling, alcohol and tobacco sales, and car rentals, rely on the same methods. And, much of the online pornography industry has used analogous methods for decades . . . . H.B. 1181 simply requires established verification methods already in use by pornographic sites and other industries. That choice is well within the State’s discretion under intermediate scrutiny.” (internal citations omitted)).

[14] Id.

[15] Nolan Brown, supra note 3.


Caught in the Digital Dragnet: The Controversy Over Geofence Warrants and Privacy Rights

Yaoyu Tang, MJLST Staffer

 

Picture this: A sunny Saturday afternoon at a bustling shopping mall. Children’s laughter echoes as they pull their parents toward an ice cream stand. Couples meander hand-in-hand past glittering storefronts, while teenagers crowd the food court, joking and snapping selfies. It’s a portrait of ordinary life—until chaos quietly unfolds. A thief strikes a high-end jewelry store and vanishes into the crowd, leaving no trail behind. Frustrated and out of options, law enforcement turns to a geofence warrant, demanding Google provide location data for every smartphone within a quarter-mile radius during the heist. In the days that follow, dozens of innocent shoppers, workers, and passersby find themselves under scrutiny, their routines disrupted simply for being in the wrong place at the wrong time.

This story is not hypothetical—it mirrors real-life cases where geofence warrants have swept innocent individuals into criminal investigations, raising significant concerns about privacy rights and constitutional protections.

Geofence warrants are a modern investigative tool used by law enforcement to gather location data from technology companies.[1] These warrants define a specific geographic area and time frame, compelling companies like Google to provide anonymized location data from all devices within that zone.[2] Investigators then sift through this data to identify potential suspects or witnesses, narrowing the scope to relevant individuals whose movements align with the crime scene and timeline.[3]
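The filtering step described above—pull all location pings inside a geographic radius and time window, then reduce them to a list of device identifiers—can be sketched in a few lines of Python. This is an illustrative sketch only: the data model, field names, and quarter-mile radius are hypothetical, not an actual warrant-response format used by Google or any court.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Hypothetical illustration of a geofence query; the Ping model and
# field names are assumptions for this sketch, not a real provider API.

@dataclass
class Ping:
    device_id: str   # anonymized identifier
    lat: float
    lon: float
    ts: datetime

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))

def geofence_hits(pings, center_lat, center_lon, radius_miles, start, end):
    """Return device IDs with at least one ping inside the fence and window."""
    return sorted({
        p.device_id
        for p in pings
        if start <= p.ts <= end
        and haversine_miles(p.lat, p.lon, center_lat, center_lon) <= radius_miles
    })
```

Note what the sketch makes visible: the query is keyed to a place and a time, not to a suspect, so every device owner inside the fence is swept in. In practice providers have used a multi-step process (an anonymized list first, then deanonymization of a narrowed subset), which this single filtering pass only approximates.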

The utility of geofence warrants is undeniable. They have been instrumental in solving high-profile cases, such as identifying suspects in robberies, assaults, and even the January 6 Capitol riots.[4] By providing a way to access location data tied to a specific area, geofence warrants enable law enforcement to find leads in cases where traditional investigative techniques might fail.[5] These tools are particularly valuable in situations where there are no direct witnesses or physical evidence, allowing law enforcement to piece together events and identify individuals who were present during criminal activity.[6]

However, the benefits of geofence warrants come with significant risks. Critics argue that these warrants are overly broad and invasive, sweeping up data on innocent bystanders who happen to be in the area.[7] Civil liberties organizations such as the ACLU and the Electronic Frontier Foundation (EFF) have strongly criticized geofence warrants, arguing that they infringe on privacy rights and disproportionately affect marginalized communities.[8] Without strict limitations, they warn, geofence warrants could become tools of mass surveillance, chilling free movement and association.[9] Moreover, this indiscriminate collection of location data raises serious Fourth Amendment concerns: it can be seen as a form of digital general warrant—a modern equivalent of the invasive searches that the Framers sought to prevent.[10] The tension between investigative utility and the potential for privacy violations has made geofence warrants one of the most controversial tools in modern law enforcement.

The legality of geofence warrants is far from settled, with courts offering conflicting rulings. In United States v. Smith, the Fifth Circuit declared geofence warrants unconstitutional, stating that they amount to general searches.[11] The court emphasized the massive scope of data collected and likened it to rummaging through private information without sufficient cause.[12] The decision relied heavily on Carpenter v. United States, where the Supreme Court held that accessing historical cell-site location information without a warrant violates the Fourth Amendment.[13] In Carpenter, the Court recognized that cell-site location information (CSLI) provides an intimate record of a person’s movements, revealing daily routines, frequent locations, and close personal associations.[14] Accessing this information, the Court held, constitutes a “search” within the meaning of the Fourth Amendment, requiring a warrant supported by probable cause.[15] Conversely, the Fourth Circuit in United States v. Chatrie upheld the use of geofence warrants, reasoning that users implicitly consent to data collection by agreeing to terms of service with tech companies.[16] The court leaned on the third-party doctrine, which holds that individuals have reduced privacy expectations for information shared with third parties.[17] These conflicting rulings highlight the broader struggle to apply traditional Fourth Amendment principles to digital technologies. The Fifth Circuit’s ruling reflects discomfort with the vast reach of geofence warrants, pointing to their lack of Fourth Amendment particularity.[18] The Fourth Circuit’s reliance on the third-party doctrine, by contrast, broadens law enforcement access, framing user consent as a waiver of privacy.[19] This split leaves courts struggling to reconcile privacy with evolving surveillance technology, underscoring the urgent need for clearer standards.

Tech companies like Google play a pivotal role in the geofence warrant debate. Historically, Google stored user location data in a vast internal database known as Sensorvault.[20] This database served as a central repository for location data collected from various Google services, including Google Maps, Search, and Android devices.[21] Law enforcement agencies frequently sought access to this data in response to geofence warrants, making Sensorvault a crucial point of contention in the legal and privacy debates surrounding this technology.[22] However, in 2023, Google announced significant changes to its data policies: location data would be stored on user devices instead of the cloud, backed-up data would be encrypted to prevent unauthorized access, including by Google itself, and default auto-delete settings for location history would reduce data retention from 18 months to three months.[23] These policy changes significantly limit the availability of location data for law enforcement agencies seeking to execute geofence warrants.[24] By storing data locally on user devices and implementing robust encryption and auto-deletion features, Google has effectively reduced the amount of location data accessible to law enforcement.[25] This highlights the significant influence that corporate data policies can exert on law enforcement practices.[26] Other companies, like Apple, have adopted even stricter privacy measures, refusing to comply with all geofence warrant requests.[27]
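The practical effect of the shorter default retention window can be pictured as a simple on-device pruning pass. The sketch below is an illustration under stated assumptions—the function name, data layout, and pruning mechanics are not Google’s actual implementation; only the 18-month-to-three-month change mirrors the policy described above.

```python
from datetime import datetime, timedelta

# Illustrative sketch of a default auto-delete retention policy;
# names and mechanics are hypothetical, not Google's implementation.
RETENTION = timedelta(days=90)  # roughly the new 3-month default, down from ~18 months

def prune_history(entries, now):
    """Keep only location entries newer than the retention window.

    `entries` is a list of (timestamp, payload) tuples stored on-device;
    anything older than the cutoff is dropped and never leaves the device.
    """
    cutoff = now - RETENTION
    return [(ts, payload) for ts, payload in entries if ts >= cutoff]
```

The design point the sketch captures is that retention is enforced before any legal process arrives: data older than the window simply does not exist to be produced in response to a warrant.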

The debate surrounding the legality and scope of geofence warrants remains contentious. Courts grapple with varying interpretations, legislators struggle to enact comprehensive legislation, and public opinion remains divided. This uncertainty necessitates authoritative guidance. Whether through judicial precedent, legislative reform, or technological advancements that mitigate privacy concerns, achieving a consensus on the permissible use of geofence warrants is crucial. Only with such a consensus can society navigate the delicate balance between public safety and individual privacy rights in the digital era.

 

Notes:

[1] Ronald J. Rychlak, Geofence Warrants: The New Boundaries, 93 Miss. L. Rev. 957, 957–59 (2024).

[2] Id.

[3] Id.

[4] Mark Harris, A Peek Inside the FBI’s Unprecedented January 6 Geofence Dragnet, Wired (Nov. 28, 2022, 7:00 AM), https://www.wired.com/story/fbi-google-geofence-warrant-january-6/.

[5] Jeff Welty, Recent Developments Concerning Geofence Warrants, N.C. CRIM. L. (Nov. 4, 2024), https://nccriminallaw.sog.unc.edu/recent-developments-concerning-geofence-warrants/.

[6] Prathi Chowdri, Emerging Tech and Law Enforcement: What Are Geofences and How Do They Work, Police1 (Nov. 16, 2023, 9:06 PM), https://www.police1.com/warrants/google-announces-it-will-revoke-access-to-location-history-effectively-blocking-geofence-warrants.

[7] Jennifer Lynch, Is This the End of Geofence Warrants?, Elec. Frontier Found., https://www.eff.org/deeplinks/2023/12/end-geofence-warrants.

[8] ACLU, ACLU Argues Evidence from Privacy-Invasive Geofence Warrants Should Be Suppressed, https://www.aclu.org/press-releases/aclu-argues-evidence-from-privacy-invasive-geofence-warrants-should-be-suppressed.

[9] Lynch, supra note 7.

[10] Id.

[11] United States v. Smith, 110 F.4th 817 (5th Cir. 2024).

[12] Id. at 28-30.

[13] Id. at 27-29.

[14] Carpenter v. United States, 585 U.S. 296 (2018).

[15] Id.

[16] United States v. Chatrie, 107 F.4th 319 (4th Cir. 2024).

[17] Id. at 326-57.

[18] Smith, 110 F.4th at 27–30.

[19] Chatrie, 107 F.4th at 326–57.

[20] Jennifer Lynch, Google’s Sensorvault Can Tell Police Where You’ve Been, Elec. Frontier Found., https://www.eff.org/deeplinks/2019/04/googles-sensorvault-can-tell-police-where-youve-been.

[21] Id.

[22] Id.

[23] Skye Witley, Google’s Location Data Move Will Reshape Geofence Warrant Use, Bloomberg L. (Dec. 20, 2023, 4:05 AM), https://news.bloomberglaw.com/privacy-and-data-security/googles-location-data-move-will-reshape-geofence-warrant-use.

[24] Id.

[25] Id.

[26] Id.

[27] Apple, Apple Transparency Report: Government and Private Party Requests, https://www.apple.com/legal/transparency/pdf/requests-2022-H1-en.pdf.


AI and Predictive Policing: Balancing Technological Innovation and Civil Liberties

Alexander Engemann, MJLST Staffer

To maximize their effectiveness, police agencies are constantly looking to use the most sophisticated preventative methods and technologies available. Predictive policing is one such technique that fuses data analysis, algorithms, and information technology to anticipate and prevent crime. This approach identifies patterns in data to anticipate when and where crime will occur, allowing agencies to take measures to prevent it.[1] Now, engulfed in an artificial intelligence (“AI”) revolution, law enforcement agencies are eager to take advantage of these developments to augment controversial predictive policing methods.[2]

In precincts that use predictive policing strategies, large amounts of data are used to categorize citizens by basic demographic information.[3] Now, machine learning and AI tools augment this data, which, according to one vendor, “identifies where and when crime is most likely to occur, enabling [law enforcement] to effectively allocate [their] resources to prevent crime.”[4]

Both predictive policing and AI have faced significant challenges concerning equity and discrimination. In response to these concerns, the European Union has taken proactive steps, promulgating sophisticated rules governing AI applications within its territory and continuing its tradition of leading regulatory initiatives.[5] In the resulting “Artificial Intelligence Act,” the Union clearly outlined its goal of promoting safe, non-discriminatory AI systems.[6]

Back home, we’ve failed to keep a similar legislative pace, even with certain institutions sounding the alarm.[7] Predictive policing methods have faced similar criticism. In an issue brief, the NAACP emphasized, “[j]urisdictions who use [Artificial Intelligence] argue it enhances public safety, but in reality, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”[8] This technological and ideological marriage clearly poses discriminatory risks in a nation where a black person is already far more likely than their white counterparts to be stopped without just cause.[9]

Police agencies are bullish about the technology. Police Chief Magazine, the official publication of the International Association of Chiefs of Police, paints these techniques in a more favorable light, stating, “[o]ne of the most promising applications of AI in law enforcement is predictive policing. . . . Predictive policing empowers law enforcement to predict potential crime hotspots, ultimately aiding in crime prevention and public safety.”[10] In this space, facial recognition software is gaining traction among law enforcement agencies as a powerful tool for identifying suspects and enhancing public safety. Clearview AI, for example, stresses that its product “[helps] law enforcement and governments in disrupting and solving crime.”[11]

Predictive policing methods enhanced by AI technology show no signs of slowing down.[12] The advantages of these systems cannot be ignored: they allow agencies to better allocate resources and manage their staff. However, as law enforcement agencies adopt these technologies, it is important to remain vigilant in holding them accountable for the ethical implications and biases embedded within their systems. A comprehensive framework for accountability and transparency, similar to the European Union’s guidelines, must be established to ensure that deploying predictive policing and AI tools does not come at the expense of marginalized communities.[13]

 

Notes

[1] Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265–67 (2012).

[2] Eric M. Baker, I’ve got my AI on You: Artificial Intelligence in the Law Enforcement Domain, 47 (Mar. 2021) (Master’s thesis).

[3] Id. at 48.

[4] Id. at 49 (citing Walt L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RR-233-NIJ (Santa Monica, CA: RAND, 2013), 4, https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf).

[5] Commission Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.

[6] Lukas Arnold, How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination, Penn. J. L. & Soc. Change (May 14, 2024), https://www.law.upenn.edu/live/news/16742-how-the-european-unions-ai-act-provides#_ftn1.

[7] See Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633, 664 (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5445&context=flr (“Database screening and digital watchlisting systems, in fact, can serve as complementary and facially colorblind supplements to mass incarcerations systems. The purported colorblindness of mandatory sentencing . . . parallels the purported colorblindness of mandatory database screening and vetting systems.”).

[8] NAACP, Issue Brief: The Use of Artificial Intelligence in Predictive Policing, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 2, 2024).

[9] Will Douglas Heaven, Predictive Policing Algorithms Are Racist. They Need to Be Dismantled., MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (citing OJJDP Statistical Briefing Book, Estimated Number of Arrests by Offense and Race, 2020, https://ojjdp.ojp.gov/statistical-briefing-book/crime/faqs/ucr_table_2 (released July 8, 2022)).

[10] See The Police Chief, Int’l Ass’n of Chiefs of Police, https://www.policechiefmagazine.org (last visited Nov. 2, 2024); Brandon Epstein, James Emerson & ChatGPT, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives, Police Chief Online (Apr. 3, 2024).

[11] Clearview AI, https://www.clearview.ai/ (last visited Nov. 3, 2024).

[12] But see Nicholas Ibarra, Santa Cruz Becomes First US City to Approve Ban on Predictive Policing, Santa Cruz Sentinel (June 23, 2020), https://evidentchange.org/newsroom/news-of-interest/santa-cruz-becomes-first-us-city-approve-ban-predictive-policing/.

[13] See also Roy Maurer, New York City to Require Bias Audits of AI-Type HR Technology, Society of Human Resources Management (December 19, 2021), https://www.shrm.org/topics-tools/news/technology/new-york-city-to-require-bias-audits-ai-type-hr-technology.

 


What Happens to Your Genetic Data in a Sale or Acquisition?

Colin Loyd, MJLST Staffer

Remember 23andMe—the genetic testing company that skyrocketed to prominence in the 2010s by offering relatively inexpensive access to genetic testing? It’s now heading toward disaster. This September, all but one member of its board of directors tendered their resignations.[1] At the close of that day’s trading, 23andMe’s share price was $0.35, representing a 99.9% decline in valuation from its 2021 peak.[2] This collapse suggests the company may declare bankruptcy, which often leads to a sale of the company’s assets. Bankruptcy or a sale of assets presents a host of complex privacy and regulatory issues, particularly concerning 23andMe’s most valuable asset: its vast collection of consumer DNA data.[3] This uncertain situation underscores serious gaps in privacy protections for genetic information, gaps that leave consumers’ sensitive genetic data vulnerable to misuse and exploitation.

23andMe collects and stores massive amounts of user genetic information. However, unlike healthcare providers, 23andMe does not have to comply with the stringent privacy regulations set out in the Health Insurance Portability and Accountability Act (HIPAA).[4] While HIPAA is designed to protect sensitive health data, its protections apply only to a small subset of healthcare-related entities.[5] HIPAA regulates the use of genetic information only by a “group health plan[], health insurance issuer[] that issue[s] health insurance coverage, or issuer[] of a medicare supplemental policy.”[6] 23andMe does not fit into any of these categories and therefore operates outside the scope of HIPAA’s protections, leaving the genetic information it holds largely unregulated.

The Genetic Information Nondiscrimination Act (GINA), enacted in 2008, offers consumer protections by prohibiting discrimination based on an individual’s genetic information with respect to health insurance premium amounts or eligibility requirements for health insurance.[7] GINA also prohibits any deprivation of employment opportunities based on genetic information.[8] However, GINA’s protections do not extend to life insurance, disability insurance, or long-term care insurance.[9] This leaves a gap where genetic information may be used against individuals by entities not subject to GINA.

This regulatory gap is a major concern for consumers, especially with a potential bankruptcy sale looming. If 23andMe sells its assets, including its database of genetic information, the new owner would not have to adhere to the same privacy commitments made by 23andMe. For example, 23andMe promises not to use genetic information it receives for personalized or targeted marketing/advertising without a user’s express consent.[10] This policy likely reflects 23andMe’s efforts to comply with the California Privacy Rights Act (CPRA), which grants consumers the right to direct a business to not share or sell their personal information.[11] However, this right under the CPRA is an opt-out right—not an opt-in right—meaning consumers can stop a future sale of their information but by default there is no initial, regulatory limit on the sale of their personal information.[12] As a result, there’s nothing stopping 23andMe from altering its policies and changing how it uses genetic information. In fact, 23andMe’s Privacy Statement states it “may make changes to this Privacy Statement from time to time.”[13] Any such change would likely be binding if it is clearly communicated to users.[14] 23andMe currently lists email or an in-app notification as methods it may notify its users of any change to the Privacy Statement.[15] If it does so, it’s highly possible a court would view this as “clear communication” and there would be little legal recourse for users to prevent their genetic information from being used in ways they did not anticipate, such as for research or commercial purposes.

For example, say a life insurance company acquires an individual’s genetic data through the purchase of 23andMe’s assets. It could potentially use that data to make decisions about coverage or premiums, even though GINA prohibits health insurers from doing the same.[16] This loophole highlights the danger of genetic information landing in the hands of entities not bound by strict privacy protections.

In the event of an acquisition or bankruptcy, 23andMe’s Privacy Statement outlines that personal information, including genetic data, may be among the assets sold or transferred to the new entity.[17] In such a case, the new owner could inherit both the data and the rights to use it under the existing terms, including the ability to modify how the data is used. This could result in uses not originally intended by the user so long as the change is communicated to the user.[18] This transfer clause highlights a key concern for users because it allows their deeply personal genetic data to be passed to another company without additional consent, potentially subjecting them to exploitation by organizations with different data usage policies or commercial interests. While 23andMe must notify users about any changes to the Privacy Statement or its use of genetic information, it does not specify whether the notice will be given in advance.[19] Any new entity could plan a change to the Privacy Statement terms, altering how it uses genetic information while leaving users in the dark until the change is communicated, at which point their information may already have been shared with third parties.

The potential 23andMe bankruptcy and sale of assets reveals deep flaws in the current regulatory system governing genetic data privacy. Without HIPAA protections, consumers risk their sensitive genetic information being sold or misused in ways they cannot control. GINA, while offering some protections, still leaves significant gaps, especially in life and disability insurance. As the demand for genetic testing continues to grow, the vulnerabilities exposed by 23andMe’s financial troubles highlight the urgent need for better privacy protections. Consumers must be made aware of the risks involved in sharing their genetic data, and regulatory measures must be strengthened to ensure this sensitive information is not misused or sold without their explicit consent.

 

Notes

[1] Independent Directors of 23andMe Resign from Board, 23andMe (Sept. 17, 2024) https://investors.23andme.com/news-releases/news-release-details/independent-directors-23andme-resign-board.

[2] Rolfe Winkler, 23andMe Board Resigns in New Blow to DNA-Testing Company, WALL ST. J. (Sept. 18, 2024) https://www.wsj.com/tech/biotech/23andme-board-resigns-in-new-blow-to-dna-testing-company-12f1a355.

[3] Anne Wojcicki (the last remaining board member) has consistently publicized her plan to take the company private, which is looming larger given the current state of the business financials. Id.

[4] See 42 U.S.C. § 1320d-9(a)(2).

[5] See generally 42 U.S.C. §1320d et seq.

[6] 42 U.S.C. § 1320d-9(a)(2).

[7] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881.

[8] Id.

[9] Jessica D. Tenenbaum & Kenneth W. Goodman, Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-term Care and Life Insurance, 14 Personalized Med. 153, 154 (2017).

[10] How We Use Your Information, 23andMe, https://www.23andme.com/legal/how-we-use-info/ (last visited Oct. 14, 2024).

[11] Cal. Civ. Code § 1798.120(a) (Deering 2024).

[12] Id.

[13] Privacy Statement, 23andMe (Sept. 24, 2024) https://www.23andme.com/legal/privacy/full-version/.

[14] See Lee v. Ticketmaster LLC, 817 F. App’x 393 (9th Cir. 2019) (upholding terms of use where notice was clearly given to the user, even if the user did not check a box assenting to the terms).

[15] Privacy Statement, supra note 13.

[16] See K.S.A. § 40-2259(c)-(d) (carving out the ability for life insurance policies to take into account genetic information when underwriting the policy).

[17] Privacy Statement, supra note 13.

[18] See Lee, 817 F. App’x 393.

[19] Privacy Statement, supra note 13.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new rules overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc., which allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

BIPA’s passage in 2008 made it one of the earliest consumer protection laws for biometric data collection. At that time, major corporations were piloting finger-scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers, but also to prescribe preventative measures for companies to follow. BIPA’s protections are broad: companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected and why, and state how long they intend to store the data.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as securely as the company stores other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce its data privacy protections. Following its passage, swaths of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was not mishandled.[10] Plaintiffs could recover for every violation of BIPA, and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Since litigants could collect damages for every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and if found liable, companies would owe, at minimum, $1,000 per violation. This lack of predictability often pushed corporate liability insurers into settling rather than risking such large payouts.

The 2023 ruling in Cothron implored the legislature to address concerns of disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller or mid-size companies.[14] The new provisions to BIPA target the Court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.
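The practical stakes of the shift from per-violation to per-person accrual can be made concrete with some back-of-the-envelope arithmetic. The sketch below is purely illustrative: the $1,000 figure is BIPA’s minimum statutory damages for a negligent violation, but the employer size and scan counts are invented for the example, and real exposure would also depend on heightened damages for intentional or reckless conduct.

```python
# Illustrative comparison of BIPA exposure before and after the 2024 reforms.
# $1,000 is the statutory minimum per negligent violation; the workforce size
# and scan counts below are hypothetical.

STATUTORY_DAMAGES = 1_000  # minimum per negligent violation

def exposure_per_violation(employees: int, scans_per_employee: int) -> int:
    """Cothron-era accrual: every scan is a separate claim."""
    return employees * scans_per_employee * STATUTORY_DAMAGES

def exposure_per_person(employees: int) -> int:
    """Post-reform accrual: at most one recovery per person for the
    same data collected by the same method."""
    return employees * STATUTORY_DAMAGES

# A hypothetical employer: 500 workers clocking in by fingerprint,
# roughly 500 scans each over a year.
before = exposure_per_violation(500, 500)  # $250,000,000
after = exposure_per_person(500)           # $500,000
print(f"per-violation: ${before:,} vs per-person: ${after:,}")
```

The three-orders-of-magnitude gap in this toy scenario is the "annihilative liability" concern the legislature cited, and equally the reason plaintiffs' advocates view the reform as a steep reduction in deterrence.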

BIPA Reforms Mark a Trend Towards Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need it most. The stakes for mishandling biometric data are much higher than those for other collected data. While social security numbers and credit card numbers can be canceled and changed – with varying degrees of ease – most constituents would be unwilling to change their faces and fingerprints for the sake of _____.[15] Ongoing and future technological developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane and our own family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovering on a per-person basis instead of a per-violation basis means they could be harmed again after recovering against a company, this time with no redress.

Corporations, however, have been calling for reforms for years, and believe these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers would also encourage companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may only recover once, they can still recover actual damages if a company’s actions resulted in more harm than simply violating BIPA’s provisions. Awards on a per-person basis can still result in hefty settlements or verdicts that hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions are applied will be left for state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can only recover from an entity once, are past litigants barred from participating in future actions regarding similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator, https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham (Aug. 2, 2024, 3:13PM).

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et. al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


AR/VR/XR: Breaking the Wall of Legal Issues That Limit Us in Either the Real World or the Virtual World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing many aspects of our lives. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple unveiled its Vision Pro in 2023, no one doubts that this technology will soon be a vital driver for both tech and business. But how significantly can this type of technology impact human beings? What industries will be affected by it? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical display elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or a multi-projected environment) to create a simulated and immersive environment that can provide an experience either similar to or completely different from the real world,[3] while Mixed Reality/Extended Reality (“XR”) glasses are relatively compact and sleek, and weigh much less than VR headsets.[4] The quality that most distinguishes XR from VR is that individuals can still see the world around them, because XR projects a translucent screen on top of the real world. The differences between these three vision technologies may soon be eliminated by their possible combination into one device.

Typically, vision technology assists people in mentally processing 2-D information into a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a disconnect between their eyes and bodies after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments admit that there have been adverse effects in AR/VR trials due to the mismatch between the visual system and the motion system.[6] Researchers have also discovered that the technology affects the way people behave in social situations, as users feel less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion and is projected to reach $88 billion by 2026.[8] As indicated by industry specialists and examiners, outside of gaming, a significant portion of vision technology income will accumulate from e-commerce and retail (fashion and beauty), manufacturing, the education industry, healthcare, real estate, and e-sports, which will further impact entertainment, cost of living, and innovation.[9] To manage this tremendous opportunity, it is crucial to understand potential legal risks and develop a comprehensive legal strategy to address these upcoming challenges.

To expand one’s business model, it is important to maximize the protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also aligns with contractual concerns, service remedies, and liability for infringement of third-party IP. For example, during patent prosecution, it is difficult to argue that the hardware executing the invention (characters or data information) is a unique machine, or that the designated steps performed by the hardware are special under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against the abstraction of inventions – that “[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas . . . [T]read carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns may include data privacy, harassment, virtual trespass, or even violent attacks due to the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision technology devices are ambiguous. It is also unclear whether courts will accept a defense of error in judgment due to the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and those who spend the most time using these devices. Experts have raised concerns about the adverse effects of using AR/VR devices, questioning whether they negatively impact the mental and physical health of younger generations. Another concern is that these individuals may experience a decline in social communication skills and feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through the devices’ constant scanning of users’ surroundings, although some contend that the Children’s Online Privacy Protection Act (COPPA) prohibits the collection of personally identifiable information if an operator believes a user to be under the age of thirteen.[12]

According to research trends, combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale; bring people together in shared virtual environments; and enhance comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for this upcoming world.

Notes

[1] Eric Ravenscraft, What Is the Metaverse, Exactly?, Wired (Jun. 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, ARTICLE: Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sep. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532#:~:text=Some%20negative%20symptoms%20of%20VR,nausea%20and%20increased%20muscle%20fatigue.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci.Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkt. and Mkt., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining an evaluation standard on when determining whether a claim recites significantly more than a judicial exception depends on whether the additional elements(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 CFR pt. 312.


The Double-Helix Dilemma: Navigating Privacy Pitfalls in Direct-to-Consumer Genetic Testing

Ethan Wold, MJLST Staffer

Introduction

On October 22, direct-to-consumer genetic testing (DTC-GT) company 23andMe sent emails to a number of its customers informing them of a data breach into the company’s “DNA Relatives” feature, which allows customers to compare ancestry information with other users worldwide.[1] While 23andMe and other similar DTC-GT companies offer a number of benefits to consumers, such as testing for health predispositions and carrier statuses of certain genes, this latest data breach is a reminder that before opting into these sorts of services one should be aware of the potential risks they present.

Background

DTC-GT companies such as 23andMe and Ancestry.com have proliferated and blossomed in recent years. It is estimated that over 100 million people have utilized some form of direct-to-consumer genetic testing.[2] Using biospecimens submitted by consumers, these companies sequence and analyze an individual’s genetic information to provide a range of services pertaining to one’s health and ancestry.[3] The October 22 data breach specifically pertained to 23andMe’s “DNA Relatives” feature.[4] The DNA Relatives feature can identify relatives on any branch of one’s family tree by taking advantage of the autosomal chromosomes, the 22 chromosomes that are passed down from your ancestors on both sides of your family, and one’s X chromosome(s).[5] Relatives are identified by comparing the customer’s submitted DNA with the DNA of other 23andMe members who are participating in the DNA Relatives feature.[6] When two people are found to share an identical DNA segment, it is likely they share a recent common ancestor.[7] The DNA Relatives feature even uses the length and number of these identical segments to attempt to predict the relationship between genetic relatives.[8] Given the sensitive nature of sharing genetic information, there are often privacy concerns regarding practices such as the DNA Relatives feature. Yet the legislation and regulation surrounding DTC-GT remain somewhat limited.
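The segment-matching idea described above can be sketched in a few lines. The toy function below is an assumption-laden illustration, not 23andMe’s actual algorithm: it treats two genomes as aligned lists of marker values, finds runs where they agree, and keeps only runs above a minimum length. Real relative-matching pipelines work on phased genotype data and measure segment length in centimorgans rather than marker counts, and the threshold and sequences here are invented.

```python
from typing import List, Tuple

def shared_segments(a: List[str], b: List[str],
                    min_len: int = 3) -> List[Tuple[int, int]]:
    """Return (start, length) of runs where two aligned marker lists agree.

    Toy stand-in for identical-by-descent segment detection: the more and
    longer the shared segments, the closer the predicted relationship.
    """
    segments, start = [], None
    for i, (x, y) in enumerate(zip(a, b)):
        if x == y:
            if start is None:
                start = i  # a matching run begins
        elif start is not None:
            if i - start >= min_len:
                segments.append((start, i - start))
            start = None  # run broken by a mismatch
    # close out a run that reaches the end of the sequence
    if start is not None and len(a) - start >= min_len:
        segments.append((start, len(a) - start))
    return segments

# Two toy marker sequences: one long shared run, one short run that is
# discarded as below the threshold.
me = list("AAGGTTCCAAGG")
cousin = list("AAGGTTacAAtG")
print(shared_segments(me, cousin))  # -> [(0, 6)]
```

The privacy implication follows directly from this mechanism: matching only works because every participant’s markers are compared against every other participant’s, which is exactly the cross-user exposure the October 22 breach exploited.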

Legislation

The Health Insurance Portability and Accountability Act (HIPAA) provides the baseline privacy and data security rules for the healthcare industry.[9] HIPAA’s Privacy Rule regulates the use and disclosure of a person’s “protected health information” by a “covered entity.”[10] Under the Act, the type of genetic information collected by 23andMe and other DTC-GT companies does constitute “protected health information.”[11] However, because HIPAA defines a “covered entity” as a health plan, healthcare clearinghouse, or healthcare provider, DTC-GT companies are not covered entities and therefore fall outside the umbrella of HIPAA’s Privacy Rule.[12]

Thus, the primary source of regulation for DTC-GT companies appears to be the Genetic Information Nondiscrimination Act (GINA). GINA was enacted in 2008 to protect the public from genetic discrimination and to alleviate concerns about such discrimination, thereby encouraging individuals to take advantage of genetic testing, technologies, research, and new therapies.[13] GINA defines genetic information as information from genetic tests of an individual or family members, including information from genetic services or genetic research.[14] DTC-GT companies therefore fall under GINA’s jurisdiction. However, GINA applies only to the employment and health insurance industries and thus neglects many other arenas where privacy concerns may arise.[15] This is especially relevant for 23andMe customers, as signing up for the service serves as consent for the company to use and share your genetic information with its associated third-party providers.[16] As a case in point, in 2018 the pharmaceutical giant GlaxoSmithKline purchased a $300 million stake in 23andMe for the purpose of gaining access to the company’s trove of genetic information for use in its drug development trials.[17]

Executive Regulation

In addition to the legislation above, three federal administrative agencies primarily regulate the DTC-GT industry: the Food and Drug Administration (FDA), the Centers for Medicare and Medicaid Services (CMS), and the Federal Trade Commission (FTC). The FDA has jurisdiction over DTC-GT companies because the genetic tests they use are labeled “medical devices,”[18] and in 2013 it exercised this authority over 23andMe by sending the company a letter that resulted in the suspension of one of its health-related genetic tests.[19] However, the FDA only has jurisdiction over diagnostic tests and therefore does not regulate DTC-GT services related to genealogy, such as 23andMe’s DNA Relatives feature.[20] Moreover, the FDA does not have jurisdiction to regulate other aspects of DTC-GT companies’ activities or data practices.[21] CMS can regulate DTC-GT companies through enforcement of the Clinical Laboratory Improvement Amendments (CLIA), which require that genetic testing laboratories ensure the accuracy, precision, and analytical validity of their tests.[22] But, like the FDA, CMS only has jurisdiction over tests that diagnose a disease or assess health.[23]

Lastly, the FTC has broad authority to regulate unfair or deceptive business practices under the Federal Trade Commission Act (FTCA) and has levied this authority against DTC-GT companies in the past. For example, in 2014 the agency brought an action against two DTC-GT companies that were using genetic tests to match consumers to nutritional supplements and skincare products.[24] The FTC alleged that the companies’ data security practices were unfair and deceptive because they failed to implement reasonable policies and procedures to protect consumers’ personal information and created unnecessary risks to the personal information of nearly 30,000 consumers.[25] This resulted in the companies entering into an agreement with the FTC whereby they agreed to establish and maintain comprehensive data security programs and submit to yearly security audits by independent auditors.[26]

Potential Harms

As the above passages illustrate, the federal government appears to recognize, and has at least attempted to mitigate, the privacy concerns associated with DTC-GT. Additionally, a number of states have passed their own laws that limit DTC-GT in certain respects.[27] Nevertheless, given the potential magnitude and severity of harm associated with DTC-GT, one may question whether it is enough. Data breaches involving health-related data are growing in frequency and now account for 40% of all reported data breaches.[28] These breaches give unauthorized parties access to consumer-submitted data and can violate an individual’s genetic privacy. Though GINA aims to prevent it, genetic discrimination, in the form of increasing health insurance premiums or denial of coverage by insurance companies due to genetic predispositions, remains one of the leading concerns associated with these violations. What’s more, by obtaining genetic information from DTC-GT databases, it is possible for someone to recover a consumer’s surname and combine it with other metadata, such as age and state, to identify the specific consumer.[29] This may in turn lead to identity theft in the form of accounts opened, loans taken out, or purchases made in the consumer’s name, potentially damaging their financial well-being and credit score. Dealing with the aftermath of a genetic data breach can also be expensive: victims may incur legal fees, credit monitoring costs, or other financial burdens in an attempt to mitigate the damage.

Conclusion

As it sits now, genetic information submitted to DTC-GT companies already contains a significant volume of consequential information. As technology continues to develop and research presses forward, the volume and utility of this information will only grow over time. Thus, it is crucially important to be aware of risks associated with DTC-GT services.

This discussion is not intended to discourage individuals from participating in DTC-GT. These companies and the services they offer provide a host of benefits, such as allowing consumers to access genetic testing without the healthcare system acting as a gatekeeper, thus providing more autonomy and often at a lower price.[30] Furthermore, the information provided can empower consumers to mitigate the risks of certain diseases, allow for more informed family planning, or gain a better understanding of their heritage.[31] DTC-GT has revolutionized the way individuals access and understand their genetic information. However, this accessibility and convenience comes with a host of advantages and disadvantages that must be carefully considered.

Notes

[1] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[2] https://www.ama-assn.org/delivering-care/patient-support-advocacy/protect-sensitive-individual-data-risk-dtc-genetic-tests#:~:text=Use%20of%20direct%2Dto%2Dconsumer,November%202021%20AMA%20Special%20Meeting

[3] https://go-gale-com.ezp3.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[4] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[5] https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics

[6] Id.

[7] Id.

[8] Id.

[9] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[10] https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf

[11] Id.

[12] Id; https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[13] https://www.eeoc.gov/statutes/genetic-information-nondiscrimination-act-2008

[14] Id.

[15] https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3035561&blobtype=pdf

[16] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[17] https://news.yahoo.com/news/major-drug-company-now-access-194758309.html

[18] https://uscode.house.gov/view.xhtml?req=(title:21%20section:321%20edition:prelim)

[19] https://core.ac.uk/download/pdf/33135586.pdf

[20] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[21] Id.

[22] https://www.law.cornell.edu/cfr/text/42/493.1253

[23] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[24] https://www.ftc.gov/system/files/documents/cases/140512genelinkcmpt.pdf

[25] Id.

[26] Id.

[27] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[28] Id.

[29] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[30] Id.

[31] Id.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media. Further, in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view these laws as a violation of First Amendment rights and as unnecessary given the incentives private companies already have to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1st, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some allowing criminal charges in certain situations, while others allow a civil action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election”.[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed”.[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. The image association with a deepfake understandably creates greater harm, as recollection of the deepfake imagery can be difficult for viewers to dissociate from the victim. 

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical and worried about First Amendment rights, as well as broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably a category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would burden innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who typically act anonymously. Proposed federal regulation suggests a requirement that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements”.[13] However, opponents view this as toothless legislation: anyone with the technical ability to create an advanced deepfake in the first place can presumably remove a watermark.

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated throughout their platforms. Users of these sites generally will want to be free from harassment and misinformation. This has led to solutions such as X implementing “Community Notes”, which allows videos created using deepfake technology to remain on the platform, but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise. Viewers are able to understand the media is fake, while creators are still able to share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly, and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


The Policy Future for Telehealth After the Pandemic

Jack Atterberry, MJLST Staffer

The Pandemic Accelerated Telehealth Utilization

Before the Covid-19 pandemic began, telehealth usage in the United States healthcare system was insignificant (rounding to 0%) as a percentage of total outpatient care visits.[1] In the two years after the beginning of the pandemic, telehealth usage soared to over 10% of outpatient visits and was widely used across all payer categories, including Medicare and Medicaid.[2] The social distancing realities of the pandemic years, coupled with federal policy measures, enabled this radical transition toward telehealth care visits.

In response to the onset of Covid-19, the US federal government relaxed and modified many telehealth regulations, expanding permissible access to telehealth services. After a public health emergency was declared in early 2020, the Centers for Medicare & Medicaid Services (CMS) and the Department of Health and Human Services (HHS) modified preexisting telehealth regulations. Specifically, CMS temporarily expanded Medicare coverage to include telehealth services without the need for in-person visits, removed practice restrictions such as limits on the types of providers that could offer telehealth, and increased reimbursement rates for telehealth services to bring them closer to in-person visit rates.[3] In addition, HHS eased HIPAA requirements around the use of popular communication platforms such as Zoom, Skype, and FaceTime, provided they are used in good faith.[4] Collectively, these changes drove a significant rise in telehealth services and expanded access to care for many people who otherwise would not receive healthcare. Unfortunately, many of these telehealth policy provisions are set to expire in 2024, leaving open the question of whether the benefits of telehealth expansion will remain after the public emergency measures end.[5]

Issues with Telehealth Care Delivery Between States

A major legal impediment to telehealth expansion in the US is the complex interplay of state and federal laws and regulations impacting telehealth care delivery. At the state level, two key differences have historically held back the expansion of telehealth. First, healthcare providers are most often licensed and credentialed at the state level, which creates a barrier for providers who want to offer telehealth services across state lines. While many states implemented temporary waivers or joined interstate medical licensure compacts to address this issue during the pandemic, many others did not, and large inconsistencies remain. Second, states differ significantly in their reimbursement policies, with coverage by different payer types varying from region to region; this has made providers hesitant to deliver care in certain states for fear of not being reimbursed adequately. Although the federal health emergency eased interstate telehealth restrictions, these challenges will likely persist after the temporary telehealth measures are lifted at the end of 2024.

The pandemic-era temporary easing of telehealth restrictions showed that interstate telehealth improves health outcomes, increases patient satisfaction, and narrows gaps in care delivery. In particular, rural communities and other underserved areas with relatively few healthcare providers benefited greatly from the ability to receive care from an out-of-state provider. For example, patients in states like Montana, North Dakota, and South Dakota benefit immensely from being able to consult an out-of-state mental health provider because of the severe shortages of psychiatrists, psychologists, and other mental health practitioners in those states.[6] In addition, a 2021 study by the Bipartisan Policy Center highlighted that patients in states that joined interstate licensure compacts experienced a noticeable improvement in care experience, and healthcare workforces saw a decreased burden on their chronically stressed providers.[7] These positive outcomes resulting from eased interstate healthcare regulations should inform telehealth policy moving forward.

Policy Bottlenecks to Telehealth Care Access Expansion

The place of telehealth in American healthcare is surprisingly uncertain as the US emerges from the pandemic years. As the public health emergency measures that removed various legal and regulatory barriers to telehealth expire next year, many Americans could be left without access to healthcare via telehealth services. To ensure that telehealth remains a part of American healthcare moving forward, federal and state policymakers will need to act to bring long-term certainty to the telehealth regulatory framework. In particular, advocacy groups such as the American Telemedicine Association recommend that policymakers focus on key changes such as removing licensing barriers to interstate telehealth care, modernizing reimbursement payment structures to align with value-based payment principles, and permanently adopting pandemic-era telehealth access for Medicare, Federally Qualified Health Centers, and Rural Health Clinics.[8] Another valuable federal regulatory change would be to continue allowing the prescription of controlled substances without an in-person visit; this would entail modifying the Ryan Haight Act, which requires an in-person medical exam before controlled substances may be prescribed.[9] Like any healthcare reform in the US, cementing these lasting telehealth policy changes as law will be a major uphill battle. Nonetheless, expanding access to telehealth could be a bipartisan policy opportunity for lawmakers, as it would expand access to care and help drive the transition toward value-based care, leading to better health outcomes for patients.

Notes

[1] https://www.healthsystemtracker.org/brief/outpatient-telehealth-use-soared-early-in-the-covid-19-pandemic-but-has-since-receded/

[2] https://www.cms.gov/newsroom/press-releases/new-hhs-study-shows-63-fold-increase-medicare-telehealth-utilization-during-pandemic#:~:text=Taken%20as%20a%20whole%2C%20the,Island%2C%20New%20Hampshire%20and%20Connecticut.

[3] https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency

[4] Id.

[5] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[6] https://thinkbiggerdogood.org/enhancing-the-capacity-of-the-mental-health-and-addiction-workforce-a-framework/

[7] https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2021/11/BPC-Health-Licensure-Brief_WEB.pdf

[8] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[9] https://www.aafp.org/pubs/fpm/issues/2021/0500/p9.html


Perhaps Big Tech Regulation Belongs on Congress’s For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a congressional panel for five hours in an effort to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States (CFIUS) was threatening a ban unless TikTok’s parent company, ByteDance, sells its share of the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are reportedly concerned with the possibility of the Chinese government manipulating users’ experience on the platform or threatening the security of the data of its more than 150 million users in the United States.[2] Despite Chew’s testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, thereby preventing Chinese interference on the platform and routing content recommendations for U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

As for what’s to come for TikTok in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023, with information on “when TikTok will be banned,” claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] To cut off the platform’s access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation “requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern.”[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok was successful in quashing the effort and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, “To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can’t be addressed in narrower ways. The government hasn’t demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly.”[10]

Despite what Congress may want the public to think, it certainly has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing is stopping U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the main purpose of the Act is to limit anticompetitive conduct by large technology companies, it includes several provisions on protecting the personal data of users of defined “gatekeeper” firms. Under the Act, a gatekeeper is a company that provides services such as online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communications services; operating systems; web browsers; and online advertising services that are gateways for business to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertisement services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data without their explicit consent.[13]

The penalties associated with violations of the Act give it some serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper’s total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement laid out in specific articles at some point in the eight preceding years.[14] For any company, not limited to gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles in the Act. Finally, in order to compel any company to comply with specific decisions of the Commission and other articles in the regulation, the Commission may impose periodic penalty payments of up to 5% of the average daily worldwide turnover in the preceding year, per day.[15]
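To get a rough sense of the scale of these caps, the percentages above can be sketched as a back-of-the-envelope calculation. This is an illustrative sketch, not legal advice: the function names, the 365-day averaging convention, and the example turnover figure are assumptions of this post; only the percentage caps come from the Act as summarized above.

```python
# Illustrative sketch of the DMA fine caps described above.
# Function names, 365-day averaging, and the example turnover are assumptions.

def max_noncompliance_fine(annual_turnover: float, repeat_infringement: bool = False) -> float:
    """Noncompliance cap: 10% of the preceding year's worldwide turnover,
    or 20% for a repeat of the same or a similar infringement."""
    rate = 0.20 if repeat_infringement else 0.10
    return rate * annual_turnover

def max_information_fine(annual_turnover: float) -> float:
    """Cap for failing to provide required information: 1% of turnover."""
    return 0.01 * annual_turnover

def max_periodic_penalty(annual_turnover: float, days_noncompliant: int) -> float:
    """Periodic penalty payments: up to 5% of average daily worldwide
    turnover, accruing for each day of noncompliance."""
    average_daily_turnover = annual_turnover / 365
    return 0.05 * average_daily_turnover * days_noncompliant

# Example: a hypothetical gatekeeper with 100 billion EUR in annual turnover
turnover = 100e9
print(f"First-instance noncompliance cap: {max_noncompliance_fine(turnover):,.0f}")
print(f"Repeat-infringement cap:          {max_noncompliance_fine(turnover, True):,.0f}")
print(f"Information-failure cap:          {max_information_fine(turnover):,.0f}")
print(f"Periodic penalty, 30 days:        {max_periodic_penalty(turnover, 30):,.0f}")
```

Even for a firm this large, a 30-day periodic penalty (a few hundred million) is dwarfed by the headline 10–20% caps, which for this hypothetical gatekeeper run to tens of billions.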

If the U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about the spread of misinformation on the platform, and truly believe, as Representative Gus Bilirakis claims to, that it is “literally leading to death” and that “[w]e must save our children from big tech companies” who allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply: regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies “to collect or provide data from abroad to the Chinese government in a way that violated local laws…”[17] During his testimony, Chew also argued that TikTok is no different from other social media giants, and has even sought to put stronger safeguards in place than its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on doing so.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id., Art. 3, 2022 O.J. L 265/28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[17] David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.