
Your Digital Doppelgänger

Lillie Grant, MJLST Staffer

What counts as harm in an age of inference?

Modern systems do not just collect information; they generate it.[1] From patterns in behavior, timing, and interaction, they derive conclusions about people that those people never actually shared.[2] Often, those conclusions are more revealing than anything someone would voluntarily disclose.[3] And yet, the law does not clearly or consistently treat that process as harmful.[4]

Privacy law has mostly been built around disclosure.[5] The usual question is whether information was knowingly shared, improperly collected, or revealed to the wrong people.[6] The basic idea is that the data starts with the individual and then moves outward.[7] But inference does not work like that.[8] It is not about what is given; it is about what is created.[9]

The difference is more significant than it first appears, because when a system converts small pieces of behavior into conclusions about a person, it does more than record activity; it interprets it, producing not just a list of actions but a statement about their meaning.[10]

The law has not caught up. Courts are much more comfortable recognizing harm when inferred information shows up in the world in a visible way.[11] If something is revealed, shared, or used in a way that clearly affects someone, it looks like a familiar kind of injury.[12] It has consequences that feel real and immediate.[13]

But most inferences never get that far.[14] They stay inside the system that produced them.[15] They shape what someone sees, what is recommended, what is prioritized, and sometimes what opportunities are available, all without a discrete, traceable event.[16] Even when those inferences are accurate or deeply personal, they often do not trigger legal protection.[17] There is no clear moment where something was “disclosed,” and without that, courts struggle to recognize harm at all.[18]

That leaves a gap: privacy law still depends on the idea that information is something a person gives.[19] Something you can point to and say, “This was shared.”[20] But inferred data does not fit into that model.[21] It is not handed over; it is built, and because of that, it slips past categories that were never designed to capture this kind of process.[22] The problem is not just theoretical; it affects whether someone can even bring a claim.[23] To get into court, a plaintiff has to show a concrete injury.[24] Not just a feeling that something is off, but something the law is willing to recognize as harm.[25] When the issue is inference, the information may shape real outcomes but does so quietly, without a clear moment that satisfies the law’s demand for discrete injury.[26]

At the same time, these inferences are not meaningless. They are the product. Companies are not just collecting data for the sake of it; they are turning it into insights that can be used to target ads, keep people engaged, and make money.[27] The value is not just in what people do, but in what can be figured out from what they do.[28]

That raises a harder question. If a company can take your behavior, turn it into something new, and profit from it, what exactly belongs to you? The raw data came from you, but the conclusion did not. The law tends to treat that distinction as important.[29] It is not obvious that it should.[30]

Recent lawsuits by authors challenge the use of their works to train AI systems as a form of uncompensated extraction.[31] But because those claims focus on the inputs used to build these systems, they leave open a distinct question: whether individuals have any claim to the inferences generated about them. Framed that way, the problem is not just data use but the unrecognized extraction and monetization of information produced about individuals.

There are limited signals in existing law suggesting that creating new data about a person can itself be treated as harm, most clearly in biometric cases where courts have recognized that generating something like a faceprint is significant even without further use.[32]

Part of what makes inference so difficult is that it does not feel like a clear violation. There is no obvious intrusion or single moment where something is taken; instead, it happens gradually, as bits of behavior that appear harmless on their own accumulate and are turned into meaning that is surprisingly complete in the aggregate.[33] That creates a deeper tension. The better systems get at understanding people, the less clear it becomes what it even means, legally, to “know” something about someone.[34] At what point does a pattern become information? And at what point does producing that information start to matter in a legal sense?

The better framing may be to abandon disclosure as the organizing principle entirely. The issue is not disclosure; it is extraction. Systems are not just observing behavior; they are pulling meaning out of it and turning that meaning into something usable.[35] That something can be scaled, sold, and built into entire business models.[36] But the legal rules we have are still mostly about what people choose to share, not what can be created from what they do.[37]

If that is right, the problem is only intensifying. Systems increasingly rely on information that no one explicitly provided but that still feels personal, making it harder to say that nothing of consequence is being taken. The law offers no clear answer, leaving inferred data central in practice but misaligned with doctrines of harm. Individuals are left in a position where systems can form detailed conclusions about them while they have little ability to see or challenge those conclusions, reflecting a definition of harm that no longer matches how information is actually produced and used.

 

Notes

[1] See generally Joan M. Wrabetz, What Is Inferred Data and Why Is It Important?, ABA (Aug. 22, 2022), https://www.americanbar.org/groups/business_law/resources/business-law-today/2022-september/what-is-inferred-data-and-why-is-it-important/.

[2] Id.

[3] See Hal Conick, AI and the Law, Univ. Chi. L. Sch. (Dec. 9, 2024), https://www.law.uchicago.edu/news/ai-and-law.

[4] Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, 2019 Colum. Bus. L. Rev. 494.

[5] See Overview of the Privacy Act of 1974: Conditions of Disclosure to Third Parties, U.S. Dep’t of Just., https://www.justice.gov/opcl/overview-privacy-act-1974-2020-edition/disclosures-third-parties (last visited Apr. 9, 2026).

[6] Id.

[7] Id.

[8] See Wrabetz, supra note 1.

[9] Id.

[10] Id.

[11] See Harith Khawaja, Injury, in Fact: The Internet, the Americans with Disabilities Act, and Standing in Digital Spaces, 36 Stan. L. & Pol’y Rev. 165, 172 (2025).

[12] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. Rev. 793 (2022).

[13] Id.

[14] Jeffrey Erickson, What Is AI Inference?, Oracle (Apr. 2, 2024), https://www.oracle.com/artificial-intelligence/ai-inference/.

[15] Id.

[16] Id.

[17] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Citron & Solove, supra note 12.

[18] Id.

[19] Citron & Solove, supra note 12.

[20] See Pamela J. Wisniewski & Xinru Page, Privacy Theories and Frameworks, in Modern Socio-Technical Perspectives on Privacy 15 (2022).

[21] Wrabetz, supra note 1.

[22] See Privacy by Proxy: Regulating Inferred Identities in AI Systems, IAPP (Nov. 12, 2025), https://iapp.org/news/a/privacy-by-proxy-regulating-inferred-identities-in-ai-systems.

[23] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).

[24] Id.

[25] Id.

[26] Wrabetz, supra note 1.

[27] Id.

[28] Id.

[29] Id.

[30] Id.

[31] See Pramode Chiruvolu et al., Fair Use and AI Training: Two Recent Decisions Highlight the Complexity of This Issue, Skadden, Arps, Slate, Meagher & Flom LLP (July 8, 2025), https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training.

[32] See Ross D. Emmerman & Mark Goldberg, Illinois Supreme Court Rules No Actual Harm Needed for Biometric Information Protection Act Claims; Floodgates Open, Loeb & Loeb LLP (Jan. 2019), https://www.loeb.com/en/insights/publications/2019/01/illinois-supreme-court-rules-no-actual-harm-needed.

[33] Wrabetz, supra note 1.

[34] Id.

[35] Id.

[36] Id.

[37] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).


The “Search Party” Backfire: How a Super Bowl Ad Ignited a Bipartisan Privacy Reckoning in Minnesota

Ella Stromberg, MJLST Staffer

Introduction: Heartwarming Ad with a Chilling Reality

During the 2026 Super Bowl, a commercial meant to pull at the heartstrings of millions of viewers instead ignited a firestorm of debate over the future of American privacy. The advertisement promoted Ring’s new “Search Party” feature, a tool designed to help find lost dogs by utilizing a neighborhood-wide network of AI-powered doorbell cameras.[1] While the mission of reuniting lost pets with their families appears noble, the ad’s high-profile debut served as a rare moment of corporate transparency regarding the vast surveillance infrastructure growing around us. The resulting backlash has exposed a significant gap in consumer surveillance laws, one that Minnesota legislators are now aggressively moving to fill.

How the Ring “Search Party” Feature Works

To the casual viewer, the Search Party feature seems like a simple community service. However, the underlying mechanics are far more complex. The feature utilizes AI to scan footage from opted-in neighbor cameras to identify lost pets based on characteristics such as breed, size, and fur pattern.[2] It is enabled by default, meaning users are automatically enrolled unless they navigate a multi-step process to opt out.[3] The captured footage can be stored for up to 180 days, creating a massive, retrospective, searchable database of neighborhood activity.[4] Ring founder Jamie Siminoff defended this expansion, noting that advances in AI allow these features to be implemented at a scale and speed previously impossible.[5]

The Viral Backlash

The marketing for the Search Party feature encouraged users to “be a hero in your neighborhood,” but the public reception was decidedly less heroic.[6] In the week following the Super Bowl ad, nearly 50% of social media conversations regarding Ring were negative, compared to only 14% that were positive.[7] Users took to platforms like Reddit to claim they were requesting refunds, while some even posted videos of themselves destroying their Ring cameras in protest.[8] Legal experts were equally struck by the campaign. Dr. Jane Kirtley, Professor of Media, Ethics, and Law here at the University of Minnesota, noted it was interesting that Ring would be “so candid about the potential use of this particular technology.”[9] Critics argued the ad was “creepy” and “dystopian,” suggesting that if AI can be used to track a specific dog across a neighborhood, there is little barrier to using the same infrastructure to track specific people.[10]

Privacy Advocates’ Responses

The concerns raised by privacy advocates like the Electronic Frontier Foundation (EFF) center on the fundamental problem of consent.[11] While a camera owner might opt-in to the network, the Ring cameras also record every passerby, from postal workers to neighbors, without their permission.[12] EFF attorney Mario Trujillo warns that this creates a “large surveillance apparatus” that can easily be tapped into by law enforcement.[13]

There is also fear of a slippery slope when Search Party is combined with Ring’s “Familiar Faces” facial recognition technology, which identifies specific individuals who approach a doorway.[14] Congressman Raja Krishnamoorthi (D-IL) expressed concerns in a formal letter to Ring, warning that the opt-out design is confusing and risks creating 24/7 surveillance networks near sensitive locations like hospitals, schools, and courthouses.[15] Further, the history of partnerships between Ring and surveillance companies like Flock Safety has raised alarms regarding data-sharing with federal agencies like ICE.[16] Although Ring recently canceled its partnership with Flock, citing resource constraints, advocates remain wary of how easily private residential data can be integrated into broader police intelligence networks.[17]

Minnesota’s Rapid Legislative Response

The Search Party controversy made one thing clear: Minnesota currently lacks laws preventing private companies from sharing this type of residential video data with third parties or government entities.[18] In response, a bipartisan group of Minnesota lawmakers introduced a five-bill package aimed at regulating AI and protecting digital rights.[19] Led by the unlikely duo of Senator Erin Maye Quade and Senator Eric Lucero, the bills target several key areas: SF 1857 would prohibit children under 18 from accessing AI chatbots,[20] SF 1856 would ban health insurers from using AI to determine medical necessity,[21] SF 3098 would block “dynamic pricing” set by AI algorithms,[22] SF 1886 would mandate disclosure when a consumer is interacting with AI,[23] and SF 1120 would create a landmark ban on reverse warrants.[24]

SF 1120 has particular significance stemming from the Ring ad. It would prohibit the government from using reverse location or reverse keyword searches, which are digital dragnets that compel tech companies to hand over data on every device in a specific area or every person who searched for a specific term.[25] The bill includes a civil cause of action, allowing individuals to sue for $1,000 per violation if their data is obtained unlawfully.[26] Senator Lucero argued these controls are necessary to “empower individuals against these multi-billion dollar industries.”[27]

The path to enactment faces two major hurdles. First, law enforcement groups, including the Minnesota Bureau of Criminal Apprehension, testified that banning reverse warrants would have “extensive negative consequences” for solving complex crimes.[28] Second, a federal complication looms: an Executive Order from President Trump establishes an AI litigation task force to challenge state laws, threatening to pull funding from states with “onerous” AI laws.[29]

Looking Forward

The Ring Super Bowl ad was intended to be a marketing triumph, but instead, it became a rare moment where the public saw a glimpse of the surveillance nightmare being built around them. The swift, bipartisan response in the Minnesota legislature signals that surveillance privacy is no longer a partisan issue but now a fundamental question of constitutional rights that the public wants answers to. As these bills move through the legislature, they highlight the unresolved tension between legitimate law enforcement needs and Fourth Amendment protections. If passed, Minnesota’s approach could become a model for state-level digital rights, provided it can survive the looming threat of federal preemption. For now, the Search Party backfire serves as a potent reminder that in the age of AI, “common-sense guardrails” are no longer optional; they are necessary.[30]

 

Notes

[1] See Ring, Search Party from Ring | Be a Hero in Your Neighborhood, YouTube (Feb. 2, 2026), https://www.youtube.com/watch?v=OheUzrXsKrY.

[2] Abby Haymond, Ring’s New AI Lost Dog Feature Raises Privacy Concerns, WDAM (Feb. 11, 2026 at 22:10 CST), https://www.wdam.com/2026/02/12/rings-new-ai-lost-dog-feature-raises-privacy-concerns/.

[3] Todd Bishop, What Ring’s ‘Search Party’ Actually Does, And Why Its Super Bowl Ad Gave People the Creeps, GeekWire (Feb. 10, 2026 at 11:14), https://www.geekwire.com/2026/what-rings-search-party-actually-does-and-why-its-super-bowl-ad-gave-people-the-creeps/.

[4] Madison Lisowski & Danae Holmes, Concerns Over AI Video Surveillance Grow Following Big Game Ad, W. Mass. News (Mar. 2, 2026 at 15:10 CST), https://www.westernmassnews.com/2026/03/02/concerns-over-ring-cameras-grow-following-big-game-ad/.

[5] Bishop, supra note 3.

[6] See, e.g., Lisowski & Holmes, supra note 4.

[7] Sam Sabin, Doorbell Cams, Surveillance Tech Face Growing Backlash, Axios (Feb. 17, 2026), https://www.axios.com/2026/02/17/doorbell-cams-and-surveillance-tech-face-growing-public-backlash.

[8] Id.

[9] Corin Hoggard, Ring’s AI Feature Raises Privacy Alarms, Fox 9 (Feb. 10, 2026 at 9:37 CST), https://www.fox9.com/news/rings-ai-feature-raises-privacy-alarms.

[10] Bishop, supra note 3; Haymond, supra note 2.

[11] See, e.g., Beryl Lipton, No One, Including Our Furry Friends, Will Be Safer in Ring’s Surveillance Nightmare, Elec. Frontier Found. (Feb. 10, 2026), https://www.eff.org/deeplinks/2026/02/no-one-including-our-furry-friends-will-be-safer-rings-surveillance-nightmare-0.

[12] Haymond, supra note 2.

[13] Id.

[14] Id.; see also Lipton, supra note 11; Bishop, supra note 3.

[15] Rep. Raja Krishnamoorthi, Krishnamoorthi Raises Alarm Over Ring’s New AI “Search Party” Feature, Citing Privacy and Civil Liberties Concerns (Feb. 27, 2026), https://krishnamoorthi.house.gov/media/press-releases/krishnamoorthi-raises-alarm-over-rings-new-ai-search-party-feature-citing.

[16] Bishop, supra note 3; Jay Stanley, Flock’s Aggressive Expansions Go Far Beyond Simple Driver Surveillance, ACLU (Aug. 18, 2025), https://www.aclu.org/news/privacy-technology/flock-roundup.

[17] Sabin, supra note 7; Lipton, supra note 11.

[18] Hoggard, supra note 9.

[19] Howard Thompson, MN Lawmakers Introduce AI Regulations Aimed at Protecting Children, Curtailing Surveillance, Fox 9 (Mar. 9, 2026 at 13:46 CDT), https://www.fox9.com/news/mn-lawmakers-introduce-ai-regulations-aimed-protecting-children-curtailing-surveillance.

[20] S.F. 1857, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/1857/versions/0/.

[21] S.F. 1856, 94th Leg., Reg. Sess. (Minn. 2025),  https://www.revisor.mn.gov/bills/94/2025/0/SF/1856/versions/latest/.

[22] S.F. 3098, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/3098/versions/latest/.

[23] S.F. 1886, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/1886/versions/latest/.

[24] S.F. 1120, 94th Leg., Reg. Sess. (Minn. 2025), https://www.revisor.mn.gov/bills/94/2025/0/SF/1120/versions/latest/.

[25] Id.

[26] Id.

[27] Michelle Griffith, Minnesota Lawmakers Push Bipartisan Measures to Regulate AI, SC Times (Mar. 11, 2026 at 2:45 CT), https://www.sctimes.com/story/news/politics/2026/03/11/minnesota-senate-considers-bipartisan-push-to-regulate-ai-artificial-intelligence-dfl-gop/89082394007/.

[28] Id.; Minn. Bureau of Criminal Apprehension, BCA Opposition to S.F. 1120 (Minn. Senate Comm. on Judiciary and Public Safety, Mar. 5, 2026), https://assets.senate.mn/committees/2025-2026/3128_Committee_on_Judiciary_and_Public_Safety/BCA-Opposition-to-SF1120-3-5-26-Signed-3-5-26.pdf (letter from BCA Superintendent Evans to Chair Latz opposing SF 1120).

[29] Exec. Order No. 14365, Ensuring a National Policy Framework for Artificial Intelligence, 90 Fed. Reg. 58499 (Dec. 2025), https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence; Thompson, supra note 19.

[30] Chris Farrell & Ellen Finn, Slate of Bills Looking to Regulate AI Introduced at Minnesota Capitol, Minn. Pub. Radio (Mar. 9, 2026 at 13:35), https://www.mprnews.org/episode/2026/03/09/slate-of-bills-looking-to-regulate-ai-introduced-at-state-capitol.


Examining the Constitutionality of Ohio’s New Obscene Material Age Verification Law

Fide Valverde-Rivera, MJLST Staffer

In September 2025, an Ohio law requiring websites that purvey obscene material to verify users’ ages went into effect.[1] Although this law sought to regulate pornographic material and platforms that distribute it, it erroneously exempts some of the largest pornographic websites from compliance while mandating compliance by regular social media sites. Because of this unintended consequence, this law is very likely unconstitutional.

 

General Overview of the New Law and Implementation Problems

Ohio’s new age verification law requires platforms that provide “any material or performance that is obscene or harmful to juveniles” to verify users’ ages.[2] The law exempts “providers of ‘an interactive computer service,’ which is defined . . . as having the same meaning as it does under federal law” from having to comply with the age verification requirements.[3] Federal law defines an “interactive computer service” to include “any platform where third parties can create accounts and can generate content, from social media sites to dating apps, message boards, classified ads, search engines, comment sections, and much more.”[4] Platforms like Pornhub and OnlyFans, two major pornography websites, arguably fall within this definition and qualify for the exemption.[5] Accordingly, Pornhub and OnlyFans are not conducting age verification for Ohio users.[6] However, general-purpose social media platforms like Bluesky—a type of platform lawmakers said would be outside of the law’s scope—have been mandated to begin age verification.[7]

 

Constitutional Considerations

The first step in evaluating the constitutionality of this law is determining the appropriate level of scrutiny with which it should be examined. In Free Speech Coalition, Inc. v. Paxton, the Supreme Court held that “because accessing material obscene to minors without [age verification] is not [a] constitutionally protected [activity], any burden [an age verification law] imposes on protected activity is only incidental, and the statute triggers only intermediate scrutiny.”[8] It held that it was not subject to strict scrutiny because “speech that is obscene to minors is unprotected to the extent that [a] State imposes an age-verification requirement” and “where the speech in question is unprotected, States may impose ‘restrictions’ based on ‘content’ without triggering strict scrutiny.”[9]

Under intermediate scrutiny, the Supreme Court in Paxton found the Texas age-verification law constitutional for two reasons.[10] First, the law served an important government interest: shielding sexual content from children.[11] Second, the law was adequately tailored in that “the government’s interest ‘would [have been] achieved less effectively absent the regulation’ and the regulation ‘[did] not burden substantially more speech than is necessary to further that interest.’”[12] Age verification laws are a constitutionally settled way to protect children from obscene material, and Texas’s preferred approach was valid.[13] The Supreme Court in Paxton also held the statute’s targeting of certain sites did not render it unconstitutional because “it [was] reasonable for Texas to conclude that websites with a higher portion of sexual content are more inappropriate for children to visit than those with a lower proportion.”[14]

 

Bottom Line

Here, Ohio’s age-verification law is very likely unconstitutional because it fails to shield children from sexual content. Because platforms with higher proportions of sexual content, the intended targets of this law, are outside of the scope of the law, the law is not adequately tailored to survive an application of intermediate scrutiny. Additionally, the law is overinclusive because social media sites on which obscene content generally represents a minority of the content are bound by the law. Based on these shortcomings, lawmakers and judicial officers alike should anticipate an interested party or parties advancing a facial challenge attacking the constitutionality of this law under the First Amendment. Further, platforms like Bluesky may attempt to advance an as-applied challenge by noting that the law—although written to target pornography websites without “ensnar[ing] social media platforms”—fails to achieve its articulated objectives.[15]

 

Notes

[1] Ohio Rev. Code § 1349.10(B) (2025).

[2] Id.

[3] Elizabeth Nolan Brown, Whoops—Ohio Accidentally Excludes Most Major Porn Platforms from Anti-Porn Law, Reason (Oct. 6, 2025, 11:45 AM), https://reason.com/2025/10/06/whoops-ohio-accidentally-excludes-most-major-porn-platforms-from-anti-porn-law/.

[4] Id.

[5] Id.

[6] See id. (“I’m assuming that the exclusion of Pornhub was not intentional, given the way this law’s supporters talked about as a shield against Ohio minors being able to see any sexually oriented material online. One of the law’s biggest proponents, state Rep. Josh Williams (R-Sylvania), has talked about how it would not ensnare social media platforms even though they may contain porn, so perhaps the exclusion of interactive computer services was intended for that purpose. But most major web-porn access points, including OnlyFans and webcamming platforms, also fall under the definition of interactive computer service.”).

[7] See Morgan Trau, Do You Live in Ohio? Do You Watch Porn Online? Your State Legislature Wants to See Some ID, Ohio Cap. J. (Oct. 1, 2025, 4:45 AM), https://ohiocapitaljournal.com/2025/10/01/do-you-live-in-ohio-do-you-watch-porn-online-your-state-legislature-wants-to-see-some-id/ (“[Rep. Josh] Williams said that this [law] won’t impact social media sites like X (formerly known as Twitter) and Reddit, even though both of those platforms contain easily-accessible pornography”); @psychic_twin, Reddit (Sept. 29, 2025, 2:00 PM), https://www.reddit.com/r/Ohio/comments/1ntqr4w/ohio_age_verification_notice_on_bluesky/ (sharing how Bluesky required Ohio users to complete age assurances because “[t]he laws in [the user’s] location require[d] [them] to verify [they’re] an adult before accessing certain features on Bluesky, like adult content and direct messaging”).

[8] Free Speech Coalition, Inc. v. Paxton, 606 U.S. 461, 483 (2025).

[9] Id. at 492.

[10] Id. at 495–96.

[11] Id. at 496.

[12] Id.

[13] Id. at 496–97 (“The specific verification methods that H.B. 1181 permits are also plainly legitimate. At present, H.B. 1181 allows for verification using government-issued identification or transactional data. Verification can take place on the covered website itself or through a third-party service. Other age-restricted services, such as online gambling, alcohol and tobacco sales, and car rentals, rely on the same methods. And, much of the online pornography industry has used analogous methods for decades . . . . H.B. 1181 simply requires established verification methods already in use by pornographic sites and other industries. That choice is well within the State’s discretion under intermediate scrutiny.” (internal citations omitted)).

[14] Id.

[15] Nolan Brown, supra note 3.


Caught in the Digital Dragnet: The Controversy Over Geofence Warrants and Privacy Rights

Yaoyu Tang, MJLST Staffer

 

Picture this: A sunny Saturday afternoon at a bustling shopping mall. Children’s laughter echoes as they pull their parents toward an ice cream stand. Couples meander hand-in-hand past glittering storefronts, while teenagers crowd the food court, joking and snapping selfies. It’s a portrait of ordinary life—until chaos quietly unfolds. A thief strikes a high-end jewelry store and vanishes into the crowd, leaving no trail behind. Frustrated and out of options, law enforcement turns to a geofence warrant, demanding Google provide location data for every smartphone within a quarter-mile radius during the heist. In the days that follow, dozens of innocent shoppers, workers, and passersby find themselves under scrutiny, their routines disrupted simply for being in the wrong place at the wrong time.

This story is not hypothetical—it mirrors real-life cases where geofence warrants have swept innocent individuals into criminal investigations, raising significant concerns about privacy rights and constitutional protections.

Geofence warrants are a modern investigative tool used by law enforcement to gather location data from technology companies.[1] These warrants define a specific geographic area and time frame, compelling companies like Google to provide anonymized location data from all devices within that zone.[2] Investigators then sift through this data to identify potential suspects or witnesses, narrowing the scope to relevant individuals whose movements align with the crime scene and timeline.[3]

The utility of geofence warrants is undeniable. They have been instrumental in solving high-profile cases, such as identifying suspects in robberies, assaults, and even the January 6 Capitol riots.[4] By providing a way to access location data tied to a specific area, geofence warrants enable law enforcement to find leads in cases where traditional investigative techniques might fail.[5] These tools are particularly valuable in situations where there are no direct witnesses or physical evidence, allowing law enforcement to piece together events and identify individuals who were present during criminal activity.[6]

However, the benefits of geofence warrants come with significant risks. Critics argue that these warrants are overly broad and invasive, sweeping up data on innocent bystanders who happen to be in the area.[7] Civil liberties organizations such as the ACLU and the Electronic Frontier Foundation (EFF) have strongly criticized the practice,[8] warning that without strict limitations, geofence warrants could become tools of mass surveillance, disproportionately targeting marginalized communities and chilling free movement and association.[9] Moreover, this indiscriminate collection of location data raises serious Fourth Amendment concerns, as it can be seen as a form of digital general warrant, a modern equivalent of the invasive searches that the Framers sought to prevent.[10] The tension between their investigative utility and their potential for privacy violations has made geofence warrants one of the most controversial tools in modern law enforcement.

The legality of geofence warrants is far from settled, with courts offering conflicting rulings. In United States v. Smith, the Fifth Circuit declared geofence warrants unconstitutional, stating that they amount to general searches.[11] The court emphasized the massive scope of the data collected and likened it to rummaging through private information without sufficient cause.[12] The decision relied heavily on Carpenter v. United States, where the Supreme Court held that accessing historical cell-site location information without a warrant violates the Fourth Amendment.[13] In Carpenter, the Court recognized that cell-site location information (CSLI) provides an intimate record of a person’s movements, revealing daily routines, frequent locations, and close personal associations.[14] Accessing that information, the Court held, constitutes a “search” within the meaning of the Fourth Amendment, requiring a warrant supported by probable cause.[15]

Conversely, the Fourth Circuit in United States v. Chatrie upheld the use of geofence warrants, reasoning that users implicitly consent to data collection by agreeing to terms of service with tech companies.[16] The court leaned on the third-party doctrine, which holds that individuals have reduced privacy expectations in information shared with third parties.[17]

These conflicting rulings highlight the broader struggle to apply traditional Fourth Amendment principles to digital technologies. The Fifth Circuit’s ruling reflects discomfort with the vast reach of geofence warrants, pointing to their lack of Fourth Amendment particularity,[18] while the Fourth Circuit’s reliance on the third-party doctrine broadens law enforcement access, framing user consent as a waiver of privacy.[19] This split leaves courts struggling to reconcile privacy with evolving surveillance technology, underscoring the urgent need for clearer standards.

Tech companies like Google play a pivotal role in the geofence warrant debate. Historically, Google stored user location data in a vast internal database known as Sensorvault.[20] This database served as a central repository for location data collected from various Google services, including Google Maps, Search, and Android devices.[21] Law enforcement agencies frequently sought access to this data in response to geofence warrants, making Sensorvault a crucial point of contention in the legal and privacy debates surrounding this technology.[22] However, in 2023, Google announced significant changes to its data policies: location data would be stored on user devices instead of the cloud, backed-up data would be encrypted to prevent unauthorized access, including by Google itself, and default auto-delete settings for location history would reduce data retention from 18 months to three months.[23] These policy changes significantly limit the availability of location data for law enforcement agencies seeking to execute geofence warrants.[24] By storing data locally on user devices and implementing robust encryption and auto-deletion features, Google has effectively reduced the amount of location data accessible to law enforcement.[25] This highlights the significant influence that corporate data policies can exert on law enforcement practices.[26] Other companies, like Apple, have adopted even stricter privacy measures, refusing to comply with all geofence warrant requests.[27]

The debate surrounding the legality and scope of geofence warrants remains contentious. Courts grapple with varying interpretations, legislators struggle to enact comprehensive legislation, and public opinion remains divided. This uncertainty necessitates authoritative guidance. Whether through judicial precedent, legislative reform, or technological advancements that mitigate privacy concerns, achieving a consensus on the permissible use of geofence warrants is crucial. Only with such a consensus can society navigate the delicate balance between public safety and individual privacy rights in the digital era.

 

Notes:

[1] Ronald J. Rychlak, Geofence Warrants: The New Boundaries, 93 Miss. L. Rev. 957-59 (2024).

[2] Id.

[3] Id.

[4] Mark Harris, A Peek Inside the FBI’s Unprecedented January 6 Geofence Dragnet, Wired (Nov. 28, 2022, 7:00 AM), https://www.wired.com/story/fbi-google-geofence-warrant-january-6/.

[5] Jeff Welty, Recent Developments Concerning Geofence Warrants, N.C. CRIM. L. (Nov. 4, 2024), https://nccriminallaw.sog.unc.edu/recent-developments-concerning-geofence-warrants/.

[6] Prathi Chowdri, Emerging Tech and Law Enforcement: What Are Geofences and How Do They Work?, Police1 (Nov. 16, 2023, 9:06 PM), https://www.police1.com/warrants/google-announces-it-will-revoke-access-to-location-history-effectively-blocking-geofence-warrants.

[7] Jennifer Lynch, Is This the End of Geofence Warrants?, Electronic Frontier Found., https://www.eff.org/deeplinks/2023/12/end-geofence-warrants.

[8] ACLU, ACLU Argues Evidence From Privacy-Invasive Geofence Warrants Should Be Suppressed, https://www.aclu.org/press-releases/aclu-argues-evidence-from-privacy-invasive-geofence-warrants-should-be-suppressed#:~:text=In%20the%20brief%2C%20the%20ACLU,they%20were%20engaged%20in%20criminal.

[9] Lynch, supra note 7.

[10] Id.

[11] United States v. Smith, 110 F.4th 817 (5th Cir. 2024).

[12] Id. at 28-30.

[13] Id. at 27-29.

[14] Carpenter v. United States, 585 U.S. 296 (2018).

[15] Id.

[16] United States v. Chatrie, 107 F.4th 319 (4th Cir. 2024).

[17] Id. at 326-57.

[18] Smith, 110 F.4th at 27-30.

[19] Chatrie, 107 F.4th at 326-57.

[20] Jennifer Lynch, Google’s Sensorvault Can Tell Police Where You’ve Been, Electronic Frontier Found., https://www.eff.org/deeplinks/2019/04/googles-sensorvault-can-tell-police-where-youve-been.

[21] Id.

[22] Id.

[23] Skye Witley, Google’s Location Data Move Will Reshape Geofence Warrant Use, Bloomberg L. (Dec. 20, 2023, 4:05 AM), https://news.bloomberglaw.com/privacy-and-data-security/googles-location-data-move-will-reshape-geofence-warrant-use.

[24] Id.

[25] Id.

[26] Id.

[27] Apple, Apple Transparency Report: Government and Private Party Requests, https://www.apple.com/legal/transparency/pdf/requests-2022-H1-en.pdf.


The Power of Preference or Monopoly? Unpacking Google’s Search Engine Domination

Donovan Ennevor, MJLST Staffer

When searching for an answer to a query online, would you ever use a search engine other than Google? For most people, the answer is almost certainly no. Google’s search engine has achieved such market domination that “to Google” has become a verb in the English language.[1] Google controls 90% of the U.S. search engine market, with its closest competitors, Yahoo and Bing, each holding around 3%.[2] Is this simply because Google offers a superior product, or is there some other, more nefarious reason?

According to the Department of Justice (“DOJ”), the answer is the latter: Google has dominated its competitors by engaging in illegal practices and creating a monopoly. Federal Judge Amit Mehta agreed with the DOJ’s position and ruled in August 2024 that Google’s market domination was a monopoly achieved through improper means.[3] The remedies for Google’s breach of antitrust law are yet to be determined; however, their consequences could have far-reaching implications for the future of Google and Big Tech.

United States v. Google LLC

In October 2020, the DOJ and 11 states filed a civil suit against Google in the U.S. District Court for the District of Columbia, alleging violations of U.S. antitrust laws.[4] A coalition of 35 states, Guam, Puerto Rico, and Washington D.C. filed a similar lawsuit in December 2020.[5] In 2021, the cases were consolidated into a single proceeding to address the overlapping claims.[6] An antitrust case of this magnitude had not been brought in nearly two decades.[7]

The petitioners’ complaint argued that Google’s dominance did not arise solely through superior technology but rather through exclusionary agreements designed to stifle competition in the online search engine and search advertising markets.[8] The complaint alleged that Google maintained its monopolies by engaging in practices such as entering into exclusivity agreements that prohibited the preinstallation of competitors’ search engines, forcing preinstallation of Google’s search engine in prime mobile device locations, and making it undeletable regardless of consumer preference.[9] For example, Google’s agreement with Apple required that all Apple products and tools have Google as the preinstalled default—essentially an exclusive—search engine.[10] Google also allegedly used its monopoly profits to fund payments that secured preferential treatment on devices, web browsers, and other search access points, creating a self-reinforcing cycle of monopolization.[11]

According to the petitioners, these practices not only limited competitor opportunities, but also harmed consumers by reducing search engine options and diminishing quality, particularly in areas like privacy and data use.[12] Furthermore, Google’s dominance in search advertising has allowed it to charge higher prices, impacting advertisers and lowering service quality—outcomes unlikely in a more competitive market.[13]

Google rebutted the petitioners’ argument, asserting instead that its search product is preferred due to its superiority and is freely chosen by its consumers.[14] Google also noted that if users wish to switch to a different search engine, they can do so easily.[15]

However, Judge Mehta agreed with the petitioners and held that Google’s market dominance in search and search advertising constituted a monopoly, achieved through exclusionary practices that violate U.S. antitrust laws.[16] The case will now move to the remedy determination phase, in which the DOJ and Google will argue over the appropriate remedies at a hearing in April 2025.[17]

The Proposed Remedies and Implications

In November, the petitioners filed their final proposed remedies—both behavioral and structural—with the court.[18] Behavioral remedies govern a company’s conduct, whereas structural remedies generally refer to reorganization or divestment.[19] The proposed behavioral remedies include barring Google from entering exclusive preinstallation agreements and requiring Google to license certain indexes, data, and models that drive its search engine.[20] These remedies would create more opportunities for competing search engines to gain visibility and improve their search capabilities and ad services. The petitioners’ filing noted they would also pursue structural remedies, including forcing Google to break up or divest its Chrome browser and Android mobile operating system.[21] To ensure Google adheres to these changes, the petitioners proposed appointing a court-monitored technical committee to oversee Google’s compliance.[22]

It could be many years before any of the proposed remedies are actually instituted, given that Google has indicated it will appeal Judge Mehta’s ruling.[23] Additionally, given precedent, it is unlikely that any structural remedies will be imposed or enforced.[24] However, any remedies ultimately approved would set a precedent for regulatory control over Big Tech, signaling that the U.S. government is willing to take strong steps to curb monopolistic practices. This could encourage further action against other tech giants and redefine regulatory expectations across the industry, particularly around data transparency and competition in digital advertising.

 

Notes

[1] See Virginia Heffernan, Just Google It: A Short History of a Newfound Verb, Wired (Nov. 15, 2017, 7:00 AM), https://www.wired.com/story/just-google-it-a-short-history-of-a-newfound-verb/.

[2] Justice Department Calls for Sanctions Against Google in Landmark Antitrust Case, Nat’l Pub. Radio (Oct. 9, 2024, 12:38 AM), https://www.npr.org/2024/10/09/nx-s1-5146006/justice-department-sanctions-google-search-engine-lawsuit [hereinafter Calls for Sanctions Against Google].

[3] United States v. Google LLC, No. 1:20-cv-03010-APM, 2024 WL 3647498, at *1, *134 (D.D.C. Aug. 5, 2024).

[4] Justice Department Sues Monopolist Google For Violating Antitrust Laws, U.S. Dep’t of Just. (Oct. 20, 2020), https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws [hereinafter Justice Department Calls for Sanctions].

[5] Dara Kerr, United States Takes on Google in Biggest Tech Monopoly Trial of 21st Century, Nat’l Pub. Radio (Sept. 12, 2023, 5:00 AM), https://www.npr.org/2023/09/12/1198558372/doj-google-monopoly-antitrust-trial-search-engine.

[6] Tracker Detail US v. Google LLC / State of Colorado v. Google LLC, TechPolicy.Press, https://www.techpolicy.press/tracker/us-v-google-llc/ (last visited Nov. 20, 2024).

[7] Calls for Sanctions Against Google, supra note 2 (“The last antitrust case of this magnitude to make it to trial was in 1998, when the Justice Department sued Microsoft.”).

[8] Justice Department Calls for Sanctions, supra note 4.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Kerr, supra note 5.

[15] Id.

[16] United States v. Google LLC, No. 1:20-cv-03010-APM, 2024 WL 3647498, at *1, *4 (D.D.C. Aug. 5, 2024).

[17] Calls for Sanctions Against Google, supra note 2.

[18] Steve Brachmann, DOJ, State AGs File Proposed Remedial Framework in Google Search Antitrust Case, IPWatchdog (Oct. 13, 2024, 12:15 PM), https://ipwatchdog.com/2024/10/13/doj-state-ags-file-proposed-remedial-framework-google-search-antitrust-case/id=182031/.

[19] Dan Robinson, Uncle Sam May Force Google to Sell Chrome Browser, or Android OS, The Reg. (Oct. 9, 2024, 12:56 PM), https://www.theregister.com/2024/10/09/usa_vs_google_proposed_remedies/.

[20] Brachmann, supra note 18.

[21] Exec. Summary of Plaintiffs’ Proposed Final Judgment at 3–4, United States v. Google LLC, No. 1:20-cv-03010-APM (D.D.C. Nov. 20, 2024).

[22] Id.

[23] See Jane Wolfe & Miles Kruppa, Google Loses Antitrust Case Over Search-Engine Dominance, Wall St. J. (Aug. 5, 2024, 5:02 PM), https://www.wsj.com/tech/google-loses-federal-antitrust-case-27810c43?mod=article_inline.

[24] See Makenzie Holland, Google Breakup Unlikely in Event of Guilty Verdict, TechTarget (Oct. 11, 2023), https://www.techtarget.com/searchcio/news/366555177/Google-breakup-unlikely-in-event-of-guilty-verdict; see also Michael Brick, U.S. Appeals Court Overturns Microsoft Antitrust Ruling, N.Y. Times (June 28, 2001), https://www.nytimes.com/2001/06/28/business/us-appeals-court-overturns-microsoft-antitrust-ruling.html (summarizing the U.S. Court of Appeals decision overturning the structural remedies imposed on Microsoft in an antitrust case).



What Happens to Your Genetic Data in a Sale or Acquisition?

Colin Loyd, MJLST Staffer

Remember 23andMe—the genetic testing company that skyrocketed in publicity in the 2010s due to its relatively inexpensive access to genetic testing? It’s now heading toward disaster. This September, its board of directors saw all but one member tender their resignation.[1] At the close of that day’s trading, 23andMe’s share price was $0.35, representing a 99.9% decline in valuation from its peak in 2021.[2] This decline suggests the company may declare bankruptcy, which often leads to a sale of a company’s assets. Bankruptcy or a sale of assets presents a host of complex privacy and regulatory issues, particularly concerning the sale of 23andMe’s most valuable asset—its vast collection of consumer DNA data.[3] This uncertain situation underscores serious concerns about the state of comprehensive privacy protections for genetic information, which leave consumers’ sensitive genetic data vulnerable to misuse and exploitation.

23andMe collects and stores massive amounts of user genetic information. However, unlike healthcare providers, 23andMe does not have to comply with the stringent privacy regulations set out in the Health Insurance Portability and Accountability Act (HIPAA).[4] While HIPAA is designed to protect sensitive health data, its protections apply only to a small subset of healthcare-related entities.[5] HIPAA only regulates the use of genetic information by “group health plan[s], health insurance issuer[s] that issue[] health insurance coverage, or issuer[s] of a medicare supplemental policy.”[6] 23andMe does not fit into any of these categories and therefore operates outside the scope of HIPAA protections with respect to genetic information, leaving any genetic information it holds largely unregulated.

The Genetic Information Nondiscrimination Act (GINA), enacted in 2008, offers consumer protections by prohibiting discrimination based on an individual’s genetic information with respect to health insurance premium amounts or eligibility requirements for health insurance.[7] GINA also prohibits any deprivation of employment opportunities based on genetic information.[8] However, GINA’s protections do not extend to life insurance, disability insurance, or long-term care insurance.[9] This leaves a gap where genetic information may be used against individuals by entities not subject to GINA.

This regulatory gap is a major concern for consumers, especially with a potential bankruptcy sale looming. If 23andMe sells its assets, including its database of genetic information, the new owner would not have to adhere to the same privacy commitments made by 23andMe. For example, 23andMe promises not to use genetic information it receives for personalized or targeted marketing/advertising without a user’s express consent.[10] This policy likely reflects 23andMe’s efforts to comply with the California Privacy Rights Act (CPRA), which grants consumers the right to direct a business to not share or sell their personal information.[11] However, this right under the CPRA is an opt-out right—not an opt-in right—meaning consumers can stop a future sale of their information but by default there is no initial, regulatory limit on the sale of their personal information.[12] As a result, there’s nothing stopping 23andMe from altering its policies and changing how it uses genetic information. In fact, 23andMe’s Privacy Statement states it “may make changes to this Privacy Statement from time to time.”[13] Any such change would likely be binding if it is clearly communicated to users.[14] 23andMe currently lists email or an in-app notification as methods it may notify its users of any change to the Privacy Statement.[15] If it does so, it’s highly possible a court would view this as “clear communication” and there would be little legal recourse for users to prevent their genetic information from being used in ways they did not anticipate, such as for research or commercial purposes.

For example, say a life insurance company acquires an individual’s genetic data through the purchase of 23andMe’s assets. It could potentially use that data to make decisions about coverage or premiums, even though GINA prohibits health insurers from doing the same.[16] This loophole highlights the dangers of having genetic information in the hands of entities not bound by strict privacy protections.

In the event of an acquisition or bankruptcy, 23andMe’s Privacy Statement outlines that personal information, including genetic data, may be among the assets sold or transferred to the new entity.[17] In such a case, the new owner could inherit both the data and the rights to use it under the existing terms, including the ability to modify how the data is used. This could result in uses not originally intended by the user so long as the change is communicated to the user.[18] This transfer clause highlights a key concern for users because it allows their deeply personal genetic data to be passed to another company without additional consent, potentially subjecting them to exploitation by organizations with different data usage policies or commercial interests. While 23andMe must notify users about any changes to the privacy statement or its use of genetic information, it does not specify whether the notice will be given in advance.[19] Any new entity could plan a change to the privacy statement terms–altering how it uses the genetic information while leaving users in the dark until the change is communicated to them, at which point the user’s information may have already been shared with third parties.

The potential 23andMe bankruptcy and sale of assets reveal deep flaws in the current regulatory system governing genetic data privacy. Without HIPAA protections, consumers risk their sensitive genetic information being sold or misused in ways they cannot control. GINA, while offering some protections, still leaves significant gaps, especially in life and disability insurance. As the demand for genetic testing continues to grow, the vulnerabilities exposed by 23andMe’s potential financial troubles highlight the urgent need for better privacy protections. Consumers must be made aware of the risks involved in sharing their genetic data, and regulatory measures must be strengthened to ensure this sensitive information is not misused or sold without their explicit consent.

 

Notes

[1] Independent Directors of 23andMe Resign from Board, 23andMe (Sept. 17, 2024) https://investors.23andme.com/news-releases/news-release-details/independent-directors-23andme-resign-board.

[2] Rolfe Winkler, 23andMe Board Resigns in New Blow to DNA-Testing Company, WALL ST. J. (Sept. 18, 2024) https://www.wsj.com/tech/biotech/23andme-board-resigns-in-new-blow-to-dna-testing-company-12f1a355.

[3] Anne Wojcicki (the last remaining board member) has consistently publicized her plan to take the company private, which is looming larger given the current state of the business financials. Id.

[4] See 42 U.S.C. § 1320d-9(a)(2).

[5] See generally 42 U.S.C. §1320d et seq.

[6] 42 U.S.C. § 1320d-9(a)(2).

[7] Genetic Information Nondiscrimination Act of 2008, Pub. L. No. 110-233, 122 Stat. 881.

[8] Id.

[9] Jessica D. Tenenbaum & Kenneth W. Goodman, Beyond the Genetic Information Nondiscrimination Act: Ethical and Economic Implications of the Exclusion of Disability, Long-term Care and Life Insurance, 14 Personalized Med. 153, 154 (2017).

[10] How We Use Your Information, 23andMe, https://www.23andme.com/legal/how-we-use-info/ (last visited Oct. 14, 2024).

[11] Cal. Civ. Code § 1798.120(a) (Deering 2024).

[12] Id.

[13] Privacy Statement, 23andMe (Sept. 24, 2024) https://www.23andme.com/legal/privacy/full-version/.

[14] See Lee v. Ticketmaster LLC, 817 Fed. App’x 393 (9th Cir. 2019) (upholding terms of use where notice was clearly given to the user, even if the user didn’t check a box to assent to the terms).

[15] Privacy Statement, supra note 13.

[16] See K.S.A. § 40-2259(c)-(d) (carving out the ability for life insurance policies to take into account genetic information when underwriting the policy).

[17] Privacy Statement, supra note 13.

[18] See Ticketmaster, 817 Fed. App’x 393 (2019).

[19] Privacy Statement, supra note 13.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new rules overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc., which allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the new reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passage and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

Passed in 2008, BIPA was one of the earliest consumer protection laws for biometric data collection. At that time, major corporations were piloting finger-scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers but also to prescribe preventative measures for companies to follow. BIPA’s protections are broad; companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected, disclose why it is being collected, and state how long they intend to store the data.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as securely as a company stores its other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce data privacy protections. Following its passage, swaths of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was not mishandled.[10] Plaintiffs could recover for every violation of BIPA and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Because litigants could collect damages for every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and, if found liable, companies would owe, at minimum, $1,000 per violation. That lack of predictability often pushed corporate liability insurers to settle rather than risk such large payouts.
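The gap between per-violation and per-person accrual can be sketched in rough numbers. The following is a hypothetical illustration only: the class size and scan counts are invented for the example, and the sole figure drawn from the Act is the $1,000 statutory minimum for negligent violations.

```python
# Hypothetical comparison of BIPA statutory-damages exposure under the
# original per-violation accrual (as read in Cothron) versus the reformed
# per-person accrual. All scenario numbers are assumptions for illustration.

PER_VIOLATION_DAMAGES = 1_000  # statutory minimum for a negligent violation

def exposure_per_violation(class_size: int, scans_per_person: int) -> int:
    """Pre-reform reading: a separate claim accrues with every scan."""
    return class_size * scans_per_person * PER_VIOLATION_DAMAGES

def exposure_per_person(class_size: int) -> int:
    """Post-reform: repeated identical collections count as one violation."""
    return class_size * PER_VIOLATION_DAMAGES

# Assumed scenario: a 9,000-member class, fingerprints scanned twice per
# shift, 250 shifts a year (i.e., 500 scans per person).
before = exposure_per_violation(9_000, 2 * 250)
after = exposure_per_person(9_000)
print(f"per-violation exposure: ${before:,}")  # $4,500,000,000
print(f"per-person exposure:    ${after:,}")   # $9,000,000
```

Under these assumed facts, the same conduct swings from billions to millions in minimum exposure, which is why insurers treated per-violation accrual as effectively unquantifiable risk.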

The 2023 ruling in Cothron implored the legislature to address concerns of disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller or mid-size companies.[14] The new provisions to BIPA target the Court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.

BIPA Reforms Mark a Trend Toward Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need them most. The stakes for mishandling biometric data are much higher than those for other collected data: while social security numbers and credit card numbers can be canceled and changed – with varying degrees of ease – faces and fingerprints cannot.[15] Ongoing and future technology developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane and our family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovering on a per-person basis instead of a per-violation basis means they could be further harmed after recovering against a company, with no additional redress.

Corporations, however, have been calling for reforms for years and believe these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers began removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers would also encourage companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may recover only once, they can still recover actual damages if a company’s actions resulted in more harm than simply violating BIPA’s provisions. Awards on a per-person basis can still result in hefty settlements or verdicts that hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions apply will be left for state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can recover from an entity only once, are past litigants barred from participating in future actions over similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator, https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham (Aug. 2, 2024, 3:13PM).

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html.

[19] Gross, supra note 2.

[20] Id.


AR/VR/XR: Breaking the Wall of Legal Issues Used to Limit in Either the Real-World or the Virtual-World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing many aspects of our lives. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple released its Vision Pro in 2023, no one doubts that this technology will soon be a vital driver for both tech and business. But how significantly can this technology impact human lives? What industries will it affect? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical display elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or a multi-projected environment) to create a simulated, immersive environment that can provide an experience either similar to or completely different from the real world.[3] Mixed Reality/Extended Reality (“XR”) glasses, by contrast, are relatively compact and sleek and weigh much less than VR headsets.[4] What most distinguishes XR from VR is that XR users can still see the world around them: the device projects a translucent screen on top of the real world. The differences among these three vision technologies may soon be eliminated altogether as they converge into a single device.

Typically, vision technology helps people mentally process 2-D information in a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a disconnect between their eyes and body after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments admit that AR/VR trials have produced adverse effects stemming from the mismatch between the visual system and the motion system.[6] Researchers have also discovered that the technology affects how people behave in social situations, leaving users feeling less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion and is projected to reach $88 billion by 2026.[8] According to industry specialists and analysts, outside of gaming a significant portion of vision technology revenue will come from e-commerce and retail (fashion and beauty), manufacturing, education, healthcare, real estate, and e-sports, which will in turn affect entertainment, cost of living, and innovation.[9] To capture this tremendous opportunity, it is crucial to understand the potential legal risks and develop a comprehensive legal strategy to address them.

To expand one’s business model, it is important to maximize the protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also implicates contractual concerns, service remedies, and liability for infringement of third-party IP. For example, in patent prosecution it is difficult to argue that hardware executing the invention (characters or data) is a particular machine, or that the steps the hardware performs amount to more than well-understood, routine, conventional activity under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against over-abstracting inventions: “[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas . . . [We] tread carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns may include data privacy, harassment, virtual trespass, or even violent attacks stemming from the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision technology devices are ambiguous. It is also unclear whether courts will accept a defense of impaired judgment based on the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and spend the most time using these devices. Experts have questioned whether AR/VR devices negatively impact the mental and physical health of younger users, and whether heavy users may experience a decline in social communication skills and come to feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through constant scanning of users’ surroundings, although the Children’s Online Privacy Protection Act (COPPA) does prohibit operators from collecting personally identifiable information from users they know to be under the age of thirteen without parental consent.[12]

Research trends suggest that combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale; bring people together in shared virtual environments; and enhance comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for it.

Notes

[1] Eric Ravenscraft, What is the Metaverse, Exactly?, Wired (Jun. 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sep. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532#:~:text=Some%20negative%20symptoms%20of%20VR,nausea%20and%20increased%20muscle%20fatigue.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci.Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkt. and Mkt., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining an evaluation standard on when determining whether a claim recites significantly more than a judicial exception depends on whether the additional elements(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 CFR pt. 312.


The Double-Helix Dilemma: Navigating Privacy Pitfalls in Direct-to-Consumer Genetic Testing

Ethan Wold, MJLST Staffer

Introduction

On October 22, 2023, direct-to-consumer genetic testing (DTC-GT) company 23andMe sent emails to a number of its customers informing them of a data breach involving the company’s “DNA Relatives” feature, which allows customers to compare ancestry information with other users worldwide.[1] While 23andMe and other similar DTC-GT companies offer a number of benefits to consumers, such as testing for health predispositions and carrier status for certain genes, this latest data breach is a reminder that before opting into these sorts of services one should be aware of the risks they present.

Background

DTC-GT companies such as 23andMe and Ancestry.com have proliferated and blossomed in recent years. It is estimated that over 100 million people have utilized some form of direct-to-consumer genetic testing.[2] Using biospecimens submitted by consumers, these companies sequence and analyze an individual’s genetic information to provide a range of services pertaining to one’s health and ancestry.[3] The October 22 data breach specifically pertained to 23andMe’s “DNA Relatives” feature.[4] The DNA Relatives feature can identify relatives on any branch of one’s family tree by taking advantage of the autosomal chromosomes, the 22 pairs of chromosomes that are passed down from your ancestors on both sides of your family, and one’s X chromosome(s).[5] Relatives are identified by comparing the customer’s submitted DNA with the DNA of other 23andMe members who are participating in the DNA Relatives feature.[6] When two people are found to share an identical DNA segment, it is likely they share a recent common ancestor.[7] The DNA Relatives feature even uses the length and number of these identical segments to predict the relationship between genetic relatives.[8] Given the sensitive nature of genetic information, practices such as the DNA Relatives feature raise obvious privacy concerns. Yet the legislation and regulation surrounding DTC-GT remain somewhat limited.
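The segment-matching logic described above can be sketched in miniature. This is an illustrative toy only, assuming genotypes are simplified to strings of variant calls at shared positions; real services detect identical-by-descent segments statistically over phased genotypes, and the function names and threshold here are hypothetical:

```python
# Toy sketch of relative matching: find runs of identical variant calls
# between two genotype strings, then use total matched length as a crude
# proxy for relatedness. Thresholds and names are invented for illustration.

def shared_segments(a: str, b: str, min_len: int = 6):
    """Return (start, length) for each matching run of at least min_len."""
    segments = []
    run_start = None
    for i, (x, y) in enumerate(zip(a, b)):
        if x == y:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_len:
                segments.append((run_start, i - run_start))
            run_start = None
    if run_start is not None and len(a) - run_start >= min_len:
        segments.append((run_start, len(a) - run_start))
    return segments

def predicted_closeness(segments):
    """Crude stand-in for relationship prediction: total matched length."""
    return sum(length for _, length in segments)
```

Two long shared runs, as between close relatives, yield a higher closeness score than scattered short matches, which the `min_len` cutoff discards entirely.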

Legislation

The Health Insurance Portability and Accountability Act (HIPAA) provides the baseline privacy and data security rules for the healthcare industry.[9] HIPAA’s Privacy Rule regulates the use and disclosure of a person’s “protected health information” by a “covered entity.”[10] Under the Act, the type of genetic information collected by 23andMe and other DTC-GT companies does constitute “protected health information.”[11] However, because HIPAA defines a “covered entity” as a health plan, healthcare clearinghouse, or healthcare provider, DTC-GT companies are not covered entities and therefore fall outside the umbrella of HIPAA’s Privacy Rule.[12]

Thus, the primary source of regulation for DTC-GT companies appears to be the Genetic Information Nondiscrimination Act (GINA). GINA was enacted in 2008 to protect the public from genetic discrimination, alleviate concerns about such discrimination, and thereby encourage individuals to take advantage of genetic testing, technologies, research, and new therapies.[13] GINA defines genetic information as information from genetic tests of an individual or family members, including information from genetic services or genetic research.[14] DTC-GT companies therefore fall under GINA’s jurisdiction. However, GINA applies only to the employment and health insurance industries and thus neglects many other arenas where privacy concerns may arise.[15] This is especially relevant for 23andMe customers, as signing up for the service constitutes consent for the company to use and share one’s genetic information with its associated third-party providers.[16] As a case in point, in 2018 the pharmaceutical giant GlaxoSmithKline purchased a $300 million stake in 23andMe to gain access to the company’s trove of genetic information for use in its drug development trials.[17]

Executive Regulation

In addition to the legislation above, three federal administrative agencies primarily regulate the DTC-GT industry: the Food and Drug Administration (FDA), the Centers for Medicare & Medicaid Services (CMS), and the Federal Trade Commission (FTC). The FDA has jurisdiction over DTC-GT companies because the genetic tests they use are classified as “medical devices,”[18] and in 2013 it exercised this authority over 23andMe by sending a letter to the company that resulted in the suspension of one of its health-related genetic tests.[19] However, the FDA’s jurisdiction covers only diagnostic tests, so it does not regulate DTC-GT services related to genealogy, such as 23andMe’s DNA Relatives feature.[20] Nor does the FDA have jurisdiction to regulate other aspects of DTC-GT companies’ activities or data practices.[21] CMS can regulate DTC-GT companies through enforcement of the Clinical Laboratory Improvement Amendments (CLIA), which require that genetic testing laboratories ensure the accuracy, precision, and analytical validity of their tests.[22] But, like the FDA, CMS has jurisdiction only over tests that diagnose a disease or assess health.[23]

Lastly, the FTC has broad authority to regulate unfair or deceptive business practices under the Federal Trade Commission Act (FTCA) and has wielded this authority against DTC-GT companies in the past. For example, in 2014 the agency brought an action against two DTC-GT companies that were using genetic tests to match consumers to nutritional supplements and skincare products.[24] The FTC alleged that the companies’ data security practices were unfair and deceptive because they failed to implement reasonable policies and procedures to protect consumers’ personal information and created unnecessary risks to the personal information of nearly 30,000 consumers.[25] The companies ultimately entered into an agreement with the FTC whereby they agreed to establish and maintain comprehensive data security programs and submit to yearly security audits by independent auditors.[26]

Potential Harms

As the passages above illustrate, the federal government recognizes, and has at least attempted to mitigate, the privacy concerns associated with DTC-GT. A number of states have also passed their own laws limiting DTC-GT in certain respects.[27] Nevertheless, given the potential magnitude and severity of harm associated with DTC-GT, one may question whether it is enough. Data breaches involving health-related data are growing in frequency and now account for 40% of all reported data breaches.[28] These breaches give outsiders unauthorized access to consumer-submitted data and can violate an individual’s genetic privacy. Though GINA aims to prevent it, genetic discrimination, such as insurers raising health insurance premiums or denying coverage based on genetic predispositions, remains one of the leading concerns associated with these violations. What’s more, by obtaining genetic information from DTC-GT databases, it is possible to recover a consumer’s surname and combine it with other metadata, such as age and state, to identify the specific consumer.[29] This may in turn enable identity theft, such as opening accounts, taking out loans, or making purchases in the victim’s name, potentially damaging their financial well-being and credit score. Dealing with the aftermath of a genetic data breach can also be expensive: victims may incur legal fees, credit monitoring costs, or other financial burdens in an attempt to mitigate the damage.
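The surname re-identification described above is a form of linkage attack: several quasi-identifiers, none identifying on its own, can jointly single out one person when joined against public records. A minimal sketch, with entirely invented names and records:

```python
# Hypothetical linkage attack: a surname inferred from genetic markers is
# joined with public metadata (age range, state) to single out one person.
# All names and records below are invented for illustration.

leaked = {"surname": "Doe", "age_range": (30, 40), "state": "MN"}

public_records = [
    {"name": "Alice Doe", "age": 62, "state": "MN"},
    {"name": "Brian Doe", "age": 34, "state": "MN"},
    {"name": "Carol Doe", "age": 35, "state": "CA"},
]

def link(leaked, records):
    """Keep only records consistent with every leaked quasi-identifier."""
    lo, hi = leaked["age_range"]
    return [
        r for r in records
        if r["name"].endswith(leaked["surname"])
        and lo <= r["age"] <= hi
        and r["state"] == leaked["state"]
    ]

matches = link(leaked, public_records)  # narrows three candidates to one
```

Each attribute alone matches multiple people; their intersection identifies a single individual, which is why breached genealogy data is more revealing than any one field suggests.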

Conclusion

As it sits now, genetic information submitted to DTC-GT companies already contains a significant volume of consequential information. As technology continues to develop and research presses forward, the volume and utility of this information will only grow over time. Thus, it is crucially important to be aware of risks associated with DTC-GT services.

This discussion is not intended to discourage individuals from participating in DTC-GT. These companies and their services provide a host of benefits, such as giving consumers access to genetic testing without the healthcare system acting as gatekeeper, offering more autonomy and often a lower price.[30] The information provided can also empower consumers to mitigate the risks of certain diseases, make more informed family-planning decisions, or better understand their heritage.[31] DTC-GT has revolutionized the way individuals access and understand their genetic information. But its accessibility and convenience come with risks that must be carefully weighed against those benefits.

Notes

[1] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[2] https://www.ama-assn.org/delivering-care/patient-support-advocacy/protect-sensitive-individual-data-risk-dtc-genetic-tests#:~:text=Use%20of%20direct%2Dto%2Dconsumer,November%202021%20AMA%20Special%20Meeting

[3] https://go-gale-com.ezp3.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[4] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[5] https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics

[6] Id.

[7] Id.

[8] Id.

[9] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[10] https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf

[11] Id.

[12] Id.; https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[13] https://www.eeoc.gov/statutes/genetic-information-nondiscrimination-act-2008

[14] Id.

[15] https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3035561&blobtype=pdf

[16] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[17] https://news.yahoo.com/news/major-drug-company-now-access-194758309.html

[18] https://uscode.house.gov/view.xhtml?req=(title:21%20section:321%20edition:prelim)

[19] https://core.ac.uk/download/pdf/33135586.pdf

[20] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[21] Id.

[22] https://www.law.cornell.edu/cfr/text/42/493.1253

[23] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[24] https://www.ftc.gov/system/files/documents/cases/140512genelinkcmpt.pdf

[25] Id.

[26] Id.

[27] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[28] Id.

[29] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[30] Id.

[31] Id.


The Policy Future for Telehealth After the Pandemic

Jack Atterberry, MJLST Staffer

The Pandemic Accelerated Telehealth Utilization

Before the Covid-19 pandemic began, telehealth usage in the United States healthcare system was insignificant (rounding to 0%) as a percentage of total outpatient care visits.[1] In the two years after the beginning of the pandemic, telehealth usage soared to over 10% of outpatient visits and has been widely used across all payer categories including Medicare and Medicaid.[2] The social distancing realities during the pandemic years coupled with federal policy measures allowed for this radical transition toward telehealth care visits.

In response to the onset of Covid-19, the US federal government relaxed and modified many telehealth regulations, expanding permissible access to telehealth services. After a public health emergency was declared in early 2020, the Centers for Medicare & Medicaid Services (CMS) and the Department of Health and Human Services (HHS) modified preexisting telehealth-related regulations to expand the permissible use of those services. Specifically, CMS temporarily expanded Medicare coverage to include telehealth services without the need for in-person visits, removed practice restrictions such as limits on the types of providers that could furnish telehealth, and increased reimbursement rates for telehealth services to bring them closer to in-person rates.[3] In addition, HHS eased HIPAA requirements around the use of popular communication platforms such as Zoom, Skype, and FaceTime, provided they are used in good faith.[4] Collectively, these changes led to a significant rise in telehealth services and expanded access to care for many people who otherwise would not receive healthcare. Unfortunately, many of these telehealth policy provisions are set to expire in 2024, leaving open the question of whether the benefits of telehealth expansion will remain after the public emergency measures end.

Issues with Telehealth Care Delivery Between States

A major legal impediment to telehealth expansion in the US is the complex interplay of state and federal laws and regulations governing telehealth care delivery. Key differences among states have historically held back expansion in two areas. First, healthcare providers are most often licensed at the state level, creating a barrier for providers who want to offer telehealth services across state lines. While many states implemented temporary waivers or joined interstate medical licensure compacts to address this issue during the pandemic, many others have not, and huge inconsistencies remain. Second, states differ significantly in reimbursement policy, with different payer types covering telehealth differently from region to region; this has left providers unsure whether delivering care in certain states will be adequately reimbursed. Although the federal health emergency eased interstate telehealth restrictions during the pandemic, these challenges will likely resurface once the temporary measures are lifted at the end of 2024.

What the pandemic-era easing of telehealth restrictions taught us is that interstate telehealth improves health outcomes, increases patient satisfaction, and closes gaps in care delivery. In particular, rural communities and other underserved areas with relatively few healthcare providers benefited greatly from the ability to receive care from an out-of-state provider. For example, patients in states like Montana, North Dakota, and South Dakota benefit immensely from being able to talk with an out-of-state mental health provider because of the severe shortages of psychiatrists, psychologists, and other mental health practitioners in those states.[6] In addition, a 2021 study by the Bipartisan Policy Center highlighted that patients in states that joined interstate licensure compacts experienced a noticeable improvement in care experience, and healthcare workforces saw a decreased burden on their chronically stressed providers.[7] These positive outcomes of eased interstate regulation should inform telehealth policy moving forward.

Policy Bottlenecks to Telehealth Care Access Expansion

The future of telehealth in American healthcare is surprisingly uncertain as the US emerges from the pandemic years. As the public health emergency measures that removed various legal and regulatory barriers to telehealth expire next year, many Americans could be left without access to healthcare via telehealth services. To ensure that telehealth remains a part of American healthcare, federal and state policymakers will need to act to bring long-term certainty to the telehealth regulatory framework. In particular, advocacy groups such as the American Telemedicine Association recommend that policymakers focus on key changes: removing licensing barriers to interstate telehealth care, modernizing reimbursement structures to align with value-based payment principles, and permanently adopting pandemic-era telehealth access for Medicare, Federally Qualified Health Centers, and Rural Health Clinics.[8] Another valuable federal regulatory change would be to continue allowing the prescription of controlled substances without an in-person visit, which would entail modifying the Ryan Haight Act’s requirement of an in-person medical exam before prescribing controlled substances.[9] Like any healthcare reform in the US, cementing these telehealth policy changes as law will be a major uphill battle. Nonetheless, expanding access to telehealth could be a bipartisan opportunity for lawmakers, as it would expand access to care and help drive the transition toward value-based care, leading to better health outcomes for patients.

Notes

[1] https://www.healthsystemtracker.org/brief/outpatient-telehealth-use-soared-early-in-the-covid-19-pandemic-but-has-since-receded/

[2] https://www.cms.gov/newsroom/press-releases/new-hhs-study-shows-63-fold-increase-medicare-telehealth-utilization-during-pandemic#:~:text=Taken%20as%20a%20whole%2C%20the,Island%2C%20New%20Hampshire%20and%20Connecticut.

[3] https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency

[4] Id.

[5] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[6] https://thinkbiggerdogood.org/enhancing-the-capacity-of-the-mental-health-and-addiction-workforce-a-framework/?_cldee=anVsaWFkaGFycmlzQGdtYWlsLmNvbQ%3d%3d&recipientid=contact-ddf72678e25aeb11988700155d3b3c69-e949ac3beff94a799393fb4e9bbe3757&utm_source=ClickDimensions&utm_medium=email&utm_campaign=Health%20%7C%20Mental%20Health%20Access%20%7C%2010.19.21&esid=e4588cef-7520-ec11-b6e6-002248246368

[7] https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2021/11/BPC-Health-Licensure-Brief_WEB.pdf

[8] https://hbr.org/2023/01/its-time-to-cement-telehealths-place-in-u-s-health-care

[9] https://www.aafp.org/pubs/fpm/issues/2021/0500/p9.html