Lillie Grant, MJLST Staffer
What counts as harm in an age of inference?
Modern systems do not just collect information; they generate it.[1] From patterns in behavior, timing, and interaction, they derive conclusions about people that those people never actually shared.[2] Often, those conclusions are more revealing than anything someone would voluntarily disclose.[3] And yet, the law does not clearly or consistently treat that process as harmful.[4]
Privacy law has mostly been built around disclosure.[5] The usual question is whether information was knowingly shared, improperly collected, or revealed to the wrong people.[6] The basic idea is that the data starts with the individual and then moves outward.[7] But inference does not work like that.[8] It is not about what is given; it is about what is created.[9]
The difference is more significant than it first appears. When a system converts small pieces of behavior into conclusions about a person, it does more than record activity; it interprets it, producing not just a list of actions but a statement about what those actions mean.[10]
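To make that concrete, here is a deliberately simplified sketch, in Python, of how a system might manufacture a conclusion from logged behavior. Every signal name, weight, and threshold below is invented for illustration; real systems are far more elaborate, but the structure is the same: behavior in, a new fact out.

# A hypothetical illustration: none of these logged signals is disclosed
# health information, yet combined they yield a conclusion the person
# never shared. All names, weights, and thresholds here are invented.

observed = {
    "late_night_sessions_per_week": 5,  # activity between 1 a.m. and 4 a.m.
    "sleep_related_searches": 9,        # queries mentioning insomnia, melatonin
    "sleep_aid_page_visits": 3,         # visits to sleep-aid product pages
}

def infer_sleep_issue(signals: dict) -> bool:
    """Derive a new 'fact' about the user from behavior alone.

    The output is not collected from the user; it is created by the
    system. This is the inferred data the post describes.
    """
    score = (
        2 * signals["late_night_sessions_per_week"]
        + 3 * signals["sleep_related_searches"]
        + 4 * signals["sleep_aid_page_visits"]
    )
    return score > 30  # arbitrary illustrative cutoff

# The inference never leaves the system, yet it can quietly steer ads,
# recommendations, and prioritization.
profile = {"likely_sleep_issue": infer_sleep_issue(observed)}
print(profile)  # {'likely_sleep_issue': True}

Nothing in the input is a disclosure; the conclusion exists only because the system built it, and that manufacturing step is exactly what disclosure-based doctrine has trouble seeing.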
The law has not caught up. Courts are much more comfortable recognizing harm when inferred information shows up in the world in a visible way.[11] If something is revealed, shared, or used in a way that clearly affects someone, it looks like a familiar kind of injury.[12] It has consequences that feel real and immediate.[13]
But most inferences never get that far.[14] They stay inside the system that produced them.[15] They shape what someone sees, what is recommended, what is prioritized, and sometimes what opportunities are available, all without a discrete, traceable event.[16] Even when those inferences are accurate or deeply personal, they often do not trigger legal protection.[17] There is no clear moment where something was “disclosed,” and without that, courts struggle to recognize harm at all.[18]
That leaves a gap: privacy law still depends on the idea that information is something a person gives.[19] Something you can point to and say, “This was shared.”[20] But inferred data does not fit into that model.[21] It is not handed over; it is built, and because of that, it slips past categories that were never designed to capture this kind of process.[22] The problem is not just theoretical; it affects whether someone can even bring a claim.[23] To get into court, a plaintiff has to show a concrete injury.[24] Not just a feeling that something is off, but something the law is willing to recognize as harm.[25] When the issue is inference, the information may shape real outcomes but does so quietly, without a clear moment that satisfies the law’s demand for discrete injury.[26]
At the same time, these inferences are not meaningless. They are the product. Companies are not just collecting data for the sake of it; they are turning it into insights that can be used to target ads, keep people engaged, and make money.[27] The value is not just in what people do, but in what can be figured out from what they do.[28]
That raises a harder question. If a company can take your behavior, turn it into something new, and profit from it, what exactly belongs to you? The raw data came from you, but the conclusion did not. The law tends to treat that distinction as important,[29] but it is not obvious that the distinction should settle the question at all.[30]
Recent lawsuits by authors challenge the use of their works to train AI systems as a form of uncompensated extraction.[31] But those claims focus on the inputs used to build the systems, leaving open a distinct question: whether individuals have any claim to the inferences generated about them. Framed that way, the problem is not just data use but the unrecognized extraction and monetization of information produced about individuals.
There are limited signals in existing law suggesting that creating new data about a person can itself be treated as harm. The clearest come from biometric cases, where courts have recognized that generating something like a faceprint is significant even without further use.[32]
Part of what makes inference so difficult is that it does not feel like a clear violation. There is no obvious intrusion, no single moment where something is taken. Instead, it happens gradually, as bits of behavior that appear harmless on their own accumulate into a picture that is surprisingly complete.[33] That creates a deeper tension. The better systems get at understanding people, the less clear it becomes what it even means, legally, to “know” something about someone.[34] At what point does a pattern become information? And at what point does producing that information start to matter in a legal sense?
Maybe the better framing abandons disclosure as the organizing principle altogether. Maybe the issue is not disclosure at all, but extraction. Systems are not just observing behavior; they are pulling meaning out of it and turning that meaning into something usable.[35] That something can be scaled, sold, and built into entire business models.[36] Yet the legal rules we have are still mostly about what people choose to share, not what can be created from what they do.[37]
If that is right, the problem is only intensifying. Systems increasingly rely on information that no one explicitly provided but that still feels personal, which makes it harder to say that nothing of consequence is being taken. The law offers no clear answer, leaving inferred data central in practice but misaligned with existing doctrines of harm. Individuals are left in a position where systems can form detailed conclusions about them that they have little ability to see or challenge, a state of affairs that reflects a definition of harm no longer matched to how information is actually produced and used.
Notes
[1] See generally Joan M. Wrabetz, What Is Inferred Data and Why Is It Important?, ABA (Aug. 22, 2022), https://www.americanbar.org/groups/business_law/resources/business-law-today/2022-september/what-is-inferred-data-and-why-is-it-important/.
[2] Id.
[3] See Hal Conick, AI and the Law, Univ. Chi. L. Sch. (Dec. 9, 2024), https://www.law.uchicago.edu/news/ai-and-law.
[4] Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, 2019 Colum. Bus. L. Rev. 494.
[5] See Overview of the Privacy Act of 1974: Conditions of Disclosure to Third Parties, U.S. Dep’t of Just., https://www.justice.gov/opcl/overview-privacy-act-1974-2020-edition/disclosures-third-parties (last visited Apr. 9, 2026).
[6] Id.
[7] Id.
[8] See Wrabetz, supra note 1.
[9] Id.
[10] Id.
[11] See Harith Khawaja, Injury, in Fact: The Internet, the Americans with Disabilities Act, and Standing in Digital Spaces, 36 Stan. L. & Pol’y Rev. 165, 172 (2025).
[12] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. Rev. 793 (2022).
[13] Id.
[14] Jeffrey Erickson, What Is AI Inference?, Oracle (Apr. 2, 2024), https://www.oracle.com/artificial-intelligence/ai-inference/.
[15] Id.
[16] Id.
[17] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021); Citron & Solove, supra note 12.
[18] Id.
[19] Citron & Solove, supra note 12.
[20] See Pamela J. Wisniewski & Xinru Page, Privacy Theories and Frameworks, in Modern Socio-Technical Perspectives on Privacy 15 (2022).
[21] Wrabetz, supra note 1.
[22] See Privacy by Proxy: Regulating Inferred Identities in AI Systems, IAPP (Nov. 12, 2025), https://iapp.org/news/a/privacy-by-proxy-regulating-inferred-identities-in-ai-systems.
[23] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).
[24] Id.
[25] Id.
[26] Wrabetz, supra note 1.
[27] Id.
[28] Id.
[29] Id.
[30] Id.
[31] See Pramode Chiruvolu et al., Fair Use and AI Training: Two Recent Decisions Highlight the Complexity of This Issue, Skadden, Arps, Slate, Meagher & Flom LLP (July 8, 2025), https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training.
[32] See Ross D. Emmerman & Mark Goldberg, Illinois Supreme Court Rules No Actual Harm Needed for Biometric Information Protection Act Claims; Floodgates Open, Loeb & Loeb LLP (Jan. 2019), https://www.loeb.com/en/insights/publications/2019/01/illinois-supreme-court-rules-no-actual-harm-needed.
[33] Wrabetz, supra note 1.
[34] Id.
[35] Id.
[36] Id.
[37] See Spokeo, Inc. v. Robins, 578 U.S. 330 (2016); TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).
