Anzario Serrant, MJLST Staffer
Since the ushering in of the new millennium, the number of active internet users, defined as those who have accessed the internet in the last month, has increased by over a thousand percent.[1] The internet, and technology as a whole, has planted its roots deeply into our everyday lives and morphed the world into what it is today. As the internet transformed, so did our society, shifting from a time when the internet was used solely by government entities and higher-learning institutions[2] to now, when over 60% of the world’s population has regular access to cyberspace.[3] The ever-evolving nature of the internet and technology has brought an ease and convenience never before imagined while also fostering global connectivity. Although this connection may bring the immediate gratification of instantaneously communicating with friends hundreds of miles away, it has also created an arena conducive to the spread of false or inaccurate information, both deliberate and otherwise.
The evolution of misinformation and disinformation has radically changed how societies interact with information, posing new challenges to individuals, governments, and legal systems. Misinformation, the sharing of a verifiably false statement without intent to deceive, and disinformation, a subset of misinformation distinguished by intent to mislead and actual knowledge that the information is false, are not new phenomena.[4] They have existed throughout history, from the spread of rumors during the Black Death[5] to misinformation about HIV/AIDS in the 1980s.[6] In both examples, misinformation promoted ineffective measures, deepened ostracization, and ultimately cost countless lives. Today, the internet has exponentially increased the speed and scale at which misinformation spreads, making our society even more vulnerable to associated harms. But who should bear liability for these harms: individuals, social media companies, or both? And does existing tort law provide adequate remedies to offset these harms?
The Legal Challenge
Given the global reach of social media and the proliferation of both misinformation and disinformation, one critical question arises: Who should be held legally responsible when misinformation causes harm? This question is becoming more pressing, particularly in light of “recent” events like the COVID-19 pandemic, during which unproven treatments were promoted on social media, leading to widespread confusion and, in some cases, physical harm.[7]
Under tort law, legal remedies exist that could potentially address the spread and use of inaccurate information in situations involving a risk of physical harm. These include fraudulent or negligent misrepresentation, conscious misrepresentation involving risk of physical harm, and negligent misrepresentation involving risk of physical harm.[8] However, these legal concepts were developed prior to the internet and applying them to the realm of social media remains challenging.
Fraudulent Misrepresentation and Disinformation
Current tort law provides limited avenues for addressing disinformation, especially on social media. However, fraudulent misrepresentation can help tackle cases involving deliberate financial deception, such as social media investment scams. These scams arguably meet the fraudulent misrepresentation criteria—false promises meant to induce investment, resulting in financial losses for victims.[9] However, the broad, impersonal nature of social media complicates proving “justifiable reliance.” For instance, would a reasonable person rely on an Instagram post from a stranger to make an investment decision?
In limited instances, courts applying a more subjective analysis might be willing to find the victim’s reliance justifiable, but that still leaves various victims unprotected.[10] Given these challenges and the limited prospect for success, it may be more effective to consider the role of social media platforms in spreading disinformation.
Conscious Misrepresentation Involving Risk of Physical Harm (CMIRPH)
Another tort that applies in limited circumstances is CMIRPH. This tort applies when a person knowingly spreads false or unverified information, intending to induce action (or disregarding the likelihood of inducing action) that carries an unreasonable risk of physical harm.[11] The most prominent example of this occurred during the COVID-19 pandemic, when false information about hydroxychloroquine and chloroquine spread online, with some public figures promoting the drugs as cures.[12] In such cases, those spreading false information knew, or should have known, that they were not competent to make those statements and that the statements posed serious risks to public health.
While this tort could be instrumental in holding individuals accountable for spreading harmful medical misinformation, challenges arise in establishing intent and reliance, and the broad scope of social media’s reach can make it difficult to apply traditional legal remedies. Moreover, because representations of opinions are covered by the tort,[13] First Amendment arguments would likely be raised if liability were placed on people who publicly posted their inaccurate opinions.
Negligent Misrepresentation and Misinformation
While fraudulent misrepresentation applies to disinformation, negligent misrepresentation is better suited to misinformation. A case for negligent misrepresentation must demonstrate (1) the declarant’s pecuniary interest in the transaction, (2) false information supplied for the guidance of others, (3) justifiable reliance, and (4) breach of reasonable care.[14]
Applying negligent misrepresentation to online misinformation proves difficult. For one, the tort requires that the defendant have a pecuniary interest in the transaction. Much of the misinformation inadvertently spread on social media does not involve financial gain for the poster. Moreover, negligent misrepresentation is limited to cases where misinformation was directed at a specific individual or a defined group, making it hard to apply to content posted on public platforms meant to reach as many people as possible.[15]
Even if these obstacles are overcome, the problem of contributory negligence remains. Courts may find that individuals who act on information from social media without verifying its accuracy bear some responsibility for the harm they suffer.
Negligent Misrepresentation Involving Risk of Physical Harm (NMIRPH)
In cases where there is risk of physical harm, but no financial loss, NMIRPH applies.[16] This tort is particularly relevant in the context of social media, where misinformation about health treatments can spread rapidly—often without monetary motives.
A notable example involves the spread of false claims about natural remedies in African and Caribbean cultures. In these communities, it is common to see misinformation about the health benefits of certain fruits—such as soursop—which is widely believed to have cancer-curing properties. Social media posts frequently promote such claims, leading individuals to rely on these remedies instead of seeking conventional medical treatment, sometimes with harmful results.
In these cases, the tort’s elements are met. False information is shared, individuals reasonably rely on it—within their cultural context—and physical harm follows. However, applying this tort to social media cases is challenging. Courts must assess whether reliance on such information is reasonable and whether the sharer breached a duty of care. Causation is also difficult to prove given the multiple sources of misinformation online. Moreover, the argument for subjective reliance is strongest within the context of smaller communities, leaving victims who relied on posts from strangers—the vast majority of social media content—unprotected.
The Role of Social Media Platforms
One potential solution is to shift the focus of liability from individuals to the platforms themselves. Social media companies have largely been shielded from liability for user-generated content by Section 230 of the U.S. Communications Decency Act, which grants them immunity from being held responsible for third-party content. It can be argued that this immunity, which was granted to aid their development,[17] is no longer necessary given the vast power and resources these companies now hold. Moreover, blanket immunity may remove the incentive for these companies to innovate and find a solution that only they can provide. There is also an ability-to-pay quandary: individual users often cannot compensate victims for the widespread harm that social media platforms enable them to cause.
While this approach may offer a more practical means of addressing misinformation at scale, it raises concerns about free speech and the feasibility of monitoring all content posted on large platforms like Facebook, Instagram, or Twitter. Additionally, imposing liability on social media companies could incentivize them to over-censor, potentially stifling legitimate expression.[18]
Conclusion
The legal system must evolve to address the unique challenges posed by online platforms. While existing tort remedies like fraudulent misrepresentation and negligent misrepresentation offer potential avenues for redress, their application to social media is limited by questions of reliance, scope, and practicality. To better protect individuals from the harms caused by misinformation, lawmakers may need to consider updating existing laws or creating new legal frameworks tailored to the realities of the digital world. At the same time, social media companies must be encouraged to take a more active role in curbing the spread of false information, while balancing the need to protect free speech.
Solving the problem of misinformation requires a comprehensive approach, combining legal accountability, platform responsibility, and public education to ensure a more informed and resilient society.
Notes
[1] Hannah Ritchie et al., Internet, Our World in Data (2023), ourworldindata.org/internet.
[2] See generally Barry Leiner et al., The Past and Future History of the Internet, 40 Commc’ns ACM 102 (1997) (discussing the origins of the internet).
[3] Lexie Pelchen, Internet Usage Statistics In 2024, Forbes Home (Mar. 1, 2024), https://www.forbes.com/home-improvement/internet/internet-statistics/#:~:text=There%20are%205.35%20billion%20internet%20users%20worldwide.&text=Out%20of%20the%20nearly%208,the%20internet%2C%20according%20to%20Statista.
[4] Audrey Normandin, Redefining “Misinformation,” “Disinformation,” and “Fake News”: Using Social Science Research to Form an Interdisciplinary Model of Online Limited Forums on Social Media Platforms, 44 Campbell L. Rev. 289, 293 (2022).
[5] Melissa De Witte, For Renaissance Italians, Combating Black Plague Was as Much About Politics as It Was Science, According to Stanford Scholar, Stan. Rep. (Mar. 17, 2020), https://news.stanford.edu/stories/2020/05/combating-black-plague-just-much-politics-science (discussing that poor people and foreigners were believed to be the cause—at least partially—of the plague).
[6] 40 Years of HIV Discovery: The First Cases of a Mysterious Disease in the Early 1980s, Institut Pasteur (May 5, 2023), https://www.pasteur.fr/en/research-journal/news/40-years-hiv-discovery-first-cases-mysterious-disease-early-1980s (“This syndrome is then called the ‘4H disease’ to designate Homosexuals, Heroin addicts, Hemophiliacs and Haitians, before we understand that it does not only concern ‘these populations.’”).
[7] See generally Kacper Niburski & Oskar Niburski, Impact of Trump’s Promotion of Unproven COVID-19 Treatments and Subsequent Internet Trends: Observational Study, J. Med. Internet Rsch., Nov. 22, 2020 (discussing the impact of former President Trump’s promotion of hydroxychloroquine); Matthew Cohen et al., When COVID-19 Prophylaxis Leads to Hydroxychloroquine Poisoning, 10 Sw. Respiratory & Critical Care Chrons. 52 (discussing increase in hydroxychloroquine overdoses following its brief emergency use authorization).
[8] Geelan Fahimy, Liable for Your Lies: Misrepresentation Law as a Mechanism for Regulating Behavior on Social Networking Sites, 39 Pepp. L. Rev. 367, 370–79 (2012).
[9] See Restatement (Second) of Torts § 525 (Am. L. Inst. 1977) (“One who fraudulently makes a misrepresentation of fact, opinion, intention or law for the purpose of inducing another to act or to refrain from action in reliance upon it, is subject to liability to the other in deceit for pecuniary loss caused to him by his justifiable reliance upon the misrepresentation.”).
[10] Justifiable reliance can be proven through either a subjective or objective standard. Restatement (Second) of Torts § 538 (Am. L. Inst. 1977).
[11] Restatement (Second) of Torts § 310 (Am. L. Inst. 1965) (“An actor who makes a misrepresentation is subject to liability to another for physical harm which results from an act done by the other or a third person in reliance upon the truth of the representation, if the actor (a) intends his statement to induce or should realize that it is likely to induce action by the other, or a third person, which involves an unreasonable risk of physical harm to the other, and (b) knows (i) that the statement is false, or (ii) that he has not the knowledge which he professes.”).
[12] See Niburski, supra note 7, for a discussion of former President Trump’s statements.
[13] Restatement (Second) of Torts § 310 cmt. b (Am. L. Inst. 1965).
[14] Restatement (Second) of Torts § 552(1) (Am. L. Inst. 1977) (“One who, in the course of his business, profession or employment, or in any other transaction in which he has a pecuniary interest, supplies false information for the guidance of others in their business transactions, is subject to liability for pecuniary loss caused to them by their justifiable reliance upon the information, if he fails to exercise reasonable care or competence in obtaining or communicating the information.”).
[15] Liability under negligent misrepresentation is limited to the person or group that the declarant intended to guide by supplying the information. Restatement (Second) of Torts § 552(2)(a)(1) (Am. L. Inst. 1977).
[16] Restatement (Second) of Torts § 311 (Am. L. Inst. 1965) (“One who negligently gives false information to another is subject to liability for physical harm caused by action taken by the other in reasonable reliance upon such information, where such harm results (a) to the other, or (b) to such third persons as the actor should expect to be put in peril by the action taken. Such negligence may consist of failure to exercise reasonable care (a) in ascertaining the accuracy of the information, or (b) in the manner in which it is communicated.”).
[17] See George Fishback, How the Wolf of Wall Street Shaped the Internet: A Review of Section 230 of the Communications Decency Act, 28 Tex. Intell. Prop. L.J. 275, 276 (2020) (“Section 230 promoted websites to grow without [the] fear . . . of liability for content beyond their control.”).
[18] See Section 230, Elec. Frontier Found. https://www.eff.org/issues/cda230#:~:text=Section%20230%20allows%20for%20web,what%20content%20they%20will%20distribute (last visited Oct. 23, 2024) (“In Canada and Australia, courts have allowed operators of online discussion groups to be punished for things their users have said. That has reduced the amount of user speech online, particularly on controversial subjects.”).