Constitutional Law

Are Social Media Empires Liable for “Terror” Organizations?

Ray Mestad, MJLST Staffer

The practicality, ease of use, and sheer addictiveness of social media has led to its massive expansion around the world. Approximately 65% of the world uses the internet, and of that group, only 5% does not use social media.[1] So around 60% of the world is on social media, roughly 4.76 billion people.[2] For most, social media is one of the simplest ways to stay connected and communicate with friends, family, and other people in their circle. But along with the growing use of social media, questions have been raised regarding the potential liability social media corporations may have for the content that is posted on their platforms. Recently, lawsuits have been filed against companies like Google, Twitter, and Facebook for allegedly allowing groups accused of terrorism to spread their message or plan on their platforms.[3] The question we are left with is this: to what extent are social media companies responsible for posts on their sites that lead to violence?

The family of Nohemi Gonzalez, an American student killed in Paris during a 2015 Islamic State attack, is suing Google for platforming the Islamic State by allowing it to post videos on YouTube and then recommending them to users through Google’s algorithm.[4] And the family of Nawras Alassaf, a Jordanian citizen killed in a 2017 Istanbul Islamic State attack, is suing Twitter, Google, and Facebook for not doing more to prevent the organization from using their platforms as communications and messaging tools.[5] The Supreme Court will hear oral arguments in both Gonzalez v. Google and Twitter v. Taamneh this month, February 2023.[6]

The legal issues in these cases are rooted in Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996.[7] 47 U.S.C. § 230 is intended to protect freedom of expression by shielding intermediaries that publish information posted by users.[8] Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[9] This protects web platforms from liability for the content that users post.

Further, Section 230(c)(2) states that “No provider or user of an interactive computer service shall be held liable on account of…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[10] This is known as the “Good Samaritan” provision. Like Section 230(c)(1), Section 230(c)(2) gives internet providers liability protection, allowing them to moderate content in certain circumstances while shielding them from the free speech claims that might otherwise be brought against them.[11]

The question is whether defendant social media platforms should be shielded from liability for platforming content that has allegedly led to or facilitated violent attacks. In Gonzalez, the Justice Department stated that although the company is protected against claims for hosting ISIS videos, a claim may be brought against Google for allowing YouTube to provide targeted recommendations of the videos.[12] And in Taamneh, the 9th Circuit agreed with the plaintiffs that the claim could go forward under the Anti-Terrorism Act because Twitter had generalized knowledge of the Islamic State’s use of its services.[13]

Section 230 has drawn an eclectic mix of critics and supporters. Although many conservatives and free speech advocates support the protections of Section 230, some conservatives oppose the statute due to the perceived restriction of conservative viewpoints on social media platforms. For example, Senator Josh Hawley, a prominent Republican from Missouri, has come out against the statute, stating that tech platforms ought to be treated as distributors and lose Section 230 protections.[14] In fact, Hawley introduced legislation opposing Section 230, the Federal Big Tech Tort Act, to impose liability on tech platforms.[15] And on the left, Section 230 is supported by those who believe it protects the voices of the marginalized, who would otherwise be at the whim of tech companies, but opposed by those who fear that it enables political violence and hate speech.[16]

The Supreme Court has now granted certiorari in both Gonzalez and Taamneh. In Gonzalez, the plaintiffs are arguing that Section 230 should not protect Google’s actions because the events occurred outside the US, because Section 230 is preempted by the Justice Against Sponsors of Terrorism Act (JASTA), and because the algorithmic recommendations transform Google/YouTube from an interactive computer service into an information content provider.[17] Google argues that it should be protected by Section 230, particularly 230(c)(1).[18] The 9th Circuit held that although Section 230 did apply abroad, JASTA should not supersede it; instead, the two statutes should run parallel to each other. The 9th Circuit further held that the claims based on revenue sharing (rather than ad targeting) should be dismissed. It did not think Google was contributing to terrorism, because Google was motivated by financial enrichment rather than ideology, and it affirmed the dismissal, in part because there was not clear enough information about how much support Google had provided to ISIS.[19] Future decisions in this case will implicate questions such as whether algorithmic recommendations fall within Section 230’s protections.[20]

In Taamneh, the defendants argued that there was no proximate cause and that Section 230 was inapplicable.[21] Unlike in Gonzalez, the 9th Circuit found that the Taamneh plaintiffs had adequately stated a claim for aiding and abetting because the social media companies had more explicit knowledge of how their platforms were being used by these groups.[22] The Taamneh dismissal was therefore reversed.[23] The Supreme Court’s review of this case will have implications for what it means to support or have a relationship with a group via a social media platform. In both of these cases, fears regarding the scope of Section 230 were expressed, which could reflect poorly on its applicability going forward.[24]

Gonzalez and Taamneh will hit the Supreme Court soon. If Section 230 is preserved as is, it would enable greater free speech but risks exposing more people to harms like hate speech or violence. However, if Section 230 is restricted, it could curtail the accessibility and openness that have made the internet what it is today. Whichever decision is made, there will be massive implications for what the internet looks like in the future.

Notes

[1] https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/

[2] Id.

[3] https://apnews.com/article/islamic-state-group-us-supreme-court-technology-france-social-media-6bee9b5adf33dd15ee64b0d4d4e5ec78

[4] Id.

[5] Id.

[6] https://www.washingtonpost.com/politics/2023/01/03/2023-is-poised-be-landmark-year-tech-legal-bouts/

[7] https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

[8] https://www.eff.org/issues/cda230

[9] https://casetext.com/statute/united-states-code/title-47-telecommunications/chapter-5-wire-or-radio-communication/subchapter-ii-common-carriers/part-i-common-carrier-regulation/section-230-protection-for-private-blocking-and-screening-of-offensive-material

[10] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[11] https://bipartisanpolicy.org/blog/gonzalez-v-google/

[12] https://www.washingtonpost.com/politics/2022/12/09/tech-critics-urge-supreme-court-narrow-section-230/

[13] https://knightcolumbia.org/blog/twitter-v-taamneh-in-the-supreme-court-whats-at-stake

[14] Supra Washington Post.

[15] https://www.hawley.senate.gov/hawley-files-gonzalez-v-google-amicus-brief-supreme-court-challenging-big-techs-section-230

[16] Supra Washington Post.

[17] https://www.lawfareblog.com/supreme-court-grants-certiorari-gonzalez-v-google-and-twitter-v-taamneh-overview

[18] Id.

[19] Id.

[20]

[21] Id.

[22] Id.

[23] Id.

[24] Id.


Freedom to Moderate? Circuits Split over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws which restrict social media platforms’ ability to moderate posts expressing “viewpoints,” and require platforms to provide explanations for why they chose to censor certain content. These laws seemingly stem from the perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional for violating social media platforms’ First Amendment rights in May, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning the previous injunction issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition for review of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies which ban certain content, or at least require a sensitivity warning to be posted before viewing certain content. Twitter restricts hate speech and imagery, gratuitous violence, sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed in the controversial 2010 Supreme Court decision Citizens United, which established the right of corporations to make independent political expenditures as an exercise of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through their establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected arguments challenging social media platforms’ ability to set and uphold their own content guidelines, upholding social media platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media’s place as private businesses which hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence. 

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs’ argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also invoked common carrier doctrine—which empowers states to enforce nondiscriminatory practices for services that the public uses en masse (a classification that the 11th Circuit explicitly rejected)—embracing it in the context of social media platforms.[5] Therefore, the court held with “no doubts” that section 7 of the Texas law—which prevents platforms from censoring “viewpoints” (with exceptions for blatantly illegal speech provoking violence, etc.) of users—was constitutional.[6] Section 2 of the contested statute, requiring social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest and not being unduly burdensome to the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

Florida, on September 21st, 2022, petitioned for a writ of certiorari asking the Supreme Court to review the May 2022 decision. The petition included reference to the 5th Circuit opinion, calling for the Supreme Court to weigh in on the circuit split. Considering recent Supreme Court decisions cutting down Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these Circuit Court decisions were Republican proposed bills, a Supreme Court decision would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would impact both Republican and Democratic legislatures’ ability to regulate social media platforms in any way.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] Netchoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sep. 16, 2022).

[5] Id. at *59.

[6] Id. at *52.

[7] Id. at *102.


Would Autonomous Vehicles (AVs) Interfere with Our Fourth Amendment Rights?

Thao Nguyen, MJLST Staffer

Traffic accidents are a major issue in the U.S. and around the world. Although car safety features are continuously enhanced and improved, traffic crashes continue to be the leading cause of non-natural death for U.S. citizens. Most of the time, the primary causes are human errors rather than instrumental failures. Therefore, autonomous vehicles (“AVs”), which promise to be automobiles that operate themselves without a human driver, are an exciting up-and-coming technology, studied and developed in both academia and industry.[1]

To drive themselves, AVs must be able to perform two key tasks: sensing the surrounding environment and “driving”—essentially replacing the eyes and hands of the human driver.[2] The standard AV design today includes a sensing system that collects information from the outside world, assisting the “driving” function. The sensing system is composed of a variety of sensors,[3] most commonly a Light Detection and Ranging (LiDAR) device and cameras.[4] A LiDAR is a device that emits laser pulses and uses time-of-flight principles, analogous to sound navigation and ranging (“SONAR”) but with light rather than sound, to get a depth estimation of the surroundings: the emitted laser pulses travel forward, hit an object, then bounce back to the receivers; the time taken for the pulses to travel back is measured, and the distance is computed. With this information about distance and depth, a 3D point cloud map of the surrounding environment is generated. In addition to precise 3D coordinates, most LiDAR systems also record “intensity.” “Intensity” is the measure of the return strength of the laser pulse, which is based, in part, on the reflectivity of the surface struck by the laser pulse. LiDAR “intensity” data thus reveal helpful information about the surface characteristics of the surroundings. The two sensors, the camera and the LiDAR, complement each other: the former conveys rich appearance data with more details on the objects, whereas the latter is able to capture 3D measurements.[5] Fusing the information acquired by each allows the sensing system to gain a reliable environmental perception.[6]
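To make the time-of-flight arithmetic concrete, here is a minimal sketch in Python. It is illustrative only: the function names, the single-return model, and the beam-angle geometry are assumptions made for this example, not any vendor’s API. It shows how a measured round-trip time becomes a range (the pulse travels out and back, so the one-way distance is half the round trip), and how one return becomes an (x, y, z, intensity) entry in a point cloud.

```python
# Illustrative time-of-flight ranging sketch; not production LiDAR code.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """One-way distance to the reflecting surface: the pulse travels
    out and back, so the range is half the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def return_to_point(round_trip_seconds: float, azimuth_deg: float,
                    elevation_deg: float, intensity: float):
    """Convert one laser return (timing plus the beam's pointing angles)
    into an (x, y, z, intensity) tuple, the basic element of a 3D
    point cloud."""
    r = range_from_round_trip(round_trip_seconds)
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z, intensity)

# A pulse returning after ~200 nanoseconds hit a surface roughly 30 m away.
print(range_from_round_trip(200e-9))             # ~29.98 meters
print(return_to_point(200e-9, 45.0, -2.0, 0.7))  # one point-cloud entry
```

Sweeping this computation across millions of pulses per second is what produces the 3D point cloud map described above.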

LiDAR sensing technology is usually combined with artificial intelligence, as its goal is to imitate and eventually replace human perception in driving. Today, the majority of artificial intelligence systems use “machine learning,” a method that gives computers the ability to learn without being explicitly programmed. With machine learning, computers train themselves to do new tasks much as humans do: by exploring data, identifying patterns, and improving upon past experiences. Applied machine learning is data-driven: the greater the breadth and depth of the data supplied to the computer, the greater the variety and complexity of the tasks that the computer can program itself to do. Since “driving” is a combination of multiple high-complexity tasks, such as object detection, path planning, localization, and lane detection, an AV that drives itself requires voluminous data in order to operate properly and effectively. A toy example of this data-driven approach appears below.
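The sketch below illustrates the data-driven idea in miniature. The features, labels, and nearest-centroid method are invented for this example and are far simpler than anything in a production AV perception stack; the point is that the program is never given classification rules, only labeled examples.

```python
# Toy "learning from data" sketch: no rules are programmed in; the model
# only averages the labeled examples it is shown.

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs. Learns one
    centroid (average feature vector) per label -- the 'pattern'
    extracted from the data."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def classify(centroids, features):
    """Predict the label whose learned centroid is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Invented features: (object height in meters, observed speed in m/s).
training_data = [
    ((1.7, 1.2), "pedestrian"), ((1.6, 1.5), "pedestrian"),
    ((1.5, 8.0), "vehicle"),    ((1.4, 12.0), "vehicle"),
]
model = train_centroids(training_data)
print(classify(model, (1.65, 1.0)))  # -> "pedestrian"
```

The toy captures the scaling behavior the paragraph describes: the classifier improves only by being shown more, and more varied, labeled data.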

“Big data” is already considered a valuable commodity in the modern world. In the case of AVs, however, the data collected concerns public streets and road users, and its large-scale collection is further empowered by various technologies that detect and identify, track and trace, and mine and profile data. When profiles of a person’s traffic movements and behaviors exist in a database somewhere, there is a great temptation for the information to be used for purposes other than those for which it was originally collected, as has been the case with much other “big data” today. Law enforcement officers who get their hands on this AV data can track and monitor people’s whereabouts, pinpointing individuals whose trajectories touch on suspicious locations at a high frequency. Trajectories can be matched to identified individuals through car models and license plates. The police may then identify crime suspects by viewing the trajectories of everyone in the same town, rather than taking the trouble to identify and physically track each suspect. Can this use of data by law enforcement be sufficiently justified?

As we know, the use of “helpful” police tools may be restricted by the Fourth Amendment, and for good reasons. Although surveillance helps police officers detect criminals,[7] extraneous surveillance has social costs: restricted privacy and a sense of being “watched” by the government inhibit citizens’ productivity, creativity, and spontaneity, and cause other psychological effects.[8] Case law gives us guidance on interpreting and applying the Fourth Amendment standards of “trespass” and “unreasonable searches and seizures” by the police. Three principal cases, Olmstead v. United States, 277 U.S. 438 (1928), Goldman v. United States, 316 U.S. 129 (1942), and the modern case United States v. Jones, 565 U.S. 400 (2012), limit Fourth Amendment protection to protection against physical intrusion into private homes and properties. Such protection would not be helpful in the case of LiDAR, which operates on public streets as a remote sensing technology. Nonetheless, despite the Jones case, the broader “reasonable expectation of privacy” test established by Katz v. United States, 389 U.S. 347 (1967) is more widely accepted. Instead of tracing the physical boundaries of “persons, houses, papers, and effects,” the Katz test focuses on whether there is an expectation of privacy that is socially recognized as “reasonable.” The Fourth Amendment “protects people, not places,” wrote the Katz Court.[9]

United States v. Knotts, 460 U.S. 276 (1983) was a public street surveillance case that invoked the Katz test. In Knotts, the police installed a beeper onto the defendant’s vehicle to track it. The Court found that such tracking on public streets was not prohibited by the Fourth Amendment: “A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[10] The Knotts Court thus applied the Katz test and considered whether there was a “reasonable expectation of privacy,” meaning an expectation recognized as “reasonable” by society.[11] The Court’s answer was in the negative: unlike a person in his dwelling place, a person traveling on public streets “voluntarily conveyed to anyone who wanted to look at the fact that he was traveling over particular roads in a particular direction.”[12]

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010), another public street surveillance case, this time from the twenty-first century, reconsidered the Knotts holding regarding the “reasonable expectation of privacy” on public streets. The Maynard defendant argued that the district court erred in admitting evidence acquired through the police’s warrantless use of a Global Positioning System (GPS) device to track the defendant’s movements continuously for a month.[13] The Government invoked United States v. Knotts and its holding that “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[14] The D.C. Circuit, however, distinguished Knotts, pointing out that the Government in Knotts used a beeper that tracked a single journey, whereas the Government’s GPS monitoring in Maynard was sustained 24 hours a day, continuously, for one month.[15] The use of the GPS device over the course of one month did more than simply track the defendant’s “movements from one place to another”; it resulted in the discovery of the “totality and pattern” of the defendant’s movement.[16] The court was willing to make a distinction between “one path” and “the totality of one’s movement”: since someone’s totality of movement is much less exposed to the view of the public, and much more revealing of that person’s personal life, it is constitutional for the police to track an individual on “one path,” but not to capture that same individual’s “totality of movement.”

Thus, with time the courts appear to be recognizing that when it comes to modern surveillance technology, the sheer quantity and the revealing nature of data collected on the movements of public street users ought to raise concerns. The straightforward application of these holdings to AV sensing data would be that data concerning a person’s “one path” can be obtained and used, but not the totality of a person’s movement. It is unclear where to draw the line between “one path” and “the totality of movement.” The surveillance in Knotts was intermittent over the course of three days,[17] whereas the defendant in Maynard was tracked for over one month. The limit would perhaps fall somewhere in between.

Furthermore, this straightforward application is complicated by the fact that the sensors utilized by AVs do not pick up mere locational information. As discussed above, AV sensing systems, being composed of multiple sensors, capture both camera images and information about the speed, texture, and depth of objects. In other words, AVs do not merely track a vehicle’s location like a beeper or GPS; they “see” the vehicle through their cameras, LiDAR, and radar devices, gaining a wealth of information. This means that even if only data about “one path” of a person’s movement is extracted, that “one path” data as processed by AV sensing systems is far more in-depth than what a beeper or cell-site location information (CSLI) can communicate. Adding to this, current developers are proposing to create AV networks that share data among many vehicles, so that data on “one path” can potentially be combined with other data of the same vehicle’s movement, or multiple views of the same “one path” from different perspectives can be combined. The extensiveness of these data goes far beyond the precedents in Knotts and Maynard. Thus, it is foreseeable that warrantless subpoenaing of AV sensing data falls firmly within the Supreme Court’s definition of a “trespass.”

[1] Tri Nguyen, Fusing LIDAR sensor and RGB camera for object detection in autonomous vehicle with fuzzy logic approach, 2021 International Conference on Information Networking (ICOIN) 788, 788 (2021).

[2] Id. (“An autonomous vehicle or self-driving car is a vehicle having the ability to sense the surrounding environment and capable of operation on its own without any human interference. The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounting on it.”)

[3] Id. (“The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounted on it.”).

[4] Heng Wang and Xiaodong Zhang, Real-time vehicle detection and tracking using 3D LiDAR, Asian Journal of Control 1, 1 (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”)

[5] Id. (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”) (“Conversely, LiDARs are able to produce 3D measurements and are not affected by the illumination of the environment [9,10].”).

[6] Nguyen, supra note 1, at 788 (“Due to the complementary of two sensors, it is necessary to gain a more reliable environment perception by fusing the information acquired from these two sensors.”).

[7] Raymond P. Siljander & Darin D. Fredrickson, Fundamentals of Physical Surveillance: A Guide for Uniformed and Plainclothes Personnel, Second Edition (2002) (abstract).

[8] Tamara Dinev et al., Internet Privacy Concerns and Beliefs About Government Surveillance – An Empirical Investigation, 17 Journal of Strategic Information Systems 214, 221 (2008) (“Surveillance has social costs (Rosen, 2000) and inhibiting effects on spontaneity, creativity, productivity, and other psychological effects.”).

[9] Katz v. United States, 389 U.S. 347, 351 (1967).

[10] United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”).

[11] Id. at 282.

[12] Id.

[13] United States v. Maynard, 615 F.3d 544, 549 (2010).

[14]  Id. at 557.

[15] Id. at 556.

[16] Id. at 558 (“[O]ne’s movements 24 hours a day for 28 days as he moved among scores of places, thereby discovering the totality and pattern of his movements.”).

[17] Knotts, 460 U.S. at 276.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice points to teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.

 

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.

 

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see if legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


The Heavy Cost of Costless Lies

Shuang Liu, MJLST Staffer

Does repetition of a lie make it truer? “What a ridiculous question,” you might think. But according to psychological experiments, the answer is yes.

In a series of psychological experiments, scientists presented true and false statements to participants, repeating only some of the statements, and asked the participants to evaluate whether the statements were true or false. The results showed that people typically rated repeated statements as truer than those that appeared just once. The effect of repetition was summarized by Christian Unkelbach et al. in 2019:

The effect appears with information ranging from trivia (“The thigh bone is the longest bone in the human body”) to consumer opinions (“Billabong shampoo leaves hair shiny with no residue”) to false news items (“Donald Trump sends his own plane to transport 200 stranded marines”). It is present with repetition intervals from minutes to weeks to months.

In addition to the frequency of statements, temporal order also affects people’s trust in statements. For example, if people read the statement “Falstaff was the last opera of Verdi” first and the statement “Othello was the last opera of Verdi” later, they are more likely to believe the latter statement is false. To make things worse, the phenomenon of confirmation bias reveals that when a person has drawn a conclusion on a given matter, either consciously or subconsciously, the person is inclined to disregard information that contradicts the conclusion.

The implication of these experiments can be huge. Consider a scenario where a famous person says “COVID is not real” with literally no explanation. People will then hear the claim countless times from various sources, including the press and potentially family, friends, and colleagues. As a result, some of these people will tend to believe this lie more than later statements that contradict it but are true. When the lie is closely related to the public interest, as in this example, its negative effects are serious.

Nevertheless, the law does not defend people against such serious lies at all. The First Amendment protects free speech including false statements, as long as no defamation issue is involved. Generally, there are two reasons for not outlawing lies. Firstly, the First Amendment “presupposes that right conclusions are more likely to be gathered out of a multitude of tongues, than through any kind of authoritative selection.” Secondly, the “First Amendment freedoms need breathing space to survive.” Penalties for lies will also deter statements that are believed to be true when made, but could be disproven later. However, as will be discussed below, these two reasons are not adequate for allowing lies to be legally costless.

To begin with, the presupposition that truth can be gathered from various contradictory sources does not reflect reality. Most information people obtain today is secondhand, and people can hardly confirm the truthfulness of most information directly. Therefore, people have no better option than choosing to believe some of the accessible sources. This choice, as illustrated above, is far from rational. You may think that simple repetition and temporal primacy cannot mislead you, but statistical results show a considerable portion of people can and will be fooled in such ways. Moreover, confirmation bias suggests that once a person believes a lie, the person will strengthen the lie in his or her mind by selectively absorbing future information. Accordingly, the presumption that truth can be found among various sources may hold in the scenario of a discovery proceeding in litigation, for example, but never for most people in their daily lives.

Moreover, the concern that punishing lies may also deter true statements can be dispelled by a systematic solution. Firstly, whether a speaker is liable for his or her false statement should not turn on whether the statement is objectively false. Rather, the test should be whether the speaker, as a reasonable person, had sufficient factual bases for the statement before making it. After all, even respectable scientists have made false statements about the nature of the universe, but hardly anyone can say they were lying. Additionally, in order not to disrupt people’s normal lives, the requirement of not lying should be imposed only on public officials when they are speaking in their official capacities. This role-based requirement is consistent with the well-established policy that government officials “are to be treated as men of fortitude, able to thrive in a hardy climate.” It is also aligned with the fact that statements of public officials are more likely to be viewed, heard, reported, and spread, and hence deserve stricter regulation. Lastly, to be held liable for lying, the speaker’s false statement should bear some relation to the public interest. Trivial lies that do not hurt the public interest are not worth the legal cost of preventing them.

As can be expected, outlawing false statements, even only those made by public officials, entails a radical change in constitutional law. But the effort will pay off: people will be less harmed by lies, and the government will earn more trust from the public as a result.


Holy Crap: The First Amendment, Septic Systems, and the Strict Scrutiny Standard in Land Use Law

Sarah Bauer, MJLST Staffer

In the Summer of 2021, the U.S. Supreme Court released a bevy of decisions favoring religious freedom. Among these was Mast v. City of Fillmore, a case about, well, septic systems and the First Amendment. But Mast is about so much more than that: it showcases the Court’s commitment to free exercise in a variety of contexts and Justice Gorsuch as a champion of Western sensibilities. It also demonstrates that, moving forward, the government is going to need to work harder to show that its compelling interest in land use regulation trumps an individual’s free exercise rights.

The Facts of Mast

To understand how septic systems and the First Amendment can even exist in the same sentence, it’s important to know the facts of Mast. In the state of Minnesota, the Pollution Control Agency (MPCA) is responsible for maintaining water quality. It promulgates regulations accordingly, then local governments adopt those regulations into ordinances. Among those are prescriptive regulations about wastewater treatment. At issue is one such ordinance adopted by Fillmore County, Minnesota, that requires most homes to have a modern septic system for the disposal of gray water.

The plaintiffs in the case are Swartzentruber Amish. They sought a religious exemption from the ordinance, saying that their religion forbade the use of that technology. The MPCA instead demanded the installation of the modern system under threat of criminal penalty, civil fines, and eviction from their farms. When the MPCA rejected a low-tech alternative offered by the plaintiffs, a mulch basin system not uncommon in other states, the Amish sought relief on grounds that the ordinance violated the Religious Land Use and Institutionalized Persons Act (RLUIPA). After losing the battle in state courts, the Mast plaintiffs took it to the Supreme Court, where the case was decided in their favor last summer.

The First Amendment and Strict Scrutiny

Mast’s issue is a land use remix of Fulton v. City of Philadelphia, another free exercise case from the same docket. Fulton, the more controversial and well-known of the two, involved the City of Philadelphia’s decision to discontinue contracts with Catholic Social Services (CSS) for placement of children in foster homes. The City said that CSS’s refusal to place children with same-sex couples violated a non-discrimination provision in both the contract and the non-discrimination requirements of the citywide Fair Practices Ordinance. The Supreme Court didn’t buy it, holding instead that the City’s policy impermissibly burdened CSS’s free exercise of religion.

The Fulton decision was important for refining the legal analysis and standards when a law burdens free exercise of religion. First, if a law incidentally burdens religion but is both 1) neutral and 2) generally applicable, then courts will not ordinarily apply a strict scrutiny standard on review. If one of those elements is not met, courts will apply strict scrutiny, and the government will need to show that the law 1) advances a compelling interest and 2) is narrowly tailored to achieve those interests. The trick to strict scrutiny is this: the government’s compelling interest in denying an exception needs to apply specifically to those requesting the religious exception. A law examined under strict scrutiny will not survive if the State only asserts that it has a compelling interest in enforcing its laws generally.

Strict Scrutiny, RLUIPA, and Mast

The Mast Plaintiffs sought relief under RLUIPA. RLUIPA isn’t just a contender for Congress’s “Most Difficult to Pronounce Acronym” Award. It’s a choice legal weapon for those claiming that a land use regulation restricts free exercise of religion. The strict scrutiny standard is built into RLUIPA, meaning that courts skip straight to the question of whether 1) the government had a compelling government interest, and 2) whether the rule was the least restrictive means of furthering that compelling government interest. And now, post-Fulton, that first inquiry involves looking at whether the government had a compelling interest in denying an exception specifically as it applies to plaintiffs.

So that is how we end up with septic systems and the First Amendment in the same case. The Amish sued under RLUIPA, the Court applied strict scrutiny, and the government failed to show that it had a compelling interest in denying the Amish an exception to the rule that they needed to install a septic system for their gray water. Particularly convincing, at least from Coloradan Justice Gorsuch’s perspective, were the facts that 1) Minnesota law allowed exemptions for campers and outdoorsmen, 2) other jurisdictions allowed for gray water disposal in the same alternative manner suggested by the plaintiffs, and 3) the government couldn’t show that the alternative method wouldn’t effectively filter the water.

So what does this ultimately mean for land use regulation? It means that in the niche area of RLUIPA litigation, religious groups have a stronger strict scrutiny standard to lean on, forcing governments to present more evidence justifying a refusal to extend religious exemptions. And government can’t bypass the standard by making regulations more “generally applicable,” for example by removing exemptions for campers. Strict scrutiny still applies under RLUIPA, and governments are stuck with it, resulting in a possible windfall of exceptions for the religious.


Reconsidering Roe: Has the Line of Fetal Viability Moved?

Claire Colby, MJLST Staffer

After the Supreme Court heard arguments in Dobbs v. Jackson Women’s Health on December 1, legal commentators began to speculate that the case could be a vehicle for overturning Roe v. Wade. The Mississippi statute at issue in Dobbs bans nearly all abortions after 15 weeks. In questioning Mississippi Solicitor General Scott Stewart, Justice Sonia Sotomayor asked about the “advancements in medicine” that have changed the lines of viability since the Court last considered a major challenge to Roe with Planned Parenthood v. Casey in 1992. “What has changed in science to show that the viability line is not a real line…?” she asked.

Roe v. Wade was a 1973 landmark decision in which the Supreme Court adopted a trimester framework for abortion. During the first trimester, the Court held that “the abortion decision and its effectuation must be left to the medical judgment of the pregnant woman’s attending physician.” The Court held that states could adopt regulations “reasonably related to maternal health” for abortions after the first trimester, and that in the third trimester, upon viability, states may “regulate, and even proscribe, abortion except where necessary, in appropriate medical judgment for the preservation of the life or health of the mother.” In 1992, the Court rejected this “rigid trimester” framework in Planned Parenthood v. Casey. In Casey, the Court turned to a viability framework and found that pre-viability, states may not prohibit abortion or impose “a substantial obstacle to the woman’s effective right to elect the procedure.” The Court adopted an “undue burden” standard to determine whether state regulations of pre-viability abortion are unconstitutional.

In Casey, the Court defined viability as “the time at which there is a realistic possibility of maintaining and nourishing a life outside the womb.” So when do medical professionals consider a fetus viable? The threshold has moved earlier in the gestation period since the 1970s, but experts disagree on where to draw the line. According to a journal article published in 2018 in Women’s Health Issues, in 1971, a fetal age of approximately 28 weeks was “widely used as the criterion of viability.” The article said that until recently, 24 weeks of gestation was the “widely accepted cutoff for viability in the highest acuity neonatal intensive care units.” According to the article, babies born as early as 22 weeks of gestation had an “overall survival rate of 23%” with “the most aggressive medical management available.” The article rebuked the idea of tying abortion restrictions to viability at all: “Tying abortion provisions to the word viability today is as misguided as it was to tie it to a specific trimester in 1973,” the article stated. “There was no true definition of viability then, and as long as medicine strives to treat every patient uniquely, there will never be one.”

A 2017 practice alert published in the official journal of the American College of Obstetricians and Gynecologists defined “periviable” births, those occurring “near the limit of viability,” as births occurring between 20 and 26 weeks gestation.

According to a 2020 New York Times article, determinations of the gestational age at which a baby is likely to survive outside of the womb are “in a complex moment of transition.” Though technology has improved, “even top academic institutions disagree about the right approach to treating 22- and 23-week babies.” The article reported that the University of California, San Francisco, “a top-tier, high resource hospital,” is “transparent about its policy of offering only comfort care for babies that are born up to the first day of the 23rd week, down to the hour.”

In June 2020, a baby born at the Children’s Hospital and Clinics of Minnesota set the world record for the world’s most premature baby to survive, the Washington Post reported. He was born at 21 weeks and two days gestation.

Several medical developments help to explain this earlier period of viability.

According to a 2020 Nature article, “the biggest difference to survival came in the early 1990s with surfactant treatment.” Surfactant is a “slippery substance” that prevents airways from collapsing upon exhalation. According to Kaiser, premature babies with underdeveloped lungs often lack the substance. “When premature lungs are treated with surfactant after birth, the infant’s blood oxygen levels usually improve within minutes.”

According to a 2018 study published in the Journal of the American Medical Association, administering prenatal steroids to mothers between 22 and 25 weeks gestation prior to delivery led to a “significantly higher” survival rate, but “survival without major morbidities remains low at 22 and 23 weeks.”

The Dobbs ruling is not expected until this summer, when the Court tends to release its major decisions. Even if the Court maintains the viability standard set forth in Casey, recent medical advances may warrant more consideration about where to draw this line.


The StingRay You’ve Never Heard of: How One of the Most Effective Tools in Law Enforcement Operates Behind a Veil of Secrecy

Dan O’Dea, MJLST Staffer

One of the most effective investigatory tools in law enforcement has operated behind a veil of secrecy for over 15 years. “StingRay” cell phone tower simulators are used by law enforcement agencies to locate and apprehend violent offenders, track persons of interest, monitor crowds when intelligence suggests threats, and intercept signals that could activate devices. When passively operating, StingRays mimic cell phone towers, forcing all nearby cell phones to connect to them while extracting data in the form of call metadata, text messages, internet traffic, and location information, even when a connected phone is powered off. They can also inject spying software into phones and prevent phones from accessing cellular data. StingRays were initially used overseas by federal law enforcement agencies to combat terrorism, before spreading into the hands of the Department of Justice and Department of Homeland Security, and they are now actively used by local law enforcement agencies in 27 states to solve everything from missing persons cases to thefts of chicken wings.

The use of StingRay devices is highly controversial due to their intrusive nature. Not only does the use of StingRays raise privacy concerns, but tricking phones into connecting to StingRays mimicking cell phone towers prevents those phones from accessing legitimate cell service towers, which can obstruct access to 911 and other emergency hotlines. Perplexingly, the use of StingRay technology by law enforcement is almost entirely unregulated. Local law enforcement agencies frequently cite secrecy agreements with the FBI and the need to protect an investigatory tool as a means of denying the public information about how StingRays operate, and criminal defense attorneys have almost no means of challenging their use without this information. While the Department of Justice now requires federal agents to obtain a warrant to use StingRay technology in criminal cases, an exception is made for matters relating to national security, and the technology may have been used to spy on racial-justice protestors during the Summer of 2020 under this exception. Local law enforcement agencies are almost completely unrestricted in their use of StingRays, and may even conceal their use in criminal prosecutions by tagging their findings as those of a “confidential source,” rather than admitting the use of a controversial investigatory tool. Doing so allows prosecutors to avoid battling Fourth Amendment arguments characterizing data obtained by StingRays as the product of an unlawful search and seizure.

With StingRays having existed in a “legal no-man’s land” since the technology’s inception, Senator Ron Wyden (D-OR) and Representative Ted Lieu (D-CA) sought to put an end to their secrecy by introducing the Cell-Site Simulator Warrant Act of 2021 in June of 2021. The bill would have mandated that law enforcement agencies obtain a warrant to investigate criminal activity before deploying StingRay technology, while also requiring law enforcement agencies to delete the data of phones other than those of investigative targets. Further, the legislation would have required agencies to demonstrate a need to use StingRay technology that outweighs any potential harm to the community impacted by the technology. Finally, the bill would have limited authorized use of StingRay technology to the minimum amount of time necessary to conduct an investigation. However, the Cell-Site Simulator Warrant Act of 2021 appears to have died in committee after failing to garner significant legislative support.

Ultimately, no device with the intrusive capabilities of StingRays should be allowed to operate free from the constraints of regulation. While StingRays are among the most effective tools utilized by law enforcement, they are also among the most intrusive into the privacy of the general public. It logically follows that agencies seeking to operate StingRays should be required to make a showing of a need to utilize such an intrusive investigatory tool. In certain situations, it may be easy to establish the need to deploy a StingRay, such as doing so to further the investigation of a missing persons case. In others, law enforcement agencies would correctly find their hands tied should they wish to utilize a StingRay to catch a chicken wing thief.


Inconceivable! How the Fourth Amendment Failed the Dread Pirate Roberts in United States v. Ulbricht

Emily Moss, MJLST Staffer

It is not an overstatement to claim that electronic devices, such as laptops and smartphones, have “altered the way we live.” As Chief Justice Roberts stated, “modern cell phones . . . are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.” Riley v. California, 573 U.S. 373, 385 (2014). These devices create new digital records of our everyday lives. United States v. Ulbricht, 858 F.3d 71 (2d Cir. 2017) is one of many cases that grapple with when the government should gain access to these records.

In February 2015, a jury found Ross William Ulbricht (aka “Dread Pirate Roberts” or “DPR”) guilty on seven counts related to his creation and operation of Silk Road. United States v. Ulbricht, 858 F.3d 71, 82 (2d Cir. 2017). Silk Road was an online criminal marketplace where, using the anonymous currency Bitcoin, “users principally bought and sold drugs, false identification documents, and computer hacking software.” Id. Government trial evidence showed that, hoping to protect Silk Road anonymity, DPR commissioned the murders of five people. Id. at 88. However, there is no evidence that the murders actually transpired. Id.

On appeal, the Second Circuit upheld both the conviction and Ulbricht’s two life sentences. Ulbricht, 858 F.3d at 82. Ulbricht argued, inter alia, that “the warrant[] authorizing the government to search his laptop . . . violated the Fourth Amendment’s particularity requirement.” Id. at 95. The warrant authorized “opening or ‘cursorily reading the first few’ pages of files to ‘determine their precise contents,’ searching for deliberately hidden files, using ‘key word searches through all electronic storage areas,’ and reviewing file ‘directories’ to determine what was relevant.” Id. at 101–02. Ulbricht claimed that the warrant violated the Fourth Amendment’s particularity requirement because it “failed to specify the search terms and protocols” that the government was required to employ while searching Ulbricht’s laptop. Id. at 102.

The court acknowledged that particularity is especially important when the warrant authorizes the search of electronic data, as the search of a computer can expose “a vast trove of personal information” including “sensitive records.” Id. at 99. It noted that “a general search of electronic data is an especially potent threat to privacy because hard drives and e-mail accounts may be ‘akin to a residence in terms of the scope and quantity of private information [they] may contain’ . . . Because of the nature of digital storage, it is not always feasible to ‘extract and segregate responsive data from non-responsive data,’. . . creating a ‘serious risk that every warrant for electronic information will become, in effect, a general warrant.’” Id. (internal citations omitted).

Nonetheless, the court rejected Ulbricht’s claim that the laptop warrant failed to meet the Fourth Amendment’s particularity requirement. It reasoned that the government could not have specified in advance how relevant files would be named, a difficulty it reasonably anticipated when requesting the laptop warrant. Id. at 102 (emphasizing examples where relevant files and folders had misleading names such as “aliaces” or “mbsobzvkhwx4hmjt”). Further, the court held that broad search protocols were appropriate given that the alleged crime involved sophisticated technology and the masking of identity. Id. Ultimately, the court emphasized that the “fundamental flaw” in Ulbricht’s argument was that it equated a broad warrant with a violation of the particularity requirement. Id. Using the analogy of searching an entire home where there is probable cause to believe that relevant evidence is somewhere inside, the court illustrated that a warrant can be both broad and still satisfy the particularity requirement. Id. (citing U.S. Postal Serv. v. C.E.C. Servs., 869 F.2d 184, 187 (2d Cir. 1989)). The court therefore upheld the constitutionality of the warrant, and the Supreme Court later denied Ulbricht’s petition for a writ of certiorari.

Orin Kerr’s equilibrium-adjustment theory of the Fourth Amendment argues that as new tools shift power toward either privacy or law enforcement, the Fourth Amendment must adjust to restore its original balance. The introduction of computers and the internet brought an immense change in the tools available to both criminals and law enforcement. Without minimizing the significance of Ulbricht’s crimes, United States v. Ulbricht illustrates this dramatic change. While computers and the internet created new avenues for crime, computer and internet searches, such as the ones employed by the government here, do far more to disrupt the Fourth Amendment’s balance.

Contrary to the court’s analogy in Ulbricht, searching a computer is entirely unlike searching a home. First, it is easy to remove items from your home, but the same is not true of computers: even deleted files often linger on a hard drive where the government can access them. Similarly, when law enforcement finds a file in someone’s home, it still does not know how that file was used, how often it has been viewed, or who has viewed it. But computers do store such information. These and many other differences demonstrate why particularity, in the context of computer searches, is even more important than the court in Ulbricht acknowledged. Given the immense amount of information available on an individual’s electronic devices, Ulbricht glosses over the threat that broad search warrants directed at computers pose to personal privacy. And with the rapidly changing nature of computer technology, the Fourth Amendment balance will likely continue to stray further from equilibrium at a speed with which the courts will struggle to keep up.

Thus, adjusting the Fourth Amendment power balance related to electronic data will continue to be an important and complicated issue. See, e.g., Proposal 2 (Mich. 2020) (amending the state’s constitution “to require a search warrant to access a person’s electronic data or electronic communications,” passing with unanimous Michigan Senate and House of Representatives approval, then with 88.8% of voters voting yes on the proposal); People v. Coke, 461 P.3d 508, 516 (Colo. 2020) (“Given modern cell phones’ immense storage capacities and ability to collect and store many distinct types of data in one place, this court has recognized that cell phones ‘hold for many Americans the privacies of life’ and are, therefore, entitled to special protections from searches.”) (internal citations omitted). The Supreme Court has ruled on a number of Fourth Amendment and electronic data cases. See, e.g., Carpenter v. United States, 138 S. Ct. 2206 (2018) (warrantless acquisition of cell-site records violates the Fourth Amendment); Riley v. California, 573 U.S. 373 (2014) (warrantless search and seizure of the digital contents of a cell phone during an arrest violates the Fourth Amendment). However, new issues seem to appear faster than they can be resolved. See, e.g., Nathan Freed Wessler, Jennifer Stisa Granick & Daniela del Rosario Wertheimer, Our Cars Are Now Roving Computers. Is the Fourth Amendment Ready?, ACLU (May 21, 2019, 3:00 PM), https://www.aclu.org/blog/privacy-technology/surveillance-technologies/our-cars-are-now-roving-computers-fourth-amendment. The Fourth Amendment therefore finds itself in eel-infested waters. Is rescue inconceivable?

Special thanks to Professor Rozenshtein for introducing me to Ulbricht and inspiring this blog post in his course Cybersecurity Law and Policy!


The “Circuit Split” That Wasn’t

Sam Sylvan, MJLST Staffer

Earlier this year, the Fourth Circuit punted on an opportunity to determine the constitutional “boundaries of the private search doctrine in the context of electronic searches.” United States v. Fall, 955 F.3d 363, 371 (4th Cir. 2020). The private search doctrine, crafted by the Supreme Court in the 1980s, falls under the Fourth Amendment’s umbrella. The doctrine makes it lawful for law enforcement to “search” something that was initially “searched” by a private third party, because the Fourth Amendment is “wholly inapplicable to a search or seizure, even an unreasonable one, effected by a private individual not acting as an agent of the Government or with the participation or knowledge of any government official.” United States v. Jacobsen, 466 U.S. 109, 113 (1984).

An illustration: Jane stumbles upon evidence on John’s laptop that implicates John in criminal activity (the “initial private search”), Jane shows the police what she found on the laptop (the “after-occurring” search), and the rest is history for John. But for law enforcement’s after-occurring search to avoid violating the Fourth Amendment, it must not exceed the scope of the initial private search. “The critical measures [to determine] whether a governmental search exceeds the scope of the private search that preceded it,” United States v. Lichtenberger, 786 F.3d 478, 485 (6th Cir. 2015), include whether “there was a virtual certainty that nothing else of significance was in the [property subjected to the search]” and whether the government’s search “would not tell [law enforcement] anything more than [it] already had been told” or shown by the private searcher. Jacobsen, 466 U.S. at 119.

Of course, the Supreme Court’s holdings from the 1980s that speak to the scope of the Fourth Amendment are often difficult to reconcile with modern-day Fourth Amendment fact patterns that revolve around law enforcement searches of modern electronic devices (laptops, smartphones, and the like). In the key Supreme Court private search doctrine case, Jacobsen (1984), the issue was the constitutionality of a DEA agent’s after-occurring search of a package after a FedEx employee partially opened the package (upon noticing that it was damaged) and saw a white powdery substance.

Since the turn of the millennium, courts of appeals have stretched Jacobsen to determine whether, and how far, the private search doctrine reaches law enforcement searches of electronics. In 2001, the Fifth Circuit addressed the private search doctrine in a case where the defendant’s estranged wife took a collection of floppy disks, CDs, and zip disks from the defendant’s property. She and her friend then discovered evidence of the defendant’s criminal activity while searching some of the disks and turned the collection over to the police, which led to the defendant’s conviction. United States v. Runyan, 275 F.3d 449 (5th Cir. 2001).

There are two crucial holdings in Runyan regarding the private search doctrine. First, the court held that “the police exceeded the scope of the private search when they examined the entire collection of ‘containers’ (i.e., the disks) turned over by the private searchers, rather than confining their [warrantless] search to the selected containers [that were actually] examined by the private searchers.” Id. at 462. Second, the court held that the “police search [did not] exceed[] the scope of the private search when the police examine[d] more items within a particular container [i.e., a particular disk] than did the private searchers” who searched some part of the particular disk but not its entire contents. Id. at 461, 464. Notably absent from this case: a laptop or smartphone.

Eleven years after Runyan, the Seventh Circuit held that the police did not exceed the scope of the private searches conducted by a victim and her mother. Rann v. Atchison, 689 F.3d 832 (7th Cir. 2012) (relying heavily on Runyan). In Rann, the police’s after-occurring search included viewing images (on the one memory card brought to them by the victim and the one zip drive brought to them by the victim’s mother) that the private searchers themselves had not viewed. Id. Likening computer storage disks to containers (as the Runyan court did), the Rann court concluded “that a search of any material on a computer disk is valid if the private [searcher] viewed at least one file on the disk.” Id. at 836 (emphasis added). But, as in Runyan, notably absent from this case: a laptop or smartphone.

Two years after Rann, the Supreme Court decided Riley v. California, a landmark case in which the Court unanimously held that the warrantless search of a cellphone during an arrest was unconstitutional. The Riley Court’s reasoning is noteworthy insofar as it bears on the Fourth Amendment’s (and, in turn, the private search doctrine’s) application to smartphones and laptops. The Court stated:

[W]e generally determine whether to exempt a given type of search from the warrant requirement by assessing, on the one hand, the degree to which it intrudes upon an individual’s privacy and, on the other, the degree to which it is needed for the promotion of legitimate governmental interests. . . . [Smartphones] are in fact minicomputers that also happen to have the capacity to be used as a telephone. They could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers. One of the most notable distinguishing features of [smartphones] is their immense storage capacity.

573 U.S. 373, 385, 393 (2014). Riley makes crystal clear that when the property at issue is a laptop or smartphone, the balance between a person’s privacy interests and the governmental interests tips heavily in favor of the individual’s privacy interests. In simpler terms, law enforcement needs a warrant to search a laptop or smartphone unless it has an extremely compelling reason for failing to comply with the Fourth Amendment’s warrant requirement.

One year after Riley, the Sixth and Eleventh Circuits, armed with Riley’s insights regarding modern electronic devices, decided Lichtenberger and United States v. Sparks, respectively. In both cases, the courts held, in large part due to Riley, that the police exceeded the scope of the initial private searches when conducting their after-occurring warrantless searches of a laptop (Lichtenberger) and a smartphone (Sparks). In Lichtenberger, the police exceeded the scope of the initial private search when, without a warrant, they looked at photographs on the laptop that the private searcher had not looked at, even though the private searcher had initially viewed other photographs on the laptop. 786 F.3d 478 (6th Cir. 2015). In Sparks, the police exceeded the scope of the initial private search when, without a warrant, they viewed a video located in a smartphone album that the private searcher had scrolled through but which the private searcher did not actually view. 806 F.3d 1323 (11th Cir. 2015), overruled on other grounds by United States v. Ross, 963 F.3d 1056 (11th Cir. 2020) (overruling Sparks “to the extent that [Sparks] holds that [property] abandonment implicates Article III standing”).

At first glance, Lichtenberger and Sparks seem irreconcilable with Runyan and Rann, leading many commentators to conclude that there is a circuit split regarding the private search doctrine: the “container” approach versus the “file” (or “narrow”) approach. But I disagree, and there is a rather simple explanation: Riley merely heightened Jacobsen’s “virtual certainty” requirement for determining whether law enforcement exceeds the scope of an initial private search of a laptop or smartphone. In other words, “virtual certainty” is significantly elevated in the context of smartphones and laptops because of the heightened privacy interests at stake, which stem from their immense storage capacities and unique qualities; they contain information and data about all aspects of our lives to a much greater extent than floppy disks, CDs, zip drives, and camera memory cards. Thus, the only apparent sure way for law enforcement to satisfy the private search doctrine’s “virtual certainty” requirement when a laptop or smartphone is involved (and thereby avoid inviting defendants to invoke the exclusionary rule) is to view exactly what the private searcher viewed.

In contrast, the “virtual certainty” requirement in the context of old school floppy disks, CDs, zip drives, and memory cards is quite simply a lower standard of certainty because the balance between privacy interests and legitimate governmental interests is not tipped heavily in favor of privacy interests.

While floppy disks, CDs, and zip drives somewhat resemble “containers,” such as the package in Jacobsen, smartphones and laptops are entirely different Fourth Amendment beasts. Accordingly, all four cases should be analyzed through a lens that treats the particular electronic device at issue as the most significant fact, because the nature of the device guides the determination of whether the after-occurring search fell within the scope of the initial private search. Viewed this way, the case law does not pit the container approach against the file approach. Rather, it (justifiably) applies the container approach to certain older electronic storage devices and the file approach to modern electronic devices that implicate weightier privacy concerns.