Public Safety

Reloaded: What’s Next for Guns After Cargill & Rahimi?

Evan Bracewell, MJLST Staffer

In 2024, the Supreme Court released two major opinions related to gun safety laws, United States v. Rahimi and Garland v. Cargill.[1] Rahimi involved what types of people can possess guns.[2] Cargill involved what types of guns (or, more accurately, gun technology) people can possess.[3] In Cargill, the Supreme Court struck down an ATF rule that banned bump stocks.[4] A bump stock is an attachment that uses the momentum of a gun’s recoil to bump the weapon back and forth between the shooter’s shoulder and trigger finger, causing an increased rate of fire.[5] To some, this signaled “another example of the [C]ourt’s hostility towards gun regulation in general.”[6] However, one week after Cargill was decided, the Court ruled in favor of a federal gun safety law in United States v. Rahimi. In Rahimi, the Court upheld a federal statute that “prohibits an individual subject to a domestic violence restraining order from possessing a firearm if that order includes a finding that he ‘represents a credible threat to the physical safety of [an] intimate partner,’ or a child of the partner or individual.”[7]

These two cases were the first major rulings on gun safety from the Supreme Court since it established the Bruen framework for handling gun control laws.[8] In New York State Rifle & Pistol Ass’n v. Bruen, the Court held “when the Second Amendment’s plain text covers an individual’s conduct, the Constitution presumptively protects that conduct. To justify its regulation . . . the government must demonstrate that the regulation is consistent with this Nation’s historical tradition of firearm regulation.”[9] This means that when someone alleges a law violates their Second Amendment rights, the law survives only if judges can find an analogue for it in America’s historical tradition of firearm regulation. After Bruen, uncertainty and confusion spread across the lower courts tasked with applying this historical comparison to modern statutes.[10] Because Cargill concerned an ATF regulation over whether a bump stock fits the definition of a machine gun and was not a Second Amendment challenge, it did not employ the Bruen framework.[11] Rahimi, however, was the Court’s chance to clarify the Bruen test—a chance some say it failed to capitalize on.[12]

The Bruen analysis has received criticism from a wide range of judges.[13] A common complaint is that Bruen demands that judges act as historical researchers and experts, making findings of fact about the history of gun control laws.[14] This can produce unpredictable and inconsistent outcomes.[15] Other judges have pointed out that the test may entrench gun laws in the past without leaving room for modern solutions.[16] The Bruen test has left some judges scratching their heads over history textbooks, and when it is combined with the differing outcomes of Cargill and Rahimi, the future of gun laws appears cloudy. On the other hand, it also makes upcoming gun cases potentially incredibly impactful.

Looking ahead, the next significant Supreme Court case on the topic might already be in motion. A petition for certiorari has been filed in Bianchi v. Brown, and if granted, the case could produce a pivotal decision for gun safety.[17] In Bianchi, the Fourth Circuit recently re-evaluated, under a Bruen analysis, Maryland’s state law that generally prohibits the sale and possession of assault weapons.[18] The majority decided the law was constitutional because the assault weapons specifically restricted by it are not protected by the Second Amendment and, even if they were, there is a “historical tradition of restricting the use and possession of weapons exceptionally dangerous to civilians.”[19] The Fourth Circuit thus upheld a strict gun technology law even in a post-Bruen world.

If the Supreme Court chooses to hear this case, the resulting decision could produce monumental shockwaves for gun laws. Eight other states have adopted assault weapon bans similar to Maryland’s,[20] and an opinion upholding the ban might spark more states to adopt similar statutes. It could also demonstrate how the judicial branch may approach a federal ban on assault weapons.[21] Conversely, a decision striking down the ban would be a major obstacle to future strict gun control laws. If the case is heard, hopefully the Supreme Court will use the opportunity to clarify the “labyrinth”[22] that is Bruen, which could allow for more predictability in legislating gun technology and safety.

The difference in the rulings of Rahimi and Cargill could be viewed as the Court being more comfortable with laws regulating who can possess guns than rules regulating what gun technology people can possess. Viewing Bianchi through that lens, the Maryland statute may be deemed unconstitutional because it is a ban on guns, not a ban on who can possess guns. However, the Supreme Court could appreciate the Fourth Circuit’s examination of the law under the Bruen framework and comparison to 18th and 19th century regulations of “pistols, bowie knives, brass knuckles, and sand clubs, among other weapons” as excessively dangerous to the public.[23]

Bianchi is not the only case on the horizon for gun laws. “[E]ven if the Court declines to take up the pending [Bianchi v. Brown] petition, it will almost certainly see more petitions soon.”[24] The next major gun technology case will be decided by the Supreme Court this term in Garland v. VanDerStok. That case concerns a “federal rule regulating so-called ‘ghost guns’” which are “untraceable weapons without serial numbers, assembled from components or kits that can be bought online.”[25] Other future gun cases could concern a recently overturned Illinois state law banning certain assault weapons,[26] the minimum age to purchase guns,[27] Connecticut’s ban on assault rifles and large-capacity magazines,[28] or red flag laws that allow a law enforcement agency to temporarily prevent someone deemed an extreme risk from possessing guns.[29]

“Rahimi’s greatest takeaway is likely that the court faces a challenging landscape ahead, which it will have to wade through largely on a case-by-case basis.”[30] The Supreme Court could use a case like Bianchi to clarify the post-Bruen world, bring a sliver of stability to gun laws, and maybe even alleviate the concerns of judges nationwide entrenched in history books.

 

Notes

[1] United States v. Rahimi, 144 S. Ct. 1889 (2024); Garland v. Cargill, 602 U.S. 406 (2024).

[2] See Rahimi, 144 S. Ct. at 1894.

[3] See Cargill, 602 U.S. at 410.

[4] Id. at 415.

[5] Larry Buchanan et al., What Is a Bump Stock and How Does It Work?, N.Y. Times, https://nyti.ms/43NEi6b (last updated June 14, 2024).

[6] Joel Brown, Supreme Court Strikes Down Ban on Gun Bump Stocks, BU Today (June 14, 2024), https://www.bu.edu/articles/2024/supreme-court-strikes-down-ban-on-gun-bump-stocks/ (quoting Cody Jacobs, a Boston University School of Law lecturer).

[7] Rahimi, 144 S. Ct. at 1894 (referencing 18 U.S.C. § 922(g)(8)).

[8] See Michael McCarthy, Justices’ 1st Post-Bruen Gun Ruling Provides Little Guidance, Law360 (June 28, 2024, 4:41 PM), https://www.law360.com/articles/1852462/justices-1st-post-bruen-gun-ruling-provides-little-guidance.

[9] N.Y. State Rifle & Pistol Ass’n v. Bruen, 597 U.S. 1, 17 (2022).

[10] Dave S. Sidhu, Cong. Rsch. Serv., LSB11219, Divided En Banc Federal Appeals Court Rejects Second Amendment Challenge to Maryland’s Ban on “Assault Weapons” 3 (2024) (“Six judges joined the majority opinion in full but wrote separately to highlight the ‘confusion’ that lower courts are experiencing in applying Bruen.”).

[11] Garland v. Cargill, 602 U.S. 406 (2024).

[12] Adam Liptak, Supreme Courtʼs Gun Rulings Leave Baffled Judges Asking for Help, N.Y. Times (Sept. 23, 2024), https://www.nytimes.com/2024/09/23/us/supreme-court-guns-second-amendment.html (“Chief Judge Diaz was not convinced. The Rahimi decision, he wrote, ‘offered little instruction or clarity.’”).

[13] Clara Fong et al., Judges Find Supreme Court’s Bruen Test Unworkable, Brennan Ctr. for Just. (June 26, 2023), https://www.brennancenter.org/our-work/research-reports/judges-find-supreme-courts-bruen-test-unworkable (“In the short time since Bruen was issued, federal judges appointed by Presidents Reagan, Clinton, George W. Bush, Obama, Trump, and Biden have all questioned the opinion.”).

[14] Id.; Liptak, supra note 12 (quoting Judge Pamela Harris speaking at a conference and describing the issue of receiving two competing briefs where one says something happened in history and the other claims it did not happen).

[15] United States v. Bartucci, 658 F. Supp. 3d 794, 800 (2023) (“In the short time post-Bruen, this has caused disarray among the lower courts when applying the new framework.”).

[16] See Worth v. Harrington, 666 F. Supp. 3d 902, 926 (2023) (“Second Amendment jurisprudence now focuses a lens entirely on the choices made in a very different time, by a very different American people.”); Dave S. Sidhu, Cong. Rsch. Serv., supra note 10, at 3 (“More broadly, the six judges cautioned that overemphasizing the importance of historical analogues may ‘fossilize’ modern legislative attempts and ‘paralyze’ democratic efforts.”).

[17] Andrew Willinger, An Update on Challenges to State Assault Weapon and Magazine Bans, Duke Ctr. for Firearms L. (Nov. 6, 2024), https://firearmslaw.duke.edu/2024/11/an-update-on-challenges-to-state-assault-weapon-and-magazine-bans. It should be noted that the case will be renamed Snope v. Brown if the Supreme Court decides to hear it, but Bianchi is used throughout this blog post to avoid confusion.

[18] Bianchi v. Brown, 111 F.4th 438, 442 (4th Cir. 2024) (en banc) (referencing Md. Code Ann., Crim. Law § 4-303 (West 2018)) (“The statute defines ‘assault weapon’ as ‘(1) an assault long gun; (2) an assault pistol; or (3) a copycat weapon.’”).

[19] Dave S. Sidhu, Cong. Rsch. Serv., supra note 10, at 3.

[20] Which States Prohibit Assault Weapons?, Everytown Rsch. & Pol’y, https://everytownresearch.org/rankings/law/assault-weapons-prohibited/ (last updated Jan. 4, 2024).

[21] Dave S. Sidhu, Cong. Rsch. Serv., supra note 10, at 4 (“Judicial evaluations of similar state bans, like Maryland’s, under the Second Amendment may provide an indication of how a federal ban could fare in the courts.”).

[22] Bianchi, 111 F.4th at 473–74 (Diaz, C.J., concurring) (“Bruen has proven to be a labyrinth for lower courts, including our own, with only the one-dimensional history-and-tradition test as a compass.”).

[23] Dave S. Sidhu, Cong. Rsch. Serv., supra note 10, at 3.

[24] Willinger, supra note 17. The Court remanded several cases after Rahimi that could eventually make their way back to the Supreme Court. These cases involve prohibitions on gun possession by certain nonviolent offenders, certain drug users, and certain people with felony convictions. Dave S. Sidhu, Cong. Rsch. Serv., LSB1108, The Second Amendment at the Supreme Court: Challenges to Federal Gun Laws 2–4 (2024).

[25] Amy Howe, Court Likely to Let Biden’s “Ghost Gun” Regulation Stand, SCOTUSblog (Oct. 8, 2024, 5:07 PM), https://www.scotusblog.com/2024/10/court-likely-to-let-bidens-ghost-gun-regulation-stand/.

[26] Lauren Berg, Ill. Assault Rifle Ban Struck As Unconstitutional, AG To Appeal, Law360 (Nov. 8, 2024, 11:44 PM), https://www.law360.com/articles/2259026/ill-assault-rifle-ban-struck-as-unconstitutional-ag-to-appeal.

[27] Daniel Ducassi, 10th Circ. Backs Colorado Age Limits For Gun Buyers, Law360 (Nov. 6, 2024, 10:43 PM), https://www.law360.com/articles/2257228/10th-circ-backs-colorado-age-limits-for-gun-buyers.

[28] Aaron Keller & Brian Steele, 2nd Circ. Scrutinizes Conn. Restrictions On AR-15s, Law360 (Oct. 16, 2024, 8:39 PM), https://www.law360.co.uk/articles/1890635/2nd-circ-scrutinizes-conn-restrictions-on-ar-15s.

[29] George Woolston, Attorney Says NJ Red Flag Law Violates 2nd Amendment, Law360 (Oct. 28, 2024, 4:57 PM), https://www.law360.com/pulse/articles/2252661/attorney-says-nj-red-flag-law-violates-2nd-amendment.

[30] McCarthy, supra note 8.


Privacy at Risk: Analyzing DHS AI Surveillance Investments

Noah Miller, MJLST Staffer

The concept of widespread surveillance of public areas monitored by artificial intelligence (“AI”) may sound like it comes right out of a dystopian novel, but key investments by the Department of Homeland Security (“DHS”) could make this a reality. Under the Biden Administration, the U.S. has acted quickly and strategically to adopt artificial intelligence as a tool to realize national security objectives.[1] In furtherance of President Biden’s executive goals concerning AI, the Department of Homeland Security has been making investments in surveillance systems that utilize AI algorithms.

Despite the substantial interest in protecting national security, Patrick Toomey, deputy director of the ACLU National Security Project, has criticized the Biden administration for allowing national security agencies to “police themselves as they increasingly subject people in the United States to powerful new technologies.”[2] Notably, these investments have not been tailored toward high-security locations like airports. Instead, they include surveillance of “soft targets”—high-traffic areas with limited security: “Examples include shopping areas, transit facilities, and open-air tourist attractions.”[3] Currently, because of the number of people required to review footage, surveilling most public areas is infeasible; emerging AI algorithms, however, would allow this work to be done automatically. While enhancing security protections in soft targets is a noble and possibly desirable initiative, the potential privacy ramifications of widespread autonomous AI surveillance are extreme. Current Fourth Amendment jurisprudence offers little resistance to this form of surveillance, and the DHS has been both developing this surveillance technology itself and outsourcing these projects to private corporations.

To foster innovation to combat threats to soft targets, the DHS has created a center called Soft Target Engineering to Neutralize the Threat Reality (“SENTRY”).[4] One of SENTRY’s research areas is “real-time management of threat detection and mitigation.”[5] One project in this research area seeks to create AI algorithms that can detect threats in public and crowded areas.[6] Once the algorithm detects a threat, the incident would be sent to a human for confirmation.[7] This would be a substantially more efficient form of surveillance than is currently widely available.

Along with the research conducted through SENTRY, the DHS has been investing in private companies to develop AI surveillance technologies through the Silicon Valley Innovation Program (“SVIP”).[8] Through the SVIP, the DHS has awarded funding to three companies to develop AI surveillance technologies that can detect “anomalous events via video feeds” to improve security in soft targets: Flux Tensor, Lauretta AI, and Analytical AI.[9] First, Flux Tensor has a pilot-ready prototype that applies “flexible object detection algorithms” to video feeds to track and pinpoint movements of interest.[10] The technology is used to distinguish human movements and actions from the environment—e.g., weather, glare, and camera movements.[11] Second, Lauretta AI is adjusting its established activity-recognition AI to utilize “multiple data points per subject to minimize false alerts.”[12] The technology periodically generates automated reports of detected incidents, categorized by relative severity.[13] Third, Analytical AI is in the proof-of-concept demo phase with AI algorithms that can autonomously track objects in relation to people within a perimeter.[14] The company has already created algorithms that can screen for prohibited items and “on-person threats” (i.e., weapons).[15] All of these technologies are in early stages, so the DHS is unlikely to deploy them in the imminent future.
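For readers unfamiliar with how video analytics of this kind work at a basic level, the sketch below illustrates one common, generic technique (background subtraction with OpenCV) for flagging “movements of interest” in a fixed camera feed. It is only a simplified illustration of the general approach described above, not the actual method used by Flux Tensor, Lauretta AI, or Analytical AI; the video file name and threshold values are hypothetical.

```python
# Minimal sketch of generic motion detection via background subtraction
# (OpenCV 4.x). This is NOT any SVIP vendor's actual system -- just an
# illustration of how moving regions can be separated from a static scene
# and queued for human review. File name and thresholds are hypothetical.
import cv2

cap = cv2.VideoCapture("crowd_footage.mp4")  # hypothetical fixed-camera video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # foreground mask: pixels that changed
    mask = cv2.medianBlur(mask, 5)      # suppress speckle noise (e.g., rain, glare flicker)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 800:    # ignore tiny regions
            x, y, w, h = cv2.boundingRect(c)
            # A deployed system would classify this region (person, object, etc.)
            # and, only if flagged as a threat, route it to a human reviewer.
            print(f"movement of interest at x={x}, y={y}, size={w}x{h}")

cap.release()
```

The point of the sketch is scale: once flagging is automated, a handful of human reviewers can sit atop thousands of camera feeds, which is exactly what makes the privacy concerns described below more than hypothetical.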

Assuming these AI algorithms are effective and come to fruition, current Fourth Amendment protections seem insufficient to protect against rampant usage of AI surveillance in public areas. In Kyllo v. United States, the Court placed an important limit on law enforcement’s use of new technologies, holding that when new sense-enhancing technology not in general public use is utilized to obtain information from a constitutionally protected area, the use of that technology constitutes a search.[16] Unlike in Kyllo, where the police used thermal imaging to obtain temperature levels on various areas of a house, people subject to AI surveillance in public areas would not be in constitutionally protected areas.[17] Because people subject to this surveillance would be in public places, they would not have a reasonable expectation of privacy in their movements; therefore, this form of surveillance likely would not constitute a search under prevailing Fourth Amendment search analysis.[18]

While the scope and accuracy of this new technology are still to be determined, policymakers and agencies need to implement proper safeguards and proceed cautiously. In the best scenario, this technology can keep citizens safe while mitigating the impact on the public’s privacy interests. In the worst scenario, this technology could effectively turn our public spaces into security checkpoints. Regardless of how relevant actors proceed, this new technology would likely result in at least some decline in the public’s privacy interests. Policymakers should not make a Faustian bargain for the sake of maintaining social order.

 

Notes

[1] See generally Joseph R. Biden Jr., Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, The White House (Oct. 24, 2024), https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/ (explaining how the executive branch intends to utilize artificial intelligence in relation to national security).

[2] ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections, ACLU (Oct. 24, 2024, 12:00 PM), https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections.

[3] Jay Stanley, DHS Focus on “Soft Targets” Risks Out-of-Control Surveillance, ALCU (Oct. 24, 2024), https://www.aclu.org/news/privacy-technology/dhs-focus-on-soft-targets-risks-out-of-control-surveillance.

[4] See Overview, SENTRY, https://sentry.northeastern.edu/overview/#VSF.

[5] Real-Time Management of Threat Detection and Mitigation, SENTRY, https://sentry.northeastern.edu/research/real-time-threat-detection-and-mitigation/.

[6] See An Artificial Intelligence-Driven Threat Detection and Real-Time Visualization System in Crowded Places, SENTRY, https://sentry.northeastern.edu/research-project/an-artificial-intelligence-driven-threat-detection-and-real-time-visualization-system-in-crowded-places/.

[7] See id.

[8] See, e.g., SVIP Portfolio and Performers, DHS, https://www.dhs.gov/science-and-technology/svip-portfolio.

[9] Id.

[10] See Securing Soft Targets, DHS, https://www.dhs.gov/science-and-technology/securing-soft-targets.

[11] See pFlux Technology, Flux Tensor, https://fluxtensor.com/technology/.

[12] See Securing Soft Targets, supra note 10.

[13] See Security, Lauretta AI, https://lauretta.io/technologies/security/.

[14] See Securing Soft Targets, supra note 10.

[15] See Technology, Analytical AI, https://www.analyticalai.com/technology.

[16] Kyllo v. United States, 533 U.S. 27, 33 (2001).

[17] Cf. id.

[18] See generally Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring) (explaining the test for whether someone may rely on an expectation of privacy).

 

 


AI and Predictive Policing: Balancing Technological Innovation and Civil Liberties

Alexander Engemann, MJLST Staffer

To maximize their effectiveness, police agencies are constantly looking to use the most sophisticated preventative methods and technologies available. Predictive policing is one such technique that fuses data analysis, algorithms, and information technology to anticipate and prevent crime. This approach identifies patterns in data to anticipate when and where crime will occur, allowing agencies to take measures to prevent it.[1] Now, engulfed in an artificial intelligence (“AI”) revolution, law enforcement agencies are eager to take advantage of these developments to augment controversial predictive policing methods.[2]

In precincts that use predictive policing strategies, large amounts of data are used to categorize residents by basic demographic information.[3] Now, machine learning and AI tools are augmenting this data; according to one vendor, the resulting software “identifies where and when crime is most likely to occur, enabling [law enforcement] to effectively allocate [their] resources to prevent crime.”[4]
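To make the mechanics concrete, here is a bare-bones sketch of the kind of grid-based “hotspot” scoring such tools perform: historical incident reports are binned into spatial cells and ranked by recency-weighted counts. This is not any vendor’s actual product; the file name, column names, cell size, and decay half-life are assumptions for illustration only.

```python
# Illustrative hotspot scoring from historical incident data -- an assumption-
# laden sketch, not a real predictive policing product. Assumed CSV columns:
# lat, lon, reported_at.
import pandas as pd

CELL = 0.01  # grid cell size in degrees of latitude/longitude (assumption)

incidents = pd.read_csv("historical_incidents.csv",  # hypothetical data file
                        parse_dates=["reported_at"])

# Weight recent incidents more heavily (exponential decay, 30-day half-life).
age_days = (incidents["reported_at"].max() - incidents["reported_at"]).dt.days
incidents["weight"] = 0.5 ** (age_days / 30.0)

# Assign each incident to a grid cell and sum weights per cell.
incidents["cell_x"] = (incidents["lon"] / CELL).round().astype(int)
incidents["cell_y"] = (incidents["lat"] / CELL).round().astype(int)
scores = (incidents.groupby(["cell_x", "cell_y"])["weight"]
          .sum()
          .sort_values(ascending=False))

# Top-scoring cells are the forecast "hotspots." Note that the forecast can
# only reflect where incidents were recorded before -- which is why critics
# warn that biased historical data reproduces itself in the output.
print(scores.head(10))
```

Even this toy version makes the core critique visible: the model can only learn from where police recorded incidents in the past, so enforcement patterns feed back into the forecast.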

Both predictive policing and AI have faced significant criticism over equity and discrimination. In response to these concerns, the European Union has taken proactive steps, promulgating sophisticated rules governing AI applications within its territory and continuing its tradition of leading in regulatory initiatives.[5] In the “Artificial Intelligence Act,” the Union clearly outlined its goal of promoting safe, non-discriminatory AI systems.[6]

Back home, we’ve failed to keep a similar legislative pace, even with certain institutions sounding the alarm.[7] Predictive policing methods have faced similar criticism. In an issue brief, the NAACP emphasized, “[j]urisdictions who use [Artificial Intelligence] argue it enhances public safety, but in reality, there is growing evidence that AI-driven predictive policing perpetuates racial bias, violates privacy rights, and undermines public trust in law enforcement.”[8] This technological and ideological marriage clearly poses discriminatory risks for law enforcement agencies in a nation where a Black person is already far more likely to be stopped without just cause than their white counterparts.[9]

Police agencies are bullish about the technology. Police Chief Magazine, the official publication of the International Association of Chiefs of Police, paints these techniques in a more favorable light, stating, “[o]ne of the most promising applications of AI in law enforcement is predictive policing…Predictive policing empowers law enforcement to predict potential crime hotspots, ultimately aiding in crime prevention and public safety.”[10] In this space, facial recognition software is also gaining traction among law enforcement agencies as a powerful tool for identifying suspects and enhancing public safety. Clearview AI stresses that its product “[helps] law enforcement and governments in disrupting and solving crime.”[11]

Predictive policing methods enhanced by AI technology show no signs of slowing down.[12] The obvious advantages of these systems cannot be ignored, allowing agencies to better allocate resources and manage their staff. However, as law enforcement agencies adopt these technologies, it is important to remain vigilant in holding them accountable for the ethical implications and biases embedded within their systems. A comprehensive framework for accountability and transparency, similar to the European Union guidelines, must be established to ensure that deploying predictive policing and AI tools does not come at the expense of marginalized communities.[13]

 

Notes

[1] Andrew Guthrie Ferguson, Predictive Policing and Reasonable Suspicion, 62 Emory L.J. 259, 265–67 (2012).

[2] Eric M. Baker, I’ve got my AI on You: Artificial Intelligence in the Law Enforcement Domain, 47 (Mar. 2021) (Master’s thesis).

[3] Id. at 48.

[4] Id. at 49 (citing Walt L. Perry et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RR-233-NIJ (Santa Monica, CA: RAND, 2013), 4, https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf).

[5] Commission Regulation 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (Artificial Intelligence Act), 2024 O.J. (L 1689) 1.

[6] Lukas Arnold, How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination, Penn. J. L. & Soc. Change (May 14, 2024), https://www.law.upenn.edu/live/news/16742-how-the-european-unions-ai-act-provides#_ftn1.

[7] See Margaret Hu, Algorithmic Jim Crow, 86 Fordham L. Rev. 633, 664 (2017), https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5445&context=flr (“Database screening and digital watchlisting systems, in fact, can serve as complementary and facially colorblind supplements to mass incarcerations systems. The purported colorblindness of mandatory sentencing… parallels the purported colorblindness of mandatory database screening and vetting systems”).

[8] NAACP, Issue Brief: The Use of Artificial Intelligence in Predictive policing, https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief (last visited Nov. 2, 2024).

[9] Will Douglas Heaven, Artificial Intelligence- Predictive policing algorithms are racist. They need to be dismantled, MIT Tech. Rev. (July 17, 2020), https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (citing OJJDP Statistical Briefing Book. Estimated number of arrests by offense and race, 2020. Available: https://ojjdp.ojp.gov/statistical-briefing-book/crime/faqs/ucr_table_2. Released on July 08, 2022).

[10] See The Police Chief, Int’l Ass’n of Chiefs of Police, https://www.policechiefmagazine.org (last visited Nov. 2, 2024); Brandon Epstein, James Emerson, and ChatGPT, “Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives,” Police Chief Online, April 3, 2024.

[11] Clearview AI, https://www.clearview.ai/ (last visited Nov. 3, 2024).

[12] But see Nicholas Ibarra, Santa Cruz Becomes First US City to Approve Ban on Predictive Policing, Santa Cruz Sentinel (June 23, 2020), https://evidentchange.org/newsroom/news-of-interest/santa-cruz-becomes-first-us-city-approve-ban-predictive-policing/.

[13] See also Roy Maurer, New York City to Require Bias Audits of AI-Type HR Technology, Society of Human Resources Management (December 19, 2021), https://www.shrm.org/topics-tools/news/technology/new-york-city-to-require-bias-audits-ai-type-hr-technology.

 


Social Media Platforms Won’t “Like” This: How Aggrieved Users Are Circumventing the Section 230 Shield

Claire Carlson, MJLST Staffer

 

Today, almost thirty years after modern social media platforms were introduced, 93% of teens use social media on a daily basis.[1] On average, teens spend nearly five hours a day on social media platforms, with a third reporting that they are “almost constantly” active on one of the top five leading platforms.[2] As social media usage has surged, concerns have grown among users, parents, and lawmakers about its impacts on teens, with primary concerns including cyberbullying, extremism, eating disorders, mental health problems, and sex trafficking.[3] In response, parents have brought a number of lawsuits against social media companies alleging the platforms market to children, connect children with harmful content and individuals, and fail to take the steps necessary to keep children safe.[4]

 

When facing litigation, social media companies often invoke the immunity granted to them under Section 230 of the Communications Decency Act.[5] 47 U.S.C. § 230 states, in relevant part, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[6] Federal courts generally agree in interpreting this statutory language as providing broad immunity for social media providers.[7] Under this interpretive framework, social media companies can be held liable only for content they author, whereas Section 230 shields them from liability for harm arising from information or content posted by third-party users of their platforms.[8]

 

In V.V. v. Meta Platforms, Inc., plaintiffs alleged that popular social media platform Snapchat intentionally encourages use by minors and consequently facilitated connections between their twelve-year-old daughter and sex offenders, leading to her assault.[9] The court held that the facts of this case fell squarely within the intended scope of Section 230, as the harm alleged was the result of the content and conduct of third-party platform users, not Snapchat.[10] The court acknowledged that the plaintiffs’ circumstances evoked outrage but explained that Section 230 precedent required it to deny relief, asserting that it lacked judicial authority to do otherwise absent legislative action.[11] Consequently, the court held that Section 230 shielded Snapchat from liability for the harm caused by the third-party platform users and that plaintiffs’ only option for redress was to bring suit against the third-party users directly.[12]

 

After decades of cases like V.V., in which Section 230 has shielded social media companies from liability, plaintiffs are taking a new approach rooted in tort law. While Section 230 provides social media companies immunity from harm caused by their users, it does not shield them from liability for harm caused by their own platforms and algorithms.[13] Accordingly, plaintiffs are trying to bypass the Section 230 shield with product liability claims alleging that social media companies knowingly, and often intentionally, design defective products aimed at fostering teen addiction.[14] Many of these cases analogize social media companies to tobacco companies, maintaining that they are aware of the risks associated with their products and deliberately conceal them.[15] These claims coincide with the U.S. Surgeon General and more than 40 attorneys general imploring Congress to pass legislation mandating warning labels on social media platforms emphasizing the risk of teen addiction and other negative health impacts.[16]

Courts stayed tort addiction cases and postponed rulings last year in anticipation of the Supreme Court ruling on the first Section 230 immunity cases to come before it.[17] In companion cases, Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, the Supreme Court was expected to shed light on the scope of Section 230 immunity by deciding whether social media companies are immune from liability when the platform’s algorithm recommends content that causes harm.[18] In both, the court declined to answer the Section 230 question and decided the cases on other grounds.[19]

 

Since then, while claims arising from third-party content are continuously dismissed, social media addiction cases have received positive treatment in both state and federal courts.[20] In a federal multidistrict litigation (MDL) proceeding, the presiding judge permitted hundreds of addiction cases alleging defective product (platform and algorithm) design to move forward. In September, the MDL judge issued a case management order, which suggests an early 2026 trial date.[21] Similarly, a California state judge found that Section 230 does not shield social media companies from liability in hundreds of addiction cases, as the alleged harms are based on the company’s design and operation of their platforms, not the content on them.[22] Thus, social media addiction cases are successfully using tort law to bypass Section 230 where their predecessor cases failed.

 

With hundreds of pending social media cases and the Supreme Court’s silence on the scope of Section 230 immunity, the future of litigating and understanding social media platform liability is uncertain.[23] However, the preliminary results in state and federal courts evince that Section 230 is not the infallible immunity shield that social media companies have grown to rely on.

 

Notes

 

[1] Leon Chaddock, What Percentage of Teens Use Social Media? (2024), Sentiment (Jan. 11, 2024), https://www.sentiment.io/how-many-teens-use-social-media/#:~:text=Surveys%20suggest%20that%20over%2093,widely%20used%20in%20our%20survey. In the context of this work, the term “teens” refers to people aged 13-17.

[2] Jonathan Rothwell, Teens Spend Average of 4.8 Hours on Social Media Per Day, Gallup (Oct. 13, 2023), https://news.gallup.com/poll/512576/teens-spend-average-hours-social-media-per-day.aspx; Monica Anderson, Michelle Faverio & Jeffrey Gottfried, Teens, Social Media and Technology 2023, Pew Rsch. Ctr. (Dec. 11, 2023), https://www.pewresearch.org/internet/2023/12/11/teens-social-media-and-technology-2023/.

[3] Chaddock, supra note 1; Ronald V. Miller, Social Media Addiction Lawsuit, Lawsuit Info. Ctr. (Sept. 20, 2024), https://www.lawsuit-information-center.com/social-media-addiction-lawsuits.html#:~:text=Social%20Media%20Companies%20May%20Claim,alleged%20in%20the%20addiction%20lawsuits.

[4] Miller, supra note 3.

[5] Tyler Wampler, Social Media on Trial: How the Supreme Court Could Permanently Alter the Future of the Internet by Limiting Section 230’s Broad Immunity Shield, 90 Tenn. L. Rev. 299, 311–13 (2023).

[6] 47 U.S.C. § 230 (2018).

[7] V.V. v. Meta Platforms, Inc., No. X06UWYCV235032685S, 2024 WL 678248, at *8 (Conn. Super. Ct. Feb. 16, 2024) (citing Brodie v. Green Spot Foods, LLC, 503 F. Supp. 3d 1, 11 (S.D.N.Y. 2020)).

[8] V.V., 2024 WL 678248, at *8; Poole v. Tumblr, Inc., 404 F. Supp. 3d 637, 641 (D. Conn. 2019).

[9] V.V., 2024 WL 678248, at *2.

[10] V.V., 2024 WL 678248, at *11.

[11] V.V., 2024 WL 678248, at *11.

[12] V.V., 2024 WL 678248, at *7, 11.

[13] Miller, supra note 3.

[14] Miller, supra note 3; Isaiah Poritz, Social Media Addiction Suits Take Aim at Big Tech’s Legal Shield, BL (Oct. 25, 2023), https://www.bloomberglaw.com/bloomberglawnews/tech-and-telecom-law/X2KNICTG000000?bna_news_filter=tech-and-telecom-law#jcite.

[15] Kirby Ferguson, Is Social Media Big Tobacco 2.0? Suits Over the Impact on Teens, Bloomberg (May 14, 2024), https://www.bloomberg.com/news/videos/2024-05-14/is-social-media-big-tobacco-2-0-video.

[16] Miller, supra note 3.

[17] Miller, supra note 3; Wampler, supra note 5, at 300, 321; In re Soc. Media Adolescent Addiction/Pers. Inj. Prod. Liab. Litig., 702 F. Supp. 3d 809, 818 (N.D. Cal. 2023) (“[T]he Court was awaiting the possible impact of the Supreme Court’s decision in Gonzalez v. Google. Though that case raised questions regarding the scope of Section 230, the Supreme Court ultimately did not reach them.”).

[18] Wampler, supra note 5, at 300, 339-46; Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 409 (2023).

[19] Twitter, Inc. v. Taamneh, 598 U.S. 471, 505 (2023) (holding that the plaintiff failed to plausibly allege that defendants aided and abetted terrorists); Gonzalez v. Google LLC, 598 U.S. 617, 622 (2023) (declining to address Section 230 because the plaintiffs failed to state a plausible claim for relief).

[20] Miller, supra note 3.

[21] Miller, supra note 3; In re Soc. Media Adolescent Addiction/Pers. Inj. Prod. Liab. Litig., 702 F. Supp. 3d at 862.

[22] Miller, supra note 3; Poritz supra note 14.

[23] Leading Case, supra note 18, at 400, 409.


Perhaps Big Tech Regulation Belongs on Congress's For You Page

Kira Le, MJLST Staffer

On Thursday, March 23, 2023, TikTok CEO Shou Zi Chew testified before a congressional panel for five hours in an effort to convince Congress that the social media platform should not be banned in the United States. The hearing came one week after reports surfaced that the Committee on Foreign Investment in the United States was threatening a ban unless TikTok’s parent company, ByteDance, sold its stake in the company.[1] Lawmakers on both sides of the aisle, as well as FBI officials, are allegedly concerned with the possibility of the Chinese government manipulating users’ experience on the platform or threatening the security of the data of its more than 150 million users in the United States.[2] Despite Chew’s testimony that TikTok plans to contract with U.S. tech giant Oracle to store U.S. data on U.S. servers on U.S. soil, preventing Chinese interference on the platform and recommending content to U.S. users through Oracle infrastructure, lawmakers were not convinced, and not a single one offered support for TikTok.[3]

In terms of what’s to come for TikTok’s future in the United States, Senator Marco Rubio updated his website on Monday, March 27, 2023 with information on “when TikTok will be banned,” claiming his proposed ANTI-SOCIAL CCP Act is the only bipartisan, bicameral legislation that would actually prevent TikTok from operating in the United States.[4] In order to cut off the platform’s access to critical functions needed to remain online, the proposed statute would require the president to use the International Emergency Economic Powers Act to block and prohibit all transactions with TikTok, ByteDance, and any subsidiary or successor within 30 days.[5] Senator Rubio explains that the proposed legislation “requires the president to block and prohibit transactions with social media companies owned or otherwise controlled by countries or entities of concern.”[6] Reuters reports that the White House supports the Senate bill known as the RESTRICT Act.[7] However, former President Trump made an almost identical attempt to ban the app in 2020.[8] TikTok was successful in quashing the effort and would almost certainly challenge any future attempts.[9] Further, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, “To justify a TikTok ban, the government would have to demonstrate that privacy and security concerns can’t be addressed in narrower ways. The government hasn’t demonstrated this, and we doubt it could. Restricting access to a speech platform that is used by millions of Americans every day would set a dangerous precedent for regulating our digital public sphere more broadly.”[10]

Despite what Congress may want the public to think, it certainly has other options for protecting Americans and their data from Big Tech companies like TikTok. For example, nothing is stopping U.S. lawmakers from following in the footsteps of the European Parliament, which passed the Digital Markets Act just last year.[11] Although the main purpose of the Act is to limit anticompetitive conduct by large technology companies, it includes several provisions on protecting the personal data of users of defined “gatekeeper” firms. Under the Act, a gatekeeper is a company that provides services such as online search engines; online social networking services; video-sharing platform services; number-independent interpersonal communications services; operating systems; web browsers; and online advertising services that serve as gateways for businesses to reach end users.[12] The Digital Markets Act forbids these gatekeepers from processing the personal data of end users for the purpose of providing online advertisement services, combining or cross-using their personal data, or signing users into other services in order to combine their personal data, without their explicit consent.[13]

The penalties associated with violations of the Act give it some serious teeth. For noncompliance, the European Commission may impose a fine of up to 10% of the offending gatekeeper’s total worldwide turnover in the preceding year in the first instance, and up to 20% if the gatekeeper has committed the same or a similar infringement of specific articles at some point in the eight preceding years.[14] For any company, not limited to gatekeepers, the Commission may impose a fine of up to 1% of total worldwide turnover in the preceding year for failing to provide the Commission with information as required by various articles in the Act. Finally, in order to compel any company to comply with specific decisions of the Commission and other articles in the regulation, the Commission may impose periodic penalty payments of up to 5% of the average daily worldwide turnover in the preceding year, per day.[15]
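To give a rough sense of the scale those percentages imply, the short sketch below computes the fine ceilings described above for a hypothetical gatekeeper with EUR 80 billion in annual worldwide turnover. The turnover figure is invented for illustration; only the percentage caps come from the Act.

```python
# Back-of-the-envelope illustration of the DMA fine ceilings described above.
# The turnover figure is hypothetical; the percentages track the text of the Act.
annual_turnover_eur = 80_000_000_000              # assumed: EUR 80B worldwide turnover
daily_turnover_eur = annual_turnover_eur / 365    # rough average daily turnover

noncompliance_cap = 0.10 * annual_turnover_eur        # up to 10% for noncompliance
repeat_infringement_cap = 0.20 * annual_turnover_eur  # up to 20% for repeat infringements
information_failure_cap = 0.01 * annual_turnover_eur  # up to 1% for failing to provide information
periodic_penalty_per_day = 0.05 * daily_turnover_eur  # up to 5% of average daily turnover, per day

print(f"Noncompliance cap:          EUR {noncompliance_cap:,.0f}")
print(f"Repeat infringement cap:    EUR {repeat_infringement_cap:,.0f}")
print(f"Information failure cap:    EUR {information_failure_cap:,.0f}")
print(f"Periodic penalty (per day): EUR {periodic_penalty_per_day:,.0f}")
```

For this hypothetical firm, the ceilings work out to EUR 8 billion, EUR 16 billion, EUR 800 million, and roughly EUR 11 million per day, respectively, which is what gives the Act its teeth.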

If U.S. lawmakers who have backed bipartisan legislation giving President Biden a path to ban TikTok are truly concerned about preventing the spread of misinformation on the platform, and who truly believe, as Representative Gus Bilirakis claims to, that it is “literally leading to death” and that “[w]e must save our children from big tech companies” who allow harmful content to be viewed and spread without regulation, then perhaps Congress should simply regulate it.[16] After the grueling congressional hearing, the Chinese foreign ministry stated in a regular news briefing that it has never asked companies “to collect or provide data from abroad to the Chinese government in a way that violated local laws…”[17] During his testimony, Chew also argued that TikTok is no different from other social media giants and has even sought to put stronger safeguards in place as compared to its competitors.[18] Granted, some lawmakers have expressed support for comprehensive data privacy legislation that would apply to all tech companies.[19] Perhaps it would be more fruitful for U.S. lawmakers to focus on doing so.

Notes

[1] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[2] Id.

[3] Id.; David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[4] FAQ: When Will TikTok Be Banned?, MARCO RUBIO US SENATOR FOR FLORIDA (Mar. 27, 2023), https://www.rubio.senate.gov/public/index.cfm/press-releases?ContentRecord_id=C5313B3F-8173-4DC8-B1D9-9566F3E2595C.

[5] Id.

[6] Id.

[7] Factbox: Why a Broad US TikTok Ban is Unlikely to Take Effect Soon, REUTERS (Mar. 23, 2023), https://www.reuters.com/technology/why-broad-us-tiktok-ban-is-unlikely-take-effect-soon-2023-03-23/.

[8] Id.

[9] Id.

[10] Id.

[11] Council Regulation (EU) 2022/1925 on Contestable and Fair Markets in the Digital Sector, 2022 O.J. L 265/1 [hereinafter Digital Markets Act].

[12] Id., Art. 3, 2022 O.J. L 265/28, 30.

[13] Id. art. 5, at 33.

[14] Id. art. 30, at 51, 52.

[15] Id. art. 17, at 44.

[16] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.

[17] David Shepardson & Rami Ayyub, TikTok Congressional Hearing: CEO Shou Zi Chew Grilled by US Lawmakers, REUTERS (Mar. 24, 2023), https://www.reuters.com/technology/tiktok-ceo-face-tough-questions-support-us-ban-grows-2023-03-23/.

[18] Daniel Flatley, Five Key Moments From TikTok CEO’s Combative Hearing in Congress, BLOOMBERG (Mar. 23, 2023), https://www.bloomberg.com/news/articles/2023-03-23/five-key-moments-from-tiktok-ceo-s-combative-hearing-in-congress#xj4y7vzkg.

[19] Ben Kochman, Skeptical Congress Grills TikTok CEO Over Security Concerns, LAW360 (Mar. 23, 2023), https://plus.lexis.com/newsstand#/law360/article/1588929?crid=56f64def-fbff-4ba3-9db0-cbb3898308ce.


Taking Off: How the FAA Reauthorization Bill Could Keep Commercial Flights Grounded

James Challou, MJLST Staffer

The last year has been one that the airline industry is eager to forget. Not only did a record number of flight delays and cancellations occur, but the Federal Aviation Administration (FAA) suffered an extremely rare complete system outage and Southwest dealt with a holiday travel meltdown. These incidents, coupled with recent near collisions on runways, have drawn increased scrutiny from lawmakers in Congress as they face a September 30th deadline this year to pass the Federal Aviation Administration Reauthorization Act. And while the reauthorization bill is a hotly debated topic, lawmakers and industry professionals all agree that a failure to meet the deadline could spell disaster.

The need for reauthorization arises from the structure and funding system of the FAA. Reauthorization is a partial misnomer. Though the airline industry was deregulated in 1978, the practice of FAA reauthorization originated with the Airport and Airway Revenue Act of 1970, which created the Airport and Airway Trust Fund (Trust Fund) used to finance FAA investments. The authority to collect taxes and to spend from the Trust Fund must be periodically reauthorized to meet agency and consumer needs. Currently, the Trust Fund provides money for four major FAA accounts: Operations, Facilities & Equipment (F&E), Research, Engineering and Development (RE&D), and Grants-in-Aid for Airports. If the FAA’s authorization expired without an extension, the agency would be unable to spend revenues allocated from the Trust Fund. The flip side of this unique reauthorization process is that it offers a regular opportunity for Congress to hold the FAA accountable for unfulfilled mandates, to respond to new problems in air travel, and to advocate for stronger consumer protections, because changes enacted in reauthorization acts only span a set time period.

On top of the recent spate of industry complications and near disasters, Congress must sift through a myriad of other concerns and issues that pervade the airline industry as it weighs the upcoming reauthorization. Consumer protection has become an increasingly pressing and hot-button issue, as the deluge of canceled flights in the past year left many consumers disgruntled by the treatment and compensation they received. In fact, the Consumer Federation of America and several other consumer and passengers’ rights groups recently called upon the House Transportation Committee and the Senate Commerce Committee to prioritize consumer protections. Their requests include requiring compensation when consumers’ flights are delayed or canceled; holding airlines accountable for publishing unrealistic flight schedules; ending junk fee practices in air travel, including prohibiting fees for family seating and other such services, and requiring all-in pricing; ending federal preemption of airline regulation and allowing state attorneys general and individuals to hold airlines accountable; encouraging stronger DOT enforcement of passenger protections; and prioritizing consumer voices and experiences.

However, not all are sold on enhancing consumer protections via the reauthorization process. Senator Ted Cruz, the top Republican on the Commerce, Science, and Transportation Committee, has expressed opposition to increased agency and government intervention in the airline industry, citing free market and regulatory overreach concerns. Instead, Cruz and his allies have suggested that the FAA’s technology is outdated and that the agency’s sole focus should be on modernizing it.

Indeed, in the wake of the FAA system outage, most interested parties and lawmakers appear to agree that the agency’s aging technology needs updating. While at first glance this might seem to provide common ground, opinions on how to update the FAA’s technology are wide-ranging. For example, while some have flagged IT infrastructure and aviation safety systems as the technology to target in order to augment the FAA’s cybersecurity capacity, others are more concerned with giving the agency direction on the status of new airspace entrants such as drones and air taxis to ease their entry into the market. Even with cross-party agreement that the FAA’s technology requires at least a baseline update, there is little direction on what that means in practice.

Another urgent and seemingly undisputed issue facing the reauthorization effort is FAA staffing. The FAA’s workforce has shrunk severely in the past decade. Air traffic controllers, for example, number 1,000 fewer than a decade ago, and more than 10% are eligible to retire. Moreover, the shortage of technical operations employees has grown so severe that union officials say it is approaching crisis levels. As a result, most lawmakers agree that expanding the FAA’s workforce is paramount.

Despite the dearth of air traffic controllers and technical operations employees, however, this proposition has encountered roadblocks as well. Some lawmakers view the hiring push as an opportunity to increase diversity within the ranks of the FAA and have offered proposals toward that end. Currently, only 2.6% of aviation mechanics are women, while 94% of aircraft pilots are male and 93% are white. Lawmakers have made several proposals intended to rectify this disparity, centering on reducing the cost of entry into FAA professions. Republicans, however, have largely rebuffed these efforts, criticizing them as distractions from the chief concern of safety. Additionally, worker groups continue to air concerns about displacing qualified U.S. pilot candidates and undercutting current pilot pay. Any such modifications to the FAA reauthorization bill will require bipartisan support.

Finally, a lingering battle between Democrats and Republicans over the confirmation of President Biden’s nominee to head the FAA has hampered efforts to forge a bipartisan reauthorization bill. Cruz, again spearheading the Republican contingent, has decried Biden’s nominee for possessing no aviation experience and being overly partisan. Proponents, however, have pointed out that only two of the last five FAA administrators have had any aviation experience and lauded the nominee’s credentials and experience in the military. The surprisingly acrid fight bodes ominously for a reauthorization bill that will have to be bipartisan and is subject to serious time constraints.

The FAA reauthorization process provides valuable insight into how Congress sets agency directives. However, while safety and technology concerns remain the joint focal point of Congress’ intent for the reauthorization bill, in practice there seems to be little common ground between lawmakers. With the September 30th deadline looming, it is increasingly important that lawmakers cooperate to collectively hammer out a reauthorization bill. Failure to do so would severely cripple the FAA and the airline industry in general.


EJScreen: The Environmental Justice Tool That You Didn’t Know You Needed

Emma Ehrlich, Carlisle Ghirardini, MJLST Staffers

What is EJScreen?

EJScreen was developed by the Environmental Protection Agency (“EPA”) in 2010, 16 years after President Clinton’s Executive Order 12898 required federal agencies to begin keeping data regarding “environmental and human health risks borne by populations identified by race, national origin or income.” The program has been available to the public through the EPA’s website since 2015. It is a mapping tool that allows users to look at specific geographic locations and set overlays that show national percentiles for categories such as income, people of color, pollution, and health disparities. Though the EPA warns that EJScreen is simply a screening tool with limits, the agency uses the program in “[i]nforming outreach and engagement practices, [i]mplementing aspects of …permitting, enforcement, [and] compliance, [d]eveloping retrospective reports of EPA work, [and] [e]nhancing geographically based initiatives.”
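As a rough illustration of the kind of calculation behind a percentile overlay (not the EPA’s actual EJScreen methodology), the sketch below ranks census tracts nationally on a single environmental indicator and a single demographic indicator, then flags tracts above the 80th percentile on both. The file name, column names, and threshold are assumptions for illustration.

```python
# Illustrative percentile-overlay calculation -- not EPA's actual EJScreen
# methodology. Assumed CSV columns: tract_id, pm25, pct_low_income.
import pandas as pd

tracts = pd.read_csv("tract_indicators.csv")  # hypothetical tract-level data

# National percentile rank (0-100) of each tract on each indicator.
tracts["pm25_pctile"] = tracts["pm25"].rank(pct=True) * 100
tracts["income_pctile"] = tracts["pct_low_income"].rank(pct=True) * 100

# A simple screening flag: both the environmental burden and the demographic
# indicator fall above the 80th national percentile. As the EPA cautions about
# EJScreen itself, a flag like this marks areas for further review -- it is not
# a definitive environmental justice finding.
tracts["screen_flag"] = (tracts["pm25_pctile"] >= 80) & (tracts["income_pctile"] >= 80)

print(tracts.loc[tracts["screen_flag"], ["tract_id", "pm25_pctile", "income_pctile"]].head())
```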

As the EPA warns on its website, EJScreen does not contain all pertinent information regarding environmental justice, and other data should be collected when studying specific areas. However, EJScreen is still being improved and was updated to EJScreen 2.0 in 2022 to account for more data sets, including data on which areas lack access to food, broadband, and medical services, as well as health disparities such as asthma and life expectancy.

Current Uses

EJScreen software is now being used to evaluate the allocation of federal funding. In February of this year, the EPA announced that it will be allocating $1 billion of funding from President Biden’s Bipartisan Infrastructure Law to Superfund cleanup projects such as cleanups of sites containing retired mines, landfills, and processing and manufacturing plants. The EPA said that 60% of new projects are in locations that EJScreen indicated were subject to environmental justice concerns.

EJScreen is also used to evaluate permits. The EPA published its own guidance in August of 2022 to address environmental justice permitting procedures. The guidance encourages states and other recipients of financial assistance from the EPA to use EJScreen as a “starting point” when assessing whether a project under permit consideration may conflict with environmental justice goals. The EPA believes this will “make early discussions more meaningful and productive and add predictability and efficiency to the permitting process.” If an early EJScreen analysis brings a project into question, the EPA instructs permitters to consider additional data before making a permitting decision.

Another use of EJScreen is in the review of Title VI Civil Rights Act complaints. Using the authority provided by Title VI, the EPA has promulgated rules that prohibit any agency or group receiving federal funding from the EPA from operating in a discriminatory way based on race, color, or national origin. The rules also enable people to submit Title VI complaints directly to the EPA when they believe a funding recipient is acting in a discriminatory manner. If warranted by the complaint, the EPA will conduct an investigation. Attorneys who have reviewed EPA response letters announcing decisions to investigate have noted that the EPA often cites EJScreen when explaining why it decided to move forward with an investigation.

In October of 2022, the EPA sent a “Letter of Concern” to the Louisiana Department of Environmental Quality (“LDEQ”) and the Louisiana Department of Health stating that an initial investigation suggests the two departments acted in ways that had “disparate adverse impacts on Black residents” when issuing air permits or informing the public of health risks. In discussing a nearby facility’s harmful health effects on residents, the EPA cites data from EJScreen in concluding that the facility is much more likely to affect Black residents of Louisiana than non-Black residents. The letter also touches on incorrect uses of EJScreen, noting that LDEQ’s conclusion that a proposed facility would not affect surrounding communities was misleading because the LDEQ used EJScreen to show that there were no residents within a mile of the proposed facility but ignored a school located only 1.02 miles away.

Firms such as Beveridge & Diamond have recognized the usefulness of this technology. They urge industry decision makers to use this free tool, and others similar to it, to preemptively consider environmental justice issues that their permits and projects may face when being reviewed by the EPA or local agencies.

Conclusion

In conclusion, EJScreen has the potential to be a useful tool, especially as the EPA continues to update it with data for additional demographics. However, users of the software should heed the EPA’s warning that it is simply a screening tool. It is likely best used to rule out locations for certain projects, rather than relied on by itself to approve projects in particular locations, which requires more recent, locally collected data.

Lastly, EJScreen is just one of many environmental justice screening tools being used and developed. Multiple states have been developing their own screening programs, and there is research showing that using state screening software may be more beneficial than national software. An environmental justice screening tool was also developed by the White House Council on Environmental Quality in 2022. Its Climate and Economic Justice Screening Tool is meant to assist the government in assigning federal funding to disadvantaged communities. The consensus seems to be that all available screening tools are helpful in at least some way and should be consulted by funding recipients and permit applicants in the early rounds of their decision making processes.


Hazardous Train Derailment: How a Poor Track Record for Private Railway Company May Impact Negligence Lawsuit Surrounding Major Incident

Annelise Couderc, MJLST Staffer

The Incident

On Friday, February 3rd, a train of about 150 cars, many carrying hazardous chemicals, derailed in East Palestine, Ohio. The derailment resulted in the leakage and combustion of chemicals hazardous to both humans and the environment from an estimated 50 train cars. The mayor of East Palestine initially ordered the city evacuated, neighboring towns were told to stay indoors, and residents were told they could return five days after the explosion. According to a member of the National Transportation Safety Board, 14 cars containing multiple hazardous chemicals, including vinyl chloride, a chemical used in plastic products that is associated with an increased risk of liver cancer and cancer generally, were “exposed to fire,” sending combustion products into the air that residents could inhale or that could leach into the environment. Residents have reported foul smells and headaches since the incident, and locals have reported seeing dead fish in waterways.

The train and railroad in question are owned and operated by Norfolk Southern, a private railway company. Norfolk Southern transports a variety of materials but is best known for hauling coal through the East and Midwest. To prevent a large explosion from the chemicals remaining in the train cars, Norfolk Southern conducted a “controlled release” of the chemicals on Monday, February 6th, discharging “potentially deadly fumes into the air.” While the controlled release was likely necessary for immediate safety purposes, exposure to vinyl chloride as a gas can be very dangerous, leading to headaches, nausea, liver cancer, and birth defects.

Government and Norfolk Southern Respond

Following the derailment and fires, a variety of governmental authorities, along with Norfolk Southern, have converged to tackle the issue. The Environmental Protection Agency (EPA) and Norfolk Southern are monitoring air quality and giving guidance on when investigators and firefighters may enter the scene safely. In a joint statement on February 8th, the Governors of Ohio and Pennsylvania, as well as East Palestine’s Fire Chief, announced that evacuated residents could return to their homes. As an act of good faith, Norfolk Southern enlisted an independent contractor to work with local and federal officials to test air and water quality, and pledged $25,000 to the American Red Cross and its shelters to help residents. The Ohio National Guard has also been brought onto the scene.

As more information is released, things are heating up in the press as reporters try to learn more about what happened. At a press conference on February 8th with Ohio’s governor, Mike DeWine, the commander of the Ohio National Guard pushed a cable news reporter who refused to stop his live broadcast after being asked by authorities; the reporter was subsequently arrested and held in jail for five hours. DeWine denies authorizing the arrest, and a Pentagon official has condemned the behavior as unacceptable. The Ohio attorney general will lead an investigation into the arrest.

Lawsuit Filed Alleges Negligence

Norfolk Southern’s history with brake safety, as well as broader operational changes in the railroad sector, may play a role in the lawsuit recently filed in response to the incident. East Palestine residents and a local business owner are alleging negligence in a lawsuit against Norfolk Southern in federal court. Union organizers have expressed concerns that operating changes and cost-cutting measures, such as the elimination of one-third of the workforce in the last six years, have resulted in less thorough inspections and less preventative maintenance. Although railroads are considered the safest way to transport hazardous chemicals, Federal Railroad Administration (FRA) data show that hazardous chemicals were released in 11 accidents in 2022 and in 20 accidents in each of 2020 and 2018. There has recently been an uptick in derailments, and although most occur in remote locations, train car derailments have killed people in the past.

The class-action lawsuit alleges negligence against Norfolk Southern for “failing to maintain and inspect its tracks; failing to maintain and inspect its rail cars; failing to provide appropriate instruction and training to its employees; failing to provide sufficient employees to safely and reasonably operate its trains; and failing to reasonably warn the general public.” The plaintiffs allege the company should have known of the dangers posed and therefore breached its duty to the public.

Particularly relevant to this accident may be Norfolk Southern’s lobbying efforts against the mandatory use of Electronically Controlled Pneumatic (ECP) brakes. In 2014, likely in response to increased incidents, the Obama administration “proposed improving safety regulations for trains carrying petroleum and other hazardous materials,” which included brake improvements. The 2015 Fixing America’s Surface Transportation (FAST) Act required the Department of Transportation (DOT) to test ECP braking and the Government Accountability Office to calculate the costs and benefits of ECP braking.[1] The U.S. Government Accountability Office (GAO) conducted a cost-benefit analysis of ECP braking and found that the costs outweighed the benefits.[2] The FRA, the Pipeline and Hazardous Materials Safety Administration (PHMSA), and DOT subsequently abandoned the ECP brake provision of the regulation in 2017. The move followed a change in administration and over $6 million in lobbying money directed toward GOP politicians and the Trump administration by the Association of American Railroads, a lobbying group of which Norfolk Southern is a dues-paying member.

Despite Norfolk Southern touting its use of ECP brakes in its 2007 quarterly report, the company’s lobbying group opposed mandatory ECP brakes, stating: “In particular, the proposals for significantly more stringent speed limits than in place today and electronically controlled pneumatic (ECP) brakes could dramatically affect the fluidity of the railroad network and impose tremendous costs without providing offsetting safety benefits.” Although the type of brakes on the train in East Palestine is not yet known, a former senior FRA official told a news organization that ECP brakes would have reduced the severity of the accident. Whether or not the failure to use ECP braking while hauling hazardous materials constitutes negligence, given the federal government’s finding that the brakes are not beneficial enough to mandate, the fact that Norfolk Southern opposed their implementation may still influence the litigation.

Although the current lawsuit alleges negligence against Norfolk Southern, the private company, it is also possible to approach the legal debate from an agency-law perspective. Did PHMSA and the FRA permissibly interpret the FAST Act in declining to include ECP braking requirements when ECP braking was explicitly mentioned in the Act’s text? Did the agencies reach an acceptable conclusion about ECP braking based on the data? If a court were to find that the agencies’ decisions fell outside the scope of the authority granted to them by the FAST Act, or that the decision was arbitrary and capricious, the agencies could be forced to reevaluate the regulation regarding ECP braking. Congress could also pass more specific legislation in response to increase safety measures and prevent something like this from happening again.

The events from the train derailment in Ohio are still unfolding, and many variables remain unknown. It will be interesting to see how the facts develop and how, or whether, residents are able to recoup their losses and recover from the emotional distress this event undoubtedly caused.

Notes

[1] Regulations.gov, regulations.gov (search in search bar for “phmsa-2017-0102”; then choose “Electronically Controlled Pneumatic Braking- Updated Regulatory Impact Analysis”; then click “download.”)

[2] Regulations.gov, regulations.gov (search in search bar for “phmsa-2017-0102”; then choose “Technical Corrections to the Electronically Controlled Pneumatic Braking Final Updated RIA December 2017”; then click “download.”)


Beef (and Residual Hormones?). It’s What’s for Dinner.

Kira Le, MJLST Staffer

The beef industry in the United States has been using hormones, both natural and synthetic, to increase the size of cattle prior to slaughter for more than a century.[1] Capsules implanted under the skin behind a cow’s ear release specific doses of hormones over time with the goal of increasing the animal’s size more quickly. Because the use of these hormones involves both drug regulation and food safety regulation, the U.S. Food and Drug Administration (FDA) and the United States Department of Agriculture (USDA) share responsibility for ensuring the safety of the practice and regulating its use.[2] According to the FDA, “scientific data” is used to establish “acceptable” safe limits for hormones in meat by the time it is consumed.[3] Agricultural science experts note that the naturally occurring hormones used in beef production, such as estrogen, are used in amounts much smaller than those found in other common foods, such as eggs and tofu.[4] However, the debate within the scientific community, and between jurisdictions that allow the sale of hormone-treated beef (such as the United States) and those that have banned its importation (such as the European Union), is still raging in 2022 and has led to significant consumer distrust of the beef industry.[5] With research released earlier this year presenting opposing conclusions about the safety of synthetic hormones in the beef industry, the FDA has a responsibility to acknowledge evidence suggesting that such practices may be harmful to human health.

Some defend the use of hormones in the beef industry as perfectly safe and, at this point, necessary to sustainably feed a planet where the demand for meat continues to increase with a growing population. Others, such as the European Union and China, both of which have restricted the importation of beef from cattle implanted with growth-promoting hormones, argue that the practice threatens human health.[6] For example, a report from the Food Research Collaboration found that a hormone routinely used in United States beef production posed a significant risk of cancer.[7] Such a finding is reminiscent of the not-too-distant past, when the known carcinogen diethylstilbestrol (DES) was used in U.S. cattle production and led to dangerous meat being stocked on grocery store shelves.[8]

This year, research published in the Journal of Applied Animal Research discussed the effects that residual hormones left in beef and the environment have on human health in the United States.[9] Approximately 63% of beef cattle in the United States are implanted with hormones, most of them synthetic.[10] Despite assurances from organizations and agencies such as the FDA that the use of these synthetic hormones in cattle production is safe, the residues left behind may be carcinogenic and/or lead to reproductive or developmental problems in humans.[11] Furthermore, the National Residue Program (NRP), housed in the USDA, is the “only federal effort that routinely examines food animal products for drug residues,” yet it examines only tissues not commonly consumed, such as the liver and kidney.[12] Researchers Quaid and Abdoun offer the example of Zeranol, a genotoxic synthetic hormone used in U.S. beef production that activates estrogen receptors and promotes cell proliferation in the mammary glands, which may result in breast cancer.[13] They also note the problem of residual hormones in the environment surrounding cattle production sites, which have been found to reduce male reproductive health and increase the risk of some endocrine cancers.[14]

Also this year, researchers published an article in the Journal of Animal Science claiming that despite “growing concern” about the effects of residual hormones on human health, including the earlier onset of puberty in girls and an increase in estrogen-related diseases attributed to excessive beef consumption, research shows that cattle treated with hormones, “when given at proper administration levels, do not lead to toxic or harmful levels of hormonal residues in their tissues.”[15] The researchers concluded that the hormones have no effect on human health and are not the cause of disease.[16]

Perhaps it is time for the FDA to acknowledge and address the scientific disagreements over the safety of hormones, especially synthetic hormones, in beef production, and to reassure consumers that players in the agriculture industry are abiding by safety regulations. Better yet, considering the recency of the research, the inconsistency of its conclusions, and the seriousness of the issue, formal hearings, held by either the FDA or Congress, may be necessary to rebuild consumer trust in the U.S. beef industry.

Notes

[1] Synthetic Hormone Use in Beef and the U.S. Regulatory Dilemma, DES Daughter (Nov. 20, 2016), https://diethylstilbestrol.co.uk/synthetic-hormone-use-in-beef-and-the-us-regulatory-dilemma/.

[2] Id.

[3] Steroid Hormone Implants Used for Growth in Food-Producing Animals, U.S. Food & Drug Admin. (Apr. 13, 2022), https://www.fda.gov/animal-veterinary/product-safety-information/steroid-hormone-implants-used-growth-food-producing-animals.

[4] Amanda Blair, Hormones in Beef: Myths vs. Facts, S.D. State Univ. Extension (July 13, 2022), https://extension.sdstate.edu/hormones-beef-myths-vs-facts.

[5] See Julia Calderone, Here’s Why Farmers Inject Hormones Into Beef But Never Into Poultry, Insider (Mar. 31, 2016), https://www.businessinsider.com/no-hormones-chicken-poultry-usda-fda-2016-3 (discussing the debate within the scientific community over whether the use of hormones in animals raised for human consumption is a risk to human health).

[6] New Generation of Livestock Drugs Linked to Cancer, Rafter W. Ranch (June 8, 2022), https://rafterwranch.net/livestock-drugs-linked-to-cancer/.

[7] Id.

[8] Synthetic Hormone Use in Beef and the U.S. Regulatory Dilemma, DES Daughter (Nov. 20, 2016), https://diethylstilbestrol.co.uk/synthetic-hormone-use-in-beef-and-the-us-regulatory-dilemma/.

[9] Mohammed M. Quaid & Khalid A. Abdoun, Safety and Concerns of Hormonal Application in Farm Animal Production: A Review, 50 J. of Applied Animal Rsch. 426 (2022).

[10] Id. at 428.

[11] Id. at 429–30.

[12] Id. at 430.

[13] Id. at 432–33.

[14] Id. at 435.

[15] Holly C. Evans et al., Harnessing the Value of Reproductive Hormones in Cattle Production with Considerations to Animal Welfare and Human Health, 100 J. of Animal Sci. 1, 9 (2022).

[16] Id.


Electric Scooter Regulations in Winter: Why the “Brake” in Service?

Warren Cormack, MJLST Staffer

In the summer of 2018, the city of Minneapolis began a pilot project to introduce 600 electric rental scooters, primarily in the downtown area. The city approved operations for Jump, Lyft, Spin, and Lime in 2019. Two thousand scooters were slated to hit the Minneapolis streets, but the companies deployed fewer than one thousand for much of the 2019 season. Still, half a year ago, ride-share scooters from the 2019 authorization could be found all over the streets of Minneapolis, and users “racked up about 225,000 rides.”

Minneapolis is a city with a strong winter biking tradition. Yet in February, with winter in full swing, electric scooters are nowhere to be seen. What happened?

The short answer is that Minneapolis’ second pilot program for electric scooters ended in November 2019. Digging deeper, though, reveals some interesting dynamics affecting the use of electric scooters in winter.

Though scooter companies initially targeted warm-weather cities, colder cities like Milwaukee, Boston, and Minneapolis now face the challenges of operating electric scooters in cold weather. For example, when snow emergencies hit, cities may have to ask companies to remove scooters from the roads.

An important concern for cities is safety, which was a major reason Minneapolis ended the 2019 scooter pilot in late November. Minneapolis scooter companies agreed that 6 to 10 inches of snow was too much to operate in safely. Scooter companies are already being sued for the injuries their scooters cause, and the odds that someone will be injured while riding on snowy ground are higher than when the streets are clear. Still, scooter companies have shown a desire to keep their scooters running unless a winter storm hits.

Though Denver’s weather is not as cold as Minneapolis’, it does snow there. Denver’s scooters arrived in May 2018, and the city regulated their numbers within two months. Still, the city did not restrict the months in which the scooters would be available for use. A spokeswoman for Denver Public Works reportedly said: “I think riders are going to have to make their own choices if they want to ride an electric scooter in the winter months.” Denver’s comparative warmth may affect how the city balances safety concerns.

The cold weather is not only an issue for riders. “Scooter companies are still learning how their vehicles perform in various weather conditions and from regular use.” Scooters generally operate in bike lanes or on sidewalks, and in either location their small wheels and limited batteries can undermine their suitability for winter weather. The major scooter models currently in use have minimum operating temperatures of fourteen degrees Fahrenheit. Possibly in response to these limits, Bird (one of the major scooter companies) designed a scooter with a larger battery and more pronounced tire treads. Tier is another company developing scooters that can handle cold weather. The effects of winter may be over-hyped, however: according to one scooter expert, scooters may become slower during the winter, but the cold does not damage their batteries.

A final winter issue is simply a lack of riders. Even for European scooter companies that operate throughout the year, about half of riders stop riding during the winter. Minneapolis data also reflect a roughly 50% decrease from summer’s peak to November. College riders go home for winter break, prompting companies to reduce the number of deployed scooters. A lack of winter riders caused Lime to ramp down operations in Milwaukee. Though people in relatively snowy cities like Denver do use scooters through the winter, scooter companies facing higher maintenance costs and lower ridership may be wise to reduce their fleet sizes.

Scooters may disappear for the winter because of safety concerns, maintenance costs, or lower ridership, whether as a result of city policy or the companies’ own decisions. If the companies continue to make their scooters more capable of enduring the winter, cities may increasingly find themselves at odds with electric scooter companies’ desire to stay open for business.