Social Media Platforms Won’t “Like” This: How Aggrieved Users Are Circumventing the Section 230 Shield

Claire Carlson, MJLST Staffer

 

Today, almost thirty years after modern social media platforms were introduced, 93% of teens use social media on a daily basis.[1] On average, teens spend nearly five hours a day on social media platforms, with a third reporting that they are “almost constantly” active on one of the top five leading platforms.[2] As social media usage has surged, concerns have grown among users, parents, and lawmakers about its impacts on teens, with primary concerns including cyberbullying, extremism, eating disorders, mental health problems, and sex trafficking.[3] In response, parents have brought a number of lawsuits against social media companies alleging the platforms market to children, connect children with harmful content and individuals, and fail to take the steps necessary to keep children safe.[4]

 

When facing litigation, social media companies often invoke the immunity granted to them under Section 230 of the Communications Decency Act.[5] 47 U.S.C. § 230 states, in relevant part, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[6] Federal courts are generally in consensus, interpreting the statutory language as providing broad immunity for social media providers.[7] Under this interpretive framework, social media companies can be held liable only for content they author, whereas Section 230 shields them from liability for harm arising from information or content posted by third-party users of their platforms.[8]

 

In V.V. v. Meta Platforms, Inc., the plaintiffs alleged that the popular social media platform Snapchat intentionally encourages use by minors and consequently facilitated connections between their twelve-year-old daughter and sex offenders, leading to her assault.[9] The court held that the facts of the case fell squarely within the intended scope of Section 230, as the alleged harm was the result of the content and conduct of third-party platform users, not Snapchat.[10] The court expressed that Section 230 precedent required it to deny relief to the plaintiffs, despite circumstances that evoked outrage, asserting that it lacked the judicial authority to do otherwise absent legislative action.[11] Consequently, the court held that Section 230 shielded Snapchat from liability for the harm caused by the third-party platform users and that the plaintiffs’ only option for redress was to bring suit against those users directly.[12]

 

After decades of cases like V.V., in which Section 230 has shielded social media companies from liability, plaintiffs are taking a new approach rooted in tort law. While Section 230 provides social media companies immunity from harm caused by their users, it does not shield them from liability for harm caused by their own platforms and algorithms.[13] Accordingly, plaintiffs are trying to bypass the Section 230 shield with product liability claims alleging that social media companies knowingly, and often intentionally, design defective products aimed at fostering teen addiction.[14] Many of these cases analogize social media companies to tobacco companies, maintaining that they are aware of the risks associated with their products and deliberately conceal them.[15] These claims coincide with the U.S. Surgeon General and more than forty state attorneys general imploring Congress to pass legislation mandating warning labels on social media platforms that emphasize the risk of teen addiction and other negative health impacts.[16]

Courts stayed tort addiction cases and postponed rulings last year in anticipation of the Supreme Court’s rulings in the first Section 230 immunity cases to come before it.[17] In the companion cases Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh, the Supreme Court was expected to shed light on the scope of Section 230 immunity by deciding whether social media companies are immune from liability when a platform’s algorithm recommends content that causes harm.[18] In both, the Court declined to answer the Section 230 question and decided the cases on other grounds.[19]

 

Since then, while claims arising from third-party content continue to be dismissed, social media addiction cases have received favorable treatment in both state and federal courts.[20] In a federal multidistrict litigation (MDL) proceeding, the presiding judge permitted hundreds of addiction cases alleging defective product (platform and algorithm) design to move forward. In September, the MDL judge issued a case management order suggesting an early 2026 trial date.[21] Similarly, a California state judge found that Section 230 does not shield social media companies from liability in hundreds of addiction cases, because the alleged harms are based on the companies’ design and operation of their platforms, not the content on them.[22] Thus, social media addiction cases are successfully using tort law to bypass Section 230 where their predecessor cases failed.

 

With hundreds of pending social media cases and the Supreme Court’s silence on the scope of Section 230 immunity, the future of litigating and understanding social media platform liability is uncertain.[23] However, the preliminary results in state and federal courts evince that Section 230 is not the infallible immunity shield that social media companies have grown to rely on.

 

Notes

 

[1] Leon Chaddock, What Percentage of Teens Use Social Media?, Sentiment (Jan. 11, 2024), https://www.sentiment.io/how-many-teens-use-social-media/#:~:text=Surveys%20suggest%20that%20over%2093,widely%20used%20in%20our%20survey. In the context of this work, the term “teens” refers to people aged 13–17.

[2] Jonathan Rothwell, Teens Spend Average of 4.8 Hours on Social Media Per Day, Gallup (Oct. 13, 2023), https://news.gallup.com/poll/512576/teens-spend-average-hours-social-media-per-day.aspx; Monica Anderson, Michelle Faverio & Jeffrey Gottfried, Teens, Social Media and Technology 2023, Pew Rsch. Ctr. (Dec. 11, 2023), https://www.pewresearch.org/internet/2023/12/11/teens-social-media-and-technology-2023/.

[3] Chaddock, supra note 1; Ronald V. Miller, Social Media Addiction Lawsuit, Lawsuit Info. Ctr. (Sept. 20, 2024), https://www.lawsuit-information-center.com/social-media-addiction-lawsuits.html#:~:text=Social%20Media%20Companies%20May%20Claim,alleged%20in%20the%20addiction%20lawsuits.

[4] Miller, supra note 3.

[5] Tyler Wampler, Social Media on Trial: How the Supreme Court Could Permanently Alter the Future of the Internet by Limiting Section 230’s Broad Immunity Shield, 90 Tenn. L. Rev. 299, 311–13 (2023).

[6] 47 U.S.C. § 230 (2018).

[7] V.V. v. Meta Platforms, Inc., No. X06UWYCV235032685S, 2024 WL 678248, at *8 (Conn. Super. Ct. Feb. 16, 2024) (citing Brodie v. Green Spot Foods, LLC, 503 F. Supp. 3d 1, 11 (S.D.N.Y. 2020)).

[8] V.V., 2024 WL 678248, at *8; Poole v. Tumblr, Inc., 404 F. Supp. 3d 637, 641 (D. Conn. 2019).

[9] V.V., 2024 WL 678248, at *2.

[10] V.V., 2024 WL 678248, at *11.

[11] V.V., 2024 WL 678248, at *11.

[12] V.V., 2024 WL 678248, at *7, 11.

[13] Miller, supra note 3.

[14] Miller, supra note 3; Isaiah Poritz, Social Media Addiction Suits Take Aim at Big Tech’s Legal Shield, BL (Oct. 25, 2023), https://www.bloomberglaw.com/bloomberglawnews/tech-and-telecom-law/X2KNICTG000000?bna_news_filter=tech-and-telecom-law#jcite.

[15] Kirby Ferguson, Is Social Media Big Tobacco 2.0? Suits Over the Impact on Teens, Bloomberg (May 14, 2024), https://www.bloomberg.com/news/videos/2024-05-14/is-social-media-big-tobacco-2-0-video.

[16] Miller, supra note 3.

[17] Miller, supra note 3; Wampler, supra note 5, at 300, 321; In re Soc. Media Adolescent Addiction/Pers. Inj. Prod. Liab. Litig., 702 F. Supp. 3d 809, 818 (N.D. Cal. 2023) (“[T]he Court was awaiting the possible impact of the Supreme Court’s decision in Gonzalez v. Google. Though that case raised questions regarding the scope of Section 230, the Supreme Court ultimately did not reach them.”).

[18] Wampler, supra note 5, at 300, 339-46; Leading Case, Twitter, Inc. v. Taamneh, 137 Harv. L. Rev. 400, 409 (2023).

[19] Twitter, Inc. v. Taamneh, 598 U.S. 471, 505 (2023) (holding that the plaintiff failed to plausibly allege that defendants aided and abetted terrorists); Gonzalez v. Google LLC, 598 U.S. 617, 622 (2023) (declining to address Section 230 because the plaintiffs failed to state a plausible claim for relief).

[20] Miller, supra note 3.

[21] Miller, supra note 3; In re Soc. Media Adolescent Addiction/Pers. Inj. Prod. Liab. Litig., 702 F. Supp. 3d at 862.

[22] Miller, supra note 3; Poritz, supra note 14.

[23] Leading Case, supra note 18, at 400, 409.


You Can Protect Your Data . . . Once.

Jessica Schmitz, MJLST Staffer

We scan our faces to access our phones. We scan our hands to save five minutes in the TSA line. Teslas track our eyes to ensure we’re watching the road.[1] Our biometric data is constantly being collected by private entities. Though states like California and Texas are attempting to implement new safeguards for their constituents, Illinois recently rolled back protections under its renowned Biometric Information Privacy Act (BIPA).[2] BIPA protected consumers from private entities that deceptively or illegally collected biometric data.[3] The new rules overturned the Illinois Supreme Court’s ruling in Cothron v. White Castle System, Inc. that allowed claims to accrue for each violation of BIPA’s provisions.[4] While tech companies and liability insurers are no doubt breathing a sigh of relief at the reforms, litigants going forward may be left without a remedy if their biometric data is mishandled more than once. Below is a history of BIPA’s passing and impact, followed by the likely ramifications of the new reforms.

BIPA’s Passing Was an Early Victory for Data Privacy Protections

BIPA, passed in 2008, was one of the earliest consumer protection laws for biometric data collection. At that time, major corporations were piloting finger-scanning and facial recognition technology in major cities, including Chicago. The law was designed not only to provide recourse for consumers, but also to prescribe preventative measures for companies to follow. BIPA’s protections are broad: companies must publish their data collection and retention policies to the public and cannot retain the information they collect for more than three years.[5] Companies must inform users that they are collecting the data, disclose what is being collected, why it is being collected, and how long they intend to store it.[6] Companies cannot disclose someone’s biometric data without express consent, nor can they profit from the data in any way.[7] Lastly, the data must be stored at least as securely as the company stores its other confidential data.[8]

Unlike laws in other states, BIPA provided a private right of action to enforce data privacy protections. Following its passage, waves of lawsuits were filed against major corporations, including Amazon, Southwest Airlines, Google, and Facebook.[9] Under BIPA, companies could be liable for purchasing, improperly collecting, improperly storing, or disseminating biometric data, even if the data was never mishandled.[10] Plaintiffs could recover for every violation of BIPA, and could do so without stating an injury or alleging damages.[11] It is no surprise that BIPA class actions tended to favor plaintiffs, often resulting in large settlements or jury verdicts.[12] Since litigants could collect damages for every violation of BIPA’s provisions, it was difficult for companies to assess their potential liability. Every member of a class action could allege multiple violations, and, if found liable, companies would owe, at minimum, $1,000 per violation. The lack of predictability often pushed corporate liability insurers into settling rather than risking such large payouts.

The 2023 ruling in Cothron implored the legislature to address concerns of disproportionate corporate liability, stating, “We respectfully suggest that the legislature . . . make clear its intent regarding the assessment of damages under the Act.”[13] The legislature rose to the challenge, fearing the court’s interpretation could bankrupt smaller and mid-size companies.[14] The new provisions of BIPA target the court’s ruling, providing:

“For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.
(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient. (eff. 8-2-24.)”

Though not left completely without redress, Illinois constituents may now recover only once if their biometric data is recklessly or deceptively collected or disseminated in the same manner.

BIPA Reforms Mark a Trend Towards Relaxing Corporate Responsibility

The rollback of BIPA’s provisions comes at a time when consumers need them most. The stakes for mishandling biometric data are much higher than those for other collected data. While social security numbers and credit card numbers can be canceled and changed, with varying degrees of ease, most constituents would be unwilling to change their faces and fingerprints for the sake of _____.[15] Ongoing and future technological developments, such as the rise of AI, heighten the potential fallout from BIPA violations. AI-generated deepfakes are becoming more prevalent, targeting both major celebrities like Taylor Swift and Pokimane and our family members through phishing schemes.[16] These crimes rely on biometric data, utilizing our voices and faces to create realistic depictions of people, and can even recreate our speech cadence and body movements.[17] For victims, recovering on a per-person basis instead of a per-violation basis means they could be harmed again after recovering against a company, with no further redress.

Corporations, however, have been calling for reforms for years, and believe that these changes will reduce insurance premiums and docket burdens.[18] Prior to the changes, insurers had begun removing BIPA coverage from litigation insurance plans and adding strict requirements for defense coverage.[19] Insurers would also encourage companies to settle to avoid judgments on a per-violation basis.[20]

Advocates for BIPA reform believe the new changes will reduce insurance costs while still providing litigants with fair outcomes. Though individual litigants may recover only once, they can still recover actual damages if a company’s actions resulted in more harm than simply violating BIPA’s provisions. Recovery on a per-person basis can still result in hefty settlements or awards that will hold companies accountable for wrongdoing. Instead of stifling corporate accountability, proponents believe the reforms will result in fairer settlements and reduce litigation costs overall.

Without further guidance from the legislature, how the new provisions are applied will be left to state and federal courts to interpret. Specifically, the legislature left one looming question unanswered: do the restrictions apply retroactively? If litigants can recover from an entity only once, are past litigants barred from participating in future actions regarding similar violations? Or do they get one last shot at holding companies accountable? If they lost in a prior suit, can they join a new one? In trying to relieve the court system, the legislature has ironically given courts the loathsome task of interpreting BIPA’s vague new provisions. Litigants and defendants will likely fight tooth and nail to create favorable case law, which is unlikely to be uniform across jurisdictions.

 

Notes

[1] Model Y Owner’s Manual: Cabin Camera, Tesla, https://www.tesla.com/ownersmanual/modely/en_us/GUID-EDAD116F-3C73-40FA-A861-68112FF7961F.html (last visited Sept. 16, 2024).

[2] See generally, California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100 (West 2018); Capture or Use of Biometric Identifier, Tex. Code Ann. § 503.001 (2017); Abraham Gross, Illinois Biometric Privacy Reform Eases Coverage Woes, LexisNexis Law360 (Aug. 8, 2024, 7:13 PM), https://plus.lexis.com/newsstand/law360-insurance-authority/article/1868014/?crid=debb3ba9-22a1-41d6-920e-c1ce2b7a108d&cbc=0,0,0.

[3] Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14/5 (2024) [hereinafter BIPA].

[4] Cothron v. White Castle System, Inc., 216 N.E.3d 918, 924 (Ill. 2023).

[5] BIPA, supra note 3, at sec. 15a.

[6] Id. at sec. 15b.

[7] Id. at sec. 15c-d.

[8] Id. at sec. 15e.

[9] See generally, In re Facebook Biometric Info. Priv. Litig., No. 3:15-CV-03747-JD, 2018 WL 2197546 (N.D. Cal. May 14, 2018); Rivera v. Google Inc., 238 F.Supp.3d 1088 (N.D.Ill., 2017); Miller v. S.W. Airlines Co., No. 18 C 86, 2018 WL 4030590 (N.D. Ill. Aug. 23, 2018), aff’d, 926 F.3d 898 (7th Cir. 2019).

[10] BIPA, supra note 3, at sec. 15.

[11] Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197, 1206 (Ill. 2019).

[12] See, Lauraann Wood, $9M White Castle Fingerprint BIPA Deal Clears Final Approval, LexisNexis Law360 (Aug. 1, 2024, 2:18 PM) https://www.law360.com/articles/1864687?from_lnh=true; Lauraann Wood, BNSF’s $75M BIPA Deal With Truckers Nears Final OK, LexisNexis Law360 (June 17, 2024, 8:54 AM) https://www.law360.com/articles/1848754?from_lnh=true.

[13] Cothron, 216 N.E.3d at 929 (Ill. 2023).

[14] Updates to Illinois’ Biometric Privacy Signed Into Law Thanks to Cunningham, Office of Bill Cunningham: State Senator, https://www.senatorbillcunningham.com/news/508-updates-to-illinois-biometric-privacy-signed-into-law-thanks-to-cunningham (Aug. 2, 2024, 3:13PM).

[15] See, BIPA, supra note 3, at sec. 5c.

[16] Dan Merica & Ali Swenson, Trump’s Post of Fake Taylor Swift Endorsement Is His Latest Embrace Of AI-Generated Images, AP News (Aug. 20, 2024, 3:48 PM), https://apnews.com/article/trump-taylor-swift-fake-endorsement-ai-fec99c412d960932839e3eab8d49fd5f; Bianca Britton, They Appeared in Deepfake Porn Videos Without Their Consent. Few Laws Protect Them, NBC News (Feb. 14, 2023, 2:48 PM), https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372; Charles Bethea, The Terrifying A.I. Scam That Uses Your Loved One’s Voice, The New Yorker (Mar. 7, 2024), https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice.

[17] Catherine Bernaciak & Dominic A. Ross, How Easy is it to Make and Detect a Deepfake?, Carnegie Mellon Univ.: SEI Blog (Mar. 14, 2022), https://insights.sei.cmu.edu/blog/how-easy-is-it-to-make-and-detect-a-deepfake/.

[18] Michael C. Andolina et al., Emerging Issues and Ambiguities Under Illinois’ Biometric Information Privacy Act, Practitioner Insights Commentaries (May 21, 2020), https://1.next.westlaw.com/Document/Ib04759309b7b11eabea3f0dc9fb69570/View/FullText.html?listSource=Foldering&originationContext=clientid&transitionType=MyResearchHistoryItem&contextData=%28oc.Default%29&VR=3.0&RS=cblt1.0.

[19] Gross, supra note 2.

[20] Id.


Moderating Social Media Content: A Comparative Analysis of European Union and United States Policy

Jaxon Hill, MJLST Staffer

In the wake of the Capitol Hill uprising, former President Donald Trump had several of his social media accounts suspended.1 Twitter explained that its decision to suspend Trump’s account was “due to the risk of further incitement of violence.”2 Though this decision caught a lot of attention in the public eye, Trump was not the first figure in the political sphere to have his account suspended.3 In response to the social media platforms’ alleged censorship, some states, mainly Florida and Texas, attempted to pass anti-censorship laws that limit the ability of social media companies to moderate content.4

Now, as litigation ensues for Trump and social media companies fighting the Texas and Florida legislation, the age-old question rears its ugly head: what is free speech?5 Do social media companies have a right to limit free speech? Social media companies are not bound by the First Amendment.6 Thus, barring valid legislation that says otherwise, they are allowed to restrict or moderate content on their platforms. But should they, and, if so, how? How does the answer to these questions differ for public officials on social media? To analyze these considerations, it is worthwhile to look beyond the borders of the United States. This analysis is not meant to presuppose that there is any wrongful conduct on the part of social media companies. Rather, this serves as an opportunity to examine an alternative option to social media content moderation that could provide more clarity to all interested parties. 

In the European Union, social media companies are required to provide clear and specific information whenever they restrict content on their platforms.7 These statements are called “Statements of Reasons” (“SoRs”), and they must include a reference to whatever law the post violated.8 All SoRs are made publicly available to ensure transparency between users and the organization.9

An analysis of these SoRs yielded mixed results as to their efficacy, but it opened the door for potential improvements.10 Ultimately, the analysis showed inconsistencies among the various platforms in how and why they moderate content, but those inconsistencies may give legislators an opening to clarify social media guidelines.11

Applying this same principle domestically could allow for greater transparency among consumers, social media companies, and the government. By providing a publicly available rationale behind any moderation decision, social media companies could continue to remove illegal content without straddling the line of censorship. It is worth noting, though, that this policy likely carries negative financial implications. With states potentially implementing vastly different policies, social media companies may have to incur increased costs to ensure they are in compliance wherever they operate.12 Nevertheless, absorbing these costs up front may be preferable to “censorship” or “extremism, hatred, [or] misinformation and disinformation.”13

In terms of the specific application to government officials, it may seem this alternative fails to offer any clarity to the current state of affairs. That assertion may have some merit, as government officials in the EU have still been able to post harmful social media content without it being moderated.14 That said, politicians’ engagement with social media is a newer development, domestically and internationally, so more research is needed to establish best practices. Regardless, increasing transparency should bar social media companies from making moderation choices unfounded in the law.

 

Notes

1 Bobby Allyn & Tamara Keith, Twitter Permanently Suspends Trump, Citing ‘Risk Of Further Incitement Of Violence’, NPR (Jan. 8, 2021), https://www.npr.org/2021/01/08/954760928/twitter-bans-president-trump-citing-risk-of-further-incitement-of-violence.

2 Id.

3 See Christian Shaffer, Deplatforming Censorship: How Texas Constitutionally Barred Social Media Platform Censorship, 55 Tex. Tech L. Rev. 893, 903-04 (2023) (giving an example of both conservative and liberal users that had their accounts suspended).

4 See Daveed Gartenstein-Ross et al., Anti-Censorship Legislation: A Flawed Attempt to Address a Legitimate Problem, Lawfare (July 27, 2022), https://www.lawfaremedia.org/article/anti-censorship-legislation-flawed-attempt-address-legitimate-problem (explaining the Texas and Florida legislation in-depth).

5 See, e.g., Trump v. United States, 219 L. Ed. 2d 991, 1034 (2024) (remanding the case to the lower courts); Moody v. NetChoice, LLC, 219 L. Ed. 2d 1075, 1104 (2024) (remanding the case to the lower courts).

6 Evelyn Mary Aswad, Taking Exception to Assessments of American Exceptionalism: Why the United States Isn’t Such an Outlier on Free Speech, 126 Dick. L. R. 69, 72 (2021).

7 Chiara Drolsbach & Nicolas Pröllochs, Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (2023), https://arxiv.org/html/2312.04431v1/#bib.bib56.

8  Id.

9 Id.

10 Id. This analysis showed that (1) content moderation varies across platforms in number, (2) content moderation is most often applied to videos and text, whereas images are moderated much less, (3) most rule-breaking content is decided via automated means (except X), (4) there is much variation among how often the platforms choose to moderate illegal content, and (5) the primary reasons for moderation include falling out of the scope of the platform’s services, illegal or harmful speech, and sexualized content. Misinformation was very rarely cited as the reason for moderation.

11 Id.

12 Perkins Coie LLP, More State Content Moderation Laws Coming to Social Media Platforms (November 17, 2022), https://perkinscoie.com/insights/update/more-state-content-moderation-laws-coming-social-media-platforms (recommending social media companies to hire counsel to ensure they are complying with various state laws). 

13 See, e.g., Shaffer, supra note 3 (detailing the harms of censorship); Gartenstein-Ross et al., supra note 4 (outlining the potential harms of restrictive content moderation).

14 Goujard et al., Europe’s Far Right Uses TikTok to Win Youth Vote, Politico (Mar. 17, 2024), https://www.politico.eu/article/tiktok-far-right-european-parliament-politics-europe/ (“Without evidence, [Polish far-right politician, Patryk Jaki] insinuated the person who carried out the attack was a migrant”).

 


An Incomplete Guide to Ethically Integrating AI Into Your Legal Practice

Kevin Frazier, Assistant Professor, Benjamin L. Crump College of Law, St. Thomas University

There is no AI exception in the Model Rules of Professional Conduct or the corresponding state rules. Lawyers must proactively develop an understanding of the pros and cons of AI tools. This “practice guide” provides some early pointers for doing just that: specifically, how to use AI tools while adhering to Model Rule 3.1.

Model Rule 3.1, in short, mandates that lawyers bring only claims that have a non-frivolous basis in law and fact. The Rule becomes particularly relevant when using AI tools like ChatGPT in legal research and drafting. On a seemingly daily basis, we hear of a lawyer misusing an AI tool and advancing a claim that is as real as Jack’s beanstalk.

The practice guide emphasizes the need for lawyers to independently verify the outputs of AI tools before relying on them in legal arguments. Such diligence ensures compliance with both MRPC 3.1 and Federal Rule of Civil Procedure 11, which likewise discourages frivolous filings. Perhaps more importantly, it also saves the profession from damaging headlines implying that we are unwilling to do our homework when it comes to learning the ins and outs of AI.

With those goals in mind, the guide offers a few practical steps to safely incorporate AI tools into legal workflows:

  1. Understand the AI Tool’s Function and Limitations: Knowing what the AI can and cannot do is crucial to avoiding reliance on inaccurate legal content.
  2. Independently Verify AI Outputs: Always cross-check AI-generated citations and arguments with trustworthy legal databases or resources.
  3. Document AI-Assisted Processes: Keeping a detailed record of how AI tools were used and verified can be crucial in demonstrating diligence and compliance with ethical standards.

The legal community, specifically bar associations, is actively exploring how to refine ethical rules to better accommodate AI tools. This evolving process necessitates that law students and practitioners stay informed about both technological advancements and corresponding legal ethics reforms.

For law students stepping into this rapidly evolving landscape, understanding how to balance innovation with ethical practice is key. The integration of AI in legal processes is not just about leveraging new tools but doing so in a way that upholds the integrity of the legal profession.


A Digital Brick in the Trump-Biden Wall

Solomon Steen, MJLST Staffer

“Alexander explained to a CBP officer at the limit line between the U.S. and Mexico that he was seeking political asylum and refuge in the United States; the CBP officer told him to “get the fuck out of here” and pushed him backwards onto the cement, causing bruising. Alexander has continued to try to obtain a CBP One appointment every day from Tijuana. To date, he has been unable to obtain a CBP One appointment or otherwise access the U.S. asylum process…”[1]

Alexander fled kidnapping and threats in Chechnya to seek security in the US.[2] His is a common story of migrants who have received a similar welcome. People have died and been killed waiting for an appointment to apply for asylum at the border.[3] Children with autism and schizophrenia have had to wait, exposed to the elements.[4] People whose medical vulnerabilities should have entitled them to relief have instead been preyed upon by gangs or corrupt police.[5] What is the wall blocking these people from fleeing persecution and reaching safety in the US?

The Biden administration’s failed effort to pass bipartisan legislation to curb access to asylum is part of a broader pattern of Trump-Biden continuity in immigration policy.[6] This continuity is defined by bipartisan support for increased funding for Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) for enforcement of immigration law at the border and in the interior, respectively.[7] Successive Democratic and Republican administrations have increased investment in interior and border enforcement.[8] That investment has expanded technological mechanisms to surveil migrants and facilitate administration of removal.

As part of its efforts to curtail access to asylum, the Biden administration promulgated the Circumvention of Lawful Pathways rule.[9] This rule revived the Trump administration’s entry and transit bans.[10] The transit ban bars migrants from applying for asylum if they crossed through a third country en route to the US.[11] The entry ban bars asylum applicants who did not present themselves at a port of entry.[12] In East Bay Sanctuary Covenant v. Biden, the Ninth Circuit determined the rule was unlawful for directly contradicting Congressional intent in the INA, which grants a right of asylum to any migrant in the US regardless of manner of entry.[13] The Trump entry ban was similarly found unlawful for directly contravening the same language in the INA.[14] The Biden ban remains in effect while litigation over its legality reaches its ultimate conclusion.

The Circumvention of Lawful Pathways rule effecting the entry ban gave rise to a pattern and practice of metering asylum applicants: requiring applicants to present themselves at a port of entry having complied with specific conditions in order to avoid being turned back.[15] To facilitate the arrival of asylum seekers within specific appointment windows, DHS launched the CBP One app.[16] The app would ostensibly allow asylum applicants to schedule an appointment at a port of entry to present themselves for asylum.[17]

Al Otro Lado (AOL), Haitian Bridge, and other litigants have filed a complaint alleging the government lacks the statutory authorization to force migrants to seek an appointment through the app and that its design frustrates their rights.[18] AOL notes that by requiring migrants to make appointments to claim asylum via the app, the Biden administration has imposed a number of extra-statutory requirements on migrants entitled to claim asylum, which include that they:

(a) have access to an up-to-date, well-functioning smartphone;
(b) fluently read one of the few languages currently supported by CBP One;
(c) have access to a sufficiently strong and reliable mobile internet connection and electricity to submit the necessary information and photographs required by the app;
(d) have the technological literacy to navigate the complicated multi-step process to create an account and request an appointment via CBP One;
(e) are able to survive in a restricted area of Mexico for an indeterminate period of time while trying to obtain an appointment; and
(f) are lucky enough to obtain one of the limited number of appointments at certain POEs.[19]

The Civil Rights Education and Enforcement Center (CREEC) and the Texas Civil Rights Project have similarly filed a complaint with the Department of Homeland Security’s Office of Civil Rights and Civil Liberties alleging that CBP One is illegally inaccessible to disabled people and that this inaccessibility violates other rights they hold as migrants.[20] Migrants may become disabled as a consequence of the immigration process itself or of the persecution that establishes their prima facie claim to asylum.[21] The CREEC complaint specifically cites Section 508 of the Rehabilitation Act, which requires that disabled members of the public enjoy access to government technology “comparable to the access” of everyone else.[22]

CREEC and AOL, along with the other service organizations joining their respective complaints, note that they have limited capacity to assist asylum seekers.[23] Migrants without such institutional or community support are more vulnerable to being denied access to asylum, and to opportunistic criminal predation, while they wait at the border.[24]

A litany of technical problems with the app can frustrate meritorious asylum claims. The app requires applicants to submit a picture of their face,[25] yet its facial recognition software frequently fails to identify portraits of darker-skinned people.[26] Racial persecution is one of the statutory grounds for claiming asylum,[27] so a victim of race-based persecution can have their asylum claim frustrated, on the basis of their race, by the app itself. Persecution on the basis of membership in a particular social group can also form the basis for an asylum claim,[28] and an applicant could establish membership in a particular social group composed of certain disabled people.[29] People with facial disabilities have likewise struggled with the facial recognition feature.[30]

The mere fact that an app has replaced human interaction contributes to the frustration of disabled migrants’ statutory rights. Medically fragile people who are statutorily eligible to enter the US via humanitarian parole are unable to access that relief electronically.[31] Individuals with intellectual disabilities have likewise had their claims delayed while navigating CBP One.[32] Asylum officers are statutorily required to evaluate whether asylum seekers lack the mental competence to assist in their own applications and, if so, to ensure they have qualified assistance to vindicate their claims.[33]

The entry ban has textual exceptions for migrants whose attempts to set appointments are frustrated by technical issues.[34] CBP officials at many ports have a pattern and practice of ignoring those exceptions and refusing all migrants who lack a valid CBP One appointment.[35]

AOL seeks relief in the termination of the CBP One turnback policy: essentially, ensuring people can exercise their statutory right to claim asylum at the border without an appointment.[36] CREEC seeks relief in the form of a fully accessible CBP One app and accommodation policies to ensure disabled asylum seekers can have “meaningful access” to the asylum process.[37]

Comprehensively safeguarding asylum seekers’ rights would require more than abandoning CBP One. A process ensuring that medically vulnerable persons can access timely care and that persons with intellectual disabilities can obtain legal assistance would require deploying more border resources, such as co-locating medical and resettlement-organization staff with CBP. Meaningfully curbing racial, ethnic, and linguistic discrimination by CBP, ICE, and asylum officers would require expensive and extensive retraining. It is nonetheless evident that CBP One is not serving the ostensible goal of making the asylum process more efficient, though it may serve the political goal of reinforcing the wall.

Notes

[1] Complaint, at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[2] Id. at 46.

[3] Ana Lucia Verduzco & Stephanie Brewer, Kidnapping of Migrants and Asylum Seekers at the Texas-Tamaulipas Border Reaches Intolerable Levels, (Apr. 4, 2024) https://www.wola.org/analysis/kidnapping-migrants-asylum-seekers-texas-tamaulipas-border-intolerable-levels.

[4] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 28, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[5] Linda Urueña Mariño & Christina Asencio, Human Rights First Tracker of Reported Attacks During the Biden Administration Against Asylum Seekers and Migrants Who Are Stranded in and/or Expelled to Mexico, Human Rights First, (Jan. 13, 2022),  at 10, 16, 19, https://humanrightsfirst.org/wp-content/uploads/2022/02/AttacksonAsylumSeekersStrandedinMexicoDuringBidenAdministration.1.13.2022.pdf.

[6] Actions – H.R.815 – 118th Congress (2023-2024): National Security Act, 2024, H.R.815, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/815/all-actions, (failing to pass the immigration language on 02/07/24).

[7] American Immigration Council,The Cost of Immigration Enforcement and Border Security, (Jan. 20, 2021), at 2, https://www.americanimmigrationcouncil.org/sites/default/files/research/the_cost_of_immigration_enforcement_and_border_security.pdf.

[8] Id. at 3-4.

[9] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dept. Homeland Sect’y., (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[10] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[11] Complaint, at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[12] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[13] Id. at 669-70.

[14] E. Bay Sanctuary Covenant v. Trump, 349 F. Supp. 3d 838, 844.

[15] Complaint, at 2, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[16] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dept. Homeland Sect’y., (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[17] Id.

[18] Complaint, at 57, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[19] Complaint, at 3, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[20] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[21] Ruby Ritchin, “I Felt Not Seen, Not Heard”: Gaps in Disability Access at USCIS for People Seeking Protection, 12, (Sep. 19, 2023) https://humanrightsfirst.org/library/i-felt-not-seen-not-heard-gaps-in-disability-access-at-uscis-for-people-seeking-protection.

[22] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 6, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[23] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also Complaint, at 4, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[24] Dara Lind, CBP’s Continued ‘Turnbacks’ Are Sending Asylum Seekers Back to Lethal Danger, (Aug. 10, 2023), https://immigrationimpact.com/2023/08/10/cbp-turnback-policy-lawsuit-danger.

[25] Complaint, at 31, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[26] Id.

[27] 8 U.S.C.A. § 1101(a)(42)(A) (West).

[28] Id.

[29] Hernandez Arellano v. Garland, 856 F. App’x 351, 353 (2d Cir. 2021).

[30] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 9, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[31] Id.

[32] Id.

[33] Complaint, at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[34] Complaint, at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[35] Id. at 23.

[36] Id. at 65-66.

[37] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 10-11, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.


The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promising potential of AI while managing the risks of AI’s potential misuse.[1] The Executive Order establishes (1) new standards for AI development and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that advances equity and civil rights, fights algorithmic discrimination, and creates efficiency and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact is yet to be seen. The impact of the Executive Order will depend on its implementation by the agencies that have been tasked with taking action. The regulatory heads tasked with implementing Biden’s Executive Order are the Secretary of Commerce, Secretary of Energy, Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls-to-action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and other agency heads selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing, or demonstrating an intent to develop, potential dual-use foundation models will be required to provide the federal government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing conducted under NIST guidance.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launching a pilot program distributing computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC will be charged with using its authorities to ensure fair competition in the AI and semiconductor industries.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on the effects of AI on the labor market and employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan for utilizing automated or algorithmic systems in administering public benefits and services while ensuring equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly, and exclusively, with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies: Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration announced that it had secured voluntary commitments from seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to work toward developing AI technology in a safe, secure, and transparent manner.[7] The voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on the societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments as vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on these commitments.[10] Notably, the White House has allowed only leaders of large tech companies to weigh in on the requirements of Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders at their AI Insight Forums, the attendees of the first and second forums were still largely limited to CEOs or cofounders of prominent tech companies, VC executives, and professors at leading universities.[12] Marc Andreessen, a co-founder of the prominent VC fund Andreessen Horowitz, noted that to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration voicing concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the administration on the importance of crafting regulations that allow for robust development of open source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order includes a claim that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants as smaller companies will struggle to comply with the Order’s reporting requirements.[18]

Open source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While neither Llama nor ChatGPT is still open source, their development and advancement relied heavily on open source building blocks like the Transformer architecture, TensorFlow, and PyTorch.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically run lean teams, and there is rarely a team dedicated solely to compliance. These reporting requirements will likely create barriers to entry for tech challengers pioneering open source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1720517026538778657%7Ctwgr%5Ec9ecbf7ac4fe23b03d91aea32db04b2e3ca656df%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcointelegraph.com%2Fnews%2Fbiden-ai-executive-order-certainly-challenging-open-source-ai-industry-insiders.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.


A Requiem for Fear, Death, and Dying: Law and Medicine’s Perpetually Unfinished Composition

Audrey Hutchinson, MJLST Staffer

In the 18th and 19th centuries, the coffins of the newly deceased lay six feet below, but were often outfitted with a novel accessory emerging from the freshly turned earth: a bell hung from an inconspicuous stake, its clapper adorned with a rope that disappeared beneath the dirt.[1] Rather than serving as a bygone mourning tradition, some symbolic way to emulate connection with the departed, the bell served a more practical purpose: it was an emergency safeguard against premature burial.[2] The design, in its variously patented 18th- and 19th-century forms, draws upon a foundational, and by some biopsychological theories biologically imperative, quality: fear of death.[3]

In the mid-1700s, the French anatomist Jacques-Bénigne Winslow published a book ominously titled The Uncertainty of the Signs of Death and the Danger of Precipitate Interments and Dissections, a decisive public moment in medical history in which death was presented to a highly unsettled public as something nebulous rather than definite.[4] For centuries, medical tests and parameters had existed by which doctors could “affirmatively” conclude that a patient had, indeed, passed.[5] And while Victorian newspapers were riddled with adverts for “safety coffins,” a macabre but unsurprising expression of capitalism amid mounting cholera deaths and the accompanying reports of premature burial, efforts to evade the liminal space of “dying” and the finality of “death” reach as far back as ancient Hebrew scriptures, wherein resuscitation attempts via chest compressions are described.[6] Perhaps this is unsurprising: psychologist and experimental theorist Robert C. Bolles conceptualized fear as “a hypothetical cause [motivation] of behavior” whose main purpose is to keep organisms alive.[7] Perhaps there has always been a subconscious doubt or suspicion about the finality of death; or perhaps it was human desperation and delusion, arising from loss, that left behind an ancient record of fear, and of subsequent acts of defiance in the face of death, still germane today.

Today we see the fruits of this fear of dying, death, or being somewhere in between in the form of advances in medical technology and legal guidelines. Though death is still commonly understood as a discrete status, a state one enters but cannot exit, medical and legal definitions have evolved to approach death more gingerly: the former treating death as a nuanced scale, the latter drawing hard lines on that scale.[8] Today, 43 states have enacted the Uniform Law Commission’s Uniform Determination of Death Act (“UDDA”).[9] The UDDA sets out two alternative standards, either of which suffices for someone to be legally deemed dead: (1) the irreversible cessation of circulatory and respiratory functions, or (2) the irreversible cessation of all functions of the entire brain, including the brainstem.[10] In its bright-line language, the UDDA’s legal determination of death relies in large part on the “generally accepted medical standards” of the medical practice and on practitioner discretion. And while the loss of respiratory function, circulatory function, and all function of the entire brain are the common parameters for determining death medically, the UDDA is distinctly “silent on acceptable diagnostic tests [and] procedures.” The language is arguably purposeful, creating statutory flexibility in an era of constant scientific and medical research, understanding, and innovation.

As it relates to brain death, the medical approach to determination is a scale contemplating brain injury/activity and somatic survival, a “continuous biological spectrum”[11] that naturally considers not only a patient’s current status but also the possibility and likelihood of both degenerative and improved changes in that status. As a matter of policy and regulation, however, the UDDA drew a bright line on that spectrum and called it brain death. Someone in a permanent vegetative state is not considered brain dead, but someone with a necrotic, “liquified” brain is. As a result, the medical determination of death is arguably subservient to the legal determination, which designates a point of no return, not because medical professionals see no alternate path, but because the law requires a blindfold from that point forward.

While this may be an efficient way to ensure people are not denied advanced and improved medical practices, it also means there is ambiguity and state-to-state variance in the governing factual guidelines and standards. There are practical and policy reasons for this, including maximizing the efficacy and reach of organ donation systems and preventing strain on healthcare resources and systems; nonetheless, the bright line fails to be so bright. While the Commission could have situated the UDDA such that the legal and medical determinations of brain death worked in tandem, triggered at some distinct moment by certain explicit conditions or after certain standardized medical tests, it did not.

Is that because it will not, or because it simply cannot? Today, the standards are increasingly muddied by technologies that prolong life and have, paradoxically, also prolonged the process of dying, expanding the scope of that liminal space. Artificial means of sustaining life where it otherwise could not persist create a discrete state of dying. New legal and medical vocabularies for describing these states have become imperative, and lively debate is ongoing over bridging the medical-legal gap in death determination[12]: specifically, the distinction between the “permanent” (will not reverse) and “irreversible” (cannot reverse) cessation of cardiac, respiratory, and neurological function.[13] James Bernat, a neurologist and academic who examines the convergence of ethics, philosophy, and neurology, is a contemporary advocate for reconciling medical practice with the law.[14] Dr. Bernat suggests replacing the UDDA’s irreversibility standard (a function that has stopped and cannot be restarted) with a permanence standard (a function that has stopped, will not restart on its own, and will not be restarted by intervention).[15] This distinction is, in large part, an attempt to address the incongruence of the UDDA’s language, which, by the ULC’s own concession, “sets the general legal standard for determining death, but not the medical criteria for doing so.”[16] In effect, in trying to define and characterize death and dying, we have created a dynamic wherein one could be medically dead, but not legally.[17]

Upon his deathbed, the composer Frédéric Chopin uttered his last words: “The earth is suffocating …. Swear to make them cut me open, so that I won’t be buried alive.”[18] A century and a half later, only time will tell whether law and medicine can reconcile the increasingly ambiguous nature of dying and define death explicitly and discretely, no bells required.

Notes

[1] Steven B. Harris, M.D. The Society for the Recovery of Persons Apparently Dead. Cryonics (Sept. 1990) https://www.cryonicsarchive.org/library/persons-apparently-dead/.

[2] Id.

[3] Id.; Shannon E. Grogans et. al., The nature and neurobiology of fear and anxiety: State of the science and opportunities for accelerating discovery, Neuroscience & Biobehavioral Reviews, Volume 151, 2023, 105237, ISSN 0149-7634, https://doi.org/10.1016/j.neubiorev.2023.105237.

[4] Harris, supra note 1.

[5] Id.

[6] Id.

[7] Grogans et. al., supra note 3.

[8] Robert D. Truog, Lessons from the Case of Jahi McMath. The Hastings Center report vol. 48, Suppl. 4 (2018): S70-S73. doi:10.1002/hast.961.

[9] Unif. Determination of death act § 1 (Nat’l Conf. of Comm’n on Unif. L Comm’n. 1981).

[10] Id.

[11] Truog supra at S72.

[12] James L. Bernat, “Conceptual Issues in DCDD Donor Death Determination.” The Hastings Center report vol. 48 Suppl 4 (2018): S26-S28. doi:10.1002/hast.948.

[13] James Bernat, (2010). How the Distinction between ‘Irreversible’ and ‘Permanent’ Illuminates Circulatory-Respiratory Death Determination. The Journal of Medicine and Philosophy. 35. 242-55. 10.1093/jmp/jhq018.

[14] Faculty Database: James L. Bernat, M.D. Dartmouth Geisel School of Medicine https://geiselmed.dartmouth.edu/faculty/facultydb/view.php/?uid=353 (last accessed Oct. 23, 2023).

[15] JD and Angela Turi, Death’s Troubled Relationship With the Law Brendan Parent, AMA J Ethics. 2020;22(12):E1055-1061. doi: 10.1001/amajethics.2020.1055; See also, Bernat JL. Point: are donors after circulatory death really dead, and does it matter? Yes and yes. Chest. 2010;138(1):13-16.

[16] Thaddeus Pope, Brain Death and the Law: Hard Cases and Legal Challenges. The Hastings Center report vol. 48 Suppl. 4 (2018): S46-S48. doi:10.1002/hast.954.

[17] Id.

[18] Death: The Last Taboo – Safety Coffins, Australian Museum (Oct. 20, 2020) https://australian.museum/about/history/exhibitions/death-the-last-taboo/safety-coffins/ (last accessed Oct. 23, 2023).


AR/VR/XR: Breaking the Wall of Legal Issues Used to Limit in Either the Real-World or the Virtual-World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing many aspects of our lives. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple unveiled the Vision Pro in 2023, no one doubts that this technology will soon be a vital driver for both tech and business. But how significantly will this technology impact human lives? What industries will it reshape? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or a multi-projected environment) to create a simulated, immersive environment that can provide an experience either similar to or completely different from the real world,[3] while Mixed Reality/Extended Reality (“XR”) glasses are relatively compact and sleek and weigh far less than VR headsets.[4] XR’s most distinctive difference from VR is that individuals can still see the world around them, because XR projects a translucent screen on top of the real world. The differences among these three vision technologies may soon be eliminated as they are combined into a single device.
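To make the overlay distinction concrete, here is a minimal sketch (illustrative only, not any vendor’s rendering pipeline) that models the “translucent screen” as per-pixel alpha blending: at an alpha of 0 the user sees only the real world (pass-through), and at an alpha of 1 only the virtual layer (full VR-style immersion).

```python
def blend_pixel(real, virtual, alpha):
    """Blend one RGB pixel; alpha is the opacity of the virtual layer
    (0.0 = pure real-world pass-through, 1.0 = fully virtual)."""
    return tuple(round((1 - alpha) * r + alpha * v) for r, v in zip(real, virtual))

def composite(real_frame, virtual_frame, alpha):
    """Blend two frames (lists of RGB tuples) pixel by pixel."""
    return [blend_pixel(r, v, alpha) for r, v in zip(real_frame, virtual_frame)]

real = [(200, 200, 200), (10, 20, 30)]   # camera (real-world) pixels
virtual = [(0, 0, 255), (0, 0, 255)]     # virtual overlay pixels

print(composite(real, virtual, 0.0))  # pass-through: the real frame survives
print(composite(real, virtual, 1.0))  # VR-style: only the virtual layer shows
print(composite(real, virtual, 0.5))  # translucent XR-style overlay in between
```

A single device could in principle slide along this alpha axis, which is why the line between AR, VR, and XR hardware may blur.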

Typically, vision technology helps people mentally process 2-D information in a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a disconnect between their eyes and body after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments admit that AR/VR trials have produced adverse effects due to the mismatch between the visual system and the motion system.[6] Researchers have also discovered that the technology affects how people behave in social situations, because users feel less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion, and it is projected to reach $88 billion by 2026.[8] According to industry specialists and analysts, outside of gaming, a significant portion of vision-technology revenue will come from e-commerce and retail (fashion and beauty), manufacturing, education, healthcare, real estate, and e-sports, which will in turn affect entertainment, the cost of living, and innovation.[9] To seize this tremendous opportunity, it is crucial to understand the potential legal risks and develop a comprehensive legal strategy for the challenges ahead.
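As a quick sanity check of those figures (an illustrative calculation, not one drawn from the cited report), growing from $32 billion in 2022 to $88 billion in 2026 implies a compound annual growth rate of just under 29%:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# $32B (2022) -> $88B (2026) is a four-year span
rate = cagr(32, 88, 4)
print(f"Implied CAGR: {rate:.1%}")  # about 28.8% per year
```

Sustained growth at that pace is what makes the legal-strategy question urgent rather than speculative.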

To expand one’s business model, it is important to maximize the protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also implicates contractual concerns, service remedies, and liability for infringement of third-party IP. For example, during patent prosecution it can be difficult to argue that the hardware executing an invention (characters or data) is a particular machine, or that the steps the hardware performs are unconventional under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against over-abstracting inventions: “[a]t some level, all inventions . . . embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas,” and courts must “tread carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns may include data privacy, harassment, virtual trespass, and even violent attacks resulting from the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision-technology devices remain ambiguous. It is also unclear whether courts will accept an error-in-judgment defense based on the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and spend the most time using these devices. Experts have questioned whether AR/VR devices harm the mental and physical health of younger users. Another concern is that these users may suffer a decline in social-communication skills and feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through constant scanning of users’ surroundings, although the Children’s Online Privacy Protection Act (COPPA) prohibits operators from collecting personally identifiable information from users they believe to be under the age of thirteen without verifiable parental consent.[12]
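The COPPA constraint mentioned above can be sketched as a simple gate. This is a deliberately simplified illustration of the rule’s structure, not legal advice or any real platform’s compliance logic; the regulation in fact also turns on whether a service is directed to children and on how consent is verified.

```python
COPPA_AGE_THRESHOLD = 13  # see 16 C.F.R. pt. 312

def may_collect_personal_info(believed_age, verifiable_parental_consent=False):
    """Simplified COPPA gate: personal information may not be collected
    from a user believed to be under 13 absent verifiable parental consent."""
    if believed_age >= COPPA_AGE_THRESHOLD:
        return True
    return verifiable_parental_consent

print(may_collect_personal_info(15))        # True: outside the under-13 rule
print(may_collect_personal_info(12))        # False: under 13, no consent
print(may_collect_personal_info(12, True))  # True: parental consent obtained
```

The open question for AR/VR devices is that “constant scanning of surroundings” collects data before any such gate can plausibly run.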

Research trends suggest that combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale, bring people together in shared virtual environments, and enhance comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for it.

Notes

[1] Eric Ravenscraft, What is the Metaverse, Exactly?, Wired (Jun. 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sep. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532#:~:text=Some%20negative%20symptoms%20of%20VR,nausea%20and%20increased%20muscle%20fatigue.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci.Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkts. & Mkts., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining that whether a claim recites significantly more than a judicial exception depends on whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 CFR pt. 312.


Regulating the Revolution: A Legal Roadmap to Optimizing AI in Healthcare

Fazal Khan, MD-JD: Nexbridge AI

In the field of healthcare, the integration of artificial intelligence (AI) presents a profound opportunity to revolutionize care delivery, making it more accessible, cost-effective, and personalized. Burgeoning demographic shifts, such as aging populations, are exerting unprecedented pressure on our healthcare systems, exacerbating disparities in care and already-soaring costs. Concurrently, the prevalence of medical errors remains a stubborn challenge. AI stands as a beacon of hope in this landscape, capable of augmenting healthcare capacity and access, streamlining costs by automating processes, and refining the quality and customization of care.

Yet, the journey to harness AI’s full potential is fraught with challenges, most notably the risks of algorithmic bias and the diminution of human interaction. AI systems, if fed with biased data, can become vehicles of silent discrimination against underprivileged groups. It is essential to implement ongoing bias surveillance, promote the inclusion of diverse data sets, and foster community involvement to avert such injustices. Healthcare institutions bear the responsibility of ensuring that AI applications are in strict adherence to anti-discrimination statutes and medical ethical standards.

Moreover, it is crucial to safeguard the essence of human touch and empathy in healthcare. AI’s prowess in automating administrative functions cannot replace the human art inherent in the practice of medicine—be it in complex diagnostic processes, critical decision-making, or nurturing the therapeutic bond between healthcare providers and patients. Policy frameworks must judiciously navigate the fine line between fostering innovation and exercising appropriate control, ensuring that technological advancements do not overshadow fundamental human values.

The quintessential paradigm would be one where human acumen and AI’s analytical capabilities coalesce seamlessly. While humans should steward the realms requiring nuanced judgment and empathic interaction, AI should be relegated to the execution of repetitive tasks and the extrapolation of data-driven insights. Placing patients at the epicenter, this symbiotic union between human clinicians and AI can broaden access to healthcare, reduce expenditures, and enhance service quality, all the while maintaining trust through unyielding transparency. Nonetheless, the realization of such a model mandates proactive risk management and the encouragement of innovation through sagacious governance. By developing governmental and institutional policies that are both cautious and compassionate by design, AI can indeed be the catalyst for a transformative leap in healthcare, enriching the dynamics between medical professionals and the populations they serve.


Raising the Bar: Rule 702 Changes Illuminate the Need for Science Literacy in the Judiciary

David Lee, MJLST Staffer

On December 1, 2023, amendments to Federal Rule of Evidence 702 (FRE 702) took effect.[1] FRE 702 governs the admissibility of expert witness testimony. Central to its purpose is ensuring that such testimony is both relevant to the case and based on a reliable foundation. The rule sets the qualifications for experts based on their knowledge, skill, experience, training, or education, and emphasizes the crucial role of the trial judge as a gatekeeper. This role involves assessing the testimony’s adherence to relevance and reliability before it reaches the jury, thereby upholding the fairness and integrity of the judicial process and ensuring that the legal system remains aligned with evolving scientific and technical knowledge.[2]

Prior to the amendments, FRE 702 was applied inconsistently.[3] According to the Advisory Committee on Evidence Rules, the changes serve to reinforce that the criteria for expert witness admissibility laid out in FRE 702 are just that – criteria for admissibility, not questions of weight.[4] When read properly, FRE 702 makes expert witness reliability a threshold question for judges to answer, and the amendments reinforce this “gatekeeping” function of judges.[5] With the new amendments clarifying the role of judges as arbiters of whether an expert’s “opinion reflects a reliable application of the principles and methods [of relevant scientific, technical, or other specialized knowledge]” to the facts of the case, it is imperative that the judiciary be sufficiently literate in science and the scientific method to serve this function properly.

Rule 702. Testimony by Expert Witnesses (added language marked [added]; stricken language marked [deleted])

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if [added: the proponent demonstrates to the court that it is more likely than not that]:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue; (b) the testimony is based on sufficient facts or data; (c) the testimony is the product of reliable principles and methods; and (d) [deleted: the expert has reliably applied] [added: the expert’s opinion reflects a reliable application of] the principles and methods to the facts of the case.
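For illustration only (admissibility is a judicial judgment, not an algorithm), the amended rule’s threshold structure can be modeled as a conjunction of four findings, each of which the proponent must establish as more likely than not, i.e., to a probability above 0.5:

```python
# The four FRE 702 criteria; under the amended rule, the judge must find
# each one more likely than not before the testimony reaches the jury.
CRITERIA = (
    "helpfulness to the trier of fact",        # 702(a)
    "sufficient facts or data",                # 702(b)
    "reliable principles and methods",         # 702(c)
    "reliable application to the case facts",  # 702(d)
)

def admissible(findings):
    """findings: dict mapping each criterion to the judge's assessed
    probability that it is satisfied. All four must clear 0.5."""
    return all(findings.get(criterion, 0.0) > 0.5 for criterion in CRITERIA)

strong = {c: 0.9 for c in CRITERIA}
weak = dict(strong)
weak["reliable application to the case facts"] = 0.4

print(admissible(strong))  # True: every prong clears the bar
print(admissible(weak))    # False: one failed prong is fatal
```

The point of the sketch is structural: the criteria are conjunctive threshold questions for the judge, not factors to be weighed and handed to the jury.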

The Importance of Scientific Acumen on the Bench

Science literacy on the bench – the judiciary’s understanding of scientific principles and methodologies – has become increasingly vital in the modern legal landscape. This form of literacy encompasses not just a basic grasp of scientific concepts but also an appreciation of how scientific knowledge evolves and how it can be rigorously applied in legal contexts. As courts frequently encounter cases involving complex scientific evidence – from DNA analysis to digital forensics – judges equipped with science literacy are better positioned to evaluate the credibility and relevance of expert testimony accurately. The absence of this scientific acumen can lead to significant judicial errors or misunderstandings.[6] Entire branches of forensic science – such as bite-mark analysis, microscopic hair comparison, and tire-track analysis – once taken for granted as valid and widely accepted by courts, have been discredited as unreliable and lacking scientific underpinnings.[7] These misjudgments about the validity of forensic methods have led to wrongful convictions.[8] A limited understanding of environmental science has likewise produced rulings in pollution and climate-change cases whose interpretation of the science is highly controversial.[9] These examples underline the necessity for judges to possess a robust foundation in scientific literacy to ensure just and informed decision-making in an era where science and technology are deeply intertwined with legal issues.

The Need for Additional Educational Initiatives

Judges are often apprehensive when confronted with complex scientific evidence, partly due to limited backgrounds in the hard sciences, as illustrated by one judge’s shift from pre-med to law after struggling with organic chemistry.[10] This apprehension underscores the growing necessity for science literacy in the judiciary, particularly because judges are already well-equipped for the fundamental aspects of evaluating scientific evidence: accurate observation and logical reasoning.[11] While judges may not be familiar with the specific terminologies and conventions of various scientific fields, their aptitude for swiftly grasping diverse issues, coupled with focused science-education programs, would equip them to handle scientific matters in court adeptly. The approach to judicial education in science necessarily differs from the typical education of scientists. Judges do not require extensive training in theoretical concepts or complex statistical inference as scientists do. Their role is more akin to that of a scientific-journal editor, assessing whether the scientific evidence presented meets acceptable standards. This task is supported by attorneys, who educate judges on pertinent scientific issues through briefs and arguments. The keys for judicial science education are accessibility and breadth, given the variety of cases a judge encounters.
The Reference Manual on Scientific Evidence, a crucial resource, helps judges understand scientific foundations and make informed decisions without instructing on the admissibility of specific evidence types; however, the most recent edition was published in 2011 and does not reflect the scientific advances and emerging technologies relevant to judges today.[12] Judicial education programs supported by the Federal Judicial Center further enhance judges’ ability to address complex scientific and technical information in our rapidly evolving world.[13] While these resources serve an important function, courts’ repeated misjudgments of the quality of scientific evidence indicate that additional resources are needed.

The amendments to Federal Rule of Evidence 702 reemphasize the role that judges play regarding scientific and technical evidence. These changes not only clarify the gatekeeping role of judges in assessing expert witness testimony but also highlight the growing imperative for science literacy in the judiciary. This literacy is essential for judges to make informed, accurate decisions in an era increasingly dominated by complex scientific evidence. The evolving landscape of science and technology underscores the need for continuous educational initiatives to equip judges with the necessary tools to adapt and respond effectively. Resources like the Reference Manual on Scientific Evidence – despite needing updates – and educational programs provided by the Federal Judicial Center play a crucial role in this endeavor. As the legal world becomes more intertwined with scientific advancements, the judiciary’s ability to keep pace will be instrumental in upholding the integrity and efficacy of the justice system. This progression towards a more scientifically literate bench is not just a necessity but a responsibility.

Notes

[1] https://www.gand.uscourts.gov/news/federal-rules-amendments-effective-december-1-2023.

[2] https://www.law.cornell.edu/rules/fre/rule_702.

[3] https://www.jdsupra.com/legalnews/upcoming-fre-702-amendment-reemphasizes-6303408.

[4] Id.

[5] https://www.apslaw.com/its-your-business/2023/11/30/return-of-the-gatekeepers-amendments-to-rule-702-clarify-the-standard-of-admissibility-for-expert-witness-testimony.

[6] https://www.americanbar.org/groups/judicial/publications/appellate_issues/2019/winter/untested-forensic-sciences-present-trouble-in-the-courtroom.

[7] Id.

[8] Id.

[9] https://slate.com/news-and-politics/2023/12/supreme-court-vs-science.html.

[10] https://www.americanbar.org/groups/judicial/publications/judges_journal/2017/fall/science-educatifederal-judges.

[11] Id.

[12] https://www.nationalacademies.org/our-work/science-for-judges-development-of-the-reference-manual-on-scientific-evidence-4th-edition.

[13] Id.