
A Digital Brick in the Trump-Biden Wall

Solomon Steen, MJLST Staffer

“Alexander explained to a CBP officer at the limit line between the U.S. and Mexico that he was seeking political asylum and refuge in the United States; the CBP officer told him to ‘get the fuck out of here’ and pushed him backwards onto the cement, causing bruising. Alexander has continued to try to obtain a CBP One appointment every day from Tijuana. To date, he has been unable to obtain a CBP One appointment or otherwise access the U.S. asylum process…”[1]

Alexander fled kidnapping and threats in Chechnya to seek security in the US.[2] His story is a common one among migrants who have received a similar welcome. People have died and been killed waiting for an appointment to apply for asylum at the border.[3] Children with autism and schizophrenia have had to wait, exposed to the elements.[4] People whose medical vulnerabilities should have entitled them to relief have instead been preyed upon by gangs or corrupt police.[5] What is the wall blocking these people from fleeing persecution and reaching safety in the US?

The Biden administration’s failed effort to pass bipartisan legislation to curb access to asylum is part of a broader pattern of Trump-Biden continuity in immigration policy.[6] This continuity is defined by bipartisan support for increased funding for Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) to enforce immigration law at the border and in the interior, respectively.[7] Successive Democratic and Republican administrations have increased investment in interior and border enforcement.[8] That investment has expanded technological mechanisms to surveil migrants and to facilitate the administration of removal.

As part of its efforts to curtail access to asylum, the Biden administration promulgated the Circumvention of Lawful Pathways rule.[9] This rule revived the Trump administration’s entry and transit bans.[10] The transit ban bars migrants from applying for asylum if they crossed through a third country en route to the US.[11] The entry ban bars asylum applicants who did not present themselves at a port of entry.[12] In East Bay Sanctuary Covenant v. Biden, the Ninth Circuit determined the rule was unlawful because it directly contradicted Congressional intent in the INA, which grants a right to apply for asylum to any migrant in the US regardless of manner of entry.[13] The Trump entry ban was similarly found unlawful for contravening the same language in the INA.[14] The Biden ban remains in effect while litigation over its legality runs its course.

The Circumvention of Lawful Pathways rule, in effecting the entry ban, gave rise to a pattern and practice of “metering” asylum applicants: requiring applicants to present at a port of entry only after complying with specific conditions or else be turned back.[15] To facilitate the arrival of asylum seekers within a specific appointment window, DHS launched the CBP One app.[16] The app would ostensibly allow asylum applicants to schedule an appointment at a port of entry to present themselves for asylum.[17]

Al Otro Lado (AOL), Haitian Bridge, and other litigants have filed a complaint alleging that the government lacks statutory authorization to force migrants to seek an appointment through the app and that the app’s design frustrates their rights.[18] AOL notes that by requiring migrants to make appointments to claim asylum via the app, the Biden administration has imposed a number of extra-statutory requirements on migrants entitled to claim asylum, including that they:

(a) have access to an up-to-date, well-functioning smartphone;
(b) fluently read one of the few languages currently supported by CBP One;
(c) have access to a sufficiently strong and reliable mobile internet connection and electricity to submit the necessary information and photographs required by the app;
(d) have the technological literacy to navigate the complicated multi-step process to create an account and request an appointment via CBP One;
(e) are able to survive in a restricted area of Mexico for an indeterminate period of time while trying to obtain an appointment; and
(f) are lucky enough to obtain one of the limited number of appointments at certain POEs.[19]

The Civil Rights Education and Enforcement Center (CREEC) and the Texas Civil Rights Project have similarly filed a complaint with the Department of Homeland Security’s Office for Civil Rights and Civil Liberties alleging that CBP One is illegally inaccessible to disabled people and that this inaccessibility violates other rights they hold as migrants.[20] Migrants may become disabled as a consequence of the immigration process or of the very persecution that establishes their prima facie claim to asylum.[21] The CREEC complaint specifically cites Section 508 of the Rehabilitation Act, which requires that disabled members of the public enjoy access to government technology “comparable to the access” of everyone else.[22]

CREEC and AOL – and the other service organizations joining their respective complaints – note that they have limited capacity to assist asylum seekers.[23] Migrants without such institutional or community support are more vulnerable to being denied access to asylum and to opportunistic criminal predation while they wait at the border.[24]

A litany of technical problems with the app can frustrate meritorious asylum claims. The app requires applicants to submit a picture of their face.[25] The app’s facial recognition software frequently fails to identify portraits of darker-skinned people.[26] Racial persecution is one of the statutory grounds for claiming asylum.[27] A victim of race-based persecution can thus have their asylum claim frustrated on the basis of their race because of this app. Persecution on the basis of membership in a particular social group can also form the basis for an asylum claim.[28] An applicant could establish membership in a particular social group composed of certain disabled people.[29] People with facial disabilities have also struggled with the facial recognition feature.[30]
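
The cited failures are measurable. Audits of face-matching systems typically compare false-rejection rates across demographic groups, since a genuine user whose photo will not validate is effectively locked out of scheduling. The sketch below shows the shape of such an audit with entirely hypothetical data; it is not CBP’s system or testing protocol.

```python
# Illustrative audit sketch (hypothetical data; not CBP's actual system or
# test protocol): compare false-rejection rates of a face-matching step
# across demographic groups of genuine users whose photos should validate.
from collections import defaultdict

def failure_rates_by_group(attempts):
    """attempts: iterable of (group_label, matched: bool) pairs."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, matched in attempts:
        totals[group] += 1
        if not matched:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

# Hypothetical verification attempts by genuine users.
attempts = ([("group_a", True)] * 970 + [("group_a", False)] * 30
            + [("group_b", True)] * 880 + [("group_b", False)] * 120)

for group, rate in sorted(failure_rates_by_group(attempts).items()):
    print(f"{group}: false-rejection rate {rate:.1%}")
# A gap like 3.0% vs. 12.0% means group_b users are four times as likely
# to be blocked at the photo-capture step before ever reaching an officer.
```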

The mere fact that an app has been substituted for human interaction contributes to the frustration of disabled migrants’ statutory rights. Medically fragile people statutorily eligible to enter the US via humanitarian parole are unable to access that relief electronically.[31] Individuals with intellectual disabilities have also had their claims delayed by having to navigate CBP One.[32] Asylum officers are statutorily required to evaluate whether asylum seekers lack the mental competence to assist in their applications and, if so, to ensure they have qualified assistance to vindicate their claims.[33]

The entry ban has textual exceptions for migrants whose attempts to set appointments are frustrated by technical issues.[34] CBP officials at many ports have a pattern and practice of ignoring those exceptions and refusing all migrants who lack a valid CBP One appointment.[35]

AOL seeks relief in the termination of the CBP One turnback policy: essentially, ensuring people can exercise their statutory right to claim asylum at the border without an appointment.[36] CREEC seeks relief in the form of a fully accessible CBP One app and accommodation policies to ensure disabled asylum seekers can have “meaningful access” to the asylum process.[37]

Comprehensively safeguarding asylum seekers’ rights would require more than abandoning CBP One. A process that ensures medically vulnerable persons can access timely care and persons with intellectual disabilities can get legal assistance would require deploying more border resources, such as co-locating medical and resettlement organization staff with CBP. Meaningfully curbing racial, ethnic, and linguistic discrimination by CBP, ICE, and asylum officers would require expensive and extensive retraining. However, it is evident that CBP One is not serving the ostensible goal of making the asylum process more efficient, though it may serve the political goal of reinforcing the wall.

Notes

[1] Complaint at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[2] Id. at 46.

[3] Ana Lucia Verduzco & Stephanie Brewer, Kidnapping of Migrants and Asylum Seekers at the Texas-Tamaulipas Border Reaches Intolerable Levels, WOLA (Apr. 4, 2024), https://www.wola.org/analysis/kidnapping-migrants-asylum-seekers-texas-tamaulipas-border-intolerable-levels.

[4] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 28, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[5] Linda Urueña Mariño & Christina Asencio, Human Rights First Tracker of Reported Attacks During the Biden Administration Against Asylum Seekers and Migrants Who Are Stranded in and/or Expelled to Mexico, Human Rights First (Jan. 13, 2022), at 10, 16, 19, https://humanrightsfirst.org/wp-content/uploads/2022/02/AttacksonAsylumSeekersStrandedinMexicoDuringBidenAdministration.1.13.2022.pdf.

[6] Actions – H.R. 815 – 118th Congress (2023-2024): National Security Act, 2024, H.R. 815, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/815/all-actions (showing the immigration provisions failing to pass on Feb. 7, 2024).

[7] American Immigration Council, The Cost of Immigration Enforcement and Border Security (Jan. 20, 2021), at 2, https://www.americanimmigrationcouncil.org/sites/default/files/research/the_cost_of_immigration_enforcement_and_border_security.pdf.

[8] Id. at 3-4.

[9] Fact Sheet: Circumvention of Lawful Pathways Final Rule, U.S. Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[10] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[11] Complaint at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[12] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[13] Id. at 669-70.

[14] E. Bay Sanctuary Covenant v. Trump, 349 F. Supp. 3d 838, 844 (N.D. Cal. 2018).

[15] Complaint at 2, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[16] Fact Sheet: Circumvention of Lawful Pathways Final Rule, U.S. Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[17] Id.

[18] Complaint at 57, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[19] Complaint at 3, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[20] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[21] Ruby Ritchin, “I Felt Not Seen, Not Heard”: Gaps in Disability Access at USCIS for People Seeking Protection, Human Rights First, at 12 (Sept. 19, 2023), https://humanrightsfirst.org/library/i-felt-not-seen-not-heard-gaps-in-disability-access-at-uscis-for-people-seeking-protection.

[22] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 6, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[23] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also Complaint at 4, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[24] Dara Lind, CBP’s Continued ‘Turnbacks’ Are Sending Asylum Seekers Back to Lethal Danger, Immigration Impact (Aug. 10, 2023), https://immigrationimpact.com/2023/08/10/cbp-turnback-policy-lawsuit-danger.

[25] Complaint at 31, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[26] Id.

[27] 8 U.S.C.A. § 1101(a)(42)(A) (West).

[28] Id.

[29] Hernandez Arellano v. Garland, 856 F. App’x 351, 353 (2d Cir. 2021).

[30] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 9, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[31] Id.

[32] Id.

[33] Complaint at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[34] Complaint at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, No. 3:23-CV-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[35] Id. at 23.

[36] Id. at 65-66.

[37] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 10-11, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.


The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promising potential of AI while managing the risks of its potential misuse.[1] The Executive Order establishes (1) new standards for AI development and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that advances equity and civil rights, fights algorithmic discrimination, and creates efficiencies and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact is yet to be seen. That impact will depend on implementation by the agencies tasked with taking action: the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls to action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and other heads of agencies selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing or demonstrating an intent to develop potential dual-use foundation models will be required to provide the Federal Government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing based on guidance developed by NIST.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launch a pilot distributing computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC will be charged with using its authorities to ensure fair competition in the AI and semiconductor industry.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on the effects of AI on the labor market and employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan for the use of automated or algorithmic systems in administering public benefits and services that ensures equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly and exclusively with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies–Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration celebrated its achievement of securing voluntary commitments from seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to work towards developing AI technology in a safe, secure, and transparent manner.[7] These voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments as vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on these commitments.[10] Notably, the White House allowed only leaders of large tech companies to weigh in on the requirements of Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders in their AI Insight Forum, the attendees of the first and second forums were still largely limited to CEOs or cofounders of prominent tech companies, VC executives, and professors at leading universities.[12] Marc Andreessen, a cofounder of Andreessen Horowitz, a prominent VC fund, noted that in order to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration voicing their concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the Biden administration on the importance of developing regulations that allow for robust development of open source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order claims that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants, as smaller companies will struggle to comply with the Order’s reporting requirements.[18]
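
The scale at issue is worth making concrete. The Executive Order ties its reporting requirement to training compute, covering models trained with more than 10^26 integer or floating-point operations, and a common rule of thumb estimates a dense transformer’s training compute as roughly 6 × parameters × training tokens. The sketch below applies that estimate to two hypothetical training runs; the model and dataset sizes are illustrative assumptions, not figures from the Order.

```python
# Back-of-the-envelope check against the Executive Order's reporting
# threshold for dual-use foundation models (training compute > 1e26 ops).
# The 6 * parameters * tokens approximation for dense transformer training
# compute is a standard rule of thumb; the runs below are hypothetical.
REPORTING_THRESHOLD_OPS = 1e26

def training_ops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

runs = {
    "startup-scale run (7e9 params, 2e12 tokens)": training_ops(7e9, 2e12),
    "frontier-scale run (1e12 params, 3e13 tokens)": training_ops(1e12, 3e13),
}

for name, ops in runs.items():
    status = "likely reportable" if ops > REPORTING_THRESHOLD_OPS else "below threshold"
    print(f"{name}: ~{ops:.1e} ops -> {status}")
# ~8.4e22 ops for the startup-scale run (orders of magnitude below 1e26);
# ~1.8e26 ops for the frontier-scale run (above the threshold).
```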

Open source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While neither Llama nor ChatGPT is open source today, their development and advancement relied heavily on open source architectures and frameworks like the Transformer, TensorFlow, and PyTorch.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically have leaner teams, and there is rarely a team solely dedicated to compliance. These reporting requirements will likely create barriers to entry for tech challengers who are pioneering open source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.
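
To see how directly open source underpins this stack, consider that the core building block named above, the Transformer, ships as a ready-made component in PyTorch. The snippet below is a minimal working encoder assembled from those freely available parts; the dimensions are toy values chosen for illustration.

```python
# A minimal working transformer encoder built entirely from open source
# PyTorch components; dimensions are toy values chosen for the example.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=128, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=4)

tokens = torch.randn(2, 16, 128)  # (batch, sequence length, embedding dim)
out = encoder(tokens)
print(out.shape)  # torch.Size([2, 16, 128])
```

A small team gets a state-of-the-art building block in a dozen lines; compliance overhead, by contrast, does not shrink with team size.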

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.


A Requiem for Fear, Death, and Dying: Law and Medicine’s Perpetually Unfinished Composition

Audrey Hutchinson, MJLST Staffer

In the 18th and 19th centuries, the coffins of the newly deceased lay six feet below ground, but were often outfitted with a novel accessory emerging from the freshly turned earth: a bell hung from an inconspicuous stake, its clapper adorned with a rope that disappeared beneath the dirt.[1] Rather than serving as a bygone tradition of the mourning process—some symbolic way to emulate connection with the departed—the bell served a more practical purpose: it was an emergency safeguard against premature burial.[2] The design, in all its variously patented 18th- and 19th-century forms, draws upon a foundational—and by some biopsychological theories, biologically imperative—quality: fear of death.[3]

In the mid-1700s, the French author Jacques-Bénigne Winslow published a book ominously titled The Uncertainty of the Signs of Death and the Danger of Precipitate Interments and Dissections, marking a decisive and public moment in medical history in which death was introduced to a highly unsettled public as something nebulous rather than definite.[4] For centuries, medical tests and parameters had existed by which doctors could “affirmatively” conclude a patient had, indeed, passed.[5] While Victorian newspapers were riddled with adverts for “safety coffins”—a macabre but unsurprising expression of capitalism in the wake of mounting cholera deaths and the accompanying rate of premature burial reports—efforts to evade the liminal space of “dying” and the finality of “death” can be seen as far back as ancient Hebrew scriptures, wherein resuscitation attempts via chest compressions are described.[6] Perhaps this is unsurprising: psychologist and experimental theorist Robert C. Bolles conceptualized fear as “a hypothetical cause [motivation] of behavior” whose main purpose is to keep organisms alive.[7] Perhaps there has always been a subconscious doubt or suspicion about the finality of death; or perhaps it was human desperation and delusion arising from loss that left behind an ancient record of fear, and of subsequent acts of defiance in the face of death, still germane today.

Today we see the fruits of this fear of dying, death, or being somewhere in between in the form of advances in medical technology and legal guidelines. Though death is still commonly understood to be a discrete status—a state one enters but cannot exit—medical and legal definitions have evolved over time to approach death more gingerly, the former understanding death as a nuanced scale, the latter drawing hard lines on that scale.[8] Today, 43 states have enacted the Uniform Law Commission’s Uniform Determination of Death Act (“UDDA”).[9] The UDDA provides two distinct standards, either of which suffices for someone to be legally deemed dead: (1) the irreversible cessation of circulatory and respiratory functions, or (2) the irreversible cessation of all functions of the entire brain, including the brainstem.[10] The UDDA’s legal determination of death, in its bright-line language, relies in large part on “generally accepted medical standards” of the medical practice and practitioner discretion. While loss of respiratory and circulatory function and death of the entire brain are the common medical parameters for determining death, the UDDA is distinctly “silent on acceptable diagnostic tests [and] procedures.” It is argued that this language is purposeful, creating statutory flexibility in an era of constant scientific and medical research, understanding, and innovation.

As it relates to brain death, the medical approach to determination is a scale that contemplates brain injury/activity and somatic survival, a “continuous biological spectrum”[11] that naturally contemplates not only a patient’s current status but the possibility and likelihood of both degenerative and improved changes in status. But, as a matter of policy and regulation, the UDDA drew a bright line on that spectrum and called it brain death. Someone in a permanent vegetative state is not considered brain-dead, but someone with a necrotic, “liquified” brain is. As a result, the medical determination of death is arguably subservient to the legal determination, which designates a point of no return–not because the medical professionals see no alternate path, but because the law has provided a blindfold required from that point forward.

While this may be an efficient way to ensure people are not denied advanced and improved medical practices, it also means that there is ambiguity and variance from state to state as to the governing factual guidelines and standards. There are practical and policy reasons for this, including maximizing the efficacy and reach of organ donation systems and generally preventing strain on healthcare resources and systems; nonetheless, the bright line fails to be so bright. While the Commission could have situated the UDDA such that the determinations of legal brain death and medical brain death worked in tandem, triggered at some distinct moment by certain explicit conditions or after certain standardized medical tests, it did not.

Is that because it will not, or because it simply cannot do so? Today, the standards become increasingly muddied by advancements in technology to prolong life that have, in turn, paradoxically, also prolonged the process of dying—expanding the scope of that liminal space. Artificial means of keeping someone alive where they otherwise could not stay so imperatively create a discrete state of the act of dying. New legal and medical methods of describing these states have become imperative, with lively debate ongoing over bridging the medical-legal gap in death determination[12]—specifically, the distinction between the “permanent” (will not reverse) and “irreversible” (cannot reverse) cessation of cardiac, respiratory, and neurological function relative to the meaning of a determination of death.[13] James Bernat, a neurologist and academic who examines the convergence of ethics, philosophy, and neurology, is a contemporary advocate calling for reconciliation between medical practice and the law.[14] Dr. Bernat suggests the UDDA’s irreversibility standard—a function that has stopped and cannot be restarted—be replaced with a permanence standard—a function that has stopped, will not restart on its own, and will not be restarted by intervention.[15] This distinction, in large part, attempts to address the incongruence of the UDDA’s language, which, by the ULC’s own concession, “sets the general legal standard for determining death, but not the medical criteria for doing so.”[16] In effect, in trying to define and characterize death and dying, we have created a dynamic wherein one could be medically dead, but not legally.[17]

Upon his deathbed, the composer Frédéric Chopin uttered his last words: “The earth is suffocating …. Swear to make them cut me open, so that I won’t be buried alive.”[18] A century and a half later, only time will tell whether law and medicine can find a way to reconcile the increasingly ambiguous nature of dying and define death explicitly and discretely—no bells required.

Notes

[1] Steven B. Harris, M.D., The Society for the Recovery of Persons Apparently Dead, Cryonics (Sept. 1990), https://www.cryonicsarchive.org/library/persons-apparently-dead/.

[2] Id.

[3] Id.; Shannon E. Grogans et al., The Nature and Neurobiology of Fear and Anxiety: State of the Science and Opportunities for Accelerating Discovery, 151 Neuroscience & Biobehavioral Revs. 105237 (2023), https://doi.org/10.1016/j.neubiorev.2023.105237.

[4] Harris, supra note 1.

[5] Id.

[6] Id.

[7] Grogans et al., supra note 3.

[8] Robert D. Truog, Lessons from the Case of Jahi McMath, 48 Hastings Ctr. Rep., Suppl. 4, at S70–S73 (2018), doi:10.1002/hast.961.

[9] Unif. Determination of Death Act § 1 (Nat’l Conf. of Comm’rs on Unif. State Laws 1981).

[10] Id.

[11] Truog, supra note 8, at S72.

[12] James L. Bernat, Conceptual Issues in DCDD Donor Death Determination, 48 Hastings Ctr. Rep., Suppl. 4, at S26–S28 (2018), doi:10.1002/hast.948.

[13] James L. Bernat, How the Distinction Between ‘Irreversible’ and ‘Permanent’ Illuminates Circulatory-Respiratory Death Determination, 35 J. Med. & Phil. 242 (2010), doi:10.1093/jmp/jhq018.

[14] Faculty Database: James L. Bernat, M.D., Dartmouth Geisel Sch. of Med., https://geiselmed.dartmouth.edu/faculty/facultydb/view.php/?uid=353 (last accessed Oct. 23, 2023).

[15] Brendan Parent, JD & Angela Turi, Death’s Troubled Relationship With the Law, 22 AMA J. Ethics E1055 (2020), doi:10.1001/amajethics.2020.1055; see also James L. Bernat, Point: Are Donors After Circulatory Death Really Dead, and Does It Matter? Yes and Yes, 138 Chest 13 (2010).

[16] Thaddeus Pope, Brain Death and the Law: Hard Cases and Legal Challenges, 48 Hastings Ctr. Rep., Suppl. 4, at S46–S48 (2018), doi:10.1002/hast.954.

[17] Id.

[18] Death: The Last Taboo – Safety Coffins, Australian Museum (Oct. 20, 2020), https://australian.museum/about/history/exhibitions/death-the-last-taboo/safety-coffins/ (last accessed Oct. 23, 2023).


AR/VR/XR: Breaking Through the Wall of Legal Issues That Limit Users in Either the Real World or the Virtual World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing our lives in many aspects. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple unveiled its Vision Pro in 2023, no one doubts that this technology will soon be a vital driver for both tech and business. Setting aside the why: how significantly can this type of technology impact human beings? What industries will it affect? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical display elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or a multi-projected environment) to create a simulated and immersive environment that can provide an experience either similar to or completely different from the real world,[3] while Mixed Reality/Extended Reality (“XR”) glasses are relatively compact and sleek and weigh much less than VR headsets.[4] XR’s most distinguishing quality relative to VR is that individuals can still see the world around them, as XR projects a translucent screen on top of the real world. The differences between these three vision technologies may soon be eliminated, with the possibility of their combination into one device.

Typically, vision technology assists people in mentally processing 2-D information into a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a disconnect between their eyes and bodies after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments admit that there have been adverse effects in AR/VR trials due to the direct contradiction between the visual system and the motion system.[6] Researchers have also discovered that the technology affects the way people behave in social situations, as users feel less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion, and it is projected to reach $88 billion by 2026.[8] As industry specialists and analysts indicate, outside of gaming, a significant portion of vision technology income will come from e-commerce and retail (fashion and beauty), manufacturing, the education industry, healthcare, real estate, and e-sports, which will further impact entertainment, cost of living, and innovation.[9] To manage this tremendous opportunity, it is crucial to understand potential legal risks and develop a comprehensive legal strategy to address these upcoming challenges.

To expand one’s business model, it is important to maximize the protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also aligns with contractual concerns, service remedies, and liability for infringement of third-party IP. For example, in patent prosecution it is difficult to argue that the hardware executing an invention (characters or data information) is a particular machine, or that the designated steps performed by the hardware are special under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against over-abstraction of inventions: “[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas… [T]read carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns may include data privacy, harassment, virtual trespass, and even violent attacks due to the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision technology devices are ambiguous. It is also unclear whether courts will accept a defense of error in judgment due to the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and spend the most time using these devices. Experts have raised concerns about the adverse effects of AR/VR devices, questioning whether they negatively impact the mental and physical health of younger generations. Another concern is that these individuals may experience a decline in social communication skills and feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through constant scanning of users’ surroundings, although some contend that the Children’s Online Privacy Protection Act (COPPA) prohibits the collection of personally identifiable information if an operator believes a user to be under the age of thirteen.[12]

According to research trends, combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale, bringing people together in shared virtual environments and enhancing comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for this upcoming world.

Notes

[1] Eric Ravenscraft, What is the Metaverse, Exactly?, Wired (Jun. 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sep. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci.Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkt. and Mkt., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining an evaluation standard on when determining whether a claim recites significantly more than a judicial exception depends on whether the additional elements(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 CFR pt. 312.


Regulating the Revolution: A Legal Roadmap to Optimizing AI in Healthcare

Fazal Khan, MD-JD: Nexbridge AI

In the field of healthcare, the integration of artificial intelligence (AI) presents a profound opportunity to revolutionize care delivery, making it more accessible, cost-effective, and personalized. Burgeoning demographic shifts, such as aging populations, are exerting unprecedented pressure on our healthcare systems, exacerbating disparities in care and already-soaring costs. Concurrently, the prevalence of medical errors remains a stubborn challenge. AI stands as a beacon of hope in this landscape, capable of augmenting healthcare capacity and access, streamlining costs by automating processes, and refining the quality and customization of care.

Yet, the journey to harness AI’s full potential is fraught with challenges, most notably the risks of algorithmic bias and the diminution of human interaction. AI systems, if fed with biased data, can become vehicles of silent discrimination against underprivileged groups. It is essential to implement ongoing bias surveillance, promote the inclusion of diverse data sets, and foster community involvement to avert such injustices. Healthcare institutions bear the responsibility of ensuring that AI applications adhere strictly to anti-discrimination statutes and medical ethical standards.
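
What “ongoing bias surveillance” can mean in practice is a recurring, automated comparison of a deployed model’s error rates across patient groups, with human review triggered when they diverge. The sketch below illustrates one such check under hypothetical field names and thresholds; it is not a reference to any particular institution’s pipeline.

```python
# A minimal sketch of ongoing bias surveillance for a deployed clinical risk
# model, assuming each prediction batch carries a recorded demographic
# attribute and a known outcome. Field names and the 5-point gap threshold
# are hypothetical choices for illustration.
from dataclasses import dataclass

@dataclass
class Record:
    group: str         # demographic attribute recorded for auditing
    risk_score: float  # model output in [0, 1]
    outcome: int       # 1 if the condition actually occurred

def false_negative_rate(records, threshold=0.5):
    """Share of true positives the model scored below the alert threshold."""
    positives = [r for r in records if r.outcome == 1]
    if not positives:
        return None
    missed = sum(1 for r in positives if r.risk_score < threshold)
    return missed / len(positives)

def audit(records, max_gap=0.05):
    """Flag when subgroup false-negative rates diverge beyond max_gap."""
    rates = {}
    for group in {r.group for r in records}:
        rate = false_negative_rate([r for r in records if r.group == group])
        if rate is not None:
            rates[group] = rate
    if len(rates) < 2:
        return rates, 0.0, False
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap

# In practice this would run on every month's predictions; a flagged gap
# triggers human review of the model and its data, not automatic changes.
```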

Moreover, it is crucial to safeguard the essence of human touch and empathy in healthcare. AI’s prowess in automating administrative functions cannot replace the human art inherent in the practice of medicine—be it in complex diagnostic processes, critical decision-making, or nurturing the therapeutic bond between healthcare providers and patients. Policy frameworks must judiciously navigate the fine line between fostering innovation and exercising appropriate control, ensuring that technological advancements do not overshadow fundamental human values.

The quintessential paradigm would be one where human acumen and AI’s analytical capabilities coalesce seamlessly. While humans should steward the realms requiring nuanced judgment and empathic interaction, AI should be relegated to the execution of repetitive tasks and the extrapolation of data-driven insights. Placing patients at the epicenter, this symbiotic union between human clinicians and AI can broaden access to healthcare, reduce expenditures, and enhance service quality, all the while maintaining trust through unyielding transparency. Nonetheless, the realization of such a model mandates proactive risk management and the encouragement of innovation through sagacious governance. By developing governmental and institutional policies that are both cautious and compassionate by design, AI can indeed be the catalyst for a transformative leap in healthcare, enriching the dynamics between medical professionals and the populations they serve.