
A Digital Brick in the Trump-Biden Wall

Solomon Steen, MJLST Staffer

“Alexander explained to a CBP officer at the limit line between the U.S. and Mexico that he was seeking political asylum and refuge in the United States; the CBP officer told him to ‘get the fuck out of here’ and pushed him backwards onto the cement, causing bruising. Alexander has continued to try to obtain a CBP One appointment every day from Tijuana. To date, he has been unable to obtain a CBP One appointment or otherwise access the U.S. asylum process…”[1]

Alexander fled kidnapping and threats in Chechnya to seek security in the US.[2] His story is a common one among migrants who have received a similar welcome. People have died and been killed waiting for an appointment to apply for asylum at the border.[3] Children with autism and schizophrenia have had to wait, exposed to the elements.[4] People whose medical vulnerabilities should have entitled them to relief have instead been preyed upon by gangs or corrupt police.[5] What is the wall blocking these people from fleeing persecution and reaching safety in the US?

The Biden administration’s failed effort to pass bipartisan legislation to curb access to asylum is part of a broader pattern of Trump-Biden continuity in immigration policy.[6] This continuity is defined by bipartisan support for increased funding for Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) for enforcement of immigration law at the border and in the interior, respectively.[7] Successive Democratic and Republican administrations have increased investment in interior and border enforcement.[8] That investment has expanded technological mechanisms to surveil migrants and facilitate administration of removal.

As part of its efforts to curtail access to asylum, the Biden administration promulgated its Circumvention of Lawful Pathways rule.[9] This rule revived the Trump administration’s entry and transit bans.[10] The transit ban bars migrants from applying for asylum if they crossed through a third country en route to the US.[11] The entry ban bars asylum applicants who did not present themselves at a port of entry.[12] In East Bay Sanctuary Covenant v. Biden, the Ninth Circuit determined the rule was unlawful for directly contradicting congressional intent in the INA, which grants a right to seek asylum to any migrant in the US regardless of manner of entry.[13] The Trump entry ban was similarly found unlawful for directly contravening the same language in the INA.[14] The Biden ban remains in effect to allow litigation regarding its legality to reach its ultimate conclusion.

The Circumvention of Lawful Pathways rule effecting the entry ban gave rise to a pattern and practice of metering asylum applicants: requiring applicants to comply with specific conditions before presenting at a port of entry in order to avoid being turned back.[15] To facilitate the arrival of asylum seekers within a specific appointment window, DHS launched the CBP One app.[16] The app would ostensibly allow asylum applicants to schedule an appointment at a port of entry to present themselves for asylum.[17]

Al Otro Lado (AOL), Haitian Bridge, and other litigants have filed a complaint alleging the government lacks the statutory authorization to force migrants to seek an appointment through the app and that its design frustrates their rights.[18] AOL notes that by requiring migrants to make appointments to claim asylum via the app, the Biden administration has imposed a number of extra-statutory requirements on migrants entitled to claim asylum, which include that they:

(a) have access to an up-to-date, well-functioning smartphone;
(b) fluently read one of the few languages currently supported by CBP One;
(c) have access to a sufficiently strong and reliable mobile internet connection and electricity to submit the necessary information and photographs required by the app;
(d) have the technological literacy to navigate the complicated multi-step process to create an account and request an appointment via CBP One;
(e) are able to survive in a restricted area of Mexico for an indeterminate period of time while trying to obtain an appointment; and
(f) are lucky enough to obtain one of the limited number of appointments at certain POEs.[19]

The Civil Rights Education and Enforcement Center (CREEC) and the Texas Civil Rights Project have similarly filed a complaint with the Department of Homeland Security’s Office for Civil Rights and Civil Liberties alleging that CBP One is illegally inaccessible to disabled people and that this inaccessibility has consequently violated other rights they hold as migrants.[20] Migrants may become disabled as a consequence of the immigration process or of the persecution they suffered that establishes their prima facie claim to asylum.[21] The CREEC complaint specifically cites Section 508 of the Rehabilitation Act, which provides that disabled members of the public must enjoy access to government technology “comparable to the access” of everyone else.[22]

CREEC and AOL – and the other service organizations joining their respective complaints – note that they have limited capacity to assist asylum seekers.[23] Migrants without such institutional or community support are more vulnerable to being denied access to asylum and to opportunistic criminal predation while they wait at the border.[24]

There is a litany of technical problems with the app that can frustrate meritorious asylum claims. The app requires applicants to submit a picture of their face.[25] The app’s facial recognition software frequently fails to identify portraits of darker-skinned people.[26] Racial persecution is one of the statutory grounds for claiming asylum.[27] A victim of race-based persecution can thus have their asylum claim frustrated on the basis of their race because of this app. Persecution on the basis of membership in a particular social group can also form the basis for an asylum claim.[28] An applicant could establish membership in a particular social group composed of certain disabled people.[29] People with facial disabilities have also struggled with the facial recognition feature.[30]

The mere fact that an app has been substituted for human interaction contributes to the frustration of disabled migrants’ statutory rights. Medically fragile people statutorily eligible to enter the US via humanitarian parole are unable to access that relief electronically.[31] Individuals with intellectual disabilities have also had their claims delayed by navigating CBP One.[32] Asylum officers are statutorily required to evaluate whether asylum seekers lack the mental competence to assist in their applications and, if so, ensure they have qualified assistance to vindicate their claims.[33]

The entry ban has textual exceptions for migrants whose attempts to set appointments are frustrated by technical issues.[34] CBP officials at many ports have a pattern and practice of ignoring those exceptions and refusing all migrants who lack a valid CBP One appointment.[35]

AOL seeks relief in the termination of the CBP One turnback policy: essentially, ensuring people can exercise their statutory right to claim asylum at the border without an appointment.[36] CREEC seeks relief in the form of a fully accessible CBP One app and accommodation policies to ensure disabled asylum seekers can have “meaningful access” to the asylum process.[37]

Comprehensively safeguarding asylum seekers’ rights would require more than abandoning CBP One. A process that ensures medically vulnerable persons can access timely care and persons with intellectual disabilities can get legal assistance would require deploying more border resources, such as co-locating medical and resettlement organization staff with CBP. Meaningfully curbing racial, ethnic, and linguistic discrimination by CBP, ICE, and asylum officers would require expensive and extensive retraining. However, it is evident that CBP One is not serving the ostensible goal of making the asylum process more efficient, though it may serve the political goal of reinforcing the wall.

Notes

[1] Complaint at 9, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[2] Id. at 46.

[3] Ana Lucia Verduzco & Stephanie Brewer, Kidnapping of Migrants and Asylum Seekers at the Texas-Tamaulipas Border Reaches Intolerable Levels, WOLA (Apr. 4, 2024), https://www.wola.org/analysis/kidnapping-migrants-asylum-seekers-texas-tamaulipas-border-intolerable-levels.

[4] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 28, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[5] Linda Urueña Mariño & Christina Asencio, Human Rights First Tracker of Reported Attacks During the Biden Administration Against Asylum Seekers and Migrants Who Are Stranded in and/or Expelled to Mexico, Human Rights First, (Jan. 13, 2022),  at 10, 16, 19, https://humanrightsfirst.org/wp-content/uploads/2022/02/AttacksonAsylumSeekersStrandedinMexicoDuringBidenAdministration.1.13.2022.pdf.

[6] Actions, H.R. 815, National Security Act, 2024, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/815/all-actions (immigration language failing to pass on Feb. 7, 2024).

[7] American Immigration Council, The Cost of Immigration Enforcement and Border Security (Jan. 20, 2021), at 2, https://www.americanimmigrationcouncil.org/sites/default/files/research/the_cost_of_immigration_enforcement_and_border_security.pdf.

[8] Id. at 3-4.

[9] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[10] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[11] Complaint at 22, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[12] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[13] Id. at 669-70.

[14] E. Bay Sanctuary Covenant v. Trump, 349 F. Supp. 3d 838, 844 (N.D. Cal. 2018).

[15] Complaint at 2, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[16] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[17] Id.

[18] Complaint at 57, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[19] Complaint at 3, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[20] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[21] Ruby Ritchin, “I Felt Not Seen, Not Heard”: Gaps in Disability Access at USCIS for People Seeking Protection, Human Rights First, at 12 (Sept. 19, 2023), https://humanrightsfirst.org/library/i-felt-not-seen-not-heard-gaps-in-disability-access-at-uscis-for-people-seeking-protection.

[22] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 6, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[23] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also Complaint, at 4, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[24] Dara Lind, CBP’s Continued ‘Turnbacks’ Are Sending Asylum Seekers Back to Lethal Danger, Immigration Impact (Aug. 10, 2023), https://immigrationimpact.com/2023/08/10/cbp-turnback-policy-lawsuit-danger.

[25] Complaint at 31, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[26] Id.

[27] 8 U.S.C.A. § 1101(a)(42)(A) (West).

[28] Id.

[29] Hernandez Arellano v. Garland, 856 F. App’x 351, 353 (2d Cir. 2021).

[30] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 9, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[31] Id.

[32] Id.

[33] Complaint at 9, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[34] Complaint at 22, Al Otro Lado v. Mayorkas, No. 3:23-cv-01367-AGS-BLM (S.D. Cal. July 26, 2023).

[35] Id. at 23.

[36] Id. at 65-66.

[37] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 10-11, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.


The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promising potential of AI while managing the risks of AI’s potential misuse.[1] The Executive Order establishes (1) new standards for AI development and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that ensures it advances equity and civil rights, fights against algorithmic discrimination, and creates efficiencies and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact is yet to be seen. That impact will depend on implementation by the agencies tasked with taking action: the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls to action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), Secretary of Commerce, Secretary of Energy, Secretary of Homeland Security, and other heads of agencies selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing or demonstrating an intent to develop potential dual-use foundational models will be required to provide the Federal Government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing conducted by the NIST.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launching a pilot program distributing computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC will be charged with using its authorities to ensure fair competition in the AI and semiconductor industries.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on effects of AI on the labor market and employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan to utilize automated or algorithmic systems in administering public benefits and services and ensure equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly and exclusively with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies–Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration celebrated its achievement of getting seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to make voluntary commitments to work towards developing AI technology in a safe, secure, and transparent manner.[7] The voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments for being vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on these commitments.[10] Notably, the White House has only allowed leaders of large tech companies to weigh in on requirements for Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders in their AI Insight Forum, the attendees for the first and second forums were still largely limited to CEOs or cofounders of prominent tech companies, VC executives, or professors at leading universities.[12] Marc Andreessen, a co-founder of Andreessen Horowitz, a prominent VC fund, noted that in order to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration in which they voiced their concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the Biden administration on the importance of developing regulations that allow for robust development of open source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order includes a claim that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants as smaller companies will struggle to comply with the Order’s reporting requirements.[18]

Open source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While both Llama and ChatGPT are no longer open source, their development and advancement relied heavily on open source models and frameworks like Transformer, TensorFlow, and PyTorch.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically have leaner teams, and there is rarely a team solely dedicated to compliance. These reporting requirements will likely create barriers to entry for tech challengers who are pioneering open source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1720517026538778657%7Ctwgr%5Ec9ecbf7ac4fe23b03d91aea32db04b2e3ca656df%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcointelegraph.com%2Fnews%2Fbiden-ai-executive-order-certainly-challenging-open-source-ai-industry-insiders.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.


A Requiem for Fear, Death, and Dying: Law and Medicine’s Perpetually Unfinished Composition

Audrey Hutchinson, MJLST Staffer

In the 18th and 19th centuries, the coffins of the newly deceased lay six feet below but were often outfitted with a novel accessory emerging from the freshly turned earth: a bell hung from an inconspicuous stake, its clapper adorned with a rope that disappeared beneath the dirt.[1] Rather than serving as a bygone tradition of the mourning process, some symbolic way to emulate connection with the departed, the bell served a more practical purpose: it was an emergency safeguard against premature burial.[2] The design, in all its variously patented 18th- and 19th-century iterations, draws upon a foundational quality, one that some biopsychological theories hold to be biologically imperative: fear of death.[3]

In the mid-1700s, the French anatomist Jacques-Bénigne Winslow published a book ominously titled The Uncertainty of the Signs of Death and the Danger of Precipitate Interments and Dissections, marking a decisive and public moment in medical history in which death was introduced to a highly unsettled public as something nebulous rather than definite.[4] For centuries, medical tests and parameters had existed by which doctors could “affirmatively” conclude a patient had, indeed, passed.[5] While Victorian newspapers were riddled with adverts for “safety coffins,” a macabre but unsurprising expression of capitalism amid mounting cholera deaths and the accompanying reports of premature burial, efforts to evade the liminal space of “dying” and the finality of “death” appear as far back as ancient Hebrew scriptures, wherein resuscitation attempts via chest compressions are described.[6] Perhaps this is unsurprising: psychologist and experimental theorist Robert C. Bolles conceptualized fear as “a hypothetical cause [motivation] of behavior” whose main purpose is to keep organisms alive.[7] Perhaps there has always been a subconscious doubt or suspicion about the finality of death, or perhaps it was human desperation and delusion arising from loss that has left behind an ancient record of fear, and of subsequent acts of defiance in the face of death, still germane today.

Contemporarily, we see the fruits of this fear of dying, death, or being somewhere in between in the form of advances in medical technology and legal guidelines. Though death is still commonly understood to be a discrete status, a state one enters but cannot exit, medical and legal definitions have evolved to approach death more gingerly: the former understands death as a nuanced scale, while the latter draws hard lines on that scale.[8] Today, 43 states have enacted the Uniform Law Commission’s Uniform Determination of Death Act (“UDDA”).[9] Under the UDDA, a person is legally dead upon meeting either of two standards: (1) the irreversible cessation of circulatory and respiratory functions, or (2) the irreversible cessation of all functions of the entire brain, including the brainstem.[10] The UDDA’s legal determination of death, in its bright-line language, relies in large part on “generally accepted medical standards” of the medical practice and on practitioner discretion. While the loss of respiratory and circulatory functions and the death of the entire brain are the common parameters for determining death medically, the UDDA is distinctly “silent on acceptable diagnostic tests [and] procedures.” It is argued that this language is purposeful, creating statutory flexibility in an era of constant scientific and medical research, understanding, and innovation.

As it relates to brain death, the medical approach to determination is a scale that contemplates brain injury/activity and somatic survival, a “continuous biological spectrum”[11] that naturally contemplates not only a patient’s current status but also the possibility and likelihood of both degenerative and improved changes in status. As a matter of policy and regulation, however, the UDDA drew a bright line on that spectrum and called it brain death. Someone in a permanent vegetative state is not considered brain-dead, but someone with a necrotic “liquified” brain is. As a result, the medical determination of death is arguably subservient to the legal determination, which designates a point of no return, not because medical professionals see no alternate path, but because the law has provided a blindfold required from that point forward.

While this may be an efficient way to ensure people are not denied advanced and improved medical practices, it also means that there is ambiguity and variance from state to state as to the governing factual guidelines and standards. There are practical and policy reasons for this, including maximizing the efficacy and reach of organ donation systems and generally preventing strain on healthcare resources and systems; nonetheless, the bright line fails to be so bright. While the Commission could have situated the UDDA such that the determinations of legal brain death and medical brain death worked in tandem, triggered at some distinct moment by certain explicit conditions or after certain standardized medical tests, it did not.

Is that because it will not, or because it simply cannot do so? Today, the standards are increasingly muddied by advancements in technology that prolong life and have, paradoxically, also prolonged the process of dying, expanding the scope of that liminal space. Artificial means of keeping alive someone who otherwise could not survive have created a discrete state of dying. New legal and medical methods of describing these states have become imperative, with lively debate ongoing about bridging the medical-legal gap in death determination[12]; specifically, the distinction between the “permanent” (will not reverse) and “irreversible” (cannot reverse) cessation of cardiac, respiratory, and neurological function relative to the meaning of a determination of death.[13] James Bernat, a neurologist and academic who examines the convergence of ethics, philosophy, and neurology, is a contemporary advocate calling for reconciliation of medical practice with the law.[14] Dr. Bernat suggests the UDDA’s irreversibility standard (a function that has stopped and cannot be restarted) be replaced with a permanence standard (a function that has stopped, will not restart on its own, and will not be restarted by intervention).[15] This distinction is in large part an attempt to address the incongruence of the UDDA’s language, which, by the ULC’s own concession, “sets the general legal standard for determining death, but not the medical criteria for doing so.”[16] In effect, in trying to define and characterize death and dying, we have created a dynamic wherein one could be medically dead, but not legally.[17]

On his deathbed, the composer Frédéric Chopin uttered his last words: “The earth is suffocating …. Swear to make them cut me open, so that I won’t be buried alive.”[18] A century and a half later, only time will tell whether law and medicine can reconcile the increasingly ambiguous nature of dying and define death explicitly and discretely—no bells required.

Notes

[1] Steven B. Harris, The Society for the Recovery of Persons Apparently Dead, Cryonics (Sept. 1990), https://www.cryonicsarchive.org/library/persons-apparently-dead/.

[2] Id.

[3] Id.; Shannon E. Grogans et al., The Nature and Neurobiology of Fear and Anxiety: State of the Science and Opportunities for Accelerating Discovery, 151 Neuroscience & Biobehavioral Revs. 105237 (2023), https://doi.org/10.1016/j.neubiorev.2023.105237.

[4] Harris, supra note 1.

[5] Id.

[6] Id.

[7] Grogans et al., supra note 3.

[8] Robert D. Truog, Lessons from the Case of Jahi McMath, 48 Hastings Ctr. Rep. S70 (2018), doi:10.1002/hast.961.

[9] Unif. Determination of Death Act § 1 (Nat’l Conf. of Comm’rs on Unif. State Laws 1981).

[10] Id.

[11] Truog, supra note 8, at S72.

[12] James L. Bernat, Conceptual Issues in DCDD Donor Death Determination, 48 Hastings Ctr. Rep. S26 (2018), doi:10.1002/hast.948.

[13] James L. Bernat, How the Distinction Between ‘Irreversible’ and ‘Permanent’ Illuminates Circulatory-Respiratory Death Determination, 35 J. Med. & Phil. 242 (2010), doi:10.1093/jmp/jhq018.

[14] Faculty Database: James L. Bernat, M.D., Dartmouth Geisel Sch. of Med., https://geiselmed.dartmouth.edu/faculty/facultydb/view.php/?uid=353 (last visited Oct. 23, 2023).

[15] Brendan Parent & Angela Turi, Death’s Troubled Relationship With the Law, 22 AMA J. Ethics E1055 (2020), doi:10.1001/amajethics.2020.1055; see also James L. Bernat, Point: Are Donors After Circulatory Death Really Dead, and Does It Matter? Yes and Yes, 138 Chest 13 (2010).

[16] Thaddeus Pope, Brain Death and the Law: Hard Cases and Legal Challenges, 48 Hastings Ctr. Rep. S46 (2018), doi:10.1002/hast.954.

[17] Id.

[18] Death: The Last Taboo – Safety Coffins, Australian Museum (Oct. 20, 2020), https://australian.museum/about/history/exhibitions/death-the-last-taboo/safety-coffins/ (last visited Oct. 23, 2023).


AR/VR/XR: Breaking the Wall of Legal Issues Used to Limit in Either the Real-World or the Virtual-World

Sophia Yao, MJLST Staffer

From Pokémon Go to the Metaverse,[1] VR headsets to XR glasses, vision technology is quickly changing many aspects of our lives. The best-known companies and groups in this market include Apple’s Vision Products Group (VPG), Meta’s Reality Labs, Microsoft, and others. Especially after Apple unveiled its Vision Pro in 2023, no one doubts that this technology will soon be a vital driver for both tech and business. But how might this technology impact human lives? What industries will be affected by it? And what kinds of legal risks are to come?

Augmented Reality (“AR”) refers to a display of a real-world environment whose elements are augmented by (i.e., overlaid with) one or more layers of text, data, symbols, images, or other graphical display elements.[2] Virtual Reality (“VR”) uses a device (e.g., a headset or multi-projected environment) to create a simulated, immersive environment that can provide an experience either similar to or completely different from the real world,[3] while Mixed Reality/Extended Reality (“XR”) glasses are relatively compact and sleek and weigh much less than VR headsets.[4] XR’s most distinguishing quality relative to VR is that individuals can still see the world around them, because XR projects a translucent screen on top of the real world. The differences among these three vision technologies may soon be eliminated by their combination into one device.

Typically, vision technology helps people mentally process 2-D information in a 3-D world by integrating digital information directly into real objects or environments. This can improve individuals’ ability to absorb information, make decisions, and execute required tasks quickly, efficiently, and accurately. However, many people report nausea, ear pain, and a disconnect between their eyes and body after using such products.[5] Even experts who use AR/VR products in emerging psychotherapy treatments admit that AR/VR trials have produced adverse effects stemming from the mismatch between the visual system and the motion system.[6] Researchers have also discovered that the technology affects how people behave in social situations because users feel less socially connected to others.[7]

In 2022, the global augmented reality market was valued at nearly $32 billion and is projected to reach $88 billion by 2026.[8] According to industry specialists and analysts, outside of gaming, a significant portion of vision technology revenue will come from e-commerce and retail (fashion and beauty), manufacturing, education, healthcare, real estate, and e-sports, which will in turn impact entertainment, the cost of living, and innovation.[9] To seize this tremendous opportunity, it is crucial to understand the potential legal risks and develop a comprehensive legal strategy to address these upcoming challenges.

To expand one’s business model, it is important to maximize the protection of intellectual property (IP), including virtual worlds, characters, and experiences. Doing so also implicates contractual concerns, service remedies, and liability for infringement of third-party IP. For example, in patent prosecution it is difficult to argue that the hardware executing the invention (characters or data information) is a particular machine, or that the designated steps performed by the hardware are more than well-understood, routine, and conventional under MPEP § 2106.05(d).[10] Furthermore, the Federal Circuit has cautioned against over-abstraction of inventions – that “[a]t some level, all inventions embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas…[T]read carefully in construing this exclusionary principle lest it swallow all of patent law.”[11]

From a consumer perspective, legal concerns may include data privacy, harassment, virtual trespass, or even violent attacks due to the aforementioned disconnect between individuals’ eyes and bodies. Courts’ views on virtual trespass created by vision technology devices are ambiguous. It is also unclear whether courts will accept a defense of error in judgment attributable to the adverse effects of using AR/VR devices. One of the most significant concerns is the protection of younger generations, since they are often the target consumers and spend the most time using these devices. Experts have questioned whether AR/VR devices negatively impact the mental and physical health of younger generations, and whether heavy users may experience a decline in social communication skills and feel a stronger connection to machines than to human beings. Many other legal risks surround the use of AR/VR devices, such as the collection of private data without consent through constant scanning of users’ surroundings, although some contend that the Children’s Online Privacy Protection Act (COPPA) prohibits the collection of personally identifiable information if an operator believes a user to be under the age of thirteen.[12]

According to research trends, combining AR, VR, and MR/XR will allow users to transcend distance, time, and scale; bring people together in shared virtual environments; and enhance comprehension, communication, and decision-making efficiency. Once the boundaries between the real world and the virtual world are eliminated, AR/VR devices will “perfectly” integrate with the physical world, whether or not we are prepared for that upcoming world.

Notes

[1] Eric Ravenscraft, What Is the Metaverse, Exactly?, Wired (Jun. 15, 2023, 6:04 PM), https://www.wired.com/story/what-is-the-metaverse/.

[2] Travis Alley, ARTICLE: Pokemon Go: Emerging Liability Arising from Virtual Trespass for Augmented Reality Applications, 4 Tex. A&M J. Prop. L. 273 (2018).

[3] Law Offices of Salar Atrizadeh, Virtual and Augmented Reality Laws, Internet Law. Blog (Dec. 17, 2018), https://www.internetlawyer-blog.com/virtual-and-augmented-reality-laws/.

[4] Simon Hill, Review: Viture One XR Glasses, Wired (Sep. 1, 2023, 7:00 AM), https://www.wired.com/review/viture-one-xr-glasses/.

[5] Alexis Souchet, Virtual Reality has Negative Side Effects—New Research Shows That Can be a Problem in the Workplace, The Conversation (Aug. 8, 2023, 8:29 AM), https://theconversation.com/virtual-reality-has-negative-side-effects-new-research-shows-that-can-be-a-problem-in-the-workplace-210532#:~:text=Some%20negative%20symptoms%20of%20VR,nausea%20and%20increased%20muscle%20fatigue.

[6] John Torous et al., Adverse Effects of Virtual and Augmented Reality Interventions in Psychiatry: Systematic Review, JMIR Ment Health (May 5, 2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10199391/.

[7] How Augmented Reality Affects People’s Behavior, Sci.Daily (May 22, 2019), https://www.sciencedaily.com/releases/2019/05/190522101944.htm.

[8] Augmented Reality (AR) Market by Device Type (Head-mounted Display, Head-up Display), Offering (Hardware, Software), Application (Consumer, Commercial, Healthcare), Technology, and Geography – Global Forecast, Mkts. & Mkts., https://www.marketsandmarkets.com/Market-Reports/augmented-reality-market-82758548.html.

[9] Hill, supra note 4.

[10] Manual of Patent Examining Proc. (MPEP) § 2106.05(d) (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_13d41_124 (explaining that whether a claim recites significantly more than a judicial exception depends on whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry).

[11] Manual of Patent Examining Proc. (MPEP) § 2106.04 (USPTO), https://www.uspto.gov/web/offices/pac/mpep/s2106.html#ch2100_d29a1b_139db_e0; see also Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (2016).

[12] 16 CFR pt. 312.


Regulating the Revolution: A Legal Roadmap to Optimizing AI in Healthcare

Fazal Khan, MD-JD: Nexbridge AI

In the field of healthcare, the integration of artificial intelligence (AI) presents a profound opportunity to revolutionize care delivery, making it more accessible, cost-effective, and personalized. Burgeoning demographic shifts, such as aging populations, are exerting unprecedented pressure on our healthcare systems, exacerbating disparities in care and already-soaring costs. Concurrently, the prevalence of medical errors remains a stubborn challenge. AI stands as a beacon of hope in this landscape, capable of augmenting healthcare capacity and access, streamlining costs by automating processes, and refining the quality and customization of care.

Yet, the journey to harness AI’s full potential is fraught with challenges, most notably the risks of algorithmic bias and the diminution of human interaction. AI systems, if fed with biased data, can become vehicles of silent discrimination against underprivileged groups. It is essential to implement ongoing bias surveillance, promote the inclusion of diverse data sets, and foster community involvement to avert such injustices. Healthcare institutions bear the responsibility of ensuring that AI applications are in strict adherence to anti-discrimination statutes and medical ethical standards.

Moreover, it is crucial to safeguard the essence of human touch and empathy in healthcare. AI’s prowess in automating administrative functions cannot replace the human art inherent in the practice of medicine—be it in complex diagnostic processes, critical decision-making, or nurturing the therapeutic bond between healthcare providers and patients. Policy frameworks must judiciously navigate the fine line between fostering innovation and exercising appropriate control, ensuring that technological advancements do not overshadow fundamental human values.

The quintessential paradigm would be one where human acumen and AI’s analytical capabilities coalesce seamlessly. While humans should steward the realms requiring nuanced judgment and empathic interaction, AI should be relegated to the execution of repetitive tasks and the extrapolation of data-driven insights. Placing patients at the epicenter, this symbiotic union between human clinicians and AI can broaden access to healthcare, reduce expenditures, and enhance service quality, all the while maintaining trust through unyielding transparency. Nonetheless, the realization of such a model mandates proactive risk management and the encouragement of innovation through sagacious governance. By developing governmental and institutional policies that are both cautious and compassionate by design, AI can indeed be the catalyst for a transformative leap in healthcare, enriching the dynamics between medical professionals and the populations they serve.


Raising the Bar: Rule 702 Changes Illuminate the Need for Science Literacy in the Judiciary

David Lee, MJLST Staffer

On December 1, 2023, amendments to Federal Rule of Evidence 702 (FRE 702) took effect.[1] FRE 702 governs the admissibility of expert witness testimony. Central to its purpose is ensuring that such testimony is both relevant to the case and based on a reliable foundation. The rule sets the qualifications for experts based on their knowledge, skill, experience, training, or education, and emphasizes the crucial role of the trial judge as a gatekeeper. This role involves assessing the testimony’s adherence to relevance and reliability before it reaches the jury, thereby upholding the fairness and integrity of the judicial process and ensuring that the legal system remains aligned with evolving scientific and technical knowledge.[2]

Prior to the amendments, there was inconsistent application of FRE 702.[3] According to the Advisory Committee on Evidence Rules, the changes serve to reinforce that the criteria for expert witness admissibility laid out in FRE 702 are just that – criteria for admissibility and not questions of weight.[4] When read properly, FRE 702 makes expert witness reliability a threshold question for judges to answer, and the amendments reinforce this “gatekeeping” function of judges.[5]  With the new amendments clarifying the role of judges as arbiters of whether an expert’s “opinion reflects a reliable application of the principles and methods [of relevant scientific, technical, or other specialized knowledge]” to the facts of the case, it is imperative that the judiciary is sufficiently literate in science and the scientific method to properly serve this function.

Rule 702. Testimony by Expert Witnesses (new language in *italics*, stricken language in [brackets])

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if *the proponent demonstrates to the court that it is more likely than not that*:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue; (b) the testimony is based on sufficient facts or data; (c) the testimony is the product of reliable principles and methods; and (d) [the expert has reliably applied] *the expert’s opinion reflects a reliable application of* the principles and methods to the facts of the case.

The Importance of Scientific Acumen on the Bench

Science literacy on the bench – referring to the judiciary’s understanding of scientific principles and methodologies – has become increasingly vital in the modern legal landscape. This form of literacy encompasses not just a basic grasp of scientific concepts but also an appreciation of how scientific knowledge evolves and how it can be rigorously applied in legal contexts. As courts frequently encounter cases involving complex scientific evidence – from DNA analysis to digital forensics – judges equipped with science literacy are better positioned to evaluate the credibility and relevance of expert testimony accurately. The absence of this scientific acumen can lead to significant judicial errors or misunderstandings.[6] Entire branches of forensic science such as bite mark analysis, microscopic hair comparison, and tire track analysis – once taken for granted as valid and widely accepted by courts – have been discredited as unreliable and lacking scientific underpinnings.[7] These misjudgments about the validity of forensic methods have led to wrongful convictions.[8] A lack of understanding of environmental science has sometimes resulted in rulings in pollution and climate change cases whose interpretation of the science is highly controversial.[9] These examples underline the necessity for judges to possess a robust foundation in scientific literacy to ensure just and informed decision-making in an era where science and technology are deeply intertwined with legal issues.

The Need for Additional Educational Initiatives

Judges are often apprehensive when confronted with complex scientific evidence in cases, partly due to their limited background in the hard sciences, as illustrated by one judge’s shift from pre-med to law after struggles with organic chemistry.[10] This apprehension underscores the growing necessity for science literacy in the judiciary, particularly given that judges are well-equipped to handle the fundamental aspects of scientific evidence: accuracy in observation and logical reasoning.[11] While judges may not be familiar with the specific terminologies and conventions of various scientific fields, their aptitude in swiftly grasping diverse issues, coupled with focused science education programs, would equip them to adeptly handle scientific matters in court. The approach for addressing the distinctive need for judicial education in science necessarily differs from the typical science education for scientists. Judges don’t require extensive training in theoretical concepts or complex statistical inferences as scientists do. Their role is more akin to a scientific journal editor, assessing if the scientific evidence presented meets acceptable standards. This task is supported by attorneys, who educate judges on pertinent scientific issues through briefs and arguments. The key for judicial science education is accessibility and breadth, given the variety of cases a judge encounters. 

The Reference Manual on Scientific Evidence, a crucial resource, helps judges understand scientific foundations and make informed decisions without instructing on the admissibility of specific evidence types; however, the most recent edition was published in 2011 and does not reflect advances in science or emerging technologies relevant to judges today.[12] Judicial education programs supported by the Federal Judicial Center further enhance judges’ capabilities in addressing complex scientific and technical information in our rapidly evolving world.[13] While these resources serve an important function, repeated misjudgments of the quality of scientific evidence by courts indicate that additional resources are needed.

The amendments to Federal Rule of Evidence 702 reemphasize the role that judges play regarding scientific and technical evidence. These changes not only clarify the gatekeeping role of judges in assessing expert witness testimony but also highlight the growing imperative for science literacy in the judiciary. This literacy is essential for judges to make informed, accurate decisions in an era increasingly dominated by complex scientific evidence. The evolving landscape of science and technology underscores the need for continuous educational initiatives to equip judges with the necessary tools to adapt and respond effectively. Resources like the Reference Manual on Scientific Evidence – despite needing updates – and educational programs provided by the Federal Judicial Center play a crucial role in this endeavor. As the legal world becomes more intertwined with scientific advancements, the judiciary’s ability to keep pace will be instrumental in upholding the integrity and efficacy of the justice system. This progression towards a more scientifically literate bench is not just a necessity but a responsibility.

Notes

[1] https://www.gand.uscourts.gov/news/federal-rules-amendments-effective-december-1-2023.

[2] https://www.law.cornell.edu/rules/fre/rule_702.

[3] https://www.jdsupra.com/legalnews/upcoming-fre-702-amendment-reemphasizes-6303408.

[4] Id.

[5] https://www.apslaw.com/its-your-business/2023/11/30/return-of-the-gatekeepers-amendments-to-rule-702-clarify-the-standard-of-admissibility-for-expert-witness-testimony.

[6] https://www.americanbar.org/groups/judicial/publications/appellate_issues/2019/winter/untested-forensic-sciences-present-trouble-in-the-courtroom.

[7] Id.

[8] Id.

[9] https://slate.com/news-and-politics/2023/12/supreme-court-vs-science.html.

[10] https://www.americanbar.org/groups/judicial/publications/judges_journal/2017/fall/science-educatifederal-judges.

[11] Id.

[12] https://www.nationalacademies.org/our-work/science-for-judges-development-of-the-reference-manual-on-scientific-evidence-4th-edition.

[13] Id.


Payment Pending: CFPB Proposes to Regulate Digital Wallets

Kevin Malecha, MJLST Staffer

Federal regulators are increasingly concerned about digital wallets and person-to-person payment (P2P) apps like Apple Pay, Google Pay, Cash App, and Venmo, and how such services might impact the rights of financial consumers. As many as three-quarters of American adults use digital wallets or payment apps and, in 2022, the total value of transactions was estimated at $893 billion, a figure expected to increase to $1.6 trillion by 2027.[1] In November 2023, the Consumer Financial Protection Bureau (CFPB) proposed a rule that would expand its supervisory powers to cover certain nonbank providers of these services. The CFPB, an independent federal agency within the broader Federal Reserve System, was created by the Dodd-Frank Act in response to the 2007-2008 financial crisis and subsequent recession. The Bureau is tasked with protecting consumers in the financial space by promulgating and enforcing rules governing a wide variety of financial activities like mortgage lending, debt collection, and electronic payments.[2]

The CFPB has identified digital wallets and payment apps as products that threaten consumer financial rights and well-being.[3] First, because these services collect mass amounts of transaction and financial data, they pose a substantial risk to consumer data privacy.[4] Second, if the provider ceases operations or faces a “bank” run, any funds held in digital accounts may be lost because Federal Deposit Insurance Corporation (FDIC) protection, which insures deposits up to $250,000 in traditional banking institutions, is often unavailable for digital wallets.[5]

Enforcement and Supervision

The CFPB holds dual enforcement and supervisory roles. As one of the federal agencies charged with “implementing the Federal consumer financial laws,”[6] the enforcement powers of the CFPB are broad, but enforcement actions are relatively uncommon. In 2022, the Bureau brought twenty enforcement actions.[7] By contrast, the Commodity Futures Trading Commission (CFTC), which is also tasked in part with protecting financial consumers, brought eighty-two enforcement actions in the same period.[8] In contrast to the limited and reactive nature of enforcement actions, the CFPB’s supervisory authority requires regulated entities to disclose certain documents and data, such as internal policies and audit reports, and allows CFPB examiners to proactively review their actions to ensure compliance.[9] The Bureau describes its supervisory process as a tool for identifying issues and addressing them before violations become systemic or cause significant harm to consumers.[10]

The CFPB already holds enforcement authority over all digital wallet and payment app services via its broad power to adjudicate violations of financial laws wherever they occur.[11] However, the Bureau has so far enjoyed only limited supervisory authority over the industry.[12] Currently, the CFPB only supervises digital wallets and payment apps when those services are provided by banks or when the provider falls under another CFPB supervision rule.[13] As tech companies like Apple and Google – which do not fall under other CFPB supervision rules – have increasingly entered the market, they have gone unsupervised.

Proposed Rule

Under the organic statute, CFPB’s existing supervisory authority covers nonbank persons that offer certain financial services including real estate and mortgage loans, private education loans, and payday loans.[14] In addition, the statute allows the Bureau to promulgate rules to cover other entities that are “larger participant[s] of a market for other consumer financial products or services.”[15] The proposed rule takes advantage of the power to define “larger participants” and expands the definition to include providers of “general-use digital consumer applications,” which the Bureau defines as funds transfer or wallet functionality through a digital application that the consumer uses to make payments for personal, household, or family purposes.[16] An entity is a “larger participant” if it (1) provides general-use digital consumer payment applications with an annual volume of at least five million transactions and (2) is not a small business as defined by the Small Business Administration.[17] The Bureau will make determinations on an individualized basis and may request documents and information from the entity to determine if it satisfies the requirements, which the entity can then dispute.

Implications for Digital Wallet and Payment App Providers

Major companies like Apple and Google can easily foresee that the CFPB intends to supervise them under the new rule. The Director of the CFPB recently compared the two American companies to Chinese tech companies Alibaba and WeChat that offer similar products and that, in the Director’s view, pose a similar risk to consumer data privacy and financial security.[18] For smaller firms, predicting the Bureau’s intentions is challenging, but existing regulations indicate that the Bureau will issue a written communication to initiate supervision.[19] The entity will then have forty-five days to dispute the finding that they meet the regulatory definition of a “larger participant.”[20] In their response, entities may include a statement of the reason for their objection and records, documents, or other information. Then the Assistant Director of the CFPB will review the response and make a determination. The regulation gives the Assistant Director the ability to request records and documents from the entity prior to the initial notification of intended supervision and throughout the determination process.[21] The Assistant Director also may extend the timeframe for determination beyond the forty-five-day window.[22]

If an entity becomes supervised, the Bureau will contact it for an initial conference.[23] The examiners will then determine the scope of future supervision, taking into consideration the responses at the conference, any records requested prior to or during the conference, and a review of the entity’s compliance management program.[24] The Bureau prioritizes its supervisory activities based on entity size, volume of transactions, size and risk of the relevant market, state oversight, and other market information to which the Bureau has access.[25] Ongoing supervision is likely to vary based on these factors, as well, but may include on-site or remote examination, review of documents and records, testing accounts and transactions for compliance with federal statutes and regulations, and continued review of the compliance management system.[26] The Bureau may then issue a confidential report or letter stating the examiner’s opinion that the entity has violated or is at risk of violating a statute or regulation.[27] While these findings are not final determinations, they do outline specific steps for the entity to regain or ensure compliance and should be taken seriously.[28] Supervisory reports or letters are distinct from enforcement actions and generally do not result in an enforcement action.[29] However, violations may be referred to the Bureau’s Office of Enforcement, which would then launch its own investigation.[30]

The likelihood of the proposed rule resulting in an enforcement action is, therefore, relatively low, but the exposure for regulated entities is difficult to measure because the penalties in enforcement actions vary widely. From October 2022 to October 2023, amounts paid by regulated entities ranged from $730,000 paid by a remittance provider that violated Electronic Funds Transfer rules,[31] to $3.7 billion in penalties and redress paid by Wells Fargo for headline-making violations of the Consumer Financial Protection Act.[32]

Notes

[1] Analysis of Deposit Insurance Coverage on Funds Stored Through Payment Apps, Consumer Fin. Prot. Bureau (Jun. 1, 2023), https://www.consumerfinance.gov/data-research/research-reports/issue-spotlight-analysis-of-deposit-insurance-coverage-on-funds-stored-through-payment-apps/full-report.

[2] Final Rules, Consumer Fin. Prot. Bureau, https://www.consumerfinance.gov/rules-policy/final-rules (last visited Nov. 16, 2023).


Conflicts of Interest and Conflicting Interests: The SEC’s Controversial Proposed Rule

Shaadie Ali, MJLST Staffer

A controversial proposed rule from the SEC on AI and conflicts of interest is generating significant pushback from brokers and investment advisers. The proposed rule, dubbed “Reg PDA” by industry commentators in reference to its focus on “predictive data analytics,” was issued on July 26, 2023.[1] Critics claim that, as written, Reg PDA would require broker-dealers and investment managers to effectively eliminate the use of almost all technology when advising clients.[2] The SEC claims the proposed rule is intended to address the potential for AI to hurt more investors more quickly than ever before, but some critics argue that the SEC’s proposed rule would reach far beyond generative AI, covering nearly all technology. Critics also highlight the requirement that conflicts of interest be eliminated or neutralized as nearly impossible to meet and a departure from traditional principles of informed consent in financial advising.[3]

The SEC’s 2-page fact sheet on Reg PDA describes the 239-page proposal as requiring broker-dealers and investment managers to “eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests.”[4] The proposal defines covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”[5] Critics have described this definition of “covered technology” as overly broad, with some going so far as to suggest that a calculator may be “covered technology.”[6] Despite commentators’ insistence, this particular contention is implausible – in its Notice of Proposed Rulemaking, the SEC stated directly that “[t]he proposed definition…would not include technologies that are designed purely to inform investors.”[7] More broadly, though, the SEC touts the proposal’s broadness as a strength, noting it “is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model.”[8]

This move by the SEC comes amidst concerns raised by SEC chair Gary Gensler and the Biden administration about the potential for the concentration of power in artificial intelligence platforms to cause financial instability.[9] On October 30, 2023, President Biden signed an Executive Order that established new standards for AI safety and directed the issuance of guidance for agencies’ use of AI.[10] When questioned about Reg PDA at an event in early November, Gensler defended the proposed regulation by arguing that it was intended to protect online investors from receiving skewed recommendations.[11] Elsewhere, Gensler warned that it would be “nearly unavoidable” that AI would trigger a financial crisis within the next decade unless regulators intervened soon.[12]

Gensler’s explanatory comments have done little to curb criticism by industry groups, who have continued to submit comments via the SEC’s notice and comment process long after the SEC’s October 10 deadline.[13] In addition to highlighting the potential impacts of Reg PDA on brokers and investment advisers, many commenters questioned whether the SEC had the authority to issue such a rule. The American Free Enterprise Chamber of Commerce (“AmFree”) argued that the SEC exceeded its authority under both its organic statutes and the Administrative Procedure Act (APA) in issuing a blanket prohibition on conflicts of interest.[14] In its public comment, AmFree argued the proposed rule was arbitrary and capricious, pointing to the SEC’s alleged failure to adequately consider the costs associated with the proposal.[15] AmFree also invoked the major questions doctrine to question the SEC’s authority to promulgate the rule, arguing “[i]f Congress had meant to grant the SEC blanket authority to ban conflicts and conflicted communications generally, it would have spoken more clearly.”[16] In his scathing public comment, Robinhood Chief Legal and Corporate Affairs Officer Daniel M. Gallagher alluded to similar APA concerns, calling the proposal “arbitrary and capricious” on the grounds that “[t]he SEC has not demonstrated a need for placing unprecedented regulatory burdens on firms’ use of technology.”[17] Gallagher went on to condemn the proposal’s apparent “contempt for the ordinary person, who under the SEC’s apparent world view [sic] is incapable of thinking for himself or herself.”[18]

Although investor and broker industry groups have harshly criticized Reg PDA, some consumer protection groups have expressed support through public comment. The Consumer Federation of America (CFA) endorsed the proposal as “correctly recogniz[ing] that technology-driven conflicts of interest are too complex and evolve too quickly for the vast majority of investors to understand and protect themselves against, there is significant likelihood of widespread investor harm resulting from technology-driven conflicts of interest, and that disclosure would not effectively address these concerns.”[19] The CFA further argued that the final rule should go even further, citing loopholes in the existing proposal for affiliated entities that control or are controlled by a firm.[20]

More generally, commentators have observed that the SEC’s new prescriptive requirement that firms eliminate or neutralize potential conflicts of interest marks a departure from traditional securities laws, under which disclosure of potential conflicts of interest has historically been sufficient.[21] Historically, conflicts of interest stemming from AI and technology have been regulated like any other conflict of interest: while brokers are required to disclose their conflicts, their conduct is primarily regulated through their fiduciary duty to clients. In turn, some commentators have suggested that the legal basis for the proposed regulations is well-grounded in the investment adviser’s fiduciary duty to always act in the best interest of its clients.[22] Some analysts note that “neutralizing” the effects of a conflict of interest from such technology does not necessarily require advisers to discard that technology, but rather to change the way firm-favorable information is analyzed or weighed; even so, this marks a significant departure from the disclosure regime. Given the widespread and persistent opposition to the rule, both through the notice and comment process and elsewhere by commentators and analysts, it is unclear whether the SEC will make significant revisions to a final rule. While the SEC could conceivably narrow the definitions of “covered technology,” “investor interaction,” and “conflicts of interest,” it is difficult to imagine how the SEC could modify the “eliminate or neutralize” requirement in a way that would bring it into line with the existing disclosure-based regime.

For its part, the SEC under Gensler is likely to continue pursuing regulations on AI regardless of the outcome of Reg PDA. Gensler has long expressed his concerns about the impacts of AI on market stability. In a 2020 paper analyzing regulatory gaps in the use of deep learning in financial markets, Gensler warned, “[e]xisting financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”[23] Regardless of how the SEC decides to finalize its approach to AI in conflict of interest issues, it is clear that brokers and advisers are likely to resist broad-based bans on AI in their work going forward.

Notes

[1] Press Release, Sec. & Exch. Comm’n., SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Jul. 26, 2023).

[2] Id.

[3] Jennifer Hughes, SEC faces fierce pushback on plan to police AI investment advice, Financial Times (Nov. 8, 2023), https://www.ft.com/content/766fdb7c-a0b4-40d1-bfbc-35111cdd3436.

[4] Sec. & Exch. Comm’n., Fact Sheet: Conflicts of Interest and Predictive Data Analytics (2023).

[5] Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53960 (proposed Jul. 26, 2023) (to be codified at 17 C.F.R. pts. 240, 275) [hereinafter Proposed Rule].

[6] Hughes, supra note 3.

[7] Proposed Rule, supra note 5.

[8] Id.

[9] Stefania Palma and Patrick Jenkins, Gary Gensler urges regulators to tame AI risks to financial stability, Financial Times (Oct. 14, 2023), https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac.

[10] Fact Sheet, White House, President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

[11] Hughes, supra note 3.

[12] Palma, supra note 9.

[13] See Sec. & Exch. Comm’n., Comments on Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (last visited Nov. 13, 2023), https://www.sec.gov/comments/s7-12-23/s71223.htm (listing multiple comments submitted after October 10, 2023).

[14] Am. Free Enter. Chamber of Com., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270180-652582.pdf.

[15] Id. at 14-19.

[16] Id. at 9.

[17] Daniel M. Gallagher, Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-271299-654022.pdf.

[18] Id. at 43.

[19] Consumer Fed’n. of Am., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270400-652982.pdf.

[20] Id.

[21] Ken D. Kumayama et al., SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers, Skadden (Aug. 10, 2023), https://www.skadden.com/insights/publications/2023/08/sec-proposes-new-conflicts.

[22] Colin Caleb, ANALYSIS: Proposed SEC Regs Won’t Allow Advisers to Sidestep AI, Bloomberg Law (Aug. 10, 2023), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-proposed-sec-regs-wont-allow-advisers-to-sidestep-ai.

[23] Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (MIT Artificial Intel. Glob. Pol’y F., Working Paper 2020) (in which Gensler identifies several potential systemic risks to the financial system, including overreliance and uniformity in financial modeling, overreliance on concentrated centralized datasets, and the potential of regulators to create incentives for less-regulated entities to take on increasingly complex functions in the financial system).


Floating Fans in the Ocean: Recognizing the Significance of Maine’s Recent Bill Regarding Offshore Wind Development Projects

Peter Lyon, MJLST Staffer

Recent efforts in Maine have continued the push for developing sustainable energy sources, specifically including offshore wind energy projects in the Gulf of Maine. Offshore wind projects have captured other coastal states’ and the federal government’s interest for quite some time, though the industry is not well developed due to several practical setbacks and pushback from different stakeholders. Maine has the potential to be a leader in this area, as a bill it passed in July lays more of the groundwork for developing offshore wind energy projects, calls attention to the development of innovative technology, and implements means to adequately address the interests of relevant stakeholders.

“An Act Regarding the Procurement of Energy from Offshore Wind Resources”

Maine Governor Janet Mills signed a bill in July to further the development of offshore wind energy projects in the Gulf of Maine, making several amendments to existing law and enacting six additional sections.[1] One of the major changes is a new wind energy goal of three gigawatts of installed capacity by December 2040, which could meet approximately fifty percent of Maine’s anticipated electricity needs at that time.[2] This target replaces Maine’s unmet 2009 goal of two gigawatts of installed capacity by 2015, a shortfall likely attributable to supply chain issues, higher interest rates, and the rising prices of materials.[3]
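For a rough sense of scale, the three-gigawatt goal can be converted into annual energy output with a simple capacity-factor calculation. The capacity factor below is an illustrative assumption, not a figure from the bill or the Roadmap:

```python
# Back-of-envelope annual energy from Maine's 3 GW offshore wind goal.
# The 45% capacity factor is an assumed, illustrative value for floating
# offshore wind; actual performance varies by site and technology.
capacity_gw = 3.0
capacity_factor = 0.45        # assumption, not from the bill
hours_per_year = 8760

annual_twh = capacity_gw * capacity_factor * hours_per_year / 1000
print(f"{annual_twh:.1f} TWh per year")  # ~11.8 TWh under these assumptions
```

Under these assumptions the goal would yield roughly 11.8 terawatt-hours annually, which is consistent with the fifty-percent figure if Maine’s anticipated 2040 demand is on the order of twice that amount.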

To facilitate its three-gigawatts-by-2040 goal, the bill establishes a process for competitive contracting, requiring the solicitation process and project proposals to be consistent with the Maine Offshore Wind Roadmap issued in 2023,[4] which emphasizes five key topics.[5] It also includes sections on offshore wind power transmission and on supporting the development of port infrastructure and innovative technologies. Those technologies may include floating or bobbing platforms, since the Gulf of Maine is too deep for fixed-structure turbines,[6] as well as storage technologies such as large batteries, which would maximize the amount of energy that can be used when it is needed.[7]

The bill also expands the minimum number of advisory board members of the Offshore Wind Research Consortium – a collaborative research initiative created by the bill – from seven to twelve members to reach a wider stakeholder audience. The new advisory board member requirements include adding the “Commissioner of Inland Fisheries and Wildlife” (or the commissioner’s designee), “at least one individual who is a member of a federally recognized Indian tribe” in Maine, “two individuals with expertise in marine and wildlife habitats,” and “at least one individual with experience in commercial offshore wind power development.”[8] In addition, the bill requires an opportunity for public comment during the project solicitation process.

Engaging with relevant stakeholders at this early stage allows the Consortium’s research to explore and mitigate risks in offshore wind development projects such as the potential negative impact on commercial fishing, species degradation, and harm to ecosystems. These kinds of concerns mirror much of the resistance to offshore wind projects, non-specific to the Gulf of Maine, and the bill emphasizes specific actions to answer them.

Addressing Stakeholder Concerns

Calls for offshore wind energy development have been met with pushback from multiple stakeholder groups, including Native American tribes, members of the commercial fishing industry, and local residents. These and other stakeholders voice environmental, economic, and social concerns. For example, some argue that installing offshore wind farms could disrupt key fishing and lobstering grounds, which generate more than $1.5 billion for Maine’s economy.[9] That disruption could take the form of altered fish migration patterns, changed water temperatures from large electrical cables running onshore, and turbine structures limiting fishers’ access to fishing grounds.[10] Another concern is that animals, like the Eastern red bat and other bat species, are vulnerable to flying into wind farm structures.[11] Others simply worry that offshore wind farms will mar the environment’s natural beauty, amounting to a kind of visual pollution.

In addition to seeking input from relevant stakeholders, the new bill anticipates these kinds of risks and includes specific actions to avoid or mitigate them. The Offshore Wind Research Consortium funds will now also be used to “support conservation that supports species and habitats impacted by offshore wind development,”[12] including research that aims to “avoid or minimize the impact of floating offshore wind power projects on ecosystems and existing uses of the Gulf of Maine.”[13]

Proposals for the development and construction of offshore wind projects must include a “fishing communities investment plan” which “supports innovation and adaptation in response to environmental change, shifting resource economics, and changes in fishing practices associated with offshore wind power development.”[14] Proposals given priority are those that are outside critical fishing and lobstering areas, provide employment and contracting opportunities to people from disadvantaged communities, provide financial or technical support for research regarding wildlife, fisheries, and habitats impacted by offshore wind development, or promote hiring Maine residents and affected community members.[15] Under the bill, proposals must seek to minimize an offshore wind project’s impact on the environment’s visual and scenic character.[16]

The Current State of Offshore Wind Development in the U.S.

Maine is not the only jurisdiction pursuing offshore wind development projects. Most offshore wind projects are located in federal waters, so they often require permits issued by the Bureau of Ocean Energy Management (BOEM), an agency within the Department of the Interior.[17] The federal government has allocated floating wind leases and has set a goal of fifteen gigawatts of installed floating offshore wind capacity by 2035.[18] Projects are underway in Maine, California, and Oregon, with more in the pipeline.[19]

Maine has the potential to be a leader in offshore wind development: the bill it passed in July demonstrates the importance of engaging relevant stakeholders, conducting research to avoid or mitigate negative environmental impacts, and prioritizing developments that show commitment to social values. It also emphasizes the role of innovative technology like floating turbines, which are especially relevant because about eighty percent of the world’s offshore wind resource capacity lies in locations not well-suited for fixed structures.[20] Offshore wind projects can spur economic growth[21] and contribute to the procurement of sustainable energy while decreasing reliance on non-sustainable sources like fossil fuels. Other jurisdictions should look to Maine’s bill as a great start in the early development of an industry with enormous potential.

Notes

[1] 2023 Me. SP 766.

[2] Maria Gallucci, Maine to go all in on offshore wind, Canary Media (July 25, 2023), https://www.canarymedia.com/articles/wind/maine-to-go-all-in-on-offshore-wind.

[3] Id.

[4] Maine Offshore Wind Roadmap Advisory Committee, The Maine Offshore Wind Roadmap, State of Maine Governor’s Energy Office (February 2023), https://www.maine.gov/energy/sites/maine.gov.energy/files/inline-files/Maine_Offshore_Wind_Roadmap_February_2023.pdf.

[5] Maine’s Offshore Wind Roadmap, State of Maine Governor’s Energy Office, https://www.maine.gov/energy/initiatives/offshorewind/roadmap (last visited Nov. 6, 2023) (stating the Roadmap’s objectives include “supporting economic growth and resiliency, harnessing renewable energy, advancing Maine-based innovation, supporting Maine’s seafood industry, and protecting the Gulf of Maine’s ecosystem.”).

[6] Heather Richards, Gulf of Maine wind could power 100% of New England—Report, E&E News (Oct. 31, 2023), https://subscriber.politicopro.com/article/eenews/2023/10/31/gulf-of-maine-wind-could-give-new-england-a-power-jolt-report-00124295.

[7] Id. (“Offshore wind from the Gulf of Maine could satisfy 72% of New England’s power demand but battery storage is critical; without the right storage capacities, offshore wind could only meet approximately 37% of New England’s needs.”).

[8] 2023 Me. SP 766.

[9] Gallucci, supra note 2.

[10] Bureau of Ocean Energy Management, Gulf of Maine Draft Wind Energy Area (WEA) Notice, Regulations.gov (October 18, 2023), https://www.regulations.gov/document/BOEM-2023-0054-0001 (see public comments).

[11] Richards, supra note 6.

[12] 2023 Me. SP 766.

[13] Id.

[14] Id.

[15] Id.

[16] Id.

[17] Nicholas P. Jansen, Reducing the Headwinds: the Need for a Federal Approach to Siting Offshore Wind Interconnection Infrastructure, Despite Protective State Laws, 26 Ocean & Coastal L.J. 123 (2021).

[18] Juliana Ennes, California’s floating wind lead threatened by fast-rising Maine, Reuters (September 14, 2023, 10:57 AM), https://www.reuters.com/business/energy/californias-floating-wind-lead-threatened-by-fast-rising-maine-2023-09-14/.

[19] Gallucci, supra note 2.

[20] Id.

[21] Maine Offshore Wind Roadmap Advisory Committee, supra note 4.


Cracking the Code: Navigating New SEC Rules Governing Cybersecurity Disclosure

Noah Schottenbauer, MJLST Staffer

Cybersecurity incidents have a dramatic impact on investors, both through declines in stock value and through the sizeable costs companies incur in rectifying breaches. In response, the SEC adopted new rules governing cybersecurity-related disclosures for public companies. The rules cover both the disclosure of individual cybersecurity incidents and periodic disclosures of a company’s procedures to assess, identify, and manage material cybersecurity risks, management’s role in assessing and managing those risks, and the board of directors’ oversight of cybersecurity risks.[1]

Before evaluating the specifics of the new SEC cybersecurity disclosure requirements, it is important to understand why information about cybersecurity incidents matters to investors. In recent years, data breaches have led to an average decline in stock value of 7.5% among publicly traded companies, and the impacts are felt long after the date of the breach: companies experiencing a significant data breach underperform the NASDAQ by an average of 8.6% after one year.[2] One of the forces driving this decline in stock value is the immense cost of rectifying a data breach. In 2022, the average cost of a data breach for U.S. companies was $9.44 million, drawn from ransom payments, disruptions in business operations, legal and audit fees, and other associated expenses.[3]

Summary Of Required Disclosures

  • Material Cybersecurity Incidents (Form 8-K, Item 1.05)

New Item 1.05 of Form 8-K requires that reporting companies disclose any cybersecurity incident deemed to be material.[4] When making such disclosures, companies are required to “describe the material aspects of the nature, scope, and timing of the incident, and the material impact or reasonably likely material impact on the registrant, including its financial condition and results of operations.”[5]

So, what is a material cybersecurity incident? The SEC defines cybersecurity incident as “an unauthorized occurrence . . . on or conducted through a registrant’s information systems that jeopardizes the confidentiality, integrity, or availability of a registrant’s information systems or any information residing therein.”[6]

The definition of material, on the other hand, lacks the same degree of clarity. Based on context offered by the SEC through the rulemaking process, material is to be used in a way that is consistent with other securities laws.[7] Under this standard, information, or, in this case, a cybersecurity incident, is material if “there is a substantial likelihood that a reasonable shareholder would consider it important.”[8] This determination is made based on a “delicate assessment of the inferences a ‘reasonable shareholder’ would draw from a given set of facts and the significance of those inferences to him.”[9] Even with this added context, which characteristics of a cybersecurity incident make it material remains unclear; but given that the rules are intended to protect investor interests, the safest course of action is to disclose an incident when in doubt about its materiality.[10]

It is important to note that this disclosure mandate is not limited to incidents that occur within the company’s own systems. If a material cybersecurity incident happens on third-party systems that a company utilizes, that too must be disclosed.[11] However, in these situations, companies are only expected to disclose information that is readily accessible, meaning they are not required to go beyond their “regular channels of communication” to gather pertinent information.[12]

Regarding the mechanics of the disclosure, the SEC stipulates that companies must file a Form 8-K under Item 1.05 within four business days of determining that a cybersecurity incident is material.[13] However, delaying disclosure may be allowed in limited circumstances where the United States Attorney General determines that immediate disclosure may seriously threaten national security or public safety.[14]
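To make the timing concrete, the four-business-day window can be sketched as follows. This is a simplified illustration that treats only weekends as non-business days (federal holidays are ignored), not a statement of the rule’s actual day-counting conventions:

```python
from datetime import date, timedelta

def form_8k_deadline(materiality_determination: date, business_days: int = 4) -> date:
    """Return the date `business_days` business days after the materiality
    determination, skipping Saturdays and Sundays only (holidays ignored)."""
    d = materiality_determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# A determination made on Thursday, Jan. 4, 2024 would put the filing
# deadline at Wednesday, Jan. 10, 2024 (Fri, Mon, Tue, Wed).
print(form_8k_deadline(date(2024, 1, 4)))  # 2024-01-10
```

As the example shows, a determination late in the week pushes the deadline across the weekend into the middle of the following week.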

If there are any changes in the initially-disclosed information or if new material information is discovered that was not available at the time of the first disclosure, registrants are obligated to update their disclosure by filing an amended Form 8-K, ensuring that all relevant information related to the cybersecurity incident is available to the public and stakeholders.[15]

  • Risk Management & Strategy (Regulation S-K, Item 106(b))

Under new Item 106(b) of Regulation S-K, reporting companies are obligated to describe their “processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats in sufficient detail for a reasonable investor to understand those processes.”[16] When detailing these processes, companies must specifically address three primary points. First, they need to indicate whether and how the cybersecurity processes described in Item 106(b) have been integrated into the company’s overarching risk management system or procedures. Second, companies must clarify whether they engage assessors, consultants, auditors, or other third-party entities in relation to these cybersecurity processes. Third, they must describe whether they have processes to oversee and identify material risks from cybersecurity threats associated with their use of any third-party service providers.[17]

In addition to the three enumerated elements under Item 106(b), companies are expected to furnish additional information to ensure a comprehensive understanding of their cybersecurity procedures for potential investors. This supplementary disclosure should encompass “whatever information is necessary, based on their facts and circumstances, for a reasonable investor to understand their cybersecurity processes.”[18] While companies are mandated to reveal if they collaborate with third-party service providers concerning their cybersecurity procedures, they are not required to disclose the specific names of these providers or offer a detailed description of the services these third-party entities provide, thus striking a balance between transparency and confidentiality and ensuring that investors have adequate information.[19]

  • Governance (Regulation S-K, Item 106(c))

Regulation S-K’s new Item 106(c) requires that companies: (1) describe the board’s oversight of the risks emanating from cybersecurity threats, and (2) characterize management’s role in both assessing and managing material risks arising from such threats.[20]

When detailing management’s role concerning these cybersecurity threats, there are a number of issues that should be addressed. First, companies should clarify which specific management positions or committees are entrusted with the responsibility of assessing and managing these risks. Additionally, the expertise of these designated individuals or groups should be outlined in such detail as necessary to comprehensively describe the nature of their expertise. Second, a description of the processes these entities employ to stay informed about, and to monitor, the prevention, detection, mitigation, and remediation of cybersecurity incidents should be included. Third, companies should indicate if and how these individuals or committees convey information about such risks to the board of directors or potentially to a designated committee or subcommittee of the board.[21]

The disclosures required under Item 106(c) are aimed at balancing investor accessibility to information with the company’s ability to maintain autonomy in determining cybersecurity practices in the context of organizational structure; therefore, disclosures do not need to be overly detailed.[22]

  • Foreign Private Issuers (Form 6-K & Form 20-F)

The rules addressed above apply only to domestic companies, but the SEC imposed parallel cybersecurity disclosure requirements for foreign private issuers under Form 6-K (incident reporting) and Form 20-F (periodic reporting).[23]

Key Dates

The SEC’s final rules are effective as of September 5, 2023, but the Form 8-K and Regulation S-K reporting requirements have yet to take effect. The key compliance dates for each are as follows:

  • Form 8-K Item 1.05(a) Incident Reporting – December 18, 2023
  • Regulation S-K Periodic Reporting – Fiscal years ending on or after December 15, 2023

Smaller reporting companies are provided with an extra 180 days to comply with Form 8-K Item 1.05 and thus will be expected to begin incident reporting on June 15, 2024. No such extension was granted with respect to Regulation S-K periodic reporting.[24]
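The 180-day extension is measured in calendar days from the general compliance date, which is how December 18, 2023 lands on June 15, 2024:

```python
from datetime import date, timedelta

general_compliance = date(2023, 12, 18)   # Form 8-K Item 1.05, most registrants
extension = timedelta(days=180)           # extra time for smaller reporting companies

smaller_company_compliance = general_compliance + extension
print(smaller_company_compliance)  # 2024-06-15
```

Note that 2024 is a leap year; the extra day in February is part of how the count reaches June 15.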

Potential Impact On Cybersecurity Policy

The actual impact of the SEC’s new disclosure requirements will likely remain unclear for some time, yet the regulations compel companies to adopt a greater sense of discipline and transparency in their cybersecurity practices. Although the primary intent of these rules is investor protection, they may also influence how companies formulate their cybersecurity strategies, given the requirement to discuss such policies in their annual disclosures. This heightened level of accountability, regarding defensive measures and risk management strategies in response to cybersecurity threats, may encourage companies to implement more robust cybersecurity practices or, at the very least, ensure that cybersecurity becomes a regular topic of discussion amongst senior leadership. Consequently, the SEC’s initiative may serve as a catalyst for strengthening cybersecurity policies within corporate entities, while also providing investors with essential information for making informed decisions in the marketplace.

Further Information

The overview of the new SEC rules governing cybersecurity disclosures provided above is precisely that: an overview. For more information regarding the requirements and applicability of these rules please refer to the official rules and the SEC website.

Notes

[1] Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure, Securities Act Release No. 33-11216, Exchange Act Release No. 34-97989 (July 26, 2023) [hereinafter Final Rule Release], https://www.sec.gov/files/rules/final/2023/33-11216.pdf.

[2] Keman Huang et al., The Devastating Business Impacts of a Cyber Breach, Harv. Bus. Rev. (May 4, 2023), https://hbr.org/2023/05/the-devastating-business-impacts-of-a-cyber-breach.

[3] Id.

[4] Final Rule Release, supra note 1, at 12.

[5] Id. at 49.

[6] Id. at 76.

[7] Id. at 14.

[8] TSC Indus. v. Northway, 426 U.S. 438, 449 (1976).

[9] Id. at 450.

[10] Id. at 448.

[11] Final Rule Release, supra note 1, at 30.

[12] Id. at 31.

[13] Id. at 32.

[14] Id. at 28.

[15] Id. at 50–51.

[16] Id. at 61.

[17] Id. at 63.

[18] Id.

[19] Id. at 60.

[20] Id. at 12.

[21] Id. at 70.

[22] Id.

[23] Id. at 12.

[24] Id. at 107.