
Privacy at Risk: Analyzing DHS AI Surveillance Investments

Noah Miller, MJLST Staffer

The concept of widespread surveillance of public areas monitored by artificial intelligence (“AI”) may sound like it comes right out of a dystopian novel, but key investments by the Department of Homeland Security (“DHS”) could make this a reality. Under the Biden Administration, the U.S. has acted quickly and strategically to adopt AI as a tool for realizing national security objectives.[1] In furtherance of President Biden’s executive goals concerning AI, the DHS has been making investments in surveillance systems that utilize AI algorithms.

Despite the substantial interest in protecting national security, Patrick Toomey, deputy director of the ACLU National Security Project, has criticized the Biden administration for allowing national security agencies to “police themselves as they increasingly subject people in the United States to powerful new technologies.”[2] Notably, these investments have not been tailored towards high-security locations like airports. Instead, they include surveillance of “soft targets”—high-traffic areas with limited security: “Examples include shopping areas, transit facilities, and open-air tourist attractions.”[3] Currently, surveilling most public areas is infeasible because of the number of people required to review footage; emerging AI algorithms, however, would allow this work to be done automatically. While enhancing security protections at soft targets is a noble and possibly desirable initiative, the potential privacy ramifications of widespread autonomous AI surveillance are extreme. Current Fourth Amendment jurisprudence offers little resistance to this form of surveillance, and the DHS has been both developing this surveillance technology itself and outsourcing these projects to private corporations.

To foster innovation to combat threats to soft targets, the DHS has created a center called Soft Target Engineering to Neutralize the Threat Reality (“SENTRY”).[4] One of SENTRY’s research areas is developing “real-time management of threat detection and mitigation.”[5] One project in this research area seeks to create AI algorithms that can detect threats in public and crowded areas.[6] Once the algorithm has detected a threat, the particular incident would be sent to a human for confirmation.[7] This would be a substantially more efficient form of surveillance than is currently widely available.

Along with the research conducted through SENTRY, the DHS has been making investments in private companies to develop AI surveillance technologies through the Silicon Valley Innovation Program (“SVIP”).[8] Through the SVIP, the DHS has awarded funding to three companies to develop AI surveillance technologies that can detect “anomalous events via video feeds” and improve security in soft targets: Flux Tensor, Lauretta AI, and Analytical AI.[9] First, Flux Tensor currently has a demo pilot-ready prototype that applies “flexible object detection algorithms” to video feeds to track and pinpoint movements of interest.[10] The technology is used to distinguish human movements and actions from the environment—e.g., weather, glare, and camera movements.[11] Second, Lauretta AI is adapting its established activity recognition AI to utilize “multiple data points per subject to minimize false alerts.”[12] The technology periodically generates automated reports of detected incidents, categorized by relative severity.[13] Third, Analytical AI is in the proof-of-concept demo phase with AI algorithms that can autonomously track objects in relation to people within a perimeter.[14] The company has already created algorithms that can screen for prohibited items and “on-person threats” (e.g., weapons).[15] All of these technologies are in early stages, so the DHS is unlikely to deploy them in the imminent future.

Assuming these AI algorithms are effective and come to fruition, current Fourth Amendment protections seem insufficient to protect against rampant usage of AI surveillance in public areas. In Kyllo v. United States, the Court placed an important limit on law enforcement’s use of new technologies, holding that when sense-enhancing technology not in general public use is used to obtain information from a constitutionally protected area, the use of that technology constitutes a search.[16] Unlike in Kyllo, where the police used thermal imaging to obtain temperature levels on various areas of a house, people subject to AI surveillance in public areas would not be in constitutionally protected areas.[17] Because people subject to this surveillance would be in public places, they would not have a reasonable expectation of privacy in their movements; therefore, this form of surveillance likely would not constitute a search under prominent Fourth Amendment search analysis.[18]

While the scope and accuracy of this new technology are still to be determined, policymakers and agencies need to implement proper safeguards and proceed cautiously. In the best scenario, this technology can keep citizens safe while mitigating the impact on the public’s privacy interests. In the worst scenario, this technology could effectively turn our public spaces into security checkpoints. Regardless of how relevant actors proceed, this new technology would likely result in at least some decline in the public’s privacy interests. Policymakers should not make a Faustian bargain for the sake of maintaining social order.

 

Notes

[1] See generally Joseph R. Biden Jr., Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence, The White House (Oct. 24, 2024), https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/ (explaining how the executive branch intends to utilize artificial intelligence in relation to national security).

[2] ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections, ACLU (Oct. 24, 2024, 12:00 PM), https://www.aclu.org/press-releases/aclu-warns-that-biden-harris-administration-rules-on-ai-in-national-security-lack-key-protections.

[3] Jay Stanley, DHS Focus on “Soft Targets” Risks Out-of-Control Surveillance, ACLU (Oct. 24, 2024), https://www.aclu.org/news/privacy-technology/dhs-focus-on-soft-targets-risks-out-of-control-surveillance.

[4] See Overview, SENTRY, https://sentry.northeastern.edu/overview/#VSF.

[5] Real-Time Management of Threat Detection and Mitigation, SENTRY, https://sentry.northeastern.edu/research/real-time-threat-detection-and-mitigation/.

[6] See An Artificial Intelligence-Driven Threat Detection and Real-Time Visualization System in Crowded Places, SENTRY, https://sentry.northeastern.edu/research-project/an-artificial-intelligence-driven-threat-detection-and-real-time-visualization-system-in-crowded-places/.

[7] See id.

[8] See, e.g., SVIP Portfolio and Performers, DHS, https://www.dhs.gov/science-and-technology/svip-portfolio.

[9] Id.

[10] See Securing Soft Targets, DHS, https://www.dhs.gov/science-and-technology/securing-soft-targets.

[11] See pFlux Technology, Flux Tensor, https://fluxtensor.com/technology/.

[12] See Securing Soft Targets, supra note 10.

[13] See Security, Lauretta AI, https://lauretta.io/technologies/security/.

[14] See Securing Soft Targets, supra note 10.

[15] See Technology, Analytical AI, https://www.analyticalai.com/technology.

[16] Kyllo v. United States, 533 U.S. 27, 33 (2001).

[17] Cf. id.

[18] See generally Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring) (explaining the test for whether someone may rely on an expectation of privacy).

 

 


The Introduction of “Buy Now, Pay Later” Products

Yanan Tang, MJLST Staffer

As of June 2024, it is estimated that more than half of Americans turn to Buy Now, Pay Later (“BNPL”) options to purchase products during financially stressful times.[1] BNPL allows customers to split a purchase into four equal payments: a down payment of 25 percent, followed by three periodic installments covering the remaining cost.[2]
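
For illustration only, the arithmetic of a standard pay-in-four plan can be sketched in a few lines of Python. This is a simplified, hypothetical example that assumes equal installments and no interest or late fees; the function name and parameters are illustrative and are not drawn from any BNPL provider’s actual system.

    # Simplified sketch of a "pay-in-four" BNPL schedule (hypothetical; assumes
    # equal installments and no interest or fees).
    def bnpl_schedule(purchase_price: float, installments: int = 4) -> list[float]:
        """Return a payment schedule: a down payment plus equal periodic installments."""
        payment = round(purchase_price / installments, 2)
        schedule = [payment] * installments
        # Adjust the final installment for any rounding remainder.
        schedule[-1] = round(purchase_price - sum(schedule[:-1]), 2)
        return schedule

    print(bnpl_schedule(200.00))  # [50.0, 50.0, 50.0, 50.0] -- 25% down, then three $50 payments

Under this structure, a $200 purchase would require $50 at checkout and three later payments of $50 each.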

 

Consumer Financial Protection Bureau’s Interpretive Rules

In response to the popularity of BNPL products, the Consumer Financial Protection Bureau (“CFPB”) took action to regulate them.[3] In issuing its interpretive rules for BNPL, the CFPB aims to outline how these products fit within existing credit regulations. The CFPB introduced the interpretive rules in May 2024, followed by a 60-day comment period that drew mixed feedback. The rules became effective in July 2024, aiming to apply credit card-like consumer protections to BNPL services under the Truth in Lending Act (“TILA”).

Specifically, the interpretive rules assert that these BNPL providers meet the criteria for being “card issuers” and “creditors,” and therefore should be subject to the relevant provisions of TILA that govern credit card disputes and refund rights.[4] Under the CFPB’s interpretive rules, BNPL firms are required to investigate disputes, refund returned products or voided services, and provide billing statements.[5]

This blog will first explain the distinction between interpretive rules and notice-and-comment rulemaking to contextualize the CFPB’s regulatory approach. It will then explore the key consumer protections these rules aim to enforce and examine the mixed responses from various stakeholders. Finally, it will analyze the Financial Technology Association’s lawsuit challenging the CFPB’s rules and consider the broader implications for BNPL regulation.

 

Interpretive Rules and Notice-and-Comment Rulemaking Explained

In general, interpretive rules are non-binding and do not require public input, while notice-and-comment rules are binding with the force of law and must follow a formal process, including public feedback, as outlined in § 553 of the Administrative Procedure Act (“APA”).[6] The “legal effect test” from American Mining Congress v. MSHA helps determine whether a rule is interpretive or legislative by examining factors such as whether the agency invoked its legislative authority, whether the rule is needed to provide a legal basis for enforcement, and whether the rule amends an existing legislative rule.[7] While courts vary in the factors they use to distinguish legislative from interpretive rules, they generally agree that agencies cannot hide real regulations in interpretive rules.

 

Comments Received from Consumer Groups, Traditional Banks, and BNPL Providers

After soliciting comments, the CFPB received conflicting feedback on the interpretive rules.[8] Consumer groups generally supported the rules but urged the agency to take further action to protect consumers who use BNPL credit.[9] Traditional banks also largely supported the rule, arguing that BNPL’s digital user accounts are similar to those of credit cards and should be regulated similarly.[10] In contrast, major BNPL providers protested the CFPB’s rule.[11] Many BNPL providers, like PayPal, raised concerns about administrative procedures and urged the CFPB to proceed through notice-and-comment rulemaking.[12] In sum, the conflicting comments highlight the challenge of applying traditional credit regulations to innovative financial products, leading to broader disputes about the rule’s implementation.

 

Financial Technology Association’s Lawsuit against CFPB’s New Rules

After the interpretive rules went into effect in July, the Financial Technology Association (“FTA”) filed a lawsuit against the agency to stop the rules.[13] In its complaint, FTA contends that the CFPB bypassed the APA’s notice-and-comment rulemaking process despite the significant change imposed by the rule.[14] FTA argues that the agency exceeded its statutory authority under TILA because the act’s definition of “credit card” does not apply to BNPL products.[15] FTA also argues that the rule is arbitrary and capricious because it fails to account for the unique structure of BNPL products and their compliance challenges with Regulation Z.[16]

The ongoing case between FTA and the CFPB will likely focus on whether the CFPB’s rule is a permissible interpretation of existing law or a substantive rule requiring notice-and-comment rulemaking under APA § 553. That determination should weigh the nature of BNPL products in relation to the consumer protections traditionally associated with credit card-like products. In defending its interpretive rules against FTA, the CFPB could consider highlighting TILA’s flexible legislative intent and its rationale for proceeding by interpretive rule.

 

Notes

[1] See Block, Inc., More than Half of Americans Turn to Buy Now, Pay Later During Financially Stressful Times (June 26, 2024), https://investors.block.xyz/investor-news/default.aspx.

[2] Id.

[3] See Paige Smith & Paulina Cachero, Buy Now, Pay Later Needs Credit Card-Like Oversight, CFPB Says, Bloomberg Law (May 22, 2024), https://news.bloomberglaw.com/banking-law/buy-now-pay-later-soon-will-be-treated-more-like-credit-cards.

[4] Id.

[5] Id.

[6] 5 U.S.C.A. § 553.

[7] Am. Mining Cong. v. Mine Safety & Health Admin., 995 F.2d 1106 (D.C. Cir. 1993).

[8] See Evan Weinberger, CFPB’s ‘Buy Now, Pay Later’ Rule Sparks Conflicting Reactions, Bloomberg Law (Aug. 1, 2024), https://news.bloomberglaw.com/banking-law/cfpbs-buy-now-pay-later-rule-sparks-conflicting-reactions.

[9] See New York City Dep’t of Consumer & Worker Prot., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (Aug. 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0027; see also Nat’l Consumer L. Ctr., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017, at 1 (Aug. 1, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0028.

[10] See Independent Community Bankers of Am., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0023.

[11] See Financial Technology Ass’n, Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 19, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0038.

[12] See PayPal, Inc., Comment Letter on Truth in Lending (Regulation Z); Use of Digital User Accounts To Access Buy Now, Pay Later Loans, Docket No. CFPB-2024-0017 (July 31, 2024), https://www.regulations.gov/comment/CFPB-2024-0017-0025.

[13] See Evan Weinberger, CFPB Buy Now, Pay Later Rule Hit With Fintech Group Lawsuit, Bloomberg Law (Oct. 18, 2024), https://news.bloomberglaw.com/banking-law/cfpbs-buy-now-pay-later-rule-hit-with-fintech-group-lawsuit.

[14] Complaint, Fin. Tech. Ass’n v. Consumer Fin. Prot. Bureau, No. 1:24-cv-02966 (D.D.C. Oct. 18, 2024).

[15] Id.

[16] Id.


A Digital Brick in the Trump-Biden Wall

Solomon Steen, MJLST Staffer

“Alexander explained to a CBP officer at the limit line between the U.S. and Mexico that he was seeking political asylum and refuge in the United States; the CBP officer told him to “get the fuck out of here” and pushed him backwards onto the cement, causing bruising. Alexander has continued to try to obtain a CBP One appointment every day from Tijuana. To date, he has been unable to obtain a CBP One appointment or otherwise access the U.S. asylum process…”[1]

Alexander fled kidnapping and threats in Chechnya to seek security in the US.[2] His is a common story of migrants who have received a similar welcome. People have died and been killed waiting for an appointment to apply for asylum at the border.[3] Children with autism and schizophrenia have had to wait, exposed to the elements.[4] People whose medical vulnerabilities should have entitled them to relief have instead been preyed upon by gangs or corrupt police.[5] What is the wall blocking these people from fleeing persecution and reaching safety in the US?

The Biden administration’s failed effort to pass bipartisan legislation to curb access to asylum is part of a broader pattern of Trump-Biden continuity in immigration policy.[6] This continuity is defined by bipartisan support for increased funding for Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) for enforcement of immigration law at the border and in the interior, respectively.[7] Successive Democratic and Republican administrations have increased investment in interior and border enforcement.[8] That investment has expanded technological mechanisms to surveil migrants and facilitate administration of removal.

As part of its efforts to curtail access to asylum, the Biden administration promulgated the Circumvention of Lawful Pathways rule.[9] This rule revived the Trump administration’s entry and transit bans.[10] The transit ban bars migrants from applying for asylum if they crossed through a third country en route to the US.[11] The entry ban bars asylum applicants who did not present themselves at a port of entry.[12] In East Bay Sanctuary Covenant v. Biden, the Ninth Circuit determined the rule was unlawful because it directly contradicted Congressional intent in the INA, which grants a right to apply for asylum to any migrant in the US regardless of manner of entry.[13] The Trump entry ban was similarly found unlawful for contravening the same language in the INA.[14] The Biden ban remains in effect while litigation over its legality reaches its ultimate conclusion.

The Circumvention of Lawful Pathways rule effecting the entry ban gave rise to a pattern and practice of metering asylum applicants, that is, requiring applicants to present at a port of entry only after complying with specific conditions in order to avoid being turned back.[15] To facilitate the arrival of asylum seekers within a specific appointment window, DHS launched the CBP One app.[16] The app would ostensibly allow asylum applicants to schedule an appointment at a port of entry to present themselves for asylum.[17]

Al Otro Lado (AOL), Haitian Bridge, and other litigants have filed a complaint alleging the government lacks the statutory authorization to force migrants to seek an appointment through the app and that its design frustrates their rights.[18] AOL notes that by requiring migrants to make appointments to claim asylum via the app, the Biden administration has imposed a number of extra-statutory requirements on migrants entitled to claim asylum, which include that they:

(a) have access to an up-to-date, well-functioning smartphone;
(b) fluently read one of the few languages currently supported by CBP One;
(c) have access to a sufficiently strong and reliable mobile internet connection and electricity to submit the necessary information and photographs required by the app;
(d) have the technological literacy to navigate the complicated multi-step process to create an account and request an appointment via CBP One;
(e) are able to survive in a restricted area of Mexico for an indeterminate period of time while trying to obtain an appointment; and
(f) are lucky enough to obtain one of the limited number of appointments at certain POEs.[19]

The Civil Rights Education and Enforcement Center (CREEC) and the Texas Civil Rights Project have similarly filed a complaint with the Department of Homeland Security’s Office for Civil Rights and Civil Liberties alleging that CBP One is illegally inaccessible to disabled people and that this inaccessibility violates other rights they hold as migrants.[20] Migrants may become disabled as a consequence of the immigration process or of the persecution they suffered that establishes their prima facie claim to asylum.[21] The CREEC complaint specifically cites Section 508 of the Rehabilitation Act, which requires that disabled members of the public enjoy access to government tech “comparable to the access” of everyone else.[22]

CREEC and AOL – and the other service organizations joining their respective complaints – note that they have limited capacity to assist asylum seekers.[23] Migrants without such institutional or community support are more vulnerable to being denied access to asylum and to opportunistic criminal predation while they wait at the border.[24]

There is a litany of technical problems with the app that can frustrate meritorious asylum claims. The app requires applicants to submit a picture of their face.[25] The app’s facial recognition software frequently fails to identify portraits of darker-skinned people.[26] Racial persecution is one of the statutory grounds for claiming asylum.[27] A victim of race-based persecution can thus have their asylum claim frustrated on the basis of their race because of this app. Persecution on the basis of membership in a particular social group can also form the basis for an asylum claim.[28] An applicant could establish membership in a particular social group composed of certain disabled people.[29] People with facial disabilities have also struggled with the facial recognition feature.[30]

The mere fact that an app has been substituted for a human interaction contributes to the frustration of disabled migrants’ statutory rights. Medically fragile people statutorily eligible to enter the US via humanitarian parole are unable to access that relief electronically.[31] Individuals with intellectual disabilities have also had their claims delayed by navigating CBP One.[32] Asylum officers are statutorily required to evaluate whether asylum seekers lack the mental competence to assist in their applications and, if so, ensure they have qualified assistance to vindicate their claims.[33]

The entry ban has textual exceptions for migrants whose attempts to set appointments are frustrated by technical issues.[34] CBP officials at many ports have a pattern and practice of ignoring those exceptions and refusing all migrants who lack a valid CBP One appointment.[35]

AOL seeks relief in the termination of the CBP One turnback policy: essentially, ensuring people can exercise their statutory right to claim asylum at the border without an appointment.[36] CREEC seeks relief in the form of a fully accessible CBP One app and accommodation policies to ensure disabled asylum seekers can have “meaningful access” to the asylum process.[37]

Comprehensively safeguarding asylum seekers’ rights would require more than abandoning CBP One. A process that ensures medically vulnerable persons can access timely care and persons with intellectual disabilities can get legal assistance would require deploying more border resources, such as co-locating medical and resettlement organization staff with CBP. Meaningfully curbing racial, ethnic, and linguistic discrimination by CBP, ICE, and asylum officers would require expensive and extensive retraining. However, it is evident that CBP One is not serving the ostensible goal of making the asylum process more efficient, though it may serve the political goal of reinforcing the wall.

Notes

[1] Complaint, at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[2] Id. at 46.

[3] Ana Lucia Verduzco & Stephanie Brewer, Kidnapping of Migrants and Asylum Seekers at the Texas-Tamaulipas Border Reaches Intolerable Levels, (Apr. 4, 2024) https://www.wola.org/analysis/kidnapping-migrants-asylum-seekers-texas-tamaulipas-border-intolerable-levels.

[4] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 28, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[5] Linda Urueña Mariño & Christina Asencio, Human Rights First Tracker of Reported Attacks During the Biden Administration Against Asylum Seekers and Migrants Who Are Stranded in and/or Expelled to Mexico, Human Rights First, (Jan. 13, 2022),  at 10, 16, 19, https://humanrightsfirst.org/wp-content/uploads/2022/02/AttacksonAsylumSeekersStrandedinMexicoDuringBidenAdministration.1.13.2022.pdf.

[6] Actions – H.R.815 – 118th Congress (2023-2024): National Security Act, 2024, H.R.815, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/house-bill/815/all-actions, (failing to pass the immigration language on 02/07/24).

[7] American Immigration Council,The Cost of Immigration Enforcement and Border Security, (Jan. 20, 2021), at 2, https://www.americanimmigrationcouncil.org/sites/default/files/research/the_cost_of_immigration_enforcement_and_border_security.pdf.

[8] Id. at 3-4.

[9] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[10] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[11] Complaint, at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[12] E. Bay Sanctuary Covenant v. Biden, 993 F.3d 640, 658 (9th Cir. 2021).

[13] Id. at 669-70.

[14] E. Bay Sanctuary Covenant v. Trump, 349 F. Supp. 3d 838, 844 (N.D. Cal. 2018).

[15] Complaint, at 2, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[16] Fact Sheet: Circumvention of Lawful Pathways Final Rule, Dep’t of Homeland Sec. (May 11, 2023), https://www.dhs.gov/news/2023/05/11/fact-sheet-circumvention-lawful-pathways-final-rule.

[17] Id.

[18] Complaint, at 57, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[19] Complaint, at 3, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[20] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[21] Ruby Ritchin, “I Felt Not Seen, Not Heard”: Gaps in Disability Access at USCIS for People Seeking Protection, 12, (Sep. 19, 2023) https://humanrightsfirst.org/library/i-felt-not-seen-not-heard-gaps-in-disability-access-at-uscis-for-people-seeking-protection.

[22] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 6, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also 29 U.S.C.A. § 794d (a)(1)(A)(ii) (West).

[23] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 2, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf; see also Complaint, at 4, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[24] Dara Lind, CBP’s Continued ‘Turnbacks’ Are Sending Asylum Seekers Back to Lethal Danger, (Aug. 10, 2023), https://immigrationimpact.com/2023/08/10/cbp-turnback-policy-lawsuit-danger.

[25] Complaint, at 31, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[26] Id.

[27] 8 U.S.C.A. § 1101(a)(42)(A) (West).

[28] Id.

[29] Hernandez Arellano v. Garland, 856 F. App’x 351, 353 (2d Cir. 2021).

[30] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 9, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.

[31] Id.

[32] Id.

[33] Complaint, at 9, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[34] Complaint, at 22, Al Otro Lado and Haitian Bridge Alliance v. Mayorkas, (S.D. Cal. Jul. 26, 2023), No. 3:23-CV-01367-AGS-BLM.

[35] Id. at 23.

[36] Id. at 65-66.

[37] Letter from the Texas Civil Rights Project & the Civil Rights Education & Enforcement Center (CREEC), to U.S. Dept. Homeland Sec., Off. Civ. Rts. & Civ. Liberties (Mar. 25, 2024), at 10-11, https://4b16d9e9-506a-4ada-aeca-7c3e69a4ed29.usrfiles.com/ugd/4b16d9_e98ae77035514157bc1c4c746b5545e6.pdf.


The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promising potential of AI while managing the risks of its potential misuse.[1] The Executive Order establishes (1) new standards for AI development and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that advances equity and civil rights, fights algorithmic discrimination, and creates efficiencies and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact is yet to be seen and will depend on how the agencies tasked with taking action implement it. The regulatory heads tasked with implementing Biden’s Executive Order include the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls to action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and other heads of agencies selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing or demonstrating an intent to develop potential dual-use foundation models will be required to provide the Federal Government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing conducted in accordance with guidance developed by NIST.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launching a pilot program that distributes computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC will be charged with using its authorities to ensure fair competition in the AI and semiconductor industries.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on the effects of AI on the labor market and employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan to utilize automated or algorithmic systems in administering public benefits and services and to ensure the equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly and exclusively with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies–Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration celebrated its achievement of getting seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to make voluntary commitments to work towards developing AI technology in a safe, secure, and transparent manner.[7] The voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments for being vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on these commitments.[10] Notably, the White House has only allowed leaders of large tech companies to weigh in on the requirements of Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders in their AI Insight Forum, the attendees at the first and second forums were still largely limited to CEOs or co-founders of prominent tech companies, VC executives, and professors at leading universities.[12] Marc Andreessen, a co-founder of Andreessen Horowitz, a prominent VC fund, noted that in order to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration in which they voiced concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the Biden administration on the importance of developing regulations that allow for the robust development of open source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order includes a claim that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants as smaller companies will struggle to comply with the Order’s reporting requirements.[18]

Open source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While neither Llama nor ChatGPT is open source today, their development and advancement relied heavily on open-source tools and architectures like the Transformer, TensorFlow, and PyTorch.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically have leaner teams, and there is rarely a team solely dedicated to compliance. These reporting requirements will likely create barriers to entry for tech challengers who are pioneering open source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1720517026538778657%7Ctwgr%5Ec9ecbf7ac4fe23b03d91aea32db04b2e3ca656df%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcointelegraph.com%2Fnews%2Fbiden-ai-executive-order-certainly-challenging-open-source-ai-industry-insiders.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.


Conflicts of Interest and Conflicting Interests: The SEC’s Controversial Proposed Rule

Shaadie Ali, MJLST Staffer

A controversial proposed rule from the SEC on AI and conflicts of interest is generating significant pushback from brokers and investment advisers. The proposed rule, dubbed “Reg PDA” by industry commentators in reference to its focus on “predictive data analytics,” was issued on July 26, 2023.[1] Critics claim that, as written, Reg PDA would require broker-dealers and investment managers to effectively eliminate the use of almost all technology when advising clients.[2] The SEC claims the proposed rule is intended to address the potential for AI to hurt more investors more quickly than ever before, but some critics argue that the SEC’s proposed rule would reach far beyond generative AI, covering nearly all technology. Critics also highlight the requirement that conflicts of interest be eliminated or neutralized as nearly impossible to meet and a departure from traditional principles of informed consent in financial advising.[3]

The SEC’s 2-page fact sheet on Reg PDA describes the 239-page proposal as requiring broker-dealers and investment managers to “eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests.”[4] The proposal defines covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”[5] Critics have described this definition of “covered technology” as overly broad, with some going so far as to suggest that a calculator may be “covered technology.”[6] Despite commentators’ insistence, this particular contention is implausible – in its Notice of Proposed Rulemaking, the SEC stated directly that “[t]he proposed definition…would not include technologies that are designed purely to inform investors.”[7] More broadly, though, the SEC touts the proposal’s broadness as a strength, noting it “is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model.”[8]

This move by the SEC comes amidst concerns raised by SEC chair Gary Gensler and the Biden administration about the potential for the concentration of power in artificial intelligence platforms to cause financial instability.[9] On October 30, 2023, President Biden signed an Executive Order that established new standards for AI safety and directed the issuance of guidance for agencies’ use of AI.[10] When questioned about Reg PDA at an event in early November, Gensler defended the proposed regulation by arguing that it was intended to protect online investors from receiving skewed recommendations.[11] Elsewhere, Gensler warned that it would be “nearly unavoidable” that AI would trigger a financial crisis within the next decade unless regulators intervened soon.[12]

Gensler’s explanatory comments have done little to curb criticism by industry groups, who have continued to submit comments via the SEC’s notice-and-comment process long after the SEC’s October 10 deadline.[13] In addition to highlighting the potential impacts of Reg PDA on brokers and investment advisers, many commenters questioned whether the SEC had the authority to issue such a rule. The American Free Enterprise Chamber of Commerce (“AmFree”) argued that the SEC exceeded its authority under both its organic statutes and the Administrative Procedure Act (APA) in issuing a blanket prohibition on conflicts of interest.[14] In its public comment, AmFree argued the proposed rule was arbitrary and capricious, pointing to the SEC’s alleged failure to adequately consider the costs associated with the proposal.[15] AmFree also invoked the major questions doctrine to question the SEC’s authority to promulgate the rule, arguing “[i]f Congress had meant to grant the SEC blanket authority to ban conflicts and conflicted communications generally, it would have spoken more clearly.”[16] In his scathing public comment, Robinhood Chief Legal and Corporate Affairs Officer Daniel M. Gallagher alluded to similar APA concerns, calling the proposal “arbitrary and capricious” on the grounds that “[t]he SEC has not demonstrated a need for placing unprecedented regulatory burdens on firms’ use of technology.”[17] Gallagher went on to condemn the proposal’s apparent “contempt for the ordinary person, who under the SEC’s apparent world view [sic] is incapable of thinking for himself or herself.”[18]

Although investor and broker industry groups have harshly criticized Reg PDA, some consumer protection groups have expressed support through public comment. The Consumer Federation of America (CFA) endorsed the proposal as “correctly recogniz[ing] that technology-driven conflicts of interest are too complex and evolve too quickly for the vast majority of investors to understand and protect themselves against, there is significant likelihood of widespread investor harm resulting from technology-driven conflicts of interest, and that disclosure would not effectively address these concerns.”[19] The CFA further argued that the final rule should go even further, citing loopholes in the existing proposal for affiliated entities that control or are controlled by a firm.[20]

More generally, commentators have observed that the SEC’s new prescriptive rule that firms eliminate or neutralize potential conflicts of interest marks a departure from traditional securities laws, under which disclosure of potential conflicts of interest has historically been sufficient.[21] Historically, conflicts of interest stemming from AI and technology have been regulated the same as any other conflict of interest – while brokers are required to disclose their conflicts, their conduct is primarily regulated through their fiduciary duty to clients. In turn, some commentators have suggested that the legal basis for the proposed regulations is well-grounded in the investment adviser’s fiduciary duty to always act in the best interest of its clients.[22] Some analysts note that “neutralizing” the effects of a conflict of interest arising from such technology does not necessarily require advisers to discard the technology, but rather to change the way that firm-favorable information is analyzed or weighed; even so, the requirement marks a significant departure from the disclosure regime. Given the widespread and persistent opposition to the rule both through the notice-and-comment process and elsewhere by commentators and analysts, it is unclear whether the SEC will make significant revisions to a final rule. While the SEC could conceivably narrow the definitions of “covered technology,” “investor interaction,” and “conflicts of interest,” it is difficult to imagine how the SEC could modify the “eliminate or neutralize” requirement in a way that would bring it into line with the existing disclosure-based regime.

For its part, the SEC under Gensler is likely to continue pursuing regulations on AI regardless of the outcome of Reg PDA. Gensler has long expressed his concerns about the impacts of AI on market stability. In a 2020 paper analyzing regulatory gaps in the use of deep learning in financial markets, Gensler warned, “[e]xisting financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”[23] Regardless of how the SEC decides to finalize its approach to AI in conflict of interest issues, it is clear that brokers and advisers are likely to resist broad-based bans on AI in their work going forward.

Notes

[1] Press Release, Sec. and Exch. Comm’n., SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Jul. 26, 2023).

[2] Id.

[3] Jennifer Hughes, SEC faces fierce pushback on plan to police AI investment advice, Financial Times (Nov. 8, 2023), https://www.ft.com/content/766fdb7c-a0b4-40d1-bfbc-35111cdd3436.

[4] Sec. Exch. Comm’n., Fact Sheet: Conflicts of Interest and Predictive Data Analytics (2023).

[5] Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53960 (proposed July 26, 2023) (to be codified at 17 C.F.R. pts. 240, 275) [hereinafter Proposed Rule].

[6] Hughes, supra note 3.

[7] Proposed Rule, supra note 5.

[8] Id.

[9] Stefania Palma and Patrick Jenkins, Gary Gensler urges regulators to tame AI risks to financial stability, Financial Times (Oct. 14, 2023), https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac.

[10] Fact Sheet, White House, President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

[11] Hughes, supra note 3.

[12] Palma, supra note 9.

[13] See Sec. Exch. Comm’n., Comments on Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (last visited Nov. 13, 2023), https://www.sec.gov/comments/s7-12-23/s71223.htm (listing multiple comments submitted after October 10, 2023).

[14] Am. Free Enter. Chamber of Com., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270180-652582.pdf.

[15] Id. at 14-19.

[16] Id. at 9.

[17] Daniel M. Gallagher, Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-271299-654022.pdf.

[18] Id. at 43.

[19] Consumer Fed’n. of Am., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270400-652982.pdf.

[20] Id.

[21] Ken D. Kumayama et al., SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers, Skadden (Aug. 10, 2023), https://www.skadden.com/insights/publications/2023/08/sec-proposes-new-conflicts.

[22] Colin Caleb, ANALYSIS: Proposed SEC Regs Won’t Allow Advisers to Sidestep AI, Bloomberg Law (Aug. 10, 2023), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-proposed-sec-regs-wont-allow-advisers-to-sidestep-ai.

[23] Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (MIT Artificial Intel. Glob. Pol’y F., Working Paper 2020) (in which Gensler identifies several potential systemic risks to the financial system, including overreliance and uniformity in financial modeling, overreliance on concentrated centralized datasets, and the potential of regulators to create incentives for less-regulated entities to take on increasingly complex functions in the financial system).


Who Is Regulating Regulatory Public Comments?

Madeleine Rossi, MJLST Staffer

In 2015 the Federal Communications Commission (FCC) issued a rule on “Protecting and Promoting the Open Internet.”[1] The basic premise of these rules was that internet service providers had unprecedented control over access to information for much of the public. Those in favor of the new rules argued that broadband providers should be required to enable access to all internet content, without either driving or throttling traffic to particular websites for their own benefit. Opponents of these rules – typically industry players such as the same broadband providers that would be regulated – argued that such rules were burdensome and would prevent technological innovation. The fight over these regulations is colloquially known as the fight over “net neutrality.” 

In 2017 the FCC reversed course and put forth a proposal to repeal the 2015 regulations. Any time an agency proposes a rule, or proposes to repeal a rule, it must go through the notice-and-comment rulemaking procedure. One of the most important parts of this process is the solicitation of public comments. Many rules are put forth without much attention or fanfare from the public. Some rules may receive only hundreds of public comments, often coming from the industry at which the rule is aimed. Few proposed rules get attention from the public at large. However, the fight over net neutrality – both the 2015 rules and the repeal of those rules in 2017 – garnered significant public interest. The original 2015 rule amassed almost four million comments.[2] At the time, this was the most public comments that a proposed rule had ever received.[3] In 2017, the rule’s rescission blew past four million comments to reach a total of almost twenty-two million.[4]

At first glance this may seem like a triumph for the democratic purpose of the notice-and-comment requirement. After all, it should be a good thing that so many American citizens are taking an interest in the rules that will ultimately determine how they can use the internet. Unfortunately, that was not the full story. New York Attorney General Letitia James released a report in May of 2021 detailing her office’s investigation into wide-ranging fraud that plagued the notice-and-comment process.[5] Of the twenty-two million comments submitted about the repeal, a little under eight million were generated by a single college student.[6] These computer-generated comments supported the original regulations but used fake names and fabricated text.[7] Another eight million comments were submitted by lead generation companies that were hired by the broadband companies.[8] These companies stole individuals’ identities and submitted computer-generated comments on their behalf.[9] While these comments used real people’s identities, their fabricated content supported repealing the 2015 regulations.[10]

Attorney General James’ investigation showed that real comments, submitted by real people, were “drowned out by masses of fake comments and messages being submitted to the government to sway decision-making.”[11] When the investigation was complete, James’ office concluded that nearly eighteen million of the twenty-two million comments received by the FCC in 2017 were faked.[12] The swarm of fake comments created the false perception that the public was generally split on the issue of net neutrality. In fact, anywhere from seventy-five to eighty percent of Americans say that they support net neutrality.[13]

This is not an issue that is isolated to the fight over net neutrality. Other rulemaking proceedings have been targeted as well, namely by the same lead generation firms involved in the 2017 notice-and-comment fraud campaign.[14] Attorney General James’ investigation found that regulatory agencies like the Environmental Protection Agency (EPA), which is responsible for promulgating rules that protect people and the environment from risk, had also been targeted by such campaigns.[15] When agencies like the FCC or EPA propose regulations for the protection of the public, the democratic process of notice-and-comment is completely upended when industry players are able to “drown out” real public voices.

So, what can be done to preserve the democratic nature of the notice-and-comment period? As the technology involved in these schemes advances, this is likely to become not only a recurring issue but one that could entirely subvert the regulatory process of rulemaking. One way that injured parties are fighting back is with lawsuits.

In May of 2023, Attorney General James announced that she had come to a second agreement with three of the lead generation firms involved with the 2017 scam to falsify public comments.[16] The three companies agreed to pay $615,000 in fines for their involvement.[17] This agreement came in addition to a previous agreement in which the three stipulated to paying four million dollars in fines and agreed to change future lead generating practices, and the litigation is ongoing.[18]

However, more must be done to ensure that the notice-and-comment process is not entirely subverted. Financial punishment after the fact does not undo the harm already done to the democratic process. Currently, the only recourse is to sue these companies for their fraudulent and deceptive practices, and lawsuits typically result only in financial penalties. Those penalties are important, but they will always come after the fact; by the time litigation is under way, the harm to the American public has already been done.

Agencies need to ensure that they are keeping up with the pace of rapidly evolving technology so that they can properly vet the validity of the comments they receive. While it is important to keep public commenting a relatively open and easy practice, some kind of vetting procedure has become essential. One option would be to require an accompanying email address or phone number for each comment and to send a simple verification code. Those email addresses or phone numbers could also be contacted during a vetting process once the public comment period closes. While it would likely be impractical to contact each commenter individually, a random sample would at least flag whether a coordinated, large-scale fake commenting campaign had taken place.
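
To make the random-sampling idea concrete, the short Python sketch below simulates how an agency might spot-check a closed docket: draw a random sample of comments, attempt an out-of-band confirmation with each sampled commenter, and flag the docket if too many go unconfirmed. The data, field names, sample size, and threshold are all hypothetical; this is an illustration of the statistical idea, not any agency’s actual procedure.

```python
# A minimal sketch of the random-sample vetting idea described above.
# Assumes comments arrive as dicts with "comment_id" and "email" keys and that
# some out-of-band check (a verification code emailed or texted to the commenter)
# reports back whether the person confirmed the comment. All names, thresholds,
# and data here are hypothetical.
import random

SAMPLE_SIZE = 500        # comments to spot-check once the comment period closes
FLAG_THRESHOLD = 0.05    # flag the docket if >5% of sampled comments go unconfirmed

def audit_docket(comments, confirm):
    """Randomly sample submitted comments and report the unconfirmed share.

    `comments` is a list of dicts; `confirm` is a callable that performs the
    out-of-band verification and returns True if the commenter confirms the
    submission was really theirs.
    """
    sample = random.sample(comments, min(SAMPLE_SIZE, len(comments)))
    unconfirmed = sum(1 for c in sample if not confirm(c))
    rate = unconfirmed / len(sample)
    flagged = rate > FLAG_THRESHOLD
    return rate, flagged

if __name__ == "__main__":
    # Simulated docket: 20,000 comments, 40% of which are fakes no one will confirm.
    fake_share = 0.40
    docket = [{"comment_id": i, "email": f"user{i}@example.com"} for i in range(20_000)]
    simulated_confirm = lambda c: random.random() > fake_share
    rate, flagged = audit_docket(docket, simulated_confirm)
    print(f"Unconfirmed share of sample: {rate:.1%}; coordinated campaign suspected: {flagged}")
```

Even a modest sample like this yields a rough estimate of the unconfirmed share, which would usually be enough to decide whether a docket warrants a full investigation.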

Additionally, the legislature should monitor fraudulent practices that affect the notice-and-comment process. Lawmakers can and should strengthen laws to punish companies engaged in these practices. For example, in her report, Attorney General James recommends that lawmakers do at least two things. First, they should explicitly and statutorily prohibit “deceptive and unauthorized comments.”[19] To be effective, these laws should establish large civil fines. Second, the legislature should “strengthen impersonation laws.”[20] Current impersonation laws were not designed with mass-impersonation fraud in mind, and these statutes should be amended to increase penalties when many individuals are impersonated.

In conclusion, the use of fake comments to sway agency rulemaking is a problem that will only worsen with time and the advance of technology. It is a serious problem and should be treated as such by both agencies and the legislature.

Notes

[1] 80 Fed. Reg. 19737.

[2] https://www.brookings.edu/articles/democratizing-and-technocratizing-the-notice-and-comment-process/.

[3] Id.

[4] Id.

[5] https://ag.ny.gov/press-release/2021/attorney-general-james-issues-report-detailing-millions-fake-comments-revealing.

[6] https://www.brookings.edu/articles/democratizing-and-technocratizing-the-notice-and-comment-process/.

[7] Id.

[8] Id.

[9] Id.

[10] Id.

[11] https://ag.ny.gov/press-release/2021/attorney-general-james-issues-report-detailing-millions-fake-comments-revealing.

[12] Id.

[13] https://thehill.com/policy/technology/435009-4-in-5-americans-say-they-support-net-neutrality-poll/, https://publicconsultation.org/united-states/three-in-four-voters-favor-reinstating-net-neutrality/.

[14] Id.

[15] https://apnews.com/article/settlement-fake-public-comments-net-neutrality-ae1f69a1f5415d9f77a41f07c3f6c358.

[16] Id.

[17] Id.

[18] https://apnews.com/article/government-and-politics-technology-business-9f10b43b6aacbc750dfc010ceaedaca7.

[19] https://ag.ny.gov/sites/default/files/oag-fakecommentsreport.pdf.

[20] Id.


Whistleblower Reveals…—How Can the Legal System Protect and Encourage Whistleblowing?

Vivian Lin, MJLST Staffer

In July 2022, Twitter’s former head of security, Peiter Zatko, filed a 200+ page complaint with Congress and several federal agencies, disclosing potential major security problems at Twitter that pose a threat to its users and to national security.[1] Though it is still unclear whether these allegations have been confirmed, the disclosure drew significant attention because of its data privacy implications and the calls for whistleblower protection it prompted. Whistleblowers play an important role in detecting major problems in corporations and the government. A 2007 survey reported that in private companies, professional auditors were able to detect only 19% of instances of fraud, while whistleblowers exposed 43% of incidents.[2] In fact, the recent Twitter scandal, Facebook’s online safety scandal in 2021,[3] and the famous national security scandal disclosed by Edward Snowden were all revealed by inside whistleblowers. Without these disclosures, the public may never learn of incidents that involve their personal information and security.

An Overview of the U.S. Whistleblower Protection Regulations

Whistleblower laws aim to protect individuals who report illegal or unethical activities in their workplace or government agency. The primary federal law protecting whistleblowers is the Whistleblower Protection Act (WPA), passed in 1989. The WPA protects federal employees who disclose violations of law, gross mismanagement, gross waste of funds, abuse of authority, or substantial and specific dangers to public health or safety.[4]

In addition to the WPA, other federal laws provide industry-specific whistleblower protections in the private sector. For example, the Sarbanes-Oxley Act (SOX) was enacted in response to the corporate accounting scandals of the early 2000s. It requires public companies to establish and maintain internal controls to ensure the accuracy of their financial statements, and whistleblowers who report violations of securities law can receive protection against retaliation, including reinstatement, back pay, and special damages. To encourage more whistleblowers to come forward with potential securities violations, Congress passed the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) in 2010, which provides incentives and additional protections for whistleblowers. The Securities and Exchange Commission (SEC) established its whistleblower program under Dodd-Frank to reward qualified whistleblowers whose tips lead to a successful SEC sanction. Finally, the False Claims Act (FCA) allows individuals to file lawsuits on behalf of the government against entities that have committed fraud against the government; whistleblowers who report fraud under the FCA can receive a percentage of the amount the government recovers. In general, these laws protect whistleblowers in the private corporate setting by providing anti-retaliation protections and incentives for reporting violations.

Concerns Involved in Whistleblowing and Related Laws

While whistleblower laws in the United States provide important protections for individuals who speak out against illegal or unethical activities, there are still risks associated with whistleblowing. Even with anti-retaliation provisions, whistleblowers still face retaliation from their employers, such as demotion or termination, and may have difficulty finding new employment in their field. For example, a 2011 report indicated that while the percentage of employees who observed wrongdoing at their workplaces had decreased since a 1992 survey, about one-third of those who called out wrongdoing and were identified as whistleblowers experienced retaliation in the form of threats and/or reprisals.[5]

Besides the fear of retaliation, another concern is the low success rate when whistleblowers seek protection under the WPA. A 2015 study analyzed 151 cases in which employees sought protection under the WPA and found that 79% were decided in favor of the federal government.[6] Such a low success rate, on top of the potential for retaliation, likely discourages employees from coming forward when they identify wrongdoing at their workplace.

A third problem with current whistleblowing law is that financial incentives do not work as effectively as expected and may negatively affect corporate governance. On the incentives side, bounty hunting can actually discourage whistleblowers when it is poorly designed. For example, Dodd-Frank provides monetary rewards for people who report financial fraud that allows the SEC to impose a sanction of more than $1 million; one study shows that if an employee discovers wrongdoing unlikely to lead to a sanction over $1 million, the employee is less likely to report it promptly.[7] From a corporate governance perspective, a potential whistleblower might take a tip to a regulatory agency to collect the reward rather than report it to the company’s internal compliance program, which would give the company the opportunity to do the right thing itself.[8]

Potential Changes 

There are several ways in which current whistleblower regulations could improve. First, to encourage employees to stand up and identify wrongdoing at the workplace, the SEC’s whistleblower program should eliminate the $1 million threshold for a potential reward. Those who notice illegal behavior that might not result in a $1 million sanction should also receive a reward if they report the potential risks.[9] Second, to deter retaliation, compensation for retaliation should be proportionate to the severity of the wrongdoing uncovered.[10] Currently, statutes mostly offer back pay, front pay, reinstatement, and the like as compensation for retaliation, while punitive damages beyond that are rare. This mechanism does not recognize the public interest at stake in retaliation cases: the public benefits from the whistleblower’s act while she bears the risk of retaliation. Finally, bounty programs might not be the right approach, given that many whistleblowers are motivated more by their own moral convictions than by money. A robust system that ensures whistleblowers’ reports are thoroughly investigated, paired with stronger protections from retaliation, might work better than bounty programs.

In conclusion, whistleblowers play a crucial role in exposing illegal and unethical activities within organizations and government agencies. While current U.S. whistleblower protection regulations offer some safeguards, there are still shortcomings that may discourage employees from reporting wrongdoing. Strengthening protections against retaliation, expanding rewards to cover a wider range of disclosures, and refining the approach to investigations are essential steps toward a stronger system. By ensuring that whistleblowers’ disclosures are thoroughly investigated and that their lives are not severely impacted, we can encourage more of them to come forward with useful information, which will better protect the public interest and maintain a higher standard of transparency, accountability, and corporate governance in society.

Notes

[1] Donie O’Sullivan et al., Ex-Twitter Exec Blows The Whistle, Alleging Reckless and Negligent Cybersecurity Policies, CNN (Aug. 24, 2022, 5:59 AM EDT), https://edition.cnn.com/2022/08/23/tech/twitter-whistleblower-peiter-zatko-security/index.html.

[2] Kai-D. Bussmann, Economic Crime: People, Culture, and Controls 10 (2007).

[3] Ryan Mac & Cecilia Kang, Whistle-Blower Says Facebook ‘Chooses Profits Over Safety’, N.Y. Times (Oct. 3, 2021), https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html.

[4] Whistleblower Protection, Office of Inspector General, https://www.oig.dhs.gov/whistleblower-protection#:~:text=The%20Whistleblower%20Protection%20Act%20 (last accessed: Mar. 5, 2023).

[5] U.S. Merit Systems Protection Board, Blowing the Whistle: Barriers to Federal Employees Making Disclosures 27 (2011).

[6] Shelley L. Peffer et al., Whistle Where You Work? The Ineffectiveness of the Federal Whistleblower Protection Act of 1989 and the Promise of the Whistleblower Protection Enhancement Act of 2012, 35 Review of Public Personnel Administration 70 (2015).

[7] Leslie Berger et al., Hijacking the Moral Imperative: How Financial Incentives Can Discourage Whistleblower Reporting, 36 Auditing: A Journal of Practice & Theory 1 (2017).

[8] Matt A. Vega, Beyond Incentives: Making Corporate Whistleblowing Moral in the New Era of Dodd-Frank Act “Bounty Hunting”, 45 Conn. L. Rev. 483.

[9] Geoffrey C. Rapp, Mutiny by the Bounties? The Attempt to Reform Wall Street by the New Whistleblower Provisions of the Dodd-Frank Act, 2012 B.Y.U.L. Rev. 73.

[10] David Kwok, The Public Wrong of Whistleblower Retaliation, 96 Hastings L.J. 1225.


Call of Regulation: How Microsoft and Regulators Are Battling for the Future of the Gaming Industry

Caroline Moriarty, MJLST Staffer

In January 2022, Microsoft announced its proposed acquisition of Activision Blizzard, a video game company, promising to “bring the joy and community of gaming to everyone, across every device.” However, regulators in the United States, the EU, and the United Kingdom have recently indicated that they may block the acquisition because of its antitrust implications. In this post I’ll discuss the proposed acquisition, the antitrust concerns it raises, recent actions from regulators, and the deal’s prospects for success.

Background

Along with making the Windows platform, the Microsoft Office suite, Surface computers, cloud computing software, and, of new relevance, Bing, Microsoft is a major player in the video game space. Microsoft owns Xbox, which, along with Nintendo and Sony (PlayStation), is one of the three most popular gaming consoles. One of the main ways these consoles distinguish themselves from their competitors is through “exclusives,” games that can be played only on a single console. For example, Spider-Man can be played only on PlayStation, the Mario games are exclusive to Nintendo, and Halo can be played only on Xbox. Other games, like Grand Theft Auto, Fortnite, and FIFA, are offered on multiple platforms, allowing consumers to play on whatever console they already own.

Activision Blizzard is a video game holding company, meaning it owns games developed by game development studios and makes decisions about marketing, creative direction, and console availability for individual titles. Some of its most popular games include World of Warcraft, Candy Crush, Overwatch, and one of the most successful game franchises ever, Call of Duty. Readers outside the gaming space may recognize Activision Blizzard’s name from recent news stories about its toxic workplace culture.

In January 2022, Microsoft announced its intention to purchase Activision Blizzard for $68.7 billion, which would be the largest acquisition in the company’s history. Microsoft stated that its goals were to expand into mobile gaming and to make more titles available, especially through Xbox Game Pass, a streaming service for games. After the announcement, critics pointed out two main issues. First, if Microsoft owned Activision Blizzard, it would be able to make the company’s titles exclusive to Xbox. This is especially problematic for the Call of Duty franchise: not only does the franchise include the top three most popular games of 2022, but an estimated 400 million people play at least one of the games, 42% of whom play on PlayStation. Second, if Microsoft owned Activision Blizzard, it could also make its titles exclusive to Xbox Game Pass, which would change the structure of the relatively new cloud streaming market.

The Regulators

Microsoft’s proposed acquisition has drawn scrutiny from the FTC, the European Commission, and the UK Competition and Markets Authority. In what the New York Times has dubbed “a global alignment on antitrust,” the three regulators have pursued a connected strategy. First, the European Commission announced an investigation of the deal in November, signaling that the deal would take time to close. Then, a month later, the FTC sued in its own administrative court, which is more favorable to antitrust claims. In February 2023, the Competition and Markets Authority released provisional findings on the effect of the acquisition on UK markets, writing that the merger may be expected to result in a substantial lessening of competition. Finally, the European Commission completed its investigation, concluding that the possibility of Microsoft making Activision Blizzard titles exclusive “could reduce competition in the markets for the distribution of console and PC video games, leading to higher prices, lower quality and less innovation for console game distributors, which may, in turn, be passed on to consumers.” Together, the agencies are signaling a new era in antitrust, one that is much tougher on deals than the recent past.

In its complaint, the FTC specifically called out Microsoft’s past acquisitions. When Microsoft acquired Bethesda (another video game company, known for games like The Elder Scrolls V: Skyrim) in 2021, it told the European Commission that it would keep titles available on other consoles. After the deal cleared, Microsoft announced that many Bethesda titles, including highly anticipated games like Starfield and Redfall, would be Microsoft exclusives. The FTC used this history to argue that any promise by Microsoft to keep games like Call of Duty available to all consumers could be broken at any time. Microsoft has disputed this characterization, arguing that it made exclusivity decisions on a “case-by-case basis,” which was in line with what it told the European Commission.

For the current deal, Microsoft has agreed to make Call of Duty available on the Nintendo Switch, and it claims to have offered Sony a guarantee that the franchise would remain available on PlayStation for ten years. This type of guarantee is known as a conduct remedy, which preserves competition by requiring the merged firm to commit to certain business actions or to refrain from certain conduct going forward. In contrast, structural remedies usually require a company to divest certain assets by selling parts of the business. One example of conduct remedies comes from the Live Nation-Ticketmaster merger, in which the companies agreed not to retaliate against concert venue customers that switched to a different ticketing service and not to tie sales of ticketing services to the concerts they promoted. However, as the recent Taylor Swift ticketing debacle suggests, conduct remedies may not be effective in eliminating anticompetitive behavior.

Conclusion

Microsoft faces an uphill battle with its proposed acquisition. Despite its claims that Xbox does not exercise outsize influence in the gaming industry, the sheer size and potential effects of this acquisition make those claims much weaker. Further, the company faces stricter scrutiny from new regulators in the United States. Assistant Attorney General Jonathan Kanter, who leads the DOJ’s antitrust division, has already indicated that he prefers structural remedies to conduct remedies, and FTC Chair Lina Khan is well known for her opposition to big tech companies. If Microsoft wants this deal to succeed, it may have to provide more convincing evidence that it will not repeat its past anticompetitive conduct.


Saving the Planet With Admin Law: Another Blow to Tax Exceptionalism

Caroline Moriarty, MJLST Staffer

Earlier this month, the U.S. Tax Court struck down an administrative notice issued by the IRS regarding conservation easements in Green Valley Investors, LLC v. Commissioner. While the ruling itself may be minor, the court may be signaling a shift away from tax exceptionalism and toward ordinary administrative law under the Administrative Procedure Act (“APA”), which could have major implications for the way the IRS operates. In this post, I will explain what conservation easements are, what the court ruled, and what the ruling may mean for IRS administrative actions going forward.

Conservation Easements

Conservation easements are used by wealthy taxpayers to get tax deductions. Under Section 170(h) of the Internal Revenue Code (“IRC”), taxpayers who purchase development rights for land and then donate those rights to a charitable organization that pledges not to develop or use the land receive a deduction based on the value of the donated rights. The public gets the benefit of preserved land, which could be used as a park or nature reserve, and the donor gets a tax break.

However, this deduction led to the creation of “syndicated conservation easements.” In this tax scheme, intermediaries purchase vacant land worth little, hire an appraiser to declare its value to be much higher, and then sell stakes in the donation of the land to investors, who get a tax deduction four to five times higher than what they paid. In exchange, the intermediaries collect large fees.
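
To make the arithmetic of the scheme concrete, the short sketch below works through hypothetical numbers for a single investor. Every figure here (the land price, the inflated appraisal, the investor’s buy-in, the assumed pro rata share, and the 37% marginal rate) is an illustrative assumption, not data from any actual transaction.

```python
# Hypothetical syndicated conservation easement arithmetic (illustrative only).
PURCHASE_PRICE = 1_000_000      # what the promoters actually pay for the vacant land
INFLATED_APPRAISAL = 5_000_000  # the value the hired appraiser assigns to the donated easement
INVESTOR_BUY_IN = 100_000       # one investor's stake in the syndicate
MARGINAL_TAX_RATE = 0.37        # assumed top federal marginal income tax rate

investor_share = INVESTOR_BUY_IN / PURCHASE_PRICE        # assume a simple pro rata allocation (10%)
claimed_deduction = investor_share * INFLATED_APPRAISAL  # $500,000 deduction passed through
tax_savings = claimed_deduction * MARGINAL_TAX_RATE      # $185,000 in tax avoided

print(f"Deduction claimed: ${claimed_deduction:,.0f} "
      f"({claimed_deduction / INVESTOR_BUY_IN:.0f}x the ${INVESTOR_BUY_IN:,} invested)")
print(f"Tax avoided at a {MARGINAL_TAX_RATE:.0%} rate: ${tax_savings:,.0f}")
```

On these assumed numbers, the deduction is five times the buy-in and the tax avoided alone exceeds the amount invested, which is why the IRS treats the structure as abusive.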

Conservation easements can be used to protect the environment, and proponents of the deduction argue that easements are a critical tool for keeping land safe from development pressures. However, the IRS and other critics argue that these deductions are abused and cost the government between $1.3 billion and $2.4 billion in lost tax revenue. Some appraisers in these schemes have been indicted for “fraudulent” and “grossly inflated” land appraisals, and both Congress and the IRS have published research about the potential for abuse. In 2022, the IRS declared the schemes one of its “Dirty Dozen” for the year, writing that “these abusive arrangements do nothing more than game the tax system with grossly inflated tax deductions and generate high fees for promoters.”

Notice 2017-10 and the Tax Court’s Green Valley Ruling

To combat the abuse of conservation easements, the IRS released an administrative notice (the “Notice”) requiring taxpayers to disclose any syndicated conservation easement on their tax returns as a “listed transaction.” The Notice did not go through the APA’s notice-and-comment procedures. Then, in 2019, the IRS disallowed over $22 million in charitable deductions claimed by Green Valley and the other petitioners for 2014 and 2015 and assessed a variety of penalties.

While the underlying tax law is complex, Green Valley and the other petitioners challenged the penalties, arguing that the Notice justifying them had not gone through notice-and-comment procedures. In response, the IRS argued that Congress had exempted the agency from those procedures. Specifically, the IRS pointed to a Treasury Regulation it had issued defining a “listed transaction” as one “identified by notice, regulation, or other form of published guidance,” which, in the agency’s view, should have indicated to Congress that the IRS would be operating outside of APA requirements when issuing such notices.

The Tax Court disagreed, writing, “We remain unconvinced that Congress expressly authorized the IRS to identify a syndicated conservation easement transaction as a listed transaction without the APA’s notice-and-comment procedures, as it did in Notice 2017-10.” Essentially, the statutes in which Congress authorized IRS penalties did not set the criteria for how taxpayers would incur those penalties, so the IRS supplied the criteria through rules that never went through APA review. Had Congress expressly authorized the IRS, in the penalty statutes themselves, to determine the requirements for penalties without APA procedures, the Notice would have been valid.

In invalidating the Notice, the Tax Court held that Notice 2017-10 was a legislative rule requiring notice-and-comment procedures because it imposed substantive reporting obligations on taxpayers under threat of penalties. Since the decision, the IRS has issued proposed regulations on the same topic that will go through notice-and-comment procedures, while continuing to defend the validity of the Notice in other circuits (the Tax Court adopted reasoning from a Sixth Circuit decision).

The Future of Administrative Law and the IRS 

The decision follows other recent cases in which courts have pushed the IRS to follow APA rules. However, following the APA is a departure from the past understanding of administrative law’s role in tax law. In the past, “tax exceptionalism” described the misperception that tax law is so complex and so different from other regulatory regimes that the rules of administrative law do not apply. This understanding allowed the IRS to issue multiple levels of regulatory guidance, some binding and some not, without effective oversight from the courts. Further, judicial review of IRS actions is limited by statute, and even where review is available, it may be ineffective if the judges are not tax experts.

This movement toward administrative law has implications for both taxpayers and the IRS. For taxpayers, administrative law principles could provide additional avenues to challenge IRS actions and allow for more remedies. For the IRS, the APA may be an additional barrier to its job of collecting tax revenue. At the end of the day, syndicated conservation easements can be used to defraud the government, and the IRS should do something to curtail their potential for abuse; following notice-and-comment procedures could delay effective tax administration. However, the IRS is an administrative agency, and it does not make sense to think it can make its own rules or act as though it is not subject to the APA. Either way, administrative law will likely continue to prevail in both the federal courts and the Tax Court, and it will continue to influence tax law as we know it.


The Crypto Wild West Chaos Continues at FTX: Will the DCCPA Fix This?

Jack Atterberry, MJLST Staffer

The FTX Collapse and Its Implications

Over the last few weeks, FTX has imploded in what appears to be a fraud of epic proportions. John Ray III, the former Enron restructuring leader who just took over FTX as CEO for its bankruptcy process, described FTX’s legal and bankruptcy situation as “worse than Enron” and a “complete failure of corporate control.”[1] FTX was a leading cryptocurrency exchange that provided a platform on which customers could buy and sell crypto assets, similar to a traditional stock exchange. As of this past summer, FTX was worth $32 billion and served as a platform that consumers around the world trusted enough to deposit tens of billions of dollars in assets.[2]

Although FTX and its CEO Sam Bankman-Fried (“SBF”) engaged in numerous questionable and likely illegal business practices, perhaps the most serious was commingling customer deposits on the FTX exchange with assets from SBF’s asset management firm, Alameda Research. Although the facts are still being uncovered, preliminary investigations indicate that Alameda Research was using customer deposits in its trading and lending activities without customer consent. Customers now face the unpleasant reality that their assets, in excess of $1 billion in aggregate, may never be returned.[3] While many lessons in corporate governance can be learned from the FTX situation, a key legal implication of the meltdown is that crypto has a regulatory problem that needs to be addressed by Congress and other US government agencies.

Current State of Government Regulation

Crypto assets are a relatively new asset class that has risen to global prominence since the pseudonymous Satoshi Nakamoto published the Bitcoin white paper in 2008 and launched the network in 2009.[4] Although crypto assets and the business activities associated with them are regulated in the United States, that regulation has been inconsistent and has created uncertainty for businesses and individuals in the ecosystem. The US Securities and Exchange Commission (“SEC”), state legislatures, the US Treasury, and a host of other government agencies have acted in uncoordinated ways: the SEC has pursued enforcement actions inconsistently, state governments have enacted differing digital asset laws, and the Treasury has banned crypto entities without clear rationale.[5] This has been a major problem for the industry and has led companies, now infamously including FTX, to move abroad in search of more regulatory certainty. Companies like FTX have chosen to domicile in jurisdictions like the Bahamas to avoid having to guess what approach various state governments and federal agencies will take toward their digital asset business activities.

Earlier in 2022, Congress introduced the Digital Commodities Consumer Protection Act (“DCCPA”) to fill gaps in the federal regulatory framework that oversees the crypto industry. The DCCPA would amend the Commodity Exchange Act to create a much-needed comprehensive and robust regulatory framework for spot markets in digital asset commodities. It would enable the Commodity Futures Trading Commission (“CFTC”) to require digital asset commodity exchanges to actively prevent fraud and market manipulation, and it would give the CFTC authority to access quote and trade data, allowing it to identify market manipulation more easily.[6] Taken as a whole, the DCCPA would implement consumer protections for digital asset commodities, ensure oversight of digital asset commodity platforms (such as FTX, Coinbase, etc.), and better guard against systemic risk to financial markets.[7] The bill fills a necessary gap in federal crypto regulation, and industry observers are optimistic about its chances of becoming law.[8]

Digital Asset Regulation Has a Long Path Ahead

Despite the potential benefits and the strong congressional action that the DCCPA represents, elements of the bill have been criticized by both the crypto industry and policy experts. According to the Blockchain Association, a leading crypto policy organization, the DCCPA could have negative implications for the decentralized finance (“DeFi”) ecosystem because of the onerous reporting and custody requirements that elements of the bill would impose on DeFi protocols and applications.[9] “DeFi” is a catch-all term for blockchain-based financial tools that allow users to trade, borrow, and lend crypto assets without third-party intermediaries.[10] The DCCPA attempts to regulate intermediary risks associated with digital asset trading, whereas the whole point of DeFi is to remove intermediaries through blockchain software.[11] The Blockchain Association has also criticized the DCCPA for providing an overly broad definition of “digital commodity platform” and an overly narrow and ambiguous definition of “digital commodity,” which could create unnecessary future turf wars between the SEC and the CFTC.[12] When Congress revisits the bill next year, these complexities will likely come up in weighing its pros and cons. Beyond the text of the DCCPA, the legislators pushing the bill forward must also contend with its association with Sam Bankman-Fried. The former FTX CEO and suspected fraudster was perhaps the bill’s greatest supporter and lobbied for its provisions before Congress several times.[13] While Bankman-Fried’s support does not necessarily mean anything is wrong with the bill, some legislators and lobbyists may be hesitant to push forward legislation heavily influenced by a person suspected of perpetrating a massive fraud scheme that severely hurt thousands of consumers.

Though the goal of the DCCPA is to establish CFTC authority over crypto assets that qualify as commodities, the crypto ecosystem will still be left with several unanswered regulatory questions if it passes. A key question is whether particular digital assets will be treated as commodities, securities, or something else entirely. Another looming question is how Congress will regulate stablecoins, digital assets whose price is designed to be pegged to another asset, typically a real-world asset such as the US dollar. For these questions, Congress and the SEC will likely need to provide additional guidance and rules to build on the increased certainty the DCCPA could bring. By passing an amended version of the DCCPA that pays more careful attention to the DeFi ecosystem and clarifies the definitions of digital commodities and digital commodity platforms, Congress would go a long way toward preventing future FTX-like fraud schemes, protecting consumers, and ensuring that crypto innovation stays in the US.

Notes

[1] Ken Sweet & Michelle Chapman, FTX Is a Bigger Mess Than Enron, New CEO Says, Calling It “Unprecedented”, TIME (Nov. 17, 2022), https://time.com/6234801/ftx-fallout-worse-than-enron/

[2] FTX Company Profile, FORBES, https://www.forbes.com/companies/ftx/?sh=506342e23c59

[3] Osipovich et al., They Lived Together, Worked Together and Lost Billions Together: Inside Sam Bankman-Fried’s Doomed FTX Empire, WSJ (Nov. 19, 2022), https://www.wsj.com/articles/sam-bankman-fried-ftx-alameda-bankruptcy-collapse-11668824201

[4] Guardian Nigeria, The idea and a brief history of cryptocurrencies, The Guardian (Dec. 26, 2022), https://guardian.ng/technology/tech/the-idea-and-a-brief-history-of-cryptocurrencies/

[5] Kathryn White, Cryptocurrency regulation: where are we now, and where are we going?, World Economic Forum (Mar. 28, 2022), https://www.weforum.org/agenda/2022/03/where-is-cryptocurrency-regulation-heading/

[6] https://www.agriculture.senate.gov/imo/media/doc/Testimony_Phillips_09.15.2022.pdf

[7] US Senate Agriculture Committee, Crypto One-Pager: The Digital Commodities Consumer Protection Act Closes Regulatory Gaps, https://www.agriculture.senate.gov/imo/media/doc/crypto_one-pager1.pdf

[8] Courtney Degen, Washington wants to regulate cryptocurrency, Pensions & Investments (Oct. 3, 2022), https://www.pionline.com/cryptocurrency/washington-wants-regulate-crypto-path-unclear

[9] Jake Chervinsky, Blockchain Association Calls for Revisions to the Digital Commodities Consumer Protection Act (DCCPA), Blockchain Association (Sept. 15, 2022), https://theblockchainassociation.org/blockchain-association-calls-for-revisions-to-the-digital-commodities-consumer-protection-act-dccpa/

[10] Rakesh Sharma, What is Decentralized Finance (DeFi) and How Does It Work?, Investopedia (Sept. 21, 2022), https://www.investopedia.com/decentralized-finance-defi-5113835.

[11] Jennifer J. Schulpt & Jack Solowey, DeFi Must Be Defended, CATO Institute (Oct. 26, 2022), https://www.cato.org/commentary/defi-must-be-defended

[12] Jake Chervinsky, supra note 9.

[13] Fran Velasquez, Former SEC Official Doubts FTX Crash Will Prompt Congress to Act on Crypto Regulations, CoinDesk (Nov. 16, 2022), https://www.coindesk.com/business/2022/11/16/former-sec-official-doubts-ftx-crash-will-prompt-congress-to-act-on-crypto-regulations/