Regulatory

The Stifling Potential of Biden’s Executive Order on AI

Christhy Le, MJLST Staffer

Biden’s Executive Order on “Safe, Secure, and Trustworthy” AI

On October 30, 2023, President Biden issued a landmark Executive Order to address concerns about the burgeoning and rapidly evolving technology of AI. The Biden administration states that the order’s goal is to ensure that America leads the way in seizing the promising potential of AI while managing the risks of its potential misuse.[1] The Executive Order establishes (1) new standards for AI safety and security; (2) increased protections for Americans’ data and privacy; and (3) a plan to develop authentication methods to detect AI-generated content.[2] Notably, Biden’s Executive Order also highlights the need to develop AI in a way that ensures it advances equity and civil rights, fights algorithmic discrimination, and creates efficiencies and equity in the distribution of governmental resources.[3]

While the Biden administration’s Executive Order has been lauded as the most comprehensive step taken by a President to safeguard against threats posed by AI, its true impact is yet to be seen. The impact of the Executive Order will depend on its implementation by the agencies that have been tasked with taking action. The regulatory heads tasked with implementing Biden’s Executive Order are the Secretary of Commerce, Secretary of Energy, Secretary of Homeland Security, and the National Institute of Standards and Technology.[4] Below is a summary of the key calls-to-action from Biden’s Executive Order:

  • Industry Standards for AI Development: The National Institute of Standards and Technology (NIST), Secretary of Commerce, Secretary of Energy, Secretary of Homeland Security, and other heads of agencies selected by the Secretary of Commerce will define industry standards and best practices for the development and deployment of safe and secure AI systems.
  • Red-Team Testing and Reporting Requirements: Companies developing or demonstrating an intent to develop potential dual-use foundation models will be required to provide the Federal Government, on an ongoing basis, with information, reports, and records on the training and development of such models. Companies will also be responsible for sharing the results of any AI red-team testing conducted pursuant to guidance issued by NIST.
  • Cybersecurity and Data Privacy: The Department of Homeland Security shall provide an assessment of potential risks related to the use of AI in critical infrastructure sectors and issue a public report on best practices to manage AI-specific cybersecurity risks. The Director of the National Science Foundation shall fund the creation of a research network to advance privacy research and the development of Privacy Enhancing Technologies (PETs).
  • Synthetic Content Detection and Authentication: The Secretary of Commerce and heads of other relevant agencies will provide a report outlining existing methods and the potential development of further standards/techniques to authenticate content, track its provenance, detect synthetic content, and label synthetic content.
  • Maintaining Competition and Innovation: The government will invest in AI research by creating at least four new National AI Research Institutes and launching a pilot program distributing computational, data, model, and training resources to support AI-related research and development. The Secretary of Veterans Affairs will also be tasked with hosting nationwide AI Tech Sprint competitions. Additionally, the FTC will be charged with using its authorities to ensure fair competition in the AI and semiconductor industries.
  • Protecting Civil Rights and Equity with AI: The Secretary of Labor will publish a report on the effects of AI on the labor market and employees’ well-being. The Attorney General shall implement and enforce existing federal laws to address civil rights and civil liberties violations and discrimination related to AI. The Secretary of Health and Human Services shall publish a plan to utilize automated or algorithmic systems in administering public benefits and services and ensure equitable distribution of government resources.[5]

Potential for Big Tech’s Outsized Influence on Government Action Against AI

Leading up to the issuance of this Executive Order, the Biden administration met repeatedly and exclusively with leaders of big tech companies. In May 2023, President Biden and Vice President Kamala Harris met with the CEOs of leading AI companies–Google, Anthropic, Microsoft, and OpenAI.[6] In July 2023, the Biden administration celebrated its achievement of securing voluntary commitments from seven AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to work towards developing AI technology in a safe, secure, and transparent manner.[7] The voluntary commitments generally require tech companies to publish public reports on their developed models, submit to third-party testing of their systems, prioritize research on societal risks posed by AI systems, and invest in cybersecurity.[8] Many industry leaders criticized these voluntary commitments for being vague and “more symbolic than substantive.”[9] Industry leaders also noted the lack of enforcement mechanisms to ensure companies follow through on these commitments.[10] Notably, the White House allowed only leaders of large tech companies to weigh in on the requirements of Biden’s Executive Order.

While a bipartisan group of senators[11] hosted a more diverse audience of tech leaders in their AI Insights Forum, the attendees for the first and second forums were still largely limited to CEOs or cofounders of prominent tech companies, VC executives, or professors at leading universities.[12] Marc Andreessen, a co-founder of Andreessen Horowitz, a prominent VC fund, noted that in order to protect competition, the “future of AI shouldn’t be dictated by a few large corporations. It should be a group of global voices, pooling together diverse insights and ethical frameworks.”[13] On November 3, 2023, a group of prominent academics, VC executives, and heads of AI startups published an open letter to the Biden administration in which they voiced their concern about the Executive Order’s potentially stifling effects.[14] The group also welcomed a discussion with the Biden administration on the importance of developing regulations that allow for the robust development of open source AI.[15]

Potential to Stifle Innovation and Stunt Tech Startups

While the language of Biden’s Executive Order is fairly broad and general, it still has the potential to stunt early innovation by smaller AI startups. Industry leaders and AI startup founders have voiced concern over the Executive Order’s reporting requirements and restrictions on models over a certain size.[16] Ironically, Biden’s Order includes a claim that the Federal Trade Commission will “work to promote a fair, open, and competitive ecosystem” by helping developers and small businesses access technical resources and commercialization opportunities.

Despite this promise of providing resources to startups and small businesses, the Executive Order’s stringent reporting and information-sharing requirements will likely have a disproportionately detrimental impact on startups. Andrew Ng, a longtime AI leader and cofounder of Google Brain and Coursera, stated that he is “quite concerned about the reporting requirements for models over a certain size” and is worried about the “overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”[17] Ng believes that regulating AI model size will likely hurt the open-source community and unintentionally benefit tech giants as smaller companies will struggle to comply with the Order’s reporting requirements.[18]

Open source software (OSS) has been around since the 1980s.[19] OSS is code that is free to access, use, and change without restriction.[20] The open source community has played a central part in developing the use and application of AI, as leading generative AI models like ChatGPT and Llama have open-source origins.[21] While both Llama and ChatGPT are no longer open source, their development and advancement heavily relied on open source models and frameworks like Transformer, TensorFlow, and PyTorch.[22] Industry leaders have voiced concern that the Executive Order’s broad and vague use of the term “dual-use foundation model” will impose unduly burdensome reporting requirements on small companies.[23] Startups typically have leaner teams, and there is rarely a team dedicated solely to compliance. These reporting requirements will likely create barriers to entry for tech challengers who are pioneering open source AI, as only incumbents with greater financial resources will be able to comply with the Executive Order’s requirements.

While Biden’s Executive Order is unlikely to bring any immediate change, the broad reporting requirements outlined in the Order are likely to stifle emerging startups and pioneers of open source AI.

Notes

[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[2] Id.

[3] Id.

[4] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[5] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

[7] https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[8] https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf.

[9] https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html.

[10] Id.

[11] https://www.heinrich.senate.gov/newsroom/press-releases/read-out-heinrich-convenes-first-bipartisan-senate-ai-insight-forum.

[12] https://techpolicy.press/us-senate-ai-insight-forum-tracker/.

[13] https://www.schumer.senate.gov/imo/media/doc/Marc%20Andreessen.pdf.

[14] https://twitter.com/martin_casado/status/1720517026538778657?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1720517026538778657%7Ctwgr%5Ec9ecbf7ac4fe23b03d91aea32db04b2e3ca656df%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fcointelegraph.com%2Fnews%2Fbiden-ai-executive-order-certainly-challenging-open-source-ai-industry-insiders.

[15] Id.

[16] https://www.cnbc.com/2023/11/02/biden-ai-executive-order-industry-civil-rights-labor-groups-react.html.

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[18] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[19] https://www.brookings.edu/articles/how-open-source-software-shapes-ai-policy/.

[20] Id.

[21] https://www.zdnet.com/article/why-open-source-is-the-cradle-of-artificial-intelligence/.

[22] Id.

[23] Casado, supra note 14.


Payment Pending: CFPB Proposes to Regulate Digital Wallets

Kevin Malecha, MJLST Staffer

Federal regulators are increasingly concerned about digital wallets and person-to-person payment (P2P) apps like Apple Pay, Google Pay, Cash App, and Venmo, and how such services might impact the rights of financial consumers. As many as three-quarters of American adults use digital wallets or payment apps, and in 2022 the total value of transactions was estimated at $893 billion, a figure expected to increase to $1.6 trillion by 2027.[1] In November 2023, the Consumer Financial Protection Bureau proposed a rule that would expand its supervisory powers to cover certain nonbank providers of these services. The CFPB, an independent federal agency within the broader Federal Reserve System, was created by the Dodd-Frank Act in response to the 2007-2008 financial crisis and subsequent recession. The Bureau is tasked with protecting consumers in the financial space by promulgating and enforcing rules governing a wide variety of financial activities like mortgage lending, debt collection, and electronic payments.[2]

The CFPB has identified digital wallets and payment apps as products that threaten consumer financial rights and well-being.[3] First, because these services collect mass amounts of transaction and financial data, they pose a substantial risk to consumer data privacy.[4] Second, if the provider ceases operations or faces a “bank” run, any funds held in digital accounts may be lost because Federal Deposit Insurance Corporation (FDIC) protection, which insures deposits up to $250,000 in traditional banking institutions, is often unavailable for digital wallets.[5]

Enforcement and Supervision

The CFPB holds dual enforcement and supervisory roles. As one of the federal agencies charged with “implementing the Federal consumer financial laws,”[6] the enforcement powers of the CFPB are broad, but enforcement actions are relatively uncommon. In 2022, the Bureau brought twenty enforcement actions.[7] By contrast, the Commodity Futures Trading Commission (CFTC), which is also tasked in part with protecting financial consumers, brought eighty-two enforcement actions in the same period.[8] In contrast to the limited and reactive nature of enforcement actions, the CFPB’s supervisory authority requires regulated entities to disclose certain documents and data, such as internal policies and audit reports, and allows CFPB examiners to proactively review their actions to ensure compliance.[9] The Bureau describes its supervisory process as a tool for identifying issues and addressing them before violations become systemic or cause significant harm to consumers.[10]

The CFPB already holds enforcement authority over all digital wallet and payment app services via its broad power to adjudicate violations of financial laws wherever they occur.[11] However, the Bureau has so far enjoyed only limited supervisory authority over the industry.[12] Currently, the CFPB only supervises digital wallets and payment apps when those services are provided by banks or when the provider falls under another CFPB supervision rule.[13] As tech companies like Apple and Google – which do not fall under other CFPB supervision rules – have increasingly entered the market, they have gone unsupervised.

Proposed Rule

Under its organic statute, the CFPB’s existing supervisory authority covers nonbank persons that offer certain financial services, including real estate and mortgage loans, private education loans, and payday loans.[14] In addition, the statute allows the Bureau to promulgate rules to cover other entities that are “larger participant[s] of a market for other consumer financial products or services.”[15] The proposed rule takes advantage of the power to define “larger participants” and expands the definition to include providers of “general-use digital consumer applications,” which the Bureau defines as the provision of funds transfer or wallet functionality through a digital application that the consumer uses to make payments for personal, household, or family purposes.[16] An entity is a “larger participant” if it (1) provides general-use digital consumer payment applications with an annual volume of at least five million transactions and (2) is not a small business as defined by the Small Business Administration.[17] The Bureau will make determinations on an individualized basis and may request documents and information from the entity to determine if it satisfies the requirements, which the entity can then dispute.

Implications for Digital Wallet and Payment App Providers

Major companies like Apple and Google can easily foresee that the CFPB intends to supervise them under the new rule. The Director of the CFPB recently compared the two American companies to Chinese tech companies Alibaba and WeChat, which offer similar products and which, in the Director’s view, pose a similar risk to consumer data privacy and financial security.[18] For smaller firms, predicting the Bureau’s intentions is challenging, but existing regulations indicate that the Bureau will issue a written communication to initiate supervision.[19] The entity will then have forty-five days to dispute the finding that it meets the regulatory definition of a “larger participant.”[20] In its response, the entity may include a statement of the reasons for its objection along with records, documents, or other information. The Assistant Director of the CFPB will then review the response and make a determination. The regulation gives the Assistant Director the ability to request records and documents from the entity prior to the initial notification of intended supervision and throughout the determination process.[21] The Assistant Director also may extend the timeframe for determination beyond the forty-five-day window.[22]

If an entity becomes supervised, the Bureau will contact it for an initial conference.[23] The examiners will then determine the scope of future supervision, taking into consideration the responses at the conference, any records requested prior to or during the conference, and a review of the entity’s compliance management program.[24] The Bureau prioritizes its supervisory activities based on entity size, volume of transactions, size and risk of the relevant market, state oversight, and other market information to which the Bureau has access.[25] Ongoing supervision is likely to vary based on these factors, as well, but may include on-site or remote examination, review of documents and records, testing accounts and transactions for compliance with federal statutes and regulations, and continued review of the compliance management system.[26] The Bureau may then issue a confidential report or letter stating the examiner’s opinion that the entity has violated or is at risk of violating a statute or regulation.[27] While these findings are not final determinations, they do outline specific steps for the entity to regain or ensure compliance and should be taken seriously.[28] Supervisory reports or letters are distinct from enforcement actions and generally do not result in an enforcement action.[29] However, violations may be referred to the Bureau’s Office of Enforcement, which would then launch its own investigation.[30]

The likelihood of the proposed rule resulting in an enforcement action is, therefore, relatively low, but the exposure for regulated entities is difficult to measure because the penalties in enforcement actions vary widely. From October 2022 to October 2023, amounts paid by regulated entities ranged from $730,000, paid by a remittance provider that violated electronic fund transfer rules,[31] to $3.7 billion in penalties and redress, paid by Wells Fargo for headline-making violations of the Consumer Financial Protection Act.[32]

Notes

[1] Analysis of Deposit Insurance Coverage on Funds Stored Through Payment Apps, Consumer Fin. Prot. Bureau (Jun. 1, 2023), https://www.consumerfinance.gov/data-research/research-reports/issue-spotlight-analysis-of-deposit-insurance-coverage-on-funds-stored-through-payment-apps/full-report.

[2] Final Rules, Consumer Fin. Prot. Bureau, https://www.consumerfinance.gov/rules-policy/final-rules (last visited Nov. 16, 2023).

[3] CFPB Proposes New Federal Oversight of Big Tech Companies and Other Providers of Digital Wallets and Payment Apps, Consumer Fin. Prot. Bureau (Nov. 7, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-new-federal-oversight-of-big-tech-companies-and-other-providers-of-digital-wallets-and-payment-apps.

[4] Id.

[5] Id.

[6] 12 U.S.C. § 5492.

[7] Enforcement by the numbers, Consumer Fin. Prot. Bureau (Nov. 8, 2023), https://www.consumerfinance.gov/enforcement/enforcement-by-the-numbers.

[8] CFTC Releases Annual Enforcement Results, Commodity Futures Trading Comm’n (Oct. 20, 2022), https://www.cftc.gov/PressRoom/PressReleases/8613-22.

[9] CFPB Supervision and Examination Manual, Consumer Fin. Prot. Bureau at Overview 10 (Mar. 2017), https://files.consumerfinance.gov/f/documents/cfpb_supervision-and-examination-manual_2023-09.pdf.

[10] An Introduction to CFPB’s Exams of Financial Companies, Consumer Fin. Prot. Bureau 4 (Jan. 9, 2023), https://files.consumerfinance.gov/f/documents/cfpb_an-introduction-to-cfpbs-exams-of-financial-companies_2023-01.pdf.

[11] 12 U.S.C. §5563(a).

[12] CFPB Proposes New Federal Oversight of Big Tech Companies and Other Providers of Digital Wallets and Payment Apps, Consumer Fin. Prot. Bureau (Nov. 7, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-new-federal-oversight-of-big-tech-companies-and-other-providers-of-digital-wallets-and-payment-apps.

[13] Id.

[14] 12 U.S.C. § 5514.

[15] Id.

[16] Defining Larger Participants of a Market for General-Use Digital Consumer Payment, Consumer Fin. Prot. Bureau 3 (Nov. 7, 2023), https://files.consumerfinance.gov/f/documents/cfpb_nprm-digital-payment-apps-lp-rule_2023-11.pdf.

[17] Id. at 4.

[18] Rohit Chopra, Prepared Remarks of CFPB Director Rohit Chopra at the Brookings Institution Event on Payments in a Digital Century, Consumer Fin. Prot. Bureau (Oct. 6, 2023), https://www.consumerfinance.gov/about-us/newsroom/prepared-remarks-of-cfpb-director-rohit-chopra-at-the-brookings-institution-event-on-payments-in-a-digital-century.

[19] 12 CFR § 1090.103(a).

[20] 12 CFR § 1090.103(b).

[21] 12 CFR § 1090.103(c).

[22] 12 CFR § 1090.103(d).

[23] Defining Larger Participants of a Market for General-Use Digital Consumer Payment, Consumer Fin. Prot. Bureau 6 (Nov. 7, 2023), https://files.consumerfinance.gov/f/documents/cfpb_nprm-digital-payment-apps-lp-rule_2023-11.pdf.

[24] Id.

[25] Id. at 5.

[26] Id. at 6.

[27] An Introduction to CFPB’s Exams of Financial Companies, Consumer Fin. Prot. Bureau 3 (Jan. 9, 2023), https://files.consumerfinance.gov/f/documents/cfpb_an-introduction-to-cfpbs-exams-of-financial-companies_2023-01.pdf.

[28] Id.

[29] Id.

[30] Id.

[31] CFPB Orders Servicio UniTeller to Refund Fees and Pay Penalty for Failing to Follow Remittance, Consumer Fin. Prot. Bureau (Dec. 22, 2022), https://www.consumerfinance.gov/enforcement/actions/servicio-uniteller-inc.

[32] CFPB Orders Wells Fargo to Pay $3.7 Billion for Widespread Mismanagement of Auto Loans, Mortgages, and Deposit Accounts, Consumer Fin. Prot. Bureau (Dec. 20, 2022), https://www.consumerfinance.gov/enforcement/actions/wells-fargo-bank-na-2022.


Conflicts of Interest and Conflicting Interests: The SEC’s Controversial Proposed Rule

Shaadie Ali, MJLST Staffer

A controversial proposed rule from the SEC on AI and conflicts of interest is generating significant pushback from brokers and investment advisers. The proposed rule, dubbed “Reg PDA” by industry commentators in reference to its focus on “predictive data analytics,” was issued on July 26, 2023.[1] Critics claim that, as written, Reg PDA would require broker-dealers and investment managers to effectively eliminate the use of almost all technology when advising clients.[2] The SEC maintains the proposed rule is intended to address the potential for AI to harm more investors more quickly than ever before, but some critics argue that the rule would reach far beyond generative AI, covering nearly all technology. Critics also highlight the requirement that conflicts of interest be eliminated or neutralized as nearly impossible to meet and as a departure from traditional principles of informed consent in financial advising.[3]

The SEC’s 2-page fact sheet on Reg PDA describes the 239-page proposal as requiring broker-dealers and investment managers to “eliminate or neutralize the effect of conflicts of interest associated with the firm’s use of covered technologies in investor interactions that place the firm’s or its associated person’s interest ahead of investors’ interests.”[4] The proposal defines covered technology as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.”[5] Critics have described this definition of “covered technology” as overly broad, with some going so far as to suggest that a calculator may be “covered technology.”[6] Despite commentators’ insistence, this particular contention is implausible – in its Notice of Proposed Rulemaking, the SEC stated directly that “[t]he proposed definition…would not include technologies that are designed purely to inform investors.”[7] More broadly, though, the SEC touts the proposal’s broadness as a strength, noting it “is designed to be sufficiently broad and principles-based to continue to be applicable as technology develops and to provide firms with flexibility to develop approaches to their use of technology consistent with their business model.”[8]

This move by the SEC comes amidst concerns raised by SEC chair Gary Gensler and the Biden administration about the potential for the concentration of power in artificial intelligence platforms to cause financial instability.[9] On October 30, 2023, President Biden signed an Executive Order that established new standards for AI safety and directed the issuance of guidance for agencies’ use of AI.[10] When questioned about Reg PDA at an event in early November, Gensler defended the proposed regulation by arguing that it was intended to protect online investors from receiving skewed recommendations.[11] Elsewhere, Gensler warned that it would be “nearly unavoidable” that AI would trigger a financial crisis within the next decade unless regulators intervened soon.[12]

Gensler’s explanatory comments have done little to curb criticism by industry groups, who have continued to submit comments via the SEC’s notice and comment process long after the SEC’s October 10 deadline.[13] In addition to highlighting the potential impacts of Reg PDA on brokers and investment advisers, many commenters questioned whether the SEC had the authority to issue such a rule. The American Free Enterprise Chamber of Commerce (“AmFree”) argued that the SEC exceeded its authority under both its organic statutes and the Administrative Procedure Act (APA) in issuing a blanket prohibition on conflicts of interest.[14] In its public comment, AmFree argued the proposed rule was arbitrary and capricious, pointing to the SEC’s alleged failure to adequately consider the costs associated with the proposal.[15] AmFree also invoked the major questions doctrine to question the SEC’s authority to promulgate the rule, arguing “[i]f Congress had meant to grant the SEC blanket authority to ban conflicts and conflicted communications generally, it would have spoken more clearly.”[16] In his scathing public comment, Robinhood Chief Legal and Corporate Affairs Officer Daniel M. Gallagher alluded to similar APA concerns, calling the proposal “arbitrary and capricious” on the grounds that “[t]he SEC has not demonstrated a need for placing unprecedented regulatory burdens on firms’ use of technology.”[17] Gallagher went on to condemn the proposal’s apparent “contempt for the ordinary person, who under the SEC’s apparent world view [sic] is incapable of thinking for himself or herself.”[18]

Although investor and broker industry groups have harshly criticized Reg PDA, some consumer protection groups have expressed support through public comment. The Consumer Federation of America (CFA) endorsed the proposal as “correctly recogniz[ing] that technology-driven conflicts of interest are too complex and evolve too quickly for the vast majority of investors to understand and protect themselves against, there is significant likelihood of widespread investor harm resulting from technology-driven conflicts of interest, and that disclosure would not effectively address these concerns.”[19] The CFA further argued that the final rule should go even further, citing loopholes in the existing proposal for affiliated entities that control or are controlled by a firm.[20]

More generally, commentators have observed that the SEC’s new prescriptive requirement that firms eliminate or neutralize potential conflicts of interest marks a departure from traditional securities laws, under which disclosure of potential conflicts of interest has historically been sufficient.[21] Historically, conflicts of interest stemming from AI and technology have been regulated the same as any other conflict of interest – while brokers are required to disclose their conflicts, their conduct is primarily regulated through their fiduciary duty to clients. In turn, some commentators have suggested that the legal basis for the proposed regulations is well-grounded in the investment adviser’s fiduciary duty to always act in the best interest of its clients.[22] Some analysts note that “neutralizing” the effects of a conflict of interest arising from such technology does not necessarily require advisers to discard the technology; firms might instead change the way firm-favorable information is analyzed or weighed. Even so, the requirement marks a significant departure from the disclosure regime. Given the widespread and persistent opposition to the rule, both through the notice and comment process and elsewhere by commentators and analysts, it is unclear whether the SEC will make significant revisions to a final rule. While the SEC could conceivably narrow the definitions of “covered technology,” “investor interaction,” and “conflicts of interest,” it is difficult to imagine how the SEC could modify the “eliminate or neutralize” requirement in a way that would bring it into line with the existing disclosure-based regime.

For its part, the SEC under Gensler is likely to continue pursuing regulations on AI regardless of the outcome of Reg PDA. Gensler has long expressed his concerns about the impacts of AI on market stability. In a 2020 paper analyzing regulatory gaps in the use of generative AI in financial markets, Gensler warned, “[e]xisting financial sector regulatory regimes – built in an earlier era of data analytics technology – are likely to fall short in addressing the risks posed by deep learning.”[23] Regardless of how the SEC decides to finalize its approach to AI in conflict of interest issues, it is clear that brokers and advisers are likely to resist broad-based bans on AI in their work going forward.

Notes

[1] Press Release, Sec. and Exch. Comm’n., SEC Proposes New Requirements to Address Risks to Investors From Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Jul. 26, 2023).

[2] Id.

[3] Jennifer Hughes, SEC faces fierce pushback on plan to police AI investment advice, Financial Times (Nov. 8, 2023), https://www.ft.com/content/766fdb7c-a0b4-40d1-bfbc-35111cdd3436.

[4] Sec. Exch. Comm’n., Fact Sheet: Conflicts of Interest and Predictive Data Analytics (2023).

[5] Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, 88 Fed. Reg. 53960 (proposed Jul. 26, 2023) (to be codified at 17 C.F.R. pts. 240, 275) [hereinafter Proposed Rule].

[6] Hughes, supra note 3.

[7] Proposed Rule, supra note 5.

[8] Id.

[9] Stefania Palma and Patrick Jenkins, Gary Gensler urges regulators to tame AI risks to financial stability, Financial Times (Oct. 14, 2023), https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac.

[10] Fact Sheet, White House, President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (Oct. 30, 2023).

[11] Hughes, supra note 3.

[12] Palma, supra note 9.

[13] See Sec. Exch. Comm’n., Comments on Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (last visited Nov. 13, 2023), https://www.sec.gov/comments/s7-12-23/s71223.htm (listing multiple comments submitted after October 10, 2023).

[14] Am. Free Enter. Chamber of Com., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270180-652582.pdf.

[15] Id. at 14-19.

[16] Id. at 9.

[17] Daniel M. Gallagher, Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-271299-654022.pdf.

[18] Id. at 43.

[19] Consumer Fed’n. of Am., Comment Letter on Proposed Rule regarding Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers (Oct. 10, 2023), https://www.sec.gov/comments/s7-12-23/s71223-270400-652982.pdf.

[20] Id.

[21] Ken D. Kumayama et al., SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers, Skadden (Aug. 10, 2023), https://www.skadden.com/insights/publications/2023/08/sec-proposes-new-conflicts.

[22] Colin Caleb, ANALYSIS: Proposed SEC Regs Won’t Allow Advisers to Sidestep AI, Bloomberg Law (Aug. 10, 2023), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-proposed-sec-regs-wont-allow-advisers-to-sidestep-ai.

[23] Gary Gensler and Lily Bailey, Deep Learning and Financial Stability (MIT Artificial Intel. Glob. Pol’y F., Working Paper 2020) (in which Gensler identifies several potential systemic risks to the financial system, including overreliance and uniformity in financial modeling, overreliance on concentrated centralized datasets, and the potential of regulators to create incentives for less-regulated entities to take on increasingly complex functions in the financial system).


Cracking the Code: Navigating New SEC Rules Governing Cybersecurity Disclosure

Noah Schottenbauer, MJLST Staffer

In response to the dramatic impact cybersecurity incidents have on investors through declines in stock value and the sizeable costs companies incur in rectifying breaches, the SEC adopted new rules governing cybersecurity-related disclosures for public companies. The rules cover both the disclosure of individual cybersecurity incidents and periodic disclosures of a company’s procedures to assess, identify, and manage material cybersecurity risks, management’s role in assessing and managing cybersecurity risks, and the board of directors’ oversight of cybersecurity risks.[1]

Before evaluating the specifics of the new SEC cybersecurity disclosure requirements, it is important to understand why information about cybersecurity incidents is important to investors. In recent years, data breaches have led to an average decline in stock value of 7.5% amongst publicly traded companies, with impacts being felt long after the date of the breach, as demonstrated by companies experiencing a significant data breach underperforming the NASDAQ by an average of 8.6% after one year.[2] One of the forces driving this decline in stock value is the immense costs associated with rectifying a data breach for the affected company. In 2022, the average cost of a data breach for U.S. companies was $9.44 million, drawn from ransom payments, disruptions in business operations, legal and audit fees, and other associated expenses.[3]

Summary Of Required Disclosures

  • Material Cybersecurity Incidents (Form 8-K, Item 1.05)

Amendments to Item 1.05 of Form 8-K require that reporting companies disclose any cybersecurity incident deemed to be material.[4] When making such disclosures, companies are required to “describe the material aspects of the nature, scope, and timing of the incident, and the material impact or reasonably likely material impact on the registrant, including its financial condition and results of operations.”[5]

So, what is a material cybersecurity incident? The SEC defines cybersecurity incident as “an unauthorized occurrence . . . on or conducted through a registrant’s information systems that jeopardizes the confidentiality, integrity, or availability of a registrant’s information systems or any information residing therein.”[6]

The definition of material, on the other hand, lacks the same degree of clarity. Based on context offered by the SEC through the rulemaking process, material is to be used in a way that is consistent with other securities laws.[7] Under this standard, information, or, in this case, a cybersecurity incident, would be considered material if “there is a substantial likelihood that a reasonable shareholder would consider it important.”[8] This determination is made based on a “delicate assessment of the inferences a ‘reasonable shareholder’ would draw from a given set of facts and the significance of those inferences to him.”[9] Even with this added context, it remains unclear which characteristics of a cybersecurity incident make it material. Given that the rules are intended to protect investor interests, however, the safest course of action is to disclose a cybersecurity incident whenever its materiality is in doubt.[10]

It is important to note that this disclosure mandate is not limited to incidents that occur within the company’s own systems. If a material cybersecurity incident happens on third-party systems that a company utilizes, that too must be disclosed.[11] However, in these situations, companies are only expected to disclose information that is readily accessible, meaning they are not required to go beyond their “regular channels of communication” to gather pertinent information.[12]

Regarding the mechanics of the disclosure, the SEC stipulates that companies must file a Form 8-K under Item 1.05 within four business days of determining that a cybersecurity incident is material.[13] However, disclosure may be delayed in limited circumstances where the United States Attorney General determines that immediate disclosure may seriously threaten national security or public safety.[14]

If there are any changes in the initially disclosed information, or if new material information is discovered that was not available at the time of the first disclosure, registrants are obligated to update their disclosure by filing an amended Form 8-K, ensuring that all relevant information related to the cybersecurity incident is available to the public and stakeholders.[15]

  • Risk Management & Strategy (Regulation S-K, Item 106(b))

Under amendments to Item 106(b) of Regulation S-K, reporting companies are obligated to describe their “processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats in sufficient detail for a reasonable investor to understand those processes.”[16] When detailing these processes, companies must specifically address three primary points. First, they need to indicate whether and how the cybersecurity processes described in Item 106(b) fall under the company’s overarching risk management system or procedures. Second, companies must clarify whether they engage assessors, consultants, auditors, or other third-party entities in connection with these cybersecurity processes. Third, they must describe whether they have methods to monitor and assess significant risks stemming from cybersecurity threats when using the services of any third-party providers.[17]

In addition to the three enumerated elements under Item 106(b), companies are expected to furnish additional information to ensure a comprehensive understanding of their cybersecurity procedures for potential investors. This supplementary disclosure should encompass “whatever information is necessary, based on their facts and circumstances, for a reasonable investor to understand their cybersecurity processes.”[18] While companies are mandated to reveal if they collaborate with third-party service providers concerning their cybersecurity procedures, they are not required to disclose the specific names of these providers or offer a detailed description of the services these third-party entities provide, thus striking a balance between transparency and confidentiality and ensuring that investors have adequate information.[19]

  • Governance (Regulation S-K, Item 106(c))

Amendments to Regulation S-K, Item 106(c) require that companies: (1) describe the board’s oversight of the risks emanating from cybersecurity threats, and (2) characterize management’s role in both assessing and managing material risks arising from such threats.[20]

When detailing management’s role concerning these cybersecurity threats, several issues should be addressed. First, companies should clarify which specific management positions or committees are entrusted with the responsibility of assessing and managing these risks, and should outline the relevant expertise of those individuals or groups in enough detail to convey its nature. Second, companies should describe the processes these entities use to stay informed about, and to monitor, the prevention, detection, mitigation, and remediation of cybersecurity incidents. Third, companies should indicate whether and how these individuals or committees convey information about such risks to the board of directors or to a designated committee or subcommittee of the board.[21]

The disclosures required under Item 106(c) are aimed at balancing investor accessibility to information with the company’s ability to maintain autonomy in determining cybersecurity practices in the context of organizational structure; therefore, disclosures do not need to be overly detailed.[22]

  • Foreign Private Issuers (Form 6-K & Form 20-F)

The rules addressed above apply only to domestic companies, but the SEC imposed parallel cybersecurity disclosure requirements for foreign private issuers under Form 6-K (incident reporting) and Form 20-F (periodic reporting).[23]

Key Dates

The SEC’s final rules are effective as of September 5, 2023, but the Form 8-K and Regulation S-K reporting requirements have yet to take effect. The key compliance dates for each are as follows:

  • Form 8-K Item 1.05(a) Incident Reporting – December 18, 2023
  • Regulation S-K Periodic Reporting – Fiscal years ending on or after December 15, 2023

Smaller reporting companies are provided an extra 180 days to comply with Form 8-K Item 1.05, and will thus be expected to begin incident reporting on June 15, 2024. No such extension was granted to smaller reporting companies with regard to Regulation S-K periodic reporting.[24]

Potential Impact On Cybersecurity Policy

The actual impact of the SEC’s new disclosure requirements will likely remain unclear for some time, yet the regulations compel companies to adopt a greater sense of discipline and transparency in their cybersecurity practices. Although the primary intent of these rules is investor protection, they may also influence how companies formulate their cybersecurity strategies, given the requirement to discuss such policies in their annual disclosures. This heightened level of accountability, regarding defensive measures and risk management strategies in response to cybersecurity threats, may encourage companies to implement more robust cybersecurity practices or, at the very least, ensure that cybersecurity becomes a regular topic of discussion amongst senior leadership. Consequently, the SEC’s initiative may serve as a catalyst for strengthening cybersecurity policies within corporate entities, while also providing investors with essential information for making informed decisions in the marketplace.

Further Information

The overview of the new SEC rules governing cybersecurity disclosures provided above is precisely that: an overview. For more information regarding the requirements and applicability of these rules, please refer to the official rules and the SEC website.

Notes

[1] Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure, Exchange Act Release No. 33-11216, Exchange Act Release No. 34-97989 (July 26, 2023) [hereinafter Final Rule Release], https://www.sec.gov/files/rules/final/2023/33-11216.pdf.

[2] Keman Huang et al., The Devastating Business Impacts of a Cyber Breach, Harv. Bus. Rev. (May 4, 2023), https://hbr.org/2023/05/the-devastating-business-impacts-of-a-cyber-breach.

[3] Id.

[4] Final Rule Release, supra note 1, at 12.

[5] Id. at 49.

[6] Id. at 76.

[7] Id. at 14.

[8] TSC Indus. v. Northway, 426 U.S. 438, 449 (1976).

[9] Id. at 450.

[10] Id. at 448.

[11] Final Rule Release, supra note 1, at 30.

[12] Id. at 31.

[13] Id. at 32.

[14] Id. at 28.

[15] Id. at 50–51.

[16] Id. at 61.

[17] Id. at 63.

[18] Id.

[19] Id. at 60.

[20] Id. at 12.

[21] Id. at 70.

[22] Id.

[23] Id. at 12.

[24] Id. at 107.


The Double-Helix Dilemma: Navigating Privacy Pitfalls in Direct-to-Consumer Genetic Testing

Ethan Wold, MJLST Staffer

Introduction

On October 22, direct-to-consumer genetic testing (DTC-GT) company 23andMe sent emails to a number of its customers informing them of a data breach involving the company’s “DNA Relatives” feature, which allows customers to compare ancestry information with other users worldwide.[1] While 23andMe and other similar DTC-GT companies offer a number of benefits to consumers, such as testing for health predispositions and carrier status for certain genes, this latest data breach is a reminder that, before opting into these sorts of services, one should be aware of the potential risks they present.

Background

DTC-GT companies such as 23andMe and Ancestry.com have proliferated and blossomed in recent years. It is estimated that over 100 million people have utilized some form of direct-to-consumer genetic testing.[2] Using biospecimens submitted by consumers, these companies sequence and analyze an individual’s genetic information to provide a range of services pertaining to one’s health and ancestry.[3] The October 22 data breach specifically involved 23andMe’s “DNA Relatives” feature.[4] The DNA Relatives feature can identify relatives on any branch of one’s family tree by taking advantage of the autosomal chromosomes, the 22 chromosomes that are passed down from ancestors on both sides of one’s family, as well as one’s X chromosome(s).[5] Relatives are identified by comparing the customer’s submitted DNA with the DNA of other 23andMe members who are participating in the DNA Relatives feature.[6] When two people are found to have an identical DNA segment, it is likely they share a recent common ancestor.[7] The DNA Relatives feature even uses the length and number of these identical segments to attempt to predict the relationship between genetic relatives.[8] Given the sensitive nature of sharing genetic information, practices such as the DNA Relatives feature often raise privacy concerns. Yet despite this, the legislation and regulations surrounding DTC-GT are somewhat limited.

Legislation

The Health Insurance Portability and Accountability Act (HIPAA) provides the baseline privacy and data security rules for the healthcare industry.[9] HIPAA’s Privacy Rule regulates the use and disclosure of a person’s “protected health information” by a “covered entity.”[10] Under the Act, the type of genetic information collected by 23andMe and other DTC-GT companies does constitute “protected health information.”[11] However, because HIPAA defines a “covered entity” as a health plan, healthcare clearinghouse, or healthcare provider, DTC-GT companies do not constitute covered entities and therefore fall outside the umbrella of HIPAA’s Privacy Rule.[12]

Thus, the primary source of regulation for DTC-GT companies appears to be the Genetic Information Nondiscrimination Act (GINA). GINA was enacted in 2008 to protect the public from genetic discrimination, alleviate concerns about such discrimination, and thereby encourage individuals to take advantage of genetic testing, technologies, research, and new therapies.[13] GINA defines genetic information as information from genetic tests of an individual or family members, including information from genetic services or genetic research.[14] Therefore, DTC-GT companies fall under GINA’s jurisdiction. However, GINA applies only to the employment and health insurance industries and thus neglects many other arenas where privacy concerns may arise.[15] This is especially relevant for 23andMe customers, as signing up for the service serves as consent for the company to use and share a customer’s genetic information with its associated third-party providers.[16] As a case in point, in 2018 the pharmaceutical giant GlaxoSmithKline purchased a $300 million stake in 23andMe for the purpose of gaining access to the company’s trove of genetic information for use in its drug development trials.[17]

Executive Regulation

In addition to the legislation above, three federal administrative agencies primarily regulate the DTC-GT industry: the Food and Drug Administration (FDA), the Centers for Medicare & Medicaid Services (CMS), and the Federal Trade Commission (FTC). The FDA has jurisdiction over DTC-GT companies because the genetic tests they use are classified as “medical devices,”[18] and in 2013 it exercised this authority over 23andMe by sending the company a letter that resulted in the suspension of one of its health-related genetic tests.[19] However, the FDA only has jurisdiction over diagnostic tests and therefore does not regulate any of the DTC-GT services related to genealogy, such as 23andMe’s DNA Relatives feature.[20] Moreover, the FDA does not have jurisdiction to regulate other aspects of DTC-GT companies’ activities or data practices.[21] CMS can regulate DTC-GT companies through enforcement of the Clinical Laboratory Improvement Amendments (CLIA), which require that genetic testing laboratories ensure the accuracy, precision, and analytical validity of their tests.[22] But, like the FDA, CMS only has jurisdiction over tests that diagnose a disease or assess health.[23]

Lastly, the FTC has broad authority to regulate unfair or deceptive business practices under the Federal Trade Commission Act (FTCA) and has exercised this authority against DTC-GT companies in the past. For example, in 2014 the agency brought an action against two DTC-GT companies that were using genetic tests to match consumers to their nutritional supplements and skincare products.[24] The FTC alleged that the companies’ data security practices were unfair and deceptive because they failed to implement reasonable policies and procedures to protect consumers’ personal information and created unnecessary risks to the personal information of nearly 30,000 consumers.[25] This resulted in the companies entering into an agreement with the FTC whereby they agreed to establish and maintain comprehensive data security programs and to submit to yearly security audits by independent auditors.[26]

Potential Harms

As the above passages illustrate, the federal government appears to recognize, and has at least attempted to mitigate, the privacy concerns associated with DTC-GT. Additionally, a number of states have passed their own laws that limit DTC-GT in certain respects.[27] Nevertheless, given the potential magnitude and severity of harm associated with DTC-GT, one must question whether it is enough. Data breaches involving health-related data are growing in frequency and now account for 40% of all reported data breaches.[28] These breaches result in unauthorized access to DTC-GT consumer-submitted data and can violate an individual’s genetic privacy. Though GINA aims to prevent it, genetic discrimination in the form of increased health insurance premiums or denial of coverage by insurance companies due to genetic predispositions remains one of the leading concerns associated with these violations. What’s more, by obtaining genetic information from DTC-GT databases, it is possible to recover a consumer’s surname and combine it with other metadata, such as age and state, to identify the specific consumer.[29] This may in turn lead to identity theft in the form of opening accounts, taking out loans, or making purchases in a victim’s name, potentially damaging their financial well-being and credit score. Dealing with the aftermath of a genetic data breach can also be expensive: victims may incur legal fees, credit monitoring costs, or other financial burdens in attempting to mitigate the damage.

Conclusion

As it sits now, genetic information submitted to DTC-GT companies already contains a significant volume of consequential information. As technology continues to develop and research presses forward, the volume and utility of this information will only grow over time. Thus, it is crucially important to be aware of risks associated with DTC-GT services.

This discussion is not intended to discourage individuals from participating in DTC-GT. These companies and the services they offer provide a host of benefits, such as allowing consumers to access genetic testing without the healthcare system acting as a gatekeeper, thus providing more autonomy, often at a lower price.[30] Furthermore, the information provided can empower consumers to mitigate the risks of certain diseases, allow for more informed family planning, or offer a better understanding of their heritage.[31] DTC-GT has revolutionized the way individuals access and understand their genetic information. However, this accessibility and convenience come with risks that must be carefully weighed against the benefits.

Notes

[1] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[2] https://www.ama-assn.org/delivering-care/patient-support-advocacy/protect-sensitive-individual-data-risk-dtc-genetic-tests#:~:text=Use%20of%20direct%2Dto%2Dconsumer,November%202021%20AMA%20Special%20Meeting

[3] https://go-gale-com.ezp3.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[4] https://www.reuters.com/world/us/23andme-notifies-customers-data-breach-into-its-dna-relatives-feature-2023-10-24/#:~:text=%22There%20was%20unauthorized%20access%20to,exposed%20to%20the%20threat%20actor.%22

[5] https://customercare.23andme.com/hc/en-us/articles/115004659068-DNA-Relatives-The-Genetic-Relative-Basics

[6] Id.

[7] Id.

[8] Id.

[9] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[10] https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/administrative/combined/hipaa-simplification-201303.pdf

[11] Id.

[12] Id.; https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[13] https://www.eeoc.gov/statutes/genetic-information-nondiscrimination-act-2008

[14] Id.

[15] https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3035561&blobtype=pdf

[16] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[17] https://news.yahoo.com/news/major-drug-company-now-access-194758309.html

[18] https://uscode.house.gov/view.xhtml?req=(title:21%20section:321%20edition:prelim)

[19] https://core.ac.uk/download/pdf/33135586.pdf

[20] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[21] Id.

[22] https://www.law.cornell.edu/cfr/text/42/493.1253

[23] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[24] https://www.ftc.gov/system/files/documents/cases/140512genelinkcmpt.pdf

[25] Id.

[26] Id.

[27] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[28] Id.

[29] https://go-gale-com.ezp2.lib.umn.edu/ps/i.do?p=OVIC&u=umn_wilson&id=GALE%7CA609260695&v=2.1&it=r&sid=primo&aty=ip

[30] Id.

[31] Id.


Who Is Regulating Regulatory Public Comments?

Madeleine Rossi, MJLST Staffer

In 2015 the Federal Communications Commission (FCC) issued a rule on “Protecting and Promoting the Open Internet.”[1] The basic premise of these rules was that internet service providers had unprecedented control over access to information for much of the public. Those in favor of the new rules argued that broadband providers should be required to enable access to all internet content, without either driving or throttling traffic to particular websites for their own benefit. Opponents of these rules – typically industry players such as the same broadband providers that would be regulated – argued that such rules were burdensome and would prevent technological innovation. The fight over these regulations is colloquially known as the fight over “net neutrality.” 

In 2017 the FCC reversed course and put forth a proposal to repeal the 2015 regulations. Any time that an agency proposes a rule, or proposes to repeal a rule, they must go through the notice-and-comment rulemaking procedure. One of the most important parts of this process is the solicitation of public comments. Many rules get put forth without much attention or fanfare from the public. Some rules may only get hundreds of public comments, often coming from the industry that the rule is aimed at. Few proposed rules get attention from the public at large. However, the fight over net neutrality – both the 2015 rules and the repeal of those rules in 2017 – garnered significant public interest. The original 2015 rule amassed almost four million comments.[2] At the time, this was the most public comments that a proposed rule had ever received.[3] In 2017, the rule’s rescission blew past four million comments to acquire a total of almost twenty-two million comments.[4]

At first glance this may seem like a triumph for the democratic purpose of the notice-and-comment requirement. After all, it should be a good thing that so many American citizens are taking an interest in the rules that will ultimately determine how they can use the internet. Unfortunately, that was not the full story. New York Attorney General Letitia James released a report in May of 2021 detailing her office’s investigation into wide-ranging fraud that plagued the notice-and-comment process.[5] Of the twenty-two million comments submitted about the repeal, a little under eight million were generated by a single college student.[6] These computer-generated comments supported the original regulations but used fake names and fabricated content.[7] Another eight million comments were submitted by lead generation companies hired by the broadband companies.[8] These companies stole individuals’ identities and submitted computer-generated comments on their behalf.[9] While these comments used real people’s identities, they fabricated the content in support of repealing the 2015 regulations.[10]

Attorney General James’ investigation showed that real comments, submitted by real people, were “drowned out by masses of fake comments and messages being submitted to the government to sway decision-making.”[11] When the investigation was complete, James’ office concluded that nearly eighteen of the twenty-two million comments received by the FCC in 2017 were faked.[12] The swarm of fake comments created the false perception that the public was generally split on the issue of net neutrality. In fact, anywhere from seventy-five to eighty percent of Americans say that they support net neutrality.[13]

This is not an issue that is isolated to the fight over net neutrality. Other rulemaking proceedings have been targeted as well, namely by the same lead generation firms involved in the 2017 notice-and-comment fraud campaign.[14] Attorney General James’ investigation found that regulatory agencies like the Environmental Protection Agency (EPA), which is responsible for promulgating rules that protect people and the environment from risk, had also been targeted by such campaigns.[15] When agencies like the FCC or EPA propose regulations for the protection of the public, the democratic process of notice-and-comment is completely upended when industry players are able to “drown out” real public voices.

So, what can be done to preserve the democratic nature of the notice-and-comment period? As the technology involved in these schemes advances, this is likely to become not only a recurring issue but one that could entirely subvert the regulatory process of rulemaking. One way that injured parties are fighting back is with lawsuits.

In May of 2023, Attorney General James announced that she had reached a second agreement with three of the lead generation firms involved in the 2017 scheme to falsify public comments.[16] The three companies agreed to pay $615,000 in fines for their involvement.[17] This came in addition to a previous agreement in which the three agreed to pay four million dollars in fines and to change future lead generation practices, and the litigation is ongoing.[18]

However, more must be done to ensure that the notice-and-comment process is not entirely subverted. Financial punishment after the fact does not account for the harm to the democratic process that is already done. Currently, the only recourse is to sue these companies for their fraudulent and deceptive practices. However, lawsuits will typically only result in financial losses. Financial penalties are important, but they will always come after the fact. Once litigation is under way, the harm has already been done to the American public.

Agencies need to ensure that they are keeping pace with rapidly evolving technology so that they can properly vet the validity of the comments they receive. While it is important to keep public commenting relatively open and easy, some kind of vetting procedure has become essential. One option would be to require an accompanying email address or phone number for each comment and to send a simple verification code. Email addresses or phone numbers could also be contacted during the vetting process once the public comment period closes. While it would likely be impractical to contact each commenter individually, verifying a random sample would at least flag whether a coordinated, large-scale fake commenting campaign had taken place.

Additionally, the legislature should monitor fraudulent practices that impact the notice-and-comment process. Lawmakers can and should strengthen laws to punish companies engaged in these practices. For example, in Attorney General James’ report she recommends that lawmakers do at least two things. First, they should explicitly and statutorily prohibit “deceptive and unauthorized comments.”[19] To be effective, these laws should establish large civil fines. Second, the legislature should “strengthen impersonation laws.”[20] Current impersonation laws were not designed with mass-impersonation fraud in mind; these statutes should be amended to increase penalties when many individuals are impersonated.

In conclusion, the use of fake comments to sway agency rulemaking is a problem that is only going to worsen with time and the advance of technology. This is a serious problem that should be taken as such by both agencies and the legislature. 

Notes

[1] 80 Fed. Reg. 19737.

[2] https://www.brookings.edu/articles/democratizing-and-technocratizing-the-notice-and-comment-process/.

[3] Id.

[4] Id.

[5] https://ag.ny.gov/press-release/2021/attorney-general-james-issues-report-detailing-millions-fake-comments-revealing.

[6] https://www.brookings.edu/articles/democratizing-and-technocratizing-the-notice-and-comment-process/.

[7] Id.

[8] Id.

[9] Id.

[10] Id.

[11] https://ag.ny.gov/press-release/2021/attorney-general-james-issues-report-detailing-millions-fake-comments-revealing.

[12] Id.

[13] https://thehill.com/policy/technology/435009-4-in-5-americans-say-they-support-net-neutrality-poll/, https://publicconsultation.org/united-states/three-in-four-voters-favor-reinstating-net-neutrality/.

[14] Id.

[15] https://apnews.com/article/settlement-fake-public-comments-net-neutrality-ae1f69a1f5415d9f77a41f07c3f6c358.

[16] Id.

[17] Id.

[18] https://apnews.com/article/government-and-politics-technology-business-9f10b43b6aacbc750dfc010ceaedaca7.

[19] https://ag.ny.gov/sites/default/files/oag-fakecommentsreport.pdf.

[20] Id.


Whistleblower Reveals…—How Can the Legal System Protect and Encourage Whistleblowing?

Vivian Lin, MJLST Staffer

In July 2022, Twitter’s former head of security, Peiter Zatko, filed a 200+ page complaint with Congress and several federal agencies, disclosing potential major security problems at Twitter that pose a threat to its users and to national security.[1] Though it is still unclear whether these allegations have been confirmed, the disclosure drew significant attention because of its data privacy implications and the calls for whistleblower protection it prompted. Whistleblowers play an important role in detecting major issues in corporations and the government. A 2007 survey reported that in private companies, professional auditors were able to detect only 19% of instances of fraud, while whistleblowers exposed 43% of incidents.[2] In fact, this recent Twitter scandal, Facebook’s online safety scandal in 2021,[3] and the famous national security scandal disclosed by Edward Snowden were all revealed by inside whistleblowers. Without these disclosures, the public may never learn of incidents that involve their personal information and security.

An Overview of the U.S. Whistleblower Protection Regulations

Whistleblower laws aim to protect individuals who report illegal or unethical activities in their workplace or government agency. The primary federal law protecting whistleblowers is the Whistleblower Protection Act (WPA), passed in 1989. The WPA protects federal employees who report violations such as gross mismanagement, gross waste of funds, abuse of authority, or dangers to public health or safety.[4]

In addition to the WPA, other federal laws provide industry-specific whistleblower protections in the private sector. For example, the Sarbanes-Oxley Act (SOX) was enacted in response to the corporate accounting scandals of the early 2000s. It requires public companies to establish and maintain internal controls to ensure the accuracy of their financial statements. Whistleblowers who report violations of securities law can receive protection against retaliation, including reinstatement, back pay, and special damages. To further encourage whistleblowers to come forward with potential securities violations, Congress passed the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) in 2010, which provides incentives and additional protections for whistleblowers. The Securities and Exchange Commission (SEC) established its whistleblower program under Dodd-Frank to reward qualified whistleblowers whose tips lead to a successful SEC sanction. Finally, the False Claims Act (FCA) allows individuals to file lawsuits on behalf of the government against entities that have committed fraud against the government. Whistleblowers who report fraud under the FCA can receive a percentage of the amount recovered. In general, these laws protect whistleblowers in the private corporate setting by providing anti-retaliation protections and incentives for reporting violations.

Concerns Involved in Whistleblowing and Related Laws

While whistleblower laws in the United States provide important protections for individuals who speak out against illegal or unethical activities, whistleblowing still carries risks. Even with anti-retaliation provisions, whistleblowers face retaliation from their employers, such as demotion or termination, and may face difficulties finding new employment in their field. For example, a 2011 report indicated that while the percentage of employees who noticed wrongdoing at their workplaces had decreased since the 1992 survey, about one-third of those who called out wrongdoing and were identified as whistleblowers experienced retaliation in the form of threats and/or reprisals.[5]

Besides the fear of retaliation, another concern is the low success rate for whistleblowers who make a claim under the WPA. A 2015 study analyzed 151 cases in which employees sought protection under the WPA and found that 79% were decided in favor of the federal government.[6] Such a low success rate, on top of potential retaliation, likely discourages employees from coming forward when they identify wrongdoing at their workplace.

A third problem with current whistleblowing law is that financial incentives do not work as effectively as expected and can negatively impact corporate governance. From the incentives perspective, bounty hunting can actually discourage whistleblowers when poorly designed. For example, Dodd-Frank provides monetary rewards for reporting financial fraud that allows the SEC to impose a sanction of more than $1 million on the violator; a study shows that if an employee discovers wrongdoing that will not lead to a sanction over $1 million, the employee is less likely to report it promptly.[7] From a corporate governance perspective, a potential whistleblower might turn to a regulatory agency for the reward rather than report the issue through the company’s internal compliance program, which would give the company the opportunity to do the right thing on its own.[8]

Potential Changes 

There are several ways in which current whistleblower regulations could improve. First, to encourage employees to stand up and identify wrongdoing in the workplace, the SEC’s whistleblower program should eliminate the $1 million threshold requirement for a potential reward. Those who notice illegal behavior that might not result in a $1 million sanction should also receive a reward when they report the potential risks.[9] Second, to deter retaliation, compensation for retaliation should be proportionate to the severity of the wrongdoing uncovered.[10] Currently, statutes mostly offer back pay, front pay, reinstatement, and the like as compensation for retaliation, while punitive damages beyond that are rare. This mechanism does not recognize the public interest at stake in retaliation cases: the public benefits from the whistleblower’s act while she bears the risk of retaliation. Finally, bounty programs may not be the right approach given that many whistleblowers are motivated more by their own moral calling than by money. A robust system ensuring that whistleblowers’ reports are thoroughly investigated, together with stronger protections from retaliation, might work better than bounty programs.

In conclusion, whistleblowers play a crucial role in exposing illegal and unethical activities within organizations and government agencies. While current U.S. whistleblower protection regulations offer some safeguards, shortcomings remain that may discourage employees from reporting wrongdoing. Strengthening protections against retaliation, expanding rewards to cover a wider range of disclosures, and refining the approach to investigations are essential steps. By ensuring that their disclosures are thoroughly investigated and that their lives are not severely impacted, we can encourage more whistleblowers to come forward with useful information, better protecting the public interest and maintaining higher standards of transparency, accountability, and corporate governance in society.

Notes

[1] Donie O’Sullivan et al., Ex-Twitter Exec Blows The Whistle, Alleging Reckless and Negligent Cybersecurity Policies, CNN (Aug. 24, 2022, 5:59 AM EDT), https://edition.cnn.com/2022/08/23/tech/twitter-whistleblower-peiter-zatko-security/index.html.

[2] Kai-D. Bussmann, Economic Crime: People, Culture, and Controls 10 (2007).

[3] Ryan Mac & Cecilia Kang, Whistle-Blower Says Facebook ‘Chooses Profits Over Safety’, N.Y. Times (Oct. 3, 2021), https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html.

[4] Whistleblower Protection, Office of Inspector General, https://www.oig.dhs.gov/whistleblower-protection#:~:text=The%20Whistleblower%20Protection%20Act%20 (last accessed: Mar. 5, 2023).

[5] U.S. Merit Systems Protection Board, Blowing the Whistle: Barriers to Federal Employees Making Disclosures 27 (2011).

[6] Shelley L. Peffer et al., Whistle Where You Work? The Ineffectiveness of the Federal Whistleblower Protection Act of 1989 and the Promise of the Whistleblower Protection Enhancement Act of 2012, 35 Review of Public Personnel Administration 70 (2015).

[7] Leslie Berger et al., Hijacking the Moral Imperative: How Financial Incentives Can Discourage Whistleblower Reporting, 36 AUDITING: A Journal of Practice & Theory 1 (2017).

[8] Matt A. Vega, Beyond Incentives: Making Corporate Whistleblowing Moral in the New Era of Dodd-Frank Act “Bounty Hunting”, 45 Conn. L. Rev. 483.

[9] Geoffrey C. Rapp, Mutiny by the Bounties? The Attempt to Reform Wall Street by the New Whistleblower Provisions of the Dodd-Frank Act, 2012 B.Y.U.L. Rev. 73.

[10] David Kwok, The Public Wrong of Whistleblower Retaliation, 96 Hastings L.J. 1225.


Taking Off: How the FAA Reauthorization Bill Could Keep Commercial Flights Grounded

James Challou, MJLST Staffer

The last year has been one that the airline industry is eager to forget. Not only did a record number of flight delays and cancellations occur, but the Federal Aviation Administration (FAA) suffered an extremely rare complete system outage and Southwest dealt with a holiday travel meltdown. These incidents, coupled with recent near collisions on runways, have drawn increased scrutiny from lawmakers in Congress as they face a September 30th deadline this year to pass the FAA Reauthorization Act. And while FAA reauthorization is a hotly debated topic, lawmakers and industry professionals all agree that failure to meet the deadline could spell disaster.

The need for reauthorization arises from the structure and funding system of the FAA. Reauthorization is a partial misnomer. Though the airline industry was deregulated in 1978, the practice of FAA reauthorization originated with the Airport and Airway Revenue Act of 1970 which created the Airport and Airway Trust Fund (Trust Fund) that is used to finance FAA investments. The authority to collect taxes and to spend from the Trust Fund must be periodically reauthorized to meet agency and consumer needs. Currently, the Trust Fund provides funds for four major FAA accounts: Operations, Facilities & Equipment (F&E), Research, Engineering and Development (RE&D), and Grants-in-Aid for Airports. If the FAA’s authorization expired without an extension, then the agency would be unable to spend revenues allocated from the Trust Fund. The flip side of the unique reauthorization process is that it offers a regular opportunity for Congress to hold the FAA accountable for unfulfilled mandates, to respond to new problems in air travel, and to advocate for stronger consumer protections because enacted changes in reauthorization acts only span a set time period.

On top of the recent spate of industry complications and near disasters, Congress must sift through a myriad of other concerns that pervade the airline industry ahead of the upcoming reauthorization. Consumer protection has become an increasingly pressing issue, as the deluge of canceled flights in the past year left many consumers disgruntled by the treatment and compensation they received. In fact, the Consumer Federation of America and several other consumer and passengers’ rights groups recently called upon the House Transportation Committee and the Senate Commerce Committee to prioritize consumer protections. Their requests include: requiring compensation when flights are delayed or canceled; holding airlines accountable for publishing unrealistic flight schedules; ending junk-fee practices in air travel, including prohibiting fees for family seating and requiring all-in pricing; ending federal preemption of airline regulation and allowing state attorneys general and individuals to hold airlines accountable; encouraging stronger DOT enforcement of passenger protections; and prioritizing consumer voices and experiences.

However, not all are sold on enhancing consumer protections via the reauthorization process. Senator Ted Cruz, the top Republican on the Senate Commerce, Science, and Transportation Committee, has expressed opposition to increased government intervention in the airline industry, citing free-market and regulatory-overreach concerns. Instead, Cruz and his allies have suggested that the FAA’s technology is outdated and that modernizing it should be the agency’s sole focus.

Indeed, in the wake of the FAA system outage, most interested parties and lawmakers appear to agree that the agency’s aging technology needs updating. While at first glance this might seem like common ground, opinions on how to update the FAA’s technology are wide-ranging. For example, some have flagged IT infrastructure and aviation safety systems as targets that would augment the FAA’s cybersecurity capacity, while others are more concerned with giving the agency direction on new airspace inhabitants, such as drones and air taxis, to facilitate entrants into the market. Even with cross-party assent that the FAA’s technology needs some baseline update, there is little direction on what that means in practice.

Another urgent and seemingly undisputed issue facing the reauthorization effort is FAA staffing. The FAA’s workforce has severely diminished in the past decade. Air traffic controllers, for example, number 1,000 fewer than a decade ago, and more than 10% are eligible to retire. Moreover, a shortage of technical operations employees has grown so severe that union officials have described it as approaching crisis levels. As a result, most lawmakers agree that expanding the FAA’s workforce is paramount.

However, despite the dearth of air traffic controllers and technical operations employees, this proposition has encountered roadblocks as well. Some lawmakers view expanded hiring as an opportunity to increase diversity within the ranks of the FAA and have offered proposals to that end. Currently, only 2.6% of aviation mechanics are women, while 94% of aircraft pilots are men and 93% are White. Lawmakers have made several proposals intended to rectify this disparity, centered on reducing the cost of entry into FAA professions. However, Republicans have largely resisted such efforts, criticizing them as distractions from the chief concern of safety. Additionally, worker groups continue to air concerns about displacing qualified U.S. pilot candidates and undercutting current pilot pay. Any such modifications to the FAA reauthorization bill will require bipartisan support.

Finally, a lingering battle between Democrats and Republicans over the confirmation of President Biden’s nominee to lead the agency has hampered efforts to forge a bipartisan reauthorization bill. Cruz, again spearheading the Republican contingent, has decried Biden’s nominee for possessing no aviation experience and being overly partisan. Proponents, however, have pointed out that only two of the last five FAA administrators had any aviation experience, and they laud the nominee’s credentials and military experience. The surprisingly acrid fight bodes ominously for a reauthorization bill that will have to be bipartisan and is subject to serious time constraints.

The FAA reauthorization process provides valuable insight into how Congress sets agency directives. However, while safety and technology concerns remain the joint focal point of Congress’s intent for the reauthorization bill, in practice there seems to be little common ground between lawmakers. With the September 30th deadline looming, it is increasingly important that lawmakers cooperate to hammer out a reauthorization bill. Failure to do so would severely cripple the FAA and the airline industry in general.


Call of Regulation: How Microsoft and Regulators Are Battling for the Future of the Gaming Industry

Caroline Moriarty, MJLST Staffer

In January 2022, Microsoft announced its proposed acquisition of Activision Blizzard, a video game company, promising to “bring the joy and community of gaming to everyone, across every device.” However, regulators in the United States, the EU, and the United Kingdom have recently indicated that they may block the acquisition due to its antitrust implications. In this post, I’ll discuss the proposed acquisition, its antitrust concerns, recent actions from regulators, and the prospects for the deal’s success.

Background

Microsoft, along with making the Windows platform, Microsoft Office suite, Surface computers, cloud computing software, and of new relevance, Bing, is a major player in the video game space. Microsoft owns Xbox, which along with Nintendo and Sony (PlayStation) is one of the three most popular gaming consoles. One of the main ways these consoles distinguish themselves from their competitors is by categorizing certain games as “exclusives,” where certain games can only be played on a single console. For example, Spiderman can only be played on PlayStation, the Mario games are exclusive to Nintendo, and Halo can only be played on Xbox. Other games, like Grand Theft Auto, Fortnite, and FIFA are offered on multiple platforms, allowing consumers to play the game on whatever console they already own.

Activision Blizzard is a video game holding company, meaning it owns games developed by game development studios and makes decisions about marketing, creative direction, and console availability for individual games. Some of its most popular games include World of Warcraft, Candy Crush, Overwatch, and one of the most successful game franchises ever, Call of Duty. Readers outside the gaming space may recognize Activision Blizzard’s name from recent news stories about its toxic workplace culture.

In January 2022, Microsoft announced its intention to purchase Activision Blizzard for $68.7 billion, which would be the largest acquisition in the company’s history. The company stated that its goals were to expand into mobile gaming and to make more titles available, especially through Xbox Game Pass, a streaming service for games. After the announcement, critics pointed out two main issues. First, if Microsoft owned Activision Blizzard, it could make the company’s titles exclusive to Xbox. This is especially problematic for the Call of Duty franchise: not only does the franchise include the top three most popular games of 2022, but an estimated 400 million people play at least one of its games, 42% of whom play on PlayStation. Second, if Microsoft owned Activision Blizzard, it could also make its titles exclusive to Xbox Game Pass, which would change the structure of the relatively new cloud streaming market.

The Regulators

Microsoft’s proposed acquisition has drawn scrutiny from the FTC, the European Commission, and the UK Competition and Markets Authority. In what the New York Times has dubbed “a global alignment on antitrust,” the three regulators have pursued a connected strategy. First, the European Commission announced an investigation of the deal in November, signaling that the deal would take time to close. Then, a month later, the FTC sued in its own administrative court, which is more favorable to antitrust claims. In February 2023, the Competition and Markets Authority released provisional findings on the effect of the acquisition on UK markets, writing that the merger may be expected to result in a substantial lessening of competition. Finally, the European Commission concluded that the possibility of Microsoft making Activision Blizzard titles exclusive “could reduce competition in the markets for the distribution of console and PC video games, leading to higher prices, lower quality and less innovation for console game distributors, which may, in turn, be passed on to consumers.” Together, the agencies are signaling a new era in antitrust, one that is much tougher on deals than in the recent past.

Specifically, the FTC called out Microsoft on its past acquisitions in its complaint. When Microsoft acquired Bethesda (another video game company, known for games like The Elder Scrolls: Skyrim) in 2021, the company told the European Commission that they would keep titles available on other consoles. After the deal cleared, Microsoft announced that many Bethesda titles, including highly anticipated games like Starfield and Redfall, would be Microsoft exclusives. The FTC used this in its complaint to show that any promises by Microsoft to keep games like Call of Duty available to all consumers could be broken at any time. Microsoft has disputed this characterization, arguing that the company made decisions to make titles exclusive on a “case-by-case basis,” which was in line with what it told the European Commission.

For the current deal, Microsoft has agreed to make Call of Duty available on the Nintendo Switch, and it claims to have made an offer to Sony guaranteeing the franchise would remain available on PlayStation for ten years. This type of guarantee is known as a conduct remedy, which preserves competition by requiring the merged firm to commit to take certain business actions, or to refrain from certain conduct, going forward. In contrast, structural remedies usually require a company to divest certain assets by selling parts of the business. One example of conduct remedies was the Live Nation–Ticketmaster merger: the companies agreed not to retaliate against concert venue customers that switched to a different service and not to tie sales of ticketing services to concerts Live Nation promoted. However, as the recent Taylor Swift ticketing dilemma suggests, conduct remedies may not be effective in eliminating anticompetitive behavior.

Conclusion

Microsoft faces an uphill battle with its proposed acquisition. Despite its claims that Xbox does not exercise outsize influence in the gaming industry, the sheer size and potential effects of this acquisition weaken those claims considerably. Further, the company faces stricter scrutiny from new regulators in the United States. Assistant Attorney General Jonathan Kanter, who leads the DOJ’s antitrust division, has already indicated that he prefers structural remedies to conduct remedies, and FTC Chair Lina Khan is well known for her opposition to big tech companies. If Microsoft wants this deal to succeed, it may have to provide more convincing evidence that it will break from its past anticompetitive conduct.


The Crypto Wild West Chaos Continues at FTX: Will the DCCPA Fix This?

Jack Atterberry, MJLST Staffer

The FTX Collapse and Its Implications

Over the last few weeks, the company FTX has imploded in what appears to be a fraud of epic proportions. John Ray III, the former Enron restructuring leader who just took over as FTX’s CEO in its bankruptcy process, described FTX’s legal and bankruptcy situation as “worse than Enron” and a “complete failure of corporate control.”[1] FTX was a leading cryptocurrency exchange that provided a platform on which customers could buy and sell crypto assets, similar to a traditional stock exchange. As of this past summer, FTX was worth $32 billion and served as a platform that global consumers trusted enough to deposit tens of billions of dollars in assets.[2]

Although FTX and its CEO Sam Bankman-Fried (“SBF”) engaged in numerous questionable and likely illegal business practices, perhaps the greatest fraud was the commingling of customer deposits on the FTX exchange with assets from SBF’s asset management firm, Alameda Research. Although facts are still being uncovered, preliminary investigations indicate that Alameda Research was using customer deposits in its trading and lending activities without customer consent; customers now face the unpleasant reality that their assets (in excess of $1 billion in aggregate) may never be returned.[3] While many lessons in corporate governance can be learned from the FTX situation, a key legal implication of the meltdown is that crypto has a regulatory problem that needs to be addressed by Congress and other US government agencies.

Current State of Government Regulation

Crypto assets are a relatively new asset class that has risen to global prominence since the pseudonymous Satoshi Nakamoto published the Bitcoin white paper in 2008.[4] Although crypto assets and the business activities associated with them are regulated in the United States, that regulation has been inconsistent and has created uncertainty for businesses and individuals in the ecosystem. The US Securities and Exchange Commission (“SEC”), state legislatures, the US Treasury, and a host of other government agencies have all acted unevenly: the SEC has pursued enforcement actions inconsistently, state governments have enacted differing digital asset laws, and the Treasury has banned crypto entities without clear rationale.[5] This has been a major problem for the industry and has led companies (including, now infamously, FTX) to move abroad in search of regulatory certainty. Companies like FTX have chosen to domicile in jurisdictions like the Bahamas to avoid having to guess what approach various state governments and federal agencies will take toward their digital asset business activities.

Earlier in 2022, Congress introduced the Digital Commodities Consumer Protection Act (“DCCPA”) to fill gaps in the federal regulatory framework that oversees the crypto industry. The DCCPA would amend the Commodity Exchange Act to create a much-needed comprehensive regulatory framework for spot markets in digital asset commodities. It would enable the Commodity Futures Trading Commission (“CFTC”) to require digital asset commodity exchanges to actively prevent fraud and market manipulation, and it would give the CFTC authority to access quote and trade data, allowing it to identify market manipulation more easily.[6] Taken as a whole, the DCCPA would implement consumer protections for digital asset commodities, ensure oversight of digital asset commodity platforms (such as FTX, Coinbase, etc.), and better prevent systemic risk to financial markets.[7] The bill fills a necessary gap in federal crypto regulation, and industry observers are optimistic about its chances of passing into law.[8]

Digital Asset Regulation Has a Long Path Ahead

Despite the potential benefits and the strong congressional action that the DCCPA represents, elements of the bill have been criticized by both the crypto industry and policy experts. According to the Blockchain Association, a leading crypto policy organization, the DCCPA could have negative implications for the decentralized finance (“DeFi”) ecosystem because of the onerous reporting and custody requirements it would impose on DeFi protocols and applications.[9] “DeFi” is a catch-all term for blockchain-based financial tools that allow users to trade, borrow, and lend crypto assets without third-party intermediaries.[10] The DCCPA attempts to regulate intermediary risks associated with digital asset trading, whereas the whole point of DeFi is to remove intermediaries through the use of blockchain software.[11] The Blockchain Association has also criticized the DCCPA for defining “digital commodity platform” too broadly and “digital commodity” too narrowly and ambiguously, which could create unnecessary turf wars between the SEC and CFTC.[12] When Congress revisits the bill next year, these complexities will likely figure into the weighing of its pros and cons. Beyond the text of the DCCPA, the legislators pushing the bill must also contend with its association with Sam Bankman-Fried. The former FTX CEO and suspected fraudster was perhaps the bill’s greatest supporter and lobbied for its provisions before Congress several times.[13] While Bankman-Fried’s support does not necessarily mean anything is wrong with the bill, some legislators and lobbyists may be hesitant to advance a bill heavily influenced by a person who perpetrated a massive fraud scheme that severely hurt thousands of consumers.

Though the goal of the DCCPA is to establish CFTC authority over crypto assets that qualify as commodities, the crypto ecosystem will still be left with several unanswered regulatory issues if it passes. A key question is which digital assets will be treated as commodities, which as securities, and which as something else entirely. Another looming question is how Congress will regulate stablecoins, a type of digital asset whose price is designed to be pegged to another asset, typically a real-world asset such as US Treasury bills. For these questions, Congress and the SEC will likely need to provide additional guidance and rules to build on the increased certainty the DCCPA could bring. By passing an amended version of the DCCPA with more careful attention paid to the DeFi ecosystem, along with clarified definitions of digital commodities and digital commodity platforms, Congress would go a long way toward preventing future FTX-like fraud schemes, protecting consumers, and ensuring that crypto innovation stays in the US.

Notes

[1] Ken Sweet & Michelle Chapman, FTX Is a Bigger Mess Than Enron, New CEO Says, Calling It “Unprecedented”, TIME (Nov. 17, 2022), https://time.com/6234801/ftx-fallout-worse-than-enron/

[2] FTX Company Profile, FORBES, https://www.forbes.com/companies/ftx/?sh=506342e23c59

[3] Osipovich et al., They Lived Together, Worked Together and Lost Billions Together: Inside Sam Bankman-Fried’s Doomed FTX Empire, WSJ (Nov. 19, 2022), https://www.wsj.com/articles/sam-bankman-fried-ftx-alameda-bankruptcy-collapse-11668824201

[4] Guardian Nigeria, The idea and a brief history of cryptocurrencies, The Guardian (Dec. 26, 2022), https://guardian.ng/technology/tech/the-idea-and-a-brief-history-of-cryptocurrencies/

[5] Kathryn White, Cryptocurrency regulation: where are we now, and where are we going?, World Economic Forum (Mar. 28, 2022), https://www.weforum.org/agenda/2022/03/where-is-cryptocurrency-regulation-heading/

[6] https://www.agriculture.senate.gov/imo/media/doc/Testimony_Phillips_09.15.2022.pdf

[7] US Senate Agriculture Committee, Crypto One-Pager: The Digital Commodities Consumer Protection Act Closes Regulatory Gaps, https://www.agriculture.senate.gov/imo/media/doc/crypto_one-pager1.pdf

[8] Courtney Degen, Washington wants to regulate cryptocurrency, Pensions & Investments (Oct. 3, 2022), https://www.pionline.com/cryptocurrency/washington-wants-regulate-crypto-path-unclear

[9] Jake Chervinsky, Blockchain Association Calls for Revisions to the Digital Commodities Consumer Protection Act (DCCPA), Blockchain Association (Sept. 15, 2022), https://theblockchainassociation.org/blockchain-association-calls-for-revisions-to-the-digital-commodities-consumer-protection-act-dccpa/

[10] Rakesh Sharma, What is Decentralized Finance (DeFi) and How Does It Work?, Investopedia (Sept. 21, 2022), https://www.investopedia.com/decentralized-finance-defi-5113835.

[11] Jennifer J. Schulpt & Jack Solowey, DeFi Must Be Defended, CATO Institute (Oct. 26, 2022), https://www.cato.org/commentary/defi-must-be-defended

[12] Jake Chervinsky, supra note 9.

[13] Fran Velasquez, Former SEC Official Doubts FTX Crash Will Prompt Congress to Act on Crypto Regulations, CoinDesk (Nov. 16, 2022), https://www.coindesk.com/business/2022/11/16/former-sec-official-doubts-ftx-crash-will-prompt-congress-to-act-on-crypto-regulations/